
For anyone who hasn’t yet heard the term (it was new to me too), AI characters are bots with fully fleshed-out profiles, bios, and personalities that post, comment, and engage on platforms like Facebook and Instagram as if they were real people. The kicker? Most of us won’t realize they’re bots – and we’ll respond to them as if they’re human.
These AI characters are already making waves on Meta platforms, and it’s only a matter of time before they make their way to LinkedIn. And that’s when things get really interesting.
LinkedIn’s Ethical Dilemma
Unlike Meta’s platforms, LinkedIn is (supposedly) built on real identities. We show up as our professional selves—no avatars, no fake names, no fictitious profiles. At least, that’s the theory.
In reality, fake profiles are already rampant on LinkedIn. LinkedIn claims to penalize them, but anyone who’s been on the platform for more than five minutes knows that fake accounts are alive and well. Even I maintain a test account for creating short links, trying out new features, and doing tasks I’d rather not perform live.
So if LinkedIn already struggles to manage fake profiles, what happens when AI characters start showing up? Profiles that look, sound, and behave like real people – but aren’t? Where does LinkedIn draw the ethical line? And more importantly, where do we draw the line?
The Growing Threat of Inauthentic Engagement
It’s not just profiles we need to worry about. AI tools can now:
- Write posts that sound human
- Generate images, videos, and presentations
- Conduct realistic DM conversations
I know of one company that’s already using AI to mimic human interactions in LinkedIn DMs. And let’s not forget engagement pods that use fake profiles to manufacture engagement, boosting posts to artificial viral status. These activities are happening right now, and they’re only going to increase.
So, where does that leave a platform like LinkedIn, which is built on the premise of trust and authenticity?
How to Stay Authentic in a Sea of AI
Here’s the good news: While AI is becoming incredibly sophisticated, there are still ways to stand out as authentically human on LinkedIn. It starts with being mindful of how we use AI tools ourselves.
Take my recent experience with ChatGPT as an example. I asked it for the top 5 most recent LinkedIn updates. Of the five responses, only one was accurate. This aligns with what we know: AI models are trained on historical data, so they often lag behind on current events.
And that’s where we, as individuals, have an edge. If we’re reporting industry news or updates before AI tools have the chance to catch up, we can position ourselves as authentic experts. AI can regurgitate the basics of our industries, but it still needs us for real-time insights and nuanced takes.
Using AI as an Ethical Tool
I’ll be the first to admit that AI tools can save us time and effort. When I asked ChatGPT to provide the 10 most important things LinkedIn newbies need to know, it nailed 95% of the answers. That’s impressive—and it proves that AI can handle some tasks exceptionally well.
But here’s the key: Don’t use AI-generated content without checking it yourself. AI is a tool, not a substitute for your expertise, insights, or voice. Use it to enhance your work, not replace it.
The Big Question: Where Do We Draw the Line?
It’s time we all ask ourselves some hard questions:
- Is it ethical to use AI to write posts without disclosing it?
- Is it ethical to maintain test accounts or engage in pods to boost visibility?
- Where does authenticity begin and end in an age where AI can do most of the heavy lifting?
LinkedIn prides itself on being a platform where real people connect with real people. But with AI characters looming on the horizon, it’s becoming harder to tell who’s real and who’s not.
For me, the solution is simple: Show up as yourself. Use AI to assist you, but make sure your unique personality, voice, and experiences shine through. That’s something no AI character can replicate.