Last month I ran a small experiment. For seven days straight, I screenshotted every comment on my Instagram and LinkedIn posts, then went through each profile that left a comment. I wanted to answer one question: how many of these people are actually real?
The result was uncomfortable. Out of roughly 40 comments across both platforms that week, at least 12 came from accounts that showed clear signs of being automated — AI-generated profile photos, zero personal posts, generic one-word responses like “Amazing!” or “So true 🔥,” and account creation dates within the last 90 days. That is nearly one in three.
I work in IT security for a living. I spend my days identifying suspicious activity across networks and systems at MSC Cruises. But even I did not fully appreciate how thoroughly bots have colonized regular social media until I actually sat down and counted.
And the data backs up what I saw on a much larger scale.
The Numbers Are Worse Than Most People Think
According to the 2025 Imperva Bad Bot Report, published by cybersecurity firm Thales, automated bot traffic officially surpassed human-generated traffic for the first time in 2024 — accounting for 51% of all web traffic globally. Of that automated traffic, 37% was classified as “bad bots” — up from 32% in 2023. That is the sixth consecutive year of growth in malicious bot activity.
Cloudflare’s 2025 Radar Year in Review painted a similar picture. Their data showed that non-AI bots generated roughly half of all HTML page requests in 2025, running about 7% above human-generated traffic on average and spiking as much as 25% higher at certain points during the year.
To put this plainly: if you are scrolling through social media right now, the odds are roughly even that the next piece of engagement you see — a like, a follow, a comment — came from software, not a person.
What AI Engagement Farming Actually Looks Like
When most people hear “bots,” they picture the obvious spam accounts from five years ago — broken English, random usernames, clearly fake photos. Those still exist. But the new generation of social media bots is fundamentally different because they are powered by the same large language models (LLMs) that run tools like ChatGPT.
Here is what I have observed while monitoring accounts in my cybersecurity work and on my own feeds:
The comment patterns are eerily uniform. AI-generated comments tend to follow the same handful of templates: short affirmations (“Great insight!”), vague agreement (“This is so important, more people need to see this”), or emoji-heavy reactions that do not reference anything specific in the post. A real human responding to a detailed article about bond yields would mention bond yields. A bot says “Great content, keep it up! 👏.” (A rough way to screen for this kind of genericness is sketched just after this list.)
The profiles look increasingly real. Modern AI image generators produce faces that pass casual inspection. The tell is usually in the details: an inconsistent earring, a blurry background that does not match the supposed location, or a profile that was created recently but claims years of professional experience. Meta reported in Q4 2024 that fake accounts made up approximately 3% of Facebook’s monthly active users, which translates to roughly 90 million fake profiles on a single platform.
They target specific niches. Bot operators are not random. If you post about finance, crypto, or tech, you get finance-themed bots. If you post about fitness, you get fitness bots. The AI tailors the engagement to your content category because the end goal is always the same — build follower counts, generate fake credibility, and eventually monetize through scams, affiliate links, or selling the accounts.
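To make the genericness test concrete, here is a minimal Python sketch of the idea. It is a toy under stated assumptions, not production detection logic: the template list, the keyword-overlap test, and every name in it are my own illustrative choices, and real moderation pipelines lean on far richer signals.

```python
# Toy heuristic for flagging template-like comments. The template list
# and the overlap test are illustrative assumptions, not a vetted ruleset.
import re

GENERIC_TEMPLATES = {
    "great insight", "great content", "so true", "amazing",
    "this is so important", "keep it up", "love this", "so inspiring",
}

def looks_generic(comment: str, post_keywords: set[str]) -> bool:
    """True if a comment matches a known template or never
    references anything specific to the post."""
    # Lowercase and strip punctuation/emoji before matching.
    text = re.sub(r"[^\w\s]", "", comment.lower()).strip()
    if any(template in text for template in GENERIC_TEMPLATES):
        return True
    # No overlap with the post's own vocabulary = no specificity.
    return not set(text.split()) & post_keywords

post_terms = {"bond", "yields", "duration", "treasury"}
print(looks_generic("Great content, keep it up! 👏", post_terms))                 # True
print(looks_generic("Wouldn't rising yields hurt long-duration bonds more?", post_terms))  # False
```

The keyword-overlap check is deliberately crude, but it encodes the point above: specificity is cheap for a human who actually read the post and expensive for software churning out thousands of comments.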
It Is Not Just Bots — The Platforms Are Part of the Problem
In March 2025, Meta drew significant backlash when Instagram users discovered a new feature: AI-suggested comments. A pencil icon appeared next to the comment bar that, when tapped, would generate an AI-written response based on the content of the post. Users could then post the AI comment as if it were their own words.
The reaction was immediate and negative. People accused Meta of deliberately inflating engagement metrics to make the platform appear more active than it actually is — which directly benefits advertising revenue. One user’s response captured the mood perfectly when they compared the experience to a Black Mirror episode.
This was not Meta’s first experiment along these lines. In 2024, the company created AI-generated profiles complete with backstories, photos, and personality traits. Some of these fake personas even had specific racial and sexual identities assigned to them. When users discovered this, Meta scaled back the project — but did not abandon the underlying approach.
The incentive structure here is important to understand. Social media platforms make money from engagement. Every comment, every like, every second of scroll time generates advertising revenue. Whether that engagement comes from a human or a bot, the advertising dollars still flow. Platforms have a financial conflict of interest when it comes to aggressively removing fake engagement, because doing so would shrink their reported metrics.
The Propaganda Angle Is Real Too
This is not only about commercial spam. A Graphika report published in November 2025 analyzed nine ongoing state-sponsored influence operations — including campaigns linked to China and Russia — and found that every single one had integrated generative AI tools into their operations. They use AI to create fake news anchor personas, generate translated propaganda at scale, and build networks of apparently independent accounts that amplify each other’s content.
The silver lining, according to the researchers, is that most of this AI-generated propaganda is low quality — what they call “AI slop.” It tends to get little genuine engagement. But the sheer volume is the point. If you flood enough platforms with enough content, some of it will stick. And as the AI tools improve, the quality gap between human-created and bot-created content is narrowing fast.
A Harvard Kennedy School study from early 2025 documented how scam operators use AI-generated images on Facebook — photorealistic pictures of children painting, animals doing tricks, gorgeous home interiors — to build massive page followings. The comments on these posts revealed that most users could not tell the images were fake, with people genuinely congratulating AI-generated children for AI-generated artwork.
How to Tell If You Are Talking to a Bot
After a year of paying close attention to this, here are the patterns I have learned to recognize. They are not foolproof, but they catch the majority of automated accounts, and a rough scoring sketch that combines several of them follows the list:
Check the comment-to-content ratio. If someone leaves a comment that could apply to literally any post — “This is amazing!” or “So inspiring 🙌” — and their profile shows the same generic comments on dozens of unrelated posts, that is almost certainly automated.
Look at account age versus activity. A profile created three months ago with 2,000 followers but only five original posts is a red flag. Real accounts grow gradually. Bot networks need to appear established quickly.
Reverse image search the profile photo. This takes five seconds on Google. AI-generated faces are sometimes reused across multiple accounts, or you will find the image does not appear anywhere else online — which is unusual for a real person.
Watch for timing clusters. If a post receives 15 comments within two minutes of being published, and they all come from accounts with similar naming patterns (first name + random numbers), you are looking at a coordinated bot operation.
Read for specificity. Real humans reference specific details. They disagree, they ask follow-up questions, they share related personal experiences. Bots almost never do this because generating specific, contextually appropriate responses is harder than generating generic praise.
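To show how a few of those checks might combine in practice, here is a back-of-the-envelope scoring sketch. Every threshold, field name, and the username regex is an assumption I am making for illustration, not a calibrated detector; treat a high count as a reason to look closer, not a verdict.

```python
# Back-of-the-envelope red-flag scoring for the patterns above.
# All thresholds and names are illustrative assumptions.
import re
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

BOT_NAME = re.compile(r"^[a-z]+\d{3,}$", re.I)  # first name + random numbers

@dataclass
class Profile:
    username: str
    created_at: datetime
    followers: int
    original_posts: int

def profile_red_flags(p: Profile) -> int:
    """Count account-level red flags (age vs. activity, naming pattern)."""
    flags = 0
    age_days = (datetime.now(timezone.utc) - p.created_at).days
    if age_days < 120 and p.followers > 1000:        # new but "established"
        flags += 1
    if p.followers > 500 and p.original_posts < 10:  # followers without content
        flags += 1
    if BOT_NAME.match(p.username):                   # templated username
        flags += 1
    return flags

def timing_cluster(offsets_sec: list[float], window: float = 120, n: int = 10) -> bool:
    """True if n or more comments landed within `window` seconds of posting."""
    return sum(t <= window for t in offsets_sec) >= n

# The example from above: three-month-old account, 2,000 followers, five posts.
p = Profile("jessica84721", datetime.now(timezone.utc) - timedelta(days=90), 2000, 5)
print(profile_red_flags(p))  # 3
print(timing_cluster([5, 12, 30, 44, 50, 61, 70, 88, 99, 110, 400]))  # True
```

A usage note: the scoring is additive on purpose. Any single signal has innocent explanations, but several together rarely do.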
Why This Matters Beyond Just Being Annoying
The fake engagement problem is not just an aesthetic nuisance. It has real financial and psychological consequences.
For businesses, bot-inflated metrics lead to bad decisions. A company might invest heavily in a social media strategy that appears to generate engagement but is actually just attracting bot traffic. Marketing budgets get burned on audiences that do not exist. Industry estimates put the cost of bot-driven ad fraud in the tens of billions of dollars a year, with some projections passing $100 billion.
For individual users, the psychological impact is more subtle but arguably worse. When your feed is full of engagement that looks real but is not, it distorts your sense of what is normal. You see accounts with thousands of enthusiastic comments and wonder why your own posts get three responses from people you actually know. This drives the comparison trap — the feeling that everyone else’s online life is more vibrant, more successful, more connected than yours. Except much of what you are comparing yourself to is manufactured.
And for society at large, the erosion of trust is the biggest casualty. When you can no longer tell whether a comment, a review, a recommendation, or even a news article was created by a human or generated by software, the default response becomes cynicism. You stop trusting anything online. And that cynicism is itself a tool that bad actors exploit — because when nobody believes anything, it becomes impossible to hold anyone accountable.
What You Can Actually Do About It
I am not going to tell you to delete social media. I use it professionally and personally, and I suspect you do too. But there are practical steps that make a difference.
First, treat engagement metrics with skepticism. When evaluating any account, business, or influencer, look at the quality of comments, not the quantity. Ten specific, thoughtful comments from real people are worth more than 500 generic affirmations from bot accounts.
Second, report suspicious accounts when you see them. Every major platform has reporting tools for fake accounts, and while no platform removes them fast enough, the reports do feed into detection algorithms that improve over time.
Third, support platforms and publications that prioritize verification. Smaller, moderated communities — newsletters, forums with real identity requirements, platforms like Substack or Mastodon where bot manipulation is harder — tend to have much higher signal-to-noise ratios than the major algorithmic platforms.
Fourth, and this might be the most important: be aware that the line between “real” and “fake” engagement is going to keep blurring. Meta is actively building AI comment features. X (formerly Twitter) has been overrun with bot activity since gutting its trust and safety team. TikTok’s recommendation algorithm already makes it difficult to distinguish organic virality from manufactured reach. This is not a problem that is going away. It is the new baseline reality of being online.
The honest truth is that the internet of 2025 is not the same internet we grew up with. More than half of its traffic is automated. The comments on your posts might be written by software. The followers on that influencer’s account might not exist. And the platforms that host all of this activity have limited financial incentive to fix it.
Knowing that does not solve the problem. But it does change how you navigate it — and that awareness is worth more than any engagement metric a bot could ever fake.
Sources:
1. Imperva / Thales — 2025 Bad Bot Report (April 2025)
2. Cloudflare — 2025 Radar Year in Review (December 2025)
3. Graphika via NBC News — AI Slop in Propaganda Campaigns (November 2025)
4. Harvard Kennedy School Misinformation Review — AI-Generated Images on Facebook (February 2025)
5. Business Standard — Meta AI Comments on Instagram (March 2025)
6. Target Internet — Social Media Bots and Fake Engagement (2025)
7. WP Engine — 2025 Website Traffic Trends Report (December 2025)
Disclaimer: This article reflects the author’s professional perspective and independent research. The observations about bot detection are based on the author’s experience in IT security and publicly available data. Social media platforms’ bot activity varies and detection methods are not 100% accurate. Always verify claims through multiple sources.


