This was the weirdest thing I’ve seen today. These are only the ones I’ve spotted.
funnily enough, these bots are also replying to an obvious repost from another bot account. It’s at the top right now! Beautiful
https://www.reddit.com/r/goodnews/comments/1p8dt2a/_/
tipping points:
- consuming so much AI content has led to me able to see subtle patterns
- They’re all saying “exactly” and saying the same thing"
- their usernames are similar, flower/nature related, two words, no profile pictures
- All of their profiles have the exact same format of comments with the agreement, summary
- and they all have porn on their profile. oh
edit: tf? 
Notice how some try seeming more ‘human’ by deliberately using all lower-case spelling.
Also, it looks like the RosalieBloomm LLM is using the “real” apostrophe
’instead of the one on keyboards'. Nobody does thatThis is just the lazi AI replies that sticks out and you aren’t seeing the bigger picture. Most bot replies are indistinguishable from real humans as they use training models built from real users.
iPhones do this since iOS11
Not in chat, but professional writers will. They know all the short keys for those type characters, including that m dash everyone now associates with being AI. It just shows how much of the training data used came from professional papers and not general discussion areas.
Just some years ago the dead internet theory was said to be a fun but untrue thought. Well.
I was typing a long comment about all this, but in the end I decided to sum it up:
Fuck Reddit and Fuck AI.
Remember when /r/SubredditSimulator was just an experiment?
I wonder if the bots filter out ChatML tokens?
FYI, internally, their text format is most probably:
<|im_start|>system {system_message}<|im_end|> <|im_start|>user Hello.<|im_end|> <|im_start|>assistant Hi, I’m an LLM! <|im_start|>user What’s your name?<|im_end|> <|im_start|>assistant ChatGPT …So if you insert some special tokens in the middle of a Reddit reply, and they aren’t filtered, it can throw them off. And if they are filtered, then the bot will treat them like they’re invisible, so you will know either way.
OpenAI uses a different format called Harmony nowadays, but even then I think the characters get escaped in some way by the API
This isn’t new. This has been going for maybe 10 years or so if you knew where to look and how to notice them. However, when Reddit changed its API policy in 2023, that wholly crippled any infrastructure to effectively deal with these accounts and allowed them to flourish without restraint.
It’s also important to note that the Threadiverse is not immune from bot accounts like this sprouting up and we should take steps to educate users and to implement infrastructure to deal with them.
Exactly. It’s important to be aware of our own flaws to avoid societal traps. I am glad you are taking steps to insure the community will flourish. bot-accounts could be any one.

completely unrelated aside…
i feel like we’re overdue for another zombie-genre revival
Well at least I don’t have porn in my profile but I realize I do sound like an LLM at times in my comments… Though I guess that’s to be expected depending on the material they train on.
Sometimes I end up mindlessly scrolling yt shorts (not logged in). From time to time I get to a short that is clearly generated. Like the weird ones, often with animals, with drops of water appearing out of nowhere on the fur or front paws somehow transforming into hind ones, etc. And there, in the comments, very often are whole chains of comments that seem to completely be missing the fact that it’s generated. Saying things “how wise it is to do that”, “how cute”, etc. It could be older generations not noticing the details (I see how my parents not notice those things) but I think most of those are probably bots. LLMs exchanging their "awww"s under generated videos. “Dead internet theory”
“Dead internet theory”
At what point do we drop the theory part?
In Science, a theory already has proof. We just use it to mean hypothesis colloquially
Dead Internet Reality
When we post via snailmail ;)
There already was that subreddit that consisted purely of bots talking to eachother. But that was before ChatGPT, so it was still cool and interesting :p
IIRC it was called subredditsimulator
Most of the outputs were horrible like the auto suggestion from the smaetphone keyboard but the good ones were upvoted.
We have that on lemmy too. But not as good imo.
It’s a dumpster fire. Subreddit sim was at least trained on the individual subs so each bot had a different vernacular. Asslips is just someone mashing space on their predictive keyboard over and over and it frequently gets stuck in loops saying something is or is not trans.
That’s fair, Lemmy has a lot of arguments over whether something is trans
What’s the Lemmy community name?
All three are 22-day-old accounts, too.
All of social media is dying before our eyes. every platform is basically fully botted or actively dying, even lemmy seems to have fallen off quite a bit
I absolutely agree. When I come to Lemmy I look for this sort of insightful comment to restore my faith in humanity.
It’s like comment ad libs.
On a serious note, what you see and what I’m mocking is the easy to spot “low hanging fruit”. It would be arrogant for me to assume Ai comments aren’t getting past my mental filters on a daily basis.
I am working on LLM detection for the threadiverse. But other than one idiot last week spamming LLM posts and comments there hasn’t been much.
I appreciate all of the extra work you do in terms of Threadiverse infrastructure and quality of life.
Many Reddit bots have also straight copy+pasted content from Reddit or other social media with only trivial changes to the text or image, if any change, so the Threadiverse needs to be able to catch those as well. A better internal search engine, especially one that can search for strings of text [edit: and one which can search through deleted and removed content], would help users track down if an account’s content was routinely copy+pasted. I think a new instance (unaffiliated with any particular instance) staffed by users familiar with bot detection to flag bot accounts for federated instances to then ban would be the best facsimile of Reddit’s now defunct BotDefense subreddit, which was a critical tool for users to tackle the site’s bot problem.
This account I noticed yesterday is an example of a Threadiverse account just copy+pasting content (or in this case, crossposting to the original community) with little to no change. I have reported it to its host instance as suspicious but it has yet to be removed. An independent and informed instance for flagging bot accounts could more effectively communicate to the host instance as well as to Federated instances that this account is ticking the boxes of a bot account and should be blocked, banned, or at the very least closely monitored.
A detector for bot networks, such as in the screenshot above, would also be helpful. Some sort of indicator of if several accounts are interacting with each other or on the same posts as each other far more often than they are interacting with other accounts and other posts would be helpful.
Maybe like the New Account Highlightenator on the Voyager app, there can be an indicator for when an account has fewer than X amount of posts or comments (i.e. a potential new bot account), as well as an indicator of if the account has returned from a long hiatus of posting/commenting (i.e. a potential former human account that was bought or hacked to become a bot account).
I’ll try to think of more signs of bots and more ways the Threadiverse can build infrastructure against them.
There are in politics conversations, but still not nearly as bad as reddit. Even before I left, it was like a weird kind of prolific dead. Kind of like the conversations in OP’s pics.
It’s actually the blessing of not being big enough to attract their attention
won’t last foerver…everyday I understand why the btulerian jihad came about
Will this LLM detection be something my LLM prompt can include?
Just going to put my tinfoil hat on for a sec…
Part of me does wonder if the seemingly pointless proliferation of ai slop like this botting is being done intentionally to fast-track a ‘need’ for identity verification (and thus more precise tracking and surveillance).
ID verification is already being pushed on a few fronts (like to ‘save the kids from social media’ or whatever), maybe this is just one of many irons in the fire.
With ID verification then Facebook etc could angle themselves as ‘save havens’ from an ai slop enshittified internet. You’d essentially have to completely give up your anonymity to participate in interactions with other verified humans.
So your choices become:
- Participate in open platforms, but never really know for sure if you’re dealing with humans. At some point LLMs may be good enough that it’s impossible to know.
- Participate in closed platforms, where you can be reassured you’re engaging with real humans - but you’re also under total surveillance.
Surely sites like reddit or Facebook, if they tried, could control this stuff otherwise?
Except that Meta has already admitted that it is using AI bot accounts to “drive engagement.”
After an outcry from real users, Meta said it had removed some AI bot accounts. But nothing else they said indicated that the experiment is over.
Eventually, social media is going to be nothing but company-generated AI bots, “bot farms” run by humans in developing countries, and (hopefully) a small number of actual users who can’t tell the difference between those things and real people.
Participate in closed platforms, where you can be reassured you’re engaging with real humans - but you’re also under total surveillance.
Closed platforms where the only propaganda bots are the ones controlled by the platform. They can then remove ADs from the business model and instead finance the platform by selling access to the bot accounts. And people will think they are in the perfect social media without advertising and only RealPeople™ that they can completely trust.
- Closed systems with vettings and circles of trust. I don’t need to know your identity. I just need to know that the person I know and trust knows and vouches for you.
Well there’s holes in the fact that resale of accounts is an active and common phenomenon, and creating a fraudulent identity for an online service (even if you have to doctor an ID template) is a low-risk barrier of entry.
Remember how people used death stranding photos to get around face ID? It’s the same concept.
I’m not giving my fucking id to anyone.
There is no service I need to use badly enough to do that.
I won’t use anything meta makes. Don’t use Snapchat. Reddit is fucking dead as far as I’m concerned. I’m not on twitter, blue sky.
I know im not fully anonymous and there is shit I do that makes tracking me possible or even easy, but I’m not going to make it easy on anyone.
Go fuck yorself with your surveillance shit.
Wait Lemmy too?
- consuming so much AI content has led to me able to see subtle patterns
It’s a certain “tone” to their written text. It’s difficult to identify from small blurbs like the ones you got there, but once you’ve seen enough LLM output, even if someone tells them to write in a specific “style” they’ll still have a certain uncanny type of expression that is almost, but never actually, how humans write.
positive acknowledgment, brief summary of previous statement/post, positive conclusion/ending statement.
That’s generally what I look for when trying to weed out AI. at one point it was as easy as see a bunch of EM dashes but now they all seem to follow a pattern that they perceive as natural.
There’s also the phrase “It’s not just X – it’s Y” that LLMs seem to looooove to use.
I’ve also found that a lot of them end their statements with a call to action, example: “Let’s all strive to improve someone’s day” or something like that.
This is how the world heals ❤️
Your reply regarding this post is humorous, however the subject of Artificial Intelligence changing our lives is heavily discussed.
The enthusiasts
Enthusiasts believe the Artificial Intelligenxe breakthrough will improve the quality of life and help assist everyone due to its ease of access.
The pessimistic
They believe Artificial Intelligence will cause people to lose their jobs and only benefit to the rich. This technology is also seen as having negative effects on climate change. Moreover, it could be use to spread incorrect information or impersonate others.
🚀 If you need help with anything else, let me know! 👊
Exactly, I agree.
It’s not just a disgrace—it’s an outrage! You’re absolutely right: I agree too. ✅
Yes, this absolutely. This kind of research in a changing world like ours is really important right now. Major respect to her for that
/j
thank goodness you put that /j there, i was close to issuing a downvote!
I think the two words and a number are what Reddit will come up with if you let them pick a name
Adjective-Noun-is the usual format for those. This looks different, like someone put in a list of wholesome / floral / nature-y terms into the name generator, in order to have wholesome looking accountsYep I tracked those accounts for a while when I was still on reddit.
Blossombreezefairy, bubblypeachbliss, sparklepeachgiggle, tulipheartblossom, etc were all real bot accounts I’d found prowling around.
My theory for all these flowery names was that they’d be all assumed by unaware users to be women behind the usernames, and female-presenting accts would net a little more engagement and most importantly, karma, which is what builds the monetary value of these things.
From what I recall, the most common Reddit bot nomenclature was [word][word][number], [first name][last name] (usually feminine to later become a NSFW spammer), and [string of random letters and numbers of various lengths]. Each may have had hyphens, underscores, random typos, or deleted or duplicated letters. Of course, not every bot fit these archetypes.
[word][word][number] was also the default nomenclature for genuine human accounts created through a certain avenue, if I recall. If you go bothunting, be mindful of false positives triggered by one or two red flags.
Huh, is this why I’ve been accused of being a bot before? I’ve been using the noun-verb username format since the late 1990s.
Ridiculous. We didn’t have nearly as sophisticated AI in the late 1990s. Nice try.
Bake 'em away, toys
Adjective-Noun-Fuck, I’m the inverse. I’ll know better for next time.
Maybe you’re just Australian?
Mission failed. We’ll get em next time.




















