As some of you may be aware, over the past few weeks there has been an increasing number of what I suspect are bots that share one or a few posts, all relevant to the communities they’re shared in, after which the account self-deletes. I’m torn, as the stories are relevant, but they give me the impression of a narrative attack. I’ve seen only one of these accounts actually comment before deletion; otherwise they post and immediately nuke the account.

I have tagged mods and admins but have not heard any acknowledgment of the problem. It’s also notable that, by my impression, the issue is getting worse. I noticed yesterday that communities I subscribe to which previously did not have this problem are now starting to receive these kinds of posts.

I want the fediverse to be a place to communicate with real people in good faith; this manner of posting runs contrary to that. So that raises the questions: is this actually a problem, and if so, what can be done about it?

  • GrantUsEyes@lemmy.zip · ↑7 · edited · 6 hours ago

    I’m unreasonably annoyed by the ones that target the comicstrips community.

    I have noticed the bots behaving differently. At first they posted a lot, over shorter stretches of time, which got them found out quickly. Now they post once or twice a day for around a week, then they’re gone.

    I’d like to know why they do it. What’s the point? But those posts get a lot of engagement, so I feel conflicted about it.

  • m_‮f@discuss.online · ↑110 · 15 hours ago

    Lemmy just released 0.19.14, which addresses this somehow, but the announcement is vague:

    https://join-lemmy.org/news/2025-12-08_-_Lemmy_Release_0.19.14

    https://discuss.online/post/31855056

    Recently some malicious users started to use an exploit where they would post rule violating content and then delete the account. This would prevent admins and mods from viewing the user profile to find other posts, and would also prevent federation of ban actions.

    The new release fixes these problems. Thanks to @flamingos-cant for contributing to solve this.

    • Admiral Patrick@dubvee.org · ↑52 · 15 hours ago

      Good to know.

      I think this just fixes the bug where deleted accounts were invisible to admins. It’s a start but doesn’t fully address the problem. Still, having it federate the content removal is a step in the right direction.

      • halcyoncmdr@lemmy.world · ↑28 · 14 hours ago

        It’s a start but doesn’t fully address the problem.

        Eh, I’d say it addresses everything that matters.

        The root of the problem was that deleting the account was an exploit to block admin research and follow-up actions, and to prevent federation of content removal. That’s the only reason they were bothering to do it. The fix lets admins research properly and lets removal actions federate.

        It doesn’t solve the root of the issue with bad actors, but that’s a much larger issue well beyond the scope of a couple bug fixes.

    • Skavau@piefed.social · ↑19 · 15 hours ago

      This seems to be dealing with the issue of finding them after the fact, rather than just automatically purging the posts. So it does help, but the best solution here is to just automate it.

  • Skavau@piefed.social · ↑63 ↓1 · edited · 15 hours ago

    Piefed already has a toggle in its settings that can be activated: on piefed.social, any account that self-deletes within 24 hours has all of its posts purged.

    This is a lemmy problem.
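    For context, the Piefed behaviour described above could look something like this minimal sketch. The 24-hour window is from the comment; the data model, function names, and the assumption that the window is measured from account creation are all mine, not Piefed’s actual code:

    ```python
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta

    # Assumed: the purge window is measured from account creation.
    PURGE_WINDOW = timedelta(hours=24)

    @dataclass
    class Account:
        name: str
        created: datetime
        posts: list = field(default_factory=list)

    def handle_self_delete(account: Account, deleted_at: datetime) -> bool:
        """Purge all of an account's posts if it self-deletes within the window.

        Returns True when the posts were purged.
        """
        if deleted_at - account.created <= PURGE_WINDOW:
            account.posts.clear()
            return True
        return False
    ```

    The design point is that the purge is automatic and unconditional inside the window, so a spammer gains nothing by deleting the account before mods react.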

  • Rhynoplaz@lemmy.world · ↑20 · 14 hours ago

    I’ve definitely noticed it, but I don’t understand why.

    I know karma farmers on Reddit would sell accounts or just try to create the appearance of legitimacy, but what does it accomplish to delete an account immediately after posting?

    • Skavau@piefed.social · ↑23 · 14 hours ago

      The person in question here is permanently banned from the Fediverse (effectively ban on sight for most instances), in part for spamming, but also because of maladaptive personality traits. They don’t accept that and instead still wish to “help” the fediverse by providing content (news spam).

      • FoxyFerengi@startrek.website · ↑11 · 10 hours ago

        Wait, do they keep reusing the same username? Because I feel like I’ve been playing whack-a-mole blocking a certain user, and it would make a lot of sense if they were just creating new accounts constantly.

        • Skavau@piefed.social · ↑9 · 10 hours ago

          They tend to use similar names. Not always though. Not sure if you’re talking about the same person.

      • Rhynoplaz@lemmy.world · ↑8 · 12 hours ago

        Strange. I may never understand their motivation (and don’t take this as me expecting you or anyone else to know) but why on Earth would anyone go to such lengths to “support” a platform that has obviously decided it wants nothing to do with them?

        • Skavau@piefed.social · ↑17 · edited · 12 hours ago

          This is what blights so many small reddit alternatives.

          The initial wave of users onto reddit clones often includes a disproportionate number of malcontents. A lot of people who don’t play well with others, who are banned from reddit (or at least from lots of subreddits), usually turn up first on these reddit alternatives and disrupt the community through repeated anti-social, disruptive, attention-seeking behaviour. It’s not even necessarily tied to any political persuasion. This stuff can collapse budding alternatives.

          The first wave of new users on reddit alternatives is, in my experience, more likely to include a lot of problem users, and since the sites are so small and usually sparsely moderated, those users are much more disruptive than they would be on reddit. The Fediverse is now large enough to avoid that to some extent (a problem user who makes alts to troll, bait, and harass is a lot more visible on a small reddit clone with 1,000 users than on one with 50,000), but growing while ignoring the problems (as Reddit itself did, which is why it’s now a site utterly infested with bots, astroturfers, trolls, etc.) carries its own risks.

          The hope here is that the Fediverse can implement useful tools that disrupt the typical pattern of behaviour that these people present so they don’t increase and fester as the network grows. Best to nip this stuff now whilst the userbase is manageable.

  • lowspeedchase@lemmy.dbzer0.com · ↑26 ↓1 · 14 hours ago

    I want the fediverse to be a place to communicate with real people in good faith

    It’s a popularity problem. Once a ‘social network’ gets big enough to spread information to a decent user base, it will be astroturfed. Building a better mousetrap and all that jazz: they will still get through, no matter which mitigation strategy you use. The best defense is to engage with others who are respectful, and not engage with those who are clearly rage baiting, circle jerking, promoting a narrative, etc. You will never not see it; the trick is to realize it doesn’t matter at all. Just do you.

  • Libb@piefed.social · ↑14 · 13 hours ago

    Not related to bots, but more about account deletion: something I’ve wished for a long time is a separation between a user being able to delete their account and their content actually being removed. Content could simply be anonymized, or, if it really must be removed, a placeholder should be left in its place. As it stands, when someone deletes their account they delete not only their own posts but also all the comments other participants made under them, which is unfair and can be a real loss, since some comments are genuinely interesting.

    • Grimy@lemmy.world · ↑5 · 9 hours ago

      I strongly agree we have to handle deleted content differently, not only because of the loss of content but also because I suspect people running bots are combing through their comments once a day and deleting anything that would draw suspicion.

  • Admiral Patrick@dubvee.org · ↑24 · edited · 15 hours ago

    They’ve also been throwing out rage bait in the “YSK” community. But yeah, I definitely agree it’s a problem. So much so that I’ve added some built-in client-side filters for it in Tesseract. I don’t want to go into the technical details for fear of that jackass trying to counter them, but suffice it to say they’re effective against the current M.O. I just need to finish this release and get it pushed out, as it’s been approaching vaporware status over the last 1-2 months.

    I think Piefed is also taking action in either its UI or backend. I don’t recall exactly what, but I saw something mentioned a week or two ago on another post complaining about that self-deleting spammer account.

  • Lasherz@lemmy.world · ↑23 · 15 hours ago

    From the mod perspective, I can certainly tell you it’s an issue; however, the risk of making a bad call on a new user exceeds the risk of letting one through, since it will self-delete anyway. It seems like admins would need to solve it through one of several methods: allowing posts to stay up past an account’s deletion, restricting new users from posting, requiring a certain amount of community engagement first, figuring out their script to autoban, or some better solution I haven’t thought of.

    • SpikesOtherDog@ani.social · ↑7 · 11 hours ago

      What about putting a delay on a new account’s initial activity? Maybe a 2-4 hour timer on new accounts where their posts are only visible to certain users. Once the account has matured, the restrictions can be lifted.
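      A minimal sketch of that idea, with a hypothetical 3-hour maturity window (inside the 2-4 hour range suggested above) and a `viewer_is_trusted` flag standing in for whatever “certain users” means in practice (mods/admins, say):

      ```python
      from datetime import datetime, timedelta

      # Hypothetical maturity window; any value in the suggested 2-4h range works.
      MATURITY_AGE = timedelta(hours=3)

      def post_visible_to(author_created: datetime,
                          now: datetime,
                          viewer_is_trusted: bool) -> bool:
          """Posts from accounts younger than MATURITY_AGE are shown only to
          trusted viewers; once the account matures, everyone can see them."""
          if now - author_created >= MATURITY_AGE:
              return True
          return viewer_is_trusted
      ```

      The restriction lifts automatically, so legitimate new users are only briefly inconvenienced while throwaway spam accounts never reach a general audience.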

      • jordanlund@lemmy.world · ↑6 ↓1 · 9 hours ago

        That would be a fun way to implement a quarantine… Posts and comments by new users are only available to other new users. 😉

        Kind of like the “Hide Bots” toggle. “Hide New Users”.

        As far as THEY know, they’re active participants.

      • Lasherz@lemmy.world · ↑7 · 11 hours ago

        I think the admins have a lot of ways to approach it. The questions are how applicable each one is to the problem (which can change quickly as bots adjust their approach), whether it affects regular users negatively, and whether it herds scripts into patterns that are immediately recognizable when it doesn’t fully work.

        • SpikesOtherDog@ani.social · ↑5 · 10 hours ago

          Fair, and I was considering that. Heuristics could flag accounts for review rather than triggering instant bans. Of course, if the admins don’t respond to the flags, the problem goes unaddressed.

  • jordanlund@lemmy.world · ↑11 ↓1 · 14 hours ago

    Yeah, I’ve been banning them for bad-faith engagement in my communities with the “Remove Content” toggle, and it’s gotten to the point now where they’re still creating accounts but not bothering to post in my communities, so that’s good.

    But their pattern of posting to communities with low, no, or amenable moderation still lets them flood the channel.

  • In-person verification with Guy Fawkes masks to preserve anonymity. Meet with a trusted mod/admin, show paper poster with handwritten username on it, mod reads and verify. You’re confirmed human.

    I know a lemmy.world mod that claims to live in my city, we can verify each other.

    /okay just kidding, I’m too depressed and lazy to do this weird meetup thing.

    • Hello_there@fedia.io · ↑9 · 14 hours ago

      Honestly, I think some sort of blind verification system is what the internet needs. Some kind of notary-like system where a verification happens locally with a person, and then no info on the person gets passed out except the verification to start an account.

      • reksas@sopuli.xyz · ↑4 · 9 hours ago

        Maybe if there was some kind of trust-based verification, similar to how certifications work: you are issued a “certification” that you’re decent enough, you can issue further ones, and if you mess up too badly, yours might get revoked. It would probably be a terrible hassle, though it might work as a supplementary verification system.
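        As a toy illustration of that certification idea (all names and the transitive-revocation rule are my assumptions, not an existing Fediverse mechanism):

        ```python
        class TrustRegistry:
            """Toy web-of-trust: members vouch for newcomers; revoking a member
            also revokes everyone whose certification chains back through them."""

            def __init__(self, root: str):
                self.issuer = {root: None}  # member -> who vouched for them

            def vouch(self, sponsor: str, newcomer: str) -> None:
                if sponsor not in self.issuer:
                    raise ValueError(f"{sponsor} is not a trusted member")
                self.issuer[newcomer] = sponsor

            def revoke(self, member: str) -> None:
                # Drop the member and, transitively, everyone they certified.
                to_drop = {member}
                changed = True
                while changed:
                    changed = False
                    for m, sponsor in list(self.issuer.items()):
                        if sponsor in to_drop and m not in to_drop:
                            to_drop.add(m)
                            changed = True
                for m in to_drop:
                    self.issuer.pop(m, None)

            def is_trusted(self, member: str) -> bool:
                return member in self.issuer
        ```

        The transitive revocation is what makes sponsors careful about who they vouch for, which is also what makes the scheme a hassle in practice.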

        • Hello_there@fedia.io · ↑4 · 9 hours ago

          Yeah. Kind of crazy that I can lose access to a 20-year-old acct and Google is just like ‘meh, can’t verify you’, and we all just think that makes sense. I should be able to somehow verify who I am and get reinstated.

      • LOGIC💣@lemmy.world · ↑2 ↓1 · 13 hours ago

        I wonder… you know how webpages use widgets to check whether you’re human? Apparently they collect various data about how you interact with the page to figure it out. Perhaps that data could be used to build a fingerprint that tells people apart. It might be fairly inaccurate on its own, but combining it with other information, like IP address or location, could make a better digital fingerprint.

        • thethunderwolf@lemmy.dbzer0.com · ↑2 ↓1 · 5 hours ago

          That is a real thing, it’s called tracking, it’s NOT “fairly inaccurate”, and we should all be fighting it. Highly advanced and multifaceted digital fingerprinting, including behavioural fingerprinting, is being used by corporations to spy on our internet activity.

  • flamiera@kbin.melroy.org · ↑4 · 13 hours ago

    Yeah, it annoyed me when I noticed a couple of my comments disappeared after I replied to those posts. I thought people were reporting them and mods were taking them down, but I’d check the modlog and find nothing. This kind of behavior is suspicious, and again, it’s annoying. Why would anyone do that?

    I think the next time we see posts like these, we should take the idea, re-word it, and post it ourselves. At least that way it’ll remain up for people who want to contribute, instead of wasting everyone’s time trying to participate.

    • grue@lemmy.world · ↑2 · 8 hours ago

      I’d check the modlog and nothing.

      IIRC, “ban user” with “remove content” fails to get recorded in the modlog. Since that’s inconsistent with “remove post” getting logged, I assume it’s a bug.

  • lmmarsano@lemmynsfw.com · ↑3 ↓8 · edited · 8 hours ago

    Relevance is the only consideration that matters. Background & history are irrelevant.

    The best solution is for everyone to adopt this as regular practice to help users like OP lose their marbles at non-issues.