Concerns over AI surveillance in schools are intensifying after armed officers swarmed a 16-year-old student outside Kenwood High School in Baltimore when an AI gun detection system falsely flagged a Doritos bag as a firearm.

The student, Allen, was handcuffed at gunpoint. Police later showed him the AI-captured image that had triggered the alert: the crumpled Doritos bag in his pocket had been mistaken for a gun.

  • Lucy :3@feddit.org · 1 day ago (edited)

    [image attachment]

    Not a surprise, after all it involves pigs and a racist bullshit machine.

      • Lucy :3@feddit.org · 15 hours ago

        The point is the image. And the reason it’s at the bottom is artistic: to show how I felt; scrolling down the page, reading, then seeing the image and getting even angrier/annoyed.

          • Lucy :3@feddit.org · 11 hours ago

            But… why? The point is not the text, but catching the specific feeling of scrolling/reading down the article, and seeing the pic. If you want to read any text, open the article ffs

              • Lucy :3@feddit.org · 8 hours ago

                … And? I strongly doubt that either a screen reader or manual reading would produce the same effect. The entire point of the image is to make you feel the same as I did after reading the article: scroll down the page, see the image, realize. That’s captured 1:1 by the screenshot, and not at all by manually copy-pasting the text and then putting the image (with alt text) underneath. That wouldn’t be an aha moment, but utter confusion, because - again - the text is entirely irrelevant. Copying the text emphasizes its content; screenshotting emphasizes the moment.

                The only actually useful way to convey the message would be to add an alt text to the screenshot - which idk how to do on lemmy properly, in contrast to Mastodon.

                Besides, which modern screen reader does not have at least OCR?

    • CocaineShrimp@sh.itjust.works · 1 day ago

      Fun fact: making false police reports / swatting is illegal in most places. Which AI company is going to face criminal charges for putting this poor kid’s life at risk and traumatizing him?

    • mrbeano@lemmy.zip · 1 day ago

      “I don’t know what you expect us to do, the robot said we were in immediate mortal danger. So we started blasting…”

    • IninewCrow@lemmy.ca · 1 day ago

      That’s why the AI was triggered in the first place … the kid is BLACK!!!

      How many white kids with crumpled bags of chips in their pockets went away without any notice … as soon as you have one black kid in that situation … BOOM … call in the SWAT Team!

      • Lucy :3@feddit.org · 1 day ago (edited)

        Well… first we need to know how that shit was trained. I’d guess either they take real-life pics and let humans label them arbitrarily, they take historic surveillance pics plus whether bastards were sent out, or they use historic pics plus whether something was actually found/confirmed.

        All three will be heavily biased, especially #1. #2 would at least be based on “experts’” decisions, and #3 would be the least biased, but still way too much to be a basis for anything.

        Because: Human behaviour can be fixed. With the right measures, biased people/racists can be retrained or taken off-duty. However, as soon as such a system is trained on biased data, good luck correcting that. And no one will feel responsible anyway.
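The three training setups above can be sketched with a toy simulation (all numbers invented, purely illustrative) showing how scenario #2, training on whether police were dispatched, bakes historical bias straight into whatever the model learns:

```python
import random

random.seed(0)

# Toy simulation, all numbers invented: the "true" threat rate is identical
# for both groups, but historical dispatch decisions (the training labels in
# scenario #2) over-dispatch on group "B" by a factor of 3.
def make_history(n=10000):
    rows = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        threat = random.random() < 0.01           # same real rate for everyone
        base = 0.9 if threat else 0.02            # chance a human dispatches
        bias = 3.0 if group == "B" else 1.0       # historical over-dispatching
        dispatched = random.random() < min(1.0, base * bias)
        rows.append((group, dispatched))
    return rows

# Any model fit to these labels can at best learn the historical dispatch
# rates, so it reproduces the bias even with a perfectly "accurate" fit:
history = make_history()
rate = {g: sum(d for gg, d in history if gg == g)
           / sum(1 for gg, _ in history if gg == g)
        for g in ("A", "B")}
print(rate)  # group B's learned flag rate comes out roughly 3x group A's
```

A model trained this way is "correct" by its own loss function while flagging one group three times as often for identical behaviour.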

        • IninewCrow@lemmy.ca · 1 day ago

          This is the scary thing about any kind of AI we are building as a society … it will be heavily biased according to our general ideas of race, identity and beliefs … no matter how you cut it, develop it, influence it or deal with it, the AI will always inherit our human-based ideas of right, wrong, biases, beliefs, and perceptions.

          When you think about it, we (humanity as a whole) are trailer trash morons who dropped out of school years ago and shouldn’t have the responsibility of raising a child … but we went out and got pregnant anyway and now we’re raising a baby … sure, there are instances of trailer trash parents who do raise decent people, but I’ve seen my share of down-and-out people who raise children in absolutely the worst ways possible and churn out a whole generation of maladjusted people who end up with drug addictions, whose only hope is in doing the absolutely dumbest things possible to get by in life.

          We’re dumb parents and we’re raising a new being-AI-intelligence-whatever into existence. What do you think that new child (or growing AI) will become?

        • degen@midwest.social · 1 day ago (edited)

          It’s shocking just how biased the third is despite sounding (relatively) reasonable and unbiased. But bias is the entire nature of AI, that’s the whole point: biasing a machine to transform and distill data to arrive at a desired output.

          Really, the instant anything is made into a process with or without AI there’s bias, and it’s inescapable.

          ETA: I guess what I’m getting at is

          Because: Human behaviour can be fixed. With the right measures, biased people/racists can be retrained or taken off-duty.

          is bias in and of itself
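degen’s point that even scenario #3, training only on cases where a weapon was actually found/confirmed, stays biased can be illustrated with a toy simulation (all numbers invented): a weapon can only be “confirmed” on a stop that actually happened, so biased dispatching skews which confirmations ever enter the data.

```python
import random

random.seed(1)

# Toy simulation, all numbers invented: both groups carry weapons at the
# same rate, but officers were historically dispatched on group "B" three
# times as often. A weapon can only be "found/confirmed" (scenario #3's
# training label) when a dispatch actually happened.
def confirmed_finds(n=200000):
    finds = {"A": 0, "B": 0}
    for _ in range(n):
        group = random.choice(["A", "B"])
        armed = random.random() < 0.01                   # identical real rate
        dispatch_rate = 0.06 if group == "B" else 0.02   # biased dispatching
        if random.random() < dispatch_rate and armed:
            finds[group] += 1                            # the only observed label
    return finds

finds = confirmed_finds()
print(finds)  # far more "confirmed" finds for group B, despite equal armed rates
```

This is the “selective labels” problem: the ground truth looks objective, but you only ever observe it through the biased decisions that generated it.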