• data_science_rocks@scribe.disroot.org · ↑5 ↓27 · 12 days ago

    You increase the sample size, you increase the number of hits. Proportionally, AI is still just as safe. What a bullshit opinion piece. Inconsequential, just like the fucks agreeing with this shit take.
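
    A toy illustration of that first point, assuming a constant per-user risk (all numbers invented): the raw case count grows with the user base while the rate never moves.

    ```python
    # Toy sketch: a constant per-user risk means more users -> more raw cases,
    # but the rate itself stays flat. Every number here is invented.
    RISK = 0.0005  # assumed constant probability of psychosis per user

    for users in (1_000_000, 10_000_000, 100_000_000):
        expected_cases = users * RISK
        print(f"{users:>11,} users -> {expected_cases:>6,.0f} expected cases "
              f"(rate still {RISK:.2%})")
    ```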

    • prole@lemmy.blahaj.zone · ↑16 ↓1 · 12 days ago

      You increase the sample size, you increase the number of hits.

      Do you think statisticians aren’t well aware of this?

      • data_science_rocks@scribe.disroot.org · ↑2 ↓14 · 11 days ago

        I am a fucking statistician. And you need a fucking control group to establish causality.

        Gtfo if you don’t understand this basic principle.

        The article and your argument are both entirely devoid of substance.
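
        Sketched out, this is the comparison a control group makes possible (a plain two-proportion z-test; every count here is hypothetical):

        ```python
        from math import sqrt

        # Hypothetical counts. Without the control (unexposed) group there is
        # nothing to compare the exposed rate against, hence no causal claim.
        cases_ai, n_ai = 30, 10_000      # psychosis cases among AI users (invented)
        cases_ctl, n_ctl = 20, 10_000    # cases in a matched control group (invented)

        p_ai, p_ctl = cases_ai / n_ai, cases_ctl / n_ctl
        p_pool = (cases_ai + cases_ctl) / (n_ai + n_ctl)

        # Standard two-proportion z statistic under H0: the two rates are equal.
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_ai + 1 / n_ctl))
        z = (p_ai - p_ctl) / se
        print(f"rate(AI)={p_ai:.4f}  rate(control)={p_ctl:.4f}  z={z:.2f}")
        # Here |z| is about 1.42 < 1.96, so these made-up numbers would not
        # reject H0 at the 5% level.
        ```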

        • FosterMolasses@leminal.space · ↑11 ↓1 · 11 days ago

          If the statisticians involved in this case study are anywhere close to as unhinged as you are, then it’s no wonder they got those results lol

          • NoModsNoMasters@lemmy.world · ↑1 · 11 days ago

            Homie been smokin’ them data science rocks, it seems.

            Literally made an account on this instance just to let them know I think they’re fucking dense, but I decided they’re not even worth interacting with personally.

    • Bonifratz@piefed.zip · ↑9 ↓3 · 12 days ago

      Huh? The whole point of this emerging scientific debate is that AI use might be proportionally unsafe, i.e. that it might be a risk factor causing and/or exacerbating psychosis. Now, sure, this is still just a hypothesis, and it’s too early to make definite epidemiological statements, but it’s just as wrong to flatly state that AI is “still just as safe”.
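
      To give “proportionally unsafe” a concrete shape (invented numbers): if 30 out of 10,000 AI users developed psychosis against a background rate of 20 per 10,000, the relative risk would be (30/10,000) / (20/10,000) = 1.5, i.e. a 50% higher rate among users. Whether anything like that holds in real data is exactly what’s still open.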

      • data_science_rocks@scribe.disroot.org · ↑1 ↓20 · 12 days ago

        “Just as safe” is a relational statement, not an absolute one. I’m saying AI sits at some level of safety X, and more cases emerging does not imply an increasing risk of psychosis. That risk is where it’s always been.

        You’re twisting my words because you’re likely one of those brain-dead AI haters.

        I don’t particularly love or hate AI; the difference is that I look at it critically instead of emotionally. If the population at large has the same propensity X for psychosis as the rate seen among AI users, then what we’re observing is correlation without causation.
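
        What that last check amounts to, sketched with invented numbers (an exact one-sample binomial test via scipy, against an assumed base rate):

        ```python
        from scipy.stats import binomtest

        # Invented numbers: is the rate observed among AI users higher than an
        # assumed population base rate, or consistent with the same propensity X?
        BASE_RATE = 0.002           # assumed background psychosis rate
        cases, users = 25, 10_000   # hypothetical cases observed among AI users

        result = binomtest(cases, users, BASE_RATE, alternative="greater")
        print(f"observed rate={cases / users:.4f}  base rate={BASE_RATE:.4f}  "
              f"p-value={result.pvalue:.3f}")
        # A large p-value here means the observed count is what the shared base
        # rate already predicts, i.e. no evidence the risk moved.
        ```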

        • Bonifratz@piefed.zip · ↑12 ↓1 · 12 days ago

          Alright, but the point is that the “X level of safety” AI sits at might be a dangerous level in the first place. I don’t think anybody is arguing that AI became more dangerous as a psychosis risk factor over the past year or so; they’re arguing that AI was a risk factor to begin with, and that with increased AI use more evidence of this turns up. So saying that the inherent risk of AI hasn’t changed is kind of a moot point, because that’s not what the debate is about.

          Also notice that I clearly said it’s too early to tell one way or the other, so there’s no reason to malign me as uncritical.

          • data_science_rocks@scribe.disroot.org · ↑2 ↓14 · 12 days ago

            You ignored my last paragraph. Yes, it’s too early to tell; hence the opinion piece saying “Almost Certainly Linked To” is a distortion of reality. It’s laughably biased, and misleading to less-critical readers.

            • Bonifratz@piefed.zip · ↑9 · 11 days ago

              I can agree with that. (As an aside, I think scientific findings are almost always exaggerated like this in popular journalism.)

              I’d say the long and short of it is that we simply don’t (and can’t) know yet. But I think more research on possible links between AI and psychotic delusions is definitely useful, because I find the idea of a connection plausible.

        • ZDL@lazysoci.al · ↑3 ↓1 · 11 days ago

          I don’t particularly love or hate AI …

          Says the person calling people “fucks agreeing with this shit take”, “brain-dead AI haters”, and “less-critical readers”, and that’s just in this thread alone. Who knows what else I’d find looking through your full posting history.

          Not a very convincing act, even for a clank-fucker.

          • data_science_rocks@scribe.disroot.org · ↑1 ↓4 · 11 days ago

            Yes. Because taking a side is a shit take. Defending an article taking a side is a shit take.

            Whatever sort of “argument” you think you have by cherry-picking is a shit take.