• rowdy@lemmy.zip · 2 days ago

    I hate AI slop as much as the next guy, but aren’t medical diagnoses and detecting abnormalities in scans/X-rays something that generative models are actually good at?

    • medgremlin@midwest.social · 2 days ago

      They don’t use generative models for this. The AIs that do this kind of work are trained on carefully curated data and have a very narrow scope that they’re good at.

      • Ephera@lemmy.ml · 2 days ago

        Yeah, those models are referred to as “discriminative AI”. Basically, if you heard about “AI” from around 2018 until 2022, that’s what was meant.
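        Loosely speaking, a discriminative model only learns a decision boundary between classes, rather than learning to generate new data. A purely hypothetical toy sketch of that idea (a perceptron on made-up data; real diagnostic models are vastly larger, but the framing is the same):

```python
# Toy discriminative model: a perceptron learns a boundary between
# two classes instead of generating new samples.
# Hypothetical illustration only -- not a real medical model.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred  # -1, 0, or 1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def classify(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Linearly separable toy data: class 1 only when both features are on
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(samples, labels)
print([classify(w, b, s) for s in samples])  # → [0, 0, 0, 1]
```

        The point is the output: a label (or score) for an input, nothing more, which is why these models are so much easier to scope and evaluate.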

        • medgremlin@midwest.social · 1 day ago

          The discriminative AIs are just really complex algorithms and, to my understanding, are not complete black boxes. As someone who receives care for a lot of medical problems, and who will be a physician in about 10 months, I refuse to trust any black-box programming with my health or anyone else’s.

          Right now, the only legitimate use generative AI has in medicine is as a note-taker to ease the burden of documentation on providers. Their work is easily checked and corrected, and if your note-taking robot develops weird biases, you can delete it and start over. I don’t trust non-human things to actually make decisions.

          • sobchak@programming.dev · 6 hours ago

            They are black boxes, and can even use the same NN architectures as the generative models (variations of transformers). They’re just not trained to be general-purpose, all-in-one solutions; they have much more well-defined and constrained objectives, so it’s easier to evaluate how they’ll perform in the real world (unforeseen deficiencies and unexpected failure modes are still a problem, though).

      • Xaphanos@lemmy.world · 2 days ago

        That brings up a significant problem - there are widely different things that are called AI. My company’s customers are using AI for biochem and pharm research, protein folding, and other science stuff.

        • medgremlin@midwest.social · 1 day ago

          I do have a tech background in addition to being a medical student, and it really drives me bonkers that we’re calling these overgrown algorithms “AI”. The generative AI models, I suppose, come a little closer to earning the definition, as they are black-box programs that develop themselves to a certain extent, but all of the reputable “AI” programs used in science and medicine are very carefully curated algorithms with specific rules and parameters that they follow.

        • jballs@sh.itjust.works · 2 days ago

          My company cut funding for traditional projects and has prioritized funding for AI projects. So now anything that involves any form of automation is “AI”.

    • Mitchie151@lemmy.world · 2 days ago

      Image-categorisation AI, i.e. convolutional neural networks, has been in use since well before LLMs and other generative AI. Some medical imaging machines use this technology to highlight features such as specific organs in a scan. CNNs could likely be trained to be extremely proficient at reading X-rays and CT and MRI scans, which are generally the less operator-dependent types of scan, though they can get complicated. An ultrasound, for example, is highly dependent on the skill of the operator, and in certain circumstances things can be made to look worse or better than they are.
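      At the core of a CNN is the convolution itself: sliding a small filter over an image and responding strongly wherever a local feature appears. A hypothetical toy sketch (a hand-written 2D convolution with a made-up vertical-edge filter; real CNNs learn many such filters from data):

```python
# Toy 2D convolution: slide a small kernel over an image and sum
# the element-wise products at each position ("valid" padding).
# Hypothetical illustration of what one CNN layer computes.

def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = 0
            for a in range(kh):
                for b in range(kw):
                    acc += image[i + a][j + b] * kernel[a][b]
            row.append(acc)
        out.append(row)
    return out

# 4x4 "image" with a bright right half: a vertical edge down the middle
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
# Vertical-edge detector: responds where left and right pixels differ
kernel = [
    [-1, 1],
    [-1, 1],
]
print(convolve2d(image, kernel))  # strong response only along the edge
```

      The filter fires only where the feature is present (the middle column of the output), which is exactly how these models end up highlighting organs or abnormalities in a scan.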

      I don’t know why the technology hasn’t become more widespread in the domain. Probably because radiologists are paid really well and have a vested interest in preventing it: they’re not going to want to tag the images for their replacement. It’s probably also because medical data is hard to get permission for. To ethically train such a model, you would need to ask every patient, for every type of scan, whether their images can be used for medical research, which is just another form/hurdle for everyone.

    • MartianSands@sh.itjust.works · 2 days ago

      It’s certainly not as bad as the problems generative AI tends to have, but it’s still difficult to avoid strange and/or subtle biases.

      Very promising technology, but likely to be good at diagnosing problems in Californian students and very hit-and-miss with demographics who don’t tend to sign up for studies in Silicon Valley.

    • jj4211@lemmy.world · 2 days ago (edited)

      Basically, AI is generally a decent answer to the needle-in-a-haystack problem. Sure, a human with infinite time and attention could find the needles, perhaps more accurately than an AI could, but practically speaking, if there are just 10 needles in a haystack, it’s considered a lost cause to find any of them.

      With AI, it might flag 30 candidates in that same stack, of which only 7 are real needles. That means the AI finds more wrong answers than right ones, but you still end up with 7 needles when you would have missed all 10 before, coming out ahead.
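      In evaluation terms, that trade-off is low precision but decent recall, and the arithmetic is easy to check (toy numbers from the example above):

```python
# Precision/recall for the needle-in-a-haystack numbers above:
# the model flags 30 candidates, 7 of which are real needles,
# out of 10 needles total in the stack.
flagged = 30
true_positives = 7
total_needles = 10

precision = true_positives / flagged        # 7/30 ≈ 0.23
recall = true_positives / total_needles     # 7/10 = 0.70
false_positives = flagged - true_positives  # 23 duds for a human to reject

print(precision, recall, false_positives)
```

      The human reviewer’s job is rejecting those 23 false positives, which is still far cheaper than searching the whole stack.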

      So long as you don’t let an AI rule out review of a scan that a human really would have reviewed, it seems a win: more scans overall get a decent review, and otherwise impractical preventative scans might catch things earlier.

      • Deceptichum@quokk.au · 2 days ago

        Despite what the luddites would have you believe, AI is an amazing assistive tool when paired with a human reviewing the results.

        • jj4211@lemmy.world · 1 day ago

          The “luddite” reaction is largely a reaction to the overhype applied by an industry that pretends the current wave of text/image generators is general intelligence and, in conjunction with robotics, can replace every job and allow the upper-class folks to live a full life without that pesky labor class.

          So it’s natural to expect a wave of hype pretending it’s unambiguously amazing and perfect to get hit with a counter that’s overly dismissive and treats AI as a very bad brand. Also, in some contexts, even if it is a net win, it’s still kind of annoying. In my haystack example, a human would have reviewed 23 things confidently declared by the AI to be needles and said no to them. Practically speaking, that’s unimaginably better than reviewing millions of not-needles to get to some needles, but we’re more annoyed because, in our minds, the things presented were supposed to be needles.

          The same applies to a lot of generative AI use: it might provide a decent chunk of nearly usable content 20% of the time, quickly enough to be worth it, but it’s hard to ignore the 80% of suggestions it throws at you that are unusably bad. The percentage depends on your job and your niche. From a creative perspective, it generates milquetoast stuff, which may suffice for backgrounds and things that don’t matter, but is a waste of time when attempted as the key creative elements.

          Broadly, society has to navigate the nuanced middle ground, where it can be pretty good assistive technology without going all out on it. Except, of course, there are areas likely to be substantially or fully automated, like customer support or food order taking (though I prefer kiosks/apps for more precise ordering by tapping my way through; either way, not a human).

          • Deceptichum@quokk.au · 1 day ago

            The nuanced middle ground is what I said, treating it as an assistive tool with human review.

            • jj4211@lemmy.world · 1 day ago

              That’s fine, just saying that so long as there are people pumping ridiculous amounts of money into the fiction that it can do anything and everything, I won’t fault folks for having the counter-reaction of being overly dismissive of, or repulsed by, mentions of it.

              I’m hopeful for the day when the hype subsides and it settles into the appropriate level of usefulness and expectations, complete with perhaps less ludicrous overspend on the infrastructure.

        • CXORA@aussie.zone · 1 day ago

          Simp all you want.

          You’ll also be shivering in the streets before long.