• yesman@lemmy.world · 4 months ago

    The most beneficial application of AI like this is to reverse-engineer the neural network to figure out how the AI works. In this way we may discover a new technique or procedure, or we might find out the AI’s methods are bullshit. Under no circumstance should we accept a “black box” explanation.

    • CheesyFox@lemmy.sdf.org · 4 months ago

      Good luck reverse-engineering millions, if not billions, of seemingly random floating-point numbers. It’s like visualizing a graph in your mind by reading an array of numbers, except in this case the graph has as many dimensions as the neural network has inputs, which is the number of pixels in the input image.
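      For a sense of scale, here’s some back-of-the-envelope arithmetic for a hypothetical, deliberately tiny fully-connected classifier (nothing like the actual model in the article, just made-up layer sizes):

      ```python
      # Hypothetical toy fully-connected network on 224x224 grayscale images.
      # Real diagnostic models are far bigger; this is only for scale.
      inputs = 224 * 224            # one input dimension per pixel
      hidden1, hidden2 = 1024, 256  # two modest hidden layers
      outputs = 2                   # "looks like cancer" / "doesn't"

      params = (
          (inputs * hidden1 + hidden1)
          + (hidden1 * hidden2 + hidden2)
          + (hidden2 * outputs + outputs)
      )

      print(params)  # ~51.6 million weights and biases
      ```

      And every one of those floats only means anything in combination with all the others.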

      Under no circumstance should we accept a “black box” explanation.

      Go learn at least the basic principles of neural networks, because that sentence alone makes me want to slap you.

      • thecodeboss@lemmy.world · 4 months ago

        Don’t worry, researchers will just get an AI to interpret all those floating point numbers and come up with a human-readable explanation! What could go wrong? /s

      • petrol_sniff_king@lemmy.blahaj.zone · 4 months ago

        Hey look, this took me like 5 minutes to find.

        Censius guide to AI interpretability tools

        Here’s a good thing to wonder: if you don’t know how your black-box model works, how do you know it isn’t racist?

        Here’s what looks like a university paper on interpretability tools:

        As a practical example, new regulations by the European Union proposed that individuals affected by algorithmic decisions have a right to an explanation. To allow this, algorithmic decisions must be explainable, contestable, and modifiable in the case that they are incorrect.

        Oh yeah. I forgot about that. I hope your model is understandable enough that it doesn’t get you in trouble with the EU.

        Oh look, here you can actually see one particular interpretability tool being used to interpret one particular model. Funny that, people actually caring what their models are using to make decisions.
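        Just to make it concrete, here’s roughly what one of these techniques looks like in practice: a minimal sketch of permutation importance with scikit-learn on a synthetic dataset (purely an illustration, not whatever tool the guide or the paper above actually uses).

        ```python
        # Permutation importance: shuffle one feature at a time and measure how
        # much the test score drops. A big drop means the model leans on that
        # feature to make its decision.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=2000, n_features=10,
                                   n_informative=3, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

        result = permutation_importance(model, X_test, y_test,
                                        n_repeats=10, random_state=0)
        for i in result.importances_mean.argsort()[::-1]:
            print(f"feature {i}: {result.importances_mean[i]:.3f}")
        ```

        If one of the features your model leans on hardest turned out to be, say, a patient’s zip code, you’d really want to know about it.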

        Look, maybe you were having a bad day, or maybe slapping people is literally your favorite thing to do, who am I to take away mankind’s finer pleasures, but this attitude of yours is profoundly stupid. It’s weak. You don’t want to know? It doesn’t make you curious? Why are you comfortable not knowing things? That’s not how science is propelled forward.

        • Tja@programming.dev · 4 months ago

          “Enough” is doing a fucking ton of heavy lifting there. You cannot explain a terabyte of floating point numbers. Same way you cannot guarantee a specific doctor or MRI technician isn’t racist.

          • petrol_sniff_king@lemmy.blahaj.zone · 4 months ago

            A single drop of water contains billions of molecules, and yet, we can explain a river. Maybe you should try applying yourself. The field of hydrology awaits you.

            • Tja@programming.dev · 4 months ago

              No, we cannot explain a river, or the atmosphere. That’s why weather forecasts are only good for a few days, and why, even after massive computer simulations, aircraft, cars, and ships still need wind-tunnel and real-life testing: we can only approximate the real thing in our models.

              • petrol_sniff_king@lemmy.blahaj.zone · 4 months ago

                You can’t explain a river? It goes downhill.

                I understand that complicated things frighten you, Tja, but I don’t understand what any of this has to do with being unsatisfied when an insurance company denies your claim and all they have to say is “the big robot said no… uh… leave now?”

    • CheeseNoodle@lemmy.world · 4 months ago

      iirc it recently turned out that the whole black box thing was actually a bullshit excuse to evade liability, at least for certain kinds of model.

      • Johanno@feddit.org · 4 months ago

        Well, in theory you can explain how the model comes to its conclusion. However, I’d guess that only 0.1% of “AI engineers” are actually capable of that, and those probably cost 100k per month.

        • CheeseNoodle@lemmy.world · 4 months ago

          This one’s from 2019: Link
          I was a bit off the mark: it’s not that the models they use aren’t black boxes, it’s just that they could have made them interpretable from the beginning and chose not to, likely due to liability.

      • Tryptaminev@lemm.ee · 4 months ago

        It depends on the algorithms used. The lazy approach right now is to just throw neural networks at everything and waste immense computational resources; of course you then get results that are difficult to interpret. There are much more efficient algorithms that work well for many problems and give you interpretable decisions.
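        As a rough sketch of what I mean by interpretable (just an illustration with scikit-learn, not any specific production system): a shallow decision tree can be dumped as plain if/else rules and audited line by line.

        ```python
        # A shallow decision tree: the entire model can be printed and read.
        from sklearn.datasets import load_breast_cancer
        from sklearn.tree import DecisionTreeClassifier, export_text

        data = load_breast_cancer()
        tree = DecisionTreeClassifier(max_depth=3, random_state=0)
        tree.fit(data.data, data.target)

        # Every decision path, as human-readable if/else rules.
        print(export_text(tree, feature_names=list(data.feature_names)))
        ```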

    • MystikIncarnate@lemmy.ca · 4 months ago

      IMO, the “black box” thing is basically ML developers hand-waving and saying “it’s magic” because they know it would take way too long to explain all the underlying concepts in order to even start to explain how it works.

      I have a very crude understanding of the technology. I’m not a developer, I work in IT support. I have several friends that I’ve spoken to about it, some of whom have made fairly rudimentary machine learning algorithms and neural nets. They understand it, and they’ve explained a few of the concepts to me, and I’d be lying if I said that none of it went over my head. I’ve done programming and development, I’m senior in my role, and I have a lifetime of technology experience and education… And it goes over my head. What hope does anyone else have? If you’re not a developer or someone ML-focused, yeah, it’s basically magic.

      I won’t try to explain. I couldn’t possibly recall enough about what has been said to me, to correctly explain anything at this point.

      • homura1650@lemm.ee · 4 months ago

        The AI developers understand how AI works, but that does not mean that they understand the thing that the AI is trained to detect.

        For instance, the cutting edge in protein folding (at least as of a few years ago) is Google’s AlphaFold. I’m sure the AI researchers behind AlphaFold understand AI and how it works. And I am sure that they have an above-average understanding of molecular biology. However, they do not understand protein folding better than the physicists and chemists who have spent their lives studying the field. The core of their understanding is “the answer is somewhere in this dataset; all we need to do is figure out how to throw ungodly amounts of compute at it, and we can make predictions”. Working out how to productively throw that much compute power at a problem is not easy either, and that is what ML researchers understand and are experts in.

        In the same way, the researchers here understand how to go from a large dataset of breast images to cancer predictions, but that does not mean they have any understanding of cancer. And certainly not a better understanding than the researchers who have spent their lives studying it.

        An open problem in ML research is how to take the billions of parameters that define an ML model and extract useful information that can provide insights to help human experts understand the system (both in general, and in understanding the reasoning for a specific classification). Progress has been made here as well, but it is still a long way from being solved.

        • Tryptaminev@lemm.ee · 4 months ago

          Thank you for giving some insights into ML, which is now often just branded “AI”. Just one note, though: there are many ML algorithms that do not employ neural networks, and they don’t have billions of parameters. Especially in binary-choice image recognition (“looks like cancer or not”), approaches like support vector machines achieve great results, and they have very few parameters.
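          For instance, here is a sketch with scikit-learn’s classic tabular breast cancer dataset (measurements rather than raw images, but it makes the point): a linear SVM there is just one weight per feature plus a bias, 31 numbers you can read directly.

          ```python
          # Linear SVM on scikit-learn's tabular breast cancer dataset:
          # 30 feature weights + 1 bias = 31 parameters in total.
          from sklearn.datasets import load_breast_cancer
          from sklearn.pipeline import make_pipeline
          from sklearn.preprocessing import StandardScaler
          from sklearn.svm import LinearSVC

          data = load_breast_cancer()
          clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=10000, random_state=0))
          clf.fit(data.data, data.target)

          svm = clf.named_steps["linearsvc"]
          print(svm.coef_.size + svm.intercept_.size)  # 31

          # Each weight shows how strongly (and in which direction) a feature
          # pushes the decision, so the model can be inspected feature by feature.
          for name, w in zip(data.feature_names, svm.coef_[0]):
              print(f"{name}: {w:+.3f}")
          ```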

          • 0ops@lemm.ee · 4 months ago

            Machine learning is a subset of artificial intelligence, which is a field of research as old as computer science itself.

            The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics.[a] General intelligence—the ability to complete any task performable by a human on an at least equal level—is among the field’s long-term goals.[16]

            https://en.m.wikipedia.org/wiki/Artificial_intelligence

    • reddithalation@sopuli.xyz · 4 months ago

      Our brain is a black box, and we accept that (and control the outcomes with procedures, checklists, etc.).

      It feels like lots of professionals can’t exactly explain every single aspect of how they do what they do; sometimes it just feels right.