Multiple things have gone wrong with AI for me, but these two pushed me over the brink. This is mainly about LLMs, but other AI hasn’t been particularly helpful for me either.

Case 1

I was trying to find the music video that a screenshot was taken from.

I gave o4-mini the image and asked where it was from. It refused, saying that it does not discuss private details. Fair enough. I told it that it was xyz artist. It then listed three of their popular music videos, none of which was the correct answer to my question.

Then I started a new chat and described the screenshot in detail. It once again regurgitated similar answers.

I gave up. I did a simple reverse image search and found the answer in 30 seconds.

Case 2

I wanted to create a spreadsheet for tracking investments, with xyz columns.

It did give me the correct columns and rows, but the formulae for the calculations were off. They were almost correct most of the time, and almost correct is useless when you are working with money.
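To put a number on “almost correct”: the per-row arithmetic here is trivial, which is what made subtly wrong formulae so annoying. A minimal sketch of the kind of calculation involved (the column names are my own stand-ins, not what the model produced), using Decimal because binary floats drift on money math:

```python
from decimal import Decimal, ROUND_HALF_UP

# Hypothetical columns: units, buy_price, current_price.
# Floats are exactly the "almost correct" trap:
# 0.1 + 0.2 == 0.30000000000000004.
def position_summary(units, buy_price, current_price):
    units = Decimal(str(units))
    cost = units * Decimal(str(buy_price))
    value = units * Decimal(str(current_price))
    gain = value - cost
    pct = (gain / cost * 100).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    return cost, value, gain, pct

print(position_summary(12, "154.30", "161.85"))
# (Decimal('1851.60'), Decimal('1942.20'), Decimal('90.60'), Decimal('4.89'))
```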

I gave up. I manually made the spreadsheet with all the required details.

Why are LLMs so wrong most of the time? Aren’t they processing high-quality data from multiple sources? I just don’t understand the point of even making this software if all it can do is sound smart while being wrong.

  • Voroxpete@sh.itjust.works · 2 days ago

    Aren’t they processing high quality data from multiple sources?

    Here’s where the misunderstanding comes in, I think. And it’s not the high quality data or the multiple sources. It’s the “processing” part.

    It’s a natural human assumption to imagine that a thinking machine with access to a huge repository of data would have little trouble providing useful and correct answers. But the mistake here is in treating these things as thinking machines.

    That’s understandable. A multi-billion dollar propaganda machine has been set up to sell you that lie.

    In reality, LLMs are word prediction machines. They try to predict the words that would likely follow other words. They’re really quite good at it. The underlying technology is extremely impressive, allowing them to approximate human conversation in a way that is quite uncanny.

    But what you have to grasp is that you’re not interacting with something that thinks. There isn’t even an attempt to approximate a mind. Rather, what you have is a confabulation engine; a machine for producing plausible fictions. It does this by creating unbelievably huge numerical maps of words - literally operating in thousands of dimensions at once, graphs with many times more axes than we have letters, shaped by billions of learned parameters - and probabilistically associating words with each other. It’s all very clever, but what it produces is 100% fake, made up, totally invented.

    Now, because of the training data they’ve been fed, those made-up answers will, depending on the question, sometimes end up being right. For certain types of question they can actually be right quite a lot of the time. For other types of question, almost never. But the point is, they’re only ever right by accident. The “AI” is always, always constructing a fiction. That fiction just sometimes aligns with reality.
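
    If it helps to make that concrete, here’s the whole idea shrunk down to a toy: a bigram chain in Python. Real LLMs use learned weights over enormous contexts instead of a lookup table, but the core move - pick a plausible next word, append it, repeat - is the same.

    ```python
    import random
    from collections import defaultdict

    # Toy next-word predictor: record which words follow which, then
    # sample. It has no notion of truth, only of what tends to follow
    # what - a miniature confabulation engine.
    corpus = ("the model predicts the next word "
              "the model sounds confident "
              "the next word sounds plausible").split()

    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)

    word, out = "the", ["the"]
    for _ in range(8):
        successors = follows.get(word)
        if not successors:        # dead end: no observed successor
            break
        word = random.choice(successors)
        out.append(word)

    print(" ".join(out))          # fluent-ish, and entirely fact-free
    ```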

    • Outwit1294@lemmy.today (OP) · 2 days ago

      Confabulation is what it is, you are right.

      Why on Earth are investors backing this? Usually money filters out useless endeavours.

      • Ulrich@feddit.org · 16 hours ago

        You have to understand that even if these companies never make a penny from the product itself, they still pay themselves with investor money. So they’ll fib or sometimes just outright lie about the potential of the product to get more investors. Investors are eating this shit up.

        • Outwit1294@lemmy.today (OP) · 12 hours ago

          That is my whole point about investors: smart money does not take part in such things for this long. It sees through the bullshit.

      • ZDL@lazysoci.al · 1 day ago

        Really?

        Did money filter out the subprime mortgages before disaster?

        Did money filter out cryptocurrency?

        Did money filter out NFTs?

        Hey, why stick to the recent past? Did money filter out tulip bulbs?

        Money filters out nothing. Money is held by humans. Humans do stupid things. Humans run in packs. Humans do stupid things in packs. And that means money does stupid things in packs.

        • Outwit1294@lemmy.today (OP) · 1 day ago

          Yes, but all these things were actually filtered out by money. It took a while but it happened.

          • ZDL@lazysoci.al · 1 day ago

            I’m not sure I understand, then, what you mean by “filtered out by money”. If you mean “they collapsed eventually because they were idiotic ideas” then, well, yes. But they lasted for a long time before doing so and caused incalculable damage in the process. The tulip bulb craze (one of the earliest speculative crazes) lasted about 4 years. The subprime mortgage disaster took 8 years. The NFT fiasco lasted about 2 years. The dot-com bubble took 7 years to play out. The Japan real estate bubble was about 5 years.

            We’re only 3 years or so into the LLMbecile bubble. If you want to think of bubble collapses as “filtered out by money” we’ve got anywhere from next week to 2029 for the collapse.

            • jumping_redditor@sh.itjust.works · 22 hours ago

              Pretty sure economists were able to calculate the damage caused by the last market crash. The market being permanently changed is inherently a corrective force on the market.

              • ZDL@lazysoci.al · 8 hours ago

                So “filtered out by money” means “collapsed into ruins”.

                Well then, yes. The idiot ideas get “filtered out by money”. Which is a really obfuscatory way of saying “collapsed into ruins”. Not sure why you’d word it in such an odd way.

      • stabby_cicada@slrpnk.net · 2 days ago

        Oh you sweet summer child.

        If you remember anything from this thread, remember this: capitalist markets do not care whether something is useful or useless. Capitalist markets care whether something will make money for its investors. If something totally useless will make money for its investors, the market will throw money at it.

        See: tulips, pet rocks, ethanol, cryptocurrency. And now AI.

        Because people are stupid. And people will spend money on stupid shit. And the empty hand of capitalism will support whatever people will spend money on, whether it’s stupid shit or not.

        (And because, unfortunately, AI tools are amazing at gathering information from their users. And I think the big tech companies are really aggressively pushing AI because they want very much to have users talking to their AI tools about what they need and what they want and what their interests are, because that’s the kind of big data they can make a lot of money from.)

      • xangadix@lemmy.world · 2 days ago

        money filters out useless endeavours.

        That might have been true once, if ever, but it’s certainly not true anymore. Actually, fabulation is where most of the money is. Most ‘investors’ have gotten rich by accident and by an incredible amount of luck. They will tell you it was hard work, sweat and blood, but that is never true; it’s being born into the right family and being in the right place at the right time. These people aren’t any smarter or better than you and me, and they are just as susceptible to bullshit as you and me. Maybe even more so, because they think their exceptional skill has gotten them where they are. This means they will quite easily put their money into any endeavour that sounds plausible and/or profitable in their mind, but which is usually complete nonsense. What is more, once a few of them have put money on the table, FOMO kicks in and all the bros from the gym want in too, kicking off a cycle of complete and utter waste of money. All the while telling everyone that this, THIS, this thing they have put money on, is the next big thing.

      • Krudler@lemmy.world · 2 days ago

        See quantum computing.

        Once governments started to set aside funding for it, the scams began. Google, Microsoft - they’re all in on it.

        D-Wave is history, and for an AI example, Builder.ai was revealed to be 700 underpaid engineers in India.

        There are like two useful algorithms right now, which we also can’t use, because we cannot make stable arrays of qubits.

        Once the money and hype train starts rolling, it becomes about money men exploiting that hype to multiply their money... and the technology is completely secondary.

        • themoken@startrek.website · 2 days ago

          Eh, I’ll agree that quantum computing hasn’t delivered much yet, but it shouldn’t be mentioned in the same sentence as LLMs. There’s a difference between tech that hasn’t become practical yet, and tech that is a gigantic grift pretending to be something it will categorically never achieve.

            • themoken@startrek.website · 24 hours ago

              Because I think it has a stronger theoretical basis. We have been able to do simple operations with qubits and have been increasing those capabilities over the decades. It’s basically a matter of scale at this point.
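
              For what it’s worth, the “simple operations” part is genuinely well understood on the math side: a one-qubit gate is just a small unitary matrix. A toy statevector sketch (not how you’d program real hardware, just the underlying algebra):

              ```python
              import numpy as np

              # One qubit, one Hadamard gate. The algebra is easy; the hard
              # engineering problem is doing this at scale with low error rates.
              ket0 = np.array([1, 0], dtype=complex)   # |0>
              H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

              state = H @ ket0            # superposition of |0> and |1>
              probs = np.abs(state) ** 2  # Born rule
              print(probs)                # [0.5 0.5]
              ```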

              • Krudler@lemmy.world · 22 hours ago

                No, it really isn’t.

                We have almost no useful algorithms, and there are no new ones in sight.

                And many of the ones that have been assumed to be useful aren’t.

                It’s a gigantic shell game right now.

                • jumping_redditor@sh.itjust.works · 22 hours ago

                  Cracking cryptographic algorithms is a use case that is useful to governments. The usefulness of a tool doesn’t care whether it’s good for everyone, just that there are benefits to those that use it.

    • Kay Ohtie@pawb.social · 2 days ago

      Even the “thinking engine” ones are wild to watch in motion, if you ever turn on debugging. It’s like watching someone think by substituting their keyboard’s autosuggest for the words that appear in their head. It just generates something and then generates again using THAT output (maybe multiple times for each step).
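
      Roughly, the loop looks like this (a sketch only; the model call is stubbed out, and the function names are mine, not any particular framework’s):

      ```python
      # Each "thinking" step is just another generation pass fed its own
      # previous output.
      def generate(prompt: str) -> str:
          # Stand-in for the actual model call.
          return f"(model output for: {prompt.splitlines()[-1][:40]})"

      def think(task: str, steps: int = 4) -> str:
          context = task
          for _ in range(steps):
              draft = generate(context)           # generate something...
              context = context + "\n" + draft    # ...then generate again using THAT output
          return generate(context + "\nFinal answer:")

      print(think("turn on the kitchen lights"))
      ```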

      I watched one I installed locally for Home Assistant, as a test for various operations, just start repeating itself over and over to nearly everything before it just spat out something completely wrong.

      Garbage engines.

      • Voroxpete@sh.itjust.works · 2 days ago

        I assume by “thinking engine” you mean “Reasoning AI”.

        Reasoning AI is just more bullshit. What happens is that they produce the output the way they always do - by guessing at a sequence of words that is statistically adjacent to the input they’re given - but then they also produce a “chain of thought” which is invented in the same way as the result: just pure statistical word association. Essentially they create the output the same way a non-reasoning LLM does, then they effectively give themselves the prompt “Write a chain of thought for this output.” There’s a little extra stuff going on where they sort of check their own output, but in essence that’s done by running the model multiple times and picking the output they converge on. So, just weighting the randomness, basically.
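
        The “converge” part is basically majority voting over repeated samples (the technique is usually called self-consistency). A toy sketch, with the whole model stubbed out as weighted randomness - which is sort of the point:

        ```python
        import random
        from collections import Counter

        # Stand-in for one full generate-with-reasoning pass.
        def sample_answer() -> str:
            return random.choices(["42", "41", "43"], weights=[6, 2, 2])[0]

        def converged_answer(n_samples: int = 9) -> str:
            votes = Counter(sample_answer() for _ in range(n_samples))
            return votes.most_common(1)[0][0]   # keep the majority answer

        print(converged_answer())               # usually "42"
        ```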

        I’m simplifying a lot here obviously, but that’s pretty much what’s going on.