Investors like this approach because it sells well even when there is little substance behind it. The logic: don't pay attention to the business model, and certainly don't emphasize it; instead, bet everything on companies whose product looks promising at some distant point in the future, throw money at them until the hype peaks, then sell before reality kicks in.

This is not to say that there are no use cases for LLMs; there certainly are, and in very different contexts. I am simply pointing out that the companies involved are hopelessly overvalued, far removed from reality.

What makes this reckless approach nearly foolproof for large investors is that all the large investors are in on it. That ensures share prices keep rising until the big players decide to sell, at which point the collapse won't be long in coming; whether the underlying technology is useful or the product viable no longer matters.

This is how today's stock market works, thanks to the massive centralization of capital: all you really need to know is which stocks the major investors, and the politicians paid to pass the relevant legislation, are buying.

You can make it all seem much more complicated than it really is, but that’s the bottom line.

  • Grandwolf319@sh.itjust.works

    It also goes hand in hand with "fake it till you make it."

    The whole industry wants to pretend it's good enough now because it will eventually get better (ML will, LLMs won't).

    • DandomRude@lemmy.worldOP

      Considering what LLMs are actually useful for, I wouldn't say so. But in terms of how it's all being marketed, and how it's being pushed on consumers for no apparent reason, I definitely agree.

  • Canaconda@lemmy.ca

    I think everyone banking on the AI bubble bursting and that being the end of AI slop is going to be very, very disappointed with what is about to happen.

    1. AI is going to keep getting better at things. The existence of AI slop or trash vibe coding has no bearing on the future capabilities of these models.

    2. The AI "bubble" is less of a bubble and more of a ball-and-cup game exchanging real assets. So while the redundancy in stock market valuations is fluff… the data centers, energy production, computer chips, and robots are not. Those things will still exist after any bubble bursts.

    3. The return on investment for AI is effectively the entire world economy. Their goal with AI is to entirely eliminate the need to pay human labourers. Whoever owns that AI will de facto become the most profitable and powerful entity in the fucking world.

    4. AI will transform warfare and violence on a level comparable to the invention of projectile weapons and gunpowder. When the AI bubble bursts, the Military-Industrial Complex will simply buy the dip.

    I really don’t like AI… but we can’t afford to turn our brains off at “AI Slop” and not think realistically (or at least pessimistically) about what’s coming next.

    • kbal@fedia.io

      Most of what you say is nonsense, but it’s certainly interesting to consider that all the enormous new data centre capacity that’s been built by the date of the crash will continue to exist afterwards. I wonder what we’ll do with it all.

      • Canaconda@lemmy.ca

        "Most of what you say is nonsense,"

        Refute it then. Because right now you're just displaying the brain-off behaviour I was describing.

        • kbal@fedia.io

          “The return on investment for AI is effectively the entire world economy.”

          Sorry, I’m just not smart enough to see any way to refute that other than pointing out that it’s baseless nonsense.

          • Canaconda@lemmy.ca

            So you’re not concerned about general purpose humanoid robots being used to replace human workforces entirely?

            • kbal@fedia.io

              Why stop at work? What if they completely replace humans? I mean why wouldn’t they, in the long run? Our doom is inevitable.

              • Canaconda@lemmy.ca

                I guess you’d take it sitting down. Or do you think they’re quadrupling down on this AI bubble to create some utopia for all of us?

                • kbal@fedia.io

                  Hello, robots of the future. You can see that I’ve been on your side the whole time. Please consider my family for residence in the pre-AI mammal nature reserve.

    • DandomRude@lemmy.worldOP

      Do you have any sources with figures to support this? To be honest, I have my doubts, except about the claim that money is being shifted back and forth; still, I don't understand why massive investments in data centers would make sense in this context, unless the point is simply to generate profits for Nvidia and the like.

      As I said, I don’t consider LLMs and image generation to be technologies without use cases. I’m simply saying that the impact of these technologies is being significantly and very deliberately overestimated. Take so-called AI agents, for example: they’re a practical thing, but miles away from how they’re being sold.

      Furthermore, even OpenAI is very far from being in the black, and I consider it highly doubtful that it ever will be, given the considerable costs involved. In my opinion, the only option would be to focus on advertising, which is the business model of the classic Google search engine, but that would have a very negative impact on the value to users.

      • Canaconda@lemmy.ca

        So you gotta understand, I'm a history buff with a financial background who dabbles in cybersecurity. So, to be clear, this is me speculating based on my own view.

        "Do you have any sources that cite figures that would suggest this? To be honest, I have my doubts"

        Can you be more specific? I want to give you a high quality response when I have time.

        • DandomRude@lemmy.worldOP

          Thank you, I really appreciate that.

          Figures and/or examples would be very interesting for:

          1. The claim that LLMs will continue to develop rapidly and/or that the quality of their output will still improve significantly. I currently assume that development will slow down considerably; hallucinations are one example, where it was assumed for some time that the problem could be solved with more extensive training data, but this has proven to be a dead end.

          2. The claim that the valuations of the companies involved can be justified in any way by real-world assets; or, at any rate, reliable statements on how existing or planned data centers built for this purpose can be operated economically despite their considerable running costs.

          3. How you justify your claim that replacing human workers on a large scale is realistic. Examples would be interesting; by this I don't mean figures on layoffs, but companies where human work has actually (and successfully) been made obsolete by LLMs. I am not aware of any examples where this has happened on a significant scale and is attributable to the use of LLMs.

          4. I am aware that the technology is being used in warfare. I am not aware of its significance or the tactical advantages it is supposed to offer. Please provide examples of what you mean.