None of what I write in this newsletter is about sowing doubt or “hating,” but a sober evaluation of where we are today and where we may end up on the current path. I believe that the artificial intelligence boom — which would be better described as a generative AI boom — is (as I’ve said before) unsustainable, and will ultimately collapse. I also fear that said collapse could be ruinous to big tech, deeply damaging to the startup ecosystem, and will further sour public support for the tech industry.

Can’t blame Zitron for being pretty downbeat in this - given the AI bubble’s size and side-effects, it’s easy to see how its bursting could have some cataclysmic effects.

(Shameless self-promo: I ended up writing a bit about the potential aftermath as well)

  • Architeuthis@awful.systems · 2 months ago

    On each step, one part of the model applies reinforcement learning, with the other one (the model producing outputs) “rewarded” or “punished” based on the perceived correctness of its progress (the steps in its “reasoning”), altering its strategies when punished. This differs from how other large language models work in that the model generates outputs, then looks back at them, discarding or approving “good” steps on the way to an answer, rather than just generating one and saying “here ya go.”
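A minimal sketch of that idea (not OpenAI’s actual method, which isn’t public; `score_step` stands in for a learned reward model and is just a toy heuristic here):

```python
# Toy illustration of step-level reward filtering: each intermediate
# "reasoning" step gets a score, and low-scoring steps are discarded
# before the final answer is assembled.

def score_step(step: str) -> float:
    """Stand-in for a learned reward model; here, a crude heuristic
    that rewards steps containing an explicit justification."""
    return 1.0 if "because" in step else 0.2

def filter_reasoning(steps: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only the steps the 'reward model' judges as good."""
    return [s for s in steps if score_step(s) >= threshold]

steps = [
    "The answer is probably 4 because 2 + 2 = 4.",
    "Maybe it is 5.",
]
print(filter_reasoning(steps))  # only the justified step survives
```

In the real system the scoring would itself be a trained model updated by reinforcement learning, not a string check, but the generate-then-evaluate-then-prune loop is the part that distinguishes this from plain one-shot generation.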

    Every time I’ve read how chain-of-thought works in o1 it’s been completely different, and I’m still not sure I understand what’s supposed to be going on. Apparently you get a strike notice if you try too hard to find out how the chain-of-thinking process goes, so one might be tempted to assume it’s something that’s readily replicable by the competition (and they need to prevent that as long as they can) instead of any sort of notably important breakthrough.

    From the detailed o1 system card pdf linked in the article:

    According to these evaluations, o1-preview hallucinates less frequently than GPT-4o, and o1-mini hallucinates less frequently than GPT-4o-mini. However, we have received anecdotal feedback that o1-preview and o1-mini tend to hallucinate more than GPT-4o and GPT-4o-mini. More work is needed to understand hallucinations holistically, particularly in domains not covered by our evaluations (e.g., chemistry). Additionally, red teamers have noted that o1-preview is more convincing in certain domains than GPT-4o given that it generates more detailed answers. This potentially increases the risk of people trusting and relying more on hallucinated generation.

    Ballsy to just admit your hallucination benchmarks might be worthless.

    The newsletter also mentions that the price for output tokens has quadrupled compared to the previous newest model, but the awesome part is, remember all that behind-the-scenes self-prompting that goes on while it arrives at an answer? Even though you’re not allowed to see those tokens, according to Ed Zitron you sure as hell are paying for them (i.e. they are billed as output tokens), which is hilarious if true.
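Back-of-the-envelope math on that, using launch-time list prices (roughly $60 per million output tokens for o1-preview versus $15 for GPT-4o, hence “quadrupled”; figures change, so treat them as illustrative):

```python
# Rough cost sketch: if hidden "reasoning" tokens are billed as output
# tokens, the visible answer is only a fraction of what you pay for.
# Price is the launch-time o1-preview output rate (illustrative only).
O1_PREVIEW_PER_TOKEN = 60.00 / 1_000_000

def visible_answer_cost(visible_tokens: int, hidden_reasoning_tokens: int) -> float:
    """Total output bill: the answer you see plus the reasoning you don't."""
    return (visible_tokens + hidden_reasoning_tokens) * O1_PREVIEW_PER_TOKEN

# A 500-token answer backed by 5,000 hidden reasoning tokens costs
# eleven times what the visible text alone would:
print(round(visible_answer_cost(500, 5000), 3))  # 0.33
print(round(visible_answer_cost(500, 0), 3))     # 0.03
```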

  • FredFig@awful.systems · 2 months ago

    I’m terrified for the future, and not even on hater shit. The public numbers are bad, and barring some extremely surprising reports locked behind a wall of NDAs, the private numbers don’t seem much better. Even Saltman, perpetual cheerleader that he is, doesn’t have much to offer except desperation to keep the party going, barely a week after their big model drop.

    Sam Altman responds to a user asking for the promised voice features with extreme pettiness. "how about a few weeks of gratitude for magic intelligence in the sky, and then you can have more toys soon?"


    So if all the big tech players know that this is garbage, the continual doubling down points to either: 1. scrambling for the pie while it’s there, or 2. everything else they have to offer being even worse somehow? In either case, the aura of being a tech company instead of a company is lost, and I don’t know what happens in the fallout. The best-case scenario is probably that only tech workers like myself have to eat the blowback, but I suspect things won’t play out so cleanly.

    • istewart@awful.systems · 2 months ago

      User requests something that accommodates their actual use-case. Altman responds by dismissing it as “toys,” in that same cultivated faux-casual lowercase smarm that constitutes the bulk of his public identity. This man is not fit to be an executive.

  • s3p5r@lemm.ee · 2 months ago

    I don’t toil in the mines of the big FAANG, but this tracks with what I’ve been seeing in my mine. I also predict it will end with lay-offs and companies collapsing.

    Zitron thinks a lot about the biggest companies and how the collapse will ultimately hurt them, which is reasonable. But I think that framing ironically downplays the scale of the bubble, and in turn, the impact of its bursting.

    The expeditions into OpenAI’s financials have been very educational. If I were an investigative reporter, my next move would be to look at the networks created by venture capitalists, and at what is happening inside the companies that share the same patrons as OpenAI. I don’t say that as someone who deals with finances, just as someone who carefully watches organizational politics.

  • TommySoda@lemmy.world · 2 months ago

    I mean, it’s gotten to the point where I can’t even keep track of all the different AIs being pushed by companies. My prediction is that some company will make a super efficient and helpful AI and everyone will start using that as a baseline. Like how every company wanted a website before they all migrated the majority of their information to social media like Facebook and Twitter. And let’s be honest, most of the big companies making AI are not going to be the ones to do it. Even though they are improving, they’re more interested in making money than in making better AI. We haven’t seen a major breakthrough in months, and most of the progress is minimal. Every time they come out with a new model, it’s usually just the same thing with more bells and whistles.