Not something I believe full stop, but IMO there are signs that, should there be a bubble, it will pop later than we might think. A few things for consideration.

Big tech continues to invest. They are greedy. They aren’t stupid. They have access to better economic forecasting than we do. I believe they are aware of markets for the /application/ of AI which will continue to be profitable in the future. Think of how many things are pOwErEd By ArTiFiCiAl InTeLlIgEnCe. That’s really just marketing-speak for “we have API tokens we pay for.”

Along these lines comes the stupid. Many of us have bosses who insist, if not demand, that we use AI. The US Secretary of Defense had his own obnoxious version of this earlier this week. If the stupid want it, the demand will remain, if not increase.

Artificial intelligence is self-reinforcing, meaning if we feed it whatever stupid queries we make, it will “get better” at those specifics and “create more versions”. That creates further reliance on, and demand for, the products that “do exactly what we want”. It’s an opiate. Like that one TNG episode with the headsets (weak allusion and shameless pandering, I know).

IMO generative AI is a dead end which will only exacerbate existing inequity. That doesn’t mean there won’t continue to be tremendous buy-in, which will warp our collective culture to maintain its profitability. If the bubble bursts, I don’t think it will be for a while.

  • nymnympseudonym@piefed.social · 16 days ago

    You’ll get downvoted to hell and so will I, but I’ll share my personal observation working at a large IT company in the AI space.

    Everybody at my company, and at our competitors, is automating the shit out of everything we can. In some cases it’s stuff we could have automated with regular cloud automation tooling; there just wasn’t organizational focus. But in ~75% of cases it’s automating things that used to require an engineer doing some Brain Work.

    Simple build breaks or bug fixes now get auto-fixed and reviewed later. Not at a 100% success rate, but it started at like 15%, then 25%, and …
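
    To make that concrete, here’s a minimal sketch of what that auto-fix step can look like, assuming a hypothetical `suggest_patch` model call (a placeholder, not any particular vendor’s API):

    ```python
    import subprocess

    def suggest_patch(build_log: str) -> str:
        """Placeholder: send the failing build log to whatever LLM you use
        and get back a unified diff. Vendor-specific in practice."""
        raise NotImplementedError

    def auto_fix_build(max_attempts: int = 3) -> bool:
        """Retry the build; on failure, apply a model-suggested patch."""
        for _ in range(max_attempts):
            result = subprocess.run(["make", "build"], capture_output=True, text=True)
            if result.returncode == 0:
                return True  # build is green
            patch = suggest_patch(result.stdout + result.stderr)
            # Apply the suggested diff; a real pipeline would do this on a
            # branch and open a PR so a human reviews it later.
            subprocess.run(["git", "apply"], input=patch, text=True, check=True)
        return False
    ```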

    Whoops, some problem in the automation scripts, and the only engineer on call right now is a junior who doesn’t know Groovy syntax? No problem: not knowing the language isn’t a blocker anymore. The engineer just needs to tweak the AI’s suggestions.

    Code reviews? Well, the AI already caught a lot of the common stuff in our org standards before the PR was submitted, so engineers are focusing on the tricky issues, not the common, easy-to-find ones.
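
    Same idea for the pre-PR pass, sketched below; `review_diff` and the standards-doc path are assumptions, not a real tool:

    ```python
    import subprocess
    import sys

    def review_diff(diff: str, standards: str) -> list[str]:
        """Placeholder: prompt a model with the diff plus the org standards
        doc and return a list of findings. Vendor-specific in practice."""
        raise NotImplementedError

    if __name__ == "__main__":
        with open("docs/org-standards.md") as f:  # assumed location
            standards = f.read()
        diff = subprocess.run(
            ["git", "diff", "origin/main...HEAD"],
            capture_output=True, text=True,
        ).stdout
        findings = review_diff(diff, standards)
        for finding in findings:
            print(finding)
        sys.exit(1 if findings else 0)  # non-zero exit blocks the submission
    ```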

    Management wants quantifiable numbers. Sometimes that’s easy (“X% of bugs fixed automatically, saving ~Y person-hours”); sometimes, like with code reviews, it’s a quality thing that will only show up over time.
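
    The easy case really is just arithmetic; every number below is made up for illustration:

    ```python
    # Back-of-envelope for the "~Y person-hours" figure. Hypothetical inputs.
    bugs_filed = 400
    auto_fix_rate = 0.25         # the "X%" above, assumed
    hours_per_manual_fix = 1.5   # assumed average engineer time per bug

    saved = bugs_filed * auto_fix_rate * hours_per_manual_fix
    print(f"~{saved:.0f} person-hours saved")  # ~150 person-hours
    ```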

    But we’re all scrambling like fuck, knowing full well that:
    a) everything is up for change right now and nobody knows where this is going
    b) we coders are like the horseshoe makers: we better figure out how the fuck to get in front of this
    c) just like the Internet, the companies that Figure It Out will be so much more efficient that their competitors will Just Die

    I can only speak for large corporate IT. But AFAICT, it’s exactly like the Internet – just even more disruptive.

    To quote Jordan Peele: Stay woke, bitches!

    • jacksilver@lemmy.world · 16 days ago

      I’m just amazed whenever I hear people say things like this, as I can’t get any model to spit out working code most of the time. And even when I can, it’s inconsistent and/or of questionable quality.

      Is it because most of your work is small iterations on an existing code base? Are you only working with the most popular tools that are better supported by models?

      • nymnympseudonym@piefed.social · 16 days ago

        Llama 4 sucked, but with scaffolding it could solve some common problems.

        o1/o3 were way better: less gaslighting.

        Grok 4 kicked it up a notch; more like a pro coder.

        GPT-5 and Claude are able to solve real problems and implement simple features.

        A lot depends not just on the codebase but on context, aka prompt engineering. Does the AI have access to relevant design docs? Interface definitions? A clearly written, well-formed bug report? … but not so much that the context is overwhelming and it stops working well again.
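
        For illustration, a rough sketch of that context-assembly step, with an assumed character budget and made-up names (nothing here is a specific tool’s API):

        ```python
        # Assemble a prompt from the most relevant material first, capped so
        # the context window isn't overwhelmed. Purely illustrative.
        MAX_CONTEXT_CHARS = 40_000  # assumed budget; tune per model

        def build_prompt(bug_report: str, interfaces: str, design_doc: str) -> str:
            sections = [
                ("Bug report", bug_report),            # the actual task, first
                ("Interface definitions", interfaces),
                ("Design doc", design_doc),            # background, trimmed first
            ]
            parts, used = [], 0
            for title, text in sections:
                budget = MAX_CONTEXT_CHARS - used
                if budget <= 0:
                    break
                snippet = text[:budget]  # crude truncation; real tools rank/select
                parts.append(f"## {title}\n{snippet}")
                used += len(snippet)
            return "\n\n".join(parts)
        ```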

        • jacksilver@lemmy.world · 16 days ago

          Okay, that’s more or less what I was expecting. A lot of my work is on smaller problems with more open-ended solutions, and in those scenarios I find the AI only really helps with boilerplate stuff. Most of the packages I work with, it has only a fleeting understanding of, or it mixes up versioning so badly that it’s really hard to trust it.

    • A_Union_of_Kobolds@lemmy.world · 16 days ago

      There is only one thing for certain: the people who hold the purse strings dictate the policies.

      I sympathize with the IT workers who feel like they’re engineering their replacements. Eventually, only a fraction of those jobs will survive.

      I believe hardware and market limitations will curb AI growth in the near future; hopefully the dust will start to settle and the real people who need to feed their families will find a way through. I think that, one way or another, there will be a serious need for social safety net programs to offset the IT labor surplus, which, hopefully, could create a (Socialist) Red Wave.