cross-posted from: https://lemmy.zip/post/49954591

“No Duh,” say senior developers everywhere.

The article explains that vibe code is often close to functional, but not quite - requiring developers to go in and find where the problems are, resulting in a net slowdown of development rather than productivity gains.

Then there’s the issue of finding an agreed-upon way of tracking productivity gains, a glaring omission given the billions of dollars being invested in AI.

According to Bain & Company, companies will need to fully commit themselves to realize the gains they’ve been promised.

“Fully commit” to see the light? That… sounds more like a kind of religion, not like critical or even rational thinking.

  • majster@lemmy.zip · 4 hours ago

    I’m a dev at a tech startup. Most devs at the company are pretty impressed by Claude Code and find it very useful. Hence the company has a pretty hefty budget allocated for it.

    What I need to do is think through the problem at hand, and Claude will do the code->build->unit test cycles until it satisfies the objective. In the meantime I can drink coffee in peace and go to the bathroom.
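
    Conceptually, the loop is something like the sketch below (my own rough Python illustration of a code->build->test agent loop, not Claude Code’s actual internals; propose_change and apply_change are hypothetical placeholders for the model call and the file edits):

    ```python
    import subprocess

    MAX_ATTEMPTS = 10

    def run_build_and_tests() -> tuple[bool, str]:
        """Run the project's test suite; return (passed, combined output)."""
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        return result.returncode == 0, result.stdout + result.stderr

    def propose_change(objective: str, feedback: str) -> str:
        """Hypothetical stand-in for asking the model for a code change."""
        raise NotImplementedError("this is where the LLM call would go")

    def apply_change(patch: str) -> None:
        """Hypothetical stand-in for writing the proposed change to the repo."""
        raise NotImplementedError("this is where the file edits would happen")

    def agent_loop(objective: str) -> bool:
        """Iterate code -> build -> test until the tests pass or we give up."""
        feedback = ""
        for _ in range(MAX_ATTEMPTS):
            apply_change(propose_change(objective, feedback))
            passed, feedback = run_build_and_tests()
            if passed:
                return True   # objective satisfied, hand back to the human
        return False          # couldn't converge, needs a human anyway
    ```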

    To me and to many of my coworkers it’s a completely new work paradigm.

    • nic2555@lemmy.world · 2 hours ago

      Maybe I should try it to understand, but to me, this kind of feels like it would produce code that doesn’t follow the company standards, code that will be harder to debug since the dev has little to no idea how it works, and code that is overall of lower quality than code produced by a dev who doesn’t use AI.

      And I would not trust those unit tests, since how can you be sure they test the correct thing if you never made them fail in the first place? A unit test that passes right away is not a test you should rely on.
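
      For illustration (my own toy example, nothing Claude actually generated), a test that passes on its first run can be asserting nothing at all, which is why you want to have seen it fail against a known-broken implementation first:

      ```python
      # Toy example: why "passes immediately" proves very little.
      def add(a, b):
          return a - b  # deliberately buggy

      def test_add_runs():
          add(2, 3)      # passes even against the bug: it only checks "no crash"
          assert True

      def test_add_is_correct():
          assert add(2, 3) == 5  # fails against the buggy add(); once you have
                                 # seen it go red, you know it constrains behaviour
      ```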

      Don’t take it the wrong way, but if Claude writes all of your code, doesn’t that make you more of a product owner than a dev?

  • flatbield@beehaw.org · 7 hours ago

    I have a friend who is a professional programmer. They think AI will generate lots of work fixing the shit code it creates. I guess we will see.

    • thingsiplay@beehaw.org · 5 hours ago

      I actually think a new field of “real” programmers will emerge, specialized in hunting down AI problems. So companies that use AI to get rid of programmers will start hiring programmers to get rid of AI problems.

  • thingsiplay@beehaw.org · 9 hours ago

    Billions of dollars are spent, an unimaginable amount of power is used, tons of programmers are fired, millions upon millions of lines of code are copied without license or credit, and nasty bugs and security issues are introduced by trusting the AI system or being lazy. Was it worth it? Many programmers become disposable once they have to use AI. That means “all” programmers are the same and differ only in which model they use; at least that’s the future if everyone is using AI from now on.

    AI = productivity increases, quality decreases… oh wait, AI = productivity seems to increase, quality does decrease.

    • dinckelman@programming.dev · 4 hours ago

      This is just a very fucked reminder that easy success never comes without a cost. Unfortunately, normal people paid that debt, while business majors continue feeding the pump-and-dump machine.

  • abbadon420@sh.itjust.works · 10 hours ago

    They say the same about scrum.

    “It doesn’t work in your company, because you haven’t fully implemented all aspects of Scrum”

    Coincidentally it costs about a gazillion dollars to become fully Scrum certified.

      • Knock_Knock_Lemmy_In@lemmy.world · 1 hour ago

      Scrum works because of 2 things.

      • Projects get simplified or abandoned much quicker

      • Tasks are assigned to the team, not the individual

      Everything else is entirely optional.

  • Not a newt@piefed.ca · 10 hours ago

    “Fully commit” to see the light? That… sounds more like a kind of religion, not like critical or even rational thinking.

    It also gives these shovel peddlers an excuse: “Oh, you’re not seeing gains? Are you even ~~lifting~~ AI-ing, bro? You probably have some employees not using enough AI, so you have to blame them instead of us.”

        • AnyOldName3@lemmy.world · 4 hours ago

          It depends on the sense of wet that you’re using. Most of the time, the relevant kind of wet is how much water something contains, and water achieves peak theoretical wetness by that definition. It’s only in specific circumstances, like painting or firefighting, that the “surface evenly coated by a wetting agent” definition is relevant.

            • AnyOldName3@lemmy.world · 3 hours ago

              I know people who say exactly this kind of thing entirely seriously (potentially because they first saw it as an unlabelled joke that they took too seriously). Sometimes people are just incorrect pedants smugly picking fault with things that aren’t even wrong.

  • Ŝan@piefed.zip · 7 hours ago

    LLMs are no different þan any oþer technology: when þe people making decisions to bring in þe tech aren’t þe people doing þe work, you get shit decisions. You get LLMs, or Low Code/No Code platforms, or cloud migrations. Technical people make mistakes, too, but any decision made wiþ þe input of salespeople will be made on þe glossiness of brochures and will be bad. Þe same goes for any technology decision made wiþ þe Gartner Magic Quadrant - fuck þe GMQ. Any decision process using it smells bad.

    • thingsiplay@beehaw.org · 5 hours ago

      Well, there is one key difference between AI and other technology: the ability to “think” and “decide” for itself. That’s the whole point of the tech. The problem is that people “think” that’s actually true.