- cross-posted to:
- [email protected]
cross-posted from: https://lemmy.zip/post/49954591
“No Duh,” say senior developers everywhere.
The article explains that vibe code is often close to functional, but not quite there, requiring developers to go in and find where the problems are - resulting in a net slowdown of development rather than productivity gains.
Then there’s the issue of finding an agreed-upon way of tracking productivity gains, a glaring omission given the billions of dollars being invested in AI.
According to Bain & Company, companies will need to fully commit to AI in order to realize the gains they’ve been promised.
“Fully commit” to see the light? That… sounds more like a kind of religion than critical or even rational thinking.
LLMs are no different than any other technology: when the people making decisions to bring in the tech aren’t the people doing the work, you get shit decisions. You get LLMs, or Low Code/No Code platforms, or cloud migrations. Technical people make mistakes too, but any decision made with the input of salespeople will be made on the glossiness of brochures and will be bad. The same goes for any technology decision made with the Gartner Magic Quadrant - fuck the GMQ. Any decision process using it smells bad.
Well, there is a key difference between AI and other technology: the ability to “think” and “decide” for itself. That’s the point of the tech. The problem is that people “think” that’s true.