Reading the paper, AI did a lot better than I would expect. It showed experienced devs working on a familiar code base got 19% slower. It’s telling that they thought they had been more productive, but the result was not that bad tbh.
I wish we had similar research for experienced devs on unfamiliar code bases, or for inexperienced devs, but those would probably be much harder to measure.
I feel this – we had a junior dev on our project who started using AI for coding, without management approval BTW (it was a small company and we didn’t yet have a policy specifically for it. Alas.)
I got the fun task, months later, of going through an entire component that I’m almost certain was ‘vibe coded’ – it “worked” the first time the main APIs were called, but leaked and crashed on subsequent calls. It used double and even triple pointers to data structures which, per even a casual reading of the API vendor’s documentation, could all have been declared statically and re-used (this was an embedded system); needless arguments; mallocs and frees everywhere for no good reason (again, due to all of the un-needed dynamic storage behind said double/triple pointers). It was a horrible mess.
It should have never gotten through code review, but the senior devs were themselves overloaded with work (another, separate problem) …
I took two days and cleaned it all up: much simpler, no mem leaks, and it could actually be, you know, used more than once.
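For a sense of the before/after, here is a minimal sketch of the two styles in C, assuming a made-up vendor message type and API call (vendor_msg_t and vendor_send are hypothetical, not from the actual project):

```c
/* Hypothetical sketch: the vendor type and function names are invented. */
#include <stdlib.h>

typedef struct {
    int id;
    int payload[64];
} vendor_msg_t;

/* Roughly the vibe-coded pattern: needless dynamic allocation behind a
 * double pointer, redone on every call and easy to leak on early returns. */
int send_report_dynamic(void) {
    vendor_msg_t **msg = malloc(sizeof *msg);
    if (!msg) return -1;
    *msg = malloc(sizeof **msg);
    if (!*msg) { free(msg); return -1; }
    (*msg)->id = 42;
    /* vendor_send(*msg); -- miss either free below and the heap slowly dies */
    free(*msg);
    free(msg);
    return 0;
}

/* The cleaned-up pattern the vendor docs allow: one statically declared
 * structure, re-used across calls, no heap traffic at all. */
int send_report_static(void) {
    static vendor_msg_t msg;  /* zero-initialized, lives for the program's lifetime */
    msg.id = 42;
    /* vendor_send(&msg); */
    return 0;
}
```

The second version is what the cleanup boiled down to: one static buffer instead of a fresh pair of allocations on every call.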
Fucking mess, and LLMs (don’t call it “AI”) just allow those who are lazy and/or inexperienced to skate through short-term tasks, leaving huge technical debt for those who have to clean up after them.
If you’re doing job interviews, ensure the interviewee is not connected to LLMs in any way and make them do the code themselves. No exceptions. Consider blocking LLMs from your corp network as well and ban locally-installed things like Ollama.
The study was centered on bugfixing large established projects. This task is not really the one that AI helpers excel at.
Also, the small number of participants (16), the fact that the participants were already familiar with the code base, and the relatively short completion times of the tasks could all skew the results.
Hence the divergence between the study’s results and many people’s personal experience of a productivity increase: they are doing different tasks in a different scenario.
> The study was centered on bugfixing large established projects. This task is not really the one that AI helpers excel at.
“AI is good for Hello World projects written in javascript.”
Managers will still fire real engineers though.
I find it more useful doing large language transformations and delving into unknown patterns, languages or environments.
If I know a source head to toe, and I’m proficient with that environment, it’s going to offer little help. Especially if it’s a highly specialized problem.
Since the SVB crash there have been firings left and right. I suspect AI is only an excuse for them.
Same experience here: performance is mediocre at best on an established code base. Recall tends to drop sharply as the context expands, leading to a lot of errors.
I’ve found coding agents to be great at bootstrapping projects on popular stacks, but once you reach a certain size it’s better to either make them work on isolated files, or code manually and rely on the autocomplete.
> familiar with the code base
Call me crazy but I think developers should understand what they’re working on, and using LLM tools doesn’t provide a shortcut there.
You have to get familiar with the codebase at some point. When you are unfamiliar, in my experience, LLMs can help you understand it: copy in large portions of code you don’t really understand and ask for an analysis and explanation.
Not so long ago I used it on assembly code. It would have taken ages to decipher what it was doing by myself. The AI sped up the process.
But once you are very familiar with an established project you have worked a lot on, I don’t even bother asking LLMs anything, as in my experience I come up with better answers quicker.
At the end of the day we must understand that an LLM is more or less a statistical autocomplete trained on a large dataset. If your solution is not in the dataset, the thing is not really going to come up with a creative solution. And the thing is not going to run a debugger on your code either, afaik.
When I use it, the questions I ask myself the most before bothering are “is the solution likely to be in the training dataset?” and “is it a task that can be solved as a language problem?”
Don’t give yourselves to these unnatural men - machine men with machine minds and machine hearts! You are not machines! You are men!
You are not cattle!
You are men!
You have the love of humanity in your hearts!
You don’t hate!
Only the unloved hate - the unloved and the unnatural!
Soldiers!
Don’t fight for slavery! Fight for liberty!
In the 17th Chapter of St Luke it is written: “the Kingdom of God is within man” - not one man nor a group of men, but in all men!
In you!
Someone told me the best use of AI was writing unit tests and I died on the inside.
It would be interesting to see another study focusing on cognitive load. Maybe the AI lets you offload some amount of thinking so you reserve that energy for things it’s bad at. But I could see how that would potentially be a wash, as you need to clearly specify your requirements in the prompt, which is a different cognitive load.
I seem to recall a separate study showing that it just encouraged users to think more lazily, not more critically.
I wish AI had never been invented, but surely this isn’t true.
I’ve been able to solve coding issues in minutes that usually took me hours.
Wish it wasn’t so, but it’s been my reality.
LLMs making you code faster means you’re slow, not that LLMs are fast.
I doubt anyone can write a complex regex in ~30 seconds; LLMs can.
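For illustration only, here is the sort of pattern-heavy regex that is tedious to write by hand but quick to ask for: a rough semantic-version matcher in C using POSIX regex.h. The pattern is an approximation, not the official semver grammar.

```c
/* Illustrative only: roughly match strings like "1.4.2-rc.1+build.7". */
#include <regex.h>
#include <stdio.h>

int main(void) {
    const char *pattern =
        "^[0-9]+\\.[0-9]+\\.[0-9]+"      /* major.minor.patch        */
        "(-[0-9A-Za-z.-]+)?"             /* optional pre-release tag */
        "([+][0-9A-Za-z.-]+)?$";         /* optional build metadata  */

    regex_t re;
    if (regcomp(&re, pattern, REG_EXTENDED | REG_NOSUB) != 0) return 1;

    const char *inputs[] = { "1.4.2-rc.1+build.7", "2.0.0", "1.4" };
    for (int i = 0; i < 3; i++)
        printf("%-22s %s\n", inputs[i],
               regexec(&re, inputs[i], 0, NULL, 0) == 0 ? "match" : "no match");

    regfree(&re);
    return 0;
}
```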
AI-only vibe coders. As a development manager I can tell you that AI-augmented actual developers, who know how to write software and what good and bad code look like, are unquestionably faster. GitHub Copilot makes creating a suite of unit tests and documentation for a class take very little time.
Try reading the article.
The article is a blog post summarizing the actual research. The researchers’ summary says:
> We do not provide evidence that AI systems do not currently speed up many or most software developers. We do not claim that our developers or repositories represent a majority or plurality of software development work.
The research shows that under their tested scenario and assumptions, devs were less productive.
The takeaway from this study is to measure and benchmark what’s important to your team. However, many development teams have been doing that, albeit not in a formal study format, and finding that AI improves productivity. It is not (only) “vibe productivity”.
And certainly I agree with the person you replied to: anecdotally, AI makes my devs more productive by cutting out the most grindy parts, like writing mocks for tests or getting that last missing coverage corner. So we have some measuring and validation to do.
The research explicitly showed that the anecdotes were flawed, and that actual measured productivity was the inverse of what the users imagined. That’s the entire point. You’re just saying “nuh uh, muh anecdotes.”
I did, thank you. Terms therein like “they spend more time prompting the AI” genuinely do not apply to a code copilot like the one provided by GitHub, because it infers its prompt from what you’re doing and the context of the file and application, and offers an autocomplete suggestion based on its chat completion, which you can accept or ignore like any autocomplete.
You can start writing test templates and it will fill them out for you, and then write the next tests based on the inputs of your methods and the imports in the test class. You can write a whole class without any copilot usage and then start writing the xmldocs and it will autocomplete them for you based on work you already did. Try it for yourself if you haven’t already, it’s pretty useful.
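As a rough sketch of the kind of template-shaped test boilerplate being described (a hypothetical clamp() function and plain assert-based C rather than a real test framework or actual Copilot output):

```c
/* Hypothetical example: clamp() and the cases below are invented for illustration. */
#include <assert.h>
#include <stdio.h>

static int clamp(int value, int lo, int hi) {
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

/* After the first test is written by hand, the remaining ones are
 * near-mechanical variations, which is the part autocomplete tends to fill in. */
static void test_clamp_below_range(void)  { assert(clamp(-5, 0, 10) == 0);  }
static void test_clamp_above_range(void)  { assert(clamp(99, 0, 10) == 10); }
static void test_clamp_inside_range(void) { assert(clamp(7, 0, 10) == 7);   }

int main(void) {
    test_clamp_below_range();
    test_clamp_above_range();
    test_clamp_inside_range();
    puts("all clamp tests passed");
    return 0;
}
```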
I read the article (not the full study, only the abstract) and they were getting paid an hourly rate. It did not mention anything about whether or not they had experience using LLMs to code. I feel there is a sweet spot; it has to do with context window size, etc.
I was not consistently better with it a year and a half ago, but now I know the limits, caveats and methods.
I think this is a very difficult thing to quantify, but haters are gonna latch on to this, same as the studies that said “AI makes you stupid” and “LLMs can’t reason”… It’s a cool tool that has limits.
One interesting finding in this paper is that the programmers who used LLMs thought they were faster: they estimated it was saving about 20% of the time it would have taken without LLMs. I think that’s a clear sign that you shouldn’t trust your gut about how much time LLMs save you; you should definitely try to measure it.
The study did find a correlation between prior experience and performance. One of the developers who showed a positive speedup with AI was the one with the most previous experience using Cursor (over 50 hours).