Model Evaluation and Threat Research is an AI research charity that looks into the threat of AI agents! That sounds a bit AI doomsday cult, and they take funding from the AI doomsday cult organisat…
I feel this – we had a junior dev on our project who started using AI for coding, without management approval, BTW (it was a small company and we didn’t yet have a policy specifically for it, alas).
I got the fun task, months later, of going through an entire component that I’m almost certain was ‘vibe coded’ – it “worked” the first time the main APIs were called, but leaked and crashed on subsequent calls. It used double- and even triple-pointers to data structures which, per even a casual reading of the API vendor’s documentation, could all have been declared statically and re-used (this was an embedded system); needless arguments; and mallocs and frees everywhere for no good reason, all in service of that un-needed dynamic storage behind the double/triple pointers. It was a horrible mess.
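For the curious, the anti-pattern looked roughly like this sketch (the struct and function names are invented, not the actual vendor API): a pointer-to-pointer is heap-allocated on every call, the pointed-to object is heap-allocated through it, and every error path has to unwind both.

    /* Hypothetical sketch of the pattern, not the actual code.
     * A fresh double-indirected context is malloc'd per call;
     * miss one free on an error path and you leak on every call. */
    #include <stdlib.h>

    typedef struct { int id; char buf[64]; } vendor_ctx_t;  /* invented name */

    int process_frame(void) {
        vendor_ctx_t **ctx = malloc(sizeof *ctx);   /* needless indirection */
        if (!ctx) return -1;
        *ctx = malloc(sizeof **ctx);                /* fresh heap object per call */
        if (!*ctx) { free(ctx); return -1; }
        /* ... call the vendor API with *ctx ... */
        free(*ctx);                                 /* easy to miss when a call above bails out */
        free(ctx);
        return 0;
    }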
It should have never gotten through code review, but the senior devs were themselves overloaded with work (another, separate problem) …
I took two days and cleaned it all up: much simpler, no memory leaks, and it could actually be, you know, used more than once.
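The fix, in the same hypothetical sketch form: since the vendor docs said the structures could be declared statically and reused, one file-scope context with no heap traffic at all does the job on an embedded target.

    /* Hypothetical cleaned-up version: one statically allocated context,
     * reused across calls. Nothing to malloc, nothing to leak. */
    static vendor_ctx_t ctx;    /* zero-initialized, lives for the program's lifetime */

    int process_frame(void) {
        /* ... call the vendor API with &ctx ... */
        return 0;
    }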
Fucking mess, and LLMs (don’t call it “AI”) just allow those who are lazy and/or inexperienced to skate through short-term tasks, leaving huge technical debt for those who have to clean up after.
If you’re doing job interviews, ensure the interviewee is not connected to LLMs in any way and make them write the code themselves. No exceptions. Consider blocking LLM services from your corp network as well, and ban locally-installed things like Ollama.
It should have never gotten through code review, but the senior devs were themselves overloaded with work
Ngl, as much as I dislike AI, I think this is really the bigger issue. Hiring a junior and then merging their contributions without code review is a disaster waiting to happen, with or without AI.
(old song, to the tune of My Favourite Things)
🎶 “Pointers to pointers to pointers to strings,
this code does some rather unusual things…!” 🎶