Model Evaluation and Threat Research is an AI research charity that looks into the threat of AI agents! That sounds a bit AI doomsday cult, and they take funding from the AI doomsday cult organisat…
Of course that’s what they’re doing. That’s the whole point. Generate a bunch of plausible-looking BS and move on.
Writing one UT (actually writing, not pressing tab) gives you ideas for other tests.
And unit tests are not some boring chore. When doing TDD, they help inform and guide the design. If the LLM is doing that thinking for you, too, you’re just flying blind. “Yeah, that looks about right.”
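To make the point concrete, here's a minimal sketch (hypothetical `parse_price` function, plain pytest-style tests): writing the first test by hand is exactly what surfaces the edge cases the next tests should cover.

```python
def parse_price(text: str) -> float:
    """Hypothetical function under test: parse a price string like '$3.50'."""
    if not text.startswith("$"):
        raise ValueError("price must start with '$'")
    value = float(text[1:])
    if value < 0:
        raise ValueError("price cannot be negative")
    return value

# The first, obvious test you write by hand...
def test_simple_price():
    assert parse_price("$3.50") == 3.50

# ...immediately suggests the edge cases a tab-completed suite tends to skip:
def test_missing_currency_symbol():
    try:
        parse_price("3.50")
        assert False, "expected ValueError"
    except ValueError:
        pass

def test_negative_price():
    try:
        parse_price("$-1.00")
        assert False, "expected ValueError"
    except ValueError:
        pass
```

The act of writing `test_simple_price` forces you to decide what the contract even is (does a missing `$` fail? can a price be negative?), and that design feedback is the part you lose if the tests are generated for you.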
Can’t wait for this shit to show up in medical devices.
I’ve caught “professionals” pasting code from forums and StackOverflow. Of course people are just blindly using LLMs the same way. Incredibly naive to think people aren’t already and won’t do so more in the future.
That is like claiming people are directly copying from university books and implementing whatever they get without checking.
Of course there are nitwits like that, but they are few and far between.
Anyone seriously using LLMs double-checks their work.
Damn there are so many AI critics who have clearly not seriously tried it. It’s like the smartphone naysayers of 2007 but much much worse.
Ad hominem.