For me, this is the most depressing part.
So many of the technically competent individuals I know are just gleefully throwing their competency and credibility into this ‘AI’ grift chipper with utter abandon.
I can’t see a viable path through it, and whenever they’ve articulated what they see on the other side, it is beyond repugnant and I truly don’t see any benefit in existing in that world if it ever manifests.
The “Easy Way” just became available, and humans are inherently lazy, so of course they’ll want to go all in and give it a try. Why not make the robots do all the hard work, right?
Eventually they’ll figure out that human nuance makes all the difference, but the robotic way is more profitable, so MEDIOCRITY will win the day.
I just don’t get how so many people swear by it. Every time I lower my expectations for what it might be useful at, it proceeds to fail at exactly that as soon as I have a real use case I think one of the LLMs could tackle. Every step of the way. I keep being told the LLMs are amazing, and that I only had a bad experience because I hadn’t used the very specific model and version they love, yet every time I try to verify that feedback (my workplace is so all-in they pay for access to every popular model and tool), it does roughly the same stuff, ever so slightly shuffling what it gets right and wrong.
I feel gaslit, because it keeps being uselessly unreliable for every task I could conceivably find it useful for.
I’ve had similar experiences. Try to do something semi-difficult and it fails, sometimes in an entertainingly shit way at least. Try something simple where I already know the answer? Good chance there’s at least one fundamental issue with the output.
So what are people who use this tech actually getting out of it? Do they just make it regurgitate things from StackOverflow? Do they have a larger tolerance for cleaning up trash? Or do they just not check the output?
We’re cooked. Gotta fight back.