After two years’ wait, and Sam Altman talking up how scary smart it was, OpenAI finally released GPT-5! And it was … meh. It could program a bit better? It didn’t do anything else much better…
Yep. Terrible at many things, very good at others. At the end of the day, very useful technology.
Just as my grandmother always used to say, “You can’t use a knife to beat an egg but you can fuck a player up with it.” RIP Gammy. She was the sweetest.
There’s value in the underlying theory of machine learning. ML models are exceptionally good at sifting enormous amounts of data so long as you’re cross-checking outputs - which sounds like doing the work anyway, except now you know what to look for and check. Astronomers have been using this shit for years to sift radio telescope data, and particle physicists use it to sift collider results.
Is there danger? Absolutely. But saying it’s worthless is like saying the theory of relativity is worthless because it created nukes. Throwing the underlying theory out because people are using it to do shitty things is going to rapidly shrink your world, because a LOT of science has been used to do a LOT of harm, yet we still use it.
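For what it’s worth, the sift-then-verify pattern described above fits in a few lines. This is a toy stand-in (made-up readings and a simple robust-outlier cut in place of a real trained model), not any actual astronomy or collider pipeline:

```python
# Hedged sketch of "sift, then cross-check": an automated filter flags a tiny
# fraction of "interesting" events out of a large stream, so the expensive
# human verification step only has to look at the flagged candidates.
import random
import statistics

random.seed(0)
# 10,000 background readings plus a handful of injected strong signals.
readings = [random.gauss(0.0, 1.0) for _ in range(10_000)]
readings += [random.gauss(8.0, 0.5) for _ in range(5)]

# Cheap automated sift: flag anything far outside the bulk of the data,
# using the median absolute deviation as a robust spread estimate.
med = statistics.median(readings)
mad = statistics.median(abs(x - med) for x in readings)
candidates = [x for x in readings if abs(x - med) > 6 * mad]

# The filter only shrinks the haystack; checking each candidate against
# known criteria is still on the human.
print(f"{len(candidates)} candidates flagged from {len(readings)} readings")
```

The point of the design is exactly the one above: the model doesn’t replace the checking, it just tells you which tiny subset is worth checking.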
Most frustrating thing about this AI Psychosis crap: they think machines trained on inherently biased data can communicate fully without bias, and forget the rule parameters of the systems (and that they’re products designed in the Skinner mindset of monopolizing the user’s time and creating “positive experiences”).
The machines aren’t designed to scrub bias, they’re designed to appear to while aligning with their corporate developer’s goals. (Which is also fucked from a consent-autonomy angle if they ever do design AGI, which is essentially what Detroit Become Human was talking about).
just some uwu itsy bitsy critihype for my favorite worthless fashtech ❤️
how about you and your friend and your grandma all go fuck yourselves ❤️
Tech can’t be fascist, only people can. You seem like you’re losing it… Get some help, mate.
tech absolutely can have political inclination, crypto is libertarian, surveillance is fash, and whatever ai-bros are cooking is somewhere in between
Guns don’t kill people, people kill people.
holy shit, across 3 comments you did a full distributed darvo
stellar example of shitheadery so early on a sunday!
I have plenty of help! one of the people who actually post here is gonna come help me tell you to fuck yourself! isn’t that fun?
you saw this:
and a bunch of waffle about unrelated ML advancements in robotics, and it confused you into giving me a shit lecture on tech I already know about? why?
And if we were talking about “the underlying theory of machine learning”, you might have a point.