- cross-posted to:
- [email protected]
science shows as true what you thought was only 99% true
https://www.youtube.com/watch?v=uVf7VUX_iUk&list=UU9rJrMVgcXTfa8xuMnbhAEA - video
https://pivottoai.libsyn.com/20251015-ai-is-not-popular-and-ai-users-are-unpleasant-asshats - podcast
time: 5 min 57 sec
It’s almost like telling asshats that they’re right all the time is, somehow, not good for their social-emotional development. Who would’a thunk it!
Until this article I wasn’t sure how big the AI bubble is…
But seeing that more than 99% of the public don’t really use AI, and then comparing that to tech stock valuations, makes me sure the bubble is just like the one in the 2000s.
Of course some stuff will stick, just like in the 2000s, but with these high valuations and 99% of people not using it, it will take a long time to reach break-even.
Removed by mod
bold words from a promptfondler.
In other news water is wet.
And? Wait… is this not universally known?
Worse when your wife calls it her best friend.
That’s rough buddy
10 years ago I would have been the one who wanted AI. I love sci-fi and futuristic things. But seeing how it’s abused and the devastation it’s doing to the planet? No.
And it’s just not reliable; I only resort to it when I’m desperate for help with my kiddo’s math.
I imagine that this might be the kind of thing someone would talk about in couples therapy.
can you provide more context?
My wife uses ChatGPT constantly. She said it as a joke, but she knows how I feel about AI. I detest it, refuse to use it, and am teaching my 10-year-old to do things without it.
ah. :-(
The only thing that drives me to AI is the extreme uselessness of modern search engines. This is not an endorsement of hallucination engines so much as a condemnation of the late-stage enshittification of search engines and the internet in general. I miss the days when I could google something and actually find what I was looking for.
It’s not perfect, but https://udm14.com/ is at least an improvement. On regular Google I found it too tempting to read the AI answer that pops up right away and is so often wrong.
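As far as I can tell, all udm14.com does is bounce you to a plain Google web search with the `udm=14` parameter set, which asks for the “Web” results view and skips the AI overview. Rough sketch of the idea (the helper name is made up for illustration):

```python
# Roughly what udm14.com seems to do: build a plain Google web search URL
# with udm=14, which requests the "Web" results tab without the AI overview.
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": 14})

print(web_only_search_url("enshittification of search"))
# https://www.google.com/search?q=enshittification+of+search&udm=14
```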
… the search engines became crap because of ai
Plus, ai just lies. It’s not a replacement
> … the search engines became crap because of ai
I mean, search has always been built on some kind of LLM. That’s how you convert a query into a list of page-results.
We’ve just started trying to wrap the outputs into a natural language response pattern in order to make them feel more definitive and more engaging. The “AI” part of search is mostly window-dressing.
> Plus, ai just lies.
It has inaccurate heuristics and often tries to back-fill what it can’t find with an approximation in order to maintain user engagement.
Idk if I’d even call it lying, so much as bullshitting.
Shitty search has been a problem longer than ai
AI has been around since the ’50s; Internet search has only been around since the ’80s.
That is irrelevant. We are specifically talking about LLMs ruining Google search, which has only happened in the last couple of years.
If your definition of ai is only LLMs, sure. But it’s been algorithmic tweaking and SEO wars for a while now.
Well yeah, llm garbage is the problem being discussed. Do you consider SEO to be ai? I don’t see why you would.
No, but the algorithms that search engines use to combat it are.
ah minor correction, you’re on Lemmy not on Mastodon, the venue for this sort of tedious and point-avoiding pedantry
No, they aren’t.
if you post a thread about intolerable dickheads, the most intolerable dickheads on Lemmy will post some shit like “intolerable dickhead checking in, how fucking dare you”
it’s like catnip for the Reddit-brained, and by catnip I mean meth
I haven’t met that many dickheads here though, maybe one or two, but not as many as on Reddit.
Unless I’m the asshole.
they tend to disappear from here rather quickly
Sometimes they file reports against regulars, accusing them of “ableism” for being anti-slop-machine. That’s also entertaining.
There’s a term for being anti-ai slop? Hahahaha that is hilarious!
Edit: that term means discrimination against those with disabilities, and I agree that’s a dick move. Are they claiming that by hating AI slop, we’re against those with disabilities? I feel like I missed something. They can’t mean that; that’s idiotic. Like… Idiocracy level.
inconceivable!
We hope they enjoy their Fediverse™ experience here at Awful
Removed by mod
Yeah, that’s great, thanks.
so predictable.
deleted by creator
Paraphrasing, it was: “I have exactly the traits mentioned here, and I think I’m perfectly fine and rational.”
what an unpleasant asshat
> science shows as true what you thought was only 99% true
Still a way higher accuracy than LLM output, so…
Interesting. Why would more manipulative and more self-interested people use AI more than others? Because they’re more likely to take shortcuts while doing stuff? Or is there some other direct benefit for them?
Removed by mod
My completely PIDOOMA take is that if you’re self-interested and manipulative, you’re already treating most if not all people as lesser, less savvy, less smart than you. So the fact that you can half-ass shit with a bot and declare yourself an expert in everything, without needing such things as “collaboration with other people”, ew, is like a shot of cocaine into your eyeball.
LLMs’ tone is also very bootlicking, so if you’re already narcissistic and you get a tool that tells you yes, you are just the smartest boi, well… To quote a classic, it must be like being repeatedly kicked in the head by a horse.
There’s an increasing number of social media responses that come across as if they think they’re giving clarifying orders to a chatbot.
I imagine there are a few reasons. An LLM is a narcissist’s dream: it will remain focused on you and tell you what you want to hear (and is always willing to be corrected).
In addition, LLMs are easy to manipulate, and they mimic a person just enough to give you a sense of power or authority. So if you’re the type of person who gets something out of that, there’s likely a real draw.
Those are just guesses, though. I don’t use LLMs myself, so I don’t really know.
Thanks, that sounds reasonable. Especially the focus/attention.
Maybe it’s the same as with other games, computer games especially… Some people really do get something out of fantasy achievements, and when they win they feel like the main character… in a weird way…
I’m just spitballing here, but I suspect it’s for the same reason people with “dark triad” traits (narcissism, Machiavellianism, and psychopathy) are more successful in business and politics than the average person.
Dark triad types give quick, confident, and persuasive answers, and aggressively challenge anyone who disagrees with them. But they don’t actually care if the answers are true as long as they can win the debate or argument they’re having. This lets them be totally confident and persuasive in any situation - whether they know the answer or not - and so demonstrate more “leadership skills” than people who are less willing to bullshit.
Same with policies - a dark triad type is going to confidently and aggressively support policies that make him look good or benefit him personally in other ways. He doesn’t actually care whether they are good policies or bad policies, whether they’ll be good for the organization or the people or not - the dark triad type will lie, cheat, or steal to make sure his policies look successful, get himself promoted upwards, and blame his successor for the long term failure of the policy.
(If you were a dark triad type, you might, for example, enact policies that crash the economy and drive inflation through the roof while making yourself and your cronies incredibly rich, then cancel all the reports that track inflation, hunger, unemployment, etc, to conceal the impact of your policies, and go on a social media blitz claiming the economy is better than ever and any problems are someone else’s fault. Just as a hypothetical example.)
I’m kind of not surprised people who care more about persuasiveness than honesty, and more about results than processes, would find AI tools appealing.
I would think it’s because AI is basically just a yes-man they can get instant gratification from. It’s easier to manipulate than a real human, and when it’s wrong you can berate it without years of pushback.
For example: https://youtu.be/qhwbUL2mJMs
As a certified bullshitter myself, I often find myself really annoyed with LLMs because their bullshitting is just so obvious.
I love technology and seeing what I can and can’t get to work. I have a self-hosted image generator and LLM (Stable Diffusion and Ollama). It was fun for a little while, generating images of whatever popped into mind and using the LLM for code completion, grammar checks, and rewording things. I even started working on something like that AI streamer, Neuro. It’s all garbage, though. The whole stack has been relegated to sending welcome messages on a Discord server. It’s a neat toy, but anything past that is just adding a whole layer of inaccuracy to whatever you’re using it for, and way too many people don’t realize that.
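For what it’s worth, the welcome-message part really is only a few lines. A minimal sketch, assuming a discord.py bot and Ollama’s default local endpoint; the model name, prompt, and token are placeholders, not the exact setup described above:

```python
# Minimal sketch: a Discord bot that asks a locally hosted Ollama model
# to write a greeting whenever someone joins the server.
import discord
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint
MODEL = "llama3"  # placeholder; use whatever model you've pulled locally

intents = discord.Intents.default()
intents.members = True  # needed to receive member-join events
client = discord.Client(intents=intents)

@client.event
async def on_member_join(member: discord.Member):
    prompt = f"Write a short, friendly welcome message for a new member named {member.display_name}."
    # Blocking call; fine for a toy, use an async HTTP client for anything real.
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=60,
    )
    text = resp.json().get("response", f"Welcome, {member.display_name}!")
    channel = member.guild.system_channel  # the server's default system channel
    if channel is not None:
        await channel.send(text)

client.run("YOUR_DISCORD_BOT_TOKEN")  # placeholder token
```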
Removed by mod
Sorry, it’s science now.
Removed by mod
like moths to a flame
So long as you keep your bullshit detector well-maintained, check the sources, and actually use an AI that cites its sources, I see nothing wrong with them. The tech is still in its infancy; it’ll improve with time.
fuck off asshat
dammit beat me by 7 minutes
> and actually use an AI that cites its sources
make the hallucinotron useful with this one weird trick
Oh look, it’s literally “we’re still early”, I missed the classics
I try to keep my bullshit detector well-maintained; I see some AI crap being forced and think: “Well, that’s some bullshit!”
Removed by mod
Having reviewed your posts, new poster bad.