I want to apologize for changing the description without telling people first. After reading arguments about how overhyped AI has been, I’m not that frightened by it. It’s awful that it hallucinates, and that it just spews garbage onto YouTube and Facebook, but it won’t completely upend society. I’ll keep posting articles on AI hype, because they’re quite funny, and they give me a sense of ease knowing that, even though blatant lies are easy to tell, it’s way harder to fake actual evidence.
I also want to factor in people who think that there’s nothing anyone can do. I’ve come to realize that there might not be a way to attack OpenAI, MidJourney, or Stable Diffusion. These people, whom I will call Doomers after an AIHWOS article, are perfectly welcome here. You can certainly come along and read the AI Hype Wall Of Shame, or the diminishing returns of Deep Learning. Maybe one can even become a Mod!
Boosters, or people who heavily use AI and see it as a source of good, ARE NOT ALLOWED HERE! I’ve seen Boosters dox, threaten, and harass artists over on Reddit and Twitter, and they constantly champion artists losing their jobs. They go against the very purpose of this community. If I hear a comment on here saying that AI is “making things good” or cheering on putting anyone out of a job, and the commenter does not retract their statement, said commenter will be permanently banned. FA&FO.
Because it’s just that: snake oil. Fuck AI.
I mean, I think there’s hope for it. It’s the natural evolution of tech. But it’s not what we’re being sold; it’s not a case of “yep, welp, here’s AI and it’ll literally do everything now.” If you understand the computational and mathematical limitations, that’s simply not possible. A result from the age of Turing still stands true: costs that grow like 2^n − 1 mean you are going to need a fuck load of computing power for anything meaningful. That doesn’t really exist. Yet.
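If I’m reading the 2^n − 1 figure right, it’s the count of nonempty subsets of n items, the kind of search space a brute-force approach has to walk through. A quick sketch of why that blows up (function name and chosen values of n are my own illustration):

```python
# Illustration only: costs that grow like 2^n - 1 (e.g. the number of
# nonempty subsets of n items a brute-force search must examine) outrun
# any realistic hardware budget almost immediately.

def brute_force_states(n: int) -> int:
    """Number of nonempty subsets of n items: 2^n - 1."""
    return 2**n - 1

for n in (10, 30, 60, 90):
    print(n, brute_force_states(n))
# At n = 10 that's 1023 states; by n = 90 it already dwarfs any
# plausible count of operations all computers on Earth could perform.
```

Even generous hardware growth only buys a few more values of n, which is the point: exponential problems don’t yield to raw compute.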
I think we’re looking at this from completely different angles if you’re “hopeful” that AI will improve.
Also, you’re looking at AI completely wrong if you’re analyzing its performance on traditional CS problems in terms of time complexity. Nobody credible is hoping that AI is going to be solving NP problems just by feeding the problem into its context window like a quarter into a vending machine.
I’m not looking at it wrong; I think we need to look at it as two different things. I’m not a technology skeptic. I believe in technology, and I believe it largely makes us, on the whole, better off. That’s not what “AI” is, though. AI under its current definition is snake oil, and it’s a bubble: something salespeople circle-jerk over while they sell an illusion to stupid, unimaginative executives. What you seem to fear is AGI, which is an entirely different thing.
It’s true, I fear AGI, not the current state of AI if it were to remain frozen and not improve at all. I am also not terribly afraid of climate change if the climate were to remain fixed at this point. Sure, we have lots of forest fires, and people are dying of heat, but it could get much worse.
I think maybe the root of our disagreement is that we’re appraising the current state of AI differently. I’m looking at AI now vs AI five years ago and seeing an orders-of-magnitude increase in how powerful it is – still not as good as a human, but no longer negligible – but you’re looking at both of these and rounding them to zero, calling it snake oil. Perhaps, in the Gartner hype cycle, you’re in the trough of disillusionment?
I don’t want to be a shill for big AI here, but I reject the idea that AI in its current state is useless (though I would agree it’s overhyped and probably detrimental to society overall). It’s capable of doing a lot of trivial labour that previously wasn’t automatable, including coding tasks and graphics. It can’t do them with great reliability, or anywhere near as well as a human expert, and it’s much worse in some areas than others (AI-written news articles are much worse than useless, for instance). Still, it’s turning out to be a productivity benefit (read: reduction in jobs) for those who know how to use it to its strengths. I think the “snake oil” aspect comes in when lay-people use it expecting it to be reliable or as good as a human, which is basically how big tech is pitching it.