I want to apologize for changing the description without telling people first. After reading arguments about how AI has been so overhyped, I’m not that frightened by it. It’s awful that it hallucinates, and that it just spews garbage onto YouTube and Facebook, but it won’t completely upend society. I’ll keep posting articles about AI hype, because they’re quite funny, and it gives me a sense of ease knowing that, even though blatant lies are easy to tell, it’s way harder to fake actual evidence.
I also want to factor in people who think that there’s nothing anyone can do. I’ve come to realize that there might not be a way to attack OpenAI, MidJourney, or Stable Diffusion. These people, whom I will call Doomers after an AIHWOS article, are perfectly welcome here. You can certainly come along and read the AI Hype Wall Of Shame, or the diminishing returns of Deep Learning. Maybe one can even become a Mod!
Boosters, or people who heavily use AI and see it as a source of good, ARE NOT ALLOWED HERE! I’ve seen Boosters dox, threaten, and harass artists over on Reddit and Twitter, and they constantly champion artists losing their jobs. They go against the very purpose of this community. If I hear a comment on here saying that AI is “making things good” or cheering on putting anyone out of a job, and the commenter does not retract their statement, said commenter will be permanently banned. FA&FO.
I don’t understand the glee in taking jobs from people by stealing their data. Those people deserve better.
I’m glad to be here. Fuck AI
Fuck AI. I’m still learning lemmy.
Original Desc: A place for all those who loathe machine-learning and its endorsement – and to document the knaves behind this dismal industry. Proud supporter of human artists. And proud booer of SXSW 2024.
I do want to understand. I am no “booster” by any means, and I think that on the whole AI is being used in some abhorrent ways, but I don’t think the technology itself is the problem. The problem is its use to steal art, and its use by corporations to cut people out of the picture. I do think there are some beneficial applications of AI, and that it is not unilaterally negative or evil; it’s just being used in a lot of shitty, evil ways right now. Does that mean I am unwelcome here? Genuinely asking.
I’ll say that while it is a problem that LLMs are plagiarism machines at industrial scale, it’s also a problem that making them plagiarize 3-5% better creates the same carbon emissions as NYC produces in a month.
Eh, you’re welcome here.
LOL, they think twitter artists make art for money. Dumbass! cries
Fuck AI, but imo the community name could stand to be a little more productive. We’ll just circlejerk hard here lol, which isn’t bad in itself but hella boooring.
Gimme a good debate with the AI bros that I can watch from the sidelines while people smarter than me make the arguments. Just warn/ban ’em if they’re disrespectful.
But maybe that’s the purpose of the whole rest of the tech comms…
The types of places they’re on are the mainstream sites (Reddit & Twitter), and I don’t want to go there unless I have to. I honestly think a big part of the push for AI isn’t popularity at all; nobody really likes it that much. It’s purely oligarchs trying to make their product look good to investors, who see it as an easy way to replace human workers. I’m unsure where the cryptocurrency-using techbros come from, tho. Maybe they’re bot accounts held by very few people.
Lots of people love AI lol you artbros are delusional.
Even here on Lemmy there’s an absolute crapton of FOSS AI fans and communities like [email protected] not to mention this being the home of the AI Horde.
Reject ludditism, embrace the solarpunk high tech future.
What is solarpunk about AI? 😂😂
Ah yes, the solarpunk high tech happy-for-everyone future in which artists no longer have jobs and are forced to do something they hate doing because otherwise they won’t eat.
Tell me that you don’t understand who the Luddites were nor what solarpunk means without saying it straight out.
Reject ludditism, embrace the solarpunk high tech future.
Ooh! Well done, Sparky!
Shut the fuck up. An LLM will only return results its masters have deemed sufficient to go into public knowledge. It’s crazy how easily you’ll be controlled in the future, if you aren’t already, fucking punk ass bitch.
I have a question about if I’m allowed to be here. Here’s my stance:
- As a CS person, I find the algorithms that run AI intellectually interesting.
- The way AI is being used in our society, to put people out of work and replace human-made work with slop, is very, very bad. (I support the SAG-AFTRA strikers.)
- I think it’s especially bad that artists are being put out of work, and that big companies think copyright should protect Mickey Mouse and not starving artists. This is where most of my “fuck AI” drive comes from.
- I think AI could in the future be made to not be slop, with some future new algorithms, but we’re not there yet. It’s not theoretically impossible or anything though. Doomer opinion: if AI actually becomes as powerful as the boosters say it will be, then it could be an x-risk for humanity. I’m not saying that to make AI sound really impressive, I just think we need to be cautious about it.
- Currently, LLMs are sometimes useful, but only in the right context and when used properly. For instance, AI is pretty good at NLP. It’s also useful for explaining opaque C++ error messages. It’s not useful for ghiblification or summarizing search results or whatever it is Altman is trying to peddle. It might be able to help with protein folding and other pharmaceutical research.
- The biggest reason that AI is bad is because capitalism is bad. Without capitalism, we could focus on the actually good uses of AI.
It’s just fantasy in regards to the replacing everything. The same thing happened back in the 60s and the 70s with the fantasy that computers were going to make everything in life easy peasy. Flying cars and all of that. No doubt computers have made our lives easier, and have progressed us as a society (debatable when it comes to social media). But it isn’t the end-all, be-all that it was promised to be. I still don’t have a hoverboard or a personal flying car, which I DO have to admit I’m super bummed about.
Just look at the paperless society era. We still print stuff out, 40-50 years later. Maybe a lot less, but it didn’t put us all out of jobs, it just made us work smarter and maybe made things a little easier to reference.
That’s all this AI stuff is: it’s just the next evolution of reference. If anyone thinks it’s ready to put me out of a job, whelp, I double dog dare them. One company I’m associated with is at this stage, drunk on the AI. It’s all just super-disconnected execs fantasizing in between drunken lunches and boat purchases. Their day is coming, don’t worry.
I don’t think it’s ready to put anyone out of a job. But if you’re not worried about AI, then why bother being in the fuck-AI community? Like, why agitate about AI at all if you think it’s all snake oil?
Because it’s just that, snake oil. Fuck AI
I mean, I think there’s hope for it. It’s the natural evolution of tech. But it’s not what we are being sold; it’s not just like, yep, whelp, here’s AI and it’ll literally do everything now. If you understand computational and mathematical limitations, that’s literally not possible. An old truth from the age of Turing still stands: exponential costs like 2^n − 1 mean you are going to need a fuck load of computing power for anything meaningful. That doesn’t really exist. Yet.
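To make the 2^n − 1 point above concrete, here’s a minimal Python sketch (mine, not the commenter’s) counting how many candidates a naive exhaustive search must examine as n grows: every nonempty subset of n items, i.e. 2^n − 1 of them.

```python
def brute_force_checks(n: int) -> int:
    """Number of nonempty subsets of n items a naive exhaustive search examines."""
    return 2**n - 1

# The count doubles with every extra item, so the problem becomes
# intractable long before hardware can catch up.
for n in (10, 20, 40, 60):
    print(n, brute_force_checks(n))
```

At n = 60 the count already exceeds 10^18 checks, which is the wall the comment is gesturing at.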
I think we’re looking at this from completely different angles if you’re hopeful that AI will improve.
Also, you’re looking at AI completely wrong if you’re analyzing its performance on traditional CS problems in terms of time complexity. Nobody credible is hoping that AI is going to be solving NP problems just by feeding the problem into its context window like a quarter into a vending machine.
I’m not looking at it wrong; I think we need to look at it as two different things. I’m not a technology skeptic. I believe in technology, and I believe it largely makes us, as a whole, better off. That’s not what “AI” is, though. AI under its current definition is snake oil, and it’s a bubble; it’s something salespeople circlejerk over while they sell an illusion to stupid, unimaginative executives. What you seem to fear is AGI, which is an entirely different thing.
It’s true, I fear AGI, not the current state of AI if it were to remain frozen and not improve at all. I am also not terribly afraid of climate change if the climate were to remain fixed at this point. Sure, we have lots of forest fires, and people are dying of heat, but it could get much worse.
I think maybe the root of our disagreement is that we’re appraising the current state of AI differently. I’m looking at AI now vs AI five years ago and seeing an orders-of-magnitude increase in how powerful it is – still not as good as a human, but no longer negligible – but you’re looking at both of these and rounding them to zero, calling it snake oil. Perhaps, in the Gartner hype cycle, you’re in the trough of disillusionment?
I don’t want to be a shill for big AI here, but I reject the idea that AI in its current state is useless (though I would agree it’s overhyped and probably detrimental to society overall). It’s capable of doing a lot of trivial labour that previously was not automatable, including coding tasks and graphics. It can’t do that work with great reliability, or anywhere near as well as a human expert, and it’s much worse in some areas than others (AI-written news articles are much worse than useless, for instance), but it’s still turning out to be a productivity benefit (read: reduction in jobs) for those who know how to use it to its strengths. I think the “snake oil” aspect is when lay-people use it expecting it to be reliable or as good as a human – which is basically how big tech is pitching it.
I think generative AI for pure entertainment, like “AI girlfriends”, is BS. But for accessibility it might be helpful: transcribing images into spoken text for blind people, or transcribing audio into text that deaf people can read on a display. And this tech uses neural nets as well. But the copyrights of the artists and creators whose work all such systems are trained on should be valued, especially when the output is sold for a profit. Training data should be made public, or companies training AI on hidden data should be forced to open up about their shady practices. This is where my position on “FUCK AI” stands. Also, blind trust and hype in these systems is not justified, in my humble opinion.