- cross-posted to:
- [email protected]
- [email protected]
Idk, maybe I'm a big idiot, but I have issues with the points made in this article. I will concede that individual ChatGPT use is not that big of a deal, environmentally speaking. If I can trust the numbers in this article, then it has successfully convinced me that I don't need to worry about the energy cost, emissions, or water use of individual prompts.
The case I take issue with is the author's claim that LLMs are inherently useful. I don't care if ChatGPT is kind of just a better Google; I still hate everything else about it. ChatGPT use is clearly triggering psychoses in some people, and even for people like this author who can use it responsibly, I think it's just intellectually lazy. It encourages the user to abandon critical thinking and let the robot do it for them. What's more, as a search tool, it's destroying the internet. If no one ever visits websites to read the info, why would people keep making websites with reliable information? Why should I even read this article? Why don't I just have ChatGPT summarize it for me, and never give this author any traffic or money? Then of course there is the plagiarism problem…
So idk. Maybe now I’ll stop harping on the environmental point. But I’m still going to avoid LLMs like the plague, because at their core I think they rob us of some of the finer points of a life well-lived. I’d rather spend my time poring through articles to understand the why and how of a question, rather than have a robot just spit the “what” out at me.
In an ideal world, we’d work together as a species to distill our collective knowledge into a reliable source of truth, much like the promise of Wikipedia. We’d use this new technology to make that accessible to everyone, even if they lack context to understand some of the deeper subjects. It would be a rising tide that lifted everyone. It doesn’t have to be controlled by tech bros, and IMO the lack of a popular utopian vision coming from leftist ideals has left a hole eagerly taken over by people trying to make a buck. Projects like AI Horde are much better than saying “AI bad, end of story” IMO.
It’s hard for me to really gauge the laziness aspect. Yeah, it encourages laziness in some ways, but it doesn’t have to be on the things that matter. If it takes care of grunt work like “generate a react app skeleton that does X”, that could be viewed as laziness, but it could also be viewed as the invention of the tractor eliminating a lot of unnecessary farm work or something. In other words, if you just want to “do the thing”, and it helps you get to that goal faster, is that lazy?
Regarding your edit: it makes sense to take it with a grain of salt, but broken clocks and all that. The numbers are more important, and they seem reasonable. Looking at this article (written before the current crop of LLMs took off, and also just a random link I found, so take it with a grain of salt too), we see a huge increase in data center workloads before any current AI workloads:

I think the article’s general point is likely valid, and that it’s a valid criticism to say “If you criticize AI for energy usage but not video streaming then you’re unfairly targeting AI”.
I think in terms of energy usage on this issue, as per usual, it's not so much about individual consumption habits as it is about corporate spending and ecologically destructive practices.
- Collectively, big players in AI are ramping up the construction of AI datacenter megaprojects that will consume massive amounts of electricity and water. If managed well, these would be sited in areas of the world with electricity and water to spare, or added to grids expanded in responsible ways. In practice, though, utility companies in the US often seem all too happy to put the cost burden of expanding infrastructure onto individual users instead of the massive corporations that want to build these things. That's bad when most people are barely scraping by.
- Training an AI model uses far more resources than querying it once it's done. Commercial developers such as OpenAI spend a ton of resources on models that will never be used, and these practices are fairly opaque. Additionally, it's very hard to amortize the full cost of developing a model across individual queries because a lot of usage data isn't public.
- The point of AI is to replace workers. So many people are already struggling from broader economic issues, and now the new tech is putting them out of a job, in a deregulated economic system where it's increasingly work or die. This technology is not unique in replacing human jobs, but it threatens ever more people, and it centralizes resources and control into fewer and fewer hands. I don't think that's particularly good or healthy for society.
- The AI industry is now trying to do what Uber did over the last decade: beat out the competition, monopolize, and enshittify to squeeze its users for better returns every quarter until the end of time. Using ChatGPT may be fine now, but if it becomes the only game in town, I guarantee it will start costing a lot more and get a lot worse too. OpenAI in particular has never turned a profit and doesn't expect to until 2029.
Most importantly, using these tools signals to the companies making them that there is demand to tap into. Your use is not significant on its own, but by using it you make yourself part of their market. That gives them more justification and a green light to keep going, and to waste ever more resources on ever more marginal improvements.
I don’t think the issue is primarily where things stand now in terms of AI resource consumption, but that the industry shows little to no sign of slowing down, let alone stopping, without popping the catastrophically large investment bubble it has developed into. No matter how this ends, it likely doesn’t end well.
I think the article’s point is still valid in regards to “AI datacenter megaprojects”. Is this new and unique for AI, or simply a continuation of the huge build-out for other demands, like video streaming? Is it “unfair” to target AI for that when video streaming apparently dwarfs AI in terms of energy usage?
I think AI replacing workers is great (in an ideal world). I’m coming at it from the Fully Automated Luxury Gay Space Communism angle, and saying “AI bad because it’s replacing workers” seems wrong to me, vs “Privatization of AI and economic inequality are bad”. The genie isn’t going back into the bottle, so let’s take on the fight that can be won.
this is such a dumb take.
“if you are talking to a chatbot you aren’t doing other things like driving!”
Are you talking about this bit?
> How concerned should you be about spending 0.8 Wh? 0.8 Wh is enough to: […] Drive a sedan at a consistent speed for 4 feet
Or this bit?
> If your friend were about to solo-drive the largest cruise ship in history 60 miles, but decided to walk 1 mile to the dock instead of driving because they were “concerned about the climate impact of driving”, how seriously would you take them?
I don’t think it’s saying “if you are talking to a chatbot you aren’t doing other things like driving!”; I think the point is that you probably do lots of other things every day that dwarf the energy you could spend on LLMs, and you don’t avoid doing those because of their energy usage. For example, nobody* is complaining about the energy usage of Netflix or YouTube, but they dwarf ChatGPT:

*I’m sure it’s not actually nobody, but I’d bet it rounds to 0, and either way is far less than the number of people complaining about AI energy usage.
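For what it’s worth, the article’s sedan figure holds up to a back-of-envelope check. Here’s a quick sanity check in Python, assuming a ~30 mpg gasoline sedan and ~33.7 kWh of chemical energy per gallon of gasoline (my assumed round numbers, not figures from the article):

```python
# Back-of-envelope check: does 0.8 Wh really drive a sedan ~4 feet?
# Assumptions (mine, not the article's): 30 mpg gasoline sedan,
# ~33.7 kWh of chemical energy per gallon of gasoline.
WH_PER_GALLON = 33.7 * 1000  # Wh of energy in a gallon of gasoline
MPG = 30                     # assumed sedan fuel economy
FEET_PER_MILE = 5280

wh_per_mile = WH_PER_GALLON / MPG                    # ~1123 Wh per mile
feet_per_prompt = 0.8 / wh_per_mile * FEET_PER_MILE  # distance from 0.8 Wh

print(f"{feet_per_prompt:.1f} feet")  # prints: 3.8 feet
```

So ~3.8 feet under these assumptions, which is close enough to the quoted 4 feet that the article’s number looks plausible.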
Watching the anti-AI crowd get played and manipulated over the past year or so has been really sad. It reminds me a lot of the early days of the anti-GMO movement, when legitimate criticisms and efforts at regulation were supplanted by fear mongering and the whole thing transformed into more of a cult, a kind of subcultural identity. You can see the same pattern playing out: they are increasingly spouting disproven nonsense, and actual substantive criticism is left behind for whatever sounds scariest and/or makes them feel righteous. They are on the path to discrediting themselves and losing the fight for AI regulation just as badly as the anti-GMO movement lost.
Yeah, it’s similar to nuclear energy as well. For both, I’m not so much “pro-X” as I am “anti-anti-X”. Renewable energy seems to have finally outpaced other options, but we also wasted decades fighting people who meant well but helped destroy the environment. Likewise, simply rejecting AI completely just leaves more opportunity for unethical people trying to make a buck.
Thought this was an interesting look at an oft-repeated claim, that AI is bad for the environment. I think it’s assumed to be true, especially on Lemmy, but that might not actually be the case.