all of that would be well and good if using AI didn’t cost an outsized amount of energy, and if our energy grids were not mostly composed of dirty energy. But it does, and they are, so I can’t help but feel like you are boiling the oceans because you…
I use it sometimes for tasks like “Write a Python snippet that aggregates a Pandas dataframe like so…” so that I can learn. Yeah, I could RTFM, but the docs are scattered around and frequently out of date.
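To give a sense of the kind of snippet I mean, here’s a minimal sketch of the sort of thing I’d expect back (toy data and made-up column names, not my actual prompt):

```python
import pandas as pd

# Toy dataframe; "region" and "sales" are hypothetical columns for illustration
df = pd.DataFrame({
    "region": ["east", "east", "west", "west"],
    "sales": [100, 150, 200, 50],
})

# Named aggregation: total and average sales per region
summary = df.groupby("region").agg(
    total_sales=("sales", "sum"),
    avg_sales=("sales", "mean"),
)
print(summary)
```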
“Make me a menu with these ingredients that I have in my cupboard and keep in mind these dietary constraints” or similar queries.
…don’t like using your human brain sometimes? Like sure, we all pull out the phone calculator for math problems we could solve on paper within 30 seconds, so I’m not saying I can’t relate to that desire to save some brainpower. But the energy cost of that calculator is a drop compared to the glasses of water you are dumping out every time you run a single ChatGPT prompt, so it all just feels really…idk, wasteful? to say the least?
It’s hard to find exact numbers, but a reasonable ballpark is that a single ChatGPT response costs about 15x the energy of a Google search. I think there are already questions that LLMs can answer more efficiently than a round of Google searches would, and better models will increase that share. Do you think it’s more ethical to use AI if doing so results in less energy usage?
If they could create an AI that uses dramatically less energy, even during the training phase, then I think we could start having an actual debate about the merits of AI. But even in that case, there are a lot of unresolved problems. Copyright is the big one: AI is essentially a copyright launderer, eating up a bunch of data or media and mixing it together just enough to claim you didn’t rip it off. Its outputs are derivative by nature. And episodes like Grok show how these LLMs are vulnerable to the political whims of their creators.
I am also skeptical about its use cases. Maybe this is a bit Luddite of me, but I am concerned about the way people are using it to automate all of the interesting challenges out of their lives: cheating on college essays, vibe coding, meal planning, writing emotional personal letters, etc. My general sense is that some of these challenges are actually good for our brains, partly because we define our identity by the ways we choose to tackle them. My fear is that automating them all away will produce a generation that can’t do anything without the help of a $50-a-month corpo chatbot they’ve come to depend on for intellectual tasks and emotional processing.
Your mention of a corpo chatbot brings up something else that I’ve thought about. I think leftists are abdicating their social responsibility when they just throw up their hands and say “ai bad” (not aiming at you directly, just a general trend I’ve noticed). You have capitalists greedily using it to maximize profit, which is no surprise. But where are the people saying “Here’s how we can do it ethically and minimize harms”? If there’s no opposing force and the only options are “unethically created AI or nothing,” then the answer is inevitably going to be “unethically created AI.” Open-weight/self-hostable models are good and all, but where are the people pushing for a collective effort to create an LLM that represents the best humanity has to offer, or some similarly grand vision?