If they could create an AI that uses dramatically less energy, even during the training phase, then I think we could start having an actual debate about the merits of AI. But even then, there are a lot of unresolved problems. Copyright is the big one - AI is essentially a copyright launderer, eating up a bunch of data or media and mixing it together just enough to claim it didn’t rip anything off. Its outputs are derivative by nature. And stuff like Grok shows how vulnerable these LLMs are to the political whims of their creators.
I am also skeptical about its use cases. Maybe this is a bit Luddite of me, but I’m concerned about the way people are using it to automate all of the interesting challenges out of their lives: cheating on college essays, vibe coding, meal planning, writing heartfelt personal letters, etc. My general sense is that some of these challenges are actually good for our brains to work through, partly because we define our identity by how we choose to tackle them. My fear is that automating all of these things away will produce a new generation that can’t do anything without the help of a $50-a-month corpo chatbot they’ve come to depend on for intellectual tasks and emotional processing.
Your mention of a corpo chatbot brings up something else I’ve thought about. I think leftists are abdicating their social responsibility when they just throw up their hands and say “AI bad” (not aimed at you directly, just a general trend I’ve noticed). Capitalists greedily using it to maximize profit is no surprise. But where are the people saying “here’s how we can do this ethically and minimize the harms”? If there’s no opposing force and the only option is “unethically created AI or nothing,” then the answer is inevitably going to be “unethically created AI.” Open-weight/self-hostable models are good and all, but where are the people pushing for a collective effort to build an LLM that represents the best humanity has to offer, or some grand vision like that?