I don’t usually keep the author’s name in the suggested hed, but here I think he’s recognizable enough that it adds value.
I am a science-fiction writer, which means that my job is to make up futuristic parables about our current techno-social arrangements to interrogate not just what a gadget does, but who it does it for, and who it does it to.
What I do not do is predict the future. No one can predict the future, which is a good thing, since if the future were predictable, that would mean we couldn’t change it.
Now, not everyone understands the distinction. They think science-fiction writers are oracles. Even some of my colleagues labor under the delusion that we can “see the future”.
Then there are science-fiction fans who believe that they are reading the future. A depressing number of those people appear to have become AI bros. The fact that these guys can’t shut up about the day their spicy autocomplete machine will wake up and turn us all into paperclips has led many confused journalists and conference organizers to try to get me to comment on the future of AI.
That’s something I used to strenuously resist doing, because I wasted two years of my life explaining patiently and repeatedly why I thought crypto was stupid, and getting relentlessly bollocked by cryptocurrency cultists who at first insisted that I just didn’t understand crypto. And then, when I made it clear that I did understand crypto, they insisted that I must be a paid shill.
This is literally what happens when you argue with Scientologists, and life is just too short. That said, people would not stop asking – so I’m going to explain what I think about AI and how to be a good AI critic. By which I mean: “How to be a critic whose criticism inflicts maximum damage on the parts of AI that are doing the most harm.”
So what is the alternative? A lot of artists and their allies think they have an answer: they say we should extend copyright to cover the activities associated with training a model.
And I am here to tell you they are wrong. Wrong because this would represent a massive expansion of copyright over activities that are currently permitted – for good reason.
He goes on to argue that better solutions are barring AI-generated works from copyright protection and supporting workers’ collective bargaining, and I really agree with his arguments here. I also liked this bit about how some of what remains after the bubble could be useful:
And we will have the open-source models that run on commodity hardware, AI tools that can do a lot of useful stuff, like transcribing audio and video; describing images; summarizing documents; and automating a lot of labor-intensive graphic editing – such as removing backgrounds or airbrushing passersby out of photos. These will run on our laptops and phones, and open-source hackers will find ways to push them to do things their makers never dreamed of.
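The “transcribing audio” part, at least, is something you can already do on a laptop today. Here’s a minimal sketch using the open-source openai-whisper package (this is my example, not anything from the article; the model size and the audio filename are placeholders):

```python
# pip install openai-whisper   (also needs ffmpeg on your PATH)
import whisper

# "base" is one of the smaller open Whisper checkpoints; it runs
# tolerably on an ordinary laptop CPU, no GPU required.
model = whisper.load_model("base")

# "interview.mp3" is a placeholder; point this at any audio file.
result = model.transcribe("interview.mp3")
print(result["text"])
```

Swap in a larger checkpoint ("small", "medium") for better accuracy if your machine can handle it. This is exactly the kind of commodity-hardware use the excerpt is talking about.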
That’s a great article, thanks for posting.
Cory has a way of getting right to the heart of things, and does so marvellously here. Great explanation of why the investments continue despite the dogshit economics of this industry.
Cory Doctorow is an international treasure
Not just something but a ton of used RAM sticks and GPUs.
NPUs, not GPUs; they target different metrics.
What I do not do is predict the future.
Ok.
Likely more accurate to say “know the future” instead of “predict the future”, but the intent is the same. He doesn’t tell us what will happen, only what will likely happen.
maybe he means “AI companies will fail” not so much as a prediction, but just as a given. kind of like “one day, you must die” isn’t really a “prediction,” that’s just the way it is
“The way it is” is based on a huge body of statistical evidence. Claiming some future result without that evidence is called “prediction”.
The guy is just riding the anti-AI sentiment for PR.
Has this not always happened with any new technology?
Hysterical hatred? Not sure. Personal computers were welcomed, cars and planes too (planes were laughed at, but never hated as far as I know)… Nah, I don’t think every significant new technology is hated at the start.
Companies fail all the time with any new technology, and some AI companies will fail. In this case it’s just business, not hatred. But by calling it hysterical you’re refusing to see the other perspective, the one held by the people hating on it, so it’s a waste of time to argue with you.
it’s a waste of time to argue with you
You’re correct. I believe at this point I have heard the full spectrum of anti-LLM arguments, and most of them are pathetic; the few that are somewhat reasonable are not actually anti-LLM but against consumer practices (both companies that shovel LLMs into everything without any reason, and end consumers who use LLMs for purposes they were never made for and where they are still completely ineffective).