A straightforward dismantling of AI fearmongering videos uploaded by Kyle “Science Thor” Hill, Sci “The Fault in our Research” Show, and Kurz “We’re Sorry for Summarizing a Pop-Sci Book” Gesagt over the past few months. The author is a computing professional, but their take is fully in line with what we normally post here.
I don’t have any choice sneers. The author is too busy hunting for whoever is paying SciShow and Kurzgesagt for these videos. I do appreciate that they repeatedly point out that there is already a lot of evidence of people harming themselves or others because of chatbots. Allegedly.
My favorite science YouTubers? Nah, those channels are IFLScience-tier; their intended audience is literally children.
The author also proposes a framework for analyzing claims about generative AI. I don’t know if I endorse it fully, but I agree that each of the four talking points represents a massive failure of understanding. Their LIES model is:
- Lethality: the bots will kill us all
- Inevitability: the bots are unstoppable and will definitely be created in the future
- Exceptionalism: the bots are wholly unlike any past technology and we are unprepared to understand them
- Superintelligence: the bots are better than people at thinking
I would add a P to this, for Plausibility or Personhood or Personality: the incorrect claim that the bots are people. Maybe call it PILES.
Call the last one A for Agency and turn the acronym into an AI history reference: ELISA.
Hey, while we’re here, I propose two more letters:
- S, standing for “stochastic parrot ignorance”
- C, standing for “Chinese room does not constitute thought”
Now we can have ASS LICE.
> Kurz “We’re Sorry for Summarizing a Pop-Sci Book” Gesagt
Gesundheit