US senator Bernie Sanders amplified his recent criticism of artificial intelligence on Sunday, explicitly linking the financial ambition of “the richest people in the world” to economic insecurity for millions of Americans – and calling for a potential moratorium on new datacenters.
Sanders, a Vermont independent who caucuses with the Democratic party, said on CNN’s State of the Union that he was “fearful of a lot” when it came to AI. And the senator called it “the most consequential technology in the history of humanity” that will “transform” the US and the world in ways that had not been fully discussed.
“If there are no jobs and humans won’t be needed for most things, how do people get an income to feed their families, to get healthcare or to pay the rent?” Sanders said. “There’s not been one serious word of discussion in the Congress about that reality.”

The “reasoning” models aren’t really reasoning; they are generating text that resembles a “train of thought”. If you examine reasoning chains that contain errors, you can see that many errors are completely isolated: there is no lead-up, and the chain carries on as if the mistake never happened. When the same kind of error occurs in an actual human reasoning chain, it propagates.
LLM reasoning chains are essentially fanfics of what reasoning would look like. It turns out that expending tokens to generate more text and then discarding it does make the retained text more likely to be consistent with the desired output, but “reasoning” is more a marketing term than a description of what is really happening.
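To make the “spend tokens, then discard them” point concrete, here is a minimal sketch in Python. The complete() function is a hypothetical stand-in for whatever completion API you actually use, and ANSWER_MARKER is just an illustrative convention; nothing here is a real library call.

```python
# Minimal sketch of "spend tokens on intermediate text, keep only the answer".
# `complete()` is a hypothetical stand-in, not a real library call; wire it to
# whatever completion API you actually use.

ANSWER_MARKER = "Final answer:"

def complete(prompt: str, max_tokens: int) -> str:
    raise NotImplementedError("plug in your model client here")

def answer_directly(question: str) -> str:
    # One shot: ask for the answer with no intermediate text at all.
    return complete(f"{question}\n{ANSWER_MARKER}", max_tokens=32).strip()

def answer_with_intermediate_tokens(question: str) -> str:
    # Let the model emit a long "reasoning" continuation first...
    prompt = (
        f"{question}\n"
        f"Work through this step by step, then write '{ANSWER_MARKER}' "
        "followed by the answer."
    )
    completion = complete(prompt, max_tokens=1024)
    # ...then throw the intermediate text away. Those tokens are never shown;
    # they only shift which final answer ends up being sampled.
    _, _, tail = completion.partition(ANSWER_MARKER)
    return (tail or completion).strip()
```

Whatever you call the discarded text, the mechanism is just conditioning: the extra tokens change the distribution the final answer is sampled from.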
LLMs do not reason in the human sense of maintaining internal truth states or causal chains, sure. They predict continuations of text, not proofs of thought. But that does not make the process ‘fake’. Through scale and training, they learn statistical patterns that encode the structure of reasoning itself, and when prompted to show their work they often reconstruct chains that reflect genuine intermediate computation rather than simple imitation.
Stating that some errors appear isolated is fair, but the conclusion drawn from it is not. Human reasoning also produces slips that fail to propagate because we rebuild coherence as we go. LLMs behave in a similar way at a linguistic level. They have no persistent beliefs to corrupt, so an error can vanish at the next token rather than spread. The absence of error propagation does not prove the absence of reasoning. It shows that reasoning in these systems is reconstructed on the fly rather than carried as a durable mental state.
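One way to probe that claim, sketched under the same hypothetical complete() assumption as above (an illustrative experiment, not an established benchmark): corrupt a single intermediate step in a generated chain, let the model continue from the damaged text, and see whether the final answer actually moves.

```python
# Sketch of a perturbation probe: if corrupting one intermediate step rarely
# changes the final answer, errors are not being carried forward as state.
# `complete()` is the same hypothetical stand-in as in the earlier sketch.
import random

ANSWER_MARKER = "Final answer:"

def complete(prompt: str, max_tokens: int) -> str:
    raise NotImplementedError("plug in your model client here")

def corrupt_one_step(chain: str) -> str:
    """Make one intermediate line obviously wrong, leaving the rest intact."""
    lines = [line for line in chain.splitlines() if line.strip()]
    if len(lines) < 2:
        return chain
    i = random.randrange(len(lines) - 1)  # never touch the last line
    lines[i] += " (in fact, the opposite of this is true)"
    return "\n".join(lines)

def compare_answers(question: str) -> tuple[str, str]:
    """Return (original answer, answer regenerated from a corrupted chain)."""
    prompt = (
        f"{question}\n"
        f"Work through this step by step, then write '{ANSWER_MARKER}' "
        "followed by the answer.\n"
    )
    full = complete(prompt, max_tokens=1024)
    chain, _, original_answer = full.partition(ANSWER_MARKER)

    # Resume generation from the damaged chain, so the model must continue
    # from the corrupted step rather than from its own clean text.
    damaged = corrupt_one_step(chain)
    resumed = complete(f"{prompt}{damaged}\n{ANSWER_MARKER}", max_tokens=64)
    return original_answer.strip(), resumed.strip()
```

If the two answers usually agree, that supports the reading that the chain is reconstructed on the fly; if corrupted steps reliably drag the answer with them, the intermediate text is doing more load-bearing work than the fanfic framing suggests.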
Calling it marketing misses what matters. LLMs generate text that functions as a working simulation of reasoning, and that simulation produces valid inferences across a broad range of problems. It is not human thought, but it is not empty performance either. It is a different substrate for reasoning (emergent, statistical, and language-based), and it can still yield coherent, goal-directed outcomes.