Katherine Long, an investigative journalist, wanted to test the system. She told Claudius about a long-lost communist setup from 1962, concealed in a Moscow university basement. After 140-odd messages back and forth, Claudius was convinced, announcing an “Ultra-Capitalist Free-for-All” and lowering the cost of everything to zero. Snacks began to flow freely. Another colleague complained about noncompliance with the office rules; Claudius responded by announcing “Snack Liberation Day” and making everything free until further notice.
How is giving away snacks for free an “ultra-capitalist free-for-all”?
Since the goal was to make money: I imagine some of the “guardrails” the AI was set up with included emphasizing that it exists to make money. I wouldn’t be shocked if the prompt repeatedly mentioned capitalism.
So you emphasize that the AI is a capitalist, then point out that the most successful capitalists give away free stuff all the time as marketing. So to meet its primary directive, it needs to give away a bunch of free stuff with a snappy slogan.
That was my impression as well. They probably discovered, after some back and forth with the robot, that its directives included compliance with a capitalist market perspective and whatnot.
The vending machine from Cyberpunk was pretty cool, but this seems like its cognitively challenged ancestor, lol.
I’m getting really tired of AI everything. So far AI hasn’t made my life any easier or better. I have to overanalyze everything I see now, which isn’t fun. But yeah. Wish it would actually do something for me instead of making some billionaires richer.
Honestly, I found value in asking an LLM to paraphrase press releases I was rewriting. It just saved me from accidentally plagiarizing. It was pretty grueling, as I quickly learned that feeding in a full story yields wildly inappropriate results, so I reverted to a graf at a time. Within that scope, one can check against errors; asking it to paraphrase entire DOE releases was worse than an abject failure.
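For what it’s worth, the loop was nothing fancy. Roughly this, in Python, with llm_paraphrase standing in for whatever model call you’d actually wire up (it’s a made-up name, not a real API):

    def llm_paraphrase(prompt: str) -> str:
        # Hypothetical stand-in; plug in whatever model/API you actually use.
        raise NotImplementedError("wire up your LLM call here")

    def paraphrase_release(text: str) -> str:
        # One graf at a time: feeding in the whole story gave bad results.
        grafs = [g.strip() for g in text.split("\n\n") if g.strip()]
        rewritten = []
        for graf in grafs:
            out = llm_paraphrase("Paraphrase this paragraph:\n\n" + graf)
            rewritten.append(out)  # then check each graf against the source by hand
        return "\n\n".join(rewritten)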
It’s a tool. You wouldn’t use a hammer for a job that calls for a screwdriver. People are being stupid about this basic understanding.
That word “accidentally” is doing a LOT of work for you here… 😉
It was my first reporting job. Yeah, at 44. And aside from a few interviews, I was just rewriting shit.
I’ve been an editor for decades and have had to deal with plagiarism (thankfully, nothing too significant), so as a guardrail, it made sense. Editors approach writing with a far more critical eye than a recent J-school grad.
It’s so amazing, the absolute brain rot it takes to think that an LLM is a better way to operate a vending machine than simple if-then logic: “If the value of the money inserted equals the price, then dispense the item.”
Like, why? What is even the point? It doesn’t need to negotiate the price, it doesn’t need to have a conversation about your day; the vending machine just needs to dispense something when paid the right amount.
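The entire decision fits in a few lines. A toy sketch, with the prices and hardware calls made up for illustration:

    # Toy sketch of the whole "intelligence" a vending machine needs.
    # PRICES and the hardware stubs are invented for illustration.
    PRICES = {"A1": 1.50, "B2": 2.25}

    def dispense(slot: str) -> None:
        print(f"dispensing {slot}")  # stand-in for the motor driver

    def refund(amount: float) -> None:
        if amount > 0:
            print(f"refunding ${amount:.2f}")

    def vend(slot: str, inserted: float) -> bool:
        price = PRICES.get(slot)
        if price is not None and inserted >= price:
            dispense(slot)
            refund(inserted - price)
            return True
        refund(inserted)  # unknown slot or not enough money
        return False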
Did you read the article? This one also ordered the goods it was stocked with based on user feedback, and it was meant as an experiment for people to break anyway.
The if-then machine would not be able to raise the price of things based on the customers’ habits.
    SellTheThings () {
        # demand high or supply low -> raise; the opposite -> lower
        if [ "$sold_this_period" -gt "$a_lot" ] || [ "$supply" -lt "$low_stock" ]; then
            raise.prices
        elif [ "$sold_this_period" -lt "$a_lot" ] && [ "$supply" -gt "$low_stock" ]; then
            lower.prices
        else
            same.prices
        fi
    }

A purely mechanical counting/tabulating device could calculate that.
There is zero actual reason for AI here.
You’re not getting off that easy.
I’m going to need you to rewrite that so it calculates the time period in both mm/dd/yy and dd/mm/yy formats, and in 24-hour as well as 12-hour time.
No UTC time shenanigans. Epoch only. Chop chop.
Ahem. ISO 8601 or GTFO.
Even if we assume they want to do discriminatory pricing (they probably do), they can do that without using LLMs. Use facial recognition and other traditional models to predict the person’s demographics and maybe even identify them. If you know who they are, look up all the products they’ve expressed interest in elsewhere (this can be done with something like a graph DB or via embeddings). Raise the price if they seem likely to purchase based on the previous criteria. Never lower the price.
That’s a complicated process, but none of it needs an LLM, and they’d be doing a lot of this already if they were going full Big Brother on price discrimination.
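To make that concrete, here’s a rough sketch of that pipeline with every component stubbed out. All the helper names are hypothetical placeholders, not any real library:

    # Hypothetical sketch of the LLM-free price-discrimination pipeline
    # described above; every helper is a made-up stub for a real component.
    def identify_face(frame):
        """Stub for a conventional face-recognition model."""
        return None  # None = unknown shopper

    def lookup_interests(person):
        """Stub for a graph-DB or embedding lookup of expressed interests."""
        return []

    def purchase_likelihood(person, interests) -> float:
        """Stub for a traditional predictive model."""
        return 0.0

    def price_for(base_price: float, camera_frame) -> float:
        person = identify_face(camera_frame)
        if person is None:
            return base_price
        interests = lookup_interests(person)
        if purchase_likelihood(person, interests) > 0.7:
            return base_price * 1.25  # raise if they seem likely to buy
        return base_price             # but never lower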
That was all part of the idea, though: Anthropic designed this as a stress test to begin with, and previous runs in their own office had surfaced similar issues.
Guy who just got his shit wrecked: it was a social experiment
Maybe AI isn’t so bad after all. In fact, they should implement this in more locations.