Be VERY SCARED, okay?
https://www.youtube.com/watch?v=UTbyGFW0new&list=UU9rJrMVgcXTfa8xuMnbhAEA - video
https://pivottoai.libsyn.com/20251114-anthropic-claims-chinese-ai-hackers-security-researchers-call-bs - podcast
time: 5 min 40 sec
Even taking their story at face value:
It seems like they're hyping up LLM agents running a bunch of ordinary scripts?
It indicates that their safety measures don’t work
Anthropic will read your logs, so you have no privacy, confidentiality, or security using their LLM; even then, they only find problems months after the fact (by Anthropic's own account this happened in June, and they didn't catch it until September).
But yeah, the whole thing might be BS, or at least a bad exaggeration from Anthropic: they never precisely separate their sources and evidence from the inferences (guesses) they've drawn from that evidence. For instance, say a hacker tried to set up hacking LLM bots, and the bots mostly failed, wasted API calls, and hallucinated a bunch of shit. If Anthropic just read the logs from their end and didn't do the legwork of contacting the people who had allegedly been hacked, they might "mistakenly" (a mistake that just so happens to hype up their product) take those logs as evidence of successful hacks.