

The bill mandates safety testing of advanced AI models and the imposition of “guardrails” to ensure they can’t slip out of the control of their developers or users and can’t be employed to create “biological, chemical, and nuclear weapons, as well as weapons with cyber-offensive capabilities.” It’s been endorsed by some AI developers but condemned by others, who assert that its constraints will drive the industry out of California.
Man, if I can’t even build homemade nuclear weapons, what CAN I do? That’s it, I’m moving to Nevada!
The really annoying thing is that the people behind AI surely ought to know all this already. I remember just a few years ago, when DALL-E mini came out, they’d deliberately not trained it on pictures of human faces, so you couldn’t use it to generate them – faces came out all garbled. What’s changed isn’t that they don’t know this stuff – it’s that the temptation of money means they don’t care anymore.