- cross-posted to:
- [email protected]
- cross-posted to:
- [email protected]
Just 250 malicious training documents can poison a 13B parameter model - that’s 0.00016% of a whole dataset Poisoning AI models might be way easier than previously thought if an Anthropic study is anything to go on. …
Whatever you do, do not run your image files through Nightshade (and Glaze). That would be bullying and it makes techbros cry.
I think this could pop the bubble if we do it enough