Title, or at least the inverse should be encouraged. This has been talked about before, but with how bad things are getting, and how realistic good AI-generated videos are becoming, anything feels better than nothing. AI watermarks or metadata can be removed, but that's not the point; the point is deterrence. All the big tech companies would comply immediately (at least on the surface, for consumer-facing products), and we would probably see a massive decrease in malicious use. People will bypass it, strip watermarks, and scrub metadata, but the overall situation should still be quite a bit better. I don't see many downsides.
No, mostly because I'm against laws that are literally impossible to enforce. And it will only become exponentially harder to enforce as the years go on.
I think a lot of people will get annoyed at this comparison, but I see a lot of similarity between the attitudes of the “AI slop” people and the “we can always tell” anti-trans people, in the sense that I've seen so many people from the first group accuse legitimate human works of being AI-created (and obviously we've all seen how often people from the second group have accused AFAB women of being trans). And just as those anti-trans people actually can't tell for a huge number of well-passing trans people, there is a lot of AI-created work out there that is absolutely passing for human-created work en masse, without giving off any obvious “slop” signs. Real people are getting (and will keep getting) swept up and hurt in this anti-AI reactionary phase.
I think AI has a lot of legitimately decent uses, and I think it has a lot of stupid-as-shit uses. And the stupid-as-shit uses may be in the lead for the moment. But mandating the tagging of AI-generated content would just be ineffective and reactionary. I do think it should be regulated in other, more useful ways.
What other, more useful ways?