Unfortunately in a lot of places there’s really nothing illegal if it’s just fantasy and text.
why is that unfortunate, though? who would you be protecting by making that chatbot illegal? would you “protect” the chatbot? would you “protect” the good-think of the users? or is it about preventing the “normalization” of these thoughts?
if it’s the latter: we had the very same discussion about violent shooter games, and the evidence shows that playing them does not make people more violent or more likely to kill with guns or other weapons.
I don’t think it’s the same discussion; video games and AI chatbots are two very different things that you engage with in very different ways.