I understand that it’s a general-purpose machine for producing images from a prompt/context. I don’t feel particularly outraged. I just know that, say, OpenAI has quite a lot of safeguards to prevent generating CSAM. Those safeguards may not be perfect, but… it seems like Grok doesn’t have good enough ones?