I have a question… does Grok not do deepfakes of men as well? Or is that not an issue for some reason? Haven’t been paying attention, just seen headlines.
No, this whole story is a mix of hyperbole and not understanding AI.
Imagine there was a shitty clothing company that produced a lightweight fabric. Due to piss-poor testing and not actually giving a shit about their customers, it turns out that after a few times in the wash, the clothing essentially disintegrates as you wear it.
The story breaks, but for some reason, all the headlines say “Company sells clothing which falls off when little kids wear them, leaving them naked!” While technically true, anyone who spends more than 5 seconds looking into it will recognize how much that headline twists the actual situation. But they print it anyway because people already hate the company and are eager to accept anything negative at face value. Plus, accusing a person or group of child exploitation is a time-honored strategy of criticism, because few people will push back against it; nobody wants to be seen as defending child exploitation, even when they’re really just pointing out the truth.
Well, Grok is capable of producing CSAM with a straightforward text prompt, right? That would seem to me to be illegal on X’s part, but I could be mistaken.
TL;DR at the bottom.

It’s a bit more complex than that. It’s not a straightforward text prompt, since they did attempt to put filters in place to prevent stuff like this. However, this being a Musk company, those filters are shitty, and people quickly found ways to bypass them, likely through a series of prompts or very carefully tailored prompts.
But that’s just the nature of AI. AI image generators are never specifically trained on CSAM (at least I really fucking hope not). But neither are they specifically trained to generate giraffes made out of dumplings dancing on the concept of time. Ask for the latter, though, and it will dutifully spit out some slop that matches. The point is, AI image generators can make ANYTHING, or at least try to. That’s what they do. You can build filters and restrictions to try to prevent users from asking for certain things, or to prevent those things from being delivered, but the underlying ability of the AI to make those things is still there. And due to the black-box nature of machine learning, it can never actually be removed.
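To make that concrete, here is a minimal toy sketch of the architecture, assuming nothing about any real system: every name in it (the blocklist, the stub functions) is made up for illustration. The filters wrap the generator on the input and output sides, while the generator itself stays willing to render anything.

```python
# Toy sketch of how safety filtering wraps a generative model.
# The point: the "model" below will produce output for any prompt;
# the only control is the checks bolted on before and after it.
# Everything here is illustrative stub code, not any real API.

BLOCKED_TERMS = {"forbidden_topic"}  # stand-in for a trained safety classifier

def classify_prompt(prompt: str) -> bool:
    """Input-side filter: flag prompts containing blocked terms."""
    return any(term in prompt.lower() for term in BLOCKED_TERMS)

def fake_model(prompt: str) -> str:
    """Stand-in for the image generator: it 'renders' anything it's asked."""
    return f"<image depicting: {prompt}>"

def classify_output(output: str) -> bool:
    """Output-side filter: scan the result before it is delivered."""
    return any(term in output.lower() for term in BLOCKED_TERMS)

def moderated_generate(prompt: str) -> str:
    if classify_prompt(prompt):
        return "Request refused."        # blocked on the way in
    result = fake_model(prompt)          # the model's capability is untouched
    if classify_output(result):
        return "Result withheld."        # blocked on the way out
    return result

print(moderated_generate("a giraffe made of dumplings"))   # passes both checks
print(moderated_generate("some forbidden_topic content"))  # caught by input filter
```

Nothing in this setup changes what the generator can produce; the filters only decide what reaches the user.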
Now, there is a VERY big argument to be made against AI as a whole for that reason. If you spend a little while thinking about what it actually means to have something with the ability to create ANYTHING, or at least an approximation of it, you should be scared shitless. The only real safeguards are filters on either the input or the output side, and filters can be worked around. You could see it with early versions of things like ChatGPT, where a carefully worded prompt could get it to role-play a duplicate version of itself with the filters removed and return a secondary response from that duplicated instance. For normally off-limits topics (like building explosives or committing suicide), it would then reply with a generic “I’m sorry Dave, I’m afraid I can’t do that,” followed by a second response giving the full, unredacted answer. The ability to create these things is always there; it’s just company-built filters that stop it from showing them.
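And here is why that wrap-around approach is brittle. Reusing the toy `moderated_generate` from the sketch above (still purely illustrative; real systems use trained classifiers rather than keyword lists, but they face the same structural problem), a trivially obfuscated prompt slips past both checks, and the generator underneath complies as usual:

```python
# Continuing the toy sketch above: the same filters, trivially evaded.
# The blocklist only matches surface strings, so an obfuscated spelling
# passes both checks while the "model" still happily complies.

print(moderated_generate("some forbidden_topic content"))
# -> Request refused.

print(moderated_generate("some f0rbidden-t0pic content"))
# -> <image depicting: some f0rbidden-t0pic content>
```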
Anyways, this comment has gotten away from me. The point is, it’s not really about Grok. It’s not really about CSAM. It’s about AI as a whole, but that’s too big and abstract a concept for the masses to grasp. So instead we get articles and legislation dealing with one particular issue from one particular program, because that’s just the first thing people have become outraged at, without seeing the big picture.
TL;DR: No, it’s not as simple as a straightforward prompt, and it’s far from just Grok that is at issue.
I understand that it’s a general-purpose machine for producing images given a prompt/context. I don’t feel particularly outraged. I just know that, say, OpenAI has quite a lot of safeguards to prevent generating CSAM. Safeguards may not be perfect, but… it seems like Grok doesn’t have good enough safeguards?