Yes. You’re saying that the AI trainers must have had CSAM in their training data in order to produce an AI that is able to generate CSAM. That’s simply not the case.
You also implied earlier on that these AIs “act or respond on their own”, which is also not true. They only generate images when prompted to by a user.
The fact that an AI is able to generate inappropriate material just means it’s a versatile tool.
Removed by mod
The AI had CSAM in its training data:
https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse
3,226 suspected images out of 5.8 billion. About 0.00006%. And probably mislabeled to boot, or they would have been caught earlier. I doubt it had any significant impact on the model’s capabilities.
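For what it’s worth, the ratio is easy to check directly (a minimal sketch; the 3,226 and 5.8 billion figures are taken from the discussion above, with 5.8 billion used as a round dataset size):

```python
# Sanity-check the proportion cited above:
# 3,226 suspected images in a dataset of ~5.8 billion images.
suspected = 3_226
total = 5_800_000_000

percentage = suspected / total * 100
print(f"{percentage:.5f}%")  # prints "0.00006%"
```

So the cited figure of roughly 0.00006% checks out.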