A Florida man is facing 20 counts of obscenity for allegedly creating and distributing AI-generated child sexual abuse material, highlighting how readily accessible generative AI has become for nefarious uses.

Phillip Michael McCorkle was arrested last week while working at a movie theater in Vero Beach, Florida, according to TV station CBS 12 News. A crew from the station captured the arrest, which made for dramatic footage: officers led McCorkle, still in his work uniform, out of the theater in handcuffs.

  • KillerTofu@lemmy.world · 16 points · 3 months ago

    How was the model trained? Probably using existing CSAM images. Those children are victims. Making derivative images of “imaginary” children doesn’t negate its exploitation of children all the way down.

    So no, you are making a false equivalence with your video game metaphors.

    • fernlike3923@sh.itjust.works · 60 points · 3 months ago

      A generative AI model doesn’t require the exact thing it creates to be in its training data. It most likely just combined regular nudity with pictures of children.

      • finley@lemm.ee · 9 points · 3 months ago

        In that case, the images of children were still used without their permission to create the child porn in question.

    • macniel@feddit.org · 28 points · 3 months ago

      Can you or anyone verify that the model was trained on CSAM?

      Besides, an LLM doesn’t need explicit content to derive from in order to create a naked child.

      • KillerTofu@lemmy.world · 5 points · 3 months ago

        You’re defending the generation of CSAM pretty hard here, with a vague “but no child we know of was involved” as a defense.

        • macniel@feddit.org · 15 points · 3 months ago

          I just hope that the models aren’t trained on CSAM, which would make the generated material they fap to ““ethically reasonable,”” since no children would be involved. And I hope that those who have those tendencies can be helped in a way that doesn’t involve chemical castration or incarceration.

    • Diplomjodler@lemmy.world · 13 points · 3 months ago

      While I wouldn’t put it past Meta & Co. to explicitly seek out CSAM to train their models on, I don’t think that’s how this stuff works.

    • grue@lemmy.world · 3 points · 3 months ago

      But the AI companies insist the outputs of these models aren’t derivative works in any other circumstances!