• Catoblepas@lemmy.blahaj.zone · 1 month ago

    Did we memory hole the whole ‘known CSAM in training data’ thing that happened a while back? When you’re vacuuming up the internet, you’re going to wind up with the nasty stuff too. Even if what it generates isn’t a pixel-by-pixel match of a photo it was trained on, there’s a non-zero chance it’s based on actual CSAM. Which is really just laundering CSAM.

    • Ragdoll X@lemmy.world · 1 month ago

      IIRC it was something like a fraction of a fraction of 1% that was CSAM. The researchers identified the images through their hashes, but the images themselves weren’t actually available in the dataset because they had already been removed from the internet.
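
      For context, “identifying through hashes” works roughly like this: you compute a digest of each image and compare it against a blocklist of digests of known material. Here’s a minimal Python sketch assuming an exact-match SHA-256 blocklist (the blocklist and directory are hypothetical; real pipelines use perceptual hashes like PhotoDNA that also catch near-duplicates, which this doesn’t implement):

      ```python
      import hashlib
      from pathlib import Path

      # Hypothetical blocklist; real digest sets come from clearinghouses
      # and are never published directly.
      KNOWN_BAD_SHA256 = {"0" * 64}  # placeholder digest

      def sha256_of(path: Path) -> str:
          """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
          h = hashlib.sha256()
          with path.open("rb") as f:
              for chunk in iter(lambda: f.read(1 << 20), b""):
                  h.update(chunk)
          return h.hexdigest()

      def flag_known_images(image_dir: Path) -> list[Path]:
          """Return paths whose exact bytes match a blocklisted digest."""
          return [p for p in image_dir.iterdir()
                  if p.is_file() and sha256_of(p) in KNOWN_BAD_SHA256]
      ```

      Note that exact hashing only matches byte-identical files, which is part of why nobody can promise a scraped dataset is completely clean.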

      Still, you could make AI CSAM even if you were 100% sure that none of the training images included it, since that’s what these models are made for: combining concepts without needing to have seen them together before. If you hold the AI’s hand enough with prompt engineering, textual inversion, and img2img, you can get it to generate pretty much anything. That’s the power and the danger of these things.
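
      That concept-combination point is easy to demonstrate with an ordinary, benign prompt. A sketch using the public Hugging Face diffusers API (the checkpoint id is just one commonly used example, and you’d need a GPU with the weights downloaded):

      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      # Load a public text-to-image checkpoint (example model id).
      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      # A combination the model almost certainly never saw as a single
      # training image, composed from concepts it has seen separately.
      image = pipe("a violin carved entirely out of ice, studio photo").images[0]
      image.save("ice_violin.png")
      ```

      img2img and textual inversion are the same idea with extra conditioning (an init image, or a learned embedding for a new token), which is what “holding its hand” means in practice.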

      • Catoblepas@lemmy.blahaj.zone · 1 month ago

        What % of it do you think gets used when generating the CSAM, though? Like, if 1% of the training images were cups, the model is probably drawing on some of that 1% whenever it generates images of cups.

        And yes, you could technically do this with no CSAM in the training material, but we don’t know whether that’s what the AI is actually doing, because the images used to train it were mass scraped from the internet. They’re using massive amounts of data without fully filtering it, and they can’t say with certainty whether or not there is CSAM in the training material.