• JovialMicrobial@lemm.ee · 1 month ago

    I think one of the many problems with AI-generated CSAM is that, as AI becomes more advanced, it will become increasingly difficult for authorities to tell the difference between what was AI-generated and what wasn't.

    Banning all of it means authorities don't have to sift through images trying to distinguish between the two. If one image is declared to be AI-generated and it's not… well… that doesn't help the victims or reduce the number of victims. It could also make the horrible people who do abuse children far more comfortable putting that material out there, because it can hide amongst all the AI-generated content. That means authorities would have to go through far more images before finding the ones with real victims in them. Making all of it illegal prevents those problems.

    • PM_Your_Nudes_Please@lemmy.world · 1 month ago (edited)

      And that’s a good point! Luckily, it’s still (usually) fairly easy to identify AI-generated images. But as the models get more advanced, that will likely become harder and harder to do.

      Maybe some sort of required digital signature for AI art would help: something like a cryptographic signature in the metadata, verifiable against the generator’s public key, that can’t be forged after the fact. Anything without a known, trusted AI signature would by default be treated as the real deal.
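      Very roughly, the signing idea could look like the sketch below. This is a minimal illustration using Python’s cryptography package; the keypair, the file name, and the “store the signature in metadata” step are all hypothetical stand-ins for whatever a real standard would actually specify.

      ```python
      # Sketch: the generator signs an image's bytes with its private key;
      # anyone holding the published public key can verify the claim.
      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

      # Hypothetical keypair held by the AI image generator.
      private_key = Ed25519PrivateKey.generate()
      public_key = private_key.public_key()

      # Hypothetical output file. In practice you'd sign only the pixel data,
      # since embedding the signature in metadata changes the file's bytes.
      image_bytes = open("generated.png", "rb").read()
      signature = private_key.sign(image_bytes)  # would live in the metadata

      # Verification by an authority or platform holding the trusted key.
      try:
          public_key.verify(signature, image_bytes)
          print("Valid signature: image attests it was AI-generated.")
      except InvalidSignature:
          print("No valid signature: treat as unverified by default.")
      ```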

      But this would likely require large-scale rewrites of existing image formats, if they could even support it at all. It’s the type of thing that would require people way smarter than me. And even then it feels like a bodged solution to a problem that only exists because people suck. Plus, if it required registration with a certificate authority (like an HTTPS certificate does), it would be a hurdle for local AI instances to jump through, because they would need to get a trusted certificate before they could sign their images.
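      To make the certificate-authority hurdle concrete, here’s a toy two-link chain of trust, again just a sketch: the “CA” key, the registration step, and the placeholder image bytes are all hypothetical, standing in for the real X.509-style machinery an actual scheme would use.

      ```python
      # Sketch: a trusted CA endorses a generator's key ("registration"),
      # and verification checks both links: CA -> key, then key -> image.
      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives.asymmetric.ed25519 import (
          Ed25519PrivateKey, Ed25519PublicKey)
      from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

      # Hypothetical keys: a trusted CA and one local AI instance.
      ca_key = Ed25519PrivateKey.generate()
      generator_key = Ed25519PrivateKey.generate()

      # "Registration": the CA signs the raw bytes of the generator's public key.
      generator_pub_bytes = generator_key.public_key().public_bytes(
          Encoding.Raw, PublicFormat.Raw)
      endorsement = ca_key.sign(generator_pub_bytes)

      def verify_chain(ca_public, gen_pub_bytes, endorsement, image_bytes, image_sig):
          """Accept an image only if the CA vouches for the generator's key
          and that key in turn signed the image bytes."""
          try:
              ca_public.verify(endorsement, gen_pub_bytes)    # link 1: CA -> key
              gen_pub = Ed25519PublicKey.from_public_bytes(gen_pub_bytes)
              gen_pub.verify(image_sig, image_bytes)          # link 2: key -> image
              return True
          except InvalidSignature:
              return False

      image_bytes = b"...pixel data..."   # placeholder for real image content
      image_sig = generator_key.sign(image_bytes)
      print(verify_chain(ca_key.public_key(), generator_pub_bytes,
                         endorsement, image_bytes, image_sig))  # True
      ```

      An unregistered local instance fails link 1 here, which is exactly the hurdle described above: its images can never carry a trusted signature, however honest it is.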