• ricecake@sh.itjust.works · 1 month ago

      It does learn from real images, but it doesn’t need real images of what it’s generating to produce related content.
      As in, a network trained with no exposure to children is unlikely to be able to easily produce quality depictions of children. Without training on nudity, it’s unlikely to produce good results there either.
      However, if it knows both concepts it can combine them readily enough, similar to how you know the concept of “bicycle” and that of “Neptune” and can readily enough imagine “Neptune riding an old-fashioned bicycle around the sun while flaunting its top hat”.

      Under the hood, this type of AI is effectively a very sophisticated “error correction” system. It changes pixels in the image to try to “fix” it so that it matches the prompt, usually starting from a smear of random colors (static noise).
      That’s how it’s able to combine different concepts from a wide range of images to create things it’s never seen.
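      That loop is easier to see in code. Here’s a toy sketch of the idea (not any real model; predict_fix is a hypothetical stand-in for the trained network that would normally be guided by the text prompt):

      ```python
      # Toy illustration of the iterative "error correction" loop.
      # A real diffusion model replaces predict_fix with a large neural
      # network conditioned on the text prompt; here it just nudges pixels
      # toward a stand-in target so the loop structure is visible.
      import numpy as np

      rng = np.random.default_rng(0)
      target = rng.random((8, 8))        # stand-in for "what the prompt describes"
      image = rng.normal(size=(8, 8))    # start from a smear of static noise

      def predict_fix(img):
          # Hypothetical denoiser: estimate how each pixel should change
          # so the image better matches the prompt.
          return target - img

      for _ in range(50):
          image = image + 0.1 * predict_fix(image)  # apply a small correction each step

      print(np.abs(image - target).mean())  # the remaining error shrinks toward zero
      ```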

    • helpImTrappedOnline@lemmy.world · 1 month ago

      Basically if I want to create … (I’ll use a different example for obvious reasons, but I’m sure you could apply it to the topic)

      … “an image of a miniature denim airjet with Taylor Swift’s face on the side of it”, the AI generators can produce one despite no such thing existing in the training data. It may take multiple attempts and effort with the text prompt to get exactly what you’re looking for, but you could eventually get a convincing image.

      AI takes loads of preexisting data on airplanes, T.Swift, and denim and combines it all into something new.
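      In practice that just means feeding a text prompt to a text-to-image pipeline and regenerating until an attempt looks right. A minimal sketch with the open-source diffusers library (the model name, prompt, and settings here are only illustrative assumptions):

      ```python
      # Minimal text-to-image sketch using Hugging Face diffusers.
      # Assumes a machine with a CUDA GPU and the diffusers/torch packages installed.
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5",  # illustrative model choice
          torch_dtype=torch.float16,
      ).to("cuda")

      prompt = "a miniature denim airjet, product photo"  # tweak and retry until it looks right
      result = pipe(prompt, num_inference_steps=30, guidance_scale=7.5)
      result.images[0].save("denim_airjet.png")
      ```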

    • pavnilschanda@lemmy.world · 1 month ago

      True, but by their very nature these generations tend to depict anonymous identities, and the sheer number of them would make it harder for investigators to detect pictures of real, human victims (which can also include indicators of the crime’s location).