When Adobe Inc. released its Firefly image-generating software last year, the company said the artificial intelligence model was trained mainly on Adobe Stock, its database of hundreds of millions of licensed images. Firefly, Adobe said, was a “commercially safe” alternative to competitors like Midjourney, which learned by scraping pictures from across the internet.

But behind the scenes, Adobe was also relying in part on AI-generated content to train Firefly, including from those same AI rivals. In numerous presentations and public posts about how Firefly is safer than the competition due to its training data, Adobe never made clear that its model actually used images from some of those same competitors.

  • General_Effort@lemmy.world
    3 months ago

    we’re still feeding them “more data”.

    Yes, that’s one way of putting it. What gets into the Adobe stock database is already curated. They also have the sales and tracking data.

    Though as these generative models get better and better at mimicking real world data

    Also yes on this. It doesn’t matter whether your data is synthetic, only whether it’s fit for purpose. That’s especially true in this case, where the distinction between synthetic and real is so unclear. You’re already including drawings, renders, photomanips, etc. I have no idea what kind of misconception people have that they would think it matters whether some piece of digital art is AI-generated.