• federalreverse-old@feddit.de
    3 months ago

    Apple, AFAIK, used to do this on-device rather than in the cloud. Not quite sure about the situation today.

      • Septimaeus@infosec.pub
        3 months ago

        I don’t. Corps gonna corp, if they can. But I’ve checked this using all the development, networking, and energy-monitoring tools at my disposal, and Apple’s E2E and on-device guarantee does appear to hold. For now.

        Still, those who can should audit periodically, even if they’re only doing it for the settlement.

    • Hawk@lemmynsfw.com
      3 months ago

      They were running CNN inference on a mobile device? I have no clue, but that would be costly, battery-wise at least.
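
      For a sense of scale on that battery concern, here's a back-of-envelope sketch. All the constants are illustrative assumptions (a hypothetical ~1 GFLOP forward pass and made-up per-FLOP energy figures), not measured numbers for any real chip; the point is only the relative gap between a general-purpose CPU and a dedicated ML core.

      ```python
      # Back-of-envelope: energy cost of one small-CNN inference on a phone.
      # Every constant below is an illustrative assumption, not a measurement.

      GFLOPS_PER_INFERENCE = 1.0   # assumed cost of one forward pass (GFLOPs)
      PJ_PER_FLOP_CPU = 100.0      # assumed energy per FLOP on a mobile CPU (picojoules)
      PJ_PER_FLOP_NPU = 5.0        # assumed energy per FLOP on a dedicated ML core
      BATTERY_WH = 15.0            # assumed phone battery capacity (watt-hours)


      def joules_per_inference(pj_per_flop: float) -> float:
          """Energy in joules for one forward pass at the assumed FLOP count."""
          return GFLOPS_PER_INFERENCE * 1e9 * pj_per_flop * 1e-12


      def inferences_per_battery(pj_per_flop: float) -> float:
          """How many forward passes one full battery charge could fund."""
          battery_joules = BATTERY_WH * 3600.0
          return battery_joules / joules_per_inference(pj_per_flop)


      if __name__ == "__main__":
          for name, pj in [("CPU", PJ_PER_FLOP_CPU), ("NPU", PJ_PER_FLOP_NPU)]:
              print(f"{name}: {joules_per_inference(pj):.4f} J/inference, "
                    f"~{inferences_per_battery(pj):,.0f} inferences per charge")
      ```

      Under these assumptions a single inference costs a tiny fraction of a joule either way, so occasional on-device scans are cheap; the accelerator mainly matters when inference runs constantly (e.g. indexing a whole photo library).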

      • didnt_readit@sh.itjust.works
        3 months ago

        They’ve been doing ML locally on devices for like a decade, since way before all the AI hype. They’ve had dedicated ML inference cores in their chips for a long time too, which helps with the battery-life situation.

        • Hawk@lemmynsfw.com
          3 months ago

          It couldn’t quite be a decade; a decade ago we only just had VGG. But sure, broad strokes, they’ve been doing local inference for a while, cool.