• barsquid@lemmy.world · 5 days ago

    Putting the claim instead of the reality in the headline is journalistic malpractice. 2x for free is still pretty great tho.

    • barsquid@lemmy.world · 5 days ago

      Just finished the article, it’s not for free at all. Chips need to be designed to use it. I’m skeptical again. There’s no point IMO. Nobody wants to put the R&D into massively parallel CPUs when they can put that effort into GPUs.

      • frezik@midwest.social · 4 days ago

        Not every problem is amenable to GPUs. If it has a lot of branching, or needs to fetch back and forth from memory a lot, GPUs don’t help.
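
        To make that concrete, here's a toy C sketch of the kind of loop I mean (my own illustration, nothing from the article): pointer chasing with data-dependent branches. Each step waits on the previous fetch, and which pointer gets chased next isn't known until the data arrives, so a GPU has no independent work to hide the latency with, and the divergent branches would serialize whole warps anyway.

        ```c
        #include <stdio.h>

        struct node {
            int value;
            struct node *next_even;
            struct node *next_odd;
        };

        /* Each iteration depends on the previous memory fetch, and the
         * branch depends on the fetched value: no independent work for a
         * GPU to hide latency with, and the divergence would serialize
         * whole warps. */
        long walk(struct node *n) {
            long sum = 0;
            while (n) {
                sum += n->value;
                n = (n->value & 1) ? n->next_odd : n->next_even;
            }
            return sum;
        }

        int main(void) {
            struct node c = {4, NULL, NULL};
            struct node b = {7, &c, &c};
            struct node a = {2, &b, &b};
            printf("%ld\n", walk(&a)); /* 2 + 7 + 4 = 13 */
            return 0;
        }
        ```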

        Now, does this thing have exactly the same limitations? I’m guessing yes, but it’s all too vague to know for sure. It sounds like they’re doing what superscalar CPUs have done for a while: on x86 that goes back to the original Pentium in 1993, and to Seymour Cray’s designs in the ’60s before that. What are they doing to supercharge the idea?
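
        For anyone unfamiliar, the superscalar idea is just issuing independent instructions in the same cycle. A toy C illustration (mine, not from the article): both loops compute the same sum, but the first is one long dependency chain while the second hands the core four independent adds per iteration, so it can actually use its extra execution units (assuming the compiler doesn’t rewrite the first loop itself).

        ```c
        #include <stdio.h>

        #define N 1000000

        /* Dependent chain: every add needs the previous sum, so even a
         * wide superscalar core retires roughly one add per cycle here. */
        long sum_chain(const int *a) {
            long s = 0;
            for (int i = 0; i < N; i++)
                s += a[i];
            return s;
        }

        /* Four independent accumulators: the adds in one iteration don't
         * depend on each other, so the core can issue them side by side
         * and approach its full issue width. */
        long sum_split(const int *a) {
            long s0 = 0, s1 = 0, s2 = 0, s3 = 0;
            for (int i = 0; i < N; i += 4) {
                s0 += a[i];
                s1 += a[i + 1];
                s2 += a[i + 2];
                s3 += a[i + 3];
            }
            return (s0 + s1) + (s2 + s3);
        }

        int main(void) {
            static int a[N];
            for (int i = 0; i < N; i++) a[i] = 1;
            printf("%ld %ld\n", sum_chain(a), sum_split(a));
            return 0;
        }
        ```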

        Does this avoid some of the security problems that have popped up with superscalar archs? For example, kernel code running at ring 0 ends up executing on the same core as userspace code, and speculative execution effectively hands the userspace side ring-0 data as a result.
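
        For anyone who hasn’t seen it, the canonical published example of that class of bug is the Spectre v1 bounds-check-bypass gadget from Kocher et al. Rough sketch below, using the array1/array2 names from the paper; nothing here is specific to this new chip.

        ```c
        #include <stddef.h>
        #include <stdint.h>

        /* Spectre v1 "bounds check bypass" shape. Architecturally this
         * never reads out of bounds, but a speculative superscalar core
         * can predict the branch taken and run the body before
         * array1_size arrives from memory. The out-of-bounds byte then
         * selects which line of array2 gets cached, and an attacker
         * recovers it by timing array2 reads afterward. */
        uint8_t array1[16];
        uint8_t array2[256 * 4096];   /* one stride per possible byte value */
        size_t  array1_size = 16;
        uint8_t temp;                 /* stops the compiler dropping the loads */

        void victim(size_t x) {
            if (x < array1_size) {                  /* bounds check, predicted taken */
                temp &= array2[array1[x] * 4096];   /* secret-dependent cache load */
            }
        }

        int main(void) {
            victim(0);   /* the interesting behavior is speculative, not shown here */
            return 0;
        }
        ```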