A shocking story was promoted on the “front page” or main feed of Elon Musk’s X on Thursday:

“Iran Strikes Tel Aviv with Heavy Missiles,” read the headline.

This would certainly be a worrying world news development. Earlier that week, Israel had conducted an airstrike on Iran’s embassy in Syria, killing two generals as well as other officers. Retaliation from Iran seemed like a real possibility.

But there was one major problem: Iran did not attack Israel. The headline was fake.

Even more concerning, the fake headline was apparently generated by X’s own official AI chatbot, Grok, and then promoted by X’s trending news product, Explore, on the very first day of an updated version of the feature.

  • assassin_aragorn@lemmy.world · 3 months ago

    Due to the way it works, it’s a bit like static (steady-state) error in control theory: you know that for different applications it may or may not be acceptable. That’s what the “I” in PID regulators is for, and all that. IIRC

    Oh great, I’m getting horrible flashbacks now to my controls class.

    Another way to look at it is that if there’s sufficient lag time between your controlled variable and your observed variable, you will never catch up to your target. You’ll always be chasing your tail with basic feedback control.
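
To make the steady-state error point concrete, here is a minimal sketch assuming a toy first-order plant (the plant parameters, gains, and time steps below are made-up for illustration, not anything from the thread): a proportional-only controller settles short of the setpoint, while adding the integral (“I”) term drives the remaining error to zero.

```python
# Minimal sketch: P-only control of a first-order plant leaves a steady-state
# ("static") error; adding the integral term removes it. All numbers here are
# illustrative assumptions.

def simulate(kp, ki, setpoint=1.0, dt=0.01, steps=5000):
    """Euler simulation of dx/dt = -a*x + b*u under a PI controller."""
    a, b = 1.0, 1.0          # assumed plant parameters
    x = 0.0                  # controlled variable, starting at rest
    integral = 0.0           # accumulated error for the I term
    for _ in range(steps):
        error = setpoint - x
        integral += error * dt
        u = kp * error + ki * integral   # ki=0 reduces this to pure P control
        x += (-a * x + b * u) * dt       # Euler step of the plant dynamics
    return x

if __name__ == "__main__":
    p_only = simulate(kp=4.0, ki=0.0)
    with_i = simulate(kp=4.0, ki=2.0)
    print(f"P only: settles at {p_only:.3f} (steady-state error {1.0 - p_only:.3f})")
    print(f"P + I : settles at {with_i:.3f} (steady-state error {1.0 - with_i:.3f})")
```

With these numbers the P-only loop settles at about 0.8 instead of 1.0 (the proportional term needs a nonzero error to keep producing output), while the PI loop converges to the setpoint, which is the textbook reason the integral term exists.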