A biologist was shocked to find his name mentioned several times in a scientific paper whose references cite papers that simply don’t exist.

  • krayj@lemmy.world · 96↑ 1↓ · 10 months ago

    Brandolini’s law, aka the “bullshit asymmetry principle”: the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.

    Unfortunately, with the advent of large language models like ChatGPT, the quantity of bullshit being produced is accelerating and is already outpacing the ability to refute it.

    • calabast@lemm.ee · 14↑ · 10 months ago

      I’m curious to see if AI tech can actually help fight some of the bullshit out there someday. I agree that current AI is only making it easier to produce bullshit, but I think with some advances it could be used to parse a long-winded batch of bullshit and summarize it, maybe with bullet points about how the source material is wrong. If they can make an AI as confident as ChatGPT, but without as much of the “makes stuff up left and right,” it could be useful.

      THEN we just have to worry about who owns the AI that parses and summarizes the info we take in, and what kind of biases they’ve baked into the tech…

      • NeoNachtwaechter@lemmy.world · 10↑ · 10 months ago

        > I’m curious to see if AI tech can actually help fight some of the bullshit out there someday.

        It is one of the most difficult problems on earth: deciding between lie and truth.

        And then think about the fine line you walk when detecting irony, half-irony, or other forms of humorous non-truth.

      • Barack_Embalmer@lemmy.world · 3↑ · 10 months ago

        I have high hopes for concepts like Toolformer where the model has to learn to use external APIs and resources like Wikipedia or Wolfram to get answers, rather than relying on the inscrutable and garbled soup of knowledge absorbed from the text training corpus directly. Systems plugged into knowledge graphs could have the best of both worlds - able to generate well-written novel text outputs AND the added rigor of “classical AI” style interpretability.
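        For illustration, a minimal sketch of that kind of tool-calling loop - every name in it (generate, wikipedia_lookup) is a hypothetical stand-in, not Toolformer’s actual interface:

        ```python
        import json

        def generate(prompt: str) -> str:
            """Stand-in for an LLM call. Assumed to return either plain text
            or a JSON tool request like {"tool": "wikipedia", "query": "..."}."""
            raise NotImplementedError("plug in a real model here")

        def wikipedia_lookup(query: str) -> str:
            """Stand-in for a lookup against a real knowledge source."""
            raise NotImplementedError("plug in a real retriever here")

        TOOLS = {"wikipedia": wikipedia_lookup}

        def answer(question: str, max_steps: int = 3) -> str:
            # The model either answers directly or requests a tool; tool results
            # are appended to the context, so each step is grounded in retrieved
            # facts rather than knowledge absorbed from the training corpus.
            context = question
            for _ in range(max_steps):
                output = generate(context)
                try:
                    request = json.loads(output)   # model asked for a tool
                except json.JSONDecodeError:
                    return output                  # plain text: final answer
                result = TOOLS[request["tool"]](request["query"])
                context += f"\n[{request['tool']}: {result}]"
            return generate(context)
        ```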

      • gjghkk@lemmy.dbzer0.com · 1↑ · 10 months ago

        > I’m curious to see if AI tech can actually help fight some of the bullshit out there

        Those AIs are the best ones to produce fake scientific papers. It’s a cat-and-mouse game again. Those who can detect bullshit can produce the best bullshit.

  • EnglishMobster@kbin.social · 32↑ · 10 months ago (edited)

    Stupid question: why can’t journals just mandate an actual URL link to a study on the last page, or the exact issue something was printed in? Surely both of those would be easily confirmable, and both would be easy for a scientist using “real” sources to provide (since they must already have access to them).

    Like, it feels silly to me that high school teachers require this sort of thing, yet scientific journals do not?

    • 👍Maximum Derek👍@discuss.tchncs.de · 44↑ 1↓ · 10 months ago

      Because scientific journals exist to profit off science, not bolster it. Fact-checking costs money, so they do the bare minimum they deem necessary to preserve their reputation.

    • tburkhol@lemmy.world · 24↑ 1↓ · 10 months ago

      Many of the journals I’ve published in do require a link, usually a PMID or DOI, but they’re not usually part of the review process. That is, one doesn’t expect academic content reviewers to validate each of the citations, but it’s not unreasonable to imagine a journal having an automated validator. The review process really isn’t structured to detect fraud. It looks like the article in question was at the preprint stage - i.e., not even reviewed yet - and I didn’t notice mention of where it was submitted.
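      Such a validator could be almost trivial: resolve each DOI against doi.org and flag the ones that don’t resolve. A rough sketch in Python - the doi.org endpoint is real, but the reference list and everything else here is illustrative:

      ```python
      import urllib.request
      import urllib.error

      def doi_exists(doi: str) -> bool:
          """True if doi.org can resolve the DOI, i.e. it is registered."""
          req = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
          try:
              urllib.request.urlopen(req, timeout=10)
              return True
          except urllib.error.HTTPError as e:
              # A 404 from doi.org means the DOI is unregistered; other errors
              # (e.g. a publisher rejecting HEAD requests) prove nothing.
              return e.code != 404

      # Hypothetical reference list: the first DOI is real (the DOI Handbook),
      # the second is fabricated.
      for doi in ["10.1000/182", "10.9999/fake.citation.2023"]:
          print(doi, "OK" if doi_exists(doi) else "NOT FOUND")
      ```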

      The message here should be that the process works and the fake article never got published. Very different from the periodic stories about someone who submits a blatantly fake, but hand-written, article to a bullshit journal and gets published.

    • phx@lemmy.ca · 8↑ 1↓ · 10 months ago

      Well, that used to be a thing called a bibliography, but it appears these journals don’t require one. Funny, when even my old 7th-grade essays required those.

      • JoBo@feddit.uk · 4↑ · 10 months ago

        Of course they do. How do you think fake references were included if references were not needed?

        • phx@lemmy.ca · 1↑ · 10 months ago

          Citing sources by name rather than providing full links/ISBNs/etc.?

          • JoBo@feddit.uk · 1↑ · 10 months ago (edited)

            Ah! “Bibliography” is an ambiguous term.

            As the linked article says, one measure that journals are starting to adopt is requiring DOI or PMID links for each reference. It ought to be standard anyway; it’s much less work for reviewers to check the references if they’re easy to find. Even when the cited papers exist, they often don’t say what the authors cite them as saying. But journals don’t pay anyone to check these things, so it often doesn’t get done. Peer review needs to be paid for. For-profit journals need to die.

            • phx@lemmy.ca · 2↑ · 10 months ago

              Yeah, that’s fair. Since Covid I’ve noticed that a bunch of the more vocal opponents online liked to pick actual scientific articles and quote small sections way out of context in order to support their “view”. It’s like using scientific articles for anti-science. They pull that shit repeatedly and piss people off, then report anyone who gets a bit too loud in their response. Seems like a whole playbook these days.

  • AutoTL;DR@lemmings.world (bot) · 12↑ · 10 months ago

    This is the best summary I could come up with:

    As Retraction Watch reports, Natural History Museum of Denmark myriapodologist Henrik Enghoff suspected the authors of the paper from China and Africa used OpenAI’s ChatGPT to dig up academic references — and as it turns out, his hunch was right.

    The offending paper was initially taken down by Preprints.org, a preprint archive run by the academic publisher MDPI, in June after Enghoff’s colleague, the University of Copenhagen’s David Richard Nash, notified editors of the errors.

    Earlier this year, reporters at The Guardian noticed that the AI chatbot even made up entire articles with bylines of journalists who had never written these non-existent pieces.

    “We will withdraw it immediately and add the authors of this preprint to our blacklist,” Preprints.org’s editor Lloyd Shu told Nash in an email back in June.

    Kahsay Tadesse Mawcha of Aksum University in Ethiopia, who was originally listed as a corresponding author on the offending preprint, admitted to Danish newspaper Weekendavisen back in July that he indeed used ChatGPT, adding that he only realized later that the tool was “not recommended” for the task.

    Powerful but flawed AI tools like ChatGPT are a bull in a china shop of almost every knowledge domain, academia included — and it’ll be fascinating to watch everybody involved try to find a new sense of equilibrium.

    The original article contains 547 words, the summary contains 214 words. Saved 61%. I’m a bot and I’m open source!

  • FuryMaker@lemmy.world · 7↑ 1↓ · 10 months ago

    Aren’t papers peer reviewed? Or are they getting ChatGPT to do that too?

    Impose harsher consequences for falsified information?

  • daredevil@kbin.social · 1↑ 4↓ · 10 months ago

    Assuming this is carelessness, it just goes to show that working in academia isn’t an indicator of critical thinking skills, IMO.

    • average650@lemmy.world · 5↑ · 10 months ago

      Honestly, I bet he has the skills; he just didn’t use them because he didn’t care, or was overworked, or whatever the reason.

      • Kerfuffle@sh.itjust.works · 5↑ · 10 months ago

        A lot of people don’t understand the limitations/weaknesses of AI. The carelessness was probably more in not actually learning about the tool he was relying on (and just assuming it gave reliable information).

        • T156@lemmy.world · 2↑ · 10 months ago

          It’s like the aeroplane lawyer case some time ago. People treat the computer as an arbiter of truth, and/or think checking is just asking the chatbot “Did you use a real citation for this?”

      • daredevil@kbin.social · 4↑ · 10 months ago

        You make a valid point, and there are certainly more considerations than my original reply would lead one to believe. Cheers.