• Asuka@sh.itjust.works · 4 months ago

    I think it’s a big mistake to conclude that, because the most basic LLMs are just autocomplete, or because LLMs can hallucinate, what big LLMs do doesn’t constitute “thinking”. No, GPT-4 isn’t conscious, but it very clearly “thinks”.

    It’s started to feel to me like current AIs are reasonable recreations of parts of our minds. It’s as if they’re our ability to visualize, to verbalize, and, to an extent, to reason (at least the way we intuitively reason, not formally), but separated from the “rest” of our thought processes.

    • fidodo@lemmy.world · 4 months ago

      It depends on how you define thinking. I agree that LLMs could be a component of thinking, specifically knowledge and recall.

    • erwan@lemmy.ml · 4 months ago

      Yes, as Linus Torvalds said, humans also think like autocomplete systems.