• fidodo@lemmy.world · 4 months ago

    Good. It’s dangerous to view AI as magic. I’ve had to debate way too many people who think LLMs are actually intelligent. It’s dangerous to overestimate their capabilities, lest we use them for tasks they can’t perform safely. They’re very powerful, but the fact that they’re non-deterministic and unpredictable means we need to design systems that rely on LLMs very carefully, with heavy guardrails.
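
    As a minimal sketch of what “heavy guardrails” can mean in practice (assuming a hypothetical call_llm client and a made-up order-support use case), the idea is that the model’s output is never acted on directly; it has to pass parsing and an allow-list first:

    ```python
    import json

    ALLOWED_ACTIONS = {"lookup_order", "escalate_to_human"}  # hypothetical action set

    def safe_dispatch(user_message: str) -> dict:
        """Ask the LLM for a structured action, but validate it before acting on it."""
        raw = call_llm(  # hypothetical LLM client returning a string
            'Reply only with JSON like {"action": "...", "order_id": "..."}\n' + user_message
        )
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            return {"action": "escalate_to_human", "reason": "unparseable output"}

        # Guardrail: the non-deterministic part never gets to choose an unapproved action.
        if parsed.get("action") not in ALLOWED_ACTIONS:
            return {"action": "escalate_to_human", "reason": "unapproved action"}
        return parsed
    ```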

    • FaceDeer@kbin.social · 4 months ago

      Conversely, there are way too many people who think that humans are magic and that it’s impossible for AI to ever do <insert whatever is currently being debated here>.

      I’ve long believed that there’s a smooth spectrum between not-intelligent and human-intelligent. It’s not a binary yes/no sort of thing. There are inert rocks at one end, humans at the other, and everything else scattered at various points in between. So I think it’s fine to discuss where exactly on that scale LLMs fall, and to accept the possibility that they’re moving in our direction.

      • fidodo@lemmy.world · 4 months ago

        It’s not linear either. Brains are crazy complex, with specialized sub-regions dedicated to specific tasks. I really don’t think LLMs alone can demonstrate advanced intelligence, but one could be a very important “cortex” for a system that does. There are also different types of intelligence: LLMs are very knowledgeable and have great recall, but they lack reasoning and a worldview.

        • FaceDeer@kbin.social · 4 months ago

          Indeed, and many of the more advanced AI systems currently out there are already using LLMs as just one component. Retrieval-augmented generation, for example, adds a separate “memory” that gets searched and bits inserted into the context of the LLM when it’s answering questions. LLMs have been trained to be able to call external APIs to do the things they’re bad at, like math. The LLM is typically still the central “core” of the system, though; the other stuff is routine sorts of computer activities that we’ve already had a handle on for decades.
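
          A rough sketch of that retrieval-augmented pattern, with embed, vector_store, and call_llm as hypothetical stand-ins for whatever embedding model, vector database, and LLM client are actually used:

          ```python
          def answer_with_rag(question: str) -> str:
              """Search the separate 'memory' first, then paste the hits into the LLM's context."""
              query_vec = embed(question)                          # hypothetical embedding function
              passages = vector_store.search(query_vec, top_k=3)   # hypothetical vector store

              context = "\n".join(p.text for p in passages)
              prompt = (
                  "Answer the question using only the context below.\n\n"
                  f"Context:\n{context}\n\n"
                  f"Question: {question}"
              )
              return call_llm(prompt)                              # hypothetical LLM client
          ```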

          IMO it still boils down to a continuum. If there’s an AI system that’s got an LLM in it but also a Wolfram Alpha API and a websearch API and other such “helpers”, then that system should be considered as a whole when asking how “intelligent” it is.
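
          Viewed as a whole, a system like that might look something like this sketch, with wolfram_query, web_search, and call_llm as hypothetical wrappers around the real APIs:

          ```python
          def composite_agent(question: str) -> str:
              """The LLM is the core, but it delegates the things it is bad at to tools."""
              # First pass (hypothetical routing prompt): let the LLM pick a tool.
              decision = call_llm(
                  "Reply with exactly one word - MATH, SEARCH, or ANSWER.\nQuestion: " + question
              ).strip()

              if decision == "MATH":
                  tool_result = wolfram_query(question)   # hypothetical Wolfram Alpha wrapper
              elif decision == "SEARCH":
                  tool_result = web_search(question)      # hypothetical web-search wrapper
              else:
                  return call_llm(question)

              # Second pass: the LLM composes the final answer from the tool's output.
              return call_llm(f"Question: {question}\nTool result: {tool_result}\nAnswer:")
          ```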

          • fidodo@lemmy.world · 4 months ago

            Lol yup, some people think they’re real smart for realizing how limited LLMs are, but they don’t recognize that the researchers who actually work on this are years ahead in experimentation and theory and have realized all this and more. They’re not just making the specific models better; they’re also figuring out how to combine them into something more generally intelligent instead of super specialized.

    • Deceptichum@kbin.social · 4 months ago

      I find that the people who think LLMs are actually intelligent are generally the people opposed to them.

      People who use them as the tools they are know how limited they are.

    • voluble@lemmy.world · 4 months ago

      Not being combative or even disagreeing with you - purely out of curiosity, what do you think are the necessary and sufficient conditions of intelligence?

      • fidodo@lemmy.world · 4 months ago

        A worldview simulation it can use as a scratch pad for reasoning. I view reasoning as a set of simulated actions that convert a worldview from state A to state B.

        It depends on how you define intelligence, though. Normally people define it as human-like, and I think there are three primary subtypes of intelligence needed for cognizance: reasoning, awareness, and knowledge. The current generation is figuring out the knowledge type, but it needs to be combined with the other two to be complete.

        • voluble@lemmy.world · 4 months ago

          Thanks! I’m not clear on what you mean by a worldview simulation as a scratch pad for reasoning. What would be an example of that process at work?

          For sure, defining intelligence is non-trivial. What clears the bar of intelligence, and what doesn’t, is not obvious to me. That’s why I’m engaging here; it sounds like you’ve put a lot of thought into an answer, but I’m not sure I understand your terms.

          • fidodo@lemmy.world · 4 months ago

            A worldview is your current representational model of the world around you. For example, you know you’re a human on Earth in a physical universe with a set of rules; you have a mental representation of your body and its capabilities, your location, and the physicality of the things in your location. It can also include abstract things, like your personality, your relationships, and your understanding of what’s possible in the world.

            Basically, you live in reality, but you need a way to store a representation of that reality in your mind in order to be able to interact with and understand that reality.

            The simulation part is your ability to imagine manipulating that reality to achieve a goal. If you break that down, you’re trying to convert reality from your perceived current real state A to an imagined desired state B. Reasoning is coming up with a plan to convert the worldview from state A to state B, step by step. Say you want to brush your teeth: you want to convert your worldview of you having dirty teeth into one of you having clean teeth, and to do that you reason that you need to follow a few steps, like moving your body to the bathroom, retrieving tools (toothbrush and toothpaste), and applying mechanical action to your teeth to clean them. You created a step-by-step plan to change the state of your worldview to a new desired state you came up with. It doesn’t need to be physical either; it could be an abstract goal, like calculating a tip for a bill, or a grand one, like going to college or creating a mathematical proof.
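
            That step-by-step part can be written down very literally as a search over actions that transform one world state into another. A toy sketch (the states and actions are made up for illustration, and this says nothing about how brains actually do it):

            ```python
            from collections import deque

            # Toy worldview states and the actions that transform them (all hypothetical).
            ACTIONS = {
                "in_bedroom":    {"walk to bathroom": "in_bathroom"},
                "in_bathroom":   {"pick up toothbrush": "holding_brush"},
                "holding_brush": {"brush teeth": "teeth_clean"},
            }

            def plan(state_a: str, state_b: str) -> list[str] | None:
                """Breadth-first search for a sequence of actions turning state A into state B."""
                queue = deque([(state_a, [])])
                seen = {state_a}
                while queue:
                    state, steps = queue.popleft()
                    if state == state_b:
                        return steps
                    for action, next_state in ACTIONS.get(state, {}).items():
                        if next_state not in seen:
                            seen.add(next_state)
                            queue.append((next_state, steps + [action]))
                return None

            print(plan("in_bedroom", "teeth_clean"))
            # ['walk to bathroom', 'pick up toothbrush', 'brush teeth']
            ```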

            LLMs don’t have a representational model of the world; they don’t have a working memory or a world simulation to use as a scratchpad for testing out reasoning. They just take a sequence of words and retrieve the next word that is probabilistically and relationally likely to be a good continuation based on their training data.
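
            Stripped of everything else, that “retrieve the next likely word” loop is just this (toy hand-written probabilities standing in for a real model’s learned weights):

            ```python
            import random

            # Toy next-word distributions; a real model computes these from billions of weights.
            NEXT_WORD_PROBS = {
                ("the",): {"cat": 0.6, "dog": 0.4},
                ("the", "cat"): {"sat": 0.7, "ran": 0.3},
                ("the", "cat", "sat"): {"down": 1.0},
            }

            def generate(prompt: tuple[str, ...], max_words: int = 3) -> tuple[str, ...]:
                """Repeatedly sample the next word given the words so far - no world model involved."""
                words = prompt
                for _ in range(max_words):
                    dist = NEXT_WORD_PROBS.get(words)
                    if not dist:
                        break
                    choices, weights = zip(*dist.items())
                    words = words + (random.choices(choices, weights=weights)[0],)
                return words

            print(" ".join(generate(("the",))))  # e.g. "the cat sat down"
            ```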

            They could be a really important cortex that assists in developing a worldview model, but in their current form as a single-task AI model, they cannot do reasoning on their own.

            Knowledge retrieval is an important component of reasoning, though, so they can still play a very important role there.

            • voluble@lemmy.world · 4 months ago

              Interesting. I’m curious to know more about what you think of training datasets. Seems like they could be described as a stored representation of reality that maybe checks the boxes you laid out. It’s a very different structure of representation than what we have as animals, but I’m not sure it can be brushed off as trivial. The way an AI interacts with a training dataset is mechanistic, but as you describe, human worldviews can be described in mechanistic terms as well (I do X because I believe Y).

              You haven’t said it, so I might be wrong, but are you pointing to free will and imagination as somehow tied to intelligence in some necessary way?

              • fidodo@lemmy.world · 4 months ago

                I think a worldview is all about simulation and maintaining state. It’s not really about making associations, but rather about maintaining some kind of up-to-date imagined state that you can simulate on top of to represent the world. I think it needs to be a very dynamic thing, which is a pretty different paradigm from the ML training methodology.

                Yes, I view these things as foundational to free will and imagination, but I’m trying to think at a lower level than that. Simulation facilitates imagination, and reasoning facilitates motivation, which facilitates free will.

                Are those things necessary for intelligence? Well, it depends on your definition, and everyone has a different one, ranging from reciting information to full-blown consciousness. Personally, I don’t really care about coming up with a rigid definition; it’s just a word. I care more about the attributes. I think LLMs are a good knowledge engine, and knowledge is a component of intelligence.

    • Asuka@sh.itjust.works · 4 months ago

      I think it’s a big mistake to conclude that, because the most basic LLMs are just autocompletes, or because LLMs can hallucinate, what big LLMs do doesn’t constitute “thinking”. No, GPT-4 isn’t conscious, but it very clearly “thinks”.

      It’s started to feel to me like current AIs are reasonable recreations of parts of our minds. It’s like they’re our ability to visualize, to verbalize, and to an extent, to reason (at least the way we intuitively reason, not formally), but separated from the “rest” of our thought processes.

      • fidodo@lemmy.world · 4 months ago

        Depends on how you define thinking. I agree, LLMs could be a component of thinking, specifically knowledge and recall.

      • erwan@lemmy.ml · 4 months ago

        Yes, as Linus Torvalds said, humans also think like autocomplete systems.