• The Rabbit R1 AI box is actually an Android app in a limited $200 device, running on AOSP without Google Play.
  • Rabbit Inc. is unhappy about details of its tech stack being public and is threatening action against unauthorized emulators.
  • AOSP is a logical choice for mobile hardware as it provides essential functionalities without the need for Google Play.
  • De_Narm@lemmy.world · 2 months ago

    Why are there AI boxes popping up everywhere? They are useless. How many times do we need to repeat that LLMs are trained to give convincing answers, not correct ones? I’ve gained nothing from asking this glorified e-waste something and then pulling out my phone to verify the answer.

    • cron@feddit.de · 2 months ago

      What I don’t get is why anyone would want to buy a new gadget for some AI features. Just develop a nice app and let people run it on their phones.

      • no banana @lemmy.world · 2 months ago

        That’s why though. Because they can monetize hardware. They can’t monetize something a free app does.

        • knotthatone@lemmy.one · 2 months ago

          Plenty of free apps get monetized just fine. They just have to offer something people want to use that they can slather ads all over. The AI doo-dads haven’t shown they’re useful. I’m guessing the dedicated hardware strategy got them more upfront funding from stupid venture capital than an app would have, but they still haven’t answered why anybody should buy these. Just postponing the inevitable.

    • exanime@lemmy.today · 2 months ago

      The answer is “marketing”

      They have pushed AI so hard in the last couple of years they have convinced many that we are 1 year away from Terminator travelling back in time to prevent the apocalypse

      • sudo42@lemmy.world · 2 months ago
        • Incredible levels of hype
        • Tons of power consumption
        • Questionable utility
        • Small but very vocal fanbase

        s/Crypto/AI/

    • Blackmist@feddit.uk · 2 months ago

      Because money, both from tech hungry but not very savvy consumers, and the inevitable advertisers that will pay for the opportunity for their names to be ejected from these boxes as part of a perfectly natural conversation.

    • XEAL@lemm.ee · 2 months ago

      It’s not black or white.

      Of course AI hallucinates, but not everything an LLM produces is garbage.

      Don’t expect a “living” Wikipedia or Google, but it sure can help with things like coding or translating.

      • De_Narm@lemmy.world · 2 months ago

        I don’t necessarily disagree. You can certainly use LLMs and achieve something in less time than without it. Numerous people here are speaking about coding and while I had no success with them, it can work with more popular languages. The thing is, these people use LLMs as a tool in their process. They verify the results (or the compiler does it for them). That’s not what this product is. It’s a standalone device which you talk to. It’s supposed to replace pulling out your phone to answer a question.

      • Paradox@lemdro.id · 2 months ago

        I quite like Kagi’s Universal Summarizer, for example. It lets me know whether a long-ass YouTube video is worth watching.

      • Croquette@sh.itjust.works · 2 months ago

        I use LLMs as a starting point to research new subjects.

        The Google/DDG search quality is hot garbage, so an LLM at least gives me the terminology to be more precise in my searches.

    • TrickDacy@lemmy.world · 2 months ago

      I have now heard of my first “ai box”. I’m on Lemmy most days. Not sure how it’s an epidemic…

      • De_Narm@lemmy.world · 2 months ago

        I haven’t seen many of them here, but I use other media too. E.g., not long ago there was a lot of coverage of the “Humane AI Pin”, which was utter garbage and even more expensive.

    • BaroqueInMind@lemmy.one · 2 months ago

      There is a fuck ton of money laundering coming from China nowadays, and they invest millions in any stupid tech-bro idea to dump their illegal cash.

    • OneOrTheOtherDontAskMe@lemmy.world · 2 months ago

      I just started diving into the space from a localized point yesterday. And I can say that there are definitely problems with garbage spewing, but some of these models are getting really really good at really specific things.

      A biomedical model I saw was lauded for its consistency in pulling relevant data from medical notes for the sake of patient care instructions, important risk factors, fall risk level, etc.

      So although I agree they’re still giving well phrased garbage for big general cases (and GPT4 seems to be much more ‘savvy’), the specific use cases are getting much better and I’m stoked to see how that continues.

    • Blue_Morpho@lemmy.world · 2 months ago

      I think it’s a delayed development reaction to Amazon Alexa from 4 years ago. Alexa came out, voice assistants were everywhere. Someone wanted to cash in on the hype but consumer product development takes a really long time.

      So the product is finally finished (mobile Alexa) and they label it AI to hype it, as well as to make it work without the hard work of parsing Wikipedia for good answers.

      • AIhasUse@lemmy.world · 2 months ago

        Alexa is a fundamentally different architecture from the LLMs of today. There is no way that anyone with even a basic understanding of modern computing would say something like this.

        • Blue_Morpho@lemmy.world · 2 months ago

          Alexa is a fundamentally different architecture from the LLMs of today.

          Which is why I explicitly said they used AI (LLM) instead of the harder to implement but more accurate Alexa method.

          Maybe actually read the entire post before being an ass.

    • MxM111@kbin.social · 2 months ago

      The best convincing answer is the correct one. The correlation of AI answers with correct answers is fairly high; numerous tests show that. The models have also improved significantly (especially the paid versions) since their introduction just 2 years ago.
      Of course that does not mean they can be trusted as much as Wikipedia, but they are probably a better source than Facebook.

      • De_Narm@lemmy.world · 2 months ago

        “Fairly high” is still useless (and doesn’t actually quantify anything, depending on context both 1% and 99% could be ‘fairly high’). As long as these models just hallucinate things, I need to double-check. Which is what I would have done without one of these things anyway.

        • AIhasUse@lemmy.world · 2 months ago

          Hallucinations are largely dealt with if you use agents. It won’t be long until it gets packaged well enough that anyone can just use it. For now, it takes a little bit of effort to get a decent setup.

        • TrickDacy@lemmy.world · 2 months ago

          1% correct is never “fairly high” wtf

          Also if you want a computer that you don’t have to double check, you literally are expecting software to embody the concept of God. This is fucking stupid.

          • De_Narm@lemmy.world · 2 months ago

            1% correct is never “fairly high” wtf

            It’s all about context. Asking a bunch of 4 year olds questions about trigonometry, 1% of answers being correct would be fairly high. ‘Fairly high’ basically only means ‘as high as expected’ or ‘higher than expected’.

            Also if you want a computer that you don’t have to double check, you literally are expecting software to embody the concept of God. This is fucking stupid.

            Hence, it is useless. If I cannot expect it to be more or less always correct, I can skip using it and just look stuff up myself.

            • TrickDacy@lemmy.world · 2 months ago

              Obviously the only contexts that would apply here are ones where you expect a correct answer. Why would we be evaluating software that claims to be helpful against a 4-year-old asked to do calculus? I have to question your ability to reason for insinuating this.

              So confirmed. God or nothing. Why don’t you go back to quills? Computers cannot read your mind and write this message automatically, hence they are useless

              • De_Narm@lemmy.world · 2 months ago

                Obviously the only contexts that would apply here are ones where you expect a correct answer.

                That’s the whole point, I don’t expect correct answers. Neither from a 4 year old nor from a probabilistic language model.

                • TrickDacy@lemmy.world · 2 months ago

                  And you don’t expect a correct answer because it isn’t 100% of the time. Some lemmings are basically just clones of Sheldon Cooper

                  • De_Narm@lemmy.world · 2 months ago

                    I don’t expect a correct answer because I’ve used these models quite a lot last year. At least half the answers were hallucinated. And it’s still a common complaint about this product as well if you look at actual reviews (e.g., pretty sure Marques Brownlee mentions it).

                  • FlorianSimon@sh.itjust.works · 2 months ago

                    Something seems to fly above your head: quality is not optional and it’s good engineering practice to seek reliable methods of doing our work. As a mature software person, you look for tools that give less room for failure and want to leave as little as possible for humans to fuck up, because you know they’re not reliable, despite being unavoidable. That’s the logic behind automated testing, Rust’s borrow checker, static typing…

                    If you’ve done code review, you know it’s not very efficient at catching bugs. It’s not efficient because you don’t pay as much attention to details when you’re not actually writing the code. With LLMs, you have to do code review to ensure you meet quality standards, because of the hallucinations, just like you’ve got to test your work before committing it.

                    I understand the actual software engineers that care about delivering working code and would rather write it in order to be more confident in the quality of the output.

          • SpaceNoodle@lemmy.world · 2 months ago

            Perhaps the problem is that I never bothered to ask anything trivial enough, but you’d think that two rhyming words starting with “L” would be simple.

            • CaptDust@sh.itjust.works · 2 months ago

              “AI” is a really dumb term for what we’re all using currently. General LLMs are not intelligent; they assign probabilities to tokens (words) based on the tokens that came before, to guess the next most likely word and phrase, really really fast. Informed guesses, sure, but there aren’t enough parameters to consider all the factors required to identify a rhyme.

              That said, honestly I’m struggling to come up with 2 rhyming L words? Lol even rhymebrain is failing me. I’m curious what you went with.

            • MxM111@kbin.social · 2 months ago

              OK, so by “asking” you mean finding questions somewhere that someone has already identified as ones an LLM answers wrongly, and asking those yourself.

        • magic_lobster_party@kbin.run · 2 months ago

          I’ve asked GPT4 to write specific Python programs, and more often than not it does a good job. And if the program is incorrect I can tell it about the error and it will often manage to fix it for me.

          • FlorianSimon@sh.itjust.works · 2 months ago

            You have every right not to, but the “useless” word comes out a lot when talking about LLMs and code, and we’re not all arguing in bad faith. The reliability problem is still a strong factor in why people don’t use this more, and, even if you buy into the hype, it’s probably a good idea to temper your expectations and try to walk a mile in the other person’s shoes. You might get to use LLMs and learn a thing or two.

            • TrickDacy@lemmy.world · 2 months ago

              I only “believe the hype” because a good developer friend of mine suggested I try copilot so I did and was impressed. It’s an amazing technical achievement that helps me get my job done. It’s useful every single day I use it. Does it do my job for me? No of fucking course not, I’m not a moron who expected that to begin with. It speeds up small portions of tasks and if I don’t understand or agree with its solution, it’s insanely easy not to use it.

              People online mad about something new is all this is. There are valid concerns about this kind of tech, but I rarely see that. Ignorance on the topic prevails. Anyone calling ai “useless” in a blanket statement is necessarily ignorant and doesn’t really deserve my time except to catch a quick insult for being the ignorant fool they have revealed themselves to be.

              • FlorianSimon@sh.itjust.works · 2 months ago

                I’m glad that you’re finding this useful. When I say it’s useless, I speak in my name only.

                I’m not afraid to try it out, and I actually did, and, while I was impressed by the quality of the English it spits out, I was disappointed with the actual substance of the answers, which makes this completely unusable for me in my day to day life. I keep trying it every now and then, but it’s not a service I would pay for in its current state.

                Thing is, I’m not the only one. This is the opinion of the majority of people I work with, senior or junior. I’m willing to give it some time to mature, but I’m unconvinced at the moment.

                • TrickDacy@lemmy.world · 2 months ago

                  You would need to be pulling some trickery on Microsoft to get access to copilot for more than a single 30 day trial so I’m skeptical you’ve actually used it. Sounds like you’re using other products which may be much worse. It also sounds like you work in a conservative shop. Good luck with that

                  • FlorianSimon@sh.itjust.works · 2 months ago

                    I have not tried Copilot, no. I’m not giving any tool money, personal info and access to my code when it can’t reliably answer a question like: “does removing from a std::vector invalidate iterators?” (not a prompt I tried on LLMs but close enough).

                    That shit’s just dangerous, for obvious reasons. Especially when you consider the catastrophic impact these kinds of errors can have.

                    There needs to be a fundamental shift to something that detects and fixes the garbage, which just isn’t there ATM.