• lily33@lemm.ee

    competition too intense

    dangerous technology should not be open source

    So, the actionable suggestions from this article are: reduce competition and ban open source.

    I guess what it’s really about is using fear to make sure AI remains in the hands of a few…

    • thehatfox@lemmy.world

      Yes, this is the setup for regulatory capture before regulation has even been conceived. The likes of OpenAI would like nothing more than to be legally declared the sole stewards of this “dangerous” technology. The constant doom-laden hype that people keep falling for is all part of the plan.

      • lily33@lemm.ee

        I think putting “dangerous” in scare quotes is a bit disingenuous, because there is real potential for danger in the future. But what this article seems to want is definitely not the way to manage that.

        • foggy@lemmy.world

          It would be an obvious attempt at pulling up the ladder if we were to see regulation of AI before we saw regulation of data collection by social media companies. We have already seen that weaponized. Why would we regulate something before it gets weaponized when we have other recent tech, unregulated, already being weaponized?

          • Touching_Grass@lemmy.world

            I saw a post the other day about how people crowdsourced the scraping of grocery store prices. Using that data, they could present a good case for price fixing and collusion. Web scraping is already pretty taboo, and this AI fear-mongering will be the thing used to make it illegal.
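
            A hedged sketch of how that kind of crowdsourced scraping works in practice, in Python. The store URL and the “span.price” selector are hypothetical stand-ins, invented for illustration; real store pages differ, and their terms of service may prohibit scraping.

            ```python
            # Minimal sketch of a crowdsourced price scraper. The URL and the
            # "span.price" selector are hypothetical; adapt them to a real page.
            import csv
            from datetime import date

            import requests
            from bs4 import BeautifulSoup

            STORE_URL = "https://grocer.example/product/milk-2l"  # hypothetical

            def scrape_price(url: str) -> float:
                resp = requests.get(url, timeout=10)
                resp.raise_for_status()
                soup = BeautifulSoup(resp.text, "html.parser")
                tag = soup.select_one("span.price")  # assumed page markup
                return float(tag.text.strip().lstrip("$"))

            if __name__ == "__main__":
                # Each volunteer appends observations to a shared CSV, so prices
                # can be pooled over time and compared across stores.
                with open("prices.csv", "a", newline="") as f:
                    csv.writer(f).writerow(
                        [date.today().isoformat(), STORE_URL, scrape_price(STORE_URL)]
                    )
            ```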

    • Heresy_generator@kbin.social

      It’s also about distraction. The main point of the letter and the campaign behind it is sleight of hand: get the media obsessing over hypothetical concerns about hypothetical future AIs rather than talking about the actual concerns around current LLMs. They don’t want the media talking about the danger of deepfaked videos, floods of generated disinformation, floods of generated scams, deepfaked audio scams, and on and on, so they dangle Skynet in front of them and watch the majority of the media gladly obsess over our Terminator-themed future, because that’s more exciting and generates more clicks than talking about things like the flood of fake news that is going to dominate every democratic election in the world from now on. These LLM creators would much rather see regulation of future products they don’t have any idea how to build (and, even better, maybe that regulation can even entrench their own position) than regulation of what they’re currently, actually doing.

    • Touching_Grass@lemmy.world

      I’m going to need a legal framework to be able to DMCA any comments I see online in case they were created with an AI trained on Sarah Silverman’s books.

      • lily33@lemm.ee

        Since I don’t think this analogy works, you shouldn’t stop there, but actually explain what the world would look like if everyone had access to AI technology (advanced enough to be comparable to a nuke), versus what it would look like if only a small elite had access to it.

        • Touching_Grass@lemmy.world

          We could all do our taxes for free. Fix grammatical errors. Have pocket legal and medical advice. A niche hobby advisor. A pocket professor. A form-completion tool. An all-in-one assistant, especially for people who might not know how to navigate a lot of tasks in life. Or we could ban it because I fear maybe someone will use it to make memes. Lots of lazy articles have convinced me the AI sky is falling.

        • photonic_sorcerer@lemmy.dbzer0.com

          Okay, well, if everyone had access to an AGI, anyone could design and distribute a pathogen that could wipe out a significant portion of the population. Then again, you’d have the collective force of everyone else’s AI countering that plot.

          I think that putting that kind of power into the hands of everyone shouldn’t be done lightly.

          • Hanabie@sh.itjust.works

            There are papers online on how to design viruses. Now you just need funding for a lab and staff, because this is nothing like Breaking Bad.

          • Honytawk@lemmy.zip

            Since when does AI translate to being able to create bacteria and stuff?

            If having the information on how to do so was enough to create pathogens, we should already have been wiped out because of books and libraries.

            • photonic_sorcerer@lemmy.dbzer0.com

              You can’t type “How do I make a pathogen to wipe out a city” into a book. A sufficiently advanced and aligned AI will, however, answer that question with a detailed list of production steps, resource requirements and timeline.

              • HubertManne@kbin.social

                This requires special materials, like enzymes and such. It would be much easier to restrict access to those. Now, true, this godlike AI could go back and show you how to make all the base stuff, but you need equipment for this, like centrifuges, and you will need special media. It’s like the AI telling you how to make a nuke, really. Yeah, it could start you off with Bronze Age metal smithing and you could work your way up to the modern materials you would need, but realistically you won’t be able to do it (assuming, again, you restrict certain materials).

          • Rayspekt@kbin.social

            You still can’t manufacture it. Your comparison with nukes is actually a good example: the basic knowledge of how a nuke works is out there, yet most people struggle to refine weapons-grade plutonium.

            Knowledge is only one part of doing something.

          • lily33@lemm.ee

            I would say the risk of having AI be limited to the ruling elite is worse, though - because there wouldn’t be everyone else’s AI to counter them.

            And if AI is limited to a few, those few WILL become the new ruling elite.

            • Touching_Grass@lemmy.world

              And people would be less likely to identify what AI can and can’t do if we convince ourselves to limit our access to it.

              • subignition@kbin.social

                People are already incompetent enough at this when there’s a disclaimer in front of their faces warning about GPT.

                We’re seeing responses even in this thread conflating AGI with LLMs. People at large are too fucking stupid to be trusted with this kind of thing.

          • serratur@lemmy.wtf

            You’re just gonna print the pathogens with the pathogen printer? You understand that getting the information doesn’t mean you’re able to produce it.

            • Touching_Grass@lemmy.world

              I need an article on how a 3D printer can be used to print an underground chemistry lab to produce these weapons-grade pathogens.

              • testfactor@lemmy.world

                I know how to build a barn. Doesn’t mean I can do it by myself with no tools or materials.

                Turns out that building and operating a lab that can churn out bespoke pathogens is actually even more difficult and expensive than that.

          • Kichae@kbin.social

            Let’s assume your hypothetical here isn’t bonkers: how, exactly, do you propose limiting people’s access to linear algebra?
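
            To make the linear-algebra point concrete, here’s a toy Python sketch (purely illustrative, with random stand-in weights): the core operation of a neural network layer is an ordinary matrix multiply plus a simple nonlinearity.

            ```python
            # Toy illustration: one neural-network "layer" is just a matrix
            # multiply followed by a nonlinearity (ReLU). The weights here are
            # random stand-ins for what a real model would learn.
            import numpy as np

            rng = np.random.default_rng(0)
            W = rng.normal(size=(4, 3))      # weight matrix
            x = rng.normal(size=3)           # input vector

            hidden = np.maximum(0.0, W @ x)  # matmul + ReLU: plain linear algebra
            print(hidden)
            ```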

      • Hanabie@sh.itjust.works

        You can google how to make a nuke. Of course, you’ve still got to get your hands on the plutonium, which is something even countries struggle with.

        • Rayspekt@kbin.social

          Then I’ll ask AI how to obtain plutonium, checkmate.

          But by that point I might just ask the all-knowing AI how I can achieve what I want to with the nuke and cut out the radioactive middle man. Unless the AI tells me to build a nuke, then it’s nuke time anyway.

          • Hanabie@sh.itjust.works

            The point I was trying to make is that all the information about viruses and nuclear bombs is already readily available. AI doing the googling for you will not have an actual impact, especially considering what else you’ll need to make it all work.

            I would assume you get the fear of AI from the news media. Understandable; they have a vested interest in keeping you afraid. AI is gonna steal their ad revenue when you won’t have to visit their shitty websites anymore.

  • Steeve@lemmy.ca

    “Dangerous technology should not be open source, regardless of whether it is bio-weapons or software,” Tegmark said.

    What a stupid alarmist take. The safest way for technology to operate is when people can see how it works, allowing experts who don’t just have a financial interest in it succeeding to scrutinize it openly. And it’s not like this is some magical technology that only massive corporations have access to in the first place, it’s built on top of open research.

    Home Depot sells all the ingredients you need to make a substantial bomb, should we ban fertilizer and pressure cookers for non-industrial use?

      • Steeve@lemmy.ca

        How about bleach and ammonia? I can buy those ingredients at any convenience store near me and throw together some mustard gas, right? The point is, if we banned everything that has any potential to do harm, we wouldn’t even be left with rocks and sticks. Regulate, sure, but taking technology out of the hands of regular people and handing it to a select few corporations is a recipe for inequality and disaster.

        • tryptaminev 🇵🇸 🇺🇦 🇪🇺@feddit.de

          You wouldn’t make mustard gas. You’d make chlorine gas, which is also very nasty, but still quite a mile from mustard gas. The extent to which risky chemicals have been banned, reduced in concentration, or made subject to extensive monitoring of sales and use is quite substantial.

          But here is a huge difference with AI tools: anyone could create these tools themselves. It is information. Unlike information on how to build a nuke, it is easier to use this information for negative purposes, but the extent of the harm is much smaller. A deepfake itself cannot kill people; a self-made pipe bomb can. Meanwhile, the cat is already out of the bag for ML. The tools are there, many people have copies of the code, and it can be replicated countless times, whereas the clandestine bomb-builder needs to procure another batch of chemicals and hardware.

  • theluddite@lemmy.ml

    I had Max Tegmark as a professor when I was an undergrad. I loved him. He is a great physicist and educator, so it pains me greatly to say that he has gone off the deep end with his effective altruism stuff. His work through the Future of Life Institute should not be taken seriously. For anyone interested, I responded to Tegmark’s concerns about AI and Effective Altruism in general on The Luddite when they first got a lot of media attention earlier this year.

    I argue that EA is an unserious and self-serving philosophy, and the concern about AI is best understood as a bad faith and self-aggrandizing justification for capitalist control of technology. You can see that here. Other commenters are noting his opposition to open sourcing “dangerous technologies.” This is the inevitable conclusion of a philosophy that, as discussed in the linked post, reifies existing power structures to decide how to do the most good within them. EA necessarily excludes radical change by focusing on measurable outcomes. It’s a fundamentally conservative and patronizing philosophy, so it’s no surprise when its conclusions end up agreeing with the people in charge.

    • profdc9@lemmy.world

      I think Max Tegmark, like other public intellectuals (Michio Kaku, for example), has to say something controversial periodically to stay in the news and maintain his reputation.

      • theluddite@lemmy.ml

        Maybe. It had been almost 15 years since I last heard of him until the EA stuff started going mainstream, but he was a very well respected physicist, especially for how young he was back then. After having taken several very small classes with him, it would surprise me if he was a clout chaser. People are complicated though, so who knows.

  • 👁️👄👁️@lemm.ee

    Anyone against FOSS adoption of LLMs is straight up a capitalist fascist

    They love the AI ethics issue; it’s so vague and morally superior that they can use it to shut down anything they like.

    The letter warned of an “out-of-control race” to develop minds that no one could “understand, predict, or reliably control”

    And this is why people who don’t understand that LLMs are essentially big hallucinating math machines should have no voice in things they fundamentally do not understand.
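
    To illustrate what “big hallucinating math machine” means, here’s a toy Python sketch; the probabilities are invented for the example, whereas a real LLM computes a distribution like this over its entire vocabulary at every step.

    ```python
    # Toy illustration: an LLM generates text by repeatedly sampling the next
    # token from a probability distribution. The numbers below are invented.
    import random

    # Pretend distribution over next tokens, given the prompt "The sky is"
    next_token_probs = {"blue": 0.70, "falling": 0.15, "green": 0.10, "42": 0.05}

    tokens = list(next_token_probs)
    weights = list(next_token_probs.values())
    choice = random.choices(tokens, weights=weights, k=1)[0]
    print("The sky is", choice)  # fluent output, with no understanding behind it
    ```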

    • 5BC2E7@lemmy.world

      You might be able to assert they are full of shit after hearing the arguments. But accusing them of being fascist for not agreeing with you is extremely intolerant and authoritarian, a.k.a. fascist.

        • 5BC2E7@lemmy.world

          The thing is that you don’t want to become the thing you are fighting. You can be right in every case, as long as it’s on a case-by-case basis. It would be different if you explained why the arguments are bad-faith arguments or why the people making them are fascists; that would also be perfectly fine.

          • 👁️👄👁️@lemm.ee

            There are things that are just true like that; racism and slavery don’t have a case-by-case basis where they’re bad. But that’s getting to be an extreme comparison here. I’m just saying absolute statements can be true like that. When is FOSS not about freedom?

            • 5BC2E7@lemmy.world

              This seems a bit more complicated than the examples you share, where things are more evident. Even if they are wrong, they can be wrong for reasons other than being fascists.

              Edit: to show some nuance, wouldn’t people be against open software that is purposefully crafted for a nefarious purpose, be it ransomware or software for DIY automated blinding laser weapons? I know the UN would probably not like the second example, regardless of it being FOSS.

              • jcg@halubilo.social

                regardless of it being FOSS

                Exactly, it’s not about it being FOSS. It’s about the nature of the software itself. Being against that software doesn’t make you anti-FOSS. Additionally, open sourcing your malware is actually helpful for people trying to combat it.

      • Aceticon@lemmy.world

        I think it’s pretty valid to point out that somebody who is against free software in the 21st century has a strongly authoritarian posture.

        Granted, the use of “fascist” might be incorrect (mainly because fascism is a quite specific authoritarian ideology, and it’s hard, for example, to find indications here that the guy supports its other elements, such as hypernationalism), and the word suffers from overuse in a sloganized way (i.e. it’s commonly parroted mindlessly), but in this case it’s not a bad shortcut for conveying the idea.

        • 5BC2E7@lemmy.world

          I think using that term is fully regressing to tribalism. I believe that in some cases we can reach the goal by building consensus with the opposition. I’m sure we can see the world with more than 1 bit of resolution.

          • Aceticon@lemmy.world

            Yeah, I do agree with that point of view.

            Consider, however, that had you made that point in your original response the way you did just now in this one, it would’ve come across as a perfectly rational and acceptable take rather than as just angry.

            Whilst I understand being angry at people throwing “fascist” around like a slogan, I find an angry response to be counter-productive, not so much for the tribalist parrots who normally throw slogans around like that (those are beyond saving, IMHO) but for the audience.

            • 5BC2E7@lemmy.world

              I could be wrong, but since I really didn’t intend to appear angry, and I was not angry, I don’t read that in the comment. Perhaps it’s the accusation of fascism that riles people up, which was precisely my point. And if we are talking about my emotions, in truth I feel mostly sad, because the people making these comments have strong opinions on the issue and care enough to try to make things better.

              • Aceticon@lemmy.world

                Hah, the bit that really makes me angry is exactly that the people running around parroting whatever the leaders of “their” “tribe” say (and who will do things like use “fascism” to describe things that are merely slightly authoritarian) do care enough to try to make things better.

                They often have the best of intentions whilst damaging their cause, and are even led by the nose like useful idiots (that’s the thing with tribalism: once people run around identifying with “the tribe” and following the “leaders” of “the tribe”, they’re extremely easy to manipulate).

                It’s very frustrating to see just how many people run around thinking themselves lefties whilst, by following herd/pack instincts and parroting others rather than using their brains to think, acting in ways that don’t really advance the cause of “the greatest good for the greatest number”.

  • AutoTL;DR@lemmings.world

    This is the best summary I could come up with:


    The scientist behind a landmark letter calling for a pause in developing powerful artificial intelligence systems has said tech executives did not halt their work because they are locked in a “race to the bottom”.

    Max Tegmark, a co-founder of the Future of Life Institute, organised an open letter in March calling for a six-month pause in developing giant AI systems.

    Despite support from more than 30,000 signatories, including Elon Musk and the Apple co-founder Steve Wozniak, the document failed to secure a hiatus in developing the most ambitious systems.

    “I felt there was a lot of pent-up anxiety around going full steam ahead with AI, that people around the world were afraid of expressing for fear of coming across as scare-mongering luddites.

    “So you’re getting people like [letter signatory] Yuval Noah Harari saying it, you’ve started to get politicians asking tough questions,” said Tegmark, whose thinktank researches existential threats and potential benefits from cutting-edge technology.

    Mark Zuckerberg’s Meta recently released an open-source large language model, called Llama 2, and was warned by one UK expert that such a move was akin to “giving people a template to build a nuclear bomb”.


    The original article contains 695 words, the summary contains 192 words. Saved 72%. I’m a bot and I’m open source!

  • Blapoo@lemmy.ml

    Everyone’s focusing on LLMs. Idiots. LangChain is where the first “AI” systems will come from. And anyone can do that shit.

      • Blapoo@lemmy.ml

        By themselves, limited capabilities. I’ll accept the downvotes :) My market

        • imPastaSyndrome@lemm.ee

          ???

          You’re not even writing whole thoughts. You’re not being rejected because you’re too smart, or unique, or above other people; you’re just contributing poorly.

          • Blapoo@lemmy.ml

            An LLM is a tool, not the entire solution. ChatGPT isn’t an LLM in isolation; it’s an LLM used in a series of loops (a chain). LangChain and similar strategies expose the value of a series of decisions made in sequence. A series of decisions could also be called a thought. A handful of people are already doing this.

            In other words: an LLM in isolation, meh. But an LLM used in sequence: gold.
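
            A minimal sketch of that chaining idea in Python. `call_llm` is a hypothetical stand-in for any LLM API (it is not LangChain’s actual interface); the point is only the shape of the loop, where each call consumes the previous call’s output.

            ```python
            # Sketch of a "chain": each step feeds the previous step's output
            # back into the model. call_llm is a hypothetical stub, not a real API.
            def call_llm(prompt: str) -> str:
                # A real system would query an LLM endpoint here.
                return f"<model answer to: {prompt!r}>"

            def run_chain(task: str) -> str:
                plan = call_llm(f"Break this task into steps: {task}")
                draft = call_llm(f"Carry out this plan: {plan}")
                critique = call_llm(f"List problems with this result: {draft}")
                return call_llm(f"Revise {draft!r} to address {critique!r}")

            print(run_chain("summarize today's AI policy news"))
            ```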

    • bioemerl@kbin.social

      Oh dear, this is a bad take. LangChain is some super basic software that 90% of developers could write in a couple of weekends. It’s not even slightly advanced, and it doesn’t really add anything to the capabilities of AI.