You communicate with co-workers using natural language, but that doesn’t make co-workers useless. You just have to account for the strengths and weaknesses of that mechanism in your workflow.
Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit and then some time on kbin.social.
Sure, in those situations. I find that it doesn’t take much effort to write a prompt that gets me something useful in most cases, though. A lot of people don’t put in any effort, get a bad result, and conclude “this tech is useless.”
It also isn’t telepathic, so the only thing it has to go on when determining “what you want” is what you tell it you want.
I often see people gripe about how ChatGPT’s essay writing style is mediocre and always sounds the same, for example. But that’s what you get when you just tell ChatGPT “write me an essay about X.” It doesn’t know what kind of essay you want unless you tell it. You have to give it context and direction to get good results.
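To make the “give it context and direction” point concrete, here’s a toy sketch of the difference between a bare request and one that carries context. The function and its fields are my own invention for illustration, not any particular chatbot’s API:

```python
# Toy illustration: a bare prompt versus one with context and direction.
# The field names here are invented for the example.

def build_prompt(topic, audience=None, tone=None, constraints=None):
    """Assemble an essay prompt that tells the model what kind of
    essay you actually want, instead of leaving it to guess."""
    parts = [f"Write an essay about {topic}."]
    if audience:
        parts.append(f"The audience is {audience}.")
    if tone:
        parts.append(f"Use a {tone} tone.")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints) + ".")
    return " ".join(parts)

# The low-effort version that produces the mediocre, samey essays:
vague = build_prompt("the history of rail travel")

# The version that gives the model something to work with:
detailed = build_prompt(
    "the history of rail travel",
    audience="readers who already know 19th-century industrial history",
    tone="wry, conversational",
    constraints=["open with an anecdote", "keep it under 800 words"],
)
```

The vague version is exactly the kind of prompt that gets the default essay voice; the detailed one pins down audience, tone, and structure before the model ever starts generating.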
“Just give me this and I’ll do the rest” is actually a pretty great workflow, in my experience. AI isn’t at the point where you can just set it loose to work on its own but as a collaborator it saves me a huge amount of hassle and time.
It’s social media. Social media is all about bubbles, groupthink, and driving engagement. It happens on Facebook, it happens on Reddit.
It happens here, too. There are certain views that are accepted as what every right-thinking person holds, and certain other views that are dumped on with great glee about how wrong they are. But which specific views they are varies from bubble to bubble.
To flee North Korea requires playing a long game: marry someone you hate.
The “I Want To Live” project is going to need a bunch of Korean translators.
You get out ahead of the locomotive knowing that most of the directions you go aren’t going to pan out. The point is that the guy who happens to pick correctly will win big by getting out there first. Nothing wrong with making the attempt and getting it wrong, as long as you factored that risk in (as McDonald’s seems to have done, given that this hasn’t harmed them).
Training an AI does not involve copying anything, so why would you think that fair use is even a factor here? It’s outside of copyright altogether. You can’t copyright concepts.
Downloading pirated books to your computer does involve copyright violation, sure, but it’s a violation by the uploader. And look at what community we’re in; are we going to get all high and mighty about that?
Training an AI on something doesn’t involve copying it.
And under copyleft licensing, they’re allowed to do that. Both to GitHub repositories and Wikipedia.
Why would that matter? You can fork such projects too.
I don’t think that making LLMs cheaper and easier to run is going to “pop that bubble”, if bubble it even is. If anything this will boost AI applications tremendously.
I don’t think you’ve thought through the logistics required for the sort of war where you’d just go around and shoot everyone who lives in hundreds of solar systems. Even assuming they do nothing at all to defend themselves, how do you even find them all?
Of course it is! We are simultaneously facing a labor shortage and mass unemployment. The important thing is to keep being angry and frightened; the specific subject you’re angry about at any given time is flexible.
My advice against getting too deeply invested applies to those companies and communities as well.
I once got permabanned from a politics subreddit (I think it was /r/canadapolitics) that had a “downvoting is not permitted” rule, because there was a guy getting downvotes and I offered him an explanation for why I thought he was getting them. That counted as evidence that I had downvoted him, I guess.
My response: I sent one message to the mods that was essentially “really?” And then when there was no response I unsubbed from that subreddit and moved on. I see no point in participating in subreddits with ridiculous rules and ridiculous enforcement.
Granted, unsubbing from politics subreddits is generally a good idea even when not banned. But eh.
The only other subreddit I’m banned in is /r/artisthate, which I never visited in the first place. Apparently they scan other subreddits for signs of users who don’t hate artificial intelligence enough and preemptively ban them. That was kind of hilarious.
Anyway, I guess my advice is don’t get too deeply “invested” in a community that can be so easily and arbitrarily taken away from you in the first place. And also manage your passwords better.
It’s not specifically oxygen that’s linked to life, it’s chemical disequilibrium. Oxygen is highly reactive, there are lots of minerals that will bind it up and there aren’t any natural geological processes that unbind it again in significant quantities. If you put an oxygen atmosphere on a lifeless planet then pretty soon all of the oxygen will be bound up in other compounds: carbon dioxide, silicon oxides, ferric oxides, and so forth. There has to be some process that’s constantly producing oxygen in vast quantities to keep Earth’s atmosphere in the state that it’s in.
There are other chemicals that could also be taken as signs of life, depending on the conditions on a planet. Methane, for example, also has a short lifespan under Earthlike conditions. You may have seen headlines a little while back about the detection of “life signs” on Venus; in that case it was phosphine gas (PH3) that they thought they’d spotted (it turned out to be a likely false alarm). These sorts of gases can be detected in planetary atmospheres at interstellar distances, especially in the case of something like Earth where the signature is quite flagrant.
Even if these are sometimes false alarms, in a “Dark Forest” scenario it’d still be worth sending a probe to go and kill whatever planets exhibit signs like that. It’s a lot cheaper and quieter than trying to fight an actual civilization. That’s why I can’t see how we would have avoided being wiped out aeons ago in this scenario.
But that’s not actually true. We’ve been “broadcasting” the fact that there’s life on Earth in the form of the spectrographic signature of an oxygen-rich atmosphere, which is a clear sign that photosynthesis is going on. There’s no geological process that could maintain that much oxygen in the atmosphere. The Great Oxidation Event is when that started.
We have the technology to detect this kind of thing already, at our current level. Any civilization that could reach out and attack another solar system would be able to very easily see it.
Remember when piracy communities thought that the media companies were wrong to sue switch manufacturers because of that?
It baffles me that there’s such an anti-AI sentiment going around that it would cause even folks here to go “you know, maybe those litigious copyright cartels had the right idea after all.”
We should be cheering that we’ve got Meta on the side of fair use for once.
Look up “overfitting.” It’s a flaw in generative AI training that modern AI trainers have done a great deal to resolve, and even in cases of overfitting it’s not all of the training data that gets “memorized”: only the material that was hammered into the AI thousands of times in error.
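Overfitting is easy to demonstrate on a toy scale. This sketch (with made-up numbers, using polynomial fitting as a stand-in for model training) shows a model with enough capacity to pass through every training point, i.e. to “memorize” it, doing worse off those points than a simpler model that only learns the trend:

```python
import numpy as np

# Toy data: a linear trend (y = 2x) plus hand-picked noise. Made-up numbers.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
noise = np.array([0.3, -0.2, 0.4, -0.3, 0.1, -0.4])
y = 2.0 * x + noise

# Degree-1 fit learns the trend; degree-5 has enough free parameters to
# interpolate all six training points exactly, i.e. to memorize them.
trend_fit = np.polyfit(x, y, 1)
memorizing_fit = np.polyfit(x, y, 5)

# On its own training points, the memorizing fit is essentially perfect...
train_err = max(abs(np.polyval(memorizing_fit, xi) - yi)
                for xi, yi in zip(x, y))

# ...but on a held-out point between the samples it does worse than the
# simple fit, because it learned the noise along with the trend.
x_new, y_true = 2.5, 2.0 * 2.5
err_simple = abs(np.polyval(trend_fit, x_new) - y_true)
err_memorizing = abs(np.polyval(memorizing_fit, x_new) - y_true)
```

The same dynamic is what’s behind “memorized” training data in generative models: material repeated far too often in the training set gets reproduced verbatim, while everything seen a normal number of times only contributes to the learned trend.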