• 0 Posts
  • 93 Comments
Joined 1 year ago
Cake day: June 15th, 2023


  • Ideas are great - but execution is king, because execution is where most of your creativity actually makes a difference in how the idea is realized. If you have a good idea and a good execution, it’s very hard for someone to take that away from you. If you have a good idea but execute it poorly, someone taking that idea and executing it better will leave you in the dust - and without the better execution, they couldn’t.

    Better execution isn’t always fair though - we often start out in life unable to compete because of a lack of experience, financing, and publicity. But it’s basically how the entire entertainment industry works. Everyone just shuffles ideas around and tries to execute them better (or differently enough) than the previous time the idea made the rounds.

    After finding a good idea, get people hooked on your execution, and they won’t be able to get that anywhere else - unless someone comes along and does it even better. With practice, that someone can also be you.



  • Yes, it would be much better at mitigating it, and it would beat all humans at truth accuracy in general. Truths which can be easily and individually proven, and/or which remain unchanged forever, can basically be right 100% of the time. But not all truths are that straightforward.

    What I mentioned can’t really be unlinked from the issue if you want to solve it completely. Have you ever found out later on that something you told someone else as fact turned out not to be so? Essentially, you ‘hallucinated’ a truth that never existed, but you were so confident it was correct that you shared and spread it. It’s how we get myths, popular beliefs, and folklore.

    For those other truths, we simply take the truth to be whatever has reached a likelihood we consider certain enough. But the ideas and concepts in our minds constantly float around on that scale. And since we cannot really avoid talking to other people (or intelligent agents) to ascertain certain truths, misinterpretations and lies can sneak in and cause us to treat as truth that which is not. Avoiding that would mean having to be pretty much everywhere at once to personally interpret the information straight from the source. But then things like how fast you can process all of that come into play. Without making guesses about what’s going to happen, you basically can’t function in reality.


  • Yes, a theoretical future AI that is able to self-correct would eventually become more powerful than humans, especially if you could give it ways to run orders of magnitude more self-correcting mechanisms at the same time. But it would still be making ever so small assumptions wherever there is a gap in the information it has.

    It could be humble enough to admit it doesn’t know, but it can still be mistaken and think it has the right answer when it doesn’t. It would feel nigh omniscient, but it would never truly be.

    A round trip around the globe on glass fibre takes hundreds of milliseconds, so even if it has the truth on some matter, there’s no guarantee that truth didn’t change in the milliseconds it took to become aware of it. True omniscience simply cannot exist, since information (and in turn the truth encoded by that information) propagates no faster than the speed of light.
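    As a rough sanity check on that number, here’s a back-of-the-envelope calculation - the circumference and fibre speed are assumed ballpark figures, not measurements:

```python
# Ballpark check of the "hundreds of milliseconds" figure (assumed values).
circumference_km = 40_075      # Earth's circumference, roughly
fibre_speed_km_s = 200_000     # light in glass fibre, about 2/3 of c

one_circuit_ms = circumference_km / fibre_speed_km_s * 1000
print(f"one ideal circuit of the globe: ~{one_circuit_ms:.0f} ms")
# ~200 ms in the ideal case; real routing, switching, and queueing push
# a round trip well into the hundreds of milliseconds.
```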

    “a big mistake you are making here is stating that it must be fed information that it knows to be true, this is not inherently true. You can train a model on all of the wrong things to do, as long it has the capability to understand this, it shouldn’t be a problem.”

    A dataset that encodes all wrong things would be infinite in size and constantly changing. It can theoretically exist, but realistically it never will. And if it is incomplete, the model has to make assumptions at some point based on the incomplete data it has, which opens it up to being wrong - which we would call a hallucination.


  • I’m not sure where you think I’m giving it too much credit, because as far as I read it we already totally agree lol. You’re right, methods exist to diminish the effect of hallucinations - that’s what the scientific method is. Current AI has no physical body and can’t run experiments to verify objective reality. It can’t fact-check itself other than by being told by the humans training it what is correct (and humans are fallible), and even then, if it has gaps in what it knows, it will fill them in with something probable - which is likely going to be bullshit.

    My point was just that to truly fix it, you would basically have to create an omniscient being, which cannot exist in our physical world. It will always have to make some assumptions - just like we do.


  • Hallucinations in AI are fairly well understood as far as I’m aware, and they’re explained at a high level on the Wikipedia page for the topic. I’m honestly not making any objective assessment of the technology itself; I’m making a deduction based on the laws of nature and biological facts about real-life neural networks. (I do say AI is driven by the data it’s given, but that’s something even a layman might know.)

    How to mitigate hallucinations is definitely something the experts are actively discussing, and they’ve had limited success in doing so (and I certainly don’t have an answer there either), but a true fix should be impossible.

    I can’t exactly say why I’m passionate about it. In part I want people to be informed about what AI is and is not, because knowledge about the technology allows us to make more informed decisions about the place AI takes in our society. But I’m also passionate about human psychology and creativity, and about what we can learn about ourselves from the quirks we see in these technologies.



  • It will never be solved. Even the greatest hypothetical superintelligence is limited by what it can observe and process. Omniscience doesn’t exist in the physical world. Humans hallucinate too - all the time. It’s just that our approximations are usually correct, and then we don’t call them hallucinations anymore. Realistically, the signals coming from our feet take longer to arrive and be processed than those from our eyes, so our brain has to predict information to create a coherent experience. It’s also why we don’t notice our blinks, or why we don’t see the blind spot our eyes have.

    AI, being a more primitive version of our brains, will hallucinate far more, especially because it cannot verify anything in the real world and is limited by the data it has been given, which it has to treat as ultimate truth. The mistake was trying to turn AI into a source of truth.

    Hallucinations shouldn’t be treated like a bug. They are a feature - just not one the big tech companies wanted.

    When humans hallucinate on purpose (and not due to illness), we get imagination and dreams; fuel for fiction, but not for reality.


  • ClamDrinker@lemmy.world to memes@lemmy.world · A bit late (edited, 2 months ago)

    The thing is, I’ve seen statements like this before. Except when I heard it, it was being used to justify ignoring women’s experiences and feelings in regard to things like sexual harassment and feeling unsafe, since that’s “just a feeling” as well. It wasn’t okay then, and it’s not okay the other way around. The truth is that feelings do matter, on both sides. Everyone should feel safe and welcome in their surroundings, and how well we’re doing on that is reflected in how those people feel.

    Men feeling respected and women feeling safe are not mutually exclusive outcomes. The sad part is that anyone reading this here is far more likely to be an ally than a foe, yet the people who need to hear the intended message the most will most likely never hear it, nor be bothered by it. There’s a stick being wedged here that is only meant to divide, and oh my god is it working.

    The original post about bears has completely lost all meaning, and any semblance of discussion is lost because the metaphor is inflammatory by design - sometimes that’s a good thing, to highlight a point through absurdity. But metaphors are fragile: if one is very likely to be misunderstood or to offend, the message gets lost in emotion. Personally I think this metaphor is just highly ineffective at getting the message across, as it has driven people who would stand by the original message to the other side due to the many uncharitable interpretations it presents. And among the crowd of reasonable people are those who confirm those interpretations and muddy the water, making women seem like misandrists and men like sexual assault deniers. This meme is simply terrible, and perhaps we can move on to a better version of it that actually gets the message across well, instead of getting people at each other’s throats.



  • It’s funny how something like this gets posted every few days and people keep falling for it like it’s somehow going to end AI. The people that make these models are acutely aware of how to avoid model collapse.

    It’s totally fine for AI models to train on AI-generated content that is of high enough quality. Part of the research that goes into training models is building datasets with a text description matching the content, and filtering out content that is not organic enough (or even specifically including it as a ‘bad’ example for the AI to avoid). AI can produce material indistinguishable from human work, and it produces material that wasn’t originally in the training data. There’s no reason that can’t be good training data itself.
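    To make that concrete, here’s a minimal sketch of that kind of filtering step - the field names, threshold, and scoring are made up for illustration, not any lab’s actual pipeline:

```python
# A minimal sketch (not any lab's actual pipeline) of the filtering idea:
# keep generated or scraped samples only when a quality score clears a
# threshold, and keep the rejects as explicit "bad" examples to steer away from.
from dataclasses import dataclass

@dataclass
class Sample:
    text: str
    caption: str          # text description matching the content
    quality_score: float  # 0.0-1.0, e.g. from a classifier (hypothetical)

def split_training_data(samples, threshold=0.8):
    """Split samples into (good, bad) pools; both can still be used."""
    good = [s for s in samples if s.quality_score >= threshold]
    bad = [s for s in samples if s.quality_score < threshold]
    return good, bad

good, bad = split_training_data([
    Sample("well-formed generated text", "a clear description", 0.93),
    Sample("garbled bot spam", "nonsense", 0.12),
])
# 'good' goes in as regular training data, 'bad' as examples to avoid.
```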


  • You can train AI models on AI-generated content though. Model collapse only occurs if you train on bad AI-generated content. Bots and people talking gibberish are just as bad for training an AI model, but there are ways to filter that out of the training data, such as language analysis. They will also most likely filter out any poorly upvoted comments, or those edited a long time after their original post date.
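    A hypothetical sketch of such a metadata filter - the field names and thresholds here are invented for illustration:

```python
# Hypothetical metadata filter for scraped comments: drop anything with few
# upvotes, or anything edited long after it was posted. Field names and
# thresholds are invented for illustration.
from datetime import datetime, timedelta

def keep_for_training(comment: dict,
                      min_upvotes: int = 5,
                      max_edit_delay: timedelta = timedelta(days=7)) -> bool:
    """Return True if a scraped comment passes the basic metadata checks."""
    if comment["upvotes"] < min_upvotes:
        return False
    edited_at = comment.get("edited_at")
    if edited_at and edited_at - comment["posted_at"] > max_edit_delay:
        return False  # edited long after posting; content may have been swapped
    return True

example = {
    "upvotes": 42,
    "posted_at": datetime(2024, 5, 1),
    "edited_at": datetime(2024, 5, 2),
}
print(keep_for_training(example))  # True
```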

    And if you start posting now, any sufficiently good AI-generated material that other humans like and upvote will not be bad for the model.