• theunknownmuncher@lemmy.world

    I know everyone wants to be like “ha ha told you so!” and hate on AI in here, but this headline is just clickbait.

    Current AI models have been trained to always give a response to the prompt regardless of confidence, which causes the vast majority of hallucinations. By incorporating confidence into training and rewarding the model for answering “I don’t know” when it’s unsure, similar to how refusals are trained, you can mitigate hallucinations without otherwise degrading the model.
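
    Here’s a rough sketch of the incentive problem (my own illustration, not from the article): if grading is binary right/wrong, guessing always has a higher expected score than abstaining, so the model learns to guess even at low confidence. Penalizing wrong answers flips that for low-confidence cases. The confidence values and penalty here are assumptions for illustration.

    ```python
    # Illustrative sketch: why binary right/wrong grading teaches a model
    # to guess instead of abstaining. Assume p is the model's internal
    # probability that its guess would be correct (hypothetical values).

    def expected_score(p: float, wrong_penalty: float) -> float:
        """Expected score for guessing: +1 if right, -penalty if wrong."""
        return p * 1.0 + (1.0 - p) * -wrong_penalty

    ABSTAIN_SCORE = 0.0  # answering "I don't know" scores zero

    for p in (0.1, 0.3, 0.5, 0.9):
        # Binary grading (penalty = 0): guessing beats abstaining at ANY
        # confidence, so even a 10%-confident guess is "optimal".
        binary = expected_score(p, wrong_penalty=0.0)
        # Penalized grading (penalty = 1): guessing only pays off above
        # 50% confidence, so low-confidence cases should abstain.
        penalized = expected_score(p, wrong_penalty=1.0)
        print(f"p={p:.1f}  binary={binary:+.2f}  "
              f"penalized={penalized:+.2f}  abstain={ABSTAIN_SCORE:+.2f}")
    ```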

    If you read the article, you’ll find the “destruction of ChatGPT” claim is actually nothing more than the “expert” assuming that users will just stop using AI if it starts occasionally telling them “I don’t know”. It’s not any kind of technical limitation preventing hallucinations from being solved; in fact, the “expert” agrees that hallucinations can be solved.