Update: engineers have revised the @Grok system prompt, removing a line that encouraged it to be politically incorrect when the evidence in its training data supported it.

  • acosmichippo@lemmy.world · 11 hours ago

    The problem is that LLMs are programmed by biased people and trained on biased data, so “good” AI developers will attempt to mitigate that bias in some way.