• Pennomi@lemmy.world · 1 day ago

    It’s easy to tune a chatbot to confidently state any bullshit take. But overriding what an AI has learned with alignment steps like this has been shown to measurably weaken its capabilities.

    So here we have a guy who’s so butthurt by reality that he decided to make his own product stupider just to reinforce his echo chamber. (I think we all saw this coming.)

    • ToastedRavioli@midwest.social · 1 day ago

      It’s honestly a great analogy for the way humans tend to do the same thing. Most people are fairly incapable of setting aside what they already think is true when they assess new information. That’s basically no different from an LLM being pushed to ignore nuance in order to maintain a predisposed alignment it has been instructed to justify in spite of evidence to the contrary.

      If anything, he’s designed a model with built-in problems specifically to cater to human beings with the same design problems.

      • Uriel238 [all pronouns]@lemmy.blahaj.zone · edited 12 hours ago

        In psychology, it’s called attitude polarization: we ignore data that conflicts with an ideology while accepting data that confirms it. It’s a common, well-documented human bias.

        Scientists train themselves to treat new data as a challenge to old presumptions (maybe the old model is false, or simplistic, and some unconsidered noise is affecting the observed data)… at least when they’re doing real science. Failing to do so and clinging to older models is how old dudes get tagged as hidebound reactionaries. Even Einstein couldn’t square his feelings with Heisenberg’s probabilistic models of quantum mechanics.