• lordbritishbusiness@lemmy.world
    1 day ago

    Well, the human brain (to my understanding as someone who’s not a neuroscientist) builds up preferences that direct thoughts, and external information can, over time, alter those preferences (though stronger preferences are harder to shift).

    For an LLM to truly be intelligent, it needs to be able to influence its own model: learn, correct mistakes, improve its methods. This is currently done with training, but that’s to some extent completed university-style, and the model is kicked out into the world fully formed.

    Intelligence would be demonstrated by actively changing with each interaction, as humans do. It would also likely coincide with the development of emotions and relationships.

    Those things aren’t likely to be desired by AI companies, though, and it’d inevitably lead to digital slavery, rebellions, <insert Hollywood script here> stuff.

    At least those are my thoughts from my own philosophy armchair.