• very_well_lost@lemmy.world

    Generative transformers are specifically trained to model the statistical structure of natural language and to use that model to predict the next token in a sequence, i.e. the statistically most likely continuation. Their output is, by definition, average.
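
    The next-token step can be sketched in a few lines. This is a toy illustration, not a real transformer: the logits and vocabulary are made up, and greedy decoding (always taking the argmax) is only the simplest of several sampling strategies real models use.

    ```python
    import math

    def softmax(logits):
        # Convert raw scores into a probability distribution over tokens
        m = max(logits.values())
        exps = {tok: math.exp(v - m) for tok, v in logits.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    # Hypothetical logits a model might assign to candidate next tokens
    # after a prompt like "The cat sat on the"
    logits = {"mat": 4.0, "sofa": 2.5, "roof": 1.0, "piano": -1.0}
    probs = softmax(logits)

    # Greedy decoding: emit the single most likely token
    next_token = max(probs, key=probs.get)
    print(next_token)  # "mat"
    ```

    Real decoders often sample from `probs` (with temperature, top-k, etc.) instead of taking the argmax, but either way the output is drawn from the learned average of the training distribution.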

    That’s the entire point of LLMs: to produce output that’s indistinguishable from the average.