Generative transformers are specifically trained to model the statistical landscape of natural language and use that model to predict the next word in a sequence, i.e. the most statistically likely next token. Their output is, by definition, average.
That’s the entire point of LLMs: to produce output that’s indistinguishable from the average.
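To make that mechanism concrete, here is a minimal sketch of what "predicting the most statistically likely next token" means in practice. It assumes the Hugging Face transformers library and the public gpt2 checkpoint (both illustrative choices, not anything specific to the text above): the model emits a probability distribution over its entire vocabulary, and greedy decoding simply picks the peak of that distribution.

```python
# A minimal sketch of greedy next-token prediction, assuming the
# Hugging Face `transformers` library and the public `gpt2` checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits has shape (batch, seq_len, vocab_size): one score per
    # vocabulary token at every position in the sequence.
    logits = model(**inputs).logits

# The model's "prediction" is a distribution over the whole vocabulary;
# greedy decoding takes the single most statistically likely token.
next_token_id = torch.argmax(logits[0, -1]).item()
print(tokenizer.decode(next_token_id))  # the single most likely continuation
```

Sampling strategies (temperature, top-k, nucleus) perturb this choice, but they only reshuffle probability mass within the same learned distribution; the pull toward the statistical center remains.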