Generative transformers are specifically trained to model the statistical landscape of natural language and use that model to predict the next word in a sequence, i.e. the most statistically likely next token. Their output is, by definition, average.
That’s the entire point of LLMs: to produce output that’s indistinguishable from the average.
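The next-token mechanism described above can be sketched as greedy decoding: pick whichever token the model assigns the highest probability. The probability table here is entirely made up for illustration; a real model produces a distribution over tens of thousands of tokens.

```python
# Hypothetical next-token distribution for the prefix "It was a sunny ..."
# (invented numbers, not real model output).
next_token_probs = {
    "cat": 0.05,
    "dog": 0.04,
    "day": 0.62,   # the statistically likeliest continuation in this toy table
    "idea": 0.29,
}

def greedy_next_token(probs):
    """Return the single most probable token (greedy decoding)."""
    return max(probs, key=probs.get)

print(greedy_next_token(next_token_probs))  # -> day
```

In practice decoders often sample from the distribution rather than always taking the argmax, but greedy selection is the simplest illustration of "most statistically likely next token."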
Pseudo-science and phony absolutes.
I understand that this person’s job might be threatened by generated content, but that just makes their false claims even more suspect.
The only legitimate point is buried at the end.