I assume they all crib from the same training sets, but surely one of the billion dollar companies behind them can make their own?
Well, technically, the "AI"s… can generate their own additional training data…
But if you then try to train another AI on that AI-generated data… well, the new model drifts toward model collapse: basically, it gets dumber and less coherent, and develops weirder, stronger ‘quirks’ with each generation.
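You can see the gist of it even in a toy sketch (this is an illustration, not a real LLM): a simple Gaussian "model" fitted to data, then refitted each generation on samples drawn from its own previous fit. The distribution narrows and drifts, which is the toy analogue of the collapse people describe.

```python
# Toy sketch of model collapse: a Gaussian "model" is fitted to data, new
# "training data" is sampled from the fit, and the next generation is fitted
# only on that synthetic data. (Not an LLM, just an illustration.)
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=50)  # generation 0: "real" data

for gen in range(1, 201):
    mu, sigma = data.mean(), data.std()       # "train" the model on current data
    data = rng.normal(mu, sigma, size=50)      # next generation sees only its output
    if gen % 50 == 0:
        print(f"gen {gen:3d}: mean={mu:+.3f}, std={sigma:.3f}")

# Typical run: the std shrinks toward zero and the mean wanders off, because
# each generation keeps losing the tails of the original distribution.
```

Real LLM training pipelines are obviously far more complicated, but the underlying feedback loop (model learns from its own outputs, rare stuff disappears, quirks get amplified) is the same shape.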
But yeah, as I see it, there's basically zero chance an LLM advances beyond ‘very fancy autocomplete’ toward AGI or any actual metacognition, i.e. thinking about its own thinking and then modifying it.
Sorry, but you’re not gonna get a superintelligence out of something that can’t actually assess and correct itself.