Yeah, you’re right. I guess I disagree on some technicalities. I think they are AI, and they even have a goal/motivation: to mimic plausible, legible text. That’s also why they hallucinate; accuracy isn’t what the objective rewards, at least not directly. The term is certainly ill-defined, and the word “intelligence” in it is a ruse. Sadly, it makes people more likely to anthropomorphize the thing, which the AI industry can monetize… I’m still fairly sure there’s reinforcement learning in there, and a motivation/loss function. It’s just not the one people think it is… Maybe we need better phrasing?
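To make the “mimic text” point concrete, here’s a minimal sketch of the standard next-token pretraining objective, assuming a PyTorch-style setup. The tensors here are random stand-ins for a real model’s output and a real corpus; the shapes and names are illustrative, not any particular library’s training loop:

```python
import torch
import torch.nn.functional as F

# Toy next-token prediction loss: the "goal" is to assign high
# probability to whatever token the training text actually contains,
# not to be factually accurate.
vocab_size, seq_len, batch = 50_000, 128, 4

# Stand-ins for a real model's predictions and the training corpus:
logits = torch.randn(batch, seq_len, vocab_size)          # model output
targets = torch.randint(0, vocab_size, (batch, seq_len))  # next tokens in corpus

# Cross-entropy rewards matching the corpus, true or false alike.
loss = F.cross_entropy(logits.view(-1, vocab_size), targets.view(-1))
print(loss.item())
```

Fine-tuning with RLHF then layers a human-preference reward on top, but that reward is about which answers people rate highly, which still isn’t the same thing as truth.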
Btw, there’s a very long interview with Richard Sutton on YouTube that goes into detail about this very thing: the motivation and goals of LLMs, and how they differ from traditional machine learning. I enjoyed that video, and I think he’s right about a lot of his nuanced opinions. Spoiler alert: he knows what he’s talking about, and he doesn’t really share the enthusiasm/hype around LLMs.