The problem is that anything even remotely related to AI just gets called "AI," whether by the average person or by marketing teams.
So when you visit a company's website and see "powered by AI," that could mean LLMs or an ML model that detects cancer, and the average person has no way to tell the technologies apart.
So if someone universally rejects anything that says it "uses AI" just because what's usually marketed as "AI" is badly implemented LLMs that make the experience worse, they'll inevitably catch nearly every ML model in the crossfire too, since most companies label their ML use cases "AI powered." That means rejecting companies that build models that detect tumors, predict protein folding, spot anomalies in other health metrics, optimize traffic routes in cities, and so on, even though those use cases have nothing to do with LLMs or the flaws LLMs often bring.