How dense can a company be? Or more likely how intentionally deceptive.
No, Eaton. We don’t need to “improve model reliability”, we need to stop relying on models full stop.
I love all these articles that frame the public’s reaction to something as the problem, while ignoring or glossing over the cause of the reaction entirely.
“How dare you question the orphan grinder! No, the real problem is that you don’t understand why the orphan grinder is necessary!”
That’s not at all what this is doing. It’s a call to make sure businesses put a priority on making these machine learning models less opaque, so you can see the inputs a model used and the connections it found at each step, and understand why a given result was produced.
You can’t debug a black box (an input goes in, an unexplained output comes out) remotely as easily, if at all.
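A toy sketch of the distinction (my own illustration, not anything from Eaton): the same made-up scoring formula, once as an opaque function and once with every input and intermediate step recorded, so a surprising output can be traced back to its cause.

```python
def opaque_score(load_kw: float, ambient_c: float) -> float:
    # Black box: one number out, no explanation.
    return 0.7 * load_kw + 0.3 * max(ambient_c - 25, 0)

def transparent_score(load_kw: float, ambient_c: float):
    # Same formula, but each step is logged alongside the result.
    steps = []
    load_term = 0.7 * load_kw
    steps.append(f"load contribution: 0.7 * {load_kw} = {load_term}")
    heat_excess = max(ambient_c - 25, 0)
    steps.append(f"heat excess over 25C: {heat_excess}")
    heat_term = 0.3 * heat_excess
    steps.append(f"heat contribution: 0.3 * {heat_excess} = {heat_term}")
    return load_term + heat_term, steps

score, trace = transparent_score(10.0, 30.0)
print(score)  # 8.5, same value opaque_score(10.0, 30.0) gives
for line in trace:
    print(line)
```

The two functions produce identical numbers; the only difference is that one of them can answer "why".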
I want Eaton to do nothing with AI. I don’t want an AI developing circuit breakers, heavy-duty automotive drivetrain or control components, or other safety-critical things.
“It’s difficult to get a man to understand something when his salary depends on his not understanding it.”
This sounds like they’re talking about machine learning models, not the glorified autocorrect LLMs. So the actually useful AI stuff that can be leveraged to do real, important things with large sets of data that would be much more difficult for humans to spot.
I doubt it.
It sounds like you are doubting something without understanding it. Let’s say you gathered the electricity consumption of every individual house in your city in July. Now, if someone is building a new house next to a regular one, how much electricity do you predict it will consume? You answer with the mean value of your dataset. It’s that simple.
This can count as machine learning.
Now, are you saying you doubt this math, which has been used for probably more than two millennia, or are you doubting something else?
…this math, which has been used for probably more than two millennia
Sure. That’s what I’m doubting. That’s what they’re talking about. That’s the hype.
Sorry, sir (or madam), you doubt math? Are you saying you don’t even believe the mean value has a better chance of matching what you expect? Well, that’s fine.
I think you’re mistaking sarcasm for insanity, and the reason you’re doing that is that you were already belittling their viewpoint quite fiercely, rejecting absolutely everything they said just because you disagree with their conclusion.
“Everything they said”? Ha. The OP literally just said “I doubt it” without any reasoning. It sounds to me that they are the ones who reject everything other people said, and blindly believe in their instinct: “Yeah. I don’t believe science.”
Call it belittling. I probably won’t say it out loud, but I absolutely laugh silently at these kinds of people, who don’t care to take a look at facts and theories.
You’re so insightful and wise. You have learned much from other viewpoints.
What is there to doubt? It’s right there in the text. LLMs are neither data-processing nor decision-making models. There wouldn’t need to be a push to make the steps in LLM output more visible, the way there is for other machine learning models.