I guess now we finally know why Babbage never finished building the Analytical Engine.
Confusion of ideas in, garbage out.
That quote is my favorite example of a very polite wtf
So the I-d10t bug has been around since the beginning, it seems.
I’ve also heard it referred to as PICNIC: Problem In Chair, Not In Computer.
PEBKAC - Problem Exists Between Keyboard And Chair is another one.
I’m a fan of CUI (Chair User Interface) problem
Layer-8 issue, even when it’s DNS, it’s a Layer-8 issue.
Could be layer 9, management
Ada Lovelace invented coding on that thing btw
And then had the wisdom to die before a computer capable of running her programs was invented, thus saving the bother of having to debug them.
Writes code.
Realises that debugging code that was written by the lunatic that is yourself two nights ago is going to be a big part of her life.
dies
We’ve all had debugging sessions where that feels like the best option. Right?
Debugging was easier when all you had to do was spray the room with fly spray and vacuum the tubes.
I wouldn’t have done anything different
Could it run Doom though?
I’ve seen a couple of papers on theoretical designs for purely mechanical computers that can run Doom, but as far as I’m aware, none has actually been built.
but in theory yes it could have
i want to play doom with a flipbook now
It has limited memory, but that could be expanded, by a lot. So in theory, yes. No display, though.
Pff, who needs a display? Just do that Matrix thing and render the raw state in your head.
I didn’t even see the code anymore. It’s just; pinky, capro, barron.
Person, woman, man, camera, tv.
Given how mechanical it was, it likely wouldn’t be hard to colour some of the memory gears and make them visible from the outside, turning it into a display.
I always assumed they were asking if it was rigged.
Like, i can write function sum(a, b) that always returns 10, and impress people how it’s correct when I pass in 1,9 and 2,8 and 3,7. But if I pass in 7,7 it’ll still return the “right” answer of 10, because it’s rigged and not actually doing math.
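That rigged sum looks something like this in Python (a toy sketch; the names are made up):

```python
def rigged_sum(a, b):
    # Ignores its inputs entirely and always claims the answer is 10.
    return 10

def real_sum(a, b):
    return a + b

# Looks correct on cherry-picked pairs that genuinely sum to 10...
print(rigged_sum(1, 9))  # 10
print(rigged_sum(2, 8))  # 10
# ...but the trick shows the moment you leave the happy path.
print(rigged_sum(7, 7))  # 10 (the real answer is 14)
print(real_sum(7, 7))    # 14
```

Which is also why test inputs need to be ones that would distinguish a real implementation from a lookup table of expected answers.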
That’s a good point, but a few decades of talking to clients has led to a number of conversations like this where they want it to “just work”, even if they’ve input the wrong information.
Clients? Shit happens in my house.
“My monitor keeps turning off.”
“Ok, next time it happens I’ll look at it and see if I can figure out what is going on.”
“Can’t you just fix it?”
“Fix what? I don’t know what’s wrong yet.”
“Just fix the monitor.”
Legitimately, about 1/3 of the time my mere presence seems to magically fix the issue.
i really should have gone into IT because electronics spontaneously break around me
There was a thread on Reddit where people likewise noted that having another person try problematic software solved the issue. One commenter recounted how a guy sidestepped the whole rigmarole by telling his colleague “look, this thing’s broken again”, and then, before the other guy could step in, clicked the thing himself, and it worked.
Same, I keep track of magic on a white board in my office
I’ve started defaulting to just saying “yes” with my family and pretending to fix it. I’m actually thankful for the laptop revolution, cause I can just say “it’s fucked, buy a new one.”
Once you’ve got the new one, I’ll take your old one and dispose of it appropriately…
I have like a dozen old laptops with various flavors of Linux on them because of this. Can’t give them away cause apparently Linux is a scary word in this part of the country.
Ah, you must be an expert
One time my boss asked me to basically solve the Travelling salesman problem.
My first pass at it was a simple grab-closest-neighbor solution, but that left a slightly suboptimal path, and my boss asked me to “fix” it. I explained to him why, no, I can’t make it both fast and accurate, pick one, while also showing him that Wikipedia page. I was so mad when he said just make it more accurate, ignoring that it now sometimes takes hours to run, only to save 10 seconds of a machine moving.
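For what it’s worth, the greedy nearest-neighbour pass described above is usually sketched like this (made-up coordinates, not the actual work code):

```python
import math

def nearest_neighbor_tour(cities, start=0):
    """Greedy TSP heuristic: repeatedly visit the closest unvisited city.
    Fast (O(n^2)) but can return a noticeably suboptimal tour."""
    unvisited = set(range(len(cities)))
    unvisited.remove(start)
    tour = [start]
    while unvisited:
        here = cities[tour[-1]]
        nearest = min(unvisited, key=lambda i: math.dist(here, cities[i]))
        unvisited.remove(nearest)
        tour.append(nearest)
    return tour

# Toy example: four points on a plane.
cities = [(0, 0), (1, 0), (2, 0), (0, 1)]
print(nearest_neighbor_tour(cities))  # visits every city once, starting at 0
```

Exact solvers guarantee the optimal tour but take exponential time in the worst case, which is exactly the fast-vs-accurate trade-off in question.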
This is how I expect AI to work. I will silently think of a thing, and the AI must make it perfectly in one go. If it doesn’t, I have just lacked in describing in detail what it should do. And that takes thousands of lines of code.
I always assumed they were asking if it was rigged.
That’s a valid assumption one can only make without knowing the malevolent stupidity of typical computer users.
Alternatively, people could genuinely believe the primitive computer is a “thinking machine”. So if you fat-finger an input, will the machine know you made a mistake and intuitively correct you? Not unlike asking “Hey, I’ve got ten days of vacation, can I take two weeks off?” And your coworker - knowing a week is seven days, but you’re only referring to business days - responds “Yes”.
No, they were literally asking if the machine was able to return the right result if the person didn’t enter it correctly. You know, like how some people expect search engines and AI to give them the answer they want even if they use the wrong words.
Oh like when you type “population of tenton” and it returns “Did you mean Trenton? That population is XYZ”
Yes, except in the case of Babbage’s machine they were asking if putting 1235 instead of 1234 would give the same answer.
Search engines work that way because they have large datasets and pattern recognition that can suggest corrections for typos. Calculators don’t do that.
Yeah, but “calculator” back then was a profession. So if suddenly a machine can replace an entire profession, it’s at least comprehensible to assume it can do more than it actually can. It’s basically the same with AI right now. There is this “overshoot” of what is expected from a new, paradigm-shifting technology. Similar to how people 100 years ago thought there would be flying cars by now.
Helicopters are flying cars.
It is possible that the question was intended to be about human error checking prior to starting the process of calculating, like noticing a lack of a decimal on a monetary number in a data set, and Babbage misunderstood. That would be a valid question, but isn’t how the quote is phrased.
No I meant Teton.
Big ones.
“Can it ChatGPT?”
“No.”
“Can ChatGPT?”
“No.”
“If I fuck up, will it correct it?”
“No.”
“Will ChatGPT correct it?”
“Yes. Too much.”
And thus the role of QA was born.
All unit tests show PEBKAC
Works on my machine
i.e. “I’m not smart enough, nor dumb enough, to understand how you arrived at such a stupid question.”
Wasn’t it a member of Parliament who asked him this? Or was that addition apocryphal?
No idea, but wouldn’t be surprised at all.
This I do not know. However, the first widely reported railway fatality in the UK (possibly the world) was MP William Huskisson, who was run over by Stephenson’s Rocket…
Not relevant to the conversation but I think it’s interesting.
Being extremely, extremely generous, maybe they meant a human would notice the input was incorrect? But even then, a human could notice the same when inputting it into a computer.
Old enough to remember Babbages video game store. I’d spend hours re-reading the descriptions on the back of every game box. Joy. Great share, thanks!
Ah yes, I remember being in the store when Charles Babbage himself would brag about his high score in Asteroids. Or that time he gave me a copy of the Doom shareware on 3.5" floppy. /s
Don’t you mean 5 and a quarter pence?
The true floppy!
8 inch disks were floppier.

True, but those were well out of style by the Doom era
Before my time, I had no clue they even existed until this comment. Love learning new things.
Congratulations on being one of today’s lucky 10,000!
GIGO
I’ve replied with just these letters to people before. Improved UX can only get you so far, before the ticket becomes “can you fix stupid?”.
PEBCAK.
[off topic?]
https://bookshop.org/p/books/the-difference-engine-a-novel-william-gibson/0a5ffa44e0f3f9f1
“The Difference Engine” Fifty years ago, Ada Lovelace and Charles Babbage gave the British empire the first working computer. Since that time, life has changed vastly in some areas, but remained the same in others. Great novel.
Isn’t the whole point of autocorrect to put the wrong input but still get the right output?
Not exactly. An autocorrect is a closest-match or prediction device, with the set of correct words given beforehand. When you type “fridsy”, what it does is answer the question “between fridsy and this set of words, what is the shortest distance?”
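That shortest-distance question is typically Levenshtein (edit) distance. A minimal sketch, with a toy word list I made up:

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum number of single-character
    insertions, deletions, or substitutions turning a into b."""
    prev = list(range(len(b) + 1))  # distances from "" to each prefix of b
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete ca
                           cur[j - 1] + 1,             # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute ca -> cb
        prev = cur
    return prev[-1]

words = ["friday", "fridge", "frisky"]  # stand-in dictionary
print(min(words, key=lambda w: edit_distance("fridsy", w)))  # friday
```

Real autocorrect layers keyboard-adjacency and word-frequency models on top, but the core is still “closest known word wins”.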
But to the user, it can correct their inputs. The rest is an abstraction. My point is that there’s more to a platform than just precise calculations. Obviously the asker isn’t thinking this far ahead, but Babbage is also rather flippant in his response.
Babbage was being flippant because, when questioned about his mechanical calculator, he didn’t imagine how computers might function two hundred years later?
I mean, that’s hyperbole. I think there’s more depth to this question, from our point of view, than just what’s on the surface.
No, not really. Calculators still don’t have autocorrect, because the concept is nonsense. With language, there are true and false combinations of letters. More probable, and less probable, combinations of words. Coffee is a word, covfefe is not. But a calculator cannot know that when you entered 2+2 you meant to enter 2+3, as both are valid inputs, and neither is more probable.
Isn’t this just dependent on the level of abstraction? At the low level a CPU is just a calculator.
Presumably the user has a way to enter these digits. If they’re using a touchscreen, there are plenty of algorithms making sure the intended touch target is triggered, even if they touch somewhere in between.
A lot of effort goes into making sure the user gets the intended result even if their input is fuzzy.
Articulate the utility of a calculator that provides the response of “5” to “2+2.”
Gotta get down on Fridsy.