I guess now we finally know why Babbage never finished building the Analytical Engine.

  • Horsecook@sh.itjust.works · 19 hours ago

    Babbage was being flippant because, when questioned about his mechanical calculator, he didn’t imagine how computers might function two hundred years later?

    • CookieOfFortune@lemmy.world · 19 hours ago

      I mean, that’s hyperbole. I think there’s more depth to this question, from our point of view, than just what’s on the surface.

      • Horsecook@sh.itjust.works · 18 hours ago

        No, not really. Calculators still don’t have autocorrect, because the concept is nonsense. With language, there are valid and invalid combinations of letters, and more and less probable combinations of words. Coffee is a word; covfefe is not. But a calculator cannot know that when you entered 2+2 you meant to enter 2+3, since both are valid inputs and neither is more probable.
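
        A toy sketch of the contrast (hypothetical word frequencies, purely illustrative, not anyone's real autocorrect):

        ```python
        # A spell checker can rank candidates because real text has a strong
        # frequency signal; well-formed arithmetic has no such signal.
        WORD_FREQ = {"coffee": 1_000_000, "toffee": 50_000, "covfefe": 0}  # made-up counts

        def best_word(candidates):
            # Prefer the candidate that is actually a probable word.
            return max(candidates, key=lambda w: WORD_FREQ.get(w, 0))

        def valid_expressions(candidates):
            # Every syntactically valid expression is equally plausible, so the
            # most a "calculator autocorrect" could do is check validity.
            def is_valid(expr):
                try:
                    compile(expr, "<calc>", "eval")
                    return True
                except SyntaxError:
                    return False
            return [e for e in candidates if is_valid(e)]

        print(best_word(["covfefe", "coffee", "toffee"]))  # coffee
        print(valid_expressions(["2+2", "2+3"]))           # ['2+2', '2+3'] -- nothing to correct toward
        ```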

        • CookieOfFortune@lemmy.world · 18 hours ago

          Isn’t this just dependent on the level of abstraction? At the low level a CPU is just a calculator.

          Presumably the user has some way to enter those digits. If they’re using a touchscreen, there are plenty of algorithms working to make sure the intended touch target is triggered, even if the touch lands somewhere in between.

          A lot of effort goes into making sure the user gets the intended result even when their input is fuzzy.
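
          As a rough sketch of that kind of touch-target disambiguation (the keypad layout, touch radius, and Gaussian weighting below are assumptions for illustration, not how any particular keyboard actually works):

          ```python
          import math

          # Hypothetical on-screen keypad: key label -> (x, y) centre in pixels.
          KEYS = {"1": (40, 200), "2": (120, 200), "3": (200, 200), "+": (280, 200)}

          def most_likely_key(touch_x, touch_y, sigma=35.0):
              # Weight each key by a Gaussian falloff over distance from the
              # touch point; the highest-weighted key is taken as the target.
              def weight(cx, cy):
                  dx, dy = touch_x - cx, touch_y - cy
                  return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
              return max(KEYS, key=lambda k: weight(*KEYS[k]))

          # A touch landing between "2" and "3" still resolves to the nearer key.
          print(most_likely_key(150, 205))  # -> 2
          ```

          Real keyboards reportedly go further and fold in a prior over which key you’re likely to press next, which is basically the same fuzzy-input idea.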