• self@awful.systems

    I keep flashing back to that idiot who came here a few months back to debate us, claiming to be employed as an AI researcher. they were convinced multimodal LLMs would be the turning point into AGI — that is, the moment your bullshit text generation model can also do visual recognition. they linked a bunch of papers to try and sound smart, and I looked at a couple and went “is that really it?” cause all of the results looked exactly like the section you quoted. we now have multimodal LLMs, and needless to say, nothing really came of it. I assume the idiot in question is still convinced AGI is right around the corner, though.

      • Soyweiser@awful.systems

        Y’all can sneer whatever you want, it doesn’t undo the room-temperature superconductor made out of copper! We are going to Mars with bitcoin and Optimus sex bots! cope and seethe!

        /s of course.

    • gerikson@awful.systems

      I caught a whiff of that stuff in the HN comments, along with something called “Solomonoff induction”, which I’d never heard of, and the Wiki page for which has a huge-ass “low quality article” warning: https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference.

      It does sound like the current AI hype has crested, so it’s time to hype the next one, where all these models will be unified somehow and start thinking for themselves.

      • titotal@awful.systems

        Solomonoff induction is a big rationalist buzzword. It’s meant to be the platonic ideal of Bayesian reasoning: something which, if implemented, would be the best deducer in the world and get everything right.

        It would be cool if you could build this, but it’s literally impossible. The induction method is provably incomputable.

        The hope is that if you build a shitty approximation to Solomonoff induction that “approaches” it, it will perform close to the perfect Solomonoff machine. Does this work? Not really.

        My metaphor is that it’s like coming to a river you want to cross, and being like “Well Moses, the perfect river crosser, parted the water with his hands, so if I just splash really hard I’ll be able to get across”. You aren’t Moses. Build a bridge.
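
        If you want to see where the impossibility bites, here is a toy sketch of the “shitty approximation” (everything in it is invented for illustration: a cut-down, one-bit Brainfuck dialect stands in for the universal machine, and a step budget stands in for the halting check that provably can’t exist). Every time the budget runs out below, the approximation silently throws away a program that might have been the right one:

            from itertools import product

            CMDS = "+-<>.[]"  # toy one-bit Brainfuck, no input command

            def run(prog, max_steps=200, max_out=6):
                """Run prog under a step budget. Returns its output bits,
                or None for ill-formed programs and for programs that
                exhaust the budget (which may or may not ever halt:
                that's the halting problem, swept under the rug)."""
                stack, match = [], {}
                for i, c in enumerate(prog):  # pre-match brackets
                    if c == "[":
                        stack.append(i)
                    elif c == "]":
                        if not stack:
                            return None
                        j = stack.pop()
                        match[i], match[j] = j, i
                if stack:
                    return None
                tape, ptr, pc, out = [0] * 16, 0, 0, []
                for _ in range(max_steps):
                    if pc >= len(prog):
                        return out  # halted cleanly
                    c = prog[pc]
                    if c in "+-":
                        tape[ptr] ^= 1  # one-bit cells: + and - both toggle
                    elif c == ">":
                        ptr = (ptr + 1) % 16
                    elif c == "<":
                        ptr = (ptr - 1) % 16
                    elif c == ".":
                        out.append(tape[ptr])
                        if len(out) > max_out:
                            return None
                    elif c == "[" and tape[ptr] == 0:
                        pc = match[pc]
                    elif c == "]" and tape[ptr] != 0:
                        pc = match[pc]
                    pc += 1
                return None  # budget exhausted; no way to know if it halts

            def predict(observed, max_len=6):
                """Solomonoff-flavored next-bit weights: every program up
                to max_len commands gets prior weight 2^-(program length
                in bits, ~3 bits per command); programs whose output
                extends `observed` vote for the next bit."""
                mass = {0: 0.0, 1: 0.0}
                for n in range(1, max_len + 1):
                    for prog in map("".join, product(CMDS, repeat=n)):
                        out = run(prog)
                        if out is None or len(out) <= len(observed):
                            continue
                        if out[: len(observed)] == observed:
                            mass[out[len(observed)]] += 2.0 ** (-3 * n)
                return mass

            print(predict([1, 0]))  # takes a few seconds; it's ~137k programs

        Each extra command of program length multiplies the search by seven, and no step budget, however large, turns the return None cases into answers. Splashing harder, basically.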

        • self@awful.systems

          it’s very worrying how crowded Wikipedia has been getting with computer pseudoscience shit, all of which has a distinct stench to it (it fucking sucks to dig into a seemingly novel CS approach and find out the article you’re reading is either marketing or the unpublishable fantasies of the deranged). none of it ever seems to get pruned from the wiki, presumably because proving it’s bullshit needs specialist knowledge, and the specialists are frequently outpaced by the motivated deranged folks who originate articles on topics like these

          for Solomonoff induction specifically, the vast majority of the article very much feels like an attempt by rationalists to launder a pseudoscientific concept into the mainstream. the Turing machines section, the longest one in the article, reads like a D-quality technical writing paper: the citations are sparse and not even in Wikipedia’s format, and it waffles on forever about the basic definition of an algorithm and about how inductive Turing machines are “better” because they can be used to implement algorithms (big whoop), followed by a bunch of extremely dense, nonsensical technobabble:

          Note that only simple inductive Turing machines have the same structure (but different functioning semantics of the output mode) as Turing machines. Other types of inductive Turing machines have an essentially more advanced structure due to the structured memory and more powerful instructions. Their utilization for inference and learning allows achieving higher efficiency and better reflects learning of people (Burgin and Klinger, 2004).

          utter crank shit. I dug a bit deeper and found that the super-recursive algorithms article is from the same source (it’s the same rambling voice and improper citations), and it seems to go even further off the deep end.

          • blakestacey@awful.systems

            Taking a look at Super-recursive algorithm, and wow…

            Examples of super-recursive algorithms include […] evolutionary computers, which use DNA to produce the value of a function

            This reads like early-1990s conference proceedings out of the Santa Fe Institute, as seen through bong water. (There’s a very specific kind of weird, which I can best describe as “physicists have just discovered that the subject of information theory exists”. Wolfram’s A New Kind of Science was a late-arriving example of it.)

            • self@awful.systems

              as someone with an interest in non-Turing models of computation, reading that article made me feel how an amateur astronomer must feel after reading a paper trying to find a scientific justification for a flat earth

            • V0ldek@awful.systems

              In computability theory, super-recursive algorithms are a generalization of ordinary algorithms that are more powerful, that is, compute more than Turing machines[citation needed]

              This is literally the first sentence of the article, and it has a citation needed.

              You can tell it’s crankery solely based on the fact that the “definition” section contains zero math. Compare it to the definition section of an actual Turing machine.

              • blakestacey@awful.systems

                More from the “super-recursive algorithm” page:

                Traditional Turing machines with a write-only output tape cannot edit their previous outputs; generalized Turing machines, according to Jürgen Schmidhuber, can edit their output tape as well as their work tape.

                … the Hell?

                I’m not sure what that page is trying to say, but it sounds like someone got Turing machines confused with pushdown automata.

                • V0ldek@awful.systems

                  That’s plainly false btw. The model of a Turing machine with a write-only output tape is fully equivalent to the one where you have a read-write output tape: keep the output on a work tape, edit it there freely, and copy the finished result to the write-only tape in one pass at the end. You prove that as a student in elementary computation theory.
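
                  A minimal sketch of that construction, assuming a machine that halts (Python stand-ins invented for illustration, not actual Turing machine tuples): do every edit on a work tape, then copy the result forward, append-only, exactly once.

                      from typing import Callable, List

                      Machine = Callable[[List[int]], None]  # edits a tape in place

                      def rw_output(step: Machine) -> List[int]:
                          """Read-write model: the machine edits its output tape directly."""
                          out: List[int] = []
                          step(out)
                          return out

                      def wo_output(step: Machine) -> List[int]:
                          """Write-only simulation: all edits happen on a work tape;
                          the output tape only ever sees appends, once, at the end."""
                          work: List[int] = []
                          step(work)  # every revision lands here instead
                          out: List[int] = []
                          for cell in work:
                              out.append(cell)  # the only writes to the output tape
                          return out

                      def machine(tape: List[int]) -> None:
                          tape.extend([1, 1, 1])
                          tape[1] = 0  # an edit a write-only tape couldn't do in place

                      assert rw_output(machine) == wo_output(machine) == [1, 0, 1]

                  Drop the “halts” assumption and let the edits go on forever, though, and the two models genuinely come apart, which (per the next comment) is the one real idea buried in the article.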

                  • aio@awful.systems

                    The article is very poorly written, but here’s an explanation of what they’re saying. An “inductive Turing machine” is a Turing machine which is allowed to run forever, but for each cell of the output tape there eventually comes a time after which it never modifies that cell again. We consider the machine’s output to be the sequence of eventual limiting values of the cells. Such a machine is strictly more powerful than Turing machines in that it can compute more functions than just recursive ones. In fact it’s an easy exercise to show that a function is computable by such a machine iff it is “limit computable”, meaning it is the pointwise limit of a sequence of recursive functions. Limit computable functions have been well studied in mainstream computer science, whereas “inductive Turing machines” seem to mostly be used by people who want to have weird pointless arguments about the Church-Turing thesis.
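
                    To make “limit computable” concrete, a minimal sketch (Python generators as made-up stand-ins for machines; a real treatment would enumerate actual Turing machines). The stage function g(machine, t) is total and computable for every fixed t, and its pointwise limit as t grows is the halting function, which no ordinary Turing machine computes:

                        def stage(machine, t):
                            """g(machine, t): does `machine` halt within t steps?
                            Total and computable for each fixed t; the pointwise
                            limit over t is the (non-recursive) halting function."""
                            gen = machine()  # one yield == one step
                            for _ in range(t):
                                try:
                                    next(gen)
                                except StopIteration:
                                    return 1  # halted inside the budget
                            return 0

                        def halts_after_five():
                            for _ in range(5):
                                yield

                        def loops_forever():
                            while True:
                                yield

                        # For each machine the value changes at most once and then
                        # stays put; that settled value is the inductive Turing
                        # machine's "output". No stage tells you convergence has
                        # happened, which is why none of this contradicts
                        # undecidability.
                        for t in (1, 3, 10, 100):
                            print(t, stage(halts_after_five, t), stage(loops_forever, t))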

                • self@awful.systems

                  it’s hard to determine exactly what the author’s talking about most of the time, but a lot of the special properties they claim for inductive Turing machines and super-recursive algorithms appear to be just ordinary von Neumann model shit? also, they seem to be rather taken with the idea that you can modify and extend a Turing machine, but that’s not magic — it’s how I was taught the theoretical foundations for a bunch of CS concepts, like nondeterministic Turing machines and their relationship to NP-complete problems

                    • blakestacey@awful.systems

                    New top-level thread for complaining about the worst/weirdest Wikipedia article in one’s field of specialization?

                    I wonder how much Rationalists have mucked up Wikipedia over the years just by being loud and persistent on topics where actual expertise would be necessary to push back.

        • blakestacey@awful.systems

          “Solomonoff induction” is the string of mouth noises that Rationalists make when they want to justify their preconceived notion as the “simplest” possibility, by burying all the tacit assumptions that actual experience would let them recognize.