A team of physicists led by Mir Faizal at the University of British Columbia has demonstrated that the universe cannot be a computer simulation, according to research published in October 2025[1].

The key findings show that reality requires non-algorithmic understanding that cannot be simulated computationally. The researchers used mathematical theorems from Gödel, Tarski, and Chaitin to prove that a complete description of reality cannot be achieved through computation alone[1].

The team proposes that physics needs a “Meta Theory of Everything” (MToE): a non-algorithmic layer above the algorithmic one that determines truth from outside the mathematical system[1]. This would help investigate phenomena like the black hole information paradox without violating mathematical rules.

“Any simulation is inherently algorithmic – it must follow programmed rules,” said Faizal. “But since the fundamental level of reality is based on non-algorithmic understanding, the universe cannot be, and could never be, a simulation”[1].

Lawrence Krauss, a co-author of the study, explained: “The fundamental laws of physics cannot exist inside space and time; they create it. This signifies that any simulation, which must be utilized within a computational framework, would never fully express the true universe”[2].

The research was published in the Journal of Holography Applications in Physics[1].


  1. ScienceAlert - Physicists Just Ruled Out The Universe Being a Simulation

  2. The Brighter Side - The universe is not and could never be a simulation, study finds

  • lemonwood@lemmy.ml

    Again, I really appreciate how deep you’ve gone into this. I haven’t dealt with these topics for many years, and even then I mostly dealt with the actual physical system of a single cell, not with what you can build out of them. However, I think that’s where the core of the issue lies anyway.

    I recently messed around with creating a spiking neural net made of “leaky integrate and fire” (LIF) neurons. I had to do the integration numerically, which was slow and imprecise. However, hardware exists that does run every neuron continuously and in parallel.
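
    For concreteness, here is a minimal sketch of what that numerical integration looks like: a single LIF neuron stepped with forward Euler. All parameter values are illustrative placeholders, not from any particular model.

    ```python
    import numpy as np

    # Minimal leaky integrate-and-fire neuron, forward-Euler integration.
    # All parameters below are illustrative placeholders.
    dt       = 1e-4    # time step [s]; smaller is more accurate but slower
    tau_m    = 20e-3   # membrane time constant [s]
    v_rest   = -70e-3  # resting potential [V]
    v_thresh = -50e-3  # spike threshold [V]
    v_reset  = -65e-3  # reset potential after a spike [V]
    r_m      = 1e7     # membrane resistance [Ohm]

    t = np.arange(0.0, 0.5, dt)
    i_in = 2.1e-9 * np.ones_like(t)   # constant input current [A]

    v = np.full_like(t, v_rest)
    spikes = []
    for k in range(1, len(t)):
        # dv/dt = (-(v - v_rest) + R*I) / tau_m, discretized with forward Euler
        dv = (-(v[k - 1] - v_rest) + r_m * i_in[k - 1]) / tau_m
        v[k] = v[k - 1] + dt * dv
        if v[k] >= v_thresh:
            spikes.append(t[k])
            v[k] = v_reset

    print(f"{len(spikes)} spikes in {t[-1]:.1f} s")
    ```

    The step size dt is where the slow-and-imprecise trade-off comes from: smaller steps are more accurate but take longer, which is exactly what hardware that integrates continuously and in parallel avoids.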

    “So you ran a simulation of those neurons?”

    LIF neurons can be physically implemented by combining classic MOSFETs with redox cells, for example Pt/Ta/TaOx stacks with x < 1, or with hafnium or zirconium oxide instead of tantalum oxide.

    The oxygen vacancies in the oxide form tiny conductive filaments only a few atoms thick. While the I-V curve is technically continuous, the number of different currents you can actually measure is limited. Even shot noise plays a significant role, because the discreteness of individual electrons matters.

    Under absolutely perfect conditions you can maybe distinguish 300 states. On a chip at room temperature it’s maybe 20 to 50. If you want to switch fast, it’s more like 5 to 20.
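
    To put rough numbers on that, here is a back-of-envelope sketch; the 1 µA read current and 1 GHz bandwidth are assumed illustrative values, not measurements. It converts a state count into effective bits and evaluates the textbook shot-noise formula i_n = sqrt(2·q·I·Δf).

    ```python
    import math

    ELECTRON_CHARGE = 1.602e-19  # C

    def effective_bits(n_states: int) -> float:
        """Express a number of distinguishable states as bits of resolution."""
        return math.log2(n_states)

    def shot_noise_rms(current_a: float, bandwidth_hz: float) -> float:
        """RMS shot-noise current: i_n = sqrt(2 * q * I * B)."""
        return math.sqrt(2 * ELECTRON_CHARGE * current_a * bandwidth_hz)

    for states in (300, 50, 20, 5):
        print(f"{states:3d} distinguishable states ~ {effective_bits(states):.1f} bits")

    # Assumed example: a 1 uA filament current read out over a 1 GHz bandwidth.
    i_read = 1e-6   # A (assumed)
    bw = 1e9        # Hz (assumed)
    noise = shot_noise_rms(i_read, bw)
    print(f"shot-noise RMS ~ {noise:.2e} A, i.e. {noise / i_read:.1%} of the read current")
    # Crude estimate: levels separated by 2 sigma of shot noise.
    print(f"~{i_read / (2 * noise):.0f} levels distinguishable at 2-sigma separation")
    ```

    With those assumed numbers you land in the few-dozen-levels range, which is the same order of magnitude as the state counts above.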

    That’s not continuous; it’s only quasi-continuous. It’s still cool, but not outside the mathematical scope of the theorems used in the paper.

    And yes, continuity is not everything. You’re right that busy beaver numbers are not computable in principle. But this applies to neuromorphic computing just the same.

    Theoretically, if a continuous extension of the busy beaver numbers existed, then it should be possible for a Liquid State Machine Neural Net to approximate that function.

    But it doesn’t. No such extension can be meaningfully defined. If it could be computed, it would let you solve the halting problem. That’s impossible for purely logical reasons, independently of what you use for computation (a brain, neuromorphic computing, or anything else). Even approximations would be incredibly slow, as the busy beaver function grows faster than any computable function.
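
    To spell out that step, here is a sketch of the standard reduction. Both function arguments, busy_beaver and run_for_steps, are hypothetical placeholders, which is the whole point: if any device, digital or neuromorphic, could evaluate BB(n), it could decide the halting problem by running a machine for that many steps.

    ```python
    def decide_halting(machine, n_states, busy_beaver, run_for_steps):
        """Hypothetical halting decider (a reductio, not an implementation).

        It assumes two things that cannot both exist:
          * busy_beaver(n) computably returns BB(n), the longest any halting
            n-state Turing machine runs before stopping, and
          * run_for_steps(machine, k) simulates `machine` for up to k steps
            and reports whether it halted (this part IS computable).

        Since the halting problem is undecidable, no computable busy_beaver
        can exist, no matter what physical substrate evaluates it.
        """
        bound = busy_beaver(n_states)          # the impossible step
        # If the machine has not halted within BB(n) steps, it never halts.
        return run_for_steps(machine, bound)
    ```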