• 1 Post
  • 98 Comments
Joined 10 months ago
Cake day: November 30th, 2024

  • Don’t know much in detail, but I believe that after the revolution the communists had the brilliant idea to implement a liberal-style competitive multi-party political system, basically the agenda you hear from “democratic socialists”: get communists/socialists into power while keeping a liberal political system based on multi-party competition. The result was immense factionalization of the communists into a crap ton of different battling parties, so nothing ever gets done, and the main one that is “the government” cares more about holding onto power than actually putting any socialist policies in place, and people are just kinda getting fed up with a government that doesn’t do jack shit. There are actually more communist parties in the opposition in the parliament than supporting the government. It just goes to show that “democratic socialism” is a garbage fire, quite literally: the parliament building was set on fire. That is really the extent of my knowledge of it. I have never heard anything positive about the dumpster fire of a political situation there.






  • pcalau12i@lemmygrad.ml to Memes@lemmygrad.ml · We are not the same.
    10 days ago

    “Did you hear MAGA wants the government to own parts of Intel? That is literally the worst thing they have ever done, they’re so bad they’re literally communists. I wish we could go back to the old days when Republicans were at least respectable and not communists, like Reagan.” 😐




  • It’s a myth that Marxism-Leninism says “thou shalt support every national liberation struggle.” If you read Foundations of Leninism, it is pretty unambiguously clear that support for national liberation struggles should always be put into the global context of whether doing so supports the overall goal of dismantling imperialism and the global capitalist system or hinders it. The book is quite explicit that we should not support national liberation struggles that go against overall geopolitical interests; i.e. if a national liberation struggle is led and supported by big bourgeois imperialist powers and is being used to facilitate their own interests, then supporting it would set the proletariat back on the global stage. The point is that “national liberation” shouldn’t be treated as some sort of eternal, unquestionable moral principle. You should put it into the global context. I don’t know very much about the specific cases you mention, but it is in no way inherently contradictory to Marxism-Leninism to question supporting a particular national liberation struggle. It depends upon their reasoning.




  • This is sadly pseudoscience that only gets talked about because one smart guy endorsed it; hardly anyone in academia actually takes it seriously. What you are talking about is called Orch OR, but Orch OR is filled with problems.

    One issue is that Orch OR makes a lot of claims that are not obviously connected to one another. The reason this is an issue is that, while they call the theory “falsifiable” because it makes testable predictions, even if those predictions are tested and come out correct, that wouldn’t actually validate the theory, because there is no way to logically or mathematically connect that experimental validation to all of its postulates.

    Orch OR has some rather bizarre premises: (1) Humans can consciously choose to believe things that cannot be mathematically proven, therefore human consciousness must not be computable; (2) you cannot compute the outcome of a quantum experiment ahead of time, therefore there must be a physical collapse that is fundamentally not computable; (3) since both are not computable, they must be the same thing: physical collapse = consciousness; (4) therefore we should look for evidence that the brain is a quantum computer.

    Argument #1 really makes no sense. Humans believing silly things doesn’t prove human decisions aren’t computable. Just look at AI. It is obviously computable and hallucinates nonsense all the time. This dubious argument means that #3 doesn’t follow; there is no good reason to think consciousness and “collapse” are related.

    Argument #2 is problematic because physical collapse models are not compatible with special relativity or with the statistical predictions of non-relativistic quantum mechanics, so they cannot reproduce the predictions of quantum field theory in all cases; they aren’t particularly popular among physicists, and of course there is no evidence for them. Most physicists see the “collapse” as an epistemic, not a physical, event.

    Orch OR also arbitrarily insists on using the Diósi–Penrose model specifically, even though multiple models of physical collapse have been proposed, such as GRW. There is no obvious reason to use this model specifically; it isn’t connected to any of the premises in the theory. Luckily, argument #2 does present falsifiable claims, but because #2 is not logically connected to the rest of the arguments, even if we do prove that the Diósi–Penrose model is correct, it doesn’t follow that #1, #3, or #4 are correct. We would just know there are physical collapses, but nothing else in the theory would follow.
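
    For context, the Diósi–Penrose criterion itself is simple to state; the rough textbook form below is just the standard statement of the model, up to convention-dependent numerical factors, and nothing in it is specific to Orch OR: a superposition of two mass configurations is supposed to decay on a timescale set by the gravitational self-energy of the difference between the two mass distributions.

    % Diósi–Penrose collapse timescale (standard form, up to order-one convention factors)
    \tau_{\mathrm{DP}} \approx \frac{\hbar}{E_{\Delta}},
    \qquad
    E_{\Delta} = G \int\!\!\int
      \frac{\Delta\rho(\mathbf{r})\,\Delta\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}
      \, d^{3}r \, d^{3}r',
    \qquad
    \Delta\rho = \rho_{1} - \rho_{2}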

    The only other argument that proposes something falsifiable is #4, but again, #4 is not connected to #1, #2, or #3. Even if you searched around frantically for any evidence that the brain is a quantum computer, and found some, that would just be your conclusion: the brain is a quantum computer. From that, #1, #2, and #3 do not then follow. It would just be an isolated fact, an interesting discovery, but it wouldn’t validate the theory. I mean, we already have quantum computers; if you think collapse = consciousness, then you would have to already think quantum computers are conscious. A bizarre conclusion.

    In fact, only #2 and #4 are falsifiable, but even if both #2 and #4 are validated, it doesn’t get you to #1 or #3, so the theory as a whole still would remain unvalidated. It is ultimately an unfalsifiable theory but with falsifiable subcomponents. The advocates insist we should focus on the subcomponents as proof it’s a scientific theory because “it’s falsifiable,” but the theory as a whole simply is not falsifiable.

    Also, microtubules are structural. They don’t play any role in information processing in the brain; they’re part of the cytoskeleton that gives cells their shape, and it’s not just brain cells, microtubules are found throughout your body in all kinds of cells. There is no reason at all to think they play any role in computations in the brain. The only reason you see interest in them from the Orch OR “crowd” (it’s like, what, 2 people who just so happen to be very loud?) is because they’re desperate for anything that vaguely looks like quantum effects in the brain, and so far microtubules are the only place where quantum effects seem to play some role, but that role is again structural. There is no reason to believe they play any role in information processing or cognition.


  • US politics is plagued by American exceptionalism. The overwhelming majority of the population do not even consider how people in other countries view things, and they implicitly assume their own Overton window is the global Overton window of “reasonable discourse.” If anyone disagrees, they cannot fathom that it is even possible to disagree with American politics; it literally cannot register in their brains as a possibility, so they assume you must either be lying or paid to disagree (“wumao” or “Russian bot”). This is why Americans are often so easy to convince that the US should intervene in other countries: nearly all of them implicitly believe that even the citizens of countries like China or Cuba also believe in American politics and are secretly hoping for Americans to come liberate them but are forced to lie about it by their government.

    Honestly, I see no way to break this mass delusion without something seriously calling into question American exceptionalism, something that forces Americans to actually take seriously their own position in global politics, which is something I doubt can come internally from the US. It would have to come externally: something in the global geopolitical situation would have to change to force Americans to take seriously the diversity of global politics. It doesn’t even matter if what it is is “socialist,” there just needs to be something that breaks the illusion that US-style politics is the only way to understand the world and the only valid system. You aren’t going to have much luck convincing a population of socialism in a capitalist country where suggesting anything outside of its own media Overton window is considered extremely taboo (which is ironic because if you ask most Americans straight-up if they trust the media, they will say no, but they will almost always defend everything the media says verbatim and act like it is absurd to question it).


  • I think a lot of proponents of objective collapse would have a bone to pick with that, haha, although it’s really just semantics. They are proposing extra dynamics that we don’t understand and can’t yet measure.

    Any actual physicist would agree objective collapse has to modify the dynamics, because that is unavoidable when you introduce an objective collapse model and actually look at the mathematics. No one in the physics community would dispute that GRW or the Diósi–Penrose model technically makes different predictions; in fact, the people who have proposed these models often view this as a positive thing, since it makes them testable rather than just philosophy.

    How the two theories would deviate depends upon your specific objective collapse model, because they place the thresholds in different locations. For GRW, collapse is a stochastic process whose probability builds up over time rather than a sharp threshold, but you should still see statistical deviations between its predictions and quantum mechanics if you can maintain a coherent quantum state for a long enough time. The DP model has something to do with gravity, which I don’t know enough about to understand in detail, but I think the rough idea is that if you have sufficient mass/energy in a particular locality it will cause a “collapse,” and so if you can conduct an experiment where that threshold of mass/energy is met, traditional quantum theory would predict the system could still be coherent whereas the DP model would reject that, so you’d inherently end up with deviations in the predictions.
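
    For a sense of scale, the commonly quoted GRW numbers sketch where that stochastic deviation would show up (these are just the standard parameter choices from the original proposal, not anything specific to the discussion above): each particle undergoes a rare spontaneous localization of width r_C at a tiny rate λ, and for a bound system of N particles the effective localization rate of the centre of mass scales up as Nλ, which is why isolated microscopic systems look exactly like ordinary quantum mechanics while macroscopic superpositions die off almost instantly.

    % Commonly quoted GRW parameters (order of magnitude only)
    \lambda \sim 10^{-16}\ \mathrm{s}^{-1}, \qquad r_{C} \sim 10^{-7}\ \mathrm{m}
    % Effective localization rate for the centre of mass of N bound particles
    \Gamma_{\mathrm{cm}} \approx N\,\lambda
    % e.g. N \sim 10^{23} gives \Gamma_{\mathrm{cm}} \sim 10^{7}\ \mathrm{s}^{-1}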

    What’s the definition of interact here?

    An interaction is a local event where two systems become correlated with one another as a result of the event.

    “The physical process during which O measures the quantity q of the system S implies a physical interaction between O and S. In the process of this interaction, the state of O changes…A quantum description of the state of a system S exists only if some system O (considered as an observer) is actually ‘describing’ S, or, more precisely, has interacted with S…It is possible to compare different views, but the process of comparison is always a physical interaction, and all physical interactions are quantum mechanical in nature.”

    The term “observer” is used very broadly in RQM and can apply to even a single particle. It is whatever physical system you are choosing as the basis of a coordinate system to describe other systems in relation to.

    Does it have an arbitrary cutoff like in objective collapse?

    It has a cutoff, but not an arbitrary cutoff. The cutoff is in relation to whatever system participates in an interaction. If you have a system in a superposition of states, and you interact with it, then from your perspective it is cut off, because the system now has definite, real values in relation to you. But it does not necessarily have definite, real values in relation to some other isolated system that didn’t interact at all.

    You can make a non-separable state as big as you want.

    Only in relation to things not participating in the interaction. The moment something enters into participation, the states become separable. Two entangled particles are nonseparable up until you interact with them. Although, even for the two entangled particles, from their “perspectives” on each other, they are separable. It is only nonseparable from the perspective of yourself, who has not interacted with them yet. If you interact with them, an additional observer who has not interacted with you or the two particles yet may still describe all three of you in a nonseparable entangled state, up until they interact with it themselves.
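
    To put that in more concrete terms, here is a minimal textbook-style sketch in state-vector notation (S, O, and W are just placeholder labels for the system, the observer that interacts with it, and an outside observer that doesn’t):

    % Relative to the outside observer W, before the S-O interaction
    |\Psi\rangle = \bigl(\alpha|0\rangle_{S} + \beta|1\rangle_{S}\bigr) \otimes |\mathrm{ready}\rangle_{O}
    % Relative to W, after the S-O interaction the joint state is entangled (non-separable)
    |\Psi'\rangle = \alpha\,|0\rangle_{S}|O_{0}\rangle + \beta\,|1\rangle_{S}|O_{1}\rangle
    % Relative to O, however, S already has a definite value
    % (0 with probability |\alpha|^2, 1 with probability |\beta|^2);
    % W's description only becomes definite once W itself interacts with S and O.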

    This is also the first I’ve heard anything about time-symmetric interpretations. That sounds pretty fascinating. Does it not have experimenter “free will”, or do they sidestep the no-go theorems some other way?

    It violates the “free will” assumption because there is no physical possibility of setting up an experiment where the measurement settings cannot potentially influence the system if you take both the time-forwards and time-reverse evolution seriously. We tend to think because we place the measurement device after the initial preparation and that causality only flows in a single time direction, then it’s possible for the initial preparation to affect the measurement device but impossible for the measurement device to affect the initial preparation. But this reasoning doesn’t hold if you drop the postulate of the arrow of time, because in the time-reverse, the measurement interaction is the first interaction in the causal chain and the initial preparation is the second.

    Indeed, every single Bell test, if you look at its time-reverse, is unambiguously local and easy to explain classically, because all the final measurements are brought to a single locality, so in the time-reverse, all the information needed to explain the experiment begins in a single locality and evolves towards the initial preparation. Bell tests only appear nonlocal in the time-forwards evolution, and if you discount the time-reverse as having any sort of physical reality, it then forces you to conclude either that it must be nonlocal or that a real state for the particles independent of observation cannot exist. But if you drop the postulate of the arrow of time, this conclusion no longer follows, although you do end up with genuine retrocausality (as opposed to superdeterminism, which only gives you pseudo-retrocausality), so it’s not like it gives you a classical system.

    So saying we stick with objective collapse or multiple worlds, what I mean is, could you define a non-Lipschitz continuous potential well (for example) that leads to multiple solutions to a wave equation given the same boundary?

    I don’t know, but that is a very interesting question. If you figure it out, I would be interested in the answer.


  • Many of the interpretations of quantum mechanics are nondeterministic.

    1. Relational quantum mechanics interprets particles as taking on discrete states at random whenever they interact with another particle, but only in relation to what they interact with and not in relation to anything else. That means particles don’t have absolute properties, like, if you measure its spin to be +1/2, this is not an absolute property, but a property that exists only relative to you/your measuring device. Each interaction leads to particles taking on definite states randomly according to the statistics predicted by quantum theory, but only in relation to things participating in those interactions.

    2. Time-symmetric interpretations explain violations of Bell inequalities by rejecting a fundamental arrow of time. Without it, there’s no reason to evolve the state vector in only a single time direction, so they adopt the Two-State Vector Formalism, which evolves it in both directions simultaneously. When you do this, you find it places enough constraints on the particles to give you absolutely deterministic values called weak values, but these weak values are not what you directly measure. What you directly measure are the “strong” values. You can interpret it such that every time two particles interact, they take on “strong” values randomly according to a rule called the Aharonov-Bergmann-Lebowitz rule (see the sketch after this list). This makes time-symmetric interpretations local realist but not local deterministic, as they can explain violations of Bell inequalities through local information stored in the particles, but that local information still only statistically determines what you observe.

    3. Objective collapse models are not really interpretations but new models because they can’t universally reproduce the mathematics of quantum theory, but some serious physicists have explored them as possibilities and they are also fundamentally random. You assume that particles literally spread out as waves until some threshold is met then they collapse down randomly into classical particles. The reason this can’t reproduce the mathematics of quantum theory is because this implies quantum effects cannot be scaled beyond whatever that threshold is, but no such threshold exists in traditional quantum mechanics, so such a theory must necessarily deviate from its predictions at that threshold. However, it is very hard to scale quantum effects to large scales, so if you place the threshold high enough, you can’t practically distinguish it from traditional quantum mechanics.
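
    As referenced in point 2, here are the two standard formulas involved, in their usual textbook form (nothing here goes beyond what is stated above): for a system pre-selected in the state |ψ⟩ and post-selected in the state |φ⟩, the ABL rule gives the probability of each intermediate “strong” outcome, and the weak value is the deterministic quantity associated with an observable A for that same pre- and post-selection.

    % Aharonov-Bergmann-Lebowitz (ABL) rule for an intermediate measurement of A with outcomes a_j
    P(a_j \mid \psi, \phi) =
      \frac{\bigl|\langle\phi|a_j\rangle\langle a_j|\psi\rangle\bigr|^{2}}
           {\sum_{k} \bigl|\langle\phi|a_k\rangle\langle a_k|\psi\rangle\bigr|^{2}}
    % Weak value of A for the same pre- and post-selection
    A_{w} = \frac{\langle\phi|A|\psi\rangle}{\langle\phi|\psi\rangle}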



  • pcalau12i@lemmygrad.ml to Science Memes@mander.xyz · Gravity
    2 months ago

    Arguably, if we insist on trying to come up with the simplest way to explain non-relativistic quantum mechanics, that is to say, if we are very conservative and stick to classical explanations unless we absolutely are forced not to (rather than throwing our hands up and saying it’s all magic that’s impossible to understand, as most people do), then we find that it comes naturally to explain non-relativistic quantum mechanics by treating particles as excitations in a classical field. This alone can explain the interference-based paradoxes in completely classical terms, like the double-slit experiment or the Elitzur-Vaidman paradox, without altering any of the postulates of the theory in any way. The extension to quantum field theory then becomes more natural and intuitive. imo
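
    For what it’s worth, the interference part of that is just ordinary classical wave superposition; a minimal sketch (generic two-slit amplitudes ψ1 and ψ2, not tied to any particular field construction):

    % Two contributions to the field at a point x on the screen
    \psi(x) = \psi_{1}(x) + \psi_{2}(x)
    % The observed intensity picks up an interference cross term
    I(x) \propto |\psi_{1}(x)|^{2} + |\psi_{2}(x)|^{2} + 2\,\mathrm{Re}\bigl[\psi_{1}^{*}(x)\,\psi_{2}(x)\bigr]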


  • pcalau12i@lemmygrad.ml to Science Memes@mander.xyz · Gravity
    2 months ago

    For any physical theory, you can always just ask “why x,” like a child who constantly asks “why” over and over again to every answer, but you will always hit a bottom. There seems to be a popular mentality that “why x” is always a meaningful question, and from that, people conclude that we don’t know anything at all, because all our beliefs rely on a “why x” we don’t know the answer to, and so they are all baseless. We can’t make any truth claims about the behavior of particles, galaxies, or anything, because you can just infinitely ask “why” until you hit a bottom and the answer becomes “I don’t know.”

    But, personally, I find this point of view rather bizarre, because, again, it can make it seem like we don’t know anything at all, have no foundations for truth claims in the slightest, and are completely ignorant about everything. I think it makes more coherent sense to just allow for there to be a bottom to the questioning. Eventually a string of “why” questions will reach a bottom, and that bottom shouldn’t be answered with “I don’t know” but with “it is what it is,” because, for all we know, it is indeed an accurate description of reality at a fundamental level and there is nothing beneath it.

    That shouldn’t be taken as a strong claim that there definitely isn’t anything beneath it, as if we should just accept our current most fundamental theories are the end of the line and stop searching. It should be taken as the weaker claim that as far as we currently know it is the bottom, and so we can indeed make truth claims upon that basis. The child might ask, “why do things experience gravity?” You might say, “time dilation near matter.” The child then may ask, “why does time dilate near matter?” In my opinion, the appropriate response to that is just, “as far as we know, it is what it is.” That could change in the future, but, given our best scientific models at the present moment, that is the end of the line of the explanation.

    That seems to be a fairly controversial point, though. Most people in my experience disagree, but I don’t see how you can have a basis for truth claims at all if you claim that “why gravity” does indeed have an answer but you can’t specify it, because then it would also be baseless to claim that gravity is caused by time dilation near matter: you’d be saying that claim rests on postulates you haven’t established. It seems, again, simpler to just take the most fundamental theories as the postulates themselves, as the fundamental axioms.

    There is a popular point of view that we shouldn’t do this because scientific theories often change, so something you believe today can be proven wrong tomorrow. But then we end up never being allowed to believe anything at all. We always have to pretend we’re clueless about nature, because if we believe in any of our most fundamental theories, then our beliefs could be overturned. But personally, I don’t see why this should be a problem. A person who believed Newtonian mechanics was fundamental to how nature worked back in the 1700s was later shown to be wrong, but that person’s beliefs were still closer to reality than those of people who rejected it and upheld outdated Aristotelian physics, or people who refused to believe in anything at all. It is fine to later be shown to be wrong; there is nothing to be upset about, nothing negative about that. We are better off, imo, treating our best physical theories as indeed fundamentally how reality works, the “bottom” so to speak, until we find new theories that show otherwise, and changing our minds with the times.

    That doesn’t disallow speculation or research into potentially more fundamental theories. Theories of quantum gravity are such a speculation. They remain in the realm of speculation because no one has demonstrated in the real world that it’s actually possible to construct a device such that quantum effects and gravitational effects are both simultaneously relevant and necessary to make predictions. The theories thus describe separate domains, and there isn’t a genuine need for a new theory until we can figure out how to bridge the two domains in reality.

    We don’t actually know what would happen if we bridged the two domains. We may find that our theories of turning gravity quantum are all wrong and that it is in fact quantum theory that needs to be abandoned. We may also find that the domains aren’t even bridgeable. We already know of certain physical limitations along these lines, such as the fact that an interferometer sensitive enough to detect both gravitational and quantum effects simultaneously would collapse into a black hole. There may be more things like this we discover later on that just render the two theories unbridgeable in physical reality.

    Many physicists are convinced that the bridging will end up turning gravity quantum, but this is just a guess; there’s no actual empirical evidence for it other than the historical coincidence that, when studying the strong interaction, physicists happened to stumble upon mathematics that also seemed able to predict a particle that could explain gravity, giving birth to String Theory. People thought it must be correct because it wasn’t intentional but discovered by accident, but that isn’t a good criterion at all for suggesting it’s correct, and ultimately the theory never went anywhere.

    If we are to talk about theories replacing quantum mechanics and general relativity, we don’t have a clue what these would look like because it’s just speculation, and so it could go either way.



  • pcalau12i@lemmygrad.ml to Science Memes@mander.xyz · if I fits...
    3 months ago

    That’s not true. If you read Schrodinger’s original paper “The Present Situation in Quantum Mechanics,” he’s pretty clear that he was attempting to show how ridiculous it is to treat a superposition of states as if a particle is actually smeared out in multiple locations at once, because you could use that particle as the basis of a chain reaction that would eventually affect a macroscopic object, and then you would have to say the macroscopic object is smeared out in multiple places at once. The argument was a reductio ad absurdum against treating microscopic objects as if they are smeared out in multiple places at once. Its fundamental point was not a commentary on macroscopic objects but on microscopic ones.

    You don’t need the wave function to do quantum mechanics; it’s just a mathematical convenience, and so Schrodinger insisted it shouldn’t be interpreted as a literal physical object, as if particles were actually spreading out as waves. In his book “Science and Humanism” he says that the reason he invented the wave formalism is that he didn’t like Heisenberg’s formalism, which, even though it made all the right predictions, didn’t give intermediate states for particles, so it is as if they just hop around from interaction to interaction probabilistically; the wave formalism was meant to “fill in the gaps” between the interactions.

    However, in that book he also says that he believes this project was a failure because all the wave formalism does is move the gap between interactions to a gap between the evolution of the quantum state and observation, which made even less sense, and so he changed his mind and argued that we should abandon the notion of filling in the gaps between interactions, and the illusion of continuous transitions between states is only a macroscopically emergent feature.