4 arguments for the multiverse
Clockwise from top left: Occam, Deutsch, Everett, and Dirac
1. Occam's razor
Hugh Everett's 1956 thesis The Theory of the Universal Wave Function opens with a mathematical summary of the then widely accepted Copenhagen Interpretation.
"... there are two fundamentally different ways in which the state function can change:
Process 1: The discontinuous change brought about by the observation of a quantity with eigenstates $\phi_1, \phi_2,...,$ in which the state $\psi$ will be changed to the state $\phi_j$ with probability $\lvert(\psi,\phi_j)\rvert^2$.
Process 2: The continuous, deterministic change of state of the (isolated) system with time according to a wave equation $\frac{\partial \psi}{\partial t} = U\psi$, where $U$ is a linear operator."
The 1st process is commonly known as the "collapse of the wavefunction" and the 2nd as the "time-dependent Schroedinger equation". Now according to Occam, if this theory with two processes can be replaced by a simpler one then we should be inclined to believe the simpler theory. There is a caveat, though: the simpler theory must make the same predictions (or, if it makes different predictions, they should prove to be the correct ones).
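Process 2 is easy to demonstrate numerically. Below is a minimal sketch in Python, assuming a toy two-level system with a made-up Hamiltonian $H$; in modern notation Everett's linear operator is $U = -iH/\hbar$, and units are chosen so that $\hbar = 1$:

```python
import numpy as np
from scipy.linalg import expm

# Toy Hermitian Hamiltonian for a single two-level system (an assumption).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

psi0 = np.array([1.0, 0.0], dtype=complex)  # initial state

# Process 2: continuous, deterministic evolution psi(t) = exp(-iHt) psi(0).
t = 0.7
psi_t = expm(-1j * H * t) @ psi0

# The evolution is unitary, so the norm of the state is preserved.
print(np.linalg.norm(psi_t))  # 1.0
```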
In some cases it may not be obvious which of two competing theories is "simpler". But if the only difference is that one has an additional rule then it's a no-brainer that the theory with one fewer rule is the simpler. So, if it can be shown that one of the two rules above is a consequence of the other, then the redundant rule should be ditched because, as Laplace said when asked where God was in his model of the solar system, "Je n'avais pas besoin de cette hypothèse-là" ("I had no need of that hypothesis").
This is precisely what Hugh Everett goes on to do in the remainder of his thesis. He shows that the apparent collapse of the wavefunction referred to in Process 1 is actually a consequence of the deterministic wavefunction evolution described by Process 2. This magic is achieved by an emergent phenomenon commonly known as "entanglement".
Suppose your system consists of two particles, each with a known spin, and for simplicity let's take the two spins to be oriented orthogonally to each other. We can treat this as an initial condition and use the Schroedinger equation to evolve the state and find out what happens next.

There's an important point to make about the mathematical space in which the state lives. It contains the state just described, but also every other describable state; for example, it contains one where the two spins have become aligned oppositely and a photon has been released. Let's call these "classical" states. In addition to the classical states, the space contains all linear superpositions of classical states, where the multipliers are complex numbers. The way to think of this is that the space in which the state lives and evolves with time is a complex vector space in which the "classical" states provide a set of basis vectors.
This allows us to think of most states as a superposition of classical states. According to Process 1, as soon as such a state is "observed" - whatever that means - it "collapses" to one of its constituent classical states, and the squared magnitudes of the complex multipliers give the probability of each outcome.
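Here is a minimal sketch of this picture in Python; the basis states and the particular complex multipliers are illustrative assumptions:

```python
import numpy as np

# Two "classical" basis states for a single spin.
up   = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

# A general state is a complex-weighted superposition of classical states.
psi = (3 + 4j) * up + 5j * down
psi = psi / np.linalg.norm(psi)  # normalize

# Process 1 (as commonly stated): the probability of "collapsing" to a
# classical state phi_j is |(psi, phi_j)|^2.
p_up   = abs(np.vdot(up, psi)) ** 2    # 0.5
p_down = abs(np.vdot(down, psi)) ** 2  # 0.5
```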
But what if no "observation" takes place? Then according to Process 2 the initial state described above evolves into a very special state known as the singlet state. This state is a sum of two classical states. The first - "up/down" - has spin 1 up and spin 2 down; the second - "down/up" - is the other way round. Usually the singlet state is described as an entangled state. But another way to think of this situation is that we've allowed the 1st spin time to measure the orientation of the 2nd. When thought of this way, we can view the singlet state as a superposition of one classical state in which the 1st spin has observed the 2nd to be "down" and another classical state in which it has observed the 2nd spin to be "up".
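In the same style, here is a sketch of the singlet state (the minus sign is the standard relative phase for the singlet; "sum" above should be read as a superposition with complex weights):

```python
import numpy as np

up, down = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)

up_down = np.kron(up, down)  # classical state: spin 1 up, spin 2 down
down_up = np.kron(down, up)  # classical state: spin 1 down, spin 2 up

singlet = (up_down - down_up) / np.sqrt(2)

# Each classical branch carries probability 1/2.
print(abs(np.vdot(up_down, singlet)) ** 2)  # 0.5
print(abs(np.vdot(down_up, singlet)) ** 2)  # 0.5

# Reshaped as a 2x2 matrix, a product state has rank 1; the singlet has
# rank 2, so it cannot be factored into two independent spins: entangled.
print(np.linalg.matrix_rank(singlet.reshape(2, 2)))  # 2
```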
Everett extends this further: if we ignore Process 1 we can use the Schroedinger equation to evolve the state of the entire universe. In this universal state many things are entangled: humans, cannonballs, electrons. But the entangled state of the universe is actually a superposition of a gazillion classical states. In some of these classical states there's you thinking you've collapsed a wavefunction because you've made an observation of a spin, just like the 1st spin in the singlet state. But what you cannot see is that the entangled state of the universe is a superposition which also includes a version of you that got the opposite result.
So, Hugh Everett showed that Process 1 is predicted by Process 2. Or rather, the illusion of wavefunction collapse is predicted by the Schroedinger equation. This means we can and should ditch Process 1, and in doing so keep Occam happy. The consequence of ditching Process 1, a.k.a. wavefunction collapse, is that we have to accept that a multitude of other classical states of the universe are just as real as the one that we are observing.
I have heard (on many an occasion) Occam's razor used as an argument against Hugh Everett's many worlds. The argument goes like this: many worlds are more complicated than just one, and Occam tells us simple theories are more likely to be correct. This is to fundamentally misunderstand Occam's razor. What it tells us is that we should believe the simplest explanation, not the explanation with the simplest consequences. Our current theories of cosmology are far simpler than the older theological explanations, in terms of the amount of paper needed to write them down. But the current theory predicts a universe vastly more complex, with myriad galaxies, black holes, stars and planets. The attraction (or one of the attractions) of our current theory is the simplicity of its explanation, not the simplicity of what it predicts.
2. The anthropic principle
People often say they are invoking the anthropic principle when they answer a question like "why is it like that?" with "if it weren't, we wouldn't be here, would we!" That doesn't make any sense on its own, but referring to a named principle usually deflects further questions.
Let's give an example: we know that entropy always increases because high entropy states are more numerous, and therefore more likely, than low entropy states (a quick counting sketch below makes this concrete), with heat death being the highest entropy state and, it follows, the most likely. This means that we are currently in a very unlikely state, and that the most unlikely state the universe has ever been in was when it began. So one can ask "why did the universe start in such an unlikely state?", and one can answer "because you wouldn't be here if it hadn't". But what are we actually trying to say with such an answer:
- That it's just a massive coincidence?
- That we humans are the goal of the universe?
- That reality is vastly bigger than we supposed, every possibility is tried out somewhere, and the existence of this state is therefore not so unlikely?
I hope, like me, you will discard 1 & 2 out of hand, as neither actually explains anything. So what we should mean by the anthropic principle is that what appears to be an amazing coincidence making self-aware life possible is actually evidence of a larger reality, in which every possibility is explored.
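To see just how lopsided the counting in the entropy example is, here is the promised sketch in Python, with a toy "universe" of 100 two-state particles (an illustrative assumption):

```python
from math import comb

total = 2 ** 100          # all microstates of 100 coin-like particles

balanced = comb(100, 50)  # microstates in the 50/50 "high entropy" macrostate
ordered  = comb(100, 0)   # the single "all heads" low entropy microstate

print(balanced / total)   # ~0.08: the high entropy macrostate dominates
print(ordered / total)    # ~7.9e-31: the ordered state is absurdly unlikely
```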
To give another example, we live on a planet in what is known as the Goldilocks zone: the Sun is just the right distance away and just the right size to allow liquid water. In addition to this we have a protective atmosphere and a magnetic field which deflects dangerous particles. Obviously, we wouldn't be here if this were not the case. We could say it is just luck, or that God designed it for us, but the best explanation is that the universe is full of suns and planets, and whilst most of these are not conducive to life, the sheer number of them makes it likely that at least one would be. This was essentially the argument made by the Renaissance friar Giordano Bruno (who was burned at the stake).
Going back to the fortuitously low entropy of the universe, the anthropic principle as interpreted here suggests the existence of a host of other universes. Most of these will look like some sort of heat death and be incredibly dull, but inevitably some will be more interesting low entropy universes like this one.
Max Tegmark uses this version of the anthropic principle over and over to argue for different types of multiverse in his book Our Mathematical Universe. For example, looking at the constants of nature, such as the charge on the electron, we find that each one has to be within a very small range to allow for intelligent life. Change some and planets never form; change others and atoms don't form either. If these constants were random it would be incredibly unlikely they'd have the right values for self-aware life to be possible. This alone should hint to us that there are places in reality where alternate constants are tried out. But, as with the multiverse of quantum theory, we don't need to rely on the anthropic principle, because we have other reasons for believing these constants do actually vary, in particular because it is predicted by inflation. However, it does seem to be a principle: whenever you see a massive coincidence that enables us to exist, assume reality is larger than you originally thought.
Tegmark takes this to the nth degree by looking at the very laws of physics and asking: why those laws and not others? His answer is to apply the principle and claim that all mathematical structures are physically real, and that we just live in one with enough structure to support intelligent life. This happens to be very appealing from the point of view of Occam's razor too. If every mathematical structure is physically real then it is not necessary to write anything down at all in order to describe all of reality. The laws we observe are not fundamental to all of reality, they're just an address telling us where in reality we "live".
3. Forward reasoning
This is an ingenious argument from chapter two of The Fabric of Reality by David Deutsch. The chapter is titled Shadows.
It starts with the standard set up for Young's double slit experiment. Photons are fired at a screen with two slits and an interference pattern emerges on a 2nd screen behind it. This interference pattern persists even when the intensity of the light is dialled down so far that the photons hit the 2nd screen one at a time, with each one making a little flash when it lands. (Although with the intensity this low you do need to record where the photons land in order to see the interference pattern build up over time.) What is interesting about this interference pattern is that there are locations where the photons can land if only one slit is open, but where they cannot land if both slits are open.
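A minimal numerical sketch in Python makes this concrete (the point-slit idealization, the wavelength, and the geometry are all assumptions):

```python
import numpy as np

wavelength = 500e-9  # assumed: 500 nm light
d = 10e-6            # assumed: slit separation
L = 1.0              # assumed: distance from slits to the 2nd screen

x = np.linspace(-0.2, 0.2, 2001)              # positions on the 2nd screen
phase = 2 * np.pi * (d * x / L) / wavelength  # far-field path difference

amp1 = np.ones_like(x) + 0j          # amplitude via slit 1
amp2 = np.exp(1j * phase)            # amplitude via slit 2

p_one  = np.abs(amp1) ** 2           # one slit open: uniform, no dark fringes
p_both = np.abs(amp1 + amp2) ** 2    # both open: zeros where phases cancel

print(p_both.min())  # ~0: locations where photons cannot land
```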
Deutsch reasons forward from this experiment and, without needing to resort to mathematics, shows that many worlds are an inevitable consequence. First, he argues that between photon creation and destruction something must go through each slit: if one of the slits has nothing pass through it then the possible outcomes would be the same as if that slit were blocked. So, what is it that passes through? We can determine something of its nature by experimenting with mirrors, lenses, opaque materials, and the like. What we find when we do this is that whatever it is, it's reflected by the same materials that reflect photons, it's guided by the same materials that guide photons, and it's blocked by the same materials that block photons. It must be a sort of photon, but we only detect a single flash, not two, so a reasonable name for whatever it is would be a "shadow" photon.
Next Deutsch asks how many shadow photons there are. If we repeat the experiment with pin pricks instead of slits we still get an interference pattern, and we know that there are an enormous number of locations where we could make a 2nd pin prick. This means that there are an enormous - possibly infinite - number of shadow photons for each "real" photon.
We know that shadow photons can be blocked because if we place a 50% opaque mist across one of the slits then the interference pattern begins to disappear. In fact, the distribution of the flashes on the 2nd screen becomes the average of the one-slit and two-slit distributions. This means that shadow photons are blocked by the mist particles. But we know that there are a huge number of shadow photons, and if half of these are blocked then the particles of mist must be absorbing an enormous amount of momentum and energy. Since we do not see any effect of this, we have to conclude that what is actually blocking the shadow photons is not mist particles, but "shadow" mist particles.
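Continuing the sketch above, the distribution Deutsch describes is literally the average of the two patterns (a direct transcription of the text, not a full optical model):

```python
# With a 50% opaque mist over one slit, the observed distribution becomes
# the average of the one-slit and two-slit patterns.
p_mist = 0.5 * (p_one + p_both)

print(p_mist.min())  # ~0.5: the dark fringes start to fill in
```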
Let's consider just one shadow mist particle. If it could be hit by every shadow photon then an enormous number of shadow photons would hit it, and a hole would be blown through the shadow mist. The conclusion we are drawn towards is therefore that there is actually a separate shadow mist for each shadow photon, and each shadow photon can only be blocked by its own mist. By similar reasoning we can determine that each shadow photon has its own shadow screens. When looked at this way, we see that there is actually nothing special about the non-shadow photon, non-shadow mists, and non-shadow screens. Reality is partitioned into complete universes, with Young's double slit experiment being performed in parallel with slightly different results in each one. The universes only interact weakly, via interference, making some outcomes appear more often in the multiverse than others, and some not appear at all.
Just in case the reader is tempted to imagine that this parallelization is restricted to the inanimate matter involved in the experiment, Deutsch replaces the 2nd screen with the retina of a frog. He chooses this particular animal because, unlike humans, it is capable of detecting individual photons of light. He imagines that this frog jumps every time it detects a photon and shows, by taking the argument to its logical conclusion, that although we may see a stationary frog there are infinitely many shadow frogs and some of these are jumping.
4. Lack of any consistent alternative
Let's head back to Hugh Everett's thesis. After summarizing the Copenhagen interpretation mathematically he attacks the imprecise part of "Process 1". Remember that Process 1 is "The discontinuous change brought about by the observation of a quantity". The imprecise part of this is the word "observation". The question of what exactly it is that constitutes an observation is known as "the measurement problem". Everett demolishes the idea that the term "observation" can be made precise, and he does this via a thought experiment.
Imagine Alice is measuring a spin in a superposition of the up and down states. Her observation causes the state to collapse to a single classical state, say "up". That's all well and good, except that Alice and her entire laboratory are in a sealed box, and outside of the sealed box is Bob. According to Bob the contents of the box - including Alice - are in a superposition of states until he opens the box and observes its contents. But Bob, in turn, is in a box controlled by Charlie, and Charlie thinks he is the one who collapses the state. So, if Alice observes the spin, then Bob observes Alice, and then Charlie observes Bob, who actually collapsed the wavefunction? Was it Alice, Bob, or Charlie? Or going the other way, was it Alice, Alice's computer, or a small part of the equipment that came into direct contact with the spin? The mathematics works wherever you deem the division between observer and observed to lie! The measurement problem shows that the Copenhagen Interpretation lacks internal consistency: if the world works that way according to Alice then it doesn't according to Bob, and so on.
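A sketch in modern notation (not Everett's own formulation) shows why the cut's location makes no difference. Applying Process 2 to the spin plus Alice gives

$$\big(\alpha\,\lvert\uparrow\rangle + \beta\,\lvert\downarrow\rangle\big)\,\lvert A_{\text{ready}}\rangle \;\longrightarrow\; \alpha\,\lvert\uparrow\rangle\,\lvert A_{\text{up}}\rangle + \beta\,\lvert\downarrow\rangle\,\lvert A_{\text{down}}\rangle,$$

and applying it again when Bob opens the box merely extends each branch:

$$\alpha\,\lvert\uparrow\rangle\,\lvert A_{\text{up}}\rangle\,\lvert B_{\text{up}}\rangle + \beta\,\lvert\downarrow\rangle\,\lvert A_{\text{down}}\rangle\,\lvert B_{\text{down}}\rangle.$$

The weights $\alpha$ and $\beta$ are identical at every stage, so the predicted statistics are the same wherever the division between observer and observed is placed.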
I know of only two internally consistent solutions to the measurement problem. One is to say that neither Alice, Bob, nor Charlie can collapse wavefunctions. Only I can collapse them. I am special; no one else can do this. Mwah ha ha ha! Although this is internally consistent, I'm going to struggle to convince you of it! And if it were true then everything outside of my light cone, well over 99% of reality, would still exist in a superposition of states, i.e. as many worlds. Solipsism aside, the other internally consistent solution to the measurement problem is to ditch the postulate of wavefunction collapse altogether, and as a consequence accept the existence of the multiverse.
There may be another internally consistent solution to the measurement problem, but if there is I'm not aware of it. A word of warning though: there are a lot of people out there claiming to have a solution that doesn't require you to accept the multiverse. Invariably these solutions turn out to be incomprehensible philosophical claptrap. It is part of our nature that we blame ourselves when we fail to understand something we've read, especially when it's written by a learned academic. But often the failure belongs to the author, not the reader. Sometimes things do not make sense because they are nonsense. And sometimes the philosophical acrobatics that make interpretations of quantum theory so difficult to understand only exist to make you blame yourself when you fail to follow them, and to hide the fact that it's meaningless nonsense written by someone desperate to avoid having to accept the existence of the multiverse.
The many worlds theory is mathematically efficient, conceptually simple, and internally consistent. No other purported explanation is any of these. Not liking the ramifications is not a counter-argument.