Quantum Entanglement and Causality

by Fergus Ray-Murray

1.  Introduction:  Relativistic Causality

In the twentieth century, physics has provided us with a fascinating and at first mysterious new perspective on time (and space), in the form of relativity theory. It has also provided us with a new and frequently baffling perspective on the nature of the sub-microscopic world, in the form of quantum physics, and this in turn has spawned a number of tricky questions about the nature of time and causality. In particular, quantum systems show correlations over such distances that it is very difficult to reconcile them with the picture of time painted by relativity and the picture of cause and effect with which we are all familiar from the earliest ages.

  The phenomenon in question is known as quantum entanglement. Briefly stated, what is happening is this: particles which are arbitrarily far apart seem to influence each other, even though according to relativity this means that what appears from one point of view to be the cause of an event does not, from another point of view, happen until after the effect it is supposed to produce.

  What is going on here remains a topic of heated debate; so much so that it seems that for any conclusion on the matter reached by one scholar, one can find another to vehemently contradict it.  However, there is one thing that almost all commentators agree on:  There has to be something pretty strange going on here.

  The fact that these non-local connections are predicted by quantum mechanics was originally pointed out by Einstein, Podolsky and Rosen, but they used it to argue that quantum mechanics must be incomplete - dismissing the possibility of such 'spooky action-at-a-distance' out of hand, probably quite reasonably. However, in 1964 Bell published a now-famous paper in which he argued that any theory of quantum physics must abandon either Einsteinian locality (i.e. the requirement that no influence can travel faster than light) or else the objective reality of the properties of subatomic particles.

  Now, as I said, for just about any conclusion someone has reached on this topic someone has come along and contradicted them, and this is especially true of Bell's paper, although few authors have rejected it in its entirety. In 1982 Arthur Fine argued that abolishing objective reality wasn't enough to escape from Bell's inequality, so that we are in fact forced to accept non-locality. In contrast, Willem de Muynck has been arguing since 1986 that abandoning locality doesn't help escape the inequality. And Thomas Brody argued from about 1985 onwards that neither non-locality, nor the non-reality of the properties of subatomic particles necessarily follow from Bell's arguments, although hardly anyone paid him any attention. Arguing for a similar conclusion, Thomas Angelidis produced what he claims is a local, realistic interpretation of quantum mechanics, which it took the best part of ten years for anyone to get round to refuting.

  First I want to give an overview of the part played by time in relativity theory, because it is this that underlies the strangeness of abandoning Einsteinian locality[1].  Einstein suggested that time should be viewed as another dimension, with much in common with the three spatial ones we are all familiar with.  However, the relations between time and the spatial dimensions are not quite the same as the relations amongst the spatial dimensions themselves.

  Rotations in space will always conserve any length r, which can be found from Pythagoras' theorem, r² = x² + y² + z², where x, y and z are distances in the three spatial directions respectively.  What they will not conserve is distance in any particular direction.  In order for these facts to be true, it can be shown that the familiar trigonometric relations must apply.  That is, for a point in the x-y plane a distance r from the origin, making an angle a with the y axis, we can write x = r sin a and y = r cos a; rotating the point about the z axis changes the angle a while leaving r untouched.
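
  A minimal numerical sketch of this point (my own illustration, in Python, with arbitrary coordinates): a rotation about the z axis changes x and y individually, but leaves the length r untouched.

    # Sketch: a rotation about the z axis preserves the length
    # r = sqrt(x^2 + y^2 + z^2), even though x and y change.
    import math

    def rotate_z(x, y, z, angle):
        """Rotate the point (x, y, z) by `angle` radians about the z axis."""
        return (x * math.cos(angle) - y * math.sin(angle),
                x * math.sin(angle) + y * math.cos(angle),
                z)

    x, y, z = 3.0, 4.0, 12.0
    for a in (0.3, 1.0, 2.5):
        xr, yr, zr = rotate_z(x, y, z, a)
        print(f"angle {a}: r before = {math.sqrt(x**2 + y**2 + z**2):.6f}, "
              f"r after = {math.sqrt(xr**2 + yr**2 + zr**2):.6f}")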

  Now, in relativity theory we find relations which are closely related to these, but have rather different consequences.  The crucial difference is that 'rotations' in spacetime conserve not distances but space-time intervals, which are found from a slightly modified version of Pythagoras' Theorem:  R² = x² + y² + z² - t².  As you can see, the time difference between two events subtracts from the space-time interval, rather than adding to it[2].  Just as it can be shown that the trigonometric relations must follow to conserve distances, it can be shown that in order to preserve a spacetime interval when we switch to a frame moving at a different velocity (a 'rotation' in spacetime, or boost), their close cousins the hyperbolic trig functions come in.  In many senses, the hyperbolic and standard trigonometric functions are very similar; the mathematical analogies between the two groups are very powerful (all the trigonometric identities, as well as the series expansions, carry over to the hyperbolic functions apart from sign changes).  However, in practice they have quite different effects.  If one keeps increasing an ordinary angle, for instance, one quickly gets back to the starting position (one full turn).  The hyperbolic functions are not periodic like this; instead, as the hyperbolic angle increases and increases (which in this context means that an object is getting faster and faster), what we get are ever-shrinking spatial separations and ever-increasing time separations.  These are the famous relativistic effects of time dilation and Lorentz-Fitzgerald length contraction.
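
  To see the analogy at work numerically, here is another small sketch (again my own, using units in which the speed of light is 1 and keeping only one spatial dimension): a boost written in terms of a hyperbolic angle, or 'rapidity', changes x and t separately but leaves the combination x² - t² alone, just as an ordinary rotation leaves x² + y² alone.

    # Sketch: a Lorentz boost along x, written with hyperbolic functions of
    # the rapidity w (the velocity is v = tanh(w), with c = 1), preserves the
    # interval x^2 - t^2 even though x and t each change.
    import math

    def boost(t, x, rapidity):
        """Boost the event (t, x) along the x direction by the given rapidity."""
        return (t * math.cosh(rapidity) - x * math.sinh(rapidity),
                x * math.cosh(rapidity) - t * math.sinh(rapidity))

    t, x = 2.0, 5.0
    for w in (0.5, 1.5, 3.0):
        tb, xb = boost(t, x, w)
        print(f"rapidity {w}: x^2 - t^2 = {x**2 - t**2:.6f} -> {xb**2 - tb**2:.6f}")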

  The upshot of all this is that observers travelling at different speeds will always agree on the size of the space-time interval between two events, but they will disagree on both the distance between them and the times at which they occur.  Sometimes this just means that they might disagree about how much later one event happened, while still agreeing that A happened before B; other times, when A and B are sufficiently far apart, they will even disagree about which of the two came first.  Specifically, if two events are close enough together that light could travel from one to the other, then all observers will agree that A precedes B - in the jargon, A is inside B's light cone (or null cone), in its absolute past; the two events are timelike separated, and with the sign convention used above the squared spacetime interval between them is negative.

  Two events which happen exactly far enough apart for light to travel between them lie on the surface of the light cone, and they are said to be lightlike-separated; the spacetime interval between them is zero.  Because every observer measures the same speed of light, the light cone itself looks the same in every frame of reference, so all observers agree on which events are lightlike-separated from A.

  Events outside the light cone are spacelike-separated from A; with this convention the squared spacetime interval between them is positive, and different observers will disagree about which of the two events comes first.  The idea that no influence can travel faster than the speed of light - and so no influence can connect two spacelike-separated events - is known as locality.  It is widely seen as a natural consequence of relativity, because any violation of the locality principle entails a violation, in some frames of reference, of the normal assumption that a cause must precede its effects.
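
  The classification can be summarised in a few lines of code (a sketch using the sign convention adopted above, with the speed of light set to 1):

    # Sketch: classify the separation between two events using the convention
    # R^2 = x^2 + y^2 + z^2 - t^2 (c = 1).  With this sign choice, timelike
    # separations give R^2 < 0 and spacelike separations give R^2 > 0.
    def separation(dt, dx, dy=0.0, dz=0.0):
        interval_sq = dx**2 + dy**2 + dz**2 - dt**2
        if interval_sq < 0:
            return "timelike (all observers agree on the order)"
        if interval_sq == 0:
            return "lightlike (on the light cone)"
        return "spacelike (observers can disagree on the order)"

    print(separation(dt=5.0, dx=3.0))   # timelike
    print(separation(dt=3.0, dx=3.0))   # lightlike
    print(separation(dt=3.0, dx=5.0))   # spacelike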

2.  EPR, Bell and Non-Locality

This brings us to what appears to be a contradiction between relativity and quantum theory: 
The mathematical descriptions provided by quantum mechanics do not satisfy the locality principle.  When something has an effect on one of two particles, the wave function of the other changes 'simultaneously' (particles which show these sorts of connections are said to be entangled[3]).  Now, as I have been explaining, the meaning of this word 'simultaneously' becomes ambiguous in relativity: events which are simultaneous in one frame of reference are not simultaneous in another.  So although from one point of view two events might happen simultaneously, there will always be frames of reference which place one first, and others which place the other first.  So if there is any causal link between them - and it is difficult or impossible to explain these correlations without invoking one - it cannot be the normal kind in which one event can clearly be said to cause the other; rather, each of the events depends equally on the outcome of the other, regardless of which one might look like it happens first, or how far apart the events might be.

There has been much debate on whether this implies that there is a real non-local influence involved, and a consensus has yet to be reached.

Albert Einstein, together with his colleagues Boris Podolsky and Nathan Rosen, was the first to point out that the mathematics of quantum mechanics entails these apparent non-local connections. They used them to argue that the quantum theory must be incomplete.  In an article known as the EPR paper[4], published in 1935, they pointed out that by making a measurement of the momentum of one particle, it is possible to accurately gauge the momentum of another with which it has previously interacted.  This implies at least one of two things:  Either the quantum theory is incomplete, and incorrect in its assertion that the second particle does not have a definite momentum before it is measured; or else the measurement of the first particle somehow determines the state of the other, however far away it is. Einstein called this 'spooky action at a distance' - spooky because there is no known mechanism for such an interaction, and because it would entail that things can be affected by events which, in some frame of reference, haven't happened yet.  The paper concluded that a particle must have a definite state whether we look at it or not.

Three years earlier, however, John von Neumann had produced a proof which was supposed to show that no theory assigning definite states ('hidden variables') to particles could be consistent with the predictions of quantum mechanics[5].  Von Neumann was among the most eminent and accomplished mathematicians of his time, and it was accepted by almost everyone that he had proved what he thought he had, even after Einstein et al's 'proof' of almost the exact opposite three years later.

In 1952, though, David Bohm succeeded in doing what von Neumann had apparently shown to be impossible:  he created a fully working 'hidden variables' interpretation.  His theory constituted a convincing counter-example to the von Neumann 'proof', but was generally ignored or rubbished for some time.  Many physicists believed, rather vaguely, that someone or other had shown how Bohm couldn't possibly be right – that some gaping flaw in Bohm's logic had been exposed, showing why his theory had to be wrong.  But there was no such flaw; Bohm's theory, though inelegant, stood in stark defiance of the alleged impossibility of an ontological interpretation of quantum mechanics (that is, one which tries to give a description of the world itself, and not just rules determining what we can say about it).  John von Neumann had, in fact, been quite wrong.

The von Neumann impossibility proof depended on a postulate of additivity:  briefly, he assumed that because the expectation values of observables add together in the quantum formalism, even for quantities which cannot be measured simultaneously, the values obtained in individual measurements must add together in the same way.  But it is not really difficult to see from a simple example[6] that this need not be so – in fact, it cannot be true.  (For the spin of an electron, for instance, the component along an axis at 45° between the x and y axes is (sx + sy)/√2, whose possible measured values are ±1; but the individual values of sx and sy are each ±1, and no choice of these gives ±1 when added and divided by √2.)  It is simply wrong, and when one tries to apply it to Bohm's theory its absurdity becomes quite clear.

There are a number of extraordinary features of the history of von Neumann's proof, and of other related proofs trying to show the same thing.  In 1935 the von Neumann proof was actually refuted – its fatal logical flaw exposed – by a little-heard-of female German mathematician by the name of Grete Hermann[7].  Nobody seems to have paid any attention at all until 1974.  In 1952 David Bohm formulated a theory which demonstrated quite clearly, for anyone who took the time to look at it, that von Neumann couldn't possibly be right.  In fact, if von Neumann had tested his proof against de Broglie's pilot-wave theory – the predecessor of Bohm's account, which was rubbished at the 1927 Solvay Congress and did not resurface until Bohm rediscovered the same approach and overcame the difficulties with de Broglie's original version twenty-five years later – he would have seen at once that his crucial assumption did not apply to it.  But he did not consider it, although it was the closest anyone had come to a working hidden variables interpretation of quantum theory at that time.  His theorem failed to rule out the only putative example of the class of theories it was supposed to prove impossible.

When Bohm’s theory was published it was attacked by several eminent physicists, and subsequently it was widely treated as if it had been refuted.  But it had not.  As Bell put it, ‘...even Pauli, Rosenfeld, and Heisenberg, could produce no more devastating criticism of Bohm’s version than to brand it as “metaphysical” and “ideological”’[8].  People not only ignored Bohm's theory – they actually kept producing new variations on the proof showing why it couldn’t possibly exist, well after 1952.  When Bell finally showed what was wrong with such proofs in a paper which people actually paid attention to[9], one might have expected that people would finally stop making the same mistake.  But they did not.  People kept on producing impossibility proofs with closely related errors at least as late as 1978, twelve years after their central fallacy was brought to the fore by Bell, twenty-six years after a convincing counter-example was shown to exist – and forty-three years after von Neumann’s proof had first been disproven.

Now, in 1964, Bell produced an impossibility proof of his own, of a more limited character (this was after he had demonstrated von Neumann's mistake, but before the paper in which he did so was published).  He sought to show not that any 'hidden variables' theory of quantum mechanics would fail, but only that any such theory would necessarily entail a sort of inseparability, or non-locality.  In fact, he thought that the EPR argument was quite persuasive, and had not been given the credit it was due.

However, he produced a proof to show that if we assume the reality of the variables in question (that is to say, we assume each particle possesses real, definite values of position, momentum, spin and so on), and we assume that Einsteinian locality holds, then it is possible to derive a joint probability distribution for measurements on the two particles - and this distribution demonstrates that, as long as both assumptions hold, the particles cannot possibly be correlated as strongly as quantum mechanics predicts.  When Bell wrote this the possibility remained that these particular quantum mechanical predictions would turn out to be wrong, but it already seemed rather unlikely.  When they were finally tested experimentally, by Aspect and others[10], it came as no great surprise that the quantum mechanical predictions were confirmed.

One of the simplest examples of a violated Bell inequality is a variation on the EPR experiment proposed by Bohm and refined by Bell.  An atom emits two photons in opposite directions.  When one of these photons reaches a polariser, it will either be absorbed or transmitted.  If the other photon reaches a polariser at the same angle, it will be absorbed if the first was absorbed, or transmitted if the first was transmitted; the photons exhibit perfect correlation when the polarisers are aligned, as if they were polarised at just the same angle.

If the polarisers are aligned at different angles, the photons will sometimes both be stopped or both let through, and sometimes one will be stopped and the other let through; we can say that they disagree in these cases.  The proportion of the time that the photons agree is given by cos²(θ1 - θ2), where θ1 and θ2 are the angles of the two polarisers respectively (so the proportion of disagreements is sin²(θ1 - θ2)).
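
To get a feel for why this degree of agreement is special, here is a toy Monte Carlo comparison (my own illustration, not part of Bell's argument): each photon pair is given a shared hidden polarisation angle, and each photon deterministically passes its polariser whenever that angle lies within 45 degrees of the polariser setting.  This simple local model agrees with quantum mechanics at 0, 45 and 90 degrees, but in between it cannot reproduce the cos² curve - in particular, near alignment the quantum agreement is stronger than the model's, and it is this kind of excess correlation that Bell's argument turns on.

    # Toy comparison: the quantum prediction cos^2(theta1 - theta2) for the
    # probability that the two photons agree, versus a simple deterministic
    # local hidden-variable model in which each pair carries a shared
    # polarisation angle lam, and a photon passes its polariser whenever lam
    # lies within 45 degrees of the polariser angle (angles taken mod 180).
    import math, random

    def passes(lam, theta):
        d = abs(lam - theta) % math.pi
        d = min(d, math.pi - d)          # fold the difference into [0, pi/2]
        return d < math.pi / 4

    def local_agreement(theta1, theta2, trials=100_000):
        agree = 0
        for _ in range(trials):
            lam = random.uniform(0.0, math.pi)   # the shared hidden variable
            if passes(lam, theta1) == passes(lam, theta2):
                agree += 1
        return agree / trials

    for deg in (0, 22.5, 45, 67.5, 90):
        delta = math.radians(deg)
        print(f"{deg:5.1f} deg: quantum {math.cos(delta)**2:.3f}, "
              f"local model {local_agreement(0.0, delta):.3f}")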

What Bell believed he had proved was that this level of agreement between the photons was in excess of the level that any local hidden variables theory could possibly predict.  His conclusion was based on the observation that any probability distribution which is factorisable cannot show correlation in excess of a certain amount - an amount exceeded by the predictions of quantum theory.  He supposed that any local 'hidden-variables' theory should yield a probability distribution of this sort - that is, it should be possible to decompose the probability distribution into factors representing the settings of the two polarisers and any additional local hidden variables.  Bell concludes[11]:

‘In a theory in which parameters are added to quantum mechanics to determine the results of individual measurements, without changing the statistical predictions, there must be a mechanism whereby the setting of one measuring device can influence the reading of another instrument, however remote.  Moreover, the signal involved must propagate instantaneously, so that such a theory could not be Lorentz invariant.’  
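
The arithmetic behind conclusions of this kind can be made very concrete.  The sketch below uses the later CHSH form of the inequality rather than Bell's 1964 original, so take it as an illustration rather than a reconstruction of Bell's own proof: if each particle carries predetermined answers (+1 or -1) for each of its two possible settings, a particular combination of products can only ever come out as +2 or -2, so its average over many runs can never exceed 2; quantum mechanics, by contrast, predicts an average of about 2.83 for suitably chosen settings.

    # Sketch (CHSH form of the Bell inequality): enumerate every way of
    # assigning predetermined answers a1, a2, b1, b2 in {+1, -1} to the two
    # settings on each side.  The quantity S = a1*b1 + a1*b2 + a2*b1 - a2*b2
    # can only take the values +2 or -2, while quantum mechanics can reach an
    # average of 2*sqrt(2) for entangled photons and suitable angles.
    import itertools, math

    values = set()
    for a1, a2, b1, b2 in itertools.product((+1, -1), repeat=4):
        values.add(a1 * b1 + a1 * b2 + a2 * b1 - a2 * b2)

    print("values of S with predetermined local answers:", sorted(values))
    print("quantum mechanical maximum:", 2 * math.sqrt(2))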

3.  Relativity and Non-Locality

It is not difficult to see why many people have argued that Bell's results are incompatible with Einsteinian relativity.  If we want to hold onto the idea that particles possess real properties that do not depend on whether we measure them, and we accept Bell's conclusion, then we are left with the idea that events befalling one particle somehow 'instantaneously' influence the properties of other particles spacelike-separated from it.  As discussed in section 1, in relativity the idea of instantaneity is itself relative; if two spacelike-separated events are simultaneous in some frame of reference, there will always be other frames of reference in which one precedes the other, and still others in which the order is reversed.  So from some point of view, the event which is supposed to be doing the influencing doesn't happen until after the event which is being influenced.
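
A small numerical sketch of that last point (illustrative numbers of my own, units with the speed of light equal to 1): two spacelike-separated events which are simultaneous in one frame are assigned opposite time-orderings by observers moving in opposite directions.

    # Sketch: the Lorentz transformation t' = gamma * (t - v * x), with c = 1.
    # Two events simultaneous in the lab frame, but 10 units apart in space,
    # occur in opposite orders for observers moving in opposite directions.
    import math

    def boosted_time(t, x, v):
        gamma = 1.0 / math.sqrt(1.0 - v**2)
        return gamma * (t - v * x)

    event_a = (0.0, 0.0)    # (t, x)
    event_b = (0.0, 10.0)   # simultaneous with A in this frame

    for v in (+0.5, -0.5):
        ta = boosted_time(event_a[0], event_a[1], v)
        tb = boosted_time(event_b[0], event_b[1], v)
        order = "A before B" if ta < tb else "B before A"
        print(f"observer velocity {v:+.1f}: t'_A = {ta:+.2f}, t'_B = {tb:+.2f} -> {order}")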

This is obviously a strange conclusion; however, it is not necessarily wrong.  Clearly for certain kinds of backwards causality, paradoxical situations can arise.  For instance, I might send a message back to my past self saying ‘Do not under any circumstances send this message,’ if superluminal (faster than light) signalling were allowed and relativity correct.  If I heed the warning, I will never send the message; how then can I come to receive it?  This presents a paradox, which could be resolved in more than one way.  Firstly, it may be that I cannot possibly send the message unless I am not going to heed it (perhaps I suffer amnesia in the interim).  This could allow spacetime to be globally consistent, but it leaves the puzzling problem of what could be stopping me from creating paradoxes[12], if any kind of signalling to my own past were possible.

Perhaps the ripest resolution in terms of science fiction possibilities is to reject the need for global consistency of spacetime, and say that the result of my message is that I will never send that message again; it is a Fergus Murray on a different time-line who sends the message, and the fact that he will not exist in our own need not concern us.  Certainly the world looks globally consistent so far (although some people claim otherwise, they are generally assumed to be mad).  But has anyone actually tested to make sure?  Would we necessarily have noticed?  I don't think so.

It may be possible, in any case, to avoid the paradoxes altogether while accepting the existence of superluminal causal links, as Maudlin (1994) and others suggest, because we have no real control over the links.  We cannot, for instance, use them to send superluminal signals of any kind.  So paradoxes explicitly based on signalling will automatically fail (although every now and then someone publishes a new paper claiming to have found a loophole which allows signalling, before someone else points out that it doesn't work after all[13]).  Maudlin argues that this means we cannot have any causal paradoxes at all.  I am not altogether convinced by this; it seems to me that in general, if the links make real changes to spacelike-separated events, then one of three things must be true.  Either the links display a certain symmetry whereby it is impossible to say which event causes which, although we may be able to say that they cause each other – potentially avoiding paradoxes by allowing any reference frame to construct a chain of events in which no causes travel backwards; or there must be a preferred frame of reference in which all backwards causation is seen to be analogous to an optical illusion; or it should be possible to find causal chains in which something can affect its own past, irrespective of the possibility of controlling the chains.  It may still be possible to avoid paradoxes, but Maudlin's critique does not seem conclusive in this case.

However, even if we rule out causal paradoxes there remains a difficulty, potentially quite serious, in reconciling relativity and quantum theory.  This concerns the question of Lorentz invariance:  That is, the equivalence of all frames of reference.  This is one of the ontological pillars of relativity.  If all reference frames are not in fact equivalent, it seems as though the laws of physics are conspiring to hide this fact from us.  It is true, though, that although strange this is not inconceivable; indeed it was the position held by Lorentz, Fitzgerald, Poincaré and Larmor, who between them deduced the mathematics of special relativity without achieving Einstein’s conceptual insight into the problem.

It is just this conceptual insight which we must to a large extent abandon if we throw out Lorentz invariance and accept the existence of some preferred frame of reference.  No conflict with actual experimental data will result: the mathematics of the theory is unchanged, even though the conceptual insight played an important part in making relativity theory popular, and without it it seems unlikely that Einstein would have progressed on to develop the general theory of relativity.  Neither the standard (Copenhagen) interpretation, nor Bohm's theory, nor the Ghirardi-Rimini-Weber interpretation (in which the wave function collapses of its own accord from time to time, completely at random) is explicitly Lorentz-invariant.  John Cramer's Transactional Interpretation, based on the interaction of waves travelling forwards and backwards in time, is Lorentz-invariant, but may fall on other grounds[14].

4.  No Bell Non-Locality? 

It is still possible that we are not in fact forced to accept Bell's conclusion that we must either accept non-local influences or abandon realism; I will turn to possible reasons for accepting non-locality besides Bell's theorem in section 6.  Thomas Angelidis has produced what he claims is a fully local, realistic theory in which each individual run of the experiment has associated with it a probability distribution satisfying the Bell inequality, and yet the theory predicts just the same correlations as standard quantum theory when averaged over several runs.

The basic thing Angelidis has done is to introduce a slightly finer state specification; in his theory the chance of a particle being observed with a particular spin depends on the angle of the measuring apparatus, and the common plane of polarisation of the two particles.   For each individual pair of photons, their probability distribution factorises in exactly the way required by Bell; it is only over several runs that the inequality-violation observed in experiments takes place.  I will not go into the details of the theory here.

This theory constitutes an example of precisely what Bell thought he had ruled out.  How can this be possible?  A number of commentators since Bell have argued that Bell's conclusions are untenable.  Thomas Brody[15] argues that the existence of a joint probability distribution (j.p.d.) of the form on which Bell's theorem relies depends crucially on a third assumption, one which is generally not made explicit in derivations of Bell's inequality.  This is what he calls the joint measurability assumption – the assumption that two properties can be measured without mutual interference, in this case the polarisation of a single particle along two different axes.  This does not hold in the Angelidis theory.  Brody lists five derivations of Bell's inequality in his chapter on joint measurability – Bell's original derivation, a quantum logical one due to Santos, two probability-theoretic derivations due to Wigner/Holt and Suppes/Zanotti respectively, and one based on the treatment of experimental data.  Of these, only the first two make use of a locality assumption, and hidden variables are relevant only to the first.  However, all of these derivations rely on an assumption of joint measurability.  He also gives accounts of certain classical situations in which Bell-type inequalities are violated; these are due to Notarrigo (1983) and Scalera (1983, 1984).

Brody is not the first to have made use of these derivations to attack Bell's conclusions, although his critique is especially piercing.  Stapp (1982) seems to have been the first to point out that the inequalities can be perfectly well derived without making any reference to hidden variables.  Fine (1982) showed that the existence of a four-variable (quadrivariate) joint probability distribution implies that the Bell inequality should hold.  Building on these results, Willem de Muynck[16] has published a series of papers attacking the common conclusion that Bell inequalities sound a death knell for Einsteinian locality, on the grounds that it is just as possible to derive Bell inequalities for (some) non-local hidden variables theories as it is for local ones.  He points out, however, that this depends on the sort of hidden variables theories under consideration.  An objectivist account, in the sense that the values of observables are independent of our experimental set-up, does seem to be ruled out.  For de Muynck, the most promising possibility is that of contextual hidden variables theories, in which the overall experimental set-up plays an important part in determining the conditional probabilities.

Taking a rather different tack, Cartwright and Chang (1993) point out that the factorisability condition required by Bell in the derivation of his inequality is quite unreasonable when applied to stochastic (that is, fundamentally statistical) theories.  It should be noted, however, that it cannot be the determinism of the universe as such that is at issue here, but rather the 'determinism' of the causes – that is, their ability to reliably produce a given outcome.  Any fundamentally probabilistic theory could be simulated with arbitrary precision by a deterministic model.  We could imagine, for instance, replacing the probabilities in quantum theory with the roll of a deterministic but in practice unpredictable die; the substitution would make no difference whatever to any of our calculations, so long as we remained unable to give an account of the behaviour of the die going beyond its apparent randomness.  It is also possible that there are patterns in the data which are too subtle for us to have noticed yet, but which may one day become apparent – if the outcomes of experiments turned out to be determined not by probabilities but by a function which varies rapidly with time (sinusoidally, perhaps), we could easily have missed the fact, especially since almost nobody is looking for such patterns.  It was accepted by most of the physics community long ago that quantum mechanics simply is indeterministic, and that there is no sense in trying to avoid this – although it has to be said that the principle that every event must have a sufficient cause served us rather well in the days before quantum physics.

The only way in principle to distinguish between something which is intrinsically random and something which follows pre-determined patterns is to find the patterns.  We can never be certain that we are not missing an underlying pattern which fully determines outcomes, because such a pattern can have effects arbitrarily similar to true intrinsic randomness; we can rule out such patterns one by one, but we can never rule out their existence entirely.
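
The point is easy to illustrate (a trivial sketch of my own): a completely deterministic pseudo-random number generator reproduces 'quantum' statistics to whatever accuracy we care to test, so measured frequencies alone cannot tell intrinsic randomness from an unknown deterministic rule.

    # Sketch: a deterministic pseudo-random generator (Python's Mersenne
    # Twister with a fixed seed) reproduces the quantum pass-probability
    # cos^2(theta) for a polariser as closely as we like.
    import math, random

    rng = random.Random(12345)       # fully deterministic once the seed is fixed
    theta = math.radians(30.0)
    p_quantum = math.cos(theta)**2   # 0.75

    trials = 1_000_000
    passed = sum(rng.random() < p_quantum for _ in range(trials))
    print(f"predicted frequency {p_quantum:.4f}, simulated frequency {passed / trials:.4f}")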

Cartwright and Chang argue, though, that the factorisability condition does seem to follow from these four premises:
 

‘(i) The Contiguity Condition:  every cause and its effect must be connected by a causal process that is continuous in space and time.
(ii)  The Finite-Speed Condition:  all causal processes must propagate at a finite speed.  In particular, popular “relativistic” variant of this condition, which we will be working with, stipulates that causal processes can travel at most at the speed of light.  In principle, it would be possible to have versions of this condition which impose some other fixed speed-limits on propagation, or even no particular fixed limit at all.
(iii)  The Markov Condition:  the stages of causal processes must have no memory, so that complete information on temporally intermediate stages make earlier stages causally irrelevant. [...]
(iv)  The Cause-Correlation Link:  all correlations between spatially separated events must be completely explainable by either a direct (spatio-temporally continuous) causal connection between them, or by common events in the past causal history of each.’


Cartwright & Chang conclude that the contiguity condition required here is probably violated in quantum mechanics; they may be correct, since various aspects of quantum mechanics seem to be incompatible with the condition – the collapse of the wave function, the supposed ‘quantum leaps’ made by electrons around atoms, and so on.  It still remains possible that these may all be describable by fully continuous processes, however.  The superluminal connections present more trouble for a continuous model of quantum mechanics, since (if real) they seem to be unmediated.  Cartwright & Chang favour the idea that the non-local action is real and discontinuous, and simply falls outside the range of applicability of special relativity.

A final note on the possibility of maintaining locality – although the experiments strongly favour the correlations which are predicted, they detect at most about 63% of the photons involved.  At this level of efficiency it remains possible, if highly unlikely, that the photons being missed in the experiments are just those which do not fit the predicted pattern of correlation.  More efficient experiments should hopefully be able to settle this question.

5.  Entanglement in Practice

Leaving aside questions of non-local action for now, the fact remains that the phenomenon known as entanglement is a real feature of our world, whatever its exact nature.  For some time entanglement was thought to be important only in very special circumstances, but in the last decade or so it has been shown to be much more important than was thought – it is in fact ubiquitous in quantum mechanics, the rule rather than the exception.  It turns out, for instance, that entanglement seems to be necessary to explain the results of the classic Young’s Two-Slit Experiment[17], which have traditionally (but erroneously) been explained in terms of Heisenberg Uncertainty.  In these experiments, a beam of particles is sent through two slits in a barrier, towards a detecting screen.  The result is a pattern of light and dark fringes, showing that the so-called particles interfere with each other much like classical waves on water.  The same pattern is built up even if the particles pass through the apparatus one at a time – that is, it’s not just that the particles interfere with each other, but that each particle interferes with itself, in a way which seems to force us to the counter-intuitive conclusion that it passes through both slits at once.

If, however, we shine a light near the slits, we will see the particle passing through either one slit or the other.  Surely, then, it can’t be right to say that the particle is really passing through both slits?  This doesn’t help solve the quandary.  The thing is, if you shine a strong enough light to see which slit a particle goes through, you destroy the interference pattern; either you can see the interference patterns, or you can find out which way the particle goes, but not both.

The traditional explanation of this fact, due to Bohr, is given in terms of Heisenberg’s uncertainty principle, which says that knowing the position of a particle means making its momentum more uncertain.  To tell which slit a particle has gone through, you need to know its position with an uncertainty smaller than the gap between the slits, which would usually be done by bouncing a photon off it.  The photon gives the particle a small ‘kick’, changing the momentum uncontrollably by just enough that the interference pattern is destroyed.
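
The order-of-magnitude arithmetic behind Bohr's argument can be sketched in a few lines (illustrative numbers of my own; the argument only works to within factors of order one, which is precisely the 'numerical coincidence' discussed below): locating the particle to better than the slit separation d implies a momentum kick of roughly h/d, and the angular smear this produces is comparable to the angular spacing of the fringes, λ/d.

    # Sketch of the order-of-magnitude argument: a position measurement good
    # enough to pick out one slit (delta_x ~ d) implies a momentum kick of
    # roughly h / d, and the resulting angular smear is about the same size
    # as the angular spacing of the interference fringes, lambda / d.
    h = 6.626e-34           # Planck's constant, J s
    wavelength = 1.0e-10    # de Broglie wavelength of the particle, m (illustrative)
    d = 1.0e-6              # slit separation, m (illustrative)

    p = h / wavelength                  # particle momentum
    kick = h / d                        # momentum disturbance from locating the slit
    angular_smear = kick / p
    fringe_spacing = wavelength / d     # angular spacing of the fringes

    print(f"angular fringe spacing ~ {fringe_spacing:.2e} rad")
    print(f"angular smear from kick ~ {angular_smear:.2e} rad")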

Or so it was thought, until Gerhard Rempe and his colleagues at the University of Konstanz in Germany proved that the pattern is still destroyed even when the particles are tracked using photons with far too little energy to smear out their interference pattern.  They did this by using cold rubidium atoms for their particles, pure laser light for their barriers, and low-frequency microwaves emitted by the atoms themselves to detect them.  If it were the uncertainty principle which destroyed the patterns, the experiments wouldn’t have worked; the smearing in the momentum is not sufficient to destroy the fringes in this case.

The physicist Yu Shi, based at the University of Cambridge, has shown that it is just a fortunate numerical coincidence which allowed the uncertainty principle to apparently explain the two-slit experiment; if the full equations of quantum theory are taken into account, the uncertainty principle is seen to be inadequate.  Entanglement, however, does the job admirably; according to Shi, the interference pattern disappears as a result of the entanglement between the diffracted particles and their photon partners.

 A number of practical applications for entanglement have also been proposed, such as the possibility of sending an encrypted message by means of entangled particles.  Knowledge of the correlations between the two particles of each pair makes it possible, at least in theory, to build a shared secret code out of them.  What is more, this technique should be completely secure, since if anyone were to intercept the photons en route, the recipient could tell.  Briefly, quantum encryption works by sending the recipient a stream of photons, each entangled with a photon measured by the sender.  By comparing notes on their measurements over a normal phone line, say (a string of symbols which is meaningless without the results themselves), the two parties can establish a key and send messages to each other which cannot be intercepted without their knowledge.
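
A toy sketch of the idea (deliberately oversimplified: it ignores the choice of measurement bases and the eavesdropping test that real entanglement-based protocols depend on): once the two parties hold identical strings of random bits, standing in for their perfectly correlated measurement outcomes, those bits can be used as a one-time pad over an ordinary public channel.

    # Toy sketch only: identical random bits (standing in for correlated
    # measurement outcomes on entangled photons) used as a one-time pad.
    import random

    n_bits = 64
    shared_key = [random.randint(0, 1) for _ in range(n_bits)]   # 'measurement outcomes'
    message = [random.randint(0, 1) for _ in range(n_bits)]

    ciphertext = [m ^ k for m, k in zip(message, shared_key)]    # sent over a public channel
    decrypted = [c ^ k for c, k in zip(ciphertext, shared_key)]

    print("message recovered exactly:", decrypted == message)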

Notably, none of the possibilities for utilising quantum entanglement makes explicit use of its supposed superluminal character.  Whether it is physically possible to make any use of this remains to be seen, but most commentators agree it is unlikely.

6. Non-Locality without Bell?

Although it seems that the Bell inequality, at least as it is usually presented, does not give us the convincing reasons for believing in the non-locality of physics which it has often been claimed to provide, there may be other reasons for believing in non-local causes.  Greenberger, Horne and Zeilinger (1989) have presented a scheme which avoids the criticisms outlined above, and which, if correct, really seems to leave no room for a local interpretation.  However, this scheme has not yet been tested experimentally.

The GHZ scheme uses three particles rather than two, and measurements of spin rather than polarisation.  The particles are sent out in different directions, and one of two sorts of spin measurement is made on each – call the first type X and the second type Y.  The measurements are made in one of four combinations:  either every particle is asked X, or two of the particles are asked Y and the other one X.  Quantum mechanics predicts with 100% certainty that if only X measurements are made, an odd number of the particles will be found in the ‘spin-up’ state, whereas if two Y measurements are made, an even number of particles will be measured as ‘spin-up’.  The theory says nothing about whether the odd number will be 1 or 3, or the even number 0 or 2.

In this arrangement (assuming it is not eventually disproved, either by experiment or by the unlikely discovery of a theoretical error) it seems to be truly impossible to find a local explanation for the correlations; if each particle were to decide in advance what it will answer in response to the two questions, a single run of the experiment would stand at least a one in four chance of showing results that are in conflict with quantum mechanical predictions.
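
That impossibility can be checked by brute force (a sketch based directly on the description above, with spin-up written as 1 and spin-down as 0): no way of giving the three particles predetermined answers to the X and Y questions satisfies all four quantum predictions at once.

    # Sketch: enumerate every assignment of predetermined answers (spin-up = 1,
    # spin-down = 0) to the X and Y questions for the three particles, and
    # check the four predictions: X,X,X -> odd number of spin-up results;
    # each combination with two Y measurements -> even number of spin-up.
    import itertools

    consistent = 0
    for x1, y1, x2, y2, x3, y3 in itertools.product((0, 1), repeat=6):
        if ((x1 + x2 + x3) % 2 == 1 and     # X, X, X
            (x1 + y2 + y3) % 2 == 0 and     # X, Y, Y
            (y1 + x2 + y3) % 2 == 0 and     # Y, X, Y
            (y1 + y2 + x3) % 2 == 0):       # Y, Y, X
            consistent += 1

    print("assignments satisfying all four predictions:", consistent)   # prints 0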

Another area of quantum theory which may be irreconcilable with locality is the Aharonov-Bohm effect, whereby particles travelling through a field-free region are influenced by a nearby magnetic field.  Richard Healey[18] argues that the apparent non-locality of this effect is closely analogous to that manifested in violations of Bell’s inequality.

GHZ and Aharonov-Bohm aside, it may also be that a more complete understanding of quantum theory will incorporate a fuller explanation of non-locality than we have at present, at a fundamental level.  For instance Mark Hadley[19] has approached the problems of quantum theory from a spacetime perspective, building on the idea of particles as geons – kinks in the spacetime continuum – which was conceived by Einstein, but which met with serious problems at that time.  Hadley has had some success in resolving these problems and using geons to explain ‘quantum weirdness’ by giving a fully general-relativistic treatment, taking into account the distortion of time as well as of space, where Einstein and others working on similar lines had considered only space.  Although the actual solutions of general relativity required to flesh out the theory have yet to be found, this could well prove to be a promising avenue of exploration.
