Heisenberg's Uncertainty Principle
No, it Doesn't Mean Things Change Unpredictably When We Look at Them. Or Does It?
Quantum mechanics is a confused and confusing theory. Nowhere is this confusion more manifest than in one of the theory’s core concepts, discovered by the German physicist Werner Heisenberg early in 1927 and published later that year. This is, of course, Heisenberg’s famous uncertainty principle. At the time, even Heisenberg was confused about what his principle might actually mean. The very possibility of clarity appeared doomed from the outset.
Central to Heisenberg’s argument was an experiment in which a microscope is used to ‘see’ a single electron. No such microscope exists, so this is a ‘thought experiment’, designed to illustrate physical principles rather than practical possibilities. Classical (meaning non-quantum) theories of optical devices suggest that the resolving power of a microscope - the smallest distance it can distinguish - is proportional to the wavelength of the light used to illuminate the object under study. So, to ‘see’ the extremely tiny electron requires light of extremely short wavelength, shorter even than X-rays, equivalent to that associated with gamma radiation.
But, Heisenberg reasoned, there’s a catch. Albert Einstein had set the ball rolling in 1905 by suggesting that light (including all forms of radiation) might come in ‘bundles’ of energy he called light-quanta. Light had long been thought of as a wave; Einstein was suggesting that it also possesses particle-like properties. In 1923, French physicist Louis de Broglie went further. He discovered a connection between these properties, summarised in an exceptionally simple equation. The wavelength of light (a wave property) is equal to a physical constant (named for German physicist Max Planck) divided by the momentum of the associated particle of light, which by late 1927 had gained the name photon. Classically, the momentum of an object is given by its mass multiplied by its speed in a specific direction in space - very much a particle property. Switch de Broglie’s equation around and we have
particle momentum = Planck’s constant divided by wavelength
So, the shorter (smaller) the wavelength, the larger the momentum of the associated photon. This is a bit weird, but de Broglie wasn’t suggesting that his relationship applies only to light waves/photons. He was suggesting that it applies to electrons, too.
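For readers who like to see it in symbols (standard notation, not used elsewhere in this piece), de Broglie’s relation and its rearrangement are:

\[
\lambda = \frac{h}{p}
\qquad\Longleftrightarrow\qquad
p = \frac{h}{\lambda},
\]

where λ is the wavelength, p is the momentum, and h is Planck’s constant, roughly 6.626 × 10⁻³⁴ joule-seconds. Halve the wavelength and the momentum of the associated photon doubles.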
Heisenberg figured that hitting the hapless electron with a gamma-ray photon would be like hitting it with a sledgehammer, a blow so powerful and clumsy it would knock the electron out of position on the microscope stage. It might be possible to be specific about where the electron was, but there would be no fix on its speed and direction. All was not completely lost, however, as he found he could set some limits on the effect. Collisions between photons and electrons had been studied a few years earlier by American physicist Arthur Compton, and Heisenberg was able to use the ‘Compton effect’ to deduce that
uncertainty in position multiplied by uncertainty in momentum ~ Planck’s constant.
Switching the terms around quickly reveals that we can reduce the uncertainty in position only by accepting greater uncertainty in momentum, and vice versa. What we can’t do is simultaneously determine both with complete certainty. This is the uncertainty principle.
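In the usual symbolic shorthand (mine, not Heisenberg’s original notation), with Δ standing for ‘uncertainty in’:

\[
\Delta x \,\Delta p \sim h
\qquad\Longrightarrow\qquad
\Delta p \sim \frac{h}{\Delta x},
\]

so squeezing the uncertainty in position Δx towards zero forces the uncertainty in momentum Δp to grow without limit, and vice versa.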
Professor Bohr Explains
Alas, Heisenberg’s reasoning was flawed and, as he soon discovered, this is not the right way to interpret his principle. His mentor, Danish physicist Niels Bohr, attempted to put him straight. Heisenberg was confused about how a classical microscope works - a failing that had almost cost him his doctorate at the University of Munich when, to the dismay of his examiners, it had become apparent that he could not derive the equation for a microscope’s resolving power. He had also incorrectly assumed that the collision between the gamma-ray photon and the electron would be entirely unpredictable. In truth, the momentum imparted to the electron can in principle be known as precisely as the wavelength of the photon that slams into it.
How then does the uncertainty arise? Bohr reasoned that it arises because, in our efforts to describe quantum phenomena, we are obliged to reach for two familiar but incompatible classical theories, of waves and of particles. These just can’t be used to describe one and the same thing - such as a photon or an electron - without provoking some logical contradictions. And yet in quantum mechanics, nature somehow finds a way to reconcile these contradictions. For an electron, the de Broglie relationship connects the momentum of the electron-as-a-particle with the wavelength of the electron-as-a-wave. But the electron cannot be both wave and particle simultaneously.
We can readily understand how such contradictions create ‘uncertainty’. If you’ve spent time in your local gym, you may be familiar with ‘battle cables’ used for building upper body strength (think of Thor getting back into god-like shape in an early scene in Love and Thunder). Gripping the cable by its handle and moving it rapidly up and down creates a wave motion in the cable which travels along its length. If the up-and-down motion is uniform then the wavelength of the resulting waves - the length of a single up and down cycle within the sequence - will be well-defined. But it would seem to make no sense to talk about ‘position’, as the wave motion is spread all along the cable. If the cable is of sub-atomic dimensions, we can use the de Broglie relationship to determine the momentum from the wavelength. But the position is undefined.
Now imagine we raise the handle in a single upwards motion and then, instead of settling into a regular up-and-down rhythm, bring it back to rest in the middle. This creates a single ripple which travels along the cable. It now makes sense to talk about the instantaneous position of this ripple as it moves down the cable. But as there is no complete up-and-down cycle, its wavelength (and, from the de Broglie relationship, its momentum) is now undefined.
We can’t get Heisenberg’s uncertainty relation from this, but Bohr developed a variation of this argument based on the notion of a ‘wave packet’, which can be created by adding together a number of waves of different wavelength. Obviously, a single wave has a well-defined wavelength but, as we’ve seen, no defined position. Adding lots of waves with different wavelengths together creates a wave packet with a peak amplitude much more localised in space, sharpening the position. But the more waves we add (all of different wavelength), the less defined the wavelength of the resulting packet. In classical optics, there is a relation between the ‘uncertainty’ in the position of such a wave packet and the uncertainty in the reciprocal of its wavelength, such that the two multiplied together are of the order of 1. This has nothing whatsoever to do with quantum mechanics, but we can once again make use of de Broglie’s relation to set the uncertainty in reciprocal wavelength equal to the uncertainty in momentum divided by Planck’s constant. The result can be quickly rearranged to give us the uncertainty principle.
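The ‘quick rearrangement’ is worth spelling out (a sketch in standard notation, with Δ denoting the spread in each quantity):

\[
\Delta x \,\Delta\!\left(\frac{1}{\lambda}\right) \sim 1,
\qquad
\frac{1}{\lambda} = \frac{p}{h}
\quad\Longrightarrow\quad
\Delta x \,\frac{\Delta p}{h} \sim 1
\quad\Longrightarrow\quad
\Delta x \,\Delta p \sim h.
\]

The classical Fourier trade-off itself is easy to see numerically. The short Python sketch below is my own illustration, not something from Bohr or Heisenberg, and the particular numbers and function names are arbitrary. It adds together cosine waves whose wavenumbers k = 2π/wavelength are spread about a central value, then measures the width of the resulting packet: the wider the spread of wavenumbers, the narrower the packet, with the product of the two spreads staying of order 1 throughout.

```python
import numpy as np

# Illustrative sketch of Bohr's wave-packet argument - purely classical, no quantum
# mechanics involved. Cosine waves cos(k*x), with wavenumbers k given Gaussian
# weights about a central value k0, are added together. A narrow spread of
# wavenumbers gives a broad packet; a wide spread gives a sharply localised one.

x = np.linspace(-200.0, 200.0, 20001)   # positions, arbitrary units
k0 = 2.0                                # central wavenumber

def packet_width(sigma_k, n_waves=401):
    """R.m.s. spatial width of a packet built from cosines whose wavenumbers
    are spread (with Gaussian weights of width sigma_k) about k0."""
    ks = np.linspace(k0 - 4 * sigma_k, k0 + 4 * sigma_k, n_waves)
    weights = np.exp(-(ks - k0) ** 2 / (2 * sigma_k ** 2))
    amplitude = sum(w * np.cos(k * x) for w, k in zip(weights, ks))
    intensity = amplitude ** 2
    prob = intensity / intensity.sum()        # normalised weight across the grid
    return np.sqrt(np.sum(prob * x ** 2))     # r.m.s. width about the centre, x = 0

for sigma_k in (0.05, 0.2, 0.5):
    dx = packet_width(sigma_k)
    print(f"spread in wavenumber {sigma_k:4.2f} -> packet width {dx:6.2f}, "
          f"product {dx * sigma_k:.2f}")
```

Feeding de Broglie’s relation into this purely classical result is what turns it into the quantum uncertainty principle.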
In Heisenberg’s original interpretation, the uncertainty arises because of an essential disturbance of the object under study by the means used to study it. This is what is known as an observer effect, which places limits on what is in principle measurable. We don’t normally fuss about this kind of thing when dealing with large classical objects, such as tennis balls. We don’t imagine that scattering light from a tennis ball (so that we can see it) will affect the outcome of a player’s serve.
Bohr argued instead that it is our use of incompatible classical descriptions - of waves and of particles - that places limits on what is knowable about the (quantum) object under study. There is a sense that the position and momentum of an electron are only defined in the context of the apparatus used to study them and, like the battle cable or wave packet analogies, what we find depends on how we choose to set the system up and what we choose to look for.
Heisenberg wasn’t entirely happy with Bohr’s version of events. He had already submitted his paper for publication and Bohr insisted that it should be withdrawn. They argued, bitterly. Eventually, Heisenberg relented and agreed to add a note to the proofs acknowledging Bohr’s point of view. ‘The consistent care needed in employing the uncertainty relation is, as Professor Bohr has explained, essential, among other things, for a comprehensive discussion of the transition from [quantum] to [classical] mechanics’.
Changing the Game
The uncertainty principle was a game-changer. Physics to this point had been ruled by a description based on the classical mechanics of Isaac Newton, in which every effect is completely determined by a cause, nothing is left to chance, and there is no room for free will. Heisenberg had pressed a release button. The physics of the microscopic world of atoms and sub-atomic particles, on which all material substance in the universe is based, was now revealed to be telling a very different story. The Newtonian physics of strict causality, of ‘when I do this, that will happen’ had been replaced by a quantum physics of ‘when I do this, that may happen with a certain probability’.
As far as English physicist Arthur Eddington was concerned, ‘[S]cience thereby withdraws its moral objection to free will’. And, it seemed, God: ‘religion first became possible for a reasonable scientific man about the year 1927’. The rejection of causality and determinism and, more extremely, of a hard, material reality, played to the sense of political crisis in Weimar Germany that many German physicists were keen to embrace, whilst provoking anger among communist philosophers wedded to Marxist materialism. In 1947, Andrei Zhdanov, thought to be Stalin’s successor-in-waiting, declared war on ‘modern bourgeois atomic physicists’, whose philosophy, he proclaimed, ‘lead them to inferences about the electron’s possessing “free will”, to attempts to describe matter as only a certain conjunction of waves, and to other devilish tricks’.
The new quantum rules left an enduring imprint on modern culture. The notion that our access to knowledge has limits governed by an unnavigable lack of certainty - that just looking at something changes the way it appears, or that criticising a literary work changes the way others will read it - entered the public’s collective consciousness. It is manifest in many works of literature, theatrical plays, movies, and puns, including Dilbert’s project uncertainty principle: the more certain the understanding of the project, the greater the uncertainty in its cost. Like Schrödinger’s cat, the uncertainty principle gained popular meanings far removed from its interpretation and use in physics.
Back to Physics
But Heisenberg’s paper of 1927 was quite alarmingly vague. What did he mean by ‘uncertainty’? Of course, the paper was written in German, and those of us who do not speak the language are at the mercy of its translators (does translating a work inevitably change the way others will read it?). In his original, Heisenberg used words such as Ungenauigkeit (inaccuracy) and Unbestimmtheit (indeterminacy). Bohr preferred Unsicherheit (insecurity or doubtfulness). Among physicists, the principle is today also widely known as the indeterminacy principle.
This sharpens the language but doesn’t tell us what it means. Later in 1927, the American physicist Earle Kennard defined ‘uncertainty’ as a ‘standard deviation’, a term from statistics which summarises the spread (or variation) of results around a mean value. If the results form a normal distribution (a bell-curve) around the mean, then about 68% of these will fall within a single standard deviation of the mean (95% will fall within two standard deviations and 99.7% within three standard deviations). He found that the standard deviation in position multiplied by the standard deviation in momentum must be greater than or equal to Planck’s constant divided by 4pi. The German mathematician Hermann Weyl independently found the same result a year later.
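Written out in the now-standard notation, with σ for standard deviation, Kennard’s result reads:

\[
\sigma_x \,\sigma_p \;\ge\; \frac{h}{4\pi} \;=\; \frac{\hbar}{2},
\]

where ħ (‘h-bar’) is the physicists’ shorthand for Planck’s constant divided by 2π.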
In 1929, the American physicist Howard Robertson joined a few dots. Another puzzling feature of the algebra of quantum mechanics is that it is ‘non-commutative’. In ordinary algebra, the order in which we multiply things together is not important: X multiplied by Y gives the same result as Y multiplied by X. In other words, XY minus YX = 0. Mathematicians say that X and Y commute, and call the term [X,Y] = XY - YX the commutator. But this is no longer true for certain properties in quantum mechanics, such as (you guessed it) position and momentum. In quantum mechanics, position multiplied by momentum does not give the same result as momentum multiplied by position. In other words, the quantum commutator for position and momentum is not zero. It is, in fact, equal to the square root of minus 1 multiplied by Planck’s constant divided by 2pi. The explanation for this takes us into mathematical territory beyond the scope of this essay (but for curious readers is explored fully in my book The Quantum Cookbook).
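In symbols, with x̂ and p̂ standing for the quantum-mechanical position and momentum operators (operator notation isn’t used elsewhere in this piece), the non-zero commutator is:

\[
[\hat{x}, \hat{p}] \;=\; \hat{x}\hat{p} - \hat{p}\hat{x} \;=\; i\hbar,
\qquad
\hbar = \frac{h}{2\pi}.
\]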
Robertson was able to derive the uncertainty principle written in terms of standard deviations directly from the commutator for position and momentum. In 1930, Schrödinger generalised Robertson’s derivation to deduce a version that accounts for circumstances where the properties are inter-dependent. Schrödinger’s uncertainty principle? Who knew?
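Robertson’s general result, sketched here in modern notation, applies to any pair of properties A and B; with the position-momentum commutator above, it leads straight back to Kennard’s inequality:

\[
\sigma_A \,\sigma_B \;\ge\; \tfrac{1}{2}\,\bigl|\langle[\hat{A},\hat{B}]\rangle\bigr|,
\qquad\text{so}\qquad
\sigma_x \,\sigma_p \;\ge\; \tfrac{1}{2}\,\bigl|\langle i\hbar\rangle\bigr| \;=\; \frac{\hbar}{2} \;=\; \frac{h}{4\pi},
\]

where the angle brackets denote average (expectation) values.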
The Gamma-ray Microscope, Again
Despite his humbling experiences with his gamma-ray microscope thought experiment, Heisenberg wasn’t quite ready to put this behind him. A few years later, he assigned a young student in his group at Leipzig, Carl Friedrich von Weizsäcker, to reinvestigate the argument using a fledgling form of quantum electrodynamics that he and Wolfgang Pauli had devised to describe the light-quantum. The purpose of this analysis was not to derive the uncertainty relations, which were assumed to hold. It was rather to demonstrate that uncertainty would still prevail in the new representation.
Von Weizsäcker published the results of his studies in 1931. He went much further than Heisenberg had done in his erroneous analysis. He discovered that - in principle - the microscope can be used to discover the initial position of the electron or its momentum, depending on whether the photographic plate which captures the image of the scattered gamma-ray photon is placed in the image plane or the focal plane of the microscope lens. A few years later the talented mathematician and philosopher Grete Hermann went further still, asking us to imagine that we remove the photographic plate entirely. How should we then describe the situation? Quantum mechanics has a ready answer - as a superposition of both possibilities. Schrödinger had not yet introduced the term ‘entangled’ (he would do this in 1935), but this is what Hermann was emphasising. As a result of their interaction, the gamma-ray photon and the electron had become entangled, and what we can discover about their properties depends on how we choose to interrogate the scattered photon.
We’re Not Done Yet
It’s a mistake to think that science progresses when conclusive proof is established for the superiority of one theory or interpretation over another. Even when a consensus prevails within the scientific community, there will always be dissidents who argue that we should hold on, we’re not done yet.
The interpretation of ‘uncertainty’ as a standard deviation leans towards a statistical interpretation in which the scatter around the mean arises in repeated measurements of identically-prepared systems. This harks back to Heisenberg’s original argument based on an observer effect - the object under study may have well-defined properties all along but inevitable errors in measurement lead to a statistical uncertainty. Kennard was well aware of the potential for conflict, and preferred to retreat to ambiguity. He simply claimed that in quantum mechanics, the ‘true’ value of a property such as position or momentum does not exist ‘in a physical sense’.
Much more recent investigations have sought to unpack a number of effects that became bundled under the heading of ‘uncertainty’, including uncertainty in the preparation of the system, uncertainty resulting from the disturbance of the system by the measurement (observer effect), and uncertainty resulting from statistical errors in the measurements. Arguably, there is no single ‘uncertainty principle’, but rather a whole collection of them. The motivation for revisiting these fundamental quantum concepts inevitably arose as the precision of laboratory measurements improved, especially those related to the detection of gravitational waves. Heisenberg and others reached for thought experiments because, at the time, real experiments were inconceivable. Contemporary physicists have been busy conceiving such experiments.
Claims that Heisenberg’s version of the uncertainty principle had been experimentally disproved caused a bit of a media splash in 2012. But this was a version of the principle that Heisenberg never formulated. Whether you think that Heisenberg’s version (as formulated by Kennard and Robertson) is correct or not depends on whether you think he was referring to single-interaction measurements or statistical averages. The debate will no doubt run for some time to come.
Jim Baggott is an award-winning science writer and co-author with John Heilbron of Quantum Drama: From the Bohr-Einstein Debate to the Riddle of Entanglement, published by Oxford University Press.