15
The Perfect Code
‘… every item of the physical world has at bottom – at a very deep bottom, in most instances – an immaterial source and explanation; that what we call reality arises in the last analysis from the posing of yes-no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and this is a participatory universe.’
John Archibald Wheeler50
‘… the whole show is wired up together …’
John Archibald Wheeler51
‘… time and space are not things, but orders of things …’
Gottfried Wilhelm Leibniz
Quantum entanglement is turning out to be a key player. So far, we have used it to keep track of the information coming out of a black hole, and in that context we have seen that entanglement seems to be responsible for creating what we experience as space. What we will learn now is that the way entanglement creates space appears to be very robust. This is just as well for us: we don’t want to live in a space that might be prone to falling apart.
Quantum entanglement is also a key resource for those trying to build quantum computers. At first sight, the construction of computing machines might appear to have nothing at all to do with the emergence of space. In a quantum computer, entanglement is the primary means by which information is encoded in a robust way that is resilient to damaging environmental factors. This topic, known as quantum error correction, is fundamental to the construction of working quantum computers. There are parallels here: it is beginning to look as if space is woven out of quantum entanglement in a manner similar to the way quantum engineers weave qubits together to build quantum computers. The suggestion is that there is a link between quantum computing and the fabric of reality. In this chapter we are going to explore that link.
Figure 15.1. Illustrating the causal wedge puzzle.
The source code of spacetime
In Figure 15.1, we show a slice through AdS spacetime with an entangled quantum theory on the boundary; the Poincaré disk once more. The boundary has been split into three parts, labelled A, B and C. Let’s first focus on region A. Ryu–Takayanagi tells us that the entanglement entropy of region A with regions B and C is given by the length of the shortest line that can divide the two regions. In this AdS spacetime, the shortest line is a curve. Holography tells us that if we know what is happening on the boundary (A, B and C), we know everything about the interior. It also tells us, and this is not obvious but has been proved, that if we know what is happening on A, we also know what is happening in the shaded region. In the jargon, the shaded region is known as the entanglement wedge of A, because the quantum theory on A entirely determines what happens in the shaded ‘wedge’. The same is true for regions B and C, as illustrated in the remainder of the figure.

Now consider a point somewhere near to the centre of the disk (the black dot). The left disk tells us that it is encoded on boundary regions B and C. The middle disk says it is encoded on A and C and the right disk says it is encoded on A and B. The only way all three statements can be true is if the information is encoded redundantly. This means that we could erase region A and still know what is happening at the black dot; or we could erase region B or region C. What we can’t get away with is erasing two of the three boundary regions – that would be too much. This is intriguing. It means that the answer to the question, ‘where on the boundary is the information associated with the region around the black dot encoded?’ is ‘it isn’t in any single region (A, B or C), but the information can be determined from knowledge of any two regions’. It is quantum entanglement that makes this robust distribution of information possible.
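There is a precise classical analogue of this ‘any two of three regions’ redundancy: secret sharing. In the Python sketch below (an illustration with made-up numbers, not anything drawn from holography itself), a secret plays the role of the black dot. It is hidden as a point on a random straight line, and each of three ‘regions’ holds one point on that line. Any two points determine the line, and hence the secret; any single point on its own reveals nothing.

```python
import random

def share(secret):
    # hide the secret as f(0) of a random line f(x) = r*x + secret,
    # giving one point on the line to each of three 'regions'
    r = random.randrange(1000)
    f = lambda x: r * x + secret
    return {"A": (1, f(1)), "B": (2, f(2)), "C": (3, f(3))}

def recover(p, q):
    # two points determine the line; the secret is its value at x = 0
    (x1, y1), (x2, y2) = p, q
    slope = (y1 - y2) // (x1 - x2)
    return y1 - slope * x1

shares = share(42)
# erase any ONE region: the remaining two still recover the secret
assert recover(shares["B"], shares["C"]) == 42
assert recover(shares["A"], shares["C"]) == 42
assert recover(shares["A"], shares["B"]) == 42
```

Erasing two regions leaves a single point, consistent with every possible line, which is the classical counterpart of losing two of the three boundary regions.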
According to holography, the information needed to encode the interior space is scrambled up and distributed across the boundary, which makes it hard to read but very robust against destruction. This is very similar to a technique computer scientists have discovered that is central to the construction of working quantum computers. At the time of writing, the largest quantum computers are networks of around 100 entangled qubits. The potential of these computers is vast because the ‘space’ in which calculations can be performed grows exponentially with the number of qubits, exploiting quantum entanglement as an information resource. These 100-qubit quantum computers can perform calculations in minutes that would take a conventional supercomputer longer than the current age of the Universe to complete.
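The exponential ‘space’ referred to here is easy to quantify: writing down the general state of n qubits classically requires 2ⁿ complex amplitudes, one for each combination of qubit values. A one-line illustration:

```python
# a general n-qubit state needs 2**n complex amplitudes to describe
# classically, which is why the computational 'space' grows
# exponentially with the number of qubits
for n in (1, 10, 100):
    print(f"{n:>3} qubits -> {2**n} amplitudes")
```

For 100 qubits that is already about 10³⁰ numbers, far beyond what any conventional computer could store.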
One of the biggest challenges in building large-scale quantum computers is preventing the qubits from becoming entangled with their environment. Given what we know about entanglement and quantum information, it should be clear that this would be a bad thing because information would ‘leak out’ of the computer into the surroundings and the computer wouldn’t work. Perfect isolation isn’t practicable, so what is needed is a way to protect the important qubits that are needed to program the computer: a way to encode information that makes it hard to destroy. This can be done by exploiting quantum entanglement to encode the information in a robust way. This is quantum error correction.
Classical error correction is a routine part of our everyday technology. A QR code, for example, encodes information redundantly, so that a sizeable part of the pattern can be destroyed while still allowing the information to be decoded. Quantum computers can’t rely on storing multiple copies of the information because, as we’ve seen, the quantum no-cloning theorem prevents quantum information from being copied. The solution is to devise a quantum circuit that encodes the important information in a redundant way without copying, but also in a way that is robust against interactions with the environment. It turns out that the latter is equivalent to requiring that the information should be scrambled up such that, in a sense, it is kept secret from the environment. It is rather as if the environment could destroy the precious information only if it understood how we have encoded it. If we scramble things up sufficiently, the environment can’t crack the code. We give a non-quantum example of redundant, non-local information encoding in Box 15.1.
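The simplest classical error-correcting scheme makes the idea of redundancy concrete: store each bit three times and take a majority vote on readout. The little Python sketch below is purely illustrative (a real QR code uses far more sophisticated redundancy than this), but it shows how a copied bit survives a single error.

```python
def encode(bit):
    # repetition code: keep three copies of the same bit
    return [bit, bit, bit]

def decode(copies):
    # majority vote: correct so long as at most one copy was corrupted
    return int(sum(copies) >= 2)

word = encode(1)
word[0] ^= 1              # the 'environment' flips one copy
assert decode(word) == 1  # the original bit survives
```

It is exactly this kind of copying that the no-cloning theorem forbids for qubits, which is why quantum error correction has to rely on entanglement instead.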
BOX 15.1. Encoding information
Suppose we want to encode the three-digit combination to a safe (abc). One way to do it is to make use of the function f(x) = ax² + bx + c. To crack the code, one needs to know the values of a, b and c. It is possible to hide this information among a large group of people by giving each person a pair of numbers: a particular value of x and the corresponding value of f(x). To recover the code, it is enough to interrogate any three people in the room for their pairs of numbers x and f(x), because three points determine the quadratic and hence a, b and c. This secret sharing scheme is a means of encoding information redundantly and non-locally. The method is robust against losing people: so long as at least three remain, we can recover the code.
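The scheme in Box 15.1 is easy to run for yourself. In the Python sketch below (the numbers and names are our own invention), shares are handed out as pairs (x, f(x)), and the combination is recovered from any three of them by fitting the unique quadratic through three points, a calculation known as Lagrange interpolation.

```python
from fractions import Fraction

def make_shares(a, b, c, n):
    # the safe combination (a, b, c) defines f(x) = a*x**2 + b*x + c;
    # give person i the pair (i, f(i))
    f = lambda x: a * x**2 + b * x + c
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation: the unique quadratic through any three
    # points yields the coefficients (a, b, c)
    (x1, y1), (x2, y2), (x3, y3) = [(Fraction(x), Fraction(y)) for x, y in shares]
    d1, d2, d3 = (x1-x2)*(x1-x3), (x2-x1)*(x2-x3), (x3-x1)*(x3-x2)
    a = y1/d1 + y2/d2 + y3/d3
    b = -(y1*(x2+x3)/d1 + y2*(x1+x3)/d2 + y3*(x1+x2)/d3)
    c = y1*x2*x3/d1 + y2*x1*x3/d2 + y3*x1*x2/d3
    return int(a), int(b), int(c)

shares = make_shares(4, 2, 7, 10)       # hide combination 4-2-7 among ten people
assert recover(shares[3:6]) == (4, 2, 7)  # any three shares suffice
```

Notice that no single person, and no pair of people, can deduce the combination: two points are consistent with infinitely many quadratics.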
The challenge for those wanting to build a quantum computer is to invent a compact device for encoding a qubit (or a bunch of qubits) inside a bigger block of qubits, so that the qubits we want are safe even if there is damage to the exterior qubits due to their interactions with the environment. Error correction is all about trying to achieve that using an optimal combination of redundancy and secrecy. We can now appreciate the connection with holography, because the coding we’ve discussed in the context of AdS/CFT is an impressive combination of redundancy and secrecy.*52 In holography, the boundary codes for the interior space, and it does so in a redundant way because we can erase part of the boundary without losing the information in the interior. It also stores information in a way that is hard to decode, since the information is scrambled up and encoded non-locally by quantum entanglement. To destroy the interior space (as Van Raamsdonk imagined) we need to destroy the entanglement over a substantial part of the boundary and not just a small part of it.
Figure 15.2. The HaPPY holographic pentagon code.
In 2015, Fernando Pastawski, Beni Yoshida, Daniel Harlow and John Preskill53 devised an arrangement of networked qubits that redundantly encodes information about the interior of the network on the boundary. This is precisely the situation we’ve been discussing in the context of holography. The coding is known as the HaPPY code (after the authors’ initials) and is shown schematically in Figure 15.2. The open circles around the outside are qubits, as are the circles inside the pentagons. In a quantum computer, the boundary qubits are those most in danger from the environment. The qubits inside the pentagons are the ones the computer will use for its operations, and these are safer because of the structure of the network. The pentagons are devices that entangle the six qubits that feed into them. They operate such that any three of the six qubits are maximally entangled with the other three. This means that the information encoded by the central qubit is robust against the erasure of up to two of the five surrounding qubits: any three of them are enough to recover it.
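The pentagon’s defining property can be checked directly on a computer. In the Python sketch below we use the standard five-qubit error-correcting code as the pentagon (the choice made by Pastawski and collaborators); the code itself is our own illustration, not theirs. We build the six-leg tensor, one central leg plus five surrounding legs, and verify that every split of the six legs into three-plus-three is maximally entangled.

```python
import numpy as np
from itertools import combinations

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
PAULI = {"I": I2, "X": X, "Z": Z}

def kron_chain(label):
    # tensor product of single-qubit Paulis, e.g. 'XZZXI'
    out = np.array([[1.0]])
    for c in label:
        out = np.kron(out, PAULI[c])
    return out

# stabilizer generators of the five-qubit code (cyclic shifts of XZZXI)
stabilizers = ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]

# project |00000> onto the code space to obtain the logical |0>
proj = np.eye(32)
for s in stabilizers:
    proj = proj @ (np.eye(32) + kron_chain(s)) / 2
zero = np.zeros(32)
zero[0] = 1.0
L0 = proj @ zero
L0 /= np.linalg.norm(L0)
L1 = kron_chain("XXXXX") @ L0   # logical X flips logical |0> to logical |1>

# six-leg tensor: leg 0 is the central qubit, legs 1-5 the surrounding ones
T = np.stack([L0, L1]).reshape([2] * 6) / np.sqrt(2)

# check: every 3|3 split of the six legs carries exactly 3 bits of
# entanglement entropy, i.e. is maximally entangled
for cut in combinations(range(6), 3):
    rest = tuple(i for i in range(6) if i not in cut)
    M = np.transpose(T, cut + rest).reshape(8, 8)
    p = np.linalg.svd(M, compute_uv=False) ** 2
    entropy = -np.sum(p * np.log2(p))
    assert abs(entropy - 3.0) < 1e-9
print("all twenty 3|3 splits are maximally entangled")
```

The assertion passing for all twenty splits is exactly the ‘any three maximally entangled with the other three’ property described above.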
The diagram shows a network with just a few layers of pentagons. You can see (from the underlying shaded pattern) that the pentagons are linked together in a manner that matches the hyperbolic tiling of the Poincaré disk. We could add more layers by moving the external qubits out by another layer. The pentagons would look very small on the diagram but that does not mean they are very small in real life. As physical devices, the pentagons could all be the same size. What matters is the way they are networked together and that is governed by the underlying hyperbolic geometry. This hyperbolic linking is an important feature, as we will now see.
Figure 15.3. The greedy geodesic has a length defined by the number of network legs it cuts through. Starting from the physical qubits dangling on the outside we can move inwards to reconstruct the interior logical qubits shown as black dots inside the pentagons.
The exciting feature of the HaPPY code is that it reproduces the most important features of AdS/CFT, and in particular the Ryu–Takayanagi result. We illustrate this in Figure 15.3. Each black dot represents a qubit. Let’s suppose that we know the state of the ‘dangling’ qubits around the edge. If lines from three known qubits feed into a pentagon, then we also know the state of the other two qubits, and also the central qubit. Qubits from the outer pentagons link into adjacent interior pentagons. As we head inwards and repeat, we always know the state of all the qubits linking into each pentagon until we encounter a pentagon with fewer than three inputs. At this point we can’t go any deeper; the qubits around the edge no longer encode for that part of the interior. When we reach this stage, the line that we cross is known as the ‘greedy geodesic’, shown as a dashed line. It marks out the part of the interior that is described perfectly by the dangling qubits around the edge. It is also the shortest line that can be drawn through the interior that links the edges of the boundary region containing the dangling qubits. Remarkably, the amount of entanglement between the qubits in the boundary region and the rest of the boundary is equal to the number of links the greedy geodesic cuts through in the network. This is nothing other than the Ryu–Takayanagi result.
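The march inwards just described is an algorithm, and a few lines of Python capture it. The network below is a made-up toy with three tensors, nothing like the real tiling, but the rule is the one in the text: once three of a pentagon’s in-plane legs are known, all of its legs, central qubit included, become known.

```python
def greedy_reconstruct(tensors, known_legs):
    # sweep inwards: any tensor with >= 3 known legs is fully determined,
    # which in turn makes its remaining legs known; repeat until stuck
    known = set(known_legs)
    recovered = set()
    changed = True
    while changed:
        changed = False
        for name, legs in tensors.items():
            if name not in recovered and len(known & set(legs)) >= 3:
                known |= set(legs)
                recovered.add(name)
                changed = True
    return recovered

# toy network: tensor -> its five in-plane legs (shared legs link tensors)
tensors = {
    "T1": ["a", "b", "c", "i1", "i2"],
    "T2": ["d", "e", "f", "i1", "i3"],
    "T3": ["g", "h", "i2", "i3", "i4"],
}
# boundary legs a..h dangle on the outside; suppose we know only a..f
assert greedy_reconstruct(tensors, list("abcdef")) == {"T1", "T2"}
```

T3 is left unrecovered because only two of its legs ever become known: the sweep has hit the greedy geodesic, and the interior beyond it is not encoded by the known boundary qubits.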
The gem here – the key point – is that the HaPPY code is a network of qubits, and yet it exhibits the properties of the physics we’ve been discussing in the context of black holes. Try to imagine the HaPPY code without imagining a space in which the qubits are embedded. No space, just entangled qubits. We know that there is an equivalent description of the code using the language of geometry, which is what we’ve been using to visualise it: it’s the hyperbolic geometry of the Poincaré disk. In other words, the way we have wired up the qubits gives rise to an emergent hyperbolic geometry. The notion of distance emerges from the network itself: distance is defined by counting the number of links a line cuts through. Astonishing as it may seem, we are being invited to imagine that the space we live in is built up from a network of elemental quantum units, entangled with one another and too small for us to detect with current experiments. Instead, we are sensitive to the way these entangled units give rise to the physical phenomena we see, including the very idea of space itself. This is quite a remarkable development and one of which John Archibald Wheeler would surely have approved.
So, what is reality?
Are we living inside a giant quantum computer? The evidence is mounting that it may be so. For years, the study of black holes has been an intellectual endeavour that has pushed theoretical physicists into corners. But in the last decade or so, a flurry of understanding, fuelled in large part by exploiting the rapidly developing field of quantum information, has led to a consensus view that holography is here to stay and that it shares many similarities with quantum error correction.
Does living inside a universe that resembles a giant quantum computer suggest that we are virtual creations living inside the computer game of a super-intellect? Probably not. There is no reason to make that link. Rather, in our pursuit of the quantum theory of gravitation, the bluest of blue skies research, we appear to have glimpsed a deeper level of the world, and understanding this deeper level may well be useful to us when we design quantum computers. This has happened so many times in the history of science. We are constantly discovering techniques that Nature has already exploited. It is not so surprising that those techniques turn out to be useful to us technologically: it seems that Nature is the best teacher.
This unlikely link between quantum computing and quantum gravity raises tantalising new possibilities. The future of quantum gravity research may have an experimental side to it, something thought highly unlikely just a few years ago. Maybe we can explore the physics of black holes in the laboratory using quantum computers. And this deep relationship between the two fields flows both ways. There may in turn be a good deal of overlap between pure black hole research and the development of large-scale quantum computers, devices which will be of enormous benefit to our economy and the long-term future of our civilisation. Perhaps it will not be long before we can no more imagine a world without quantum computers than we could imagine a world without classical computers today.
This is the ultimate vindication of research for research’s sake: two of the biggest problems in science and technology have turned out to be intimately related. The challenge of building a quantum computer is very similar to the challenge of writing down the correct theory of quantum gravity. This is one reason why it is vital that we continue to support the most esoteric scientific endeavours. Nobody could have predicted such a link.
‘Be clearly aware of the stars and the infinity on high. Then life seems almost enchanted after all’, wrote Vincent van Gogh. The study of black holes has attracted many of the greatest physicists of the last 100 years because physics is the search for both understanding and enchantment. That the quest to understand the infinities in the sky has led inexorably to the discovery of a holographic universe enchanting in its strangeness and logical beauty serves to underline Van Gogh’s insight. Perhaps it is inevitable that human beings will encounter enchantment when they commit to exploring the sublime. But it’s bloody useful too.
* The penny dropped first for Ahmed Almheiri, Xi Dong and Daniel Harlow, who pointed out the AdS/CFT link with quantum error correction in 2015.