Chapter 14. Staying on Top: A Guide to the Literature

We hope you’ve enjoyed tinkering with the computational problems we presented in this book! Before closing, we’ll briefly introduce a few subjects we didn’t have space to go into previously, and provide pointers on where to go to learn more about these and other topics in quantum computing. We won’t go into too much depth here; the aim here is rather to link what you’ve learned so far to material reaching beyond the scope of this book. Let the intuition you’ve built here be only the first step in your exploration of quantum programming!

From Circle Notation to Complex Vectors

The |x⟩ notation that we use throughout the book to refer to states in a quantum register is called bra-ket notation or, sometimes, Dirac notation, in honor of the 20th-century physicist Paul Dirac. Throughout the quantum computing literature, this (rather than circle notation) is the notation used to represent quantum states. In Chapter 2 we hinted at the equivalence between these two notations, but it’s worth saying a little more to set you on your way. A general superposition within a single-qubit register can be expressed in Dirac notation as α|0⟩ + β|1⟩, where α and β are the states’ amplitudes, represented as complex numbers satisfying the equation |α|² + |β|² = 1. The magnitude and relative phase of each value in the circle notation we’ve been using are given by the modulus and argument of the complex numbers α and β, respectively. The probability of a READ outcome for a given binary output value from a QPU register is given by the squared modulus of the complex number describing the amplitude of that value. For example, in the preceding single-qubit case, |α|² would give the probability of READing a 0 and |β|² would give the probability of READing a 1.
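As a concrete illustration, here is a minimal sketch in plain Python/numpy (not QCEngine code; the amplitude values are arbitrary) of a single-qubit state as a complex vector, its normalization, and its READ probabilities:

    import numpy as np

    # An arbitrary single-qubit state alpha|0> + beta|1> as a length-2 complex vector.
    alpha = 0.6
    beta = 0.8j                      # includes a 90-degree relative phase on the |1> term
    state = np.array([alpha, beta], dtype=complex)

    # Normalization: |alpha|^2 + |beta|^2 must equal 1.
    print(np.isclose(np.sum(np.abs(state) ** 2), 1.0))   # True

    # READ probabilities are the squared moduli of the amplitudes.
    print(np.abs(state) ** 2)        # [0.36 0.64] -> 36% chance of 0, 64% chance of 1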

The complex vectors describing QPU register states satisfy some very specific mathematical properties, meaning that they can be said to live in a structure known as a Hilbert space. You likely don’t need to know much about Hilbert spaces, but you will hear the term used a lot, mostly simply to refer to the collection of possible complex vectors representing a given QPU register.

In the case of single qubits, a common way to parameterize α and β is as cos(θ/2)|0⟩ + e^(iϕ) sin(θ/2)|1⟩. In this case, the two variables θ and ϕ can be interpreted as angles on a sphere, which many references will refer to as the Bloch sphere. As mentioned in Chapter 2, the Bloch sphere provides a visual representation of single-qubit states. Unlike circle notation, though, it’s unfortunately difficult to use the Bloch sphere to visualize registers with more than one qubit.
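For instance, a small illustrative sketch (again plain numpy; the particular angle values are arbitrary) converting a pair of Bloch-sphere angles into amplitudes and confirming that the result is automatically normalized:

    import numpy as np

    theta = np.pi / 3                # polar angle (arbitrary example value)
    phi = np.pi / 4                  # azimuthal (relative phase) angle (arbitrary)

    alpha = np.cos(theta / 2)
    beta = np.exp(1j * phi) * np.sin(theta / 2)
    state = np.array([alpha, beta])

    # Any choice of theta and phi yields a properly normalized state.
    print(np.isclose(np.sum(np.abs(state) ** 2), 1.0))   # True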

Another complication regarding qubit states that we didn’t cover in the book is that of so-called mixed states, which are represented mathematically by density operators (we mentioned density operators briefly in Chapter 13). A mixed state is a statistical mixture of the kind of pure states that we’ve been working with in our quantum registers throughout the book; in other words, it’s how you should describe a qubit if you’re not sure precisely which superposition it’s in. To some extent it is possible to represent mixed states in circle notation, but as long as we have a QPU with error correction (more on that later), pure states are enough to get started with QPU programming.

As visualizations of many-qubit registers are not common in most textbooks and academic references, quantum registers are most often represented solely by complex vectors,1 with the length of the vector needed to represent n qubits being 2^n (just as the number of circles needed in circle notation to represent n qubits was 2^n). When writing down the amplitudes of an n-qubit register’s state in a column vector, the amplitude of the state |00...0⟩ is conventionally placed at the top, with the remaining possible states following below in ascending binary order.
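A quick numpy sketch of this ordering (the register value is chosen arbitrarily, and the convention that the leftmost qubit is the most significant bit is an assumption made for this example): a two-qubit register holding the value 2 (binary 10) is a length-4 vector whose entries are ordered |00⟩, |01⟩, |10⟩, |11⟩ from top to bottom:

    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)

    # A two-qubit register holding the value 2 (binary 10), built as a tensor product;
    # here the leftmost factor is treated as the most significant bit.
    state = np.kron(ket1, ket0)
    print(state)                     # all the amplitude sits on the third entry, which is |10>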

QPU operations are described by unitary matrices acting on these complex vectors. The order in which the matrices are written is right to left (exactly opposite to how we would write a quantum circuit diagram, which reads left to right), so that the first matrix to act on our complex vector corresponds to the first (leftmost) gate in the associated circuit diagram. Equation 14-1 shows a simple example, where a NOT gate (often also referred to as X in the literature) is applied to an input qubit in the state α|0⟩ + β|1⟩. We can see how it flips the values of α and β, as expected.

Equation 14-1. NOT gate acting on qubit in standard complex-vector notation
\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix} \beta \\ \alpha \end{pmatrix}

Single-qubit gates are represented as 2 × 2 matrices, since they transform vectors with two entries, corresponding to the single-qubit register values of |0⟩ and |1⟩. Two-qubit gates are represented by 4 × 4 matrices and, in general, n-qubit gates are represented by 2^n × 2^n matrices. In Figure 14-1 we show the matrix representations of some of the most commonly used single- and two-qubit gates. If you’re comfortable with matrix multiplication and really want to test your understanding, you might try to predict the action of these operations on different input states and see whether your predictions match what you see in QCEngine’s circle notation.

Figure 14-1. Matrix representations of the most basic single- and two-qubit gates
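As a worked example using two of the gates from Figure 14-1 (a plain numpy sketch rather than QCEngine; the input amplitudes are arbitrary), here is Equation 14-1 in code, together with the right-to-left ordering rule for a circuit that applies HAD and then NOT:

    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)                 # NOT gate
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # HAD gate

    # Equation 14-1: NOT swaps the amplitudes of |0> and |1>.
    state = np.array([0.6, 0.8j])
    print(X @ state)                              # [0.8j 0.6]

    # A circuit that applies HAD first and then NOT corresponds to the matrix
    # product X @ H, because the rightmost matrix acts on the state first.
    circuit = X @ H
    print(circuit @ np.array([1, 0]))             # [0.707 0.707]: HAD makes |+>, which NOT leaves unchanged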

Some Subtleties and Notes on Terminology

There are a few subtleties we wanted to mention with regard to the terminology used in this book.

  • Throughout the book we’ve referred to pre-quantum computing as “conventional.” You may also hear people use the term classical to refer to traditional binary computing that doesn’t operate with quantum registers.

  • We have often used so-called scratch qubits to help perform some aspects of the quantum computation. These are instead referred to as ancilla qubits in many quantum computing resources.

  • In Chapter 2 we introduced the PHASE gate, which takes an angle as an input parameter. In this book we have used degrees to represent angles, ranging from 0° to 360°. It is common in the quantum computing literature to specify angles in radians instead. A radian is the angle subtended at the center of a circle by an arc equal in length to the circle’s radius. The following table shows commonly used angles in both units:

    Degrees    Radians
    0°         0
    45°        π/4
    90°        π/2
    135°       3π/4
    180°       π
    225°       5π/4
    270°       3π/2
    315°       7π/4

  • There are three cases in which the PHASE gate receives a special name:

    Angle (radians)    Name
    π/4                T
    π/2                S
    π                  Z

  • In Chapter 6, we introduced the AA primitive. This primitive (more specifically, the mirror operation) allows us to amplify the amplitude of marked states in our register, thereby increasing the probability of READing out those marked values. Although this terminology may seem straightforward enough, it’s worth noting that in the scholarly literature these iterations are usually called Grover iterations, while the expression amplitude amplification is reserved for a general class of algorithms that use Grover iterations to improve their success probabilities.

  • The particular configuration of a QPU register (e.g., what superposition or entangled arrangement it might be in) is commonly referred to as the state of the register. This terminology arises from the fact that a register’s configuration is really described by a quantum state in the mathematical sense introduced briefly above (even if we choose to visualize the register more conveniently using circle notation).

  • When describing an N-qubit QPU register in some superposition of its 2^N possible integer values, we have often referred to each of these possibilities (and their associated amplitudes) as values within the superposition. An equivalent, more commonplace, expression is term. So one might talk (for example) about the amplitude of “the |4⟩ term” in a QPU register’s superposition. If we think of QPU registers in terms of their proper Dirac notation representation, then this terminology makes sense, as |4⟩ (and its amplitude) really is a term in a mathematical expression. We avoided using the expression term throughout most of the book simply because of its unavoidable mathematical connotations.

  • QPU operations are sometimes referred to as quantum gates, in reference to the logic gates of conventional computation. You can consider the terms “QPU operation” and “quantum gate” to be synonymous. Collections of quantum gates form a quantum circuit.

Measurement Basis

There’s a widespread concept in quantum computing that we’ve carefully managed to avoid mentioning throughout the book—that of measurement basis. Understanding measurement more thoroughly in quantum computing really involves getting to grips with the full mathematical machinery of quantum theory, and we can’t hope to begin to do so in this short space. The goal of this book is to give you an intuitive “in” to the concepts needed to program a QPU, and for more in-depth discussion we refer the interested reader to the recommended resources at the end of this chapter. That said, we’ll try to briefly give an insight into how the core idea of measurement basis relates to the concepts and terminology we’ve used.

Wherever we used the READ operation, we always assumed it would give us an answer of 0 or 1. We saw that these two answers correspond to the states |0⟩ and |1⟩, in the sense that these are the states which will always give a 0 or 1 outcome, respectively. These states are, more technically, the eigenstates2 of the PHASE(180) operation (also sometimes called a Z gate). Whenever we “READ in the Z basis,” as we have implicitly done throughout the book, the complex vector describing our QPU register will, after the READ, end up in one of these two states. We say that we’ve projected our QPU register onto one of these states.3 Although we’ve so far only thought about measurements in the Z basis, this is not the only option.

Different measurement bases are like different questions we can ask our QPU state. The questions we can ask with quantum READ operations are of the form: which of the eigenstates of a certain QPU operation is our system in? This may sound incredibly abstract, but in quantum physics these operations and their eigenstates do have physical meanings, and understanding their precise nature requires a more in-depth understanding of the underlying physics. Since so far we’ve only been measuring in the basis of PHASE(180), we’ve actually always been asking the question, “Is the QPU register in the eigenstate of PHASE(180) corresponding to the eigenvalue +1, or is it in the state corresponding to the eigenvalue -1?” Even if the QPU register is in a superposition of these possibilities, after the READ it will assume one of them.

Performing a READ in another basis means asking which eigenstate of some other QPU operation, U, our QPU register is in. After the READ our QPU register will end up in (i.e., be projected onto) one of the eigenstates of U.

Since a QPU register’s state can be thought of as a superposition of the eigenstates of any QPU operation (as we described in Chapter 13), writing out the complex vector representation of a QPU register state in the eigenbasis of some operation U allows us to work out the various READ probabilities in the U measurement basis. Another interesting aspect of this whole measurement-basis business is that states that always give the same measurement outcome (with 100% probability) when measured in one basis may not be so definite in a different basis, possibly only probabilistically yielding each outcome. For example, we’ve seen multiple times that when reading out the state |+⟩ = (1/√2)|0⟩ + (1/√2)|1⟩, we have a 50% chance of getting 0 and a 50% chance of getting 1 when measuring in the Z basis. However, if we measure in the X (NOT) basis, we will always get 0; this is because |+⟩ happens to be an eigenstate of X and therefore, when considered in the measurement basis of X, our state isn’t in a superposition at all.
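Here is a small numpy sketch of that last point (not QCEngine; the basis vectors are written out by hand): the same state |+⟩ gives 50/50 outcomes in the Z basis but a certain outcome in the X basis:

    import numpy as np

    plus = np.array([1, 1]) / np.sqrt(2)        # |+>
    minus = np.array([1, -1]) / np.sqrt(2)      # |->, the other X eigenstate
    zero = np.array([1, 0])
    one = np.array([0, 1])

    # Z-basis READ probabilities: squared overlaps with |0> and |1>.
    print(abs(zero @ plus) ** 2, abs(one @ plus) ** 2)     # 0.5 0.5

    # X-basis READ probabilities: squared overlaps with |+> and |->.
    print(abs(plus @ plus) ** 2, abs(minus @ plus) ** 2)   # 1.0 0.0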

Gate Decompositions and Compilation

Controlled operations have played an important role throughout the book, and on occasion, you may have been left wondering how we would implement these operations. Cases that might particularly require elucidation are:

  1. Controlled operations where the operation acting on the target qubit is something other than NOT or PHASE

  2. Gates with controls on several qubits at once

For compactness of notation we’ve often drawn these kinds of operations as single diagrammatic units in our circuit diagrams. However, in general, they will not correspond to native instructions on QPU hardware and they’ll need to be implemented in terms of more fundamental QPU operations.

Fortunately, more complex conditional operations can be written as a series of single-qubit and two-qubit operations. In Figure 14-2 we show a general decomposition of a controlled QPU operation (corresponding to a general unitary matrix in the mathematics of quantum computing). The constituent operations in this decomposition need to be chosen such that A, B, and C satisfy U = e^(iα)·A·X·B·X·C (where X is the NOT gate), while acting with all three operations directly one after the other, A·B·C, has no overall effect on our QPU register state.

Figure 14-2. General decomposition of a controlled unitary

If we can find operations A, B, C, and α satisfying these requirements, then we can conditionally perform our desired operation U. Sometimes there may be more than one possible way to decompose a conditional operation according to this prescription.
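As a sanity check, here is a minimal numpy sketch verifying both requirements numerically for U = Z (i.e., PHASE(180)); the particular choice of A, B, C, and α below is just one valid assignment picked for illustration, not the only one:

    import numpy as np

    def Rz(theta):
        # Rotation about the z-axis: diag(e^{-i theta/2}, e^{+i theta/2})
        return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.diag([1, -1]).astype(complex)

    # One valid choice of A, B, C, and alpha for U = Z:
    A, B, C, alpha = Rz(np.pi), Rz(-np.pi / 2), Rz(-np.pi / 2), np.pi / 2

    print(np.allclose(A @ B @ C, np.eye(2)))                         # True: A.B.C does nothing
    print(np.allclose(np.exp(1j * alpha) * A @ X @ B @ X @ C, Z))    # True: recovers U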

What about conditional operations that are conditioned on more than one qubit at once? As an example, Figure 14-3 shows three different decompositions for implementing the CCNOT (Toffoli) gate, which is conditioned on two qubits.

Figure 14-3. The familiar CCNOT gate can be decomposed into more basic operations

You may notice that all three of these decompositions follow the same pattern. This is not only the case for the CCNOT: any controlled-controlled operation (i.e., an operation conditioned on two other qubits) will have a similar decomposition. Figure 14-4 shows the general way we can implement a QPU operation controlled on two qubits, provided we can find a QPU operation V satisfying V² = U (i.e., where applying V twice to a register is the same as applying U once).

Figure 14-4. A general controlled-controlled unitary decomposition

Note that if we need help constructing the controlled-V operations in Figure 14-4, we can always refer back to Figure 14-3.
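To make this concrete, here is a small numpy check of the standard two-control construction using V = √NOT (so that V² = NOT), with the qubits ordered control, control, target. The exact gate ordering and the qubit-ordering convention below are assumptions made for this sketch and may differ cosmetically from how Figure 14-4 draws the circuit:

    import numpy as np

    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    V = 0.5 * np.array([[1 + 1j, 1 - 1j], [1 - 1j, 1 + 1j]])     # square root of NOT: V @ V == X
    P0 = np.diag([1, 0]).astype(complex)                          # projector |0><0|
    P1 = np.diag([0, 1]).astype(complex)                          # projector |1><1|

    def controlled(gate, control, target):
        # Three-qubit controlled gate; qubit 0 is the leftmost tensor factor.
        ops0 = [I2, I2, I2]
        ops1 = [I2, I2, I2]
        ops0[control] = P0
        ops1[control] = P1
        ops1[target] = gate
        build = lambda ops: np.kron(np.kron(ops[0], ops[1]), ops[2])
        return build(ops0) + build(ops1)

    # CCNOT: flip qubit 2 only when qubits 0 and 1 are both 1.
    ccnot = np.kron(P0, np.eye(4)) + np.kron(P1, np.kron(P0, I2) + np.kron(P1, X))

    # The two-control decomposition, multiplied right to left (first gate is rightmost).
    circuit = (controlled(V, 0, 2)
               @ controlled(X, 0, 1)
               @ controlled(V.conj().T, 1, 2)
               @ controlled(X, 0, 1)
               @ controlled(V, 1, 2))

    print(np.allclose(circuit, ccnot))    # True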

Finding the optimal decompositions of QPU algorithms in terms of simple QPU primitives and operations is not easy. The field of quantum compiling focuses on finding fast implementations of quantum algorithms. We’ll mention a few more details about quantum compiling later in the chapter, and you can find some examples of quantum compiling optimizations online at http://oreilly-qc.github.io?p=14-GD.

Gate Teleportation

In Chapter 4, we used the quantum teleportation protocol to introduce ideas and notation that we used later throughout the book. While the teleportation of information does not feature heavily in most QPU applications, the ability to teleport QPU operations often does. This allows two parties to perform an operation on a quantum register even if neither party has access to both the state and the operation in the same place. Like the teleportation protocol we saw in Chapter 4, this trick requires the use of a pair of entangled qubits.

A simple example of a gate teleportation protocol can be found online at http://oreilly-qc.github.io?p=14-GT.

QPU Hall of Fame

For most of the book, we have abstained from referring the reader to academic references. Here we’ve compiled a small list of the references that first introduced many of the ideas and algorithms that we’ve discussed in detail:

Further references and resources can also be found in the lists of selected books and lecture notes at the end of the chapter.

The Race: Quantum Versus Conventional Computers

When discussing various QPU applications we’ve often been interested in comparing the performance of our newly learned QPU algorithms with their conventional counterparts. Although we’ve made comparisons on a case-by-case basis, interesting general comparisons can be made between the capabilities of quantum and conventional computers in terms of computational complexity. The computational complexity of a problem is given (roughly speaking) by the resources required to run the most efficient algorithm that solves it. Computational complexity theory studies the classification of problems according to their computational complexity.4

For example, some problems are classified as P, meaning that the resources required to find a solution scale polynomially with the size of the problem (e.g., matrix diagonalization). NP refers to the class of problems for which a correct solution can be checked in polynomial time, but where such solutions cannot necessarily be found in polynomial time. This class has a twin, co-NP, which corresponds to the problems for which an incorrect answer can be verified in polynomial time. For example, the problem of factoring that we discussed in Chapter 12 is in both the NP and co-NP classes: it’s easy to check whether a pair of numbers are (or aren’t) prime factors, but not so easy to find them in the first place. Whether P = NP is a notorious open problem, well known thanks to multiple references in popular culture and the small matter of a $1M prize awaiting anyone who can settle the question! It is widely suspected, with good reason, that the two classes are not equal. The NP-complete class that we mentioned in Chapter 10 corresponds, in some sense, to the most difficult problems in NP: any other problem in NP can be reduced to an NP-complete problem, so if a polynomial-time solution were found for any NP-complete problem, all problems in NP would become solvable in polynomial time and change class to P.

The excitement surrounding quantum computing is mainly due to the fact that it appears able to reduce the computational complexity of certain problems. Note that this is not a blanket speedup of any conventional computing problem; only very specific classes of algorithms are currently known to enjoy quantum speedups. This reduction can be polynomial (e.g., from higher-order polynomial to lower-order), as is the case for the 3-SAT problem we looked at in Chapter 10, or superpolynomial (e.g., from exponential to polynomial), as is the case for factoring. Where can quantum computers provide a superpolynomial speedup? It turns out that problems that are both in NP and co-NP are prime suspects (pun intended) to have exponentially faster QPU algorithms, and in fact, most of the algorithms for which such a speedup has been achieved belong to the intersection of these two classes.

Another important development that we initially mentioned in Chapter 13 is worth reiterating here. There have been a number of instances where completely conventional but quantum-inspired algorithms have emerged after quantum algorithms had been developed for those same problems. Therefore, research and understanding of quantum algorithms can lead to advances in conventional computing as well!

A Note on Oracle-Based Algorithms

Three of the earliest quantum algorithms shown to have a speedup over conventional computers required calls to an oracle. An oracle provides information about a variable or a function without revealing the variable or function itself. The task in these early algorithms was to determine the variable or function used by the oracle in as few calls as possible. The required number of oracle calls in such a problem (and how this number scales with problem size) is usually referred to as the query complexity.

While these algorithms did not provide a useful computational advantage, they were crucial in the development of quantum computing, as they built understanding of the capabilities of quantum computers and eventually inspired researchers such as Peter Shor to develop more useful algorithms. Due to their pedagogical and historical importance, we briefly mention these early algorithms here, each of which is now known by the names of its inventors.

QCEngine code samples are also provided online at http://oreilly-qc.github.io to help you explore these pioneering quantum algorithms.

Deutsch-Jozsa

Oracle

Takes a binary string of n bits and outputs a single bit that is the result of applying a function f to the binary string. We are promised that the function f is either constant, in which case the output bit will always be the same, or balanced, in which case there will be the same number of 0 and 1 outputs.

Problem

Decide with absolute certainty whether f is constant or balanced in as few queries to the oracle as possible.

Query complexity

Conventionally, we need to make 2^(n–1) + 1 queries to the oracle to be absolutely sure about the nature of the function. With a QPU, we can solve the problem with zero probability of error in a single quantum query!

This algorithm can be run online at http://oreilly-qc.github.io?p=14-DJ.
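A minimal numpy sketch of the idea (using the phase-kickback form of the oracle and an arbitrary example function; this is illustrative and not the QCEngine version linked above): after HAD on every qubit, one phase query, and HAD again, the amplitude of the all-zeros state is ±1 for a constant function and exactly 0 for a balanced one:

    import numpy as np
    from functools import reduce

    n = 3
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = reduce(np.kron, [H] * n)                # HAD on all n qubits

    def dj_zero_probability(f):
        # Phase oracle: multiplies the amplitude of |x> by (-1)^f(x).
        phases = np.array([(-1) ** f(x) for x in range(2 ** n)])
        state = np.zeros(2 ** n)
        state[0] = 1.0                           # start in |00...0>
        state = Hn @ (phases * (Hn @ state))
        return abs(state[0]) ** 2                # probability of READing all zeros

    print(dj_zero_probability(lambda x: 1))                        # constant f -> 1.0
    print(dj_zero_probability(lambda x: bin(x).count("1") % 2))    # balanced f -> 0.0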

Bernstein-Vazirani

Oracle

Takes a binary string, x, of n bits and outputs a single binary number. The output is the sum (modulo 2) of the bitwise products, ⊕ᵢ xᵢ·sᵢ, where s is a secret string used by the oracle.

Problem

Find the secret string s.

Query complexity

Conventionally, we require n oracle queries, one to learn each bit of the secret string. However, using a QPU we can solve the problem with a single query.

This algorithm can be run online at http://oreilly-qc.github.io?p=14-BV.
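A corresponding numpy sketch (again using the phase-kickback form of the oracle, with an arbitrary secret string chosen purely for illustration): the single query leaves the register exactly in the state |s⟩, so one READ reveals the whole string:

    import numpy as np
    from functools import reduce

    n, s = 4, 0b1011                                   # secret string (arbitrary example)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = reduce(np.kron, [H] * n)

    # Phase oracle for f(x) = x.s mod 2: multiplies the amplitude of |x> by (-1)^(x.s).
    phases = np.array([(-1) ** bin(x & s).count("1") for x in range(2 ** n)])

    state = np.zeros(2 ** n)
    state[0] = 1.0                                     # start in |0000>
    state = Hn @ (phases * (Hn @ state))

    result = int(np.argmax(np.abs(state)))
    print(result, bin(result))                         # 11 0b1011 -> the register holds exactly |s>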

Simon

Oracle

Takes an n-bit binary string, x, and outputs a single integer. All the possible input strings are paired up via a secret string s, such that two strings x and y will give the same output if and only if y = x ⊕ s (where ⊕ denotes bitwise addition modulo 2, i.e., XOR).

Problem

Find the secret string s.

Query complexity

A conventional deterministic algorithm will require at least 2^(n–1) + 1 oracle queries. By using Simon’s quantum algorithm, we can find the solution in a number of calls that scales linearly in n, rather than exponentially.

This algorithm can be run online at http://oreilly-qc.github.io?p=14-S.
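A rough numpy sketch of the quantum part (with an arbitrary secret string and an arbitrary choice of oracle function; the final classical step of solving the resulting linear system over GF(2) is only checked here, not implemented): every READ of the input register yields a string y satisfying y·s = 0 (mod 2), and collecting n−1 independent such strings pins down s:

    import numpy as np
    from functools import reduce

    n, s = 3, 0b110                                   # secret string (arbitrary example)
    f = {x: min(x, x ^ s) for x in range(2 ** n)}     # any f satisfying f(x) = f(x XOR s)

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = reduce(np.kron, [H] * n)

    # amp[x, z] is the amplitude of |x> (input register) |z> (output register).
    amp = np.zeros((2 ** n, 2 ** n))
    amp[0, 0] = 1.0
    amp = Hn @ amp                                    # HAD on every input-register qubit
    amp = np.array([amp[x, [z ^ f[x] for z in range(2 ** n)]]    # oracle: |x>|z> -> |x>|z XOR f(x)>
                    for x in range(2 ** n)])
    amp = Hn @ amp                                    # HAD on the input register again

    probs = np.sum(np.abs(amp) ** 2, axis=1)          # READ distribution for the input register
    for y in np.flatnonzero(probs > 1e-12):
        print(f"{y:0{n}b}  y.s mod 2 = {bin(y & s).count('1') % 2}")   # always 0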

Quantum Programming Languages

A topic we have not touched upon is that of quantum programming languages, i.e., programming languages specifically developed with the quirks of quantum computing in mind. The applications and algorithms we’ve covered have been described using basic QPU operations (analogous to conventional binary logic gates), which we have orchestrated and controlled from a conventional programming language. This approach was followed for two reasons. Firstly, our goal has been to allow you to get hands-on and experiment with the abilities of a QPU, and at the time of writing quantum programming languages are arguably too nascent a field, with no universal standards to rely on. Secondly, the majority of quantum computing resources available to take you beyond this book (other books, lecture notes, and online simulators) are written in terms of the same basic QPU operations we have centered our discussion around.

On the topic of developing a quantum programming stack, one area that has seen a lot of recent interest and development is quantum compiling. Due to the characteristics of quantum error-correction codes, some QPU operations (such as HAD) are much easier to implement in a fault-tolerant manner than others (such as PHASE(45), also referred to as a T gate). Finding ways to compile quantum programs that account for such implementation constraints but don’t adversely impact the speedup offered by a QPU is the task of quantum compiling. Literature surrounding quantum compiling often mentions the T count of a program, referring to the total number of difficult-to-perform T gates required. A common topic is also the preparation and distillation of so-called magic states,5 particular quantum states that (when perfectly prepared) allow us to implement the elusive T gate.
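For reference, here is a small numpy sketch (following the PHASE gate definition used throughout the book) showing that the T gate is just PHASE(45) in matrix form, and that repeating it gives the S and Z gates:

    import numpy as np

    def phase(degrees):
        # PHASE gate: leaves |0> alone, rotates the phase of |1> by the given angle.
        return np.diag([1, np.exp(1j * np.deg2rad(degrees))])

    T, S, Z = phase(45), phase(90), phase(180)

    print(np.allclose(T @ T, S))          # True: two T gates make an S gate
    print(np.allclose(T @ T @ T @ T, Z))  # True: four T gates make a Z gate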

At the time of writing, the study of quantum programming languages and quantum software toolchains would benefit greatly from input from experts in conventional computing—especially in the areas of debugging and verification. Some good starting points on the topic are listed here:

The Promise of Quantum Simulation

At the time of writing (circa 2019), quantum simulation is being proposed as the killer app of quantum computing. In fact, there are already some quantum algorithmic proposals to address quantum chemistry problems that are intractable with classical computers. Some of the problems that could be solved using quantum simulation routines are:

Nitrogen fixation

Find a catalyst to convert nitrogen to ammonia at room temperature. This process can be used to lower the cost of fertilizer, addressing hunger in third-world countries.

Room temperature superconductor

Find a material that superconducts at room temperature. This material would allow for power transmission with virtually no losses.

Catalyst for carbon sequestration

Find a catalyst to absorb carbon from the atmosphere, reducing the amount of CO2 and slowing global warming.

It is clear that the potential for social and economic change that quantum computers could bring is unmatched by most other technologies, which is one of the reasons there is such interest in bringing this technology about.

Error Correction and NISQ Devices

The discussions of QPU operations, primitives, and applications in this book have all assumed (or simulated) the availability of error-corrected (also called logical) qubits. As with conventional computation, errors in registers and operations can quickly ruin quantum computation. A large body of research exists into quantum error-correction codes designed to counteract this effect. A remarkable result in the study of quantum error correction is the threshold theorem, which shows that if the rate of errors in a QPU falls below a certain threshold, then quantum error-correction codes will allow us to suppress the errors at the cost of only a small overhead to our computation. QPU applications only maintain their advantage over conventional algorithms under such low-noise conditions, and therefore likely require error-corrected qubits.

That said, at the time of writing there is a trend to search for QPU algorithms that might provide speedups over conventional computers even when run on Noisy Intermediate-Scale Quantum (NISQ) devices. These devices are understood to be composed of noisy qubits that do not undergo error correction. The hope is that algorithms might exist that are themselves intrinsically tolerant to QPU noise. However, circa 2019, no such algorithms capable of solving useful problems are known to exist.

Where Next?

In this book, we hope to have provided you with the conceptual tools and QPU intuition to further explore the fascinating topic of quantum computing. With the foundation you now have, the references we list in this section are good places to take your understanding of QPUs to the next stage. Be warned (but not dissuaded!) that these references often freely use the more advanced levels of linear algebra and other mathematics that we’ve aimed to avoid in this text.

1 Recall from Chapter 13, though, that mixed states are represented by matrices, called density operators, instead of vectors.

2 For a refresher on eigenstates, see Chapter 8.

3 The term project here has a mathematical meaning. Projection operators in mathematics “select out” certain vectors from a linear combination. In quantum mechanics, the action of a READ in the Z basis is to “project” the complex vector representing our QPU register onto one of the complex vectors representing |0 or |1.

4 A full description of its 500+ classes can be found in the Complexity Zoo.

5 Magic is a technical term in quantum computing. As well as a magical one.
