Chapter 6. Amplitude Amplification

In the previous chapter we showed how to build conventional arithmetic and logical operations that utilize the power of superposition. But when using a QPU, being able to perform computations in superposition is useless if we don’t have something clever up our sleeves to make sure that we’re actually able to READ out a solution.

In this chapter we introduce the first quantum primitive allowing us to manipulate superpositions into a form that we can reliably READ. We’ve already noted that there are many such primitives, each suited to different kinds of problems. The first we will cover is amplitude amplification.1

Hands-on: Converting Between Phase and Magnitude

Very simply, amplitude amplification is a tool that converts inaccessible phase differences inside a QPU register into READable magnitude differences (and vice versa). As a QPU tool, it’s simple, elegant, powerful, and very useful.

Tip

Given that amplitude amplification converts phase differences into magnitudes, you might think that magnitude amplification would be a better name. However, “amplitude amplification” is commonly used in the wider literature to refer to the kind of primitive we describe here.

For example, suppose we have a four-qubit QPU register containing one of the three quantum states (A, B, or C), shown in Figure 6-1, but we don’t know which one.

Figure 6-1. Each state has a single flipped value

These three states are clearly distinct, in that each one has a different value phase-flipped. We’ll call that value the marked value. However, since all of the values in the register have the same magnitude, READing a QPU register in any of these states will return an evenly distributed random number, revealing nothing about which of the three states we started with. At the same time, such a READ will destroy the phase information in the register.

But with a single QPU subroutine we can reveal the hidden phase information. We call this subroutine the mirror operation (we’ll see why later), and you can see its action for yourself by running the sample code in Example 6-1, shown in Figure 6-2.

Figure 6-2. Applying the mirror subroutine to a flipped phase

Note that before applying the mirror subroutine we first apply flip, which takes our register, initially in state ∣0⟩, and marks one of its values with a phase flip. You can alter which value is flipped in the preceding code sample by changing the number_to_flip variable.
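If you’re not running the samples online, the following QCEngine-style JavaScript sketch follows the structure of Example 6-1. The flip() and mirror() helpers here are our own illustrative implementations rather than the book’s exact sample code: flip() uses NOT and CPHASE operations to phase-flip a single chosen value, and mirror() builds the mirror step from HAD, NOT, and CPHASE (this is the “mirror about the average plus a flip of all phases” construction mentioned in the Tip near the end of this chapter).

var num_qubits = 4;
qc.reset(num_qubits);
var reg = qint.new(num_qubits, 'reg');

var number_to_flip = 3;    // which value gets its phase flipped

// Phase-flip a single chosen value of the register
function flip(value) {
    reg.not(~value);       // map the chosen value onto the all-ones value
    reg.cphase(180);       // phase-flip only the all-ones value
    reg.not(~value);       // undo the mapping
}

// Mirror about the average (up to an unobservable global phase flip)
function mirror() {
    reg.hadamard();
    reg.not();
    reg.cphase(180);
    reg.not();
    reg.hadamard();
}

reg.write(0);
reg.hadamard();            // uniform superposition over all 16 values

flip(number_to_flip);      // mark one value with a 180-degree phase
mirror();                  // convert the phase difference into magnitude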

Applying the mirror subroutine to the A, B, and C states from Figure 6-1, we obtain the results in Figure 6-3.

Figure 6-3. After applying the mirror subroutine, phase differences have been converted into magnitude differences

The magnitudes within each state are now very different, and performing a READ on the QPU register is very likely (though not certain) to reveal which value had its phase flipped, and hence which of the three states our register had been in. Instead of all values having the same probability of 6.25%, the marked value now has a READ probability of about 47.3%, with the nonmarked values being at about 3.5%. At this point, READing the register gives us an almost 50% chance of obtaining the value that had its phase flipped. That’s still not great.

Notice that although the mirror subroutine changed the magnitude of the marked value, its phase is now the same as the rest of the register. In a sense, mirror has converted the phase difference into a magnitude difference.

Note

The mirror operation is commonly called the “Grover iteration” in the quantum computing literature. Grover’s algorithm for an unstructured database search was the first algorithm implementing the flip-mirror routine, and in fact, the amplitude amplification primitive that we cover here is a generalization of Grover’s original algorithm. We’ve chosen to call the operation mirror to make it easier for the reader to recall its effect.

Can we repeat the operation again to try to further improve our probability of success? Suppose we have the B state from Figure 6-1 (i.e., flip acted on the ∣3⟩ value). Applying the mirror subroutine again simply leaves us where we started, converting the magnitude differences back into differences of phase. However, suppose that before reapplying mirror we also reapply the flip subroutine (to re-flip the marked value). This starts us out with another phase difference before our second mirror application. Figure 6-4 shows what we get if we apply the whole flip-mirror combination twice.

Figure 6-4. After applying flip-mirror a second time on state B

Following two applications of flip-mirror, the probability of finding our marked value has jumped from 47.3% to 90.8%!

The Amplitude Amplification Iteration

Together, the flip and mirror subroutines are a powerful combination. flip allows us to target a value of the register and distinguish its phase from the others. mirror then converts this phase difference into a magnitude difference. We’ll refer to this combined operation, shown in Figure 6-5, as an amplitude amplification (AA) iteration.

Figure 6-5. A single AA iteration

You’ve probably noticed that the AA operation assumes that we know which value we want to amplify—it’s hardwired into which value the flip subroutine affects. This may seem to defeat the whole point of amplitude amplification—if we already know which values we should amplify, why do we need to find them? We’ve used the flip subroutine as the simplest possible example of a subroutine flipping the phase of selected values within our QPU register. In a real application, flip would be replaced with some more complex subroutine performing a combination of phase flips representing logic specific to that application. In Chapter 10 we’ll show in more detail how computations can be performed purely on the phases of a QPU register. When applications make use of this kind of phase logic they can take the place of flip, and amplitude amplification becomes an extremely useful tool.

The key point is that although we discuss using flip alongside the mirror subroutine, compounding more complex phase-altering subroutines with mirror still converts phase alterations into magnitude differences.

More Iterations?

In Figure 6-4 we’ve applied two AA iterations to the B state, leaving us with a 90.8% success probability of observing the marked value. Can we continue applying AA iterations to bring that even closer to 100%? This is easy to try. The sample code in Example 6-2 repeats the AA steps a specified number of times. By varying the number_of_iterations variable in the sample code, we can run as many AA iterations as we like.

Figure 6-6 shows the result of running this code with number_of_iterations=4, so that we flip and then mirror our register four consecutive times.
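If you want to reproduce this without the online sample, the heart of Example 6-2 is a loop like the one below, reusing the flip() and mirror() helpers from the earlier sketch (again, an illustrative reconstruction rather than the exact sample code):

var number_to_flip = 3;        // state B: phase-flip the value 3
var number_of_iterations = 4;  // how many flip-mirror pairs to apply

reg.write(0);
reg.hadamard();                // start in a uniform superposition

for (var i = 0; i < number_of_iterations; ++i) {
    flip(number_to_flip);      // re-mark the value with a phase flip
    mirror();                  // convert the phase difference into magnitude
}

var result = reg.read();       // likely, but not certain, to return 3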

Figure 6-6. The result of applying AA 1, 2, 3, and 4 times to state B—the rows of circles show the state of our QPU register at the positions marked in the circuit

Let’s follow the action of each successive AA iteration. After the flip of the first AA we begin with the state B from Figure 6-1. As we’ve already seen, B1 has a success probability of 47.3%, while after two iterations, B2 has a probability of 90.8%.

The third iteration brings us to 96.1%, but notice that in B3 the marked state is out of phase with the others, so our next flip subroutine will cause all phases to be in alignment. At that point, we will have a magnitude difference but no phase difference, so further AA iterations will start diminishing magnitude differences until we end up with the original state.

Sure enough, by the time we get to B4 our chances of successfully reading out the marked state are way down to 58.2%, and they’ll continue to drop if we apply more AA iterations.

So how many AA iterations should we apply to maximize the chances of READing out our marked value correctly?

The plot in Figure 6-7 shows that as we continually loop through our iterations, the probability of reading the marked value oscillates in a predictable way. We can see that to maximize the chance of getting the correct result, we’re best off waiting for the 9th or 15th iteration, giving a probability of finding the marked value of 99.9563%.

Figure 6-7. Probability of reading the marked value versus the number of AA iterations

On the other hand, if each iteration is expensive to perform (we’ll see some cases of this later), we can stop after three and attempt to harvest the answer at 96.1%. Even if we fail and need to repeat the entire QPU program, we’ll have a 99.848% chance that one of the two attempts will succeed, only costing us at most six total iterations.
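The 99.848% figure is simply the chance that at least one of two independent 96.1% attempts succeeds:

1 − (1 − 0.961)² = 1 − (0.039)² ≈ 0.99848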

In general, there’s a simple and useful equation telling us the number of AA iterations, N_AA, that we should perform to reach the highest point within the first oscillation of the success probability (this would be the 96.1% success probability at N_AA = 3 in our previous example). This is shown in Equation 6-1, where n is the number of qubits in the register.

Equation 6-1. Optimal number of iterations in amplitude amplification
N_AA = ⌊(π/4)√(2^n)⌋
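As a check against our four-qubit example: N_AA = ⌊(π/4)√16⌋ = ⌊π⌋ = 3, matching the 96.1% peak we found at three iterations.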

We now have a tool that can convert a single phase difference within a QPU register into a detectable magnitude difference. But what if our register has several values with different phases? It’s easy to imagine that subroutines more complicated than flip might alter the phases of multiple values in the register. Fortunately, our AA iterations can handle this more general case.

Multiple Flipped Entries

With a small modification to the circuit in Example 6-1 we can try running multiple AA iterations on a register having any number of phase-flipped values. In Example 6-3 you can use the n2f variable to set which values in the register should be flipped by the flip subroutine inside each AA operation. As before, you can also adjust the number of AA iterations performed using the number_of_iterations variable.
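The essential change from the earlier sketches is that the flip step now loops over a list of values. Roughly (again reusing the flip() and mirror() helpers, not the exact code of Example 6-3):

var n2f = [4, 7];              // values to phase-flip inside each AA iteration
var number_of_iterations = 2;

reg.write(0);
reg.hadamard();

for (var i = 0; i < number_of_iterations; ++i) {
    // one flip per marked value, then a single mirror
    for (var j = 0; j < n2f.length; ++j)
        flip(n2f[j]);
    mirror();
}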

By running this sample with a single value flipped (e.g., n2f = [4], as shown in Figure 6-8), we can reproduce our earlier results, where increasing the number of AA iterations causes the probability of READing the “marked” value to vary sinusoidally, as illustrated in Figure 6-9.

Figure 6-8. One value flipped
Figure 6-9. Repeated AA iterations with 1 value of 16 flipped

But now let’s instead flip two values (e.g., n2f=[4,7], as shown in Figure 6-10). In this case we ideally want to end up with the QPU register configured so that we will READ either of the two phase-flipped values, with zero possibility of READing any others. Applying multiple AA iterations just like we did for one marked state (albeit with two flip subroutines run during each iteration—one for each marked state) yields the results shown in Figure 6-11.

Figure 6-10. Two values flipped
Figure 6-11. Repeated AA iterations with 2 values of 16 flipped
Warning

Note that in Figure 6-11 the probability shown on the y-axis is the probability of obtaining either one of the (two) marked values if we were to READ our register.

Although we still get a sinusoidally varying chance of success, comparing this plot with the one for a single phase-flipped value in Figure 6-7 shows that the frequency of the sinusoidal oscillation has increased.

With three values flipped (e.g., n2f=[4,7,8], as shown in Figure 6-12), the wave’s frequency continues to increase, as you can see in Figure 6-13.

Figure 6-12. Three values flipped
Figure 6-13. Repeated AA iterations with 3 values of 16 flipped

When we have 4 of the 16 values flipped, as shown in Figure 6-14, something interesting happens. As you can see in Figure 6-15, the wave’s frequency becomes such that the probability of us READing one of the marked values repeats with every third AA iteration that we apply. This ends up meaning that the very first iteration gives us 100% probability of success.

Figure 6-14. Four values flipped
Figure 6-15. Repeated AA iterations with 4 values of 16 flipped

This trend continues for up to seven flipped values, as illustrated in Figures 6-16 and 6-17.

Figure 6-16. Seven values flipped
Figure 6-17. Repeated AA iterations with 7 values of 16 flipped

Of course, with 7 of our 16 values being marked, even getting a correct READ value might not provide us with much useful information.

Everything comes to a halt in Figure 6-19, where we have 8 of our 16 values flipped, as shown in Figure 6-18. As mentioned in previous chapters, only the relative phase in a quantum state matters. Because of this, flipping half of the values to mark them is physically the same as flipping the other half. The AA iterations fail completely here, leaving every value in the register equally likely to be READ.

Figure 6-18. Eight values flipped
Figure 6-19. Repeated AA iterations with 8 values of 16 flipped

When 50% of the register values are marked, we could continue applying AA operations as much as we liked and we’d never do better than READing out random numbers.

Note

Note that by symmetry we don’t really need to consider what happens if we have more than 50% of our register values phase-flipped. For example, if 12 of our 16 values are marked, attempting to read “success” is identical to what we observed when marking 4 of them, but with what we consider success and failure flipped.

Interestingly, our exploration shows that the frequency with which our chance of success oscillates depends only on the number of flipped values, not on which particular values are flipped. In fact, we can extend Equation 6-1 to cover multiple marked values, as shown in Equation 6-2 (where n is the number of qubits and m is the number of marked values).

Equation 6-2. Optimal number of iterations for multiple flipped phases
N_AA = ⌊(π/4)√(2^n/m)⌋
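As a check: with n = 4 qubits and m = 4 marked values, N_AA = ⌊(π/4)√(16/4)⌋ = ⌊π/2⌋ = 1, in agreement with the single iteration that gave 100% success in Figure 6-15; with m = 1 we recover the N_AA = 3 of Equation 6-1.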

So long as we know m, we can use this expression to decide how many AA iterations we should apply to amplify the magnitudes of the flipped values to give a high probability of success. This raises an interesting point: if we don’t know how many states are flipped, then how can we know how many AA iterations to apply to maximize our chance of success? When we come to employ amplitude amplification to applications in Chapter 10, we’ll revisit this question and see how other primitives can help.

Using Amplitude Amplification

Hopefully you now have an idea of amplitude amplification’s capabilities. Being able to convert unREADable phases into READable magnitudes definitely sounds useful, but how do we actually employ this? Amplitude amplification finds utility in a number of ways, but one strikingly pragmatic use is as part of a Quantum Sum Estimation technique.

AA and QFT as Sum Estimation

We’ve seen that the frequency with which probabilities fluctuate in these AA iteration examples depends on the number of flipped values. In the next chapter, we’ll introduce the Quantum Fourier Transform (QFT)—a QPU primitive allowing us to READ out the frequency with which values vary in a quantum register.

It turns out that by combining the AA and QFT primitives, we can devise a circuit allowing us to READ not just one of our marked values, but a value corresponding to how many marked values in our initial register state were flipped. This is a form of Quantum Sum Estimation. We’ll discuss Quantum Sum Estimation fully in Chapter 11, but mention it here to give a feeling for just how useful amplitude amplification can be.

Speeding Up Conventional Algorithms with AA

It turns out that AA can be used as a subroutine in many conventional algorithms, providing a quadratic performance speedup. The problems that AA can be applied to are those that invoke a subroutine repeatedly checking the validity of candidate solutions. Examples of this type of problem are Boolean satisfiability and finding global and local minima.

As we have seen, the AA primitive is formed of two parts, flip and mirror. It is in the flip part that we encode the equivalent of the classical subroutine that checks the validity of a solution, while the mirror part remains the same for all applications. In Chapter 14 we will cover this aspect of AA fully and learn how to encode classical subroutines in the flip part of AA.

Inside the QPU

So how do the QPU operations making up each AA iteration allow it to undertake its task? Rather than worry about the functioning of every individual operation, we’ll try to build an intuitive understanding of an AA iteration’s effect. It turns out there’s a useful way of understanding amplitude amplification in terms of its geometrical effect on a QPU register’s circle notation.

The Intuition

There are two stages to amplitude amplification: flip and mirror. The flip subroutine flips the phase of a marked term in the superposition that we ultimately want to extract from our QPU.

The mirror subroutine, which remains unchanged throughout the AA primitive, turns phase differences into contrasts in magnitude. But another way to understand mirror is that it causes each value in a state to mirror about the average of all values.

As well as explaining why we call this subroutine mirror, this alternative interpretation also helps us give a more precise step-by-step account of what mirror achieves.

Suppose we have a two-qubit input state to the mirror subroutine, which is in a superposition of ∣0⟩ and ∣3⟩ as shown in Figure 6-20.

Figure 6-20. Starting state

In terms of circle notation, the mirror subroutine performs the following steps:

  1. Find the average of all the values (circles). This can be done by numerically averaging the x and y positions of the points within the circles.2 When calculating the average, the zero values (empty circles) should be included as [0.0, 0.0], as shown in Figures 6-21 and 6-22.

    Figure 6-21. Calculating the average
    Figure 6-22. Plotting the average
  2. Flip each value about the common average. Visually, this is simply a reflection, as shown in Figure 6-23.

    Figure 6-23. Flipping about the average

That’s all there is to it. The result, as shown in Figure 6-24, is that the phase differences in our original state have been converted into magnitude differences. It’s worth noting that the common average of the circles is still the same. This means that applying the transform again will simply return the initial state.

Figure 6-24. The resulting state
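To see these steps in numbers rather than circles, here is a small plain-JavaScript sketch (ordinary code, not QPU operations) that mirrors a list of amplitudes about their average. With 1 of 16 values flipped it reproduces the 47.3% and 3.5% READ probabilities quoted earlier in the chapter; real amplitudes suffice because every phase involved is 0° or 180°.

// Mirror each amplitude about the average of all amplitudes
function mirrorAboutAverage(amps) {
    var avg = amps.reduce(function (a, b) { return a + b; }, 0) / amps.length;
    return amps.map(function (a) { return 2 * avg - a; });
}

// Uniform 16-value superposition: each amplitude is 1/sqrt(16) = 0.25
var amps = [];
for (var i = 0; i < 16; ++i)
    amps.push(0.25);
amps[3] = -0.25;                          // flip applied to the value 3 (state B)

var mirrored = mirrorAboutAverage(amps);
console.log(Math.pow(mirrored[3], 2));    // ~0.473: READ probability of the marked value
console.log(Math.pow(mirrored[0], 2));    // ~0.035: READ probability of each unmarked value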

How is it that this “mirror about the average” operation ends up converting phase differences into magnitude differences? Imagine that there are many states with similar phases, but one oddball state with a very different phase, as shown in Figure 6-25.

Figure 6-25. State with one oddball phase

Given that most values are the same, the average will lie closer to the value of most of the states, and very far from the state that has the opposite phase. This means that when we mirror about the average, the value with the different phase will “slingshot” over the average and stand out from the rest, as shown in Figure 6-26.

Figure 6-26. Final state

For most applications of AA that we discuss in Chapter 10, the replacements for flip will introduce a phase difference of 180° between the marked and unmarked states, so the preceding “slingshotting” example is particularly relevant.

Tip

In practice, it’s actually simpler to implement mirror about the average + flip all phases rather than simply mirror about the average. Since only the relative phases in a register are actually important, this is entirely equivalent.

Conclusion

This chapter introduced one of the core operations in many QPU applications. By converting phase differences into magnitude differences, amplitude amplification allows a QPU program to provide useful output relating to phase information from a state that would otherwise remain invisible. We explore the full power of this primitive in Chapters 11 and 14.

1 In this book, we use the term amplitude amplification in a slightly different way than the term is used in the academic literature. We cover the exact difference in Chapter 14.

2 In terms of the full-blown mathematical description of a QPU register state as a complex vector, this corresponds to averaging the real and imaginary parts of its components.
