Annealers vs Universal
So far, we've been discussing universal gate quantum computers. These systems rely on building highly reliable qubits whose basic quantum circuit operations, or gates, can be composed into any sequence, enabling more and more complex algorithms. As we've discussed, these systems can be subject to limitations depending on their hardware approach, but they benefit from being programmable and broadly applicable.
Quantum annealers are more limited in the problems they can efficiently solve, and are mostly used for certain optimization problems. These systems work by encoding a problem into a field of qubits, letting the qubits interact with each other, and waiting for them to settle into an annealed, low-energy arrangement from which you can read out the problem's global minima (or maxima, depending on how the problem is framed).
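Quantum annealing requires quantum hardware, but its classical cousin, simulated annealing, captures the same idea of letting a system cool into a low-energy state. The sketch below is a minimal illustration, not D-Wave's method: the QUBO coefficients and the geometric cooling schedule are made up for the example.

```python
import itertools
import math
import random

# Toy QUBO: minimize E(x) = sum over (i, j) of Q[i, j] * x_i * x_j, x_i in {0, 1}.
# Coefficients are hypothetical; diagonal terms are linear biases.
Q = {(0, 0): 1, (1, 1): 1, (2, 2): 1, (0, 1): -2, (1, 2): -2}
N = 3

def energy(x):
    return sum(c * x[i] * x[j] for (i, j), c in Q.items())

def brute_force():
    # Exhaustive search, feasible only for tiny problems.
    return min(itertools.product([0, 1], repeat=N), key=energy)

def simulated_anneal(sweeps=2000, t_start=2.0, t_end=0.05, seed=7):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(N)]
    for step in range(sweeps):
        # Geometric cooling from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / sweeps)
        y = x[:]
        y[rng.randrange(N)] ^= 1            # propose a single bit flip
        dE = energy(y) - energy(x)
        # Always accept downhill moves; accept uphill moves with thermal probability.
        if dE <= 0 or rng.random() < math.exp(-dE / t):
            x = y
    return x

best = brute_force()
found = min((simulated_anneal(seed=s) for s in range(5)), key=energy)
print(energy(best), energy(found))
```

As the temperature drops, uphill moves become rare and the system freezes into a low-energy configuration, which is the classical analogue of reading out the annealed state.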
The D-Wave machine is probably the most famous example of a quantum annealer. It's built using superconducting qubits, which are actually the same type of qubit used in many universal gate quantum computer systems. The difference is in how the superconducting qubits are arranged (their topology) and how they are controlled.
Annealers may be good at solving optimization problems like the traveling salesman problem, and can even find solutions to some games! However, while quantum annealers can theoretically run any algorithm that runs on a universal gate quantum computer, the role of entanglement in such devices is not clear, and they are much less efficient when used outside their core area of optimization.
Quantum Volume and Algorithmic Qubits
Now, how do
we compare these hardware implementations? What makes a collection of qubits useful? Two metrics we can use are Quantum Volume and Algorithmic Qubits.
IBM's Quantum Volume is a single number designed to be more informative than a simple qubit count for gauging the performance of a quantum computer. It takes into account many features of a quantum computer, including the number of qubits, gate and measurement errors, crosstalk, and the topology, or connectivity, of the quantum device. For example, if qubits or their gates are very noisy, it doesn't matter how many qubits you have. And if you have perfect qubits and perfect gates but only four of them, the machine is equally limited.
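The interplay between qubit count and error rates can be sketched with a toy model. Quantum Volume is 2^n, where n is the largest "square" circuit (n qubits, depth n) whose measured heavy-output probability stays above 2/3. The estimator below is a simplified depolarizing-noise model, not IBM's actual benchmark: real QV tests run randomized circuits on hardware and also capture measurement error, crosstalk, and topology, which this sketch ignores.

```python
import math

# Ideal heavy-output probability for random square circuits, about 0.85.
H_IDEAL = (1 + math.log(2)) / 2

def log2_quantum_volume(n_qubits, gate_fidelity):
    """Toy estimate of log2(QV): the largest n-qubit, depth-n circuit whose
    modeled heavy-output probability exceeds the 2/3 threshold."""
    best = 0
    for n in range(1, n_qubits + 1):
        gates = n * (n // 2)                       # ~n/2 two-qubit gates per layer, n layers
        circuit_fidelity = gate_fidelity ** gates
        # Noise decays the heavy-output probability toward the random value of 0.5.
        heavy = 0.5 + (H_IDEAL - 0.5) * circuit_fidelity
        if heavy > 2 / 3:
            best = n
    return best

# Hypothetical 16-qubit device with 99% gate fidelity.
print(2 ** log2_quantum_volume(16, 0.99))
```

Even in this crude model, the 16-qubit device's Quantum Volume is capped well below 2^16 by its gate errors, illustrating why noisy qubits count for less than their raw number suggests.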
IonQ has also introduced its own metric for comparing quantum computing systems, called Algorithmic Qubits (AQ). It's defined as the largest number of useful qubits you can deploy for a typical quantum program. It takes into account the number of physical qubits, the average two-qubit gate fidelity (a measure of how accurately gates are executed, and thus how well quantum information survives a computation), and the error correction overhead needed for the program. IonQ's next generation system has 32 qubits with 99.9% fidelity, delivering 22 Algorithmic Qubits.
Since no qubit has perfect fidelity and all require error correction, the number of algorithmic qubits will always be lower than the number of physical qubits.
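The relationship between physical qubits, fidelity, and AQ can be approximated with a back-of-the-envelope model: treat a "typical program" on n qubits as needing roughly n^2 two-qubit gates, and ask for the widest circuit whose overall fidelity stays above a success threshold. This is a hypothetical simplification, not IonQ's actual benchmark suite, and the 2/3 threshold is an assumption; for the quoted 32-qubit, 99.9%-fidelity system it lands near 20, in the same ballpark as IonQ's quoted 22.

```python
def algorithmic_qubits(physical_qubits, two_qubit_fidelity, threshold=2/3):
    """Toy AQ estimate: the largest n (up to the physical qubit count) for which
    a circuit of roughly n^2 two-qubit gates keeps its overall fidelity above
    the success threshold. Simplified model, not IonQ's exact definition."""
    best = 0
    for n in range(1, physical_qubits + 1):
        if two_qubit_fidelity ** (n * n) >= threshold:
            best = n
    return best

# Hypothetical device matching the figures quoted above.
print(algorithmic_qubits(32, 0.999))
```

Note how fidelity dominates: because gate count grows quadratically with circuit width, a small drop in per-gate fidelity shrinks the usable qubit count much faster than adding physical qubits can grow it.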
Now, while the industry is just starting to standardize what defines the best quantum computer, one thing we can all agree on is that the number of qubits alone is not the whole story.
Quantum Platforms
Now, let's take a look at the quantum computing cloud landscape. Today's cloud platforms are mainly hosted by large tech companies: Amazon, Microsoft, IBM, and Google.