Article 3 - Quantum Computing - The Inner Workings
Article 1 - https://www.qcguy.com/blog-1-quantum-mechanics-the-basis-of-quantum-computing/
Article 2 - https://www.qcguy.com/blog-2-quantum-computing-revolutionary-and-not-just-evolutionary/
To say that the world of Quantum Mechanics (QM) is strange is the understatement of the century. The same holds for Quantum Computing (QC), a relatively new field that has nevertheless gathered feverish pace across the globe, with the big names of the tech industry in a race to build the first practical, commercially sellable QC. Every major organisation from Microsoft to Google, IBM to Intel, and even some newer names such as D-Wave and IonQ, is trying different methods and technologies to build a QC.
In my previous two articles, I introduced readers to the world of QM and QC and noted some of the weird and wonderful characteristics that place QCs in a completely different class from the classical computers we are used to today. QCs aren’t just an extension of classical computers; they are a revolution in the field of computing that opens up brand new possibilities to help human civilisation in ways that until recently were only the stuff of dreams and imagination. QCs can be seen as an extension of the physical properties found in nature, which opens up the possibility of solving some of the fundamental questions we have long been looking to answer. While classical computers convert every object, every piece of information, every problem into a string of 0s and 1s, QCs work with 0s, 1s and everything in between.
In this article, I will deep-dive into the inner workings of QC hardware and software, introduce the different techniques currently in use by various organisations to build a QC, and describe the most basic building blocks of a QC programming language. I will try to keep the language simple and the technical details to a bare minimum, but there will still be places where technical information cannot be avoided. I would encourage readers not to be discouraged if the technical details are hard to grasp, and simply to move on to the next section; the most important thing I am trying to achieve with these articles is to spread awareness of Quantum Computing. I believe QC is an area that is going to bring about a defining and long-lasting change to human civilisation. This change, in my opinion, will be more profound than the one brought about by the invention of classical computers some 75 years ago, and hence it’s important to understand at least the basics, so that you can judge its impact (or lack thereof) for yourself rather than taking the word of some geeky fellow with thick spectacles writing articles on the topic.
The Qubit – In transistor-based classical computers, a transistor represents a classical binary bit that can store one bit of information. Classical bits are found in one of two distinct logical states: logic state 0 or logic state 1. “State 0" corresponds to the transistor switch being “off" (e.g., no voltage is applied to the transistor gate, and so no current flows in the transistor channel), and “state 1" corresponds to the transistor switch being “on" (e.g., a voltage is applied to the gate, and so a current flows through the transistor channel). These discrete states are robust and can be measured with near certainty.
The fundamental elements of quantum computers are “quantum bits", typically referred to as “qubits." Qubits are quantum-mechanical two-level systems. They are binary in the sense that they can be initialised in classical states 0 or 1. However, as quantum mechanical objects, qubits can also be prepared in a quantum superposition state: a single quantum state that embodies aspects of both state 0 and state 1.
Quantum superposition states are succinctly represented using Dirac notation. In this notation, quantum states are expressed as “kets," where |0⟩ and |1⟩ represent the quantum states 0 and 1 respectively. A qubit in a superposition of these two states is then written as |ψ⟩ = a|0⟩ + b|1⟩. The coefficients a and b are called “probability amplitudes," and they determine the relative “weighting" of the two states in the superposition. An obvious special case occurs when either is zero, in which case |ψ⟩ is no longer a superposition; for example, if a = 0, then |ψ⟩ = |1⟩. More generally, a and b can be complex numbers, and they must satisfy |a|² + |b|² = 1, a normalisation to unity that ensures the “weights" being compared are of a standard, consistent size. This is analogous to the convention that probabilities sum to 1.
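The state |ψ⟩ = a|0⟩ + b|1⟩ and its normalisation can be checked numerically. A minimal sketch using numpy, with an illustrative (hypothetical) choice of amplitudes a and b that are not from the article:

```python
import numpy as np

# A qubit state |psi> = a|0> + b|1> as a 2-component complex vector.
# These particular amplitudes are an illustrative choice.
a = 1 / np.sqrt(2)
b = 1j / np.sqrt(2)
psi = np.array([a, b], dtype=complex)

# Normalisation: |a|^2 + |b|^2 must equal 1.
norm = np.abs(a)**2 + np.abs(b)**2
print(norm)  # 1.0 (within floating-point precision)

# |a|^2 and |b|^2 give the probabilities of measuring 0 and 1.
p0, p1 = np.abs(psi)**2
print(p0, p1)  # 0.5 0.5
```

Any other pair of complex amplitudes would work equally well, as long as the squared magnitudes sum to 1.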
Both classical and quantum bits can be visualised on a “Bloch sphere," a tool which can be thought of as the planet Earth, as pictured in the figures below. By convention, the “north pole" of the sphere represents state 0 and the “south pole" represents state 1. A classical bit is either at the north pole or the south pole, but nowhere else. In contrast, a qubit may exist anywhere on the surface of the sphere; when the qubit state is anywhere except the north and south poles, it is in a superposition state.
Figure: The two logical states of a classical system correspond to the north and south on a globe.
Figure: The Bloch Sphere
The above description of the qubit is more complex than a description of a classical computer bit, and that complexity is the beauty of the qubit in QC. It arises from the qubit’s ability to be in a superposition state, and it is why a QC can be claimed to be nature’s tool: nature’s behaviour isn’t an abrupt 0 or 1. In short, just as a person’s position can be defined by a point on the Earth, a quantum state can be defined by a point on the Bloch sphere. While a point on the globe always refers to a position, a point on the Bloch sphere refers to a state of the qubit. For example, a qubit state can be spin-up (north pole), spin-down (south pole), or a superposition of spin-up and spin-down (anywhere else).
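The Earth analogy can be made concrete: a point on the Bloch sphere with polar angle θ and azimuthal angle φ corresponds to the state cos(θ/2)|0⟩ + e^(iφ) sin(θ/2)|1⟩, which is the standard parametrisation. A small sketch:

```python
import numpy as np

def bloch_state(theta, phi):
    """State at polar angle theta, azimuthal angle phi on the Bloch sphere:
    |psi> = cos(theta/2)|0> + e^{i*phi} sin(theta/2)|1>."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

print(bloch_state(0, 0))          # north pole -> |0>
print(bloch_state(np.pi, 0))      # south pole -> |1> (up to rounding)
print(bloch_state(np.pi / 2, 0))  # equator -> equal superposition
```

The north and south poles reproduce the classical states 0 and 1; every other (θ, φ) pair is a superposition.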
Logic Gates – According to Wikipedia (the source of all truth), “In electronics, a logic gate is an idealised or physical device implementing a Boolean function; that is, it performs a logical operation on one or more binary inputs and produces a single binary output.”
Logic gates are the next step of abstraction above bits and qubits. Logic gates are the reason you can perform calculations on a simple calculator, or someone on the ISS can click a button that changes the station’s trajectory over the Earth. Algorithms are the level of abstraction after logic gates: algorithms and software programming languages use the various gates to get their functions and logic running.
Classical computers perform arbitrary Boolean logic with a small set of single-bit and two-bit logic gates. For example, the NOT gate (a single-bit gate) in conjunction with the AND gate (a two-bit gate) are together one example of a universal gate set. Such a universal gate set can in principle implement any arbitrary classical algorithm based on boolean logic. Similarly, quantum algorithms can be run on quantum computers using a small, universal set of single and two-qubit gates. The quantum analogue of a NOT gate is the X-gate. A NOT gate inverts its input (NOT 0→1, and NOT 1→0 ). Similarly, the X-gate would swap states |0⟩ and |1⟩. The swap can be visualised on the Bloch sphere as a 180-degree rotation around the x-axis (this is why it is called an “X-gate").
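The X-gate’s swap of |0⟩ and |1⟩ can be written as a 2×2 matrix acting on the state vector. A minimal numpy sketch:

```python
import numpy as np

# The X-gate as a 2x2 matrix; it swaps the |0> and |1> amplitudes,
# i.e. a 180-degree rotation about the x-axis of the Bloch sphere.
X = np.array([[0, 1],
              [1, 0]])

ket0 = np.array([1, 0])  # |0>
ket1 = np.array([0, 1])  # |1>

print(X @ ket0)  # [0 1] -> |1>
print(X @ ket1)  # [1 0] -> |0>
```

On the classical states it behaves exactly like a NOT gate; unlike NOT, it also acts meaningfully on superpositions, swapping the two amplitudes.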
Not all classical gates have a direct quantum analogue. This is because quantum circuits must be reversible. In a reversible circuit, one can exactly reconstruct the input state(s) given the output state(s). For example, a NOT gate is reversible, because, given the output 0/1, you know the input was 1/0. However, the AND gate is not reversible, because, for example, given the output 0, you can't tell if the input had been 00, 01, or 10. The reason quantum circuits have to be reversible has to do with the fact that coherent quantum states ideally always undergo unitary evolution, and so a quantum gate can be “undone" by applying the inverse of that unitary evolution.
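The reversibility argument above can be checked by brute force over the truth tables: NOT never maps two inputs to the same output, while AND does. A quick sketch:

```python
# NOT is reversible: each output comes from exactly one input.
not_outputs = {x: 1 - x for x in (0, 1)}
assert len(set(not_outputs.values())) == len(not_outputs)  # no collisions

# AND is not reversible: three distinct inputs all map to output 0,
# so the input cannot be reconstructed from the output alone.
and_outputs = {(a, b): a & b for a in (0, 1) for b in (0, 1)}
zeros = [inp for inp, out in and_outputs.items() if out == 0]
print(zeros)  # [(0, 0), (0, 1), (1, 0)]
```

Because three inputs collide on the output 0, no inverse function can exist for AND, which is exactly why it has no direct unitary (quantum) analogue.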
Universal quantum computation can be built from a small subset of these types of single and two-qubit gates. A universal gate set enables us to perform any type of algorithm — quantum or classical — on a gate model quantum computer. However, universality does not imply quantum advantage. There are many algorithms that can be implemented on a quantum computer in principle, but feature no quantum advantage.
The images below describe various classical and quantum gates (Source: MIT) –
Figure: List of Classical Gates
Figure: Single-Qubit Gates
Figure: Two-Qubit Gates
Quantum Algorithms - Algorithms, as mentioned previously, are the next level of abstraction after logic gates.
A quantum algorithm comprises a sequence of single-qubit and two-qubit gates. Quantum parallelism and quantum interference are used by the algorithm designer to take an input state -- typically a massive superposition state -- and, in a step-by-step manner, modify the weighting coefficients (probability amplitudes) until the quantum mechanical state evolves into an output state that encodes the answer to the problem. Since a projective quantum measurement will yield a single, classical state, it is imperative that the algorithm results in a final state that has a probability amplitude near unity such that -- with near-unity probability -- the measurement will lead to the correct answer.
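The interference idea in the paragraph above can be shown in miniature with the Hadamard gate (a standard single-qubit gate): one application spreads the probability evenly over both outcomes, and a second application makes the |1⟩ amplitudes cancel, steering all probability back onto |0⟩. A toy sketch, not a full algorithm:

```python
import numpy as np

# Hadamard gate: takes |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

ket0 = np.array([1.0, 0.0])

# One H: both outcomes equally likely.
psi = H @ ket0
print(np.abs(psi)**2)  # [0.5 0.5]

# A second H: the contributions to |1> interfere destructively and cancel,
# pushing all probability back onto |0>. Algorithm designers exploit this
# effect so the correct answer emerges with near-unity probability.
psi = H @ psi
print(np.abs(psi)**2)  # [1. 0.] (within rounding)
```

Real quantum algorithms do the same thing at scale: long gate sequences arrange destructive interference on the wrong answers and constructive interference on the right one.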
The power of a universal computer is that it can implement any algorithm that can be expressed as a circuit of logic gates. A universal, fault-tolerant quantum computer will enable people to program, implement, and reliably run arbitrarily complex quantum algorithms. In fact, a universal quantum computer can run any type of algorithm – quantum or classical. Determining when it is advantageous to implement an algorithm on a quantum computer, as opposed to a classical computer, is one of the principal aims of quantum information science. The study of how efficiently a computer, whether classical or quantum, can solve a given problem leads to the formal segregation of algorithms into what are called “computational complexity classes." This efficiency is determined by how the computational resources needed to solve a problem grow, or “scale," with the size of the problem instance – for example, how the size of the memory or the number of computational steps scales with the problem size.
Two important computational complexity classes are P and NP. An example of a problem in P is the multiplication of two numbers of length n, which requires n² time steps to complete. For example, multiplying the binary numbers 01 and 10 requires 4 time steps, since each number has n = 2 digits, while multiplying 100 and 110 requires 9 time steps, since n = 3 for these numbers. This time scaling, and any time scaling of a × n^b where both a and b are constants, is referred to as “polynomial scaling" and is considered efficient.
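The n² count comes from schoolbook multiplication: every digit of one number must be multiplied by every digit of the other. A toy step-counter (my own illustration, not from the article) reproduces the numbers above:

```python
def schoolbook_mult_steps(n):
    """Count the digit products in schoolbook multiplication of two
    n-digit numbers: each of the n digits of the first number meets
    each of the n digits of the second, giving n * n = n**2 steps."""
    steps = 0
    for _ in range(n):       # each digit of the first number...
        for _ in range(n):   # ...is multiplied by each digit of the second
            steps += 1
    return steps

print(schoolbook_mult_steps(2))  # 4
print(schoolbook_mult_steps(3))  # 9
```

Doubling n quadruples the step count – quadratic, hence polynomial, growth.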
One of the main reasons for the current interest in quantum computers is that certain problems can be solved in polynomial time on a quantum computer even though the best known classical algorithm for the same problem requires exponential or sub-exponential time. A prominent example is Shor's algorithm, which efficiently factors large integers into their constituent primes, a computationally hard task for classical computers. The difficulty of factorisation is one reason it is used as the foundation for public key cryptography, for example RSA encryption. Even using the best-known classical algorithm, the resources required to break RSA encryption on a classical computer increase sub-exponentially with the length of the public key; the problem is widely believed to lie outside P, although this has not been formally proven. Practically, this means that doubling the number of bits in an RSA encryption key makes it dramatically more difficult for a classical computer to break the scheme. In contrast, Shor's algorithm can perform the same task efficiently on a quantum computer, with only a marginal increase in difficulty as the key length increases. For an integer N, the run time of Shor's algorithm scales as (log N)², while the fastest classical algorithm, the (randomised) General Number Field Sieve, scales as e^((log N)^(1/3)), up to logarithmic factors in both cases. More details will be shared in my next article, “Practical usages of Quantum Computing", and can be found in the first rigorous analysis of the classical algorithm, published only in 2018 (https://arxiv.org/abs/1805.08873).
The Quantum Software – The final level of abstraction after the algorithms is the software and the programming language in which all the algorithms, business logic and solutions are written. Like classical computers, quantum computers have a software stack used in the development of programs, the analysis of designs, and the testing of constructed programs and circuits.
Let's start by contrasting quantum software tools with the classical tools that perform similar functions.

To create a classical circuit design, one may use a schematic-capture CAD tool. A quantum analogue to this is the circuit composer from IBM's Quantum Experience. For program-based designs – for example, HDL circuit designs written in VHDL or Verilog, or high-level computer programs written in a language like C++ – one uses a compiler to translate the program to a lower-level format. Numerous quantum compilers perform the same function; examples include Quipper, developed at Dalhousie University, and Q# from Microsoft. These quantum compilers typically produce an intermediate format known as QASM, or quantum assembly language.

Finally, to translate a program or circuit into a format that a machine can execute, one uses an assembler. A similar function is required for quantum circuits: each hardware platform has specific gates that it can execute, and the connectivity it provides is specific to the topology constraints of the technology. Mapping QASM programs to hardware is something that IBM's Qiskit software performs, in this case for superconductor-based technology.

For classical circuit designs, a circuit simulation tool like SPICE is commonly used to determine circuit properties such as power consumption and timing, and to verify the correctness of the circuit. For quantum circuits, one of the main concerns is how noise or errors impact the operation of the circuit. Quantum simulation tools exist at multiple levels of abstraction that model errors in the operation of quantum circuits; these tools are also used to calculate the fidelity of gates operating under the control of control waveforms, and to verify the correctness of circuits. The last category of tools are those used for testing of fabricated chips, PCBs, etc.
For classical chips, one may use hardware and software tools such as JTAG and boundary scan to verify the operation of a chip. For quantum circuits, quantum characterisation, validation, and verification (QCVV) tools provide a similar function. These tools evaluate results obtained from experimental runs to calculate the fidelity of devices and to verify their correctness.
Let's now look at the three main methods for program generation, the first being schematic capture. The composer tool, one of the front-ends to IBM's Quantum Experience, is an example of a schematic-capture tool, where one uses a GUI to construct a circuit consisting of one- and two-qubit gates and measurements. The tool knows the underlying hardware constraints and prevents the user from violating them. In many cases, however, it is more convenient to specify the quantum circuit as a program, which allows for the specification of larger-scale circuits.
This leads to the second method for program generation: high-level quantum languages. Examples are tools like Quipper, Q#, and Scaffold. High-level programs written in these languages are compiled into QASM circuits or can be displayed as circuit diagrams. The last method for program generation is problem-specific generation. Google's OpenFermion software tool is an example of this method; this particular package is designed specifically for formulating simulations of quantum chemistry. An advantage of this approach is that it incorporates the steps and known methods used for a particular class of problems, without requiring the user to have detailed domain knowledge of the methods used to solve them. The user specifies a problem at a high level, and the package generates a quantum circuit that can be mapped to a hardware-specific platform or to simulation systems. And so we have seen the types of software tools commonly used to program a quantum computer, and how these tools parallel those used for classical computers.
Programming a quantum computer comprises three primary types of software:
- Program generation software
- Circuit mapping software
- Control and execution software
Program generation is generally technology agnostic, whereas circuit mapping, control, and execution all require knowledge of the hardware architecture. Program generation can be further divided into three categories:
- schematic capture (generally related to the quantum circuit model)
- high-level programming languages (abstracting away subroutines such as phase estimation or period-finding via the Quantum Fourier Transform) and lower-level assembly languages (abstracting away from specific hardware controls and subroutines)
- problem-specific platforms (e.g. to generate auxiliary inputs to quantum chemistry calculations)
Different techniques currently being used to build a QC – I read somewhere that QCs are like divas: they require just the right conditions to operate in, and any deviation will cause problems in their operation. They are finicky like that. The qubit technologies we know of so far don’t have high coherence times, which means they lose their state quite quickly and thus can’t be used in long-running quantum operations. For transistors (or classical bits), this concept loosely translates to the mean time to failure.
There are several quantum mechanical two-level systems that could potentially serve as qubits: the electronic states of an ion, the electron spin of a phosphorus atom implanted in silicon, the nuclear spin of a crystal defect in diamond, and even manufactured “artificial atoms" – electrical circuits that can be described as quantum mechanical two-level systems. However, what basic requirements must any one of these modalities possess in order to be a candidate quantum computing technology?
In 2000, Dr. David DiVincenzo, then a researcher at IBM, articulated five fundamental requirements for any qubit technology to be a suitable physical implementation for large scale quantum computation (see the paper here).
In addition to these five criteria for qubit technology, David DiVincenzo added two conditions related to the communication of quantum information between qubits.
Figure: DiVincenzo Criteria for suitable qubit technology.
These criteria, known as the DiVincenzo Criteria, are in many ways adapted from the conditions for operational classical computers and summarize the fundamental requirements qubit technologies need to fulfill at a minimum to be a candidate quantum computing technology.
Because we are at such an early stage of building a commercial-scale QC, different organisations use different techniques to physically realise a qubit in practice, each aiming for high-coherence, long-lived qubits. The different techniques are noted in the table below, along with their performance across the various parameters of the DiVincenzo Criteria –
Figure: Current state of technology for several leading qubit modalities as of April 2018
Figure: An overview of several leading qubit technologies v/s their performance with respect to the DiVincenzo Criteria (circa 2018)
As is evident, the various technologies listed above all have their strengths and weaknesses. No single technology meets all the criteria in their entirety. However, two perform better than the others, viz. superconductors and trapped ions. I have listed some of the notable organisations that are betting on either of the two technologies to build a QC –
- Superconductors – IBM, Google, Rigetti, QCI, D-Wave Systems
- Trapped Ion – IonQ, Microsoft
I know that I have overloaded this article with information, but it was important to do so before I start talking about the practical usages of a QC: appreciating their impact on the world as we know it requires some understanding of the advantages that QCs bring to the field of computing.
Disclaimer - The reading of all information on this article is of your own free will. If you do not accept these Terms and Conditions, you should cease reading this article immediately. If you do however want to read some awesome QC related articles, please don’t leave just yet.
I reserve the right to change any of these Terms and Conditions at any given time on this article.
Even though I work very hard to provide you with up-to-date information (through thorough research and eating lots of cookies while re-writing already published articles), I make no representations or warranties of any kind (expressed or implied) about the completeness, accuracy, reliability, suitability or availability of any information, products, services or related graphics contained on QCGuy.com for any purpose.
I aim to provide you with accurate information at the time of publishing, but some information will understandably be less accurate as time passes. Should you find any inaccurate information, please do not hesitate to contact me. I will drop everything, get behind my “classical computer” and correct this world shocking mistake right away… or as soon as I finish my cup of tea.