User:Irvings1/Quantum algorithm for linear systems of equations
The Quantum algorithm for linear systems of equations, designed by Aram Harrow, Avinatan Hassidim, and Seth Lloyd and formulated in 2009, is a quantum algorithm for solving systems of linear equations. The algorithm estimates the result of a scalar measurement on the solution vector of a given linear system of equations.
The algorithm is one of only a few known to provide an exponential improvement over its classical counterpart, along with Shor's factoring algorithm and Feynman's quantum simulation of complex systems. Provided the linear system is sparse and has a low condition number k, and the user is interested in the result of a scalar measurement on the solution vector rather than the values of the solution vector itself, the algorithm has a runtime of O(k^2 log N), offering an exponential speedup over the fastest classical algorithm, which runs in O(N sqrt(k)).
Due to the prevalence of linear systems in virtually all areas of science and engineering, the quantum algorithm for linear systems of equations is considered to be the most practically useful quantum algorithm so far. This is an important milestone for quantum computing, as previous algorithms offering exponential speedup do not have obvious real-world applications.
An implementation of the quantum algorithm for linear systems of equations was first demonstrated in 2013 by both Barz et al. and Pan et al. Both demonstrations consisted of solving 2x2 linear systems, on a photonic quantum computer (Barz et al.) and on a nuclear magnetic resonance quantum information processor (Pan et al.).
Procedure
PROBLEM OVERVIEW
The problem we are trying to solve is: given a matrix A and a vector b, find the result of some scalar measurement on the vector x, where Ax = b. The algorithm estimates features of the solution vector x rather than producing x itself.
In its basic form, the problem is: given a Hermitian matrix A and a unit vector b, find x satisfying Ax = b.
PROCEDURE BODY
- Represent b as a quantum state |b> = Σ_i b_i |i>
- Use techniques of Hamiltonian simulation to apply e^{iAt} to |b> for a superposition of different times t
(This process uses phase estimation to find the eigenvalues and eigenvectors of A.) (This is possible if A is s-sparse and efficiently row computable, meaning it has at most s nonzero entries per row and, given a row index, these entries can be computed in time O(s).) (This process has a reference [3] in the paper.)
- The resulting state from the above is close to Σ_j β_j |u_j>|λ_j>, which is a decomposition of |b> in terms of A's eigenvectors u_j and eigenvalues λ_j
- We would like to map each |λ_j> to C λ_j^{-1} |λ_j> for a normalizing constant C, which is not a unitary operation and thus has some probability of failing
- After the previous step is successful, we are left with a state proportional to A^{-1}|b>, which is equal to the solution vector |x> (a classical analogue of these steps is sketched below)
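A minimal NumPy sketch (with an illustrative 2x2 Hermitian matrix, not taken from the paper) of the classical linear algebra that the steps above perform coherently: decompose b in the eigenbasis of A, rescale each component by 1/λ_j, and recover x = A^{-1} b.

```python
import numpy as np

# Illustrative Hermitian matrix and right-hand side (assumptions, not from the paper).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 0.0])

lam, U = np.linalg.eigh(A)      # eigenvalues lambda_j and eigenvectors u_j of A
beta = U.conj().T @ b           # beta_j = <u_j|b>, the decomposition of b in A's eigenbasis

x = U @ (beta / lam)            # sum_j (beta_j / lambda_j) u_j, the analogue of the C*lambda^-1 step
print(np.allclose(A @ x, b))    # True: x solves A x = b
```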
CONCLUSION
This procedure yields a quantum-mechanical representation |x> of the desired vector x. To read out all components of x would require one to perform the procedure at least N times. However, it is often the case that one is not interested in x itself but rather in some expectation value x^T M x, where M is some linear operator. By mapping M to a quantum-mechanical operator and performing the quantum measurement corresponding to M, we obtain an estimate of the expectation value <x|M|x> = x^T M x, as desired. This allows a wide variety of features of the vector x to be extracted, including normalization, weights in different parts of the state space, moments, etc.
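The following lines illustrate, with arbitrary example matrices not taken from the paper, the kind of scalar quantity the procedure returns: an expectation value x^T M x rather than the vector x itself.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # illustrative system, not from the paper
b = np.array([1.0, 0.0])
M = np.array([[1.0, 0.0],
              [0.0, 0.0]])          # e.g. the weight of x on the first basis state

x = np.linalg.solve(A, b)
print(x @ M @ x)                    # the scalar x^T M x
# Note: on a quantum computer |x> is normalized, so what is estimated is
# x^T M x up to the normalization ||x||^2.
```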
Explanation of the Algorithm
Initialization
- Force A to be Hermitian - https://en.wikipedia.org/wiki/Hermitian_matrix
In the case where A is not Hermitian, define
C = [[0, A], [A†, 0]]
As C is Hermitian, the algorithm can now be used to solve Cy = (b, 0) to obtain y = (0, x), from which x can be read off.
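A quick NumPy check of the Hermitian embedding described above, using an arbitrary non-Hermitian example matrix: solving C y = (b, 0) indeed returns y = (0, x) with A x = b.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])          # an arbitrary non-Hermitian example
b = np.array([1.0, 1.0])
N = A.shape[0]

# Build the Hermitian embedding C = [[0, A], [A^dagger, 0]].
C = np.block([[np.zeros((N, N)), A],
              [A.conj().T, np.zeros((N, N))]])

y = np.linalg.solve(C, np.concatenate([b, np.zeros(N)]))
x = y[N:]                           # y = (0, x): the lower block carries the solution
print(np.allclose(A @ x, b))        # True
```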
- Prepare |b> on the first register
The algorithm also requires an efficient procedure to prepare |b>, the quantum representation of b. It is assumed that there is some unitary B that can take the quantum state |initial> to |b> efficiently, or that this algorithm is a subroutine in a larger algorithm and is given |b> as input. Any error in the preparation of the state |b> is ignored.
- Prepare |Ψ0> on register C
Finally, the algorithm assumes that the state |Ψ0> can be prepared efficiently, where
|Ψ0> = sqrt(2/T) Σ_{τ=0}^{T-1} sin(π(τ + 1/2)/T) |τ>
for some large T. The coefficients of |Ψ0> are chosen to minimize a certain quadratic loss function which induces error.
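A short check of the amplitudes written above (T below is an arbitrary illustrative value): the coefficients sqrt(2/T) sin(π(τ+1/2)/T) do form a normalized state.

```python
import numpy as np

T = 64                                                      # illustrative choice of T
tau = np.arange(T)
psi0 = np.sqrt(2.0 / T) * np.sin(np.pi * (tau + 0.5) / T)   # amplitudes of |Psi_0>
print(np.isclose(np.sum(psi0 ** 2), 1.0))                   # True: the state is normalized
```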
Phase Estimation
- Transform A into a unitary operator
First, the Hermitian matrix A must be transformed into a unitary operator e^{iAt}, which can be applied at will. This is possible if A is s-sparse and efficiently row computable, meaning it has at most s nonzero entries per row and, given a row index, these entries can be computed in time O(s). Under these assumptions, Hamiltonian simulation allows e^{iAt} to be simulated in time O~(log(N) s^2 t).
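A minimal sketch (using SciPy's dense matrix exponential rather than a genuine Hamiltonian-simulation routine) of the statement above: for a Hermitian A, the operator e^{iAt} is unitary and can therefore be applied as a quantum gate.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                    # illustrative Hermitian matrix
t = 0.7                                       # arbitrary evolution time
U = expm(1j * A * t)                          # e^{iAt}
print(np.allclose(U.conj().T @ U, np.eye(2))) # True: e^{iAt} is unitary
```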
Uinvert Subroutine
The key subroutine to the algorithm, denoted Uinvert, is defined as follows:
1. Prepare |Ψ0> on register C
2. Apply the conditional Hamiltonian evolution Σ_{τ=0}^{T-1} |τ><τ| ⊗ e^{iAτt0/T}
3. Apply the Fourier transform to register C. Denote the resulting basis states with |k> for k = 0,...,T-1. Define λ~_k := 2πk/t0.
4. Adjoin a three-dimensional register S in the state
|h(λ~_k)> := sqrt(1 - f(λ~_k)^2 - g(λ~_k)^2) |nothing> + f(λ~_k) |well> + g(λ~_k) |ill>,
for functions f, g defined below. Here 'nothing' indicates that the desired matrix inversion hasn't taken place, 'well' indicates that it has, and 'ill' means that part of |b> is in the ill-conditioned subspace of A.
5. Reverse steps 1-3, uncomputing any garbage produced along the way.
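The following numerical sketch mimics steps 1-3 of Uinvert for a single eigenvector |u_j>: the clock register C ends up sharply peaked at the index k whose estimate λ~_k = 2πk/t0 matches the eigenvalue. The values of T, t0 and the eigenvalue are illustrative choices, not values from the paper.

```python
import numpy as np

T, t0 = 64, 8.0                  # illustrative clock size and evolution time
lam = np.pi                      # eigenvalue of A on the chosen eigenvector |u_j>

tau = np.arange(T)
psi0 = np.sqrt(2.0 / T) * np.sin(np.pi * (tau + 0.5) / T)   # step 1: |Psi_0> on register C
clock = psi0 * np.exp(1j * lam * tau * t0 / T)               # step 2: conditional phases e^{i*lam*tau*t0/T}

amps = np.fft.fft(clock) / np.sqrt(T)                        # step 3: Fourier transform of register C
k_peak = int(np.argmax(np.abs(amps)))
print(2 * np.pi * k_peak / t0, lam)                          # the estimate lambda~_k is close to lam
```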
Amplitude Amplification
- Rsucc: the reflection about the states in which register S reads 'well', i.e. in which the inversion succeeded
- Rinit: the reflection about the initial state |initial>
- Repeatedly apply Uinvert B Rinit B† Uinvert† Rsucc; measure on S
- Let p~ denote the success probability of the measurement. Rather than repeating roughly 1/p~ times to minimize error, amplitude amplification is used to achieve the same error resilience using only O(1/sqrt(p~)) repetitions (as sketched below).
- Final state: if Uinvert is applied to |b> = Σ_j β_j |u_j>, it will, up to an error, adjoin the state |h(λ_j)> to each eigencomponent; conditioned on register S reading 'well', the result is proportional to A^{-1}|b> = |x>.
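A back-of-the-envelope illustration of the repetition count quoted above: amplitude amplification rotates the state by an angle 2θ per iteration, with sin(θ) = sqrt(p~), so roughly (π/4)/sqrt(p~) iterations suffice instead of about 1/p~ repeat-until-success trials. The success probability below is an arbitrary example value.

```python
import numpy as np

p = 1e-4                                            # illustrative success probability
theta = np.arcsin(np.sqrt(p))

naive = int(np.ceil(1.0 / p))                       # repeat-until-success repetitions
amplified = int(np.ceil((np.pi / 2) / (2 * theta))) # iterations to rotate the angle up to ~pi/2
print(naive, amplified)                             # 10000 vs 79
```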
Scalar Measurement
- Perform the quantum measurement corresponding to the operator M to obtain an estimate of the scalar quantity <x|M|x>
Run Time Analysis
Classical Efficiency
The fastest classical method for this problem (not to be confused with Gaussian elimination, which produces the actual solution x) is conjugate gradient descent. If A is s-sparse (i.e. s << N), conjugate gradient descent can be used to find the solution by minimizing a quadratic function of x, in time roughly O(N sqrt(k)) (see the sketch below). An important question is whether classical methods can be improved when only a summary statistic of the solution, such as x^T M x, is required.
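A minimal sketch of the classical baseline mentioned above: conjugate gradient applied to a sparse, well-conditioned, positive-definite system. The tridiagonal matrix is an illustrative stand-in for an s-sparse A.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

N = 1000
A = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(N, N))  # s-sparse, well conditioned
b = np.ones(N)

x, info = cg(A, b)                                  # conjugate gradient iteration
print(info == 0, np.linalg.norm(A @ x - b) < 1e-3)  # converged, small residual
```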
Quantum Efficiency
The quantum algorithm for solving linear systems of equations proposed in Harrow et al. was shown to run in time O(k^2 log(N) / ε), where ε is the allowed error. The runtime of this algorithm was subsequently improved to O(k log^3(k) log(N) / ε^3) by Andris Ambainis.
Optimality
An important factor in the performance of the matrix inversion algorithm is k, the condition number of A: the ratio between A's largest and smallest eigenvalues. As the condition number grows, A becomes closer to a matrix which cannot be inverted, and the solutions become less stable; such a matrix is said to be "ill-conditioned". This algorithm generally assumes that the singular values of A lie between 1/k and 1, in which case the run time scales with k^2. Therefore, the greatest advantage this algorithm has over classical algorithms occurs when k is poly(log N), in which case an exponential speedup is achieved.
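A two-line numerical illustration of the condition number discussed above, using an arbitrary 2x2 example: k is the ratio of the largest to the smallest singular value, which NumPy computes directly.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1e-3]])                      # arbitrary, increasingly ill-conditioned example
s = np.linalg.svd(A, compute_uv=False)           # singular values
print(s.max() / s.min(), np.linalg.cond(A))      # both print 1000.0
```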
If the run-time of the algorithm could be made polylogarithmic in k, then any problem solvable on n qubits could be solved in poly(n) time, causing a collapse of the complexity hierarchy (BQP = PSPACE), which is considered highly unlikely. Even improving the k-dependence to k^(1-δ) for δ > 0 would imply that BQP = PSPACE. Similarly, improving the error dependence to poly(log(1/ε)) would imply that BQP includes PP.
Error Analysis
Experimental Realization
Introduction
Demonstration of powerful quantum algorithms in a scalable quantum system has been considered a milestone towards achieving true quantum computation. While two of the three most prominent quantum algorithms that achieve exponential speedup over classical computers had previously been realized, the third, this algorithm, remained a challenge for years after its proposal. The first demonstrations of the quantum linear system algorithm came in 2013, though they tested only the simplest meaningful instance: solving a 2x2 linear system.
Barz et al.
On February 5, 2013, Barz et al. demonstrated this quantum algorithm by implementing various instances on a photonic quantum computing architecture. Their implementation involves the application of two consecutive entangling gates on the same pair of polarisation-encoded qubits. They realize two separate controlled-NOT gates where the successful operation of the first gate is heralded by a measurement of two ancillary photons. Their work thus demonstrates the implementation of a quantum algorithm with high practical significance as well as an important technological advance which brings the field closer to a comprehensive control of photonic quantum information.
Pan et al.
On February 8, 2013, Pan et al. reported a proof-of-concept experimental demonstration of the quantum algorithm using a 4-qubit nuclear magnetic resonance (NMR) quantum information processor. For all three sets of experiments, with different choices of the vector b, they obtained the solutions with over 96% fidelity. This experiment was among the first implementations of the algorithm. Because solving linear systems is a common problem in nearly all fields of science and engineering, they also discuss the implications of their results for the potential of using quantum computers to solve practical linear systems.
Sources
http://arxiv.org/abs/1302.1946
http://arxiv.org/abs/1302.1210v1
http://www.2physics.com/2013/06/quantum-computer-runs-most-practically.html
http://physicsworld.com/cws/article/news/2013/jun/12/quantum-computer-solves-simple-linear-equations
http://phys.org/news/2013-06-quantum-algorithm-linear-equations.html
Applications
Introduction
Quantum computers are devices that harness quantum mechanics to perform computations in ways that classical computers cannot. For certain problems, quantum algorithms supply exponential speedups over their classical counterparts, the most famous example being Shor's factoring algorithm. Few such exponential speedups are known, and those that are (such as the use of quantum computers to simulate other quantum systems) have so far found limited use outside the domain of quantum mechanics. This algorithm provides an exponentially faster method of estimating features of the solution of a set of linear equations, a problem ubiquitous in science and engineering, both on its own and as a subroutine in more complex problems.
Linear Differential Equation Solving
Differential equations are used for an enormous variety of applications, including industrial design and weather prediction. The paper linked below considers sparse systems of first-order linear differential equations; standard techniques can be used to convert linear differential equations with higher-order derivatives into first-order systems, which can then be treated with the linear-systems approach.
http://arxiv.org/abs/1010.2745
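A small sketch of the standard reduction mentioned above, with illustrative coefficients: a second-order linear ODE y'' + a y' + b y = 0 is rewritten as the first-order system d/dt (y, y') = C (y, y'), which can then be propagated (here simply via the matrix exponential).

```python
import numpy as np
from scipy.linalg import expm

a, b = 0.5, 2.0                          # illustrative coefficients of y'' + a y' + b y = 0
C = np.array([[0.0, 1.0],
              [-b, -a]])                 # d/dt [y, y'] = C [y, y']

y0 = np.array([1.0, 0.0])                # initial conditions y(0) = 1, y'(0) = 0
t = 0.1
print(expm(C * t) @ y0)                  # [y(t), y'(t)] for the first-order system
```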
Least-Squares Curve Fitting
Wiebe et al. provide a new quantum algorithm that efficiently determines the quality of a least-squares fit over an exponentially large data set by building upon the quantum algorithm for linear systems of equations. They find that in many cases, their algorithm can also efficiently find a concise function that approximates the data to be fitted and bound the approximation error. In cases where the input data is a pure quantum state, the algorithm can be used to provide an efficient parametric estimation of the quantum state, and therefore can be applied as an alternative to full quantum state tomography, given a fault-tolerant quantum computer.
http://arxiv.org/abs/1204.5242
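A small classical sketch (random data, not from Wiebe et al.) of why least-squares fitting reduces to solving a linear system: the fit parameters satisfy the normal equations (A^T A) x = A^T b.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))            # 50 data points, 3 fit parameters (illustrative)
b = rng.normal(size=50)

x_normal = np.linalg.solve(A.T @ A, A.T @ b)      # solve the normal equations as a linear system
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)   # reference least-squares solver
print(np.allclose(x_normal, x_lstsq))             # True: same fit parameters
```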
Machine Learning
Machine-learning tasks frequently involve problems of manipulating and classifying large numbers of vectors in high-dimensional spaces. Classical algorithms for solving such problems typically take time polynomial in the number of vectors and the dimension of the space. Quantum computers are good at manipulating high-dimensional vectors in large tensor product spaces. Lloyd et al. provide supervised and unsupervised quantum machine learning algorithms for cluster assignment and cluster finding. Quantum machine learning can take time logarithmic in both the number of vectors and their dimension, an exponential speed-up over classical algorithms.
In supervised learning, the machine infers a function from a set of training examples.
In unsupervised learning, the machine tries to find a hidden structure in unlabeled data.
In machine learning, information processors perform tasks of sorting, assembling, assimilating, and classifying information.
http://arxiv.org/abs/1307.0411
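A purely classical sketch of the cluster-assignment task that Lloyd et al. speed up, using randomly generated vectors: a new vector is assigned to whichever cluster has the nearer mean.

```python
import numpy as np

rng = np.random.default_rng(1)
cluster_a = rng.normal(loc=0.0, size=(20, 8))     # labelled training vectors (illustrative)
cluster_b = rng.normal(loc=3.0, size=(20, 8))
u = rng.normal(loc=3.0, size=8)                   # new vector to classify

dist_a = np.linalg.norm(u - cluster_a.mean(axis=0))
dist_b = np.linalg.norm(u - cluster_b.mean(axis=0))
print("cluster B" if dist_b < dist_a else "cluster A")   # expected: cluster B
```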
Big Data Analysis
Supervised machine learning is the classification of new data based on already classified training examples. This algorithm has been applied to a support vector machine, an optimized linear and non-linear binary classifier, which can be implemented on a quantum computer with exponential speedups in the size of the vectors and the number of training examples. At the core of the algorithm is a non-sparse matrix simulation technique used to efficiently perform a principal component analysis and matrix inversion of the training data kernel matrix. Thus, an example of a quantum big-feature and big-data algorithm has been provided, paving the way for future developments at the intersection of quantum computing and machine learning.
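A classical sketch of the kernel-matrix inversion at the heart of the approach described above, written here as a simple kernel ridge classifier rather than the full least-squares support vector machine; the data, kernel width, and regularization are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2.0, size=(20, 4)),    # two well-separated training clusters
               rng.normal(2.0, size=(20, 4))])
y = np.array([-1.0] * 20 + [1.0] * 20)            # binary labels

gamma = 0.1
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-gamma * sq_dists)                     # RBF kernel (Gram) matrix of the training data

alpha = np.linalg.solve(K + 1e-3 * np.eye(len(y)), y)   # invert the regularized kernel matrix

u = rng.normal(2.0, size=4)                       # a new point drawn near the +1 cluster
k_u = np.exp(-gamma * np.sum((X - u) ** 2, axis=1))
print(np.sign(k_u @ alpha))                       # expected: 1.0
```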