User:Irvings1/Quantum algorithm for linear systems of equations

The quantum algorithm for linear systems of equations, designed by Aram Harrow, Avinatan Hassidim, and Seth Lloyd and formulated in 2009, is a quantum algorithm for solving linear systems of equations. The algorithm estimates the result of a scalar measurement on the solution vector to a given linear system of equations.[1]

The algorithm is one of only a few known quantum algorithms that provide an exponential improvement over their classical counterparts, along with Shor's factoring algorithm and Feynman's proposal for quantum simulation of complex systems. Provided the linear system is sparse and has a low condition number $\kappa$, and the user is interested in the result of a scalar measurement on the solution vector rather than in the values of the solution vector itself, the algorithm has a runtime of $O(\kappa^2 \log N)$, offering an exponential speedup over the fastest classical algorithm, which runs in $O(N\sqrt{\kappa})$, where $N$ is the number of variables in the linear system.

An implementation of the quantum algorithm for linear systems of equations was first demonstrated in 2013 by Barz et al. and Pan et al. in parallel. The demonstrations consisted of solving 2×2 systems of linear equations, on a photonic quantum computer and on a nuclear magnetic resonance quantum information processor, respectively.[2][3]

Due to the prevalence of linear systems in virtually all areas of science and engineering, the quantum algorithm for linear systems of equations has the potential to be one of the most practically useful quantum algorithms known so far. This is an important milestone for quantum computing, as previous algorithms offering exponential speedup do not have obvious real-world applications.[4]

Procedure

The problem we are trying to solve is: given an $N \times N$ Hermitian matrix $A$ and a unit vector $\vec{b}$, find the solution vector $\vec{x}$ satisfying $A\vec{x} = \vec{b}$. This algorithm assumes that the user is not interested in the values of $\vec{x}$ itself, but rather in the result of applying some operator $M$ to $\vec{x}$, namely $\vec{x}^T M \vec{x}$.

First, the algorithm represents the vector $\vec{b}$ as a quantum state of the form

$|b\rangle = \sum_{i=1}^{N} b_i |i\rangle.$

Next, Hamiltonian simulation techniques are used to apply the unitary operator $e^{iAt}$ to $|b\rangle$ for a superposition of different times $t$. This ability to exponentiate $A$ translates, via the well-known technique of phase estimation, into the ability to decompose $|b\rangle$ into the eigenbasis of $A$ and to find the corresponding eigenvalues $\lambda_j$.

Informally, the state of the system after this decomposition is close to

$\sum_{j=1}^{N} \beta_j |u_j\rangle |\lambda_j\rangle,$

where $u_j$ is the eigenvector basis of $A$, and $|b\rangle = \sum_{j=1}^{N} \beta_j |u_j\rangle$.

We would then like to perform the linear map taking $|\lambda_j\rangle$ to $C\lambda_j^{-1}|\lambda_j\rangle$, where $C$ is a normalizing constant. As this operation is not unitary, it has some probability of failing. After it succeeds, we uncompute the $|\lambda_j\rangle$ register and are left with a state proportional to

$\sum_{j=1}^{N} \beta_j \lambda_j^{-1} |u_j\rangle = A^{-1}|b\rangle = |x\rangle,$

where $|x\rangle$ is a quantum-mechanical representation of the desired solution vector $\vec{x}$. To read out all components of $\vec{x}$ would require the procedure to be repeated at least $N$ times. However, it is often the case that one is not interested in $\vec{x}$ itself, but rather in some expectation value $\vec{x}^T M \vec{x}$, where $M$ is some linear operator. By mapping $M$ to a quantum-mechanical operator and performing the quantum measurement corresponding to $M$, we obtain an estimate of the expectation value $\langle x|M|x\rangle = \vec{x}^T M \vec{x}$, as desired. This allows a wide variety of features of the vector $\vec{x}$ to be extracted, including normalization, weights in different parts of the state space, moments, etc.
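
The linear-algebra content of this procedure can be illustrated classically. The following sketch (illustrative only; the quantum algorithm never writes out these vectors explicitly, and the matrix, vector, and observable are arbitrary examples) decomposes $\vec{b}$ in the eigenbasis of $A$, rescales each component by $\lambda_j^{-1}$, and evaluates the summary statistic $\langle x|M|x\rangle$:

```python
# Classical sketch of the eigenbasis view used by the algorithm (illustration only;
# the quantum algorithm never writes out these vectors explicitly).
import numpy as np

rng = np.random.default_rng(0)

# A small Hermitian A and unit vector b (arbitrary illustrative values).
N = 4
X = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
A = (X + X.conj().T) / 2                 # Hermitian matrix
b = rng.normal(size=N) + 1j * rng.normal(size=N)
b = b / np.linalg.norm(b)                # |b> is a unit vector

# Decompose |b> in the eigenbasis of A: b = sum_j beta_j u_j.
lam, U = np.linalg.eigh(A)               # eigenvalues lambda_j, eigenvectors u_j
beta = U.conj().T @ b

# "Invert" each eigenvalue: x = sum_j (beta_j / lambda_j) u_j = A^{-1} b.
x = U @ (beta / lam)

# The algorithm outputs an estimate of <x|M|x> for some operator M,
# not the entries of x themselves.
M = np.diag(np.arange(1, N + 1)).astype(complex)   # an arbitrary observable
expectation = np.real(x.conj() @ M @ x)
print(expectation)
```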

Explanation of the Algorithm

Initialization

Firstly, the algorithm requires that the matrix $A$ be Hermitian so that it can be converted into a unitary operator. In the case where $A$ is not Hermitian, define

$\mathbf{C} = \begin{pmatrix} 0 & A \\ A^\dagger & 0 \end{pmatrix}.$

As $\mathbf{C}$ is Hermitian, the algorithm can now be used to solve $\mathbf{C}\vec{y} = \begin{pmatrix} \vec{b} \\ 0 \end{pmatrix}$ to obtain $\vec{y} = \begin{pmatrix} 0 \\ \vec{x} \end{pmatrix}$.
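
A minimal numerical sketch of this embedding (the matrix and vector below are arbitrary illustrative values):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0]], dtype=complex)          # square but not Hermitian
b = np.array([1.0, 2.0], dtype=complex)
n = A.shape[0]

# C = [[0, A], [A^dagger, 0]] is Hermitian by construction.
C = np.block([[np.zeros((n, n), dtype=complex), A],
              [A.conj().T, np.zeros((n, n), dtype=complex)]])

# Solve C y = (b, 0)^T; the solution has the form y = (0, x)^T with A x = b.
rhs = np.concatenate([b, np.zeros(n, dtype=complex)])
y = np.linalg.solve(C, rhs)
x = y[n:]                                           # the last n entries recover x
print(np.allclose(A @ x, b))                        # True
```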

Secondly, the algorithm requires an efficient procedure to prepare $|b\rangle$, the quantum representation of $\vec{b}$. It is assumed either that there exists some linear operator that can take an arbitrary quantum state to $|b\rangle$ efficiently, or that this algorithm is a subroutine in a larger algorithm and is given $|b\rangle$ as input. Any error in the preparation of the state $|b\rangle$ is ignored.

Finally, the algorithm assumes that the state $|\Psi_0\rangle$ can be prepared efficiently, where

$|\Psi_0\rangle = \sqrt{\tfrac{2}{T}} \sum_{\tau=0}^{T-1} \sin\!\left(\tfrac{\pi(\tau + 1/2)}{T}\right) |\tau\rangle$

for some large $T$. The coefficients of $|\Psi_0\rangle$ are chosen to minimize a certain quadratic loss function which induces error in the $U_{\mathrm{invert}}$ subroutine described below.
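
A short sketch of the amplitudes of $|\Psi_0\rangle$ for an illustrative choice of $T$, verifying that they form a normalized state:

```python
# Amplitudes of |Psi_0> for a chosen register size T (the value of T is illustrative).
import numpy as np

T = 16
tau = np.arange(T)
amps = np.sqrt(2.0 / T) * np.sin(np.pi * (tau + 0.5) / T)   # sqrt(2/T) sin(pi(tau+1/2)/T)
print(np.sum(amps**2))   # ~1: the state is normalized
```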

Phase Estimation

Phase estimation is used to transform the Hermitian matrix $A$ into a unitary operator, which can then be applied at will. This is possible if $A$ is $s$-sparse and efficiently row computable, meaning it has at most $s$ nonzero entries per row and, given a row index, these entries can be computed in time $O(s)$. Under these assumptions, quantum phase estimation allows $e^{iAt}$ to be simulated in time $\tilde{O}(\log(N)s^2t)$.
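
As an illustration of what is being simulated, the following sketch forms $e^{iAt}$ densely for a small Hermitian matrix and checks that it is unitary; an actual implementation would rely on sparse Hamiltonian-simulation techniques rather than an explicit matrix exponential, and the matrix, vector, and evolution time here are arbitrary examples:

```python
# Dense sketch of the unitary e^{iAt} applied to |b>.
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [1.0, 3.0]], dtype=complex)     # Hermitian and 2-sparse
b = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)

t = 0.7
U = expm(1j * A * t)                          # e^{iAt} is unitary because A is Hermitian
print(np.allclose(U.conj().T @ U, np.eye(2))) # unitarity check
print(U @ b)                                  # the state after evolution for time t
```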

Uinvert Subroutine

The key subroutine to the algorithm, denoted $U_{\mathrm{invert}}$, is defined as follows:

1. Prepare $|\Psi_0\rangle^C$ on register C.

2. Apply the conditional Hamiltonian evolution $\sum_{\tau=0}^{T-1} |\tau\rangle\langle\tau|^C \otimes e^{iA\tau t_0/T}$.

3. Apply the Fourier transform to the register C. Denote the resulting basis states with $|k\rangle$ for $k = 0, \ldots, T-1$. Define $\tilde{\lambda}_k := 2\pi k / t_0$.

4. Adjoin a three-dimensional register S in the state

$|h(\tilde{\lambda}_k)\rangle^S := \sqrt{1 - f(\tilde{\lambda}_k)^2 - g(\tilde{\lambda}_k)^2}\,|\mathrm{nothing}\rangle^S + f(\tilde{\lambda}_k)\,|\mathrm{well}\rangle^S + g(\tilde{\lambda}_k)\,|\mathrm{ill}\rangle^S,$

5. Reverse steps 1-3, uncomputing any garbage produced along the way.

for functions $f$, $g$ defined below. Here 'nothing' indicates that the desired matrix inversion hasn't taken place, 'well' indicates that it has, and 'ill' means that part of $|b\rangle$ is in the ill-conditioned subspace of $A$.
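
A simplified numerical sketch of the step-4 amplitudes, under the assumption that every estimated eigenvalue lies in the well-conditioned range (so $g = 0$ and $f(\tilde{\lambda}) = C/\tilde{\lambda}$); the constant $C$, the value of $\kappa$, and the example eigenvalues are illustrative, and the smooth filter functions used in the full analysis are omitted:

```python
# Simplified sketch of the ancilla amplitudes in step 4, assuming every eigenvalue
# lies in the well-conditioned range [1/kappa, 1] (so g = 0 and f(lambda) = C/lambda).
import numpy as np

kappa = 10.0
C = 1.0 / kappa                      # C = O(1/kappa) keeps all amplitudes <= 1
lams = np.array([0.1, 0.35, 1.0])    # example eigenvalue estimates lambda_k

f = C / lams                         # amplitude of |well>
nothing = np.sqrt(1.0 - f**2)        # amplitude of |nothing>
p_well = f**2                        # probability of measuring 'well'
print(p_well)    # smaller eigenvalues receive the larger 1/lambda weighting
```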

Main Loop

The body of the algorithm follows the amplitude amplification procedure: starting with $U_{\mathrm{invert}} B |\mathrm{initial}\rangle$, the following operation is repeatedly applied:

$U_{\mathrm{invert}} B R_{\mathrm{init}} B^\dagger U_{\mathrm{invert}}^\dagger R_{\mathrm{succ}},$

where

$R_{\mathrm{succ}} = I - 2|\mathrm{well}\rangle\langle\mathrm{well}|$

and

$R_{\mathrm{init}} = I - 2|\mathrm{initial}\rangle\langle\mathrm{initial}|.$

After each repetition, $S$ is measured until a successful measurement of 'well' is obtained. Let $\tilde{p}$ denote the success probability of the measurement. Rather than repeating $1/\tilde{p}$ times to minimize error, amplitude amplification allows the algorithm to achieve the same error resilience using only $O(1/\sqrt{\tilde{p}})$ repetitions.
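
A toy calculation illustrating the difference between the two repetition counts (the per-round success probability used here is purely illustrative):

```python
# Expected number of repetitions with and without amplitude amplification.
import numpy as np

p = 1e-4                                   # probability of measuring 'well' in one round
naive_repetitions = 1.0 / p                # repeat-until-success scaling, O(1/p)
amplified_repetitions = 1.0 / np.sqrt(p)   # amplitude-amplification scaling, O(1/sqrt(p))
print(naive_repetitions, amplified_repetitions)   # 10000.0 vs 100.0
```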

Scalar Measurement

After successfully measuring 'well' on $S$, we are left with a state proportional to

$\sum_{j=1}^{N} \beta_j \lambda_j^{-1} |u_j\rangle = A^{-1}|b\rangle = |x\rangle.$

Finally, we perform the quantum-mechanical measurement corresponding to $M$ and obtain an estimate of the value of $\langle x|M|x\rangle$.

Run Time Analysis

Classical Efficiency

The best classical algorithm which produces the actual solution vector $\vec{x}$ is Gaussian elimination, which runs in $O(N^3)$ time.

If $A$ is $s$-sparse, where $s$ is significantly smaller than $N$, then the conjugate gradient method can be used to find the solution vector $\vec{x}$ in $O(Ns\kappa\log(1/\varepsilon))$ time by minimizing the quadratic function $|A\vec{x} - \vec{b}|^2$.

When only a summary statistic of the solution vector $\vec{x}$ is needed, as is the case for the quantum algorithm for linear systems of equations, a classical computer can find an estimate of $\vec{x}^T M \vec{x}$ in $O(N\sqrt{\kappa})$ time.
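
For concreteness, the classical baseline can be sketched with the conjugate gradient method; the tridiagonal system, diagonal observable, and tolerance below are illustrative choices, not part of the original analysis:

```python
# Classical baseline sketch: solve a sparse, well-conditioned system with the
# conjugate gradient method and read off a summary statistic x^T M x.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

N = 1000
A = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(N, N), format="csr")  # s-sparse, SPD
b = np.ones(N)

x, info = cg(A, b, atol=1e-10)                       # info == 0 signals convergence
M = diags([np.arange(1.0, N + 1)], offsets=[0])      # an illustrative diagonal observable
print(info, x @ (M @ x))
```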

Quantum Efficiency

The quantum algorithm for solving linear systems of equations originally proposed by Harrow et al. was shown to run in $O(\kappa^2\log(N)/\varepsilon)$ time, where $\varepsilon$ is the additive error in the output state. The runtime of this algorithm was subsequently improved to $O(\kappa\log^3(\kappa)\log(N)/\varepsilon^3)$ by Andris Ambainis.[5]

Optimality

An important factor in the performance of the matrix inversion algorithm is $\kappa$, the condition number of $A$, or the ratio between $A$'s largest and smallest eigenvalues. As the condition number grows, $A$ becomes closer to a matrix which cannot be inverted, and the solutions become less stable. Such a matrix is said to be "ill-conditioned". The algorithm generally assumes that the singular values of $A$ lie between $1/\kappa$ and 1, in which case the run time scales with $\kappa^2$. Therefore, the greatest advantage this algorithm has over classical algorithms occurs when $\kappa$ is $\mathrm{poly}(\log N)$, in which case an exponential speedup is achieved.
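
A small numerical sketch of the condition number as the ratio of extreme singular values (the matrix is an arbitrary example):

```python
# The condition number kappa is the ratio of the largest to smallest singular value
# of A (for Hermitian A these are the absolute values of the eigenvalues).
import numpy as np

A = np.array([[1.0, 0.2],
              [0.2, 0.5]])
sing = np.linalg.svd(A, compute_uv=False)
kappa = sing.max() / sing.min()
print(kappa, np.linalg.cond(A))   # the two values agree
```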

If the run-time of the algorithm could be made polylogarithmic in $\kappa$, then any problem solvable on $n$ qubits could be solved in $\mathrm{poly}(n)$ time, causing a complexity hierarchy collapse (BQP = PSPACE), which is considered highly unlikely. Even improving the $\kappa$-dependence to $\kappa^{1-\delta}$ for $\delta > 0$ would imply BQP = PSPACE. Similarly, improving the error dependence to $\mathrm{poly}\log(1/\varepsilon)$ would imply that BQP includes PP.

Error Analysis

Performing the phase estimation, which is the dominant source of error, is done by simulating $e^{iAt}$. Assuming that $A$ is $s$-sparse, this can be done with an error bounded by a constant $\varepsilon$, which will translate into the additive error achieved in the output state $|x\rangle$.

The phase estimation step errs by $O(1/t_0)$ in estimating $\lambda$, which translates into a relative error of $O(1/\lambda t_0)$ in $\lambda^{-1}$. If $\lambda \geq 1/\kappa$, taking $t_0 = O(\kappa/\varepsilon)$ induces a final error of $\varepsilon$. This requires that the overall run-time be increased proportionally to $O(\kappa/\varepsilon)$ to minimize error.

Experimental Realization

Introduction

Demonstrating powerful quantum algorithms in a scalable quantum system is considered a milestone towards achieving true quantum computation. While two of the three most prominent quantum algorithms that achieve exponential speedup over classical computers had previously been realized, the third, this algorithm, remained a challenge for years after its proposal. The first demonstrations of the quantum linear system algorithm appeared in 2013, though they only tested the simplest meaningful instance: solving a 2×2 system of linear equations.

Barz et al.

On February 5, 2013, Barz et al. demonstrated this quantum algorithm by implementing various instances of it on a photonic quantum computing architecture. Their implementation involved the application of two consecutive entangling gates on the same pair of polarisation-encoded qubits. They realized two separate controlled-NOT gates, where the successful operation of the first gate was heralded by a measurement of two ancillary photons. The work thus demonstrates the implementation of a quantum algorithm with high practical significance, as well as an important technological advance towards comprehensive control of photonic quantum information.

Results?

Pan et al.

On February 8, 2013, Pan et al. reported a proof-of-concept experimental demonstration of the quantum algorithm using a 4-qubit nuclear magnetic resonance (NMR) quantum information processor. For all three sets of experiments, with different choices of $\vec{b}$, they obtained solutions with over 96% fidelity. This experiment was among the first implementations of the algorithm. Because solving linear systems is a common problem in nearly all fields of science and engineering, the authors also discuss the implications of their results for the potential of using quantum computers to solve practical linear systems.

Results?

Sources

http://arxiv.org/abs/1302.1946

http://arxiv.org/abs/1302.1210v1

http://www.2physics.com/2013/06/quantum-computer-runs-most-practically.html

http://physicsworld.com/cws/article/news/2013/jun/12/quantum-computer-solves-simple-linear-equations

http://phys.org/news/2013-06-quantum-algorithm-linear-equations.html

Applications

Introduction

Quantum computers are devices that harness quantum mechanics to perform computations in ways that classical computers cannot. For certain problems, quantum algorithms supply exponential speedups over their classical counterparts, the most famous example being Shor's factoring algorithm. Few such exponential speedups are known, and those that are (such as the use of quantum computers to simulate other quantum systems) have so far found limited use outside the domain of quantum mechanics. This algorithm provides an exponentially faster method of estimating features of the solution of a set of linear equations, a problem ubiquitous in science and engineering, both on its own and as a subroutine in more complex problems.

Linear Differential Equation Solving

Differential equations are used in an enormous variety of applications, including industrial design and weather prediction. The paper linked below considers sparse systems of first-order linear differential equations, which can be solved with the quantum algorithm for linear systems of equations; linear differential equations with higher-order derivatives can be converted into such first-order systems using standard techniques.

http://arxiv.org/abs/1010.2745
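
A brief sketch of the standard order-reduction step mentioned above, using an illustrative second-order equation $y'' + a_1 y' + a_0 y = 0$ and a single explicit Euler step (the coefficients, initial conditions, and step size are arbitrary):

```python
# Reduction of a higher-order linear ODE to a first-order system:
# y'' + a1*y' + a0*y = 0 becomes z' = A z with z = (y, y').
import numpy as np

a0, a1 = 2.0, 3.0
A = np.array([[0.0, 1.0],
              [-a0, -a1]])       # companion matrix of the scalar equation

z = np.array([1.0, 0.0])         # initial conditions y(0) = 1, y'(0) = 0
h = 0.01                         # one explicit Euler step of the first-order system
z = z + h * (A @ z)
print(z)
```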

Least-Square Curve-Fitting

Wiebe et al. provide a new quantum algorithm that efficiently determines the quality of a least-squares fit over an exponentially large data set by building upon the quantum algorithm for linear systems of equations. They find that in many cases their algorithm can also efficiently find a concise function that approximates the data to be fitted and bound the approximation error. In cases where the input data is a pure quantum state, the algorithm can be used to provide an efficient parametric estimation of the quantum state and can therefore be applied as an alternative to full quantum state tomography, given a fault-tolerant quantum computer.

http://arxiv.org/abs/1204.5242
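
The classical problem being accelerated can be sketched as follows: choosing fit coefficients $c$ that minimize $\|Fc - y\|^2$ reduces to the linear system $(F^T F)c = F^T y$ (the data and model below are illustrative):

```python
# Classical least-squares fit via the normal equations (F^T F) c = F^T y.
import numpy as np

x = np.linspace(0.0, 1.0, 50)
y = 2.0 + 3.0 * x + 0.1 * np.sin(20 * x)       # data to be fitted (illustrative)

F = np.column_stack([np.ones_like(x), x])      # design matrix for a linear model
c = np.linalg.solve(F.T @ F, F.T @ y)          # normal equations
residual = np.linalg.norm(F @ c - y)           # quality of the fit
print(c, residual)
```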

Machine Learning

Machine-learning tasks frequently involve problems of manipulating and classifying large numbers of vectors in high-dimensional spaces. Classical algorithms for solving such problems typically take time polynomial in the number of vectors and the dimension of the space. Quantum computers, by contrast, are good at manipulating high-dimensional vectors in large tensor product spaces. Lloyd et al. provide supervised and unsupervised quantum machine learning algorithms for cluster assignment and cluster finding. These quantum machine learning algorithms can take time logarithmic in both the number of vectors and their dimension, an exponential speed-up over classical algorithms.

In supervised learning, the machine infers a function from a set of training examples.

In unsupervised learning, the machine tries to find a hidden structure in unlabeled data.

In machine learning, information processors perform tasks of sorting, assembling, assimilating, and classifying information.

http://arxiv.org/abs/1307.0411
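
A classical sketch of the cluster-assignment task (nearest-centroid classification): the quantum algorithm estimates the required distances in time logarithmic in the vector dimension, whereas this direct computation is linear in it. The data here are randomly generated for illustration:

```python
# Assign a new vector to the cluster whose centroid it is closest to.
import numpy as np

rng = np.random.default_rng(1)
cluster_a = rng.normal(loc=0.0, size=(20, 5))     # labeled training vectors
cluster_b = rng.normal(loc=3.0, size=(20, 5))

centroid_a = cluster_a.mean(axis=0)
centroid_b = cluster_b.mean(axis=0)

u = rng.normal(loc=2.8, size=5)                   # new vector to classify
label = "A" if np.linalg.norm(u - centroid_a) < np.linalg.norm(u - centroid_b) else "B"
print(label)
```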

Big Data Analysis

Supervised machine learning is the classification of new data based on already classified training examples. This algorithm has been applied to the support vector machine, an optimized linear and non-linear binary classifier, which can be implemented on a quantum computer with exponential speedups in both the size of the vectors and the number of training examples. At the core of the approach is a non-sparse matrix simulation technique used to efficiently perform a principal component analysis and matrix inversion of the training data's kernel matrix. This provides an example of a quantum big data algorithm and paves the way for future developments at the intersection of quantum computing and machine learning.

http://arxiv.org/pdf/1307.0471v2.pdf
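
A classical sketch of a least-squares support vector machine formulation, in which training reduces to solving a single linear system built from the kernel matrix; the data, kernel, and regularization parameter are illustrative, and this is a classical analogue rather than the quantum implementation:

```python
# Least-squares SVM training as one linear system over the kernel matrix.
import numpy as np

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1.0, 0.5, size=(10, 2)),
               rng.normal(+1.0, 0.5, size=(10, 2))])    # training vectors
y = np.array([-1.0] * 10 + [+1.0] * 10)                 # labels

K = X @ X.T                               # linear kernel matrix K_ij = x_i . x_j
gamma = 1.0                               # regularization strength (illustrative)
m = len(y)

# LS-SVM system: [[0, 1^T], [1, K + I/gamma]] (b, alpha)^T = (0, y)^T
F = np.block([[np.zeros((1, 1)), np.ones((1, m))],
              [np.ones((m, 1)), K + np.eye(m) / gamma]])
sol = np.linalg.solve(F, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

classify = lambda u: np.sign(alpha @ (X @ u) + b)        # decision function
print(classify(np.array([1.0, 1.0])), classify(np.array([-1.0, -1.0])))
```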

References