L-notation

From Wikipedia, the free encyclopedia
This is an old revision of this page, as edited by GregorB (talk | contribs) at 17:43, 27 December 2008 ({{comp-sci-stub}}). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

The L-notation is often used to express the computational complexity of certain algorithms for difficult number theory problems, e.g. sieves for integer factorization and methods for solving discrete logarithms. It is defined as

L_n[\alpha, c] = e^{(c + o(1))(\ln n)^{\alpha}(\ln \ln n)^{1 - \alpha}},

where c is a positive constant, and \alpha is a constant 0 \le \alpha \le 1.

When \alpha is 0, then

L_n[0, c] = (\ln n)^{c + o(1)}

is a polynomial function of \ln n; when \alpha is 1, then

L_n[1, c] = n^{c + o(1)}

is a fully exponential function of \ln n (and thereby polynomial in n).
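The two extreme cases can be checked numerically. The sketch below (not part of the article; the function name L is illustrative) evaluates the definition with the o(1) term dropped and confirms that \alpha = 0 reduces to (\ln n)^c while \alpha = 1 reduces to n^c:

```python
import math

def L(n, alpha, c):
    # L_n[alpha, c] with the o(1) term dropped:
    # exp(c * (ln n)^alpha * (ln ln n)^(1 - alpha))
    ln_n = math.log(n)
    return math.exp(c * ln_n**alpha * math.log(ln_n)**(1 - alpha))

n = 10**6
# alpha = 0: exp(c * ln ln n) = (ln n)^c, polynomial in ln n
assert math.isclose(L(n, 0, 3), math.log(n)**3)
# alpha = 1: exp(c * ln n) = n^c, fully exponential in ln n
assert math.isclose(L(n, 1, 0.5), n**0.5)
```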

Example

For the elliptic curve discrete logarithm problem, the fastest general purpose algorithm is the baby-step giant-step algorithm, which has a running time on the order of the square-root of the group order n. In L-notation this would be

L_n[1, \tfrac{1}{2}] = n^{1/2 + o(1)}.
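For illustration, here is a minimal baby-step giant-step sketch over the multiplicative group of integers mod a prime p, rather than an elliptic curve group (the article's setting); the O(\sqrt{n}) behavior comes from the two loops of roughly \sqrt{n} steps each. This is an assumption-laden sketch, not the article's algorithm verbatim:

```python
import math

def bsgs(g, h, p):
    """Solve g^x = h (mod p) by baby-step giant-step.
    Uses O(sqrt(n)) group operations, where n = p - 1."""
    n = p - 1
    m = math.isqrt(n) + 1          # ceiling-ish of sqrt(n)
    # Baby steps: store g^j for j = 0 .. m-1
    table = {}
    e = 1
    for j in range(m):
        table[e] = j
        e = e * g % p
    # Giant steps: search for h * (g^-m)^i in the table
    factor = pow(g, -m, p)         # modular inverse power (Python 3.8+)
    gamma = h % p
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * factor % p
    return None                    # no solution in this group

# Example: solve 5^x = 22 (mod 23)
x = bsgs(5, 22, 23)
assert pow(5, x, 23) == 22
```

Writing m for roughly \sqrt{n}, any exponent x decomposes as x = im + j with 0 \le i, j < m, which is why one table of m baby steps plus at most m giant steps suffices.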
