Logarithmic norm
In mathematics, the logarithmic norm is a real-valued functional on operators, derived from either an inner product, a vector norm, or its induced operator norm. The logarithmic norm was independently introduced by Germund Dahlquist[1] and Sergei Lozinskiĭ in 1958, for square matrices. It has since been extended to nonlinear operators and unbounded operators as well.[2] The logarithmic norm has a wide range of applications, in particular in matrix theory, differential equations and numerical analysis.
Original definition
Let $A$ be a square matrix and $\|\cdot\|$ be an induced matrix norm. The associated logarithmic norm $\mu$ of $A$ is defined by
- $\mu(A) = \lim_{h \to 0^+} \frac{\|I + hA\| - 1}{h}.$
Here $I$ is the identity matrix of the same dimension as $A$, and $h$ is a real, positive number. The limit as $h \to 0^-$ equals $-\mu(-A)$, and is in general different from the logarithmic norm $\mu(A)$, as $-\mu(-A) \leq \mu(A)$ for all matrices.
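The limit can be illustrated numerically. The sketch below is not part of the original presentation; the matrix and step sizes are arbitrary illustrative choices. It evaluates the difference quotient for the induced $\infty$-norm and shows it settling at the logarithmic norm.

```python
import numpy as np

# Hypothetical example matrix (any square matrix works).
A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])

I = np.eye(2)

# Difference quotient (||I + h A|| - 1) / h from the definition,
# here with the induced infinity-norm (maximum absolute row sum).
for h in [1e-1, 1e-2, 1e-4, 1e-6, 1e-8]:
    quotient = (np.linalg.norm(I + h * A, ord=np.inf) - 1.0) / h
    print(f"h = {h:8.0e}   (||I + hA|| - 1)/h = {quotient:.10f}")

# As h -> 0+ the quotient approaches the logarithmic norm of A in the
# infinity-norm; for this matrix it equals -1 (row -2 plus off-diagonal 1).
```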
While $\|A\| > 0$ for all $A \neq 0$, the logarithmic norm $\mu(A)$ may also take negative values, e.g. when $A$ is negative definite. Therefore, the logarithmic norm does not satisfy the axioms of a norm. The name logarithmic norm, which does not appear in the original reference, seems to originate from logarithmic differentiation: the maximal growth rate of the logarithm of the norm of solutions to the differential equation
- $\dot{x} = Ax$
is $\mu(A)$. This is expressed by the differential inequality
- $\frac{\mathrm{d}}{\mathrm{d}t} \log \|x\| \leq \mu(A),$
which is directly related to Grönwall's lemma.
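As a hedged numerical illustration of this growth bound (not from the original article; the matrix and initial state are arbitrary, and NumPy and SciPy are assumed available), the sketch below checks $\|x(t)\| \leq \mathrm{e}^{t\mu(A)} \|x(0)\|$ along the exact solution, using the Euclidean logarithmic norm computed from the symmetric part as described later in the article.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical test matrix and initial state (illustrative values only).
A = np.array([[-1.0, 4.0],
              [0.0, -2.0]])
x0 = np.array([1.0, 1.0])

# Euclidean logarithmic norm: largest eigenvalue of the symmetric part
# (the closed form for the 2-norm, listed later in the article).
mu = np.linalg.eigvalsh((A + A.T) / 2).max()

# Check ||x(t)|| <= exp(t * mu) * ||x(0)|| along the exact solution x(t) = exp(tA) x0.
for t in np.linspace(0.0, 2.0, 5):
    lhs = np.linalg.norm(expm(t * A) @ x0)
    rhs = np.exp(t * mu) * np.linalg.norm(x0)
    print(f"t = {t:4.2f}   ||x(t)|| = {lhs:8.4f}   bound = {rhs:8.4f}")
```

Note that $\mu_2(A)$ is positive here even though both eigenvalues of $A$ are negative: the bound controls the transient growth of the norm rather than the asymptotic decay of the solution.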
Alternative definitions
If the vector norm is an inner product norm, as in a Hilbert space, then the logarithmic norm is the smallest number $\mu(A)$ such that, for all $x$,
- $\operatorname{Re}\, \langle x, Ax \rangle \leq \mu(A) \cdot \|x\|^2.$
Unlike the original definition, the latter expression also allows $A$ to be unbounded. Thus differential operators, too, can have logarithmic norms, allowing the use of the logarithmic norm both in algebra and in analysis. The modern, extended theory therefore prefers a definition based on inner products or duality. Both the operator norm and the logarithmic norm are then associated with extremal values of quadratic forms as follows:
- $\|A\|^2 = \sup_{x \neq 0} \frac{\langle Ax, Ax \rangle}{\langle x, x \rangle}, \qquad \mu(A) = \sup_{x \neq 0} \frac{\operatorname{Re}\, \langle x, Ax \rangle}{\langle x, x \rangle}.$
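For the Euclidean inner product these two suprema have closed forms: $\|A\|$ is the largest singular value and $\mu(A)$ the largest eigenvalue of the symmetric part. The following sketch (illustrative only; the matrix and sample size are arbitrary) compares the closed forms with Rayleigh quotients over random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.0, 2.0],
              [-1.0, -3.0]])

# Closed-form extremal values for the Euclidean inner product:
op_norm = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())    # ||A|| = sup ||Ax|| / ||x||
log_norm = np.linalg.eigvalsh((A + A.T) / 2).max()      # mu(A) = sup <x, Ax> / <x, x>

# Compare with Rayleigh quotients over random nonzero vectors.
xs = rng.standard_normal((10000, 2))
rq_norm = np.max(np.linalg.norm(xs @ A.T, axis=1) / np.linalg.norm(xs, axis=1))
rq_mu = np.max(np.einsum('ij,ij->i', xs, xs @ A.T) / np.einsum('ij,ij->i', xs, xs))

print("||A||  closed form:", op_norm, "  sampled sup:", rq_norm)
print("mu(A)  closed form:", log_norm, "  sampled sup:", rq_mu)
```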
Properties
Basic properties of the logarithmic norm of a matrix include:
- $\mu(\gamma A) = \gamma\, \mu(A)$ for scalar $\gamma > 0$
- $\alpha(A) \leq \mu(A)$, where $\alpha(A)$ is the maximal real part of the eigenvalues of $A$
- $\|\mathrm{e}^{tA}\| \leq \mathrm{e}^{t \mu(A)}$ for $t \geq 0$
Example logarithmic norms
The logarithmic norm of a matrix can be calculated as follows for the three most common norms. In these formulas, $a_{ij}$ represents the element in the $i$-th row and $j$-th column of the matrix $A$:
- $\mu_1(A) = \max_j \Big( \operatorname{Re}(a_{jj}) + \sum_{i \neq j} |a_{ij}| \Big)$ (induced by the 1-norm, maximum column sums)
- $\mu_2(A) = \lambda_{\max}\!\left(\frac{A + A^*}{2}\right)$ (induced by the Euclidean norm; the largest eigenvalue of the Hermitian part)
- $\mu_\infty(A) = \max_i \Big( \operatorname{Re}(a_{ii}) + \sum_{j \neq i} |a_{ij}| \Big)$ (induced by the $\infty$-norm, maximum row sums)
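A minimal implementation of these three formulas, assuming NumPy and an arbitrary example matrix, might look as follows.

```python
import numpy as np

def mu_inf(A):
    """Logarithmic norm induced by the infinity-norm (maximum row sums)."""
    A = np.asarray(A)
    return max(A[i, i].real + sum(abs(A[i, j]) for j in range(A.shape[1]) if j != i)
               for i in range(A.shape[0]))

def mu_1(A):
    """Logarithmic norm induced by the 1-norm (maximum column sums)."""
    A = np.asarray(A)
    return max(A[j, j].real + sum(abs(A[i, j]) for i in range(A.shape[0]) if i != j)
               for j in range(A.shape[1]))

def mu_2(A):
    """Logarithmic norm induced by the Euclidean norm: largest eigenvalue of the Hermitian part."""
    A = np.asarray(A)
    return np.linalg.eigvalsh((A + A.conj().T) / 2).max()

# Hypothetical example matrix (illustrative values only).
A = np.array([[-3.0, 1.0],
              [2.0, -5.0]])
print(mu_1(A), mu_2(A), mu_inf(A))
```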
Applications in matrix theory and spectral theory
The logarithmic norm is related to the extreme values of the Rayleigh quotient. It holds that
- $-\mu(-A) \leq \frac{\operatorname{Re}\, \langle x, Ax \rangle}{\langle x, x \rangle} \leq \mu(A),$
and both extreme values are taken for some vectors $x \neq 0$. This also means that every eigenvalue $\lambda_k$ of $A$ satisfies
- $-\mu(-A) \leq \operatorname{Re} \lambda_k \leq \mu(A).$
More generally, the logarithmic norm is related to the numerical range of a matrix.
A matrix with $-\mu(-A) > 0$ is positive definite, and one with $\mu(A) < 0$ is negative definite. Such matrices have inverses. The inverse of a negative definite matrix is bounded by
- $\|A^{-1}\| \leq -\frac{1}{\mu(A)}.$
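Both the eigenvalue and inverse bounds can be checked numerically. The sketch below (illustrative values only) uses the Euclidean logarithmic norm of a matrix with $\mu_2(A) < 0$, compares the real parts of the eigenvalues with $-\mu_2(-A)$ and $\mu_2(A)$, and compares $\|A^{-1}\|_2$ with $-1/\mu_2(A)$.

```python
import numpy as np

def mu_2(A):
    """Euclidean logarithmic norm: largest eigenvalue of the symmetric part."""
    return np.linalg.eigvalsh((A + A.T) / 2).max()

# Hypothetical matrix with mu_2(A) < 0 (illustrative values only).
A = np.array([[-4.0, 1.0],
              [-1.0, -2.0]])

mu = mu_2(A)
print("mu_2(A) =", mu)                       # negative: A is negative definite

# Every eigenvalue satisfies -mu(-A) <= Re(lambda) <= mu(A).
print("Re(eigenvalues):", np.linalg.eigvals(A).real)
print("-mu_2(-A) =", -mu_2(-A))

# Bound on the inverse: ||A^{-1}|| <= -1 / mu(A).
print("||A^{-1}||_2 =", np.linalg.norm(np.linalg.inv(A), ord=2))
print("-1/mu_2(A)   =", -1.0 / mu)
```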
Both the bounds on the inverse and on the eigenvalues hold irrespective of the choice of vector (matrix) norm. Some results only hold for inner product norms, however. For example, if $R$ is a rational function with the property
- $\operatorname{Re}(z) \leq 0 \, \Rightarrow \, |R(z)| \leq 1,$
then, for inner product norms,
- $\mu(A) \leq 0 \, \Rightarrow \, \|R(A)\| \leq 1.$
Thus the matrix norm and logarithmic norms may be viewed as generalizing the modulus and real part, respectively, from complex numbers to matrices.
Applications in stability theory and numerical analysis
The logarithmic norm plays an important role in the stability analysis of a continuous dynamical system $\dot{x} = Ax$. Its role is analogous to that of the matrix norm for a discrete dynamical system $x_{n+1} = A x_n$.
In the simplest case, when $A$ is a scalar complex constant $\lambda$, the discrete dynamical system has stable solutions when $|\lambda| \leq 1$, while the differential equation has stable solutions when $\operatorname{Re} \lambda \leq 0$. When $A$ is a matrix, the discrete system has stable solutions if $\|A\| \leq 1$. In the continuous system, the solutions are of the form $\mathrm{e}^{tA} x(0)$. They are stable if $\|\mathrm{e}^{tA}\| \leq 1$ for all $t \geq 0$, which, by the bound $\|\mathrm{e}^{tA}\| \leq \mathrm{e}^{t\mu(A)}$ above, holds if $\mu(A) \leq 0$. In the latter case, $\|x\|$ is a Lyapunov function for the system.
Runge-Kutta methods for the numerical solution of $\dot{x} = Ax$ replace the differential equation by a discrete equation $x_{n+1} = R(hA)\, x_n$, where the rational function $R$ is characteristic of the method and $h$ is the time step size. If $|R(z)| \leq 1$ whenever $\operatorname{Re}(z) \leq 0$, then a stable differential equation, having $\mu(A) \leq 0$, will always result in a stable (contractive) numerical method, as $\|R(hA)\| \leq 1$. Runge-Kutta methods having this property are called A-stable.
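As a sketch of this contractivity property (not from the original article), consider the backward Euler method, whose stability function $R(z) = 1/(1-z)$ satisfies $|R(z)| \leq 1$ for $\operatorname{Re}(z) \leq 0$; for a matrix with $\mu_2(A) \leq 0$ the numerical propagation matrix $R(hA) = (I - hA)^{-1}$ then has Euclidean norm at most one for every step size. The test matrix below is an arbitrary illustrative choice.

```python
import numpy as np

def mu_2(A):
    """Euclidean logarithmic norm: largest eigenvalue of the symmetric part."""
    return np.linalg.eigvalsh((A + A.T) / 2).max()

# Hypothetical stiff test matrix with mu_2(A) <= 0 (illustrative values only).
A = np.array([[-100.0, 10.0],
              [-10.0, -1.0]])
print("mu_2(A) =", mu_2(A))

# Backward Euler is A-stable: R(z) = 1/(1 - z), so R(hA) = (I - hA)^{-1}.
I = np.eye(2)
for h in [0.01, 0.1, 1.0, 10.0]:
    R = np.linalg.inv(I - h * A)
    print(f"h = {h:5.2f}   ||R(hA)||_2 = {np.linalg.norm(R, ord=2):.6f}")
# Every step size gives ||R(hA)|| <= 1, i.e. the numerical map is contractive.
```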
Retaining the same form, the results can, under additional assumptions, be extended to nonlinear systems as well as to semigroup theory, where the crucial advantage of the logarithmic norm is that it discriminates between forward and reverse time evolution and can establish whether the problem is well posed. Similar results also apply in the stability analysis in control theory, where there is a need to discriminate between positive and negative feedback.
Applications to elliptic differential operators
In connection with differential operators it is common to use inner products and integration by parts. In the simplest case we consider functions satisfying $u(0) = u(1) = 0$, with inner product
- $\langle u, v \rangle = \int_0^1 u v\, \mathrm{d}x.$
Then it holds that
- $\langle u, u'' \rangle = -\langle u', u' \rangle \leq -\pi^2 \|u\|^2,$
where the equality on the left represents integration by parts, and the inequality to the right is a Sobolev inequality. In the latter, equality is attained for the function $u(x) = \sin \pi x$, implying that the constant $-\pi^2$ is the best possible. Thus
- $\langle u, Au \rangle \leq -\pi^2 \|u\|^2$
for the differential operator $A = \mathrm{d}^2/\mathrm{d}x^2$, which implies that
- $\mu_2(\mathrm{d}^2/\mathrm{d}x^2) \leq -\pi^2.$
As an operator satisfying $\langle u, Au \rangle \leq -c\, \|u\|^2$ with $c > 0$ is called elliptic, the logarithmic norm quantifies the (strong) ellipticity of $-\mathrm{d}^2/\mathrm{d}x^2$. Thus, if $A$ is strongly elliptic, then $\mu(A) < 0$, and $A$ is invertible given proper data.
If a finite difference method is used to solve $-u'' = f$, the problem is replaced by an algebraic equation $Tu = f$. The matrix $T$ will typically inherit the ellipticity, i.e., $\mu_2(-T) < 0$, showing that $T$ is positive definite and therefore invertible.
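The following sketch (assuming the standard second-order central difference matrix; the grid sizes are arbitrary) computes $\mu_2(-T)$ for increasing resolution and shows it staying negative while approaching $-\pi^2$, the value for the continuous operator.

```python
import numpy as np

# Standard second-order finite difference discretization of -u'' = f on (0, 1)
# with u(0) = u(1) = 0: T = (1/h^2) * tridiag(-1, 2, -1) on n interior points.
for n in [10, 40, 160]:
    h = 1.0 / (n + 1)
    T = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    # T is symmetric, so mu_2(-T) is simply minus its smallest eigenvalue.
    mu = -np.linalg.eigvalsh(T).min()
    print(f"n = {n:4d}   mu_2(-T) = {mu:10.4f}   (continuous operator: -pi^2 = {-np.pi**2:.4f})")
# mu_2(-T) < 0 for every n: the matrix inherits the ellipticity of the operator.
```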
These results carry over to the Poisson equation as well as to other numerical methods such as the finite element method.
Extensions to nonlinear maps
For nonlinear operators the operator norm and logarithmic norm are defined in terms of the inequalities
- $l(f) \cdot \|u - v\| \,\leq\, \|f(u) - f(v)\| \,\leq\, L(f) \cdot \|u - v\|,$
where $L(f)$ is the least upper bound Lipschitz constant of $f$, and $l(f)$ is the greatest lower bound Lipschitz constant; and
- $m(f) \cdot \|u - v\|^2 \,\leq\, \langle u - v, f(u) - f(v) \rangle \,\leq\, M(f) \cdot \|u - v\|^2,$
where $M(f)$ is the least upper bound logarithmic Lipschitz constant of $f$, and $m(f)$ is the greatest lower bound logarithmic Lipschitz constant. Here it holds that $M(f) \leq L(f)$ (compare $\mu(A) \leq \|A\|$ above) and, analogously, $-L(f) \leq m(f)$.
For nonlinear operators that are Lipschitz continuous, it then holds that
- $M(f) = \lim_{h \to 0^+} \frac{L(I + hf) - 1}{h}, \qquad m(f) = \lim_{h \to 0^+} \frac{l(I + hf) - 1}{h}.$
An operator having either $m(f) > 0$ or $M(f) < 0$ is called uniformly monotone. An operator satisfying $L(f) < 1$ is called contractive. This extension offers many connections to fixed point theory and critical point theory.
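As a rough illustration (not from the original text), the logarithmic Lipschitz quotient can be sampled for a simple nonlinear map; the map $f(u) = -u - u^3$, the dimension and the sample count below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(u):
    """Hypothetical nonlinear map (componentwise -u - u^3); illustrative only."""
    return -u - u**3

# Sample the logarithmic Lipschitz quotient <u - v, f(u) - f(v)> / ||u - v||^2;
# its supremum over all pairs is M(f).
sup_quotient = -np.inf
for _ in range(10000):
    u, v = rng.standard_normal(3), rng.standard_normal(3)
    du, df = u - v, f(u) - f(v)
    sup_quotient = max(sup_quotient, np.dot(du, df) / np.dot(du, du))

print("sampled lower estimate of M(f):", sup_quotient)
# Since x -> x^3 is increasing, <du, df> <= -||du||^2 for every pair, so
# M(f) <= -1 < 0: f is uniformly monotone in the sense defined above.
```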
The theory becomes completely analogous to the theory of the logarithmic norm for matrices, but is more complicated as the domains of the operators need to be given close attention, as in the case of unbounded operators. The bound on the inverse of a negative definite matrix given above carries over, independently of the choice of vector norm, and it holds that
- $M(f) < 0 \, \Rightarrow \, L(f^{-1}) \leq -\frac{1}{M(f)},$
which quantifies the Uniform Monotonicity Theorem due to Browder & Minty (1963).