
User:SigmaJargon/Sandbox


1.) The Reverse Triangle Inequality


Consider two vectors x and y. Without loss of generality, say <math>\|x\| \ge \|y\|</math>.

Since <math>\|\cdot\|</math> is a vector norm, the triangle inequality holds. Then, if you think of x as x = (x - y) + y,

<math>\|x\| = \|(x - y) + y\| \le \|x - y\| + \|y\|.</math>

Since <math>\|x\| \ge \|y\|</math>, we have <math>\|x\| - \|y\| \ge 0</math>, and thus <math>\left| \, \|x\| - \|y\| \, \right| = \|x\| - \|y\|</math>. So, finally:

<math>\left| \, \|x\| - \|y\| \, \right| \le \|x - y\|.</math>
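
As a quick sanity check of my own (not part of the write-up), here is a NumPy sketch that tests the reverse triangle inequality on random vectors for a few common norms:

<syntaxhighlight lang="python">
import numpy as np

# Check | ||x|| - ||y|| | <= ||x - y|| for random vectors and several norms.
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.standard_normal(5)
    y = rng.standard_normal(5)
    for p in (1, 2, np.inf):
        lhs = abs(np.linalg.norm(x, p) - np.linalg.norm(y, p))
        rhs = np.linalg.norm(x - y, p)
        assert lhs <= rhs + 1e-12
print("reverse triangle inequality held in all trials")
</syntaxhighlight>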

2.) An Induced Matrix Norm


If <math>[s_1, s_2, \ldots, s_n]</math> are the column sums of A (that is, <math>s_k = \sum_{i=1}^{n} a_{i,k}</math>)
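
Assuming the intended claim is the standard one, that the induced 1-norm of A equals the largest absolute column sum (that interpretation is mine, not stated above), a small numerical check looks like this:

<syntaxhighlight lang="python">
import numpy as np

# Assumed claim: the induced 1-norm ||A||_1 equals the maximum absolute column sum.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
col_sums = np.abs(A).sum(axis=0)          # s_k, taken with absolute values
induced_1_norm = np.linalg.norm(A, 1)     # NumPy's induced 1-norm
print(col_sums.max(), induced_1_norm)     # these agree
assert np.isclose(col_sums.max(), induced_1_norm)
</syntaxhighlight>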

3.) Frobenius Norm


One definition of an induced matrix norm is that

<math>\|A\| = \max_{x \ne 0} \frac{\|Ax\|}{\|x\|}.</math>

It follows immediately that if A = I, then

<math>\|I\| = \max_{x \ne 0} \frac{\|Ix\|}{\|x\|} = \max_{x \ne 0} \frac{\|x\|}{\|x\|} = 1.</math>

However, <math>\|I\|_F = \sqrt{n} \ne 1</math> for <math>n > 1</math>, so the Frobenius norm is not an induced norm.
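
A short numerical illustration of the argument, checking that the Frobenius norm of the n x n identity is <math>\sqrt{n}</math> rather than 1:

<syntaxhighlight lang="python">
import numpy as np

# ||I||_F = sqrt(n), which equals 1 only when n = 1, so ||.||_F cannot be induced.
for n in (1, 2, 5, 10):
    I = np.eye(n)
    frob = np.linalg.norm(I, 'fro')
    print(n, frob, np.sqrt(n))
    assert np.isclose(frob, np.sqrt(n))
</syntaxhighlight>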

4.) Spectral Radius


For an <math>n \times n</math> matrix A and any <math>\epsilon > 0</math>, we can find a natural norm satisfying

<math>\rho(A) \le \|A\| \le \rho(A) + \epsilon.</math>

The left-hand inequality we verified in class. The difficult bit is the right-hand inequality.

We can find a non-singular matrix P such that <math>P^{-1} A P = T</math>,

where <math>T</math> is upper triangular with the eigenvalues of A on its diagonal.

... Argh. Incomplete, this proof is. The completion can be found here
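
This is not the missing proof, just a numerical illustration of the statement: the spectral radius never exceeds the induced 1-, 2-, or infinity-norms, and Gelfand's formula <math>\|A^k\|^{1/k} \to \rho(A)</math> hints at how some norm gets within any <math>\epsilon</math> of <math>\rho(A)</math>. The matrix is a random example of mine:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
rho = max(abs(np.linalg.eigvals(A)))      # spectral radius

# rho(A) <= ||A|| for every induced (natural) norm.
for p in (1, 2, np.inf):
    assert rho <= np.linalg.norm(A, p) + 1e-12

# Gelfand: ||A^k||^(1/k) -> rho(A) as k grows.
for k in (1, 5, 20, 80):
    print(k, np.linalg.norm(np.linalg.matrix_power(A, k), 2) ** (1.0 / k))
print("spectral radius:", rho)
</syntaxhighlight>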

5.) Inequalities are Sharp


This inequality came about from the study of error in linear systems. To be specific, given a system <math>Ax = b</math>, we want to know what will happen if the value of x changes by some error. That is, given an error e, we want to know the relative error <math>\|e\| / \|x\|</math> with respect to the perturbed system <math>A(x + e) = b + r</math>. By subtraction, it follows that <math>Ae = r</math>.

To the end of studying this, we know four things:

<math>\|b\| \le \|A\| \, \|x\|</math>

<math>\|x\| \le \|A^{-1}\| \, \|b\|</math>

<math>\|r\| \le \|A\| \, \|e\|</math>

<math>\|e\| \le \|A^{-1}\| \, \|r\|</math>

By composing the 1st and 4th inequalities, we find:

<math>\|e\| \, \|b\| \le \|A\| \, \|A^{-1}\| \, \|x\| \, \|r\|</math>

Shuffle around the terms a bit:

<math>\frac{\|e\|}{\|x\|} \le \|A\| \, \|A^{-1}\| \, \frac{\|r\|}{\|b\|}</math>


By composing the 2nd and 3rd inequalities, we find:

<math>\|r\| \, \|x\| \le \|A\| \, \|A^{-1}\| \, \|e\| \, \|b\|</math>

Shuffle around the terms a bit:

<math>\frac{1}{\|A\| \, \|A^{-1}\|} \, \frac{\|r\|}{\|b\|} \le \frac{\|e\|}{\|x\|}</math>


And thus we arrive at the desired inequality:

<math>\frac{1}{\|A\| \, \|A^{-1}\|} \, \frac{\|r\|}{\|b\|} \le \frac{\|e\|}{\|x\|} \le \|A\| \, \|A^{-1}\| \, \frac{\|r\|}{\|b\|}</math>

A simple system that satisfies equalities on all counts is this: Let

It follows that <math>\|A\| \, \|A^{-1}\| = 1</math> for both the 2-norm and the <math>\infty</math>-norm, and all of the inequalities are satisfied with equality.
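
Here is a numerical check of the final two-sided bound, using a random system of my own (not the example from the text): pick A, x, and an error e, form b = Ax and r = Ae, and compare <math>\|e\|/\|x\|</math> with the bounds built from <math>\|A\| \, \|A^{-1}\|</math>:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
x = rng.standard_normal(4)
e = rng.standard_normal(4) * 1e-3          # a small perturbation of x
b = A @ x                                  # original right-hand side
r = A @ e                                  # residual of the perturbed system

kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
rel_err = np.linalg.norm(e) / np.linalg.norm(x)
ratio = np.linalg.norm(r) / np.linalg.norm(b)

print(ratio / kappa, rel_err, kappa * ratio)
assert ratio / kappa <= rel_err + 1e-12    # lower bound
assert rel_err <= kappa * ratio + 1e-12    # upper bound
</syntaxhighlight>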

6.) A Symmetric Matrix Has Real Eigenvalues


First consider a general Hermitian matrix A. If <math>\lambda</math> is an eigenvalue of A with associated eigenvector x, then <math>Ax = \lambda x</math>.

So, where H indicates the conjugate transpose and * the conjugate:

<math>x^H A x = x^H (\lambda x) = \lambda \, x^H x</math>

Also, since <math>A^H = A</math>, <math>x^H A x = x^H A^H x = (Ax)^H x = (\lambda x)^H x = \lambda^* \, x^H x</math>.

So we then have <math>\lambda \, x^H x = \lambda^* \, x^H x</math>, so <math>\lambda = \lambda^*</math> (since <math>x \ne 0</math>, <math>x^H x > 0</math>), and thus <math>\lambda</math> is real.


Now consider a real symmetric matrix B. Since the conjugate of a real matrix is the matrix itself, <math>B^H = \bar{B}^T = B^T = B</math>, so B is Hermitian, and thus has real eigenvalues.
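
A small NumPy check of my own that the eigenvalues of a random real symmetric matrix come out real:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((6, 6))
B = M + M.T                                # a real symmetric matrix

eigvals = np.linalg.eigvals(B)             # general (complex-capable) solver
print(np.max(np.abs(eigvals.imag)))        # imaginary parts vanish (up to roundoff)
assert np.allclose(eigvals.imag, 0.0)
</syntaxhighlight>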


7.) The Gershgorin Theorem


Consider an <math>n \times n</math> matrix A and its n Gershgorin circles. Then let <math>R_p</math> be the union of p of the Gershgorin circles, with <math>R_p</math> disjoint from the remaining n - p Gershgorin circles. For <math>t \in [0, 1]</math>, let us define <math>B(t)</math> with

<math>b_{ii}(t) = a_{ii}, \qquad b_{ij}(t) = t \, a_{ij} \text{ for } i \ne j.</math>

Then B(1) = A, and B(0) is the diagonal matrix with diagonal entries corresponding to the diagonal entries of A. Therefore, each of the eigenvalues of B(0) is the center of a Gershgorin circle. So, exactly p of the eigenvalues of B(0) lie in <math>R_p</math>. The eigenvalues of B(t) are the zeros of its characteristic polynomial, which is a polynomial whose coefficients are continuous functions of <math>t</math>. Thus, the zeros of the characteristic polynomial are also continuous functions of <math>t</math>. As <math>t</math> goes from 0 to 1 the eigenvalues of <math>B(t)</math> move along continuous paths in the complex plane. At the same time, the radii of the Gershgorin circles of B(t) increase from 0 to their full radius, so the circles of B(t) always lie inside the corresponding circles of A. Since p of the eigenvalues lie in <math>R_p</math> when <math>t = 0</math>, and the circles making up <math>R_p</math> are disjoint from the remaining circles, a continuous eigenvalue path that starts in <math>R_p</math> cannot cross over to the other circles. So the p eigenvalues must still lie in <math>R_p</math>, the union of the disks, when <math>t = 1</math>.
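
A sketch of this homotopy argument in code (the matrix below is a random example of mine, not one from the text): as t goes from 0 to 1, the two circles around the isolated diagonal entries 0 and 1 keep exactly their two eigenvalues, because they stay disjoint from the circles around 10, 11, and 12.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
n = 5
A = np.diag([0.0, 1.0, 10.0, 11.0, 12.0]) + 0.1 * rng.standard_normal((n, n))
centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)   # full Gershgorin radii of A

def B(t):
    # B(t): keep the diagonal of A, scale the off-diagonal part by t.
    return np.diag(centers) + t * (A - np.diag(centers))

# Count how many eigenvalues of B(t) fall in the union of the first two circles
# (the "p = 2" circles around 0 and 1, which are disjoint from the rest here).
for t in np.linspace(0.0, 1.0, 11):
    lam = np.linalg.eigvals(B(t))
    in_union = [any(abs(l - centers[i]) <= radii[i] for i in (0, 1)) for l in lam]
    print(f"t = {t:.1f}: {sum(in_union)} eigenvalues in the first two circles")
</syntaxhighlight>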

8.) Block Matrices



So

Then note:

Finally, we arrive at:

9.) More on Block Matrices


a.) False! Consider the following submatrices:

Obviously, , so . However, if you consider the whole matrix

Then you see that by expansion along the 3rd row and then the 2nd row.

b.) Also false! Consider the following matrices:

Then

Now, , , but

10.) Operation Count


The calculation for the diagonal entries of the Cholesky decomposition is:

<math>l_{kk} = \sqrt{a_{kk} - \sum_{j=1}^{k-1} l_{kj}^2}</math>

The only place multiplications occur here is in the sum - one for each term of the summation, for a total number of multiplications of

<math>\sum_{k=1}^{n} (k-1) = \frac{n(n-1)}{2}.</math>

The calculation for the entries below the diagonal is:

<math>l_{ik} = \frac{1}{l_{kk}} \left( a_{ik} - \sum_{j=1}^{k-1} l_{ij} l_{kj} \right), \qquad i > k.</math>

To determine the number of multiplications here, first consider how many multiplications/divisions occur assuming you are in the kth column. You will always have one division, plus one multiplication from each term of the sum, of which there are k-1. So each entry below the diagonal in the first column will take one multiplication/division, and there are n-1 of these. Each entry in the second column will take two multiplications/divisions, and there are n-2 of them. Continuing onward with this pattern yields the following total number of multiplications/divisions for the non-diagonal entries:

<math>\sum_{k=1}^{n-1} k(n-k) = \frac{n(n^2-1)}{6}.</math>

So the total number of multiplications and divisions involved is:

<math>\frac{n(n-1)}{2} + \frac{n(n^2-1)}{6} = \frac{n^3 + 3n^2 - 4n}{6}.</math>
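
As a brute-force sanity check (the counting code is mine), here is a textbook Cholesky loop that tallies every multiplication and division and compares the tally against the closed form derived above:

<syntaxhighlight lang="python">
import numpy as np

def cholesky_op_count(A):
    """Return (L, number of multiplications/divisions) for a textbook Cholesky."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    ops = 0
    for k in range(n):
        s = A[k, k]
        for j in range(k):
            s -= L[k, j] * L[k, j]; ops += 1          # one multiplication per term
        L[k, k] = np.sqrt(s)
        for i in range(k + 1, n):
            s = A[i, k]
            for j in range(k):
                s -= L[i, j] * L[k, j]; ops += 1      # one multiplication per term
            L[i, k] = s / L[k, k]; ops += 1           # one division per entry
    return L, ops

for n in (2, 5, 10):
    M = np.random.default_rng(6).standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)                       # symmetric positive definite
    L, ops = cholesky_op_count(A)
    assert np.allclose(L @ L.T, A)
    formula = n * (n - 1) // 2 + n * (n * n - 1) // 6
    print(n, ops, formula)
    assert ops == formula
</syntaxhighlight>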