# The Singular Value Decomposition

## The Action of a Symmetric Matrix

First of all, it is useful to understand the action of a symmetric matrix in a linear transformation, ${\displaystyle \left.T:x\to Ax\right.}$. We know that a symmetric matrix can be represented as

${\displaystyle \left.A=PDP^{T}\right.}$

where ${\displaystyle \left.P\right.}$ is an orthogonal matrix (that is, a matrix whose columns are orthogonal to each other, and of unit length). The columns of ${\displaystyle \left.P\right.}$ are (orthogonal) eigenvectors of ${\displaystyle \left.A\right.}$, and ${\displaystyle \left.D\right.}$ is the diagonal matrix of eigenvalues. Thus, when a vector ${\displaystyle \left.x\right.}$ is multiplied by ${\displaystyle \left.A\right.}$,

${\displaystyle \left.Ax=PDP^{T}x\right.}$

the geometric result can be visualized using the unit ball in ${\displaystyle \left.R^{n}\right.}$. It is transformed into an ellipsoid, whose axes lie along the eigenvectors of ${\displaystyle \left.A\right.}$ and whose axis lengths are given by the (absolute values of the) eigenvalues. We can also think of how this transformation comes about as a succession of three steps:

• Multiplication by ${\displaystyle \left.P^{T}\right.}$, by the transpose of the orthogonal matrix ${\displaystyle \left.P\right.}$, corresponds to a rotation of the space ${\displaystyle \left.R^{n}\right.}$. It rotates each of the eigenvectors ${\displaystyle \left.p_{i}\right.}$ into the standard vector ${\displaystyle \left.e_{i}\right.}$.
• Multiplication by ${\displaystyle \left.D\right.}$ represents a scaling of each of the now "standardized" eigenvectors: each is stretched by its corresponding eigenvalue.
• Multiplication of the result by ${\displaystyle \left.P\right.}$ rotates the scaled eigenvectors back into their original positions.

In conclusion, you can think of the "action" of a symmetric matrix as a product of these three simple actions on ${\displaystyle \left.R^{n}\right.}$: a rotation, a scaling, and a rotation.
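This three-step picture can be checked numerically. The sketch below (in Python with NumPy; the matrix and vector values are arbitrary illustrations, not from the text) diagonalizes a symmetric matrix and applies the rotation, scaling, and back-rotation in turn:

```python
import numpy as np

# A small symmetric matrix (values chosen arbitrarily for illustration)
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])

# Orthogonal diagonalization A = P D P^T (eigh returns orthonormal eigenvectors)
eigenvalues, P = np.linalg.eigh(A)
D = np.diag(eigenvalues)

# Apply the three steps to a vector x: rotate, scale, rotate back
x = np.array([1.0, 2.0])
step1 = P.T @ x    # rotation: express x in eigenvector coordinates
step2 = D @ step1  # scaling: stretch each coordinate by its eigenvalue
step3 = P @ step2  # rotation back: return to the original coordinates

print(np.allclose(step3, A @ x))  # True: P D P^T x equals A x
```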

## The SVD Theorem

It turns out that this picture of symmetric matrices generalizes: every matrix can be written as the product of an orthogonal matrix, a diagonal matrix, and the transpose of an orthogonal matrix.

Every matrix ${\displaystyle \left.A\right.}$ is related to two important symmetric matrices: ${\displaystyle \left.A^{T}A\right.}$ and ${\displaystyle \left.AA^{T}\right.}$. Since each of these two matrices is symmetric, each can be represented as a product

${\displaystyle \left.{\begin{cases}AA^{T}=UD_{1}U^{T}\\A^{T}A=VD_{2}V^{T}\end{cases}}\right.}$

Furthermore, the diagonal matrices ${\displaystyle \left.D_{1}\right.}$ and ${\displaystyle \left.D_{2}\right.}$ have nonnegative entries on the diagonal. We can see that from the spectral decomposition of ${\displaystyle \left.AA^{T}\right.}$ and ${\displaystyle \left.A^{T}A\right.}$. For example,

${\displaystyle \left.AA^{T}=\lambda _{1}u_{1}u_{1}^{T}+\ldots +\lambda _{m}u_{m}u_{m}^{T}\right.}$

Hence,

${\displaystyle \left.u_{1}^{T}AA^{T}u_{1}=\lambda _{1}u_{1}^{T}u_{1}u_{1}^{T}u_{1}+0=\lambda _{1}(u_{1}\cdot u_{1})(u_{1}\cdot u_{1})=\lambda _{1}\right.}$

(the cross terms vanish because the eigenvectors ${\displaystyle \left.u_{i}\right.}$ are orthogonal). But since

${\displaystyle \left.u_{1}^{T}AA^{T}u_{1}=A^{T}u_{1}\cdot A^{T}u_{1}=||A^{T}u_{1}||^{2}\geq 0\right.}$, we know that ${\displaystyle \left.\lambda _{1}\geq 0\right.}$.
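The nonnegativity of these eigenvalues is easy to confirm numerically; the following sketch (using an arbitrary random matrix, not one from the text) checks both ${\displaystyle \left.AA^{T}\right.}$ and ${\displaystyle \left.A^{T}A\right.}$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))  # an arbitrary rectangular matrix

# Eigenvalues of A A^T and A^T A are nonnegative, as argued above
eig1 = np.linalg.eigvalsh(A @ A.T)
eig2 = np.linalg.eigvalsh(A.T @ A)

# True True (allowing for floating-point round-off near zero)
print(np.all(eig1 >= -1e-12), np.all(eig2 >= -1e-12))
```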

Theorem: Let ${\displaystyle \left.A\right.}$ be an ${\displaystyle \left.m\times n\right.}$ matrix with real components. Then ${\displaystyle \left.A=U\Sigma V^{T}\right.}$, where ${\displaystyle \left.U\right.}$ and ${\displaystyle \left.V\right.}$ are as defined above, and ${\displaystyle \left.\Sigma \right.}$ is the ${\displaystyle \left.m\times n\right.}$ matrix with nonnegative (${\displaystyle \left.\geq 0\right.}$) entries such that ${\displaystyle \left.\Sigma \Sigma ^{T}=D_{1}\right.}$ and ${\displaystyle \left.\Sigma ^{T}\Sigma =D_{2}\right.}$.

We can easily show that, with ${\displaystyle \left.A=U\Sigma V^{T}\right.}$, the formulas for ${\displaystyle \left.AA^{T}\right.}$ and ${\displaystyle \left.A^{T}A\right.}$ work out:

${\displaystyle \left.{\begin{cases}AA^{T}=U\Sigma V^{T}V\Sigma ^{T}U^{T}=U\Sigma \Sigma ^{T}U^{T}=UD_{1}U^{T}\\A^{T}A=V\Sigma ^{T}U^{T}U\Sigma V^{T}=V\Sigma ^{T}\Sigma V^{T}=VD_{2}V^{T}\end{cases}}\right.}$

Notice that, for a symmetric matrix with nonnegative eigenvalues, the orthogonal diagonalization ${\displaystyle \left.A=PDP^{T}\right.}$ is itself a singular value decomposition (in general, the singular values of a symmetric matrix are the absolute values of its eigenvalues).
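These identities can be verified numerically with a library SVD; the matrix below is an arbitrary example:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))  # an arbitrary 4x3 matrix

# Full SVD: U is 4x4 orthogonal, Vt is V^T (3x3 orthogonal), s holds the singular values
U, s, Vt = np.linalg.svd(A)
Sigma = np.zeros(A.shape)        # Sigma is 4x3 with s on its diagonal
np.fill_diagonal(Sigma, s)

print(np.allclose(A, U @ Sigma @ Vt))                       # A = U Sigma V^T
print(np.allclose(A @ A.T, U @ (Sigma @ Sigma.T) @ U.T))    # A A^T = U D1 U^T
print(np.allclose(A.T @ A, Vt.T @ (Sigma.T @ Sigma) @ Vt))  # A^T A = V D2 V^T
```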

What follows is an illustration of the SVD, as diagrammed by Cliff Long and Tom Hern in the case of ${\displaystyle \left.R^{2}\right.}$. In this example, the notation is ${\displaystyle \left.U=Q_{1}\right.}$ and ${\displaystyle \left.V=Q_{2}\right.}$.

You see that the action of a general matrix ${\displaystyle \left.A\right.}$ can also be viewed as a rotation, followed by a scaling, followed by a rotation. If ${\displaystyle \left.A\right.}$ is not square, or not full rank, then there will be a null space, and some of the dimensions will be annihilated.

## The SVD for image processing, image analysis, and statistical analysis

If ${\displaystyle \left.A\right.}$ is rank ${\displaystyle \left.n\right.}$, then ${\displaystyle \left.A\right.}$ can be written as a spectral decomposition, too, in terms of the eigenvectors and the eigenvalues of ${\displaystyle \left.A^{T}A\right.}$ and ${\displaystyle \left.AA^{T}\right.}$. The eigenvectors ${\displaystyle \left.v_{i}\right.}$ of ${\displaystyle \left.A^{T}A\right.}$ and ${\displaystyle \left.u_{i}\right.}$ of ${\displaystyle \left.AA^{T}\right.}$ are called the singular vectors of ${\displaystyle \left.A\right.}$:

${\displaystyle \left.A={\sqrt {\lambda _{1}}}u_{1}v_{1}^{T}+\ldots +{\sqrt {\lambda _{n}}}u_{n}v_{n}^{T}\right.}$

or, equivalently,

${\displaystyle \left.A=\sigma _{1}u_{1}v_{1}^{T}+\ldots +\sigma _{n}u_{n}v_{n}^{T}\right.}$

where, by convention, ${\displaystyle \left.\lambda _{i}\geq \lambda _{j}\right.}$ for ${\displaystyle \left.i\leq j\right.}$. The values ${\displaystyle \left.\sigma _{i}={\sqrt {\lambda _{i}}}\right.}$ are called the singular values of the matrix. This is crucial for image processing: often an image is nearly low rank, so that only a few rank-one terms are needed; furthermore, an image may contain noise, which tends to be high frequency and of little total weight, so it "hangs out" on the last few singular vector pairs. We simply drop those pairs from the recomposition of the matrix ${\displaystyle \left.A\right.}$, and we will have de-noised the image.

On a similar note, the information contained in the last singular vector pairs may not be very important to the recomposition of the image, so that we can drop them without much loss of information. This represents compression of the image.
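As a rough sketch of this idea (the "image" here is a synthetic low-rank matrix plus noise, not real image data), keeping only the leading singular vector pairs recovers the underlying signal:

```python
import numpy as np

rng = np.random.default_rng(2)
# Stand-in for an image: a rank-2 "signal" plus small additive noise
signal = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
image = signal + 0.01 * rng.standard_normal((50, 50))

U, s, Vt = np.linalg.svd(image, full_matrices=False)

# Keep only the k largest singular values: a sum of k rank-one terms
k = 2
approx = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(k))

# The truncated sum is the best rank-k approximation (Eckart-Young),
# so it is at least as close to the noisy image as the true signal is
print(np.linalg.norm(image - approx) <= np.linalg.norm(image - signal))  # True
```

Dropping the trailing terms also compresses: storing `k` singular triples takes `k*(50+50+1)` numbers instead of `50*50`.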

## The SVD summarizes the basic spaces related to a matrix ${\displaystyle \left.A\right.}$

• The Col(${\displaystyle \left.A\right.}$) is spanned by the columns of ${\displaystyle \left.U\right.}$ corresponding to non-zero singular values.
• The Row(${\displaystyle \left.A\right.}$) is spanned by the columns of ${\displaystyle \left.V\right.}$ corresponding to non-zero singular values.
• The Nul(${\displaystyle \left.A\right.}$) is spanned by the columns of ${\displaystyle \left.V\right.}$ corresponding to zero singular values; and of course
• The Nul(${\displaystyle \left.A^{T}\right.}$) is spanned by the columns of ${\displaystyle \left.U\right.}$ corresponding to zero singular values.
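A minimal sketch of reading these four spaces off the SVD (the matrix below is a made-up rank-2 example):

```python
import numpy as np

# A rank-deficient 3x4 matrix: row 3 = row 1 + row 2, so the rank is 2
A = np.array([[1.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 1.0, 3.0, 1.0]])

U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))   # numerical rank: count of non-zero singular values

col_basis   = U[:, :r]       # basis for Col(A)
row_basis   = Vt[:r].T       # basis for Row(A)
null_basis  = Vt[r:].T       # basis for Nul(A)
lnull_basis = U[:, r:]       # basis for Nul(A^T)

print(r)                                  # 2
print(np.allclose(A @ null_basis, 0))     # True: A kills the trailing columns of V
print(np.allclose(A.T @ lnull_basis, 0))  # True: A^T kills the trailing columns of U
```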

## Applications

• We can use the SVD to interpolate a matrix. See Long, A. and C. Long. Surface Approximation and Interpolation Via Matrix SVD. The College Mathematics Journal, Vol. 32, #1, January, 2001:20-25.
• When we can't invert a matrix, we can define a "pseudo-inverse" via the SVD. This is precisely what we do when we solve the linear regression/least squares problem in general: we can't solve ${\displaystyle \left.Ax=b\right.}$ because the system is inconsistent, but we can find a least-squares solution by inverting ${\displaystyle \left.A^{T}A\right.}$ (when possible). When that is not possible, we use a pseudo-inverse solution.
• The SVD is used for image compression, as described above.
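As a sketch of the pseudo-inverse idea (the data points below are invented for illustration), the SVD-based solution agrees with a standard least-squares routine:

```python
import numpy as np

# Overdetermined, inconsistent system: fit a line y = c0 + c1*t to noisy points
t = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([1.1, 1.9, 3.2, 3.8])
A = np.column_stack([np.ones_like(t), t])  # 4x2 design matrix

# SVD-based pseudo-inverse: invert only the non-zero singular values
U, s, Vt = np.linalg.svd(A, full_matrices=False)
x = Vt.T @ np.diag(1.0 / s) @ U.T @ b

# Same answer as NumPy's least-squares solver
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```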