Thanks to the following people for feedback: Denizhan Akar, Rudolf Laine, Simon Lermen, Aryan Bhatt, Spencer Becker-Kahn, Arthur Conmy, and anonymous members of my ARENA cohort (-:

If you want to 80-20 this post (and you already know what SVD is), then just read the "Summary" section below.

Summary

The SVD is a matrix decomposition M = USV^T, where U and V are orthogonal, and S is diagonal (with non-negative diagonal elements, which we call singular values). The six intuitions are:

1. Rotations and Scalings (geometric picture)

Orthogonal matrices U and V can be thought of as rotations of our basis, and S as a scaling of our new basis. These operations are easier to geometrically visualise than the entire operation of a matrix, with all its nuances (e.g. things like shear).

2. Best Rank-k Approximations

Truncating the SVD sum gives us in some sense the "best way" to approximate M as a sum of rank-1 matrices. In other words, the first few matrices in the SVD sum are the "most important parts of M".

3. Least Squares Regression

If we're trying to minimize ∥Mx − b∥ (either with a constraint on the norm of x or with no constraint), we can express the solution for x in terms of the SVD of M. When ∥x∥ is constrained, it becomes most important to get the components of x right in the directions of the largest singular values of M; the smaller singular values matter less.

4. Input and Output Directions (like MLPs!)

An MLP can be thought of as a collection of neurons, where each neuron has an input direction (thought of as detecting a feature) and an output direction (outputting a result if that feature is observed). v_i and u_i are like the input and output directions (minus the nonlinearity in the middle), and furthermore each "neuron" in this case is completely independent (orthogonal).

5. Lost & Preserved Information

SVD tells you what information (if any) is lost when you useMas a linear map. It also tells you which bits of information are easier / harder to recover.

6A. Principal Component Analysis

If M is actually a matrix of data, we can interpret the SVD matrices in terms of that data. V tells you which features (= linear combinations of data) explain the most variance in the data, S tells you how large this variance is, and U tells you how much each datapoint is exposed to the corresponding feature.

6B. Information Compression

The Fourier transform is a popular choice of basis when compressing things like images (because with a few low-frequency Fourier basis vectors, we can usually reconstruct the image faithfully). We can think of SVD in a similar way, because it gives us the "best basis to use" for reconstructing our image, in some sense.

There are 2 key ideas running through most of these points:

V as input directions and U as the corresponding output directions (i.e. we calculate Mx by projecting x onto the input directions, and using these projections as coefficients for our output directions)

The SVD as being a way to efficiently represent the most important parts of the matrix, especially for low-rank matrices.

Introduction

Motivation

I'm currently doing a lot of mechanistic interpretability, and SVD is a key concept when thinking about the operations done inside transformers. Matrices like W_OV and W_QK are very large and low-rank, making SVD the perfect tool for interpreting them. Furthermore, we have evidence that the SVD of weight matrices in transformers is highly interpretable, with the principal directions aligning with semantically interpretable directions in the residual stream.

On top of this, I think deeply understanding SVD is crucial for understanding how to think about matrices in general, and although there's a lot of valuable stuff online, it's not been brought together into a single resource. I hope I can do something similar with this post as I did with my KL divergence post.

Notation

Consider a matrix M, with size (m, n). The singular value decomposition (SVD) is M = USV^T, where:

The columns of U and V are orthogonal unit vectors,

S is a diagonal matrix with elements (σ_1, ..., σ_r), where r is the rank of the matrix M, and the singular values σ_i are positive & in decreasing order: σ_1 ≥ σ_2 ≥ ... ≥ σ_r > 0.

There are a few different conventions when it comes to SVD. Sometimes it's written with U,S,V as having sizes (m,m),(m,n),(n,n) respectively (in other words we pad S with zeros, and fill out U and V with a complete basis). Alternatively, the matrices can also be written with shapes (m,r),(r,r),(r,n), in other words the matrix S has no zero diagonal elements. I'll most often use the second convention, but there are times when I'll use the first (it should be clear which one I'm using at any given time).

Lastly, note that we can also write M as follows:

$$M = \sum_{i=1}^{r} \sigma_i u_i v_i^T$$

in other words as a sum of rank-1 matrices, scaled by the singular values. I claim this is the most natural way to think about SVD, and it's the form I'll be using for most of the rest of the post. For convenience, I'll refer to this as the "SVD sum".
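To make the "SVD sum" concrete, here's a minimal numpy sketch (the matrix is a made-up example); note that numpy's `svd` returns V^T directly, and `full_matrices=False` gives the thin shapes:

```python
import numpy as np

# A hypothetical example matrix, purely for illustration.
M = np.array([[3.0, 1.0, 0.0],
              [1.0, 3.0, 0.0]])

# full_matrices=False gives the "thin" shapes; numpy returns V^T as Vt.
U, S, Vt = np.linalg.svd(M, full_matrices=False)

# Rebuild M as the SVD sum: sigma_i * u_i v_i^T, one rank-1 matrix per i.
M_sum = sum(S[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(S)))

assert np.allclose(M, M_sum)
```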

6 ½ Intuitions

Note that there's a lot of overlap between some of these points, and some of them cut a lot closer to the "core of SVD" than others. You might say that they're . . . not linearly independent, and have different singular values. (I make no apologies for that pun.)

1. Rotations and Scalings (geometric picture)

Orthogonal matrices U and V can be thought of as rotations of our basis, and S as a scaling of our new basis. These operations are easier to geometrically visualise than the entire operation of a matrix, with all its nuances (e.g. things like shear).

U and V are orthogonal matrices, meaning that their column vectors ui and vi are orthogonal unit vectors. We can think of them as rotations in high-dimensional space (in fact, any orthogonal matrix can be formed via a series of rotations, and possibly a reflection). S is a diagonal matrix, which means it scales input along the standard basis.

The key point here is that we're taking M, a general linear operation that is hard to get geometric intuition for, and breaking it up into a series of operations which are much easier to visualise. Let's take possibly the simplest example of a non-trivial matrix operation: a shear.

The illustration below shows how this can be broken down as a rotation, scaling along the standard basis, then another rotation. See here for the animated version.
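If you want to check this decomposition numerically, here's a quick numpy sketch using the standard 2×2 shear as the example:

```python
import numpy as np

# The simple shear matrix from the illustration.
shear = np.array([[1.0, 1.0],
                  [0.0, 1.0]])

U, S, Vt = np.linalg.svd(shear)

# U and Vt are orthogonal (rotations, possibly with a reflection),
# and diag(S) is a pure scaling along the standard basis.
assert np.allclose(U @ U.T, np.eye(2))
assert np.allclose(Vt @ Vt.T, np.eye(2))

# rotation . scaling . rotation recovers the shear exactly.
assert np.allclose(U @ np.diag(S) @ Vt, shear)
```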

2. Best Low-Rank Approximations

Truncating the SVD sum gives us in some sense the "best way" to approximate M as a sum of rank-1 matrices. In other words, the first few matrices in the SVD sum are the "most important parts of M".

One natural way to formalise "best approximation" Mk would be the matrix which minimises the value ∥M−Mk∥, where ∥⋅∥ is some reasonable choice for the norm of a matrix. For example, you might use:

The spectral norm, $\|A\|_2 = \max_{\|x\|_2 \le 1} \|Ax\|_2$

The Frobenius norm, $\|A\|_F = \sqrt{\sum_{ij} A_{ij}^2}$

As it happens, the choice M_k = ∑_{i=1}^k σ_i u_i v_i^T minimises the residual ∥M − M_k∥ for both of these norms (subject to the restriction that M_k can have at most rank k). This is called the Eckart-Young Theorem. ^{ [2] } You can find a sketch of both proofs in the appendix.

Note - the proof relies heavily on the following lemmas:

The spectral norm of a matrix is its largest singular value.

The squared Frobenius norm of a matrix equals the sum of its squared singular values.

This actually hints at an important meta point in linear algebra - important concepts like Frobenius norm, trace, determinant, etc. often make a lot more sense when they're defined in terms of the matrix when viewed as a linear map^{ [3] }, rather than when viewed as a grid of numbers. In this case, defining the Frobenius norm as the sum of squared singular values in SVD was a lot more natural than describing it as the sum of squared elements (it's arguably easier to see how the former definition captures some notion of the "size" of the matrix). To give another example, it's often more natural to describe the trace as the sum of eigenvalues than the sum of diagonal elements (it's quite easy to prove the latter if you start from the former). For more on this meta point, see this section of Neel Nanda's recorded linear algebra talk.

Key idea - the singular vectors corresponding to the largest singular values are the best way of efficiently capturing what the matrix M is actually doing. If you capture most of the large singular values, then you've explained most of the operation of matrix M (the residual linear transformation is pretty small).
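As a concrete sanity check, here's a numpy sketch (on a made-up random matrix) showing that the truncated SVD sum leaves exactly the residual the two lemmas predict:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 4))          # an arbitrary example matrix

U, S, Vt = np.linalg.svd(M, full_matrices=False)

k = 2
M_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]   # best rank-k approximation

residual = M - M_k
# Lemma 1: the residual's spectral norm is sigma_{k+1}.
assert np.isclose(np.linalg.norm(residual, 2), S[k])
# Lemma 2: its squared Frobenius norm is the sum of the remaining sigma_i^2.
assert np.isclose(np.linalg.norm(residual, 'fro')**2, np.sum(S[k:]**2))
```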

3. Least Squares Regression

If we're trying to minimize ∥Mx − b∥ (either with a constraint on the norm of x or with no constraint), we can express the solution for x in terms of the SVD of M. When ∥x∥ is constrained, it becomes most important to get the components of x right in the directions of the largest singular values of M; the smaller singular values matter less.

Firstly, let's take the least squares expression:

$$\min_x \|Mx - b\|_2^2$$

and substitute in the singular value decomposition of M. The L2 norm of a vector is unchanged when you apply an orthogonal map, so:

$$\|Mx - b\|_2 = \|USV^Tx - b\|_2 = \|Sx_v - b_u\|_2$$

where:

x_v = V^T x are the components of x in the basis created from the columns of V,

b_u = U^T b are the components of b in the basis created from the columns of U.

When written in this form, we can read off a closed-form expression for the solution:

$$x_{v,i} = \begin{cases} b_{u,i}/\sigma_i & \text{if } \sigma_i > 0 \\ \text{doesn't matter} & \text{otherwise} \end{cases}$$

where x_{v,i} = (x_v)_i is the component of x along the i-th column of V, and b_{u,i} = (b_u)_i is the component of b along the i-th column of U. This result suggests the following terminology ^{ [4] }, which we'll use from here on out:

the columns of V are the input directions of the matrix M,

the columns of U are the corresponding output directions.

The problem of least squares regression then reduces to a simple one: make sure the components of x along the input directions match up with the corresponding target output directions.

What about constrained optimization? Suppose we were trying to minimize ∥Mx − b∥_2^2 subject to the restriction ∥x∥_2 ≤ m. We can write the solution in this case as x_{v,i} = b_{u,i}/(σ_i + λ/σ_i), where λ is the smallest possible non-negative real number s.t. ∥x∥_2 ≤ m. ^{ [5] } Note that, the larger the singular values σ_i are, the closer our coefficient 1/(σ_i + λ/σ_i) is to the "unconstrained optimal value" of 1/σ_i. In other words, the larger singular values are more important, so in a constrained optimization we care more about the components of x along the more important input directions v_i. ^{ [6] }

A general point here - least squares isn't an easy problem to solve in general, unless we have SVD - then it becomes trivial! This is a pretty clear sign that SVD is in some sense the natural form to write a matrix in.
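Here's a short numpy sketch of the unconstrained closed-form solution above (on a made-up random system), checked against numpy's built-in least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(5, 3))          # arbitrary full-column-rank example
b = rng.normal(size=5)

U, S, Vt = np.linalg.svd(M, full_matrices=False)

# Closed-form least squares via SVD: project b onto the output
# directions, divide by the singular values, map back through V.
b_u = U.T @ b
x = Vt.T @ (b_u / S)

# Matches the standard solver.
x_ref, *_ = np.linalg.lstsq(M, b, rcond=None)
assert np.allclose(x, x_ref)
```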

4. Input and Output Directions (like MLPs!)

An MLP can be thought of as a collection of neurons, where each neuron has an input direction (thought of as detecting a feature) and an output direction (outputting a result if that feature is observed). v_i and u_i are like the input and output directions (minus the nonlinearity in the middle), and furthermore each "neuron" in this case is completely independent (orthogonal).

As we touched on in the previous point, the columns of V can be thought of as input directions for x, and the columns of U are the output directions. This is actually quite similar to how MLPs work! A simple MLP (ignoring biases) is structured like this:

$$\mathrm{MLP}(x) = W_{out}\, f(W_{in} x)$$

where f is a nonlinear function which acts element-wise (e.g. ReLU) and Win, Wout are the input and output weight matrices respectively. We can write this as a sum over neurons:

$$\mathrm{MLP}(x) = \sum_{i=1}^{d_{mlp}} f(W_{in}[i,:] \cdot x)\, W_{out}[:,i]$$

in other words, each neuron i has an associated input direction W_in[i,:] and an output direction W_out[:,i], and we get the output of the MLP by projecting x along the input direction, ReLUing the result, and using this as our coefficient for the output vector.

Compare this to SVD. We can write M = USV^T = ∑_{i=1}^r σ_i u_i v_i^T, so we have:

$$Mx = \left(\sum_{i=1}^{r} \sigma_i u_i v_i^T\right) x = \sum_{i=1}^{r} (\sigma_i v_i^T x)\, u_i$$

in other words, we calculate the output of x when put through the linear map M by projecting it along each of the input directions vi, multiplying by scale factors σi, and using this as our coefficient for the output vector ui.
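This "project, scale, recombine" description can be checked directly in numpy (on a made-up example matrix):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4))          # arbitrary example matrix
x = rng.normal(size=4)

U, S, Vt = np.linalg.svd(M)

# Project x onto each input direction v_i, scale by sigma_i,
# and use the result as the coefficient of the output direction u_i.
coeffs = S * (Vt @ x)                # sigma_i * (v_i . x)
Mx = U @ coeffs                      # sum_i coeffs_i * u_i

assert np.allclose(Mx, M @ x)
```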

The main differences between SVD in this form and MLPs are:

MLPs are nonlinear thanks to their ReLU function. SVD is entirely linear.

In MLPs, it's common to have more neurons than dimensions of the input (e.g. in transformers, we usually have 4x more). This means some pairs of neurons are certain to have non-orthogonal input or output directions. In contrast, not only does SVD have r≤dim(x), but every pair of input and output directions is guaranteed to be orthogonal. Furthermore, if most singular values are zero (as is the case for large low-rank matrices like WOV ), then r will be much smaller than dim(x).

These two points help explain why we might expect the SVD of the transformation matrices WOV to be highly interpretable. Note that we can also view SVD as a way of trying to tackle the "lack of privileged basis" problem - just because the standard basis isn't privileged doesn't mean there can't exist directions in the space which are more meaningful than others, and SVD can be thought of as a way to find them.

5. Lost & Preserved Information

SVD tells you what information (if any) is lost when you use M as a linear map. It also tells you which bits of information are easier / harder to recover.

For any vector x, we can write x = Vy = ∑_i y_i v_i (where the v_i are the columns of V). Then, we have:

$$Mx = USV^T x = USV^T Vy = USy = \sum_i \sigma_i y_i u_i$$

So the singular values σ_i tell us how much we scale the component of x in the i-th input direction v_i. If σ_i = 0, then that component of x gets deleted. If σ_i is very close to zero, then that information gets sent to very-near-zero, meaning it's harder to recover in some sense.

This is why doing line plots of the spectra of transformer weight matrices can be quite informative. Often, the largest singular values will dominate, and the rest of them will be pretty small. Take the example below, of a matrix with size (1024, 768). Even though the rank of the matrix is technically 768, we can see from the singular values that the matrix is approximately low-rank: beyond a much smaller number of singular values, the rest contribute very little.

Another way of describing this concept is with pseudo-inverses. We say that matrix N is a left-inverse of M if its shape is the transpose of M's shape, and NM = I. If this is impossible (e.g. M has size (m, n) with m < n), then we can still choose N to get as close as possible to this:

What does this look like in SVD? If M = USV^T (where S is the version with all positive diagonal values), then we have N = VS^{-1}U^T as our pseudo left-inverse. We can see that, for singular values σ_i close to zero, N will be in danger of blowing up.
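A quick numpy sketch of this formula (the matrix here is a made-up wide example, so only a pseudo left-inverse exists):

```python
import numpy as np

# A hypothetical wide matrix: more columns than rows, so no exact
# left-inverse exists; the pseudo-inverse is the best we can do.
M = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])

U, S, Vt = np.linalg.svd(M, full_matrices=False)
N = Vt.T @ np.diag(1.0 / S) @ U.T    # the V S^{-1} U^T formula

# Matches numpy's built-in pseudo-inverse.
assert np.allclose(N, np.linalg.pinv(M))

# NM is only a projection onto the row space of M, not the identity;
# MN, however, is exactly the identity here since M has full row rank.
assert np.allclose(M @ N, np.eye(2))
```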

6A. Principal Component Analysis

If M is actually a matrix of data, we can interpret the SVD matrices in terms of that data. V tells you which features (= linear combinations of data) explain the most variance in the data, S tells you how large this variance is, and U tells you how much each datapoint is exposed to the corresponding feature.

Suppose M is a matrix of (centered) data, with size (n, p) - i.e. there are n datapoints, and each datapoint has p features. The rows are the datapoints, the columns are the feature vectors. The empirical covariance matrix is given by Σ = (1/(n−1)) M^T M, i.e. Σ_ij is the estimated covariance of features i and j in the data. Writing this in terms of the SVD, we get:

$$\Sigma = \frac{1}{n-1}(USV^T)^T(USV^T) = \frac{1}{n-1} V S^2 V^T$$

This is just S²/(n−1) with respect to the basis of V. Conclusion - the columns of V (which we also call the principal components) are the directions in feature space which have the highest variance, and the (scaled) squared singular values are that variance. Also, note that V^T Σ V is a diagonal matrix (with diagonal entries σ_i²/(n−1)); this tells us that the "singular features" found in V have zero covariance, i.e. they vary independently.

How does U fit in here? Well, u_i = (1/σ_i) M v_i, so each element of the vector u_i is the dot product of a row of data with the feature loadings for our i-th "singular feature" (divided by σ_i, which is proportional to that feature's standard deviation). From here, it's not a big leap to see that the i-th column of U is the exposure of each datapoint in our matrix to the i-th singular feature.

Note that SVD gives us strictly more information than PCA, because PCA gives us the matrix Σ but not U. This is another illustration of the "SVD is the natural matrix form" idea - when you put a matrix into SVD, other things fall out!
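Here's a numpy sketch of the PCA connection on synthetic data (the dataset is made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(size=(100, 4))     # made-up dataset: 100 points, 4 features
M = data - data.mean(axis=0)         # centre the data

U, S, Vt = np.linalg.svd(M, full_matrices=False)

# The empirical covariance is diagonalised by V, with variances S^2/(n-1):
# the "singular features" have zero covariance with each other.
n = M.shape[0]
cov = M.T @ M / (n - 1)
assert np.allclose(Vt @ cov @ Vt.T, np.diag(S**2 / (n - 1)))

# The i-th column of U (scaled by sigma_i) gives every datapoint's
# exposure to the i-th principal component.
scores = M @ Vt.T                    # projections onto the principal axes
assert np.allclose(scores, U * S)
```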

6B. Information Compression

The Fourier transform is a popular choice of basis when compressing things like images (because with a few low-frequency Fourier basis vectors, we can usually reconstruct the image faithfully). We can think of SVD in a similar way, because it gives us the "best basis to use" for reconstructing our image, in some sense.

Suppose we wanted to transmit an image with perfect fidelity. This requires sending O(width×height) information (the number of pixels). A more common strategy is to take the discrete Fourier transform of an image, and then only send the first few frequencies. This is effective for 2 main reasons:

The Fourier transform is computationally efficient to calculate,

Most images are generally quite continuous, and so low-frequency Fourier basis terms work well for reconstructing them.

But what if we didn't care about efficiency of calculation, and instead we only wanted to minimize the amount of information we had to transmit? Would the Fourier transform always be the best choice? Answer - no, the SVD is provably the best choice (subject to some assumptions about how we're quantifying "best", and "information"). ^{ [7] }

The algorithm is illustrated below. Algebraically, it's the same as the "best rank- k approximation" formula. We flatten every image in our dataset, stack them horizontally, and get a massive matrix of data. We then perform SVD on this massive matrix.

What's interesting about this is that we can gain insight into our data M by examining the matrices U and V. For instance, if we take the first few columns of U (the "output directions") and reshape them into images of shape (width, height), then we get the "eigenvectors ^{ [8] } of our images". For images of human faces, these are often called eigenfaces, and for a long time they were used in facial recognition software.

Here are the first 8 eigenfaces of an example faces dataset (link here ), i.e. the first 8 columns of U reshaped into images:

This is pretty cool! We're basically getting versions of the "general shape of a human face". The first few capture broad patterns of shading & basic features, while the later ones capture features such as lips, eyes and shadows in more detail.

If we wanted to compress a face image into a small number of dimensions and transmit it, we might find the projections of our face along the first few "eigenfaces". To make this more concrete, for an image x of shape (width, height), we might flatten this into a vector of length m, then calculate the k-dimensional vector Û^T x (which is equivalent to finding the projections of x along the first k columns of U), and then reconstruct by multiplying by Û.
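A minimal numpy sketch of this compression scheme, with random data standing in for flattened face images:

```python
import numpy as np

rng = np.random.default_rng(4)
# Stand-in for a dataset of flattened images: one column per image.
images = rng.normal(size=(64, 30))

U, S, Vt = np.linalg.svd(images, full_matrices=False)

k = 8
U_hat = U[:, :k]                     # the first k "eigenfaces"

x = images[:, 0]                     # one image, flattened
code = U_hat.T @ x                   # compress: k numbers instead of 64
x_approx = U_hat @ code              # reconstruct from the code

# The reconstruction is the projection of x onto the top-k eigenfaces,
# so the residual is orthogonal to all of them.
assert np.allclose(U_hat.T @ (x - x_approx), 0)
```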

If we wanted to generate a completely new face from scratch, we could choose a feature vector (i.e. some unit vector in n-dimensional space), and then map it through M̂ = ÛŜV̂^T. This would give you a face which has "exposure to the i-th eigenface" equal to the i-th element of your chosen feature vector.

Final Thoughts

Recapping these, we find that the SVD:

Is a decomposition of complicated linear operations into simpler components (rotations and scalings),

Allows us to best approximate a matrix with one of smaller rank,

Is the natural way to express solutions to least-squares type equations,

Gives us a set of independent input and output directions which fully describe the linear transformation,

Tells us what information gets lost and what gets preserved by the linear transformation,

Has a natural interpretation when our matrix is a data matrix (for example, when each datapoint is a flattened image - eigenfaces!).

Appendix - First-Principles Proof of SVD

First, a quick rant. It bugs me how almost all the proofs of SVD use the spectral theorem or some variant. This seems like massive overkill to me, when there's actually a very elegant proof which just uses some basic calculus, and also gets to the essence of SVD in a way that the spectral theorem-based proofs just don't. For that reason, I'm including this proof in the post.

Sketch of proof

Our proof involves choosing (σ_i, u_i, v_i) sequentially, until we've spanned all of M. At each step, we find the unit vector v_i maximizing ∥Mv_i∥, subject to v_i being orthogonal to our previously chosen vectors. Then we define σ_i = ∥Mv_i∥ and u_i = (1/σ_i)Mv_i. The only non-trivial part of our proof will be showing that the u_i are orthogonal to each other. This will involve a short geometric argument.

Actual proof

We'll sequentially choose (σi,ui,vi), using the following algorithm:

We define v_i = argmax_{∥v∥=1} ∥Mv∥, subject to the restriction v_i ⊥ v_j for all j < i,

We define σ_i = ∥Mv_i∥ and u_i = (1/σ_i)Mv_i.

Most of the properties of SVD are already proved from this algorithm. By our definition, vi are orthogonal unit vectors, ui are unit vectors, and σi are strictly positive & non-increasing (because each vi is chosen with more restrictions than the previous one). The algorithm terminates when Mvi=0 for all possible choices of vi, at which point the v -vectors we've chosen so far must span the domain of M, and we're done. The only thing left is to show that the u -vectors are orthogonal.
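Here's a numpy sketch of this greedy construction (the example matrix is made up, and the inner maximisation uses power iteration on M^T M restricted to the orthogonal complement; that's one possible choice, and the proof doesn't care how the maximiser is found):

```python
import numpy as np

def greedy_svd(M, iters=1000, tol=1e-12):
    """Sequentially pick the unit vector v maximising ||Mv||,
    restricted to the orthogonal complement of the previous picks."""
    rng = np.random.default_rng(0)
    n = M.shape[1]
    us, ss, vs = [], [], []
    P = np.eye(n)                    # projector onto the complement
    for _ in range(n):
        A = M @ P
        v = P @ rng.normal(size=n)
        for _ in range(iters):       # power iteration on (MP)^T (MP)
            w = A.T @ (A @ v)
            norm = np.linalg.norm(w)
            if norm < tol:           # Mv = 0 for all remaining v: stop
                return np.array(us).T, np.array(ss), np.array(vs).T
            v = w / norm
        sigma = np.linalg.norm(M @ v)
        us.append(M @ v / sigma)
        ss.append(sigma)
        vs.append(v)
        P = P - np.outer(v, v)       # remove v from the search space
    return np.array(us).T, np.array(ss), np.array(vs).T

# A made-up matrix with well-separated singular values.
M = np.array([[3.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
U, S, V = greedy_svd(M)

assert np.all(np.diff(S) <= 1e-8)        # sigma_i non-increasing
assert np.allclose(U.T @ U, np.eye(len(S)))  # the non-trivial part
assert np.allclose(U @ np.diag(S) @ V.T, M)  # we've recovered the SVD
```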

Suppose j < i, so that σ_j ≥ σ_i > 0. We can define the function f(θ) = ∥Mv_j cos θ + Mv_i sin θ∥². We know that v_j was chosen to maximise ∥Mv_j∥ subject to orthogonality with the previously chosen v-vectors, and v_j cos θ + v_i sin θ is a unit vector satisfying the same orthogonality constraints (since v_i is also orthogonal to those vectors), so θ = 0 must be a stationary point of the function f. But if we Taylor-expand f(θ) around θ = 0, we get:

$$f(\theta) = \|Mv_j\|^2 + 2\theta\, (Mv_j)^T(Mv_i) + O(\theta^2)$$

and so f′(0) = 0 implies (Mv_i)^T(Mv_j) = 0, hence u_i^T u_j = 0, as required.

The image below shows the geometric intuition for this proof. If Mv_i and Mv_j weren't orthogonal, then we'd be able to define v′_j by rotating v_j a small amount towards v_i, and this would result in a larger value of ∥Mv′_j∥ (contradiction, since v_j was chosen to maximise ∥Mv_j∥).

Appendix - Eckart-Young Theorem

Firstly, let's prove the two lemmas.

The squared Frobenius norm of a matrix equals the sum of its squared singular values.

Sketch of proof - left-multiplying M by an orthogonal matrix is equivalent to doing a unitary operation on the columns of M. Since the squared Frobenius norm is the sum of squared L2 norms of the columns, and unitary operations don't change the L2 norm, M and U^T M = SV^T must have the same Frobenius norm. A similar argument (rows rather than columns) shows that we can right-multiply by an orthogonal matrix without changing the Frobenius norm. So M and U^T M V = S have the same Frobenius norm. But S is a diagonal matrix with elements equal to the singular values, so ∥S∥_F² is clearly the sum of squared singular values.

The spectral norm of a matrix is its largest singular value.

This follows directly from the way we proved the SVD - we chose v1 to maximize ∥Mv∥ over all possible vectors v of unit norm.

Now, let's prove the full theorem. Both spectral and Frobenius norm are unchanged by unitary operations, so we have ∥M − M_k∥ = ∥S − A_k∥ for both types of norm (where A_k = U^T M_k V has the same rank as M_k). If we choose A_k = S_k (the diagonal matrix formed from the first k singular values of S), then we get ∥S − A_k∥_2 = σ_{k+1}, and ∥S − A_k∥_F² = ∑_{i=k+1}^r σ_i². It remains to prove that we can't do better than this.

For a general rank-k matrix A_k, we can find a unit vector f in the span of the standard basis vectors (e_1, e_2, ..., e_{k+1}) s.t. f is in the nullspace of A_k. Then we have:

$$\|S - A_k\|_2 \ge \|(S - A_k)f\|_2 = \|Sf\|_2 \ge \sigma_{k+1}\|f\|_2 = \sigma_{k+1}$$

proving the result for spectral norm. Similarly, we can find orthonormal vectors f_{k+1}, ..., f_r in the span of (e_1, e_2, ..., e_r) which are all in the nullspace of A_k. Using the invariance of the Frobenius norm to changes in basis, we have:

^{ ^ }Note that this terminology is not standard (as far as I know).

^{ ^ }The proof is left as an exercise to the reader. Hint - start by writing the Lagrangian L(x, λ) = ∥Mx − b∥_2² + λ(m² − ∥x∥_2²).

^{ ^ }There's a similar story if we had the constraint ∥x∥_1 ≤ m. Our solution here is found by trying to set x_{v,i} = b_{u,i}/σ_i for as many i = 1, 2, ... as possible, until we hit our constraint. In other words, we only care about matching the components of x along the most important input directions (unlike the L2-restricted case, where we just weighted the more important input directions more highly than the less important ones).

^{ ^ }That is, if we're quantifying "best" by the norm of the approximation residual, for a choice of norm like Frobenius or L2 (see the section on least squares).

^{ ^ }They're eigenvectors of the matrix MM^T = US²U^T.

^{ ^ }The long-awaited sequel to my "Six (and a half) intuitions for KL divergence" is finally here!

If you want to 80-20 this post (and you already know what SVD is), then just read the "Summary" section below.

## Summary

The SVD is a matrix decomposition M = U S V T , where U and V are orthogonal, and S is diagonal (with non-negative diagonal elements, which we call singular values). The six intuitions are:

1. Rotations and Scalings (geometric picture)2. Best rank-kApproximations3. Least Squares Regression4. Input and Output Directions (like MLPs!)5. Lost & Preserved Information6A. Principal Component Analysis6B. Information CompressionThere are 2 key ideas running through most of these points:

input directionsand U as the correspondingoutput directions(i.e. we calculate M x by projecting x onto the input directions, and using these projections as coefficients for our output directions)efficiently represent the most important parts of the matrix, especially for low-rank matrices.## Introduction

## Motivation

I'm currently doing a lot of mechanistic interpretability, and SVD is a key concept when thinking about the operations done inside transformers. Matrices like W O V and W Q K are very large and low-rank, making SVD the perfect tool for interpreting them. Furthermore, we have evidence that the SVD of weight matrices in transformers are highly interpretable, with the principle directions aligning with semantically interpretable directions in the residual stream.

On top of this, I think deeply understanding SVD is crucial for understanding how to think about matrices in general, and although there's a lot of valuable stuff online, it's not been brought together into a single resource. I hope I can do something similar with this post as I did with my KL divergence post.

## Notation

Consider a matrix M , with size ( m , n ) . The

singular value decomposition(SVD) is M = U S V T , where:orthogonal unit vectors,singular valuesσ i are positive & in decreasing order: σ 1 ≥ σ 2 ≥ . . . σ r > 0 .There are a few different conventions when it comes to SVD. Sometimes it's written with U , S , V as having sizes ( m , m ) , ( m , n ) , ( n , n ) respectively (in other words we pad S with zeros, and fill out U and V with a complete basis). Alternatively, the matrices can also be written with shapes ( m , r ) , ( r , r ) , ( r , n ) , in other words the matrix S has no zero diagonal elements. I'll most often use the second convention, but there are times when I'll use the first (it should be clear which one I'm using at any given time).

Lastly, note that we can also write M as follows:

M = r ∑ i = 1 σ i u i v T iin other words as a sum of rank-1 matrices, scaled by the singular values. I claim this is the most natural way to think about SVD, and it's the form I'll be using for most of the rest of the post. For convenience, I'll refer to this as the

"SVD sum".## 6 ½ Intuitions

Note that there's a lot of overlap between some of these points, and some of them cut a lot closer to the "core of SVD" than others. You might say that they're . . . not linearly independent, and have different singular values. (I make no apologies for that pun.)

## 1. Rotations and Scalings (geometric picture)

U and V are

orthogonal matrices, meaning that their column vectors u i and v i are orthogonal unit vectors. We can think of them asrotationsin high-dimensional space (in fact, any orthogonal matrix can be formed via a series of rotations, and possibly a reflection). S is a diagonal matrix, which means it scales input along the standard basis.The key point here is that we're taking M , a general linear operation which it's hard to get geometric intuition for, and breaking it up into a series of operations which are much easier to visualise. Let's take possibly the simplest example of a non-trivial matrix operation: a

shear.The illustration below shows how this can be broken down as a rotation, scaling along the standard basis, then another rotation. See here for the animated version.

## 2. Best Low-Rank Approximations

One natural way to formalise "best approximation" M k would be the matrix which minimises the value ∥ M − M k ∥ , where ∥ ⋅ ∥ is some reasonable choice for the norm of a matrix. For example, you might use:

Spectral norm, ∥ A ∥ 2 = max ∥ x ∥ 2 ≤ 1 ∥ A x ∥ 2Frobenius norm, ∥ A ∥ F = √ ∑ i j A 2 i jAs it happens, the choice M k = ∑ k i = 1 σ i u i v T i minimises the residual ∥ M − M k ∥ for both of these norms (subject to the restriction that M k can have at most rank k ). This is called the

Eckart-Young Theorem.^{ [2] }You can find a sketch of both proofs in the appendix.Note - the proof relies heavily on the following lemmas:

The spectral norm of a matrix is its largest singular value.The squared Frobenius norm of a matrix equals the sum of its squared singular values.This actually hints at an important meta point in linear algebra - important concepts like Frobenius norm, trace, determinant, etc. often make a lot more sense when they're defined in terms of the matrix

when viewed as a linear map^{ [3] }, rather than when viewed as a grid of numbers. In this case, defining the Frobenius norm as the sum of squared singular values in SVD was a lot more natural than describing it as the sum of squared elements (it's arguably easier to see how the former definition captures some notion of the "size" of the matrix). To give another example, it's often more natural to describe the trace as the sum of eigenvalues than the sum of diagonal elements (it's quite easy to prove the latter if you start from the former). For more on this meta point, see this section of Neel Nanda's linear recorded algebra talk.Key idea- the singular vectors corresponding to the largest singular values are the best way of efficiently capturing what the matrix M is actually doing. If you capture most of the large singular values, then you've explained most of the operation of matrix M (the residual linear transformation is pretty small).## 3. Least Squares Regression

Firstly, let's take the least squares expression:

min x ∥ M x − b ∥ 2 2and substitute in the singular value decomposition of M . Spectral norm is unchanged when you perform unitary maps, so:

∥ M x − b ∥ 2 = ∥ U S V T x − b ∥ 2 = ∥ S x v − b u ∥ 2where:

the components ofxin the basis created from the columns ofVthe components ofbin the basis created by columns ofUWhen written in this form, we can read off a closed-form expression for the solution:

$$x_{v,i} = \begin{cases} b_{u,i}/\sigma_i & \text{if } \sigma_i > 0 \\ \text{doesn't matter} & \text{otherwise} \end{cases}$$

Here, $x_{v,i} = (x_v)_i$ is the component of $x$ along the $i$-th column of $V$, and $b_{u,i} = (b_u)_i$ is the component of $b$ along the $i$-th column of $U$. This result suggests the following terminology^{[4]}, which we'll use from here on out: the columns of $V$ are the **input directions** of the matrix $M$, and the columns of $U$ are the **output directions**. The problem of least squares regression then reduces to a simple one: make sure the components of $x$ along the input directions match up with the corresponding target components along the output directions.

What about constrained optimization? Suppose we were trying to minimize $\|Mx - b\|_2^2$ subject to the restriction $\|x\|_2 \leq m$. We can write the solution in this case as

$$x_{v,i} = \frac{1}{\sigma_i + \lambda/\sigma_i} b_{u,i},$$

where $\lambda$ is the smallest possible non-negative real number such that $\|x\|_2 \leq m$.
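As a quick numerical sketch of the unconstrained case (the matrix and vector here are made up purely for illustration), we can build the solution directly from the SVD and check it against numpy's built-in least squares solver:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(5, 3))      # illustrative overdetermined system
b = rng.normal(size=5)

# Closed-form solution via SVD: x_{v,i} = b_{u,i} / sigma_i for sigma_i > 0.
U, S, Vt = np.linalg.svd(M, full_matrices=False)
b_u = U.T @ b                    # components of b along the output directions
x_v = b_u / S                    # divide by the singular values
x = Vt.T @ x_v                   # map back from the V-basis to the standard basis

# Agrees with numpy's least-squares solver.
x_ref, *_ = np.linalg.lstsq(M, b, rcond=None)
print(np.allclose(x, x_ref))     # True
```

(In practice you'd also threshold away numerically-zero singular values before dividing; this sketch assumes $M$ has full column rank.)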

^{[5]} Note that the larger the singular values $\sigma_i$ are, the closer our coefficient $\frac{1}{\sigma_i + \lambda/\sigma_i}$ is to the "unconstrained optimal value" of $1/\sigma_i$. In other words, the larger singular values are more important, so in a constrained optimization **we care more about the components of $x$ along the more important input directions** $v_i$.^{[6]}

A general point here - least squares isn't an easy problem to solve in general, but with SVD it becomes trivial! This is a pretty clear sign that SVD is in some sense the **natural** form to write a matrix in.

## 4. Input and Output Directions (like MLPs!)

As we touched on in the previous point, the columns of $V$ can be thought of as **input directions** for $x$, and the columns of $U$ are the **output directions**. This is actually quite similar to how MLPs work! A simple MLP (ignoring biases) is structured like this:

$$\mathrm{MLP}(x) = W_{out} \, f(W_{in} x)$$

where $f$ is a nonlinear function which acts element-wise (e.g. ReLU) and $W_{in}, W_{out}$ are the input and output weight matrices respectively. We can write this as a sum over neurons:

$$\mathrm{MLP}(x) = \sum_{i=1}^{d_{mlp}} f(W_{in}[i,:] \cdot x) \, W_{out}[:,i]$$

In other words, each neuron $i$ has an associated **input direction** $W_{in}[i,:]$ and an **output direction** $W_{out}[:,i]$, and we get the output of the MLP by projecting $x$ along the input direction, ReLUing the result, and using this as our coefficient for the output vector.

Compare this to SVD. We can write $M = USV^T = \sum_{i=1}^r \sigma_i u_i v_i^T$, so we have:

$$Mx = \left(\sum_{i=1}^r \sigma_i u_i v_i^T\right) x = \sum_{i=1}^r (\sigma_i v_i^T x) \, u_i$$

In other words, we calculate the output of $x$ when put through the linear map $M$ by projecting it along each of the input directions $v_i$, multiplying by scale factors $\sigma_i$, and using this as our coefficient for the output vector $u_i$.
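A minimal numerical check of this "neuron-like" view (all names and sizes here are illustrative): project $x$ onto each input direction $v_i$, scale by $\sigma_i$, and use the result as the coefficient of the output direction $u_i$.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 6))
x = rng.normal(size=6)

U, S, Vt = np.linalg.svd(M, full_matrices=False)

# Sum over "neurons": sigma_i * (v_i . x) * u_i.
out = sum(S[i] * (Vt[i] @ x) * U[:, i] for i in range(len(S)))

print(np.allclose(M @ x, out))   # True
```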

The main differences between SVD in this form and MLPs are:

- There is no nonlinearity in the middle, and each "neuron" (each pair of input and output directions) is completely independent, since the directions are orthogonal.
- If $M$ is a large low-rank matrix (like $W_{OV}$), then $r$ will be **much** smaller than $\dim(x)$.

These two points help explain why we might expect the SVD of the transformation matrices $W_{OV}$ to be highly interpretable. Note that we can also view SVD as a way of trying to tackle the "lack of privileged basis" problem - just because the standard basis isn't privileged doesn't mean there can't exist directions in the space which are more meaningful than others, and SVD can be thought of as a way to find them.

## 5. Lost & Preserved Information

For any vector $x$, we can write $x = Vy = \sum_i y_i v_i$ (where $v_i$ are the columns of $V$). Then, we have:

$$Mx = USV^TVy = USy = \sum_i \sigma_i y_i u_i$$

So the singular values $\sigma_i$ tell us how much we scale the component of $x$ in the $i$-th input direction $v_i$. If $\sigma_i = 0$ then that component of $x$ gets deleted. If $\sigma_i$ is very close to zero, then that component gets sent to very-near-zero, meaning the information is in some sense harder to recover.
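A toy example of a deleted direction (the matrix here is made up for illustration): a rank-deficient map sends the component of $x$ along the zero-singular-value input direction to zero, so that information is unrecoverable.

```python
import numpy as np

# Singular values are (2, 1, 0); the e3 direction is deleted entirely.
M = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])

x = np.array([3.0, 4.0, 5.0])
y = M @ x
print(y)                          # [3. 8. 0.] - the 5 along e3 is gone

# Any x differing only in the deleted direction maps to the same output,
# so no inverse map can tell them apart.
x2 = x + np.array([0.0, 0.0, -7.0])
print(np.allclose(M @ x2, y))     # True
```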

This is why doing line plots of the spectra of transformer weight matrices can be quite informative. Often, the largest singular values will dominate, and the rest of them will be pretty small. Take the example below, of a weight matrix of size `(1024, 768)`. Even though the rank of the matrix is technically 768, we can see from the singular values that the matrix is "approximately singular" after a much smaller number of singular values.

Another way of describing this concept is with **pseudo-inverses**. We say that matrix $N$ is a left-inverse of $M$ if its shape is the transpose of $M$'s, and $NM = I$. If this is impossible (e.g. $M$ has size $(m, n)$ with $m < n$) then we can still choose $N$ to get as close as possible to this:

$$NM = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}$$

In this case, we call $N$ the **pseudo left-inverse** of $M$.

What does this look like in SVD? If $M = USV^T$ (where $S$ is the version with all strictly positive diagonal values), then we have $N = VS^{-1}U^T$ as our pseudo left-inverse. We can see that, for singular values $\sigma_i$ close to zero, $N$ will be in danger of blowing up.
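As a sketch (with an illustrative random matrix), we can build $N = VS^{-1}U^T$ directly from the SVD, keeping only the strictly positive singular values, and compare against numpy's pseudo-inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 3))      # m > n: full column rank, so a true left-inverse exists

U, S, Vt = np.linalg.svd(M, full_matrices=False)
keep = S > 1e-12                 # drop singular values that are (numerically) zero
N = Vt[keep].T @ np.diag(1 / S[keep]) @ U[:, keep].T

print(np.allclose(N @ M, np.eye(3)))        # True: N M = I here
print(np.allclose(N, np.linalg.pinv(M)))    # matches numpy's pseudo-inverse
```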

## 6A. Principal Component Analysis

Suppose $M$ is a matrix of (centered) data, with size $(n, p)$ - i.e. there are $n$ datapoints, and each datapoint has $p$ features. The rows are the datapoints, the columns are the feature vectors. The **empirical covariance matrix** is given by $\Sigma = \frac{1}{n-1}M^TM$, i.e. $\Sigma_{ij}$ is the estimated covariance of features $i$ and $j$ in the data. When writing this in SVD, we get:

$$\Sigma = \frac{1}{n-1}(USV^T)^T(USV^T) = \frac{1}{n-1}VS^2V^T$$

This is just (a scaled) $S^2$ with respect to the basis of $V$. Conclusion -

**the columns of $V$ (which we also call the principal components) are the directions in feature space which have the highest variance, and the (scaled) squared singular values *are* that variance.**

Also, note that $V^T \Sigma V$ is a diagonal matrix (with diagonal entries $\frac{\sigma_i^2}{n-1}$); this tells us that the "singular features" found in $V$ have zero covariance, i.e. they vary independently.

How does $U$ fit in here? Well, $u_i = \frac{1}{\sigma_i}Mv_i$, so each element of the vector $u_i$ is the dot product of a row of data with the feature loadings for our $i$-th "singular feature" (scaled by the standard deviation of that feature). From here, it's not a big leap to see that **the $i$-th column of $U$ is the exposure of each datapoint in our matrix to the $i$-th singular feature**.

Note that SVD gives us strictly more information than PCA, because PCA gives us the matrix $\Sigma$ but not $U$. This is another illustration of the "SVD is the natural matrix form" idea - when you put a matrix into SVD, other things fall out!
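A short numerical sketch of this correspondence (with made-up data): the SVD of centered data reproduces the covariance matrix, and the columns of $V$ diagonalise it with variances $\sigma_i^2/(n-1)$.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(100, 4))
M = M - M.mean(axis=0)           # centre each feature column
n = M.shape[0]

U, S, Vt = np.linalg.svd(M, full_matrices=False)

# Empirical covariance: (1/(n-1)) M^T M = V (S^2/(n-1)) V^T.
Sigma = M.T @ M / (n - 1)
print(np.allclose(Sigma, Vt.T @ np.diag(S**2 / (n - 1)) @ Vt))   # True

# V^T Sigma V is diagonal: the principal components have variance
# sigma_i^2 / (n-1) and zero covariance with each other.
print(np.allclose(Vt @ Sigma @ Vt.T, np.diag(S**2 / (n - 1))))   # True
```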

## 6B. Information Compression

Suppose we wanted to transmit an image with perfect fidelity. This requires sending $O(\text{width} \times \text{height})$ information (the number of pixels). A more common strategy is to take the discrete Fourier transform of the image, and then only send the first few frequencies. This is effective mainly because natural images concentrate most of their perceptually important information in the low-frequency components, and the transform itself is fast to compute.

But what if we didn't care about efficiency of calculation, and instead we only wanted to minimize the amount of information we had to transmit? Would the Fourier transform always be the best choice? Answer - **no, the SVD is provably the best choice** (subject to some assumptions about how we're quantifying "best" and "information").^{[7]}

The algorithm is illustrated below. Algebraically, it's the same as the "best rank-$k$ approximation" formula. We flatten every image in our dataset, stack them horizontally, and get a massive matrix of data. We then perform SVD on this massive matrix.
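As a sketch of the compression idea (the "image" here is synthetic, purely for illustration): keep only the first $k$ singular triples, and compare the storage cost to sending every pixel.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 48))               # stand-in for a flattened image
k = 10

U, S, Vt = np.linalg.svd(img, full_matrices=False)
img_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k]    # best rank-k approximation

full_cost = img.size                          # 64 * 48 numbers to transmit
compressed_cost = k * (64 + 48 + 1)           # k left vectors, right vectors, values
print(compressed_cost < full_cost)            # True

# The spectral error of the truncation is exactly the next singular value.
print(np.isclose(np.linalg.norm(img - img_k, 2), S[k]))   # True
```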

What's interesting about this is that we can gain insight into our data $M$ by examining the matrices $U$ and $V$. For instance, if we take the first few columns of $U$ (the "output directions") and reshape them into images of shape `(width, height)`, then we get the "eigenvectors^{[8]} of our images". When the images are of human faces, these are often called **eigenfaces**, and for a long time they were used in facial recognition software.

Here are the first 8 eigenfaces of an example faces dataset (link here), i.e. the first 8 columns of $U$ reshaped into images:

This is pretty cool! We're basically getting versions of the "general shape of a human face". The first few capture broad patterns of shading & basic features, while the later ones capture features such as lips, eyes and shadows in more detail.

If we wanted to compress a face image into a small number of dimensions and transmit it, we might find the projections of our face along the first few "eigenfaces". To make this more concrete, for an image $x$ of shape `(width, height)`, we might flatten this into a vector of length $m$, then calculate the $k$-dimensional vector $\hat{U}^Tx$ (which is equivalent to finding the projections of $x$ along the first $k$ columns of $U$), and then reconstruct by multiplying by $\hat{U}$.

If we wanted to generate a completely new face from scratch, we could choose a **feature vector** (i.e. some unit vector in $n$-dimensional space), and then map it through $\hat{M} = \hat{U}\hat{S}\hat{V}^T$. This would give us a face which has "exposure to the $i$-th eigenface" equal to the $i$-th element of our chosen feature vector.

## Final Thoughts

Recapping these, we find that the SVD gives us: a geometric picture of a matrix as rotations and scalings, the best rank-$k$ approximations, closed-form solutions to least squares regression, a decomposition into input and output directions, a description of which information a linear map loses or preserves, and the foundations of both PCA and information compression.

## Appendix - First-Principles Proof of SVD

First, a quick rant. It bugs me how almost all the proofs of SVD use the spectral theorem or some variant. This seems like massive overkill to me, when there's actually a very elegant proof which just uses some basic calculus, and also gets to the essence of SVD in a way that the spectral theorem-based proofs just don't. For that reason, I'm including this proof in the post.

**Sketch of proof.** Our proof involves choosing $(\sigma_i, u_i, v_i)$ sequentially, until we've spanned all of $M$. At each step, we find a unit vector $v_i$ to maximize $\|Mv_i\|$, subject to $v_i$ being orthogonal to our previously chosen vectors. Then we define $\sigma_i = \|Mv_i\|$ and $u_i = \frac{1}{\sigma_i}Mv_i$. The only non-trivial part of our proof will be showing that the $u_i$ are orthogonal to each other. This will involve a short geometric argument.

**Actual proof.** We'll sequentially choose $(\sigma_i, u_i, v_i)$, using the following algorithm:

1. Choose a unit vector $v_i$ to maximize $\|Mv_i\|$, subject to $v_i$ being orthogonal to all of $v_1, \ldots, v_{i-1}$.
2. Define $\sigma_i = \|Mv_i\|$ and $u_i = \frac{1}{\sigma_i}Mv_i$.
3. Repeat, stopping once $Mv_i = 0$ for every remaining choice of $v_i$.

Most of the properties of SVD are already proved by this algorithm. By our definition, the $v_i$ are orthogonal unit vectors, the $u_i$ are unit vectors, and the $\sigma_i$ are strictly positive & non-increasing (because each $v_i$ is chosen with more restrictions than the previous one). The algorithm terminates when $Mv_i = 0$ for all possible choices of $v_i$, at which point the $v$-vectors we've chosen so far must span the domain of $M$, and we're done. The only thing left is to show that the $u$-vectors are orthogonal.

Suppose $j < i$, so $\sigma_j \geq \sigma_i > 0$. We can define the function $f(\theta) = \|Mv_j\cos\theta + Mv_i\sin\theta\|^2$. We know that $v_j$ was chosen to maximise $\|Mv_j\|$ subject to orthogonality with the previously chosen $v$-vectors, which means (since $v_j\cos\theta + v_i\sin\theta$ is a unit vector which is also orthogonal to those $v$-vectors) that $\theta = 0$ must be a stationary point of the function $f$. But if we Taylor-expand $f(\theta)$ around $\theta = 0$, we get:

$$f(\theta) = \|Mv_j\cos\theta + Mv_i\sin\theta\|^2 = \|Mv_j\|^2 + 2(Mv_i)^T(Mv_j)\,\theta + O(\theta^2)$$

and so $f'(0) = 0$ implies $(Mv_i)^T(Mv_j) = 0$, hence $u_i^Tu_j = 0$, as required.

The image below shows the geometric intuition for this proof. If $Mv_i$ and $Mv_j$ weren't orthogonal, then we'd be able to define $v_j'$ by rotating $v_j$ a small amount towards $v_i$, and this would result in a longer vector $Mv_j'$ (contradiction, since $v_j$ was chosen to maximise $\|Mv_j\|$).
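The proof's algorithm can also be checked numerically. The sketch below (an illustrative implementation, not from the post) exploits the fact that the unit vector maximising $\|Mv\|$ subject to orthogonality with the previous choices is the top right singular vector of the deflated residual; the greedily recovered values match numpy's singular values, and the $u$-vectors come out orthogonal, as the proof claims.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(5, 4))
R = M.copy()                         # residual: M minus components chosen so far
sigmas, us = [], []

for _ in range(4):
    v = np.linalg.svd(R)[2][0]       # unit v maximising ||Rv|| (= ||Mv|| here)
    sigma = np.linalg.norm(M @ v)
    if sigma < 1e-12:                # Mv = 0 for all remaining v: stop
        break
    u = M @ v / sigma
    sigmas.append(sigma)
    us.append(u)
    R = R - sigma * np.outer(u, v)   # deflate: remove this rank-1 component

print(np.allclose(sigmas, np.linalg.svd(M, compute_uv=False)))   # True
Umat = np.array(us).T
print(np.allclose(Umat.T @ Umat, np.eye(len(us))))               # True: u's orthogonal
```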

## Appendix - Eckart-Young Theorem

Firstly, let's prove the two lemmas.

**The squared Frobenius norm of a matrix equals the sum of its squared singular values.** Sketch of proof - left-multiplying $M$ by an orthogonal matrix is equivalent to doing a unitary operation on the columns of $M$. Since the squared Frobenius norm is the sum of squared $L_2$ norms of the columns, and unitary operations don't change the $L_2$ norm, $M$ and $U^TM = SV^T$ must have the same Frobenius norm. A similar argument (rows rather than columns) shows that we can right-multiply by an orthogonal matrix without changing the Frobenius norm. So $M$ and $U^TMV = S$ have the same Frobenius norm. But $S$ is a diagonal matrix with elements equal to the singular values, so $\|S\|_F^2$ is clearly the sum of squared singular values.

**The spectral norm of a matrix is its largest singular value.** This follows directly from the way we proved the SVD - we chose $v_1$ to maximize $\|Mv\|$ over all possible vectors $v$ of unit norm.

Now, let's prove the full theorem. Both spectral and Frobenius norms are unchanged by unitary operations, so we have $\|M - M_k\| = \|S - A_k\|$ for both types of norm (where $A_k = U^TM_kV$ has the same rank as $M_k$). If we choose $A_k = S_k$ (the diagonal matrix formed from the first $k$ singular values of $S$), then we get $\|S - A_k\|_2 = \sigma_{k+1}$, and $\|S - A_k\|_F^2 = \sum_{i=k+1}^r \sigma_i^2$. It remains to prove that we can't do better than this.

For a general rank-$k$ matrix $A_k$, we can find a unit vector $f$ in the span of the standard basis vectors $(e_1, e_2, \ldots, e_{k+1})$ s.t. $f$ is in the nullspace of $A_k$ (this span has dimension $k+1$, while the nullspace of $A_k$ has codimension at most $k$, so their intersection is non-trivial). Then we have:

$$\|S - A_k\|_2 \geq \|(S - A_k)f\|_2 = \|Sf\|_2 \geq \sigma_{k+1}\|f\|_2 = \sigma_{k+1}$$

proving the result for spectral norm. Similarly, we can find orthonormal vectors $f_{k+1}, \ldots, f_r$ in the span of $(e_1, e_2, \ldots, e_r)$ which are all in the nullspace of $A_k$. Using the invariance of the Frobenius norm under changes of basis, we have:

$$\|S - A_k\|_F^2 \geq \sum_{i,j=k+1}^r \left(f_i^T(S - A_k)f_j\right)^2 = \sum_{i,j=k+1}^r \left(f_i^TSf_j\right)^2 = \sum_{i=k+1}^r \sigma_i^2$$

so we're done.
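The theorem is easy to sanity-check numerically (illustrative sizes; random competitors stand in for "a general rank-$k$ matrix"): the truncated SVD attains exactly the stated errors, and no random rank-$k$ matrix beats it.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 5))
k = 2

U, S, Vt = np.linalg.svd(M, full_matrices=False)
M_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k]      # best rank-k approximation

# Truncation achieves spectral error sigma_{k+1} and squared Frobenius
# error equal to the sum of the remaining squared singular values.
print(np.isclose(np.linalg.norm(M - M_k, 2), S[k]))                      # True
print(np.isclose(np.linalg.norm(M - M_k, 'fro')**2, np.sum(S[k:]**2)))   # True

# No random rank-k competitor does better (Eckart-Young lower bound).
for _ in range(100):
    A = rng.normal(size=(6, k)) @ rng.normal(size=(k, 5))   # rank <= k
    assert np.linalg.norm(M - A, 2) >= S[k] - 1e-9
print("no random rank-k matrix beat the truncation")
```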

^{[1]} Citation needed.

^{[2]} Note, the Eckart-Young Theorem is sometimes used to refer to the Frobenius norm version, sometimes to the spectral norm version, and sometimes to both.

^{[3]} For more on this, see Evan Chen's Napkin (the linear algebra section), or his rant on why matrices are not arrays of numbers.

^{[4]} Note that this terminology is not standard (as far as I know).

^{[5]} The proof is left as an exercise to the reader. Hint - start by writing the Lagrangian $L(x, \lambda) = \|Mx - b\|_2^2 + \lambda(m^2 - \|x\|_2^2)$.

^{[6]} There's a similar story if we had the constraint $\|x\|_1 \leq m$. Our solution here is found by trying to set $x_{v,i} = b_{u,i}/\sigma_i$ for as many $i = 1, 2, \ldots$ as possible, until we hit our constraint. In other words, we **only** care about the most important input directions.

^{[7]} That is, if we're quantifying "best" by the norm of the approximation residual, for a choice of norm like Frobenius or $L_2$ (see the section on least squares).

^{[8]} They're eigenvectors of the matrix $MM^T = US^2U^T$.