# Randomized SVD

Inspired by yesterday's seminar at ICES.

Thanks to the University of Colorado and Yale.

Introduction:

Inspiration: A full Singular Value Decomposition of an $m\times n$ matrix requires $O(mn^2)$ flops, which is not practical for large-scale matrices. Thus we have to find an algorithm with lower cost.

Target: if the singular values are $\sigma_1\ge \sigma_2\ge \cdots\ge\sigma_n$, then we want to capture the first several singular values by using a lower-rank matrix.

Algorithm:

1. Accuracy:

If we adopt a low-rank matrix $B$ with $\mathrm{rank}(B)=k$, built from $k+p$ random samples (where $p$ is a small oversampling parameter), then we can make sure that

$\|A-B\|\le \sqrt{(k+p)n}\,\sigma_{k+1}(A)$

with probability almost 1.
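The bound can be checked numerically. The sketch below (in Python/NumPy rather than the MATLAB used later; the test matrix and seed are my own choices) builds a rank-$k$ approximation from a Gaussian sketch and compares the spectral-norm error against $\sqrt{(k+p)n}\,\sigma_{k+1}(A)$:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k, p = 80, 50, 5, 5

# Test matrix with geometrically decaying singular values sigma_i = 2^-(i-1).
s = 2.0 ** -np.arange(n, dtype=float)
A = np.linalg.qr(rng.standard_normal((m, n)))[0] @ np.diag(s)

# Rank-k approximation B from the randomized sketch P = A R.
R = rng.standard_normal((n, k + p))
Q = np.linalg.svd(A @ R, full_matrices=False)[0][:, :k]
B = Q @ (Q.T @ A)                    # rank(B) = k

err = np.linalg.norm(A - B, 2)       # spectral norm of the residual
bound = np.sqrt((k + p) * n) * s[k]  # sqrt((k+p) n) * sigma_{k+1}(A)
assert err <= bound
```

For a spectrum that decays this fast, the observed error is typically only a small multiple of $\sigma_{k+1}$, far inside the bound.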

2. Cost

Apart from the two passes over $A$ (the products $A\cdot R$ and $A^{T}\cdot Q$ below), the method costs only $O(k^2(m+n))$ flops in total.

3. Steps

1. Draw a matrix $R_{n\times (k+p)}$ whose $k+p$ column vectors are i.i.d. random vectors with $N(0,1)$ entries, and form:

$P_{m\times (k+p)}=A_{m\times n}\cdot R_{n\times (k+p)}$

2. Use the SVD to determine the left singular vectors corresponding to the first $k$ singular values of $P_{m\times (k+p)}$, and form a matrix $Q_{m\times k}$ from these $k$ column vectors. One can prove that the columns of $Q_{m\times k}$ are close to an orthonormal basis of the range of $A$.

3. Apply $A^{T}$ to the orthonormal basis:

$S_{n\times k}=A^{T}_{n\times m}\cdot Q_{m\times k}$.

4. Apply the SVD to $S^{T}_{k\times n}$, then

$S^{T}_{k\times n}=T_{k\times k}\cdot \Sigma_{k\times k}\cdot V^{T}_{k\times n}$

where the columns of $\displaystyle T_{k\times k}$ and $\displaystyle V_{n\times k}$ are orthonormal.

5. Set $U_{m\times k}=Q_{m\times k}\cdot T_{k\times k}$; then

$U_{m\times k}\cdot \Sigma_{k\times k}\cdot V^{T}_{k\times n}$

is a rank-$k$ approximation of $A$.
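The five steps above can be sketched in Python/NumPy (a translation sketch, not the MATLAB program below; the function name `rand_svd` and the default $p=5$ are my own choices):

```python
import numpy as np

def rand_svd(A, k, p=5, rng=None):
    """Rank-k randomized SVD, following steps 1-5 above."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    # Step 1: Gaussian test matrix R and sample matrix P = A R.
    R = rng.standard_normal((n, k + p))
    P = A @ R
    # Step 2: left singular vectors of P for its top k singular values.
    Up = np.linalg.svd(P, full_matrices=False)[0]
    Q = Up[:, :k]                 # approx. orthonormal basis of range(A)
    # Step 3: apply A^T to the basis.
    S = A.T @ Q                   # n x k
    # Step 4: small SVD of S^T = T Sigma V^T.
    T, sigma, Vt = np.linalg.svd(S.T, full_matrices=False)
    # Step 5: U = Q T gives the approximate left singular vectors of A.
    U = Q @ T
    return U, sigma, Vt
```

For a matrix with rapidly decaying singular values, `U @ np.diag(sigma) @ Vt` recovers $A$ up to roughly $\sigma_{k+1}$ accuracy.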

Implementation:

Here we take the matrix $A$ to be $1000\times 1000$. (For the method to beat a full SVD by a wide margin, $A$ should ideally be sparse or otherwise fast to multiply; the dense test matrix below still shows a clear speedup.)

```matlab
>> A=randn(1000,1000); % dense test matrix; in practice A would ideally be sparse
```

```matlab
function randsvd(A, k)
%RANDSVD Randomized SVD sample program.
%Purpose: elapsed-time comparison with the traditional SVD.
tic
[m,n]=size(A);
P=A*randn(n,k);         % step 1: sample the range of A
[Q,R]=qr(P,0);          % economy-size QR gives an orthonormal basis Q
B=Q'*A;                 % project A onto the sampled subspace
[U,S,V]=svd(B,'econ');  % small SVD of the k-by-n matrix B
U=Q*U;                  % approximate left singular vectors of A
disp('for Randsvd:');
toc
tic
svd(A);
disp('for Svd:');
toc
end
```

```
>> randsvd(A,10)
for Randsvd:
Elapsed time is 0.103737 seconds.
for Svd:
Elapsed time is 2.123127 seconds.
```

(The program obtains the orthonormal basis from an economy-size QR factorization of $P$ instead of the SVD in step 2, and skips the oversampling, i.e. $p=0$; the spanned subspace is the same.)

Comparison to the Lanczos method:

1. Cost: roughly the same, while Lanczos requires many, many iterative loops…

2. Lanczos might have trouble with degenerate cases (e.g., repeated or clustered singular values).
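Point 2 can be illustrated numerically: a Gaussian sketch only needs to capture the leading invariant subspace, so repeated singular values cause it no difficulty. A Python/NumPy sketch (my own test matrix, whose $k$ nonzero singular values are all equal):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, p = 100, 4, 6

# Rank-k matrix whose k nonzero singular values are ALL equal (degenerate case).
U = np.linalg.qr(rng.standard_normal((n, k)))[0]
V = np.linalg.qr(rng.standard_normal((n, k)))[0]
A = U @ V.T                      # singular values: 1, 1, 1, 1, 0, ...

# Randomized range finder, as in steps 1-2 above.
Q = np.linalg.qr(A @ rng.standard_normal((n, k + p)))[0]
resid = np.linalg.norm(A - Q @ (Q.T @ A))   # numerically zero
```

Since $A$ has exact rank $k$, the sketch spans its range with probability 1, and the residual is at machine-precision level despite the degeneracy.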

This is just a nerdy Math PhD student at UT Austin.

## 3 thoughts on “Randomized SVD”

1. yelpoo says:

Yeah, it is similar to the Nyström method for matrix approximation, where $P$ is selected as a subset of $A$'s columns. In my experiments (just for one application), it achieves high accuracy in the singular values.

It is very useful in high-dimensional reduction, and it has been proved that Landmark MDS and Fast MDS are both Nyström-method approximations of classical MDS.
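For reference, the Nyström method mentioned here approximates a positive semidefinite matrix from a sampled subset of its columns. A minimal Python/NumPy sketch (uniform column sampling and a pseudoinverse of the intersection block; the matrix and sample size are my own choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, r, c = 200, 5, 20

# Low-rank PSD matrix A = X X^T with rank r.
X = rng.standard_normal((n, r))
A = X @ X.T

# Nystrom: sample c columns, then A ~= C W^+ C^T.
idx = rng.choice(n, size=c, replace=False)
C = A[:, idx]                       # sampled columns of A
W = A[np.ix_(idx, idx)]             # intersection block
A_nys = C @ np.linalg.pinv(W) @ C.T

err = np.linalg.norm(A - A_nys) / np.linalg.norm(A)
```

When the sampled block has the same rank as $A$ (here $c > r$, so this holds almost surely), the Nyström approximation is exact up to round-off.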

• YiMin says:

In this algorithm the accuracy can be guaranteed using probability theory; there are actually a lot of papers on this topic, and it is interesting. Random column vectors produce high accuracy by concentration/limit theorems, and we will see that the result gets better if we place more column vectors in $R$.
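The claim that more columns in $R$ improve the result can be checked directly: the subspace spanned by $k+p$ sketch columns contains the one spanned by the first $k$, so the projection error can only decrease. A Python/NumPy sketch (test matrix and seed are my own):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, k, p = 80, 60, 5, 10

# Test matrix with decaying singular values.
s = 1.5 ** -np.arange(n, dtype=float)
A = np.linalg.qr(rng.standard_normal((m, n)))[0] @ np.diag(s)

P = A @ rng.standard_normal((n, k + p))   # one sketch with k+p columns

def proj_err(cols):
    """Frobenius error of projecting A onto range(P[:, :cols])."""
    Q = np.linalg.qr(P[:, :cols])[0]
    return np.linalg.norm(A - Q @ (Q.T @ A))

# The (k+p)-column subspace contains the k-column one, so the error shrinks.
assert proj_err(k + p) <= proj_err(k)
```

The monotonicity is deterministic here because the smaller sketch uses a prefix of the same random columns; the probabilistic statement is that the expected error also drops quickly with $p$.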

• YiMin says:

You won't want to miss the Google project redsvd. It is useful. However, I have only had a glimpse at the AccuracyTest and PerformanceTest in its src directory.