*Inspired by the seminar yesterday at ICES.*

Thanks to the University of Colorado and Yale.

**Introduction:**

Inspiration: The Singular Value Decomposition (SVD) of an $m \times n$ matrix requires $O(mn\min(m,n))$ flops, which is not practical for large-scale matrices. Thus we have to find an algorithm with lower cost.

Target: if the singular values $\sigma_1 \ge \sigma_2 \ge \cdots$ decay rapidly, then we only need to maintain the first several singular values by using a lower-rank matrix.

**Algorithm:**

1. Accuracy:

If we adopt a low-rank approximation $A_\ell$ of rank $\ell = k + p$, where $p$ is a small oversampling parameter, then we can make sure that $\|A - A_\ell\| \le C\,\sigma_{k+1}$ with probability almost 1, where the factor $C$ grows only slowly with $\ell$ and $\min(m,n)$.
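This claim is easy to check numerically. The following is a NumPy sketch (not the post's MATLAB code); the test matrix, the target rank $k=10$, and the oversampling $p=5$ are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a 200x200 test matrix with rapidly decaying singular values.
m = n = 200
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)              # sigma_j = 2^{-(j-1)}
A = U @ (s[:, None] * V.T)

k, p = 10, 5                          # target rank k, oversampling p
Y = A @ rng.standard_normal((n, k + p))
Q, _ = np.linalg.qr(Y)                # orthonormal basis for range(Y)

err = np.linalg.norm(A - Q @ (Q.T @ A), 2)
print(err <= 10 * s[k])               # error is a small multiple of sigma_{k+1}
```

With a rapidly decaying spectrum the projection error is typically far below even a modest multiple of $\sigma_{k+1}$.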

2. Cost

The method only costs a total of $O(mn\ell)$ flops for a dense $A$, dominated by the two matrix products with $A$.

3. Steps

1. Form a test matrix $\Omega \in \mathbb{R}^{n \times \ell}$ whose column vectors are i.i.d. random vectors with the standard Gaussian distribution. And we have $Y = A\Omega$.

2. Use the SVD to determine the left singular vectors corresponding to the first $\ell$ singular values of $Y$, and form a matrix $Q$ with these column vectors. One can prove that the columns of $Q$ are close to an orthonormal basis of the range of $A$.

3. Project $A$ onto this orthonormal basis: $B = Q^\top A$.

4. Apply the SVD to $B$: $B = \tilde{U}\Sigma V^\top$, where the columns of $\tilde{U}$ and $V$ are orthonormal.

5. Set $U = Q\tilde{U}$; then $U\Sigma V^\top$ is the rank-$\ell$ approximation of $A$.
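The five steps above can be sketched in a few lines of NumPy (a sketch, not the post's MATLAB program; the QR factorization is used in step 2 as the standard shortcut to an orthonormal basis, and the oversampling $p=5$ is an illustrative choice):

```python
import numpy as np

def randsvd(A, k, p=5):
    """Randomized SVD following the five steps above."""
    rng = np.random.default_rng(0)
    n = A.shape[1]
    # Step 1: Gaussian test matrix Omega, and Y = A @ Omega.
    Y = A @ rng.standard_normal((n, k + p))
    # Step 2: orthonormal basis Q for the range of Y (thin QR).
    Q, _ = np.linalg.qr(Y)
    # Step 3: project, B = Q^T A.
    B = Q.T @ A
    # Step 4: small SVD of the (k+p) x n matrix B.
    Ub, S, Vt = np.linalg.svd(B, full_matrices=False)
    # Step 5: lift back, U = Q @ Ub.
    return Q @ Ub, S, Vt

A = np.random.default_rng(1).standard_normal((300, 300))
U, S, Vt = randsvd(A, 10)
print(np.allclose(U.T @ U, np.eye(U.shape[1])))  # columns of U are orthonormal
```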

**Implementation:**

Here we take the matrix $A$ to be $1000 \times 1000$; in practice $A$ is assumed to be large and sparse.

>> A=randn(1000,1000); % a dense test matrix; in practice A is assumed to be sparse

function randsvd(A, k)
% Randomized SVD sample program.
% Purpose: elapsed-time comparison with the traditional SVD.
tic
[m,n]=size(A);
P=A*randn(n,k);        % Y = A*Omega with a Gaussian test matrix
[Q,R]=qr(P,0);         % thin QR gives the orthonormal basis Q
B=Q'*A;                % project: B = Q'*A
[U,S,V]=svd(B,'econ'); % small SVD; 'econ' avoids the full n-by-n V
U=Q*U;                 % lift back: approximate left singular vectors of A
disp('for Randsvd:');
toc
tic
svd(A);
disp('for Svd:');
toc
end

>> randsvd(A,10)
for Randsvd:
Elapsed time is 0.103737 seconds.
for Svd:
Elapsed time is 2.123127 seconds.

**Comparison with the Lanczos method:**

1. Cost: almost the same, but Lanczos requires many, many iterative loops…

2. Lanczos might have trouble with degenerate cases (e.g., repeated singular values).


## Published by YiMin

Traveller, in math.

Yeah, it is similar to the Nyström method for matrix approximation, where $P$ is selected as a subset of $A$'s columns. In my experiments (just for some applications), it achieves high accuracy in the singular values.

It is very useful in high-dimensional reduction; it has been proved that Landmark MDS and Fast MDS are both Nyström-method approximations of classical MDS.

In this algorithm, the accuracy can be guaranteed using probability theory; there are actually a lot of papers on this topic, and it is interesting. Random column vectors produce high accuracy by concentration-of-measure results, and the result gets better if we place more column vectors in $\Omega$.
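The point about extra columns is easy to verify (again a NumPy sketch; the decaying spectrum, $k=20$, and the column counts are illustrative choices): with more random columns in the test matrix, the projection error drops.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
s = np.exp(-0.1 * np.arange(n))        # smoothly decaying spectrum
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ (s[:, None] * V.T)

k = 20
errs = []
for ell in (k, k + 30):                # without and with extra columns
    Y = A @ rng.standard_normal((n, ell))
    Q, _ = np.linalg.qr(Y)
    errs.append(np.linalg.norm(A - Q @ (Q.T @ A), 2))
print(errs[0] > errs[1])               # more columns, smaller error
```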

You won’t want to miss the Google project redsvd. It is useful. However, I have only had a glimpse at the AccuracyTest and PerformanceTest in the src directory.