Inspired by yesterday's seminar at ICES. Thanks to the University of Colorado and Yale.
Inspiration: the singular value decomposition (SVD) of an $m \times n$ matrix requires $O(mn\min(m,n))$ flops, which is not practical for large-scale matrices. Thus we have to find an algorithm with lower cost.
Target: if the singular values are $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n$, then we need to capture the first $k$ singular values by using a lower-rank matrix.
If we adopt a low-rank approximation of rank $\ell = k + p$, where $p$ is a small oversampling parameter, then we can make sure that
$$\|A - QQ^T A\| \le \left(1 + 11\sqrt{\ell \cdot \min(m,n)}\right)\sigma_{k+1}$$
with probability almost 1, where $Q$ is the orthonormal range basis constructed below.
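To make the bound concrete, here is a small numerical check — a NumPy sketch under assumed sizes $m=200$, $n=150$, $k=10$, $p=5$; the test matrix and seed are purely illustrative, not from the seminar:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k, p = 200, 150, 10, 5
# Build a test matrix with known, rapidly decaying singular values 2^0, 2^-1, ...
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)
A = U @ np.diag(s) @ V.T

# Sketch the range with l = k + p Gaussian columns and orthonormalize.
Q, _ = np.linalg.qr(A @ rng.standard_normal((n, k + p)))
err = np.linalg.norm(A - Q @ (Q.T @ A), 2)   # spectral-norm error
sigma_kp1 = s[k]                             # the (k+1)-th singular value
# In practice err is on the order of sigma_kp1, far below the worst-case factor.
```

On a spectrum that decays this fast, the observed error sits well inside the theoretical factor $(1 + 11\sqrt{\ell\min(m,n)})$.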
The method only costs $O(mn\ell)$ flops in total (plus lower-order terms of $O((m+n)\ell^2)$).
1. Form $Y = A\Omega$, where the column vectors of the $n \times \ell$ matrix $\Omega$ are i.i.d. random vectors with standard Gaussian distribution.
2. Use the SVD to determine the left singular vectors corresponding to the first $k$ singular values of $Y$, and form a matrix $Q$ with these column vectors. One can prove that the columns of $Q$ are close to an orthonormal basis of the range of $A$.
3. Project $A$ onto the orthonormal basis: $B = Q^T A$.
4. Apply the SVD to $B$: $B = \tilde{U}\Sigma V^T$,
where the columns of $\tilde{U}$ and $V$ are orthonormal.
5. Set $U = Q\tilde{U}$; then $A \approx U\Sigma V^T$
is the approximation of $A$.
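The five steps above can be sketched in NumPy. This is an illustrative translation, not the seminar's exact code: the oversampling parameter `p` is an assumption on my part, and step 2 uses a QR factorization for the orthonormal basis (as the MATLAB sample below also does) rather than an SVD of $Y$:

```python
import numpy as np

def randsvd(A, k, p=5):
    """Rank-k randomized SVD, following steps 1-5 above."""
    m, n = A.shape
    l = k + p
    Omega = np.random.randn(n, l)   # step 1: Gaussian test matrix
    Y = A @ Omega                   #         Y = A*Omega samples range(A)
    Q, _ = np.linalg.qr(Y)          # step 2: orthonormal basis for range(Y)
    B = Q.T @ A                     # step 3: project A onto the basis
    Ut, S, Vt = np.linalg.svd(B, full_matrices=False)  # step 4: small SVD
    U = Q @ Ut                      # step 5: lift back, so A ~ U*S*Vt
    return U[:, :k], S[:k], Vt[:k, :]

# Usage: an exactly rank-10 matrix is recovered essentially to machine precision.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 10)) @ rng.standard_normal((10, 400))
U, S, Vt = randsvd(A, 10)
err = np.linalg.norm(A - U @ (S[:, None] * Vt))
```

Since $B$ is only $\ell \times n$, the SVD in step 4 is cheap; all the cost sits in the two products $A\Omega$ and $Q^T A$.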
Here we set the matrix $A$ to be $1000 \times 1000$.
>> A=randn(1000,1000);% We have to assume that A is sparse.
function randsvd(A, k)
%Randomized SVD sample program.
%Purpose: elapsed-time comparison with the traditional SVD.
tic
[m,n] = size(A);
P = A*randn(n,k);
[Q,R] = qr(P,0);
B = Q'*A;
[U,S,V] = svd(B);
disp('for Randsvd:');
toc
tic
svd(A);
disp('for Svd:');
toc
end
>> randsvd(A,10)
for Randsvd:
Elapsed time is 0.103737 seconds.
for Svd:
Elapsed time is 2.123127 seconds.
Comparison with the Lanczos method:
1. Cost: almost the same per iteration, while Lanczos requires many, many iterative loops…
2. Lanczos might have trouble with degenerate cases (e.g. clustered or repeated singular values).
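For a concrete point of comparison, SciPy's `svds` is an ARPACK (Lanczos-type) partial SVD. This is an illustrative check on a random test matrix of my choosing, not the seminar's experiment:

```python
import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(2)
A = rng.standard_normal((300, 200))
# Lanczos/ARPACK-based partial SVD: the 5 largest singular values.
s_lanczos = svds(A, k=5, return_singular_vectors=False)
# Reference: the leading values of the full SVD.
s_full = np.linalg.svd(A, compute_uv=False)[:5]
# svds returns the values in ascending order; sort before comparing.
agree = np.allclose(np.sort(s_lanczos)[::-1], s_full)
```

On a well-separated spectrum both methods agree; the degenerate cases mentioned above are where the iterative approach can struggle.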