Title: Compressed Sensing Reconstruction Algorithms: Subspace Pursuit (SP)
Reposted from the blog column of 彬彬有禮 (jbb0523)
If you have already mastered Compressive Sampling Matching Pursuit (CoSaMP), learning Subspace Pursuit (SP) is very easy, because the two algorithms are almost identical.
SP was proposed slightly later than CoSaMP. Its first version appeared as reference [1]; after two revisions it was finally published in IEEE Transactions on Information Theory [2]. Algorithmically, the difference between SP and CoSaMP is very small, which the authors themselves acknowledged in a footnote at the bottom-left of the first page of [1]:
Page 2 of [2] spells out the specific differences between SP and CoSaMP:
As the quote above shows, the main difference between SP and CoSaMP is: "In each iteration, in the SP algorithm, only K new candidates are added, while the CoSAMP algorithm adds 2K vectors." That is, SP selects K atoms per iteration while CoSaMP selects 2K. The resulting benefit: "This makes the SP algorithm computationally more efficient."
Below is the SP algorithm flow given in [2]:
The initialization of this flow is essentially the first iteration of CoSaMP. Note that step (1) selects K atoms ("K indices corresponding to the largest magnitude entries"), whereas CoSaMP selects the 2K largest; all the remaining steps are the same. Step (5) adds an extra stopping criterion: iteration halts when the residual grows after an update.
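The flow above can be sketched as follows. This is a Python/NumPy sketch of my own (an assumption, since the post's own code is MATLAB); unlike the CS_SP.m listing later in the post, it also implements the step-(5) rule of stopping when the residual norm increases:

```python
import numpy as np

def subspace_pursuit(y, A, K, max_iter=None):
    """Sketch of SP: recover a K-sparse theta from y = A @ theta."""
    M, N = A.shape
    if max_iter is None:
        max_iter = K
    # Initialization: the K columns most correlated with y (CoSaMP takes 2K)
    support = np.argsort(np.abs(A.T @ y))[::-1][:K]
    theta_ls, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ theta_ls
    for _ in range(max_iter):
        r_prev_norm = np.linalg.norm(r)
        # (1) Identification: K new candidates from the residual correlations
        candidates = np.argsort(np.abs(A.T @ r))[::-1][:K]
        # (2) Support merger: union with the current support
        merged = np.union1d(support, candidates)
        # (3) Estimation: least squares over the merged support
        t_merged, *_ = np.linalg.lstsq(A[:, merged], y, rcond=None)
        # (4) Pruning: keep the K largest coefficients
        keep = np.argsort(np.abs(t_merged))[::-1][:K]
        new_support = merged[keep]
        # (5) Sample update: re-estimate, then stop if the residual grew
        t_ls, *_ = np.linalg.lstsq(A[:, new_support], y, rcond=None)
        r_new = y - A[:, new_support] @ t_ls
        if np.linalg.norm(r_new) >= r_prev_norm:
            break  # keep the previous (better) support
        support, theta_ls, r = new_support, t_ls, r_new
        if np.linalg.norm(r) < 1e-6:
            break
    theta = np.zeros(N)
    theta[support] = theta_ls
    return theta
```

The residual comparison in step (5) is what distinguishes this from simply running CoSaMP with K candidates: if an update makes the fit worse, the previous support estimate is kept.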
The SP authors were not the only ones aware of the strong similarity between the two algorithms; the CoSaMP authors also took note of SP, mentioning it in [3]:
Reference [3] is the second version of the paper that originally proposed CoSaMP; its earlier version [4] did not mention SP at all.
Given how similar SP and CoSaMP are, the SP steps are not listed separately here; see "Compressed Sensing Reconstruction Algorithms: Compressive Sampling Matching Pursuit (CoSaMP)" and simply change 2K to K in step (2).
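How small that change is can be illustrated with a tiny Python/NumPy helper (the name `identify` and its parameters are hypothetical, not from the original post): the identification step picks the columns most correlated with the residual, and only the number of candidates differs between the two algorithms:

```python
import numpy as np

def identify(A, residual, K, algorithm="SP"):
    """Return the new candidate column indices for one iteration.

    SP adds K candidates per iteration; CoSaMP adds 2K.
    """
    n_new = K if algorithm == "SP" else 2 * K
    correlations = np.abs(A.T @ residual)  # match each column against the residual
    return np.argsort(correlations)[::-1][:n_new]
```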
To quote a few lines from Section 3.5 of [5]: "Although greedy algorithms have low complexity and run fast, their reconstruction accuracy is inferior to BP-type algorithms; SP emerged to strike a better trade-off between complexity and accuracy"; "Like CoSaMP, the basic idea of SP borrows from backtracking, re-evaluating the reliability of all candidates in every iteration"; "SP shares similar properties, strengths, and weaknesses with CoSaMP."
The subspace pursuit code can be implemented as follows (CS_SP.m); comparing it with the CoSaMP code shows that the two are essentially identical. Note that this code does not implement step (5) of the SP flow given in [2]. See also Demo_CS_SP.m in reference [6].
function [ theta ] = CS_SP( y,A,K )
%CS_SP Subspace Pursuit recovery of a K-sparse vector
%Version: 1.0 written by jbb0523 @2015-05-01
%   y = Phi * x
%   x = Psi * theta
%   y = Phi*Psi * theta
%   Let A = Phi*Psi, then y = A*theta
%   K is the sparsity level
%   Given y and A, solve for theta
%   Reference: Dai W, Milenkovic O. Subspace pursuit for compressive sensing
%   signal reconstruction[J]. IEEE Transactions on Information Theory,
%   2009, 55(5): 2230-2249.
    [y_rows,y_columns] = size(y);
    if y_rows<y_columns
        y = y';%y should be a column vector
    end
    [M,N] = size(A);%the sensing matrix A is M*N
    theta = zeros(N,1);%stores the recovered theta (column vector)
    Pos_theta = [];%stores the indices of the columns of A selected so far
    r_n = y;%initialize the residual to y
    for kk=1:K%iterate at most K times
        %(1) Identification
        product = A'*r_n;%inner products of the columns of A with the residual
        [val,pos]=sort(abs(product),'descend');
        Js = pos(1:K);%pick the K columns with the largest inner products
        %(2) Support Merger
        Is = union(Pos_theta,Js);%union of Pos_theta and Js
        %(3) Estimation
        %At must have more rows than columns for least squares to be
        %well-posed (its columns must be linearly independent)
        if length(Is)<=M
            At = A(:,Is);%gather these columns of A into the matrix At
        else%more columns than rows: the columns must be linearly
            %dependent, so At'*At is not invertible
            break;%exit the for loop
        end
        %y=At*theta: solve for theta by least squares
        theta_ls = (At'*At)^(-1)*At'*y;%least-squares solution
        %(4) Pruning
        [val,pos]=sort(abs(theta_ls),'descend');
        %(5) Sample Update
        Pos_theta = Is(pos(1:K));
        theta_ls = theta_ls(pos(1:K));
        %At(:,pos(1:K))*theta_ls is the orthogonal projection of y onto
        %the column space of At(:,pos(1:K))
        r_n = y - At(:,pos(1:K))*theta_ls;%update the residual
        if norm(r_n)<1e-6%repeat the steps until r=0
            break;%exit the for loop
        end
    end
    theta(Pos_theta)=theta_ls;%the recovered theta
end
Given the extreme similarity between SP and CoSaMP, neither the single-reconstruction demo nor the script that sweeps the number of measurements M against the reconstruction success probability is repeated here; simply replace the call to CS_CoSaMP with a call to CS_SP, with no other changes needed. What follows instead is a script that plots the M-versus-success-probability curves of the two algorithms together, since only a side-by-side plot reveals their relative reconstruction performance. It assumes the sweep scripts for both SP and CoSaMP have already been run, so that the files CoSaMPMtoPercentage1000.mat and SPMtoPercentage1000.mat exist:
clear all;close all;clc;
load CoSaMPMtoPercentage1000;
PercentageCoSaMP = Percentage;
load SPMtoPercentage1000;
PercentageSP = Percentage;
S1 = ['-ks';'-ko';'-kd';'-kv';'-k*'];%line styles for CoSaMP (black)
S2 = ['-rs';'-ro';'-rd';'-rv';'-r*'];%line styles for SP (red)
figure;
for kk = 1:length(K_set)
    K = K_set(kk);
    M_set = 2*K:5:N;
    L_Mset = length(M_set);
    plot(M_set,PercentageCoSaMP(kk,1:L_Mset),S1(kk,:));%CoSaMP success curve
    hold on;
    plot(M_set,PercentageSP(kk,1:L_Mset),S2(kk,:));%SP success curve
end
hold off;
xlim([0 256]);
legend('CoSaK=4','SPK=4','CoSaK=12','SPK=12','CoSaK=20',...
    'SPK=20','CoSaK=28','SPK=28','CoSaK=36','SPK=36');
xlabel('Number of measurements(M)');
ylabel('Percentage recovered');
title('Percentage of input signals recovered correctly(N=256)(Gaussian)');
The result of running the script is shown below:
When M is small, SP performs slightly better than CoSaMP; as M grows, the two algorithms reconstruct almost equally well.
References:
[1] Dai W, Milenkovic O. Subspace pursuit for compressive sensing: Closing the gap between performance and complexity. http://arxiv.org/pdf/0803.0811v1.pdf
[2] Dai W, Milenkovic O. Subspace pursuit for compressive sensing signal reconstruction[J]. IEEE Transactions on Information Theory, 2009, 55(5): 2230-2249.
[3] Needell D, Tropp J A. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. http://arxiv.org/pdf/0803.2392v2.pdf
[4] Needell D, Tropp J A. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. http://arxiv.org/pdf/0803.2392v1.pdf
[5] Yang Zhenzhen, Yang Zhen, Sun Linhui. A survey of orthogonal matching pursuit class algorithms for compressive signal reconstruction[J]. 信號處理 (Journal of Signal Processing), 2013, 29(4): 486-496.
[6] Li Zeng. CS_Reconstruction. http://www.pudn.com/downloads518/sourcecode/math/detail2151378.html