Title: Compressed Sensing Reconstruction Algorithms: Generalized Orthogonal Matching Pursuit (gOMP)
The Generalized OMP (gOMP) algorithm can be viewed as a generalization of OMP. It was proposed in [1]; the first author earned his bachelor's and master's degrees at Harbin Institute of Technology and was pursuing a Ph.D. at Korea University when the paper was published. Where OMP selects only the single column most correlated with the residual in each iteration, gOMP simply selects the S most correlated columns. "Simply selects" is meant in contrast to algorithms such as ROMP: no further screening is applied, the S largest are taken as-is.
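The two selection rules differ only in how many indices are taken per iteration. A few lines of plain Python make this concrete (a toy illustration with made-up correlation values, not the MATLAB code from this post; indices here are 0-based):

```python
# Toy correlation vector |A' * r|, values made up purely for illustration.
correlations = [0.1, -0.9, 0.3, 0.7, -0.2, 0.5]

# OMP: take the single index with the largest |correlation|.
omp_pick = max(range(len(correlations)), key=lambda i: abs(correlations[i]))

# gOMP: take the S indices with the largest |correlation|, no further screening.
S = 3
gomp_picks = sorted(range(len(correlations)),
                    key=lambda i: abs(correlations[i]), reverse=True)[:S]

print(omp_pick)            # 1
print(sorted(gomp_picks))  # [1, 3, 5]
```

With S = 1, gOMP's selection step reduces exactly to OMP's.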
0. Notation
Compressed measurement: y = Φx, where y is the M×1 measurement vector and x is the original N×1 signal (M << N). In general x is not itself sparse, but it is sparse in some transform domain Ψ, i.e. x = Ψθ, where θ is K-sparse (it has only K nonzero entries). Then y = ΦΨθ; letting A = ΦΨ gives y = Aθ.
(1) y is the measurement vector, of size M×1
(2) x is the original signal, of size N×1
(3) θ is K-sparse: the sparse representation of the signal x in some transform domain, of size N×1
(4) Φ is called the observation matrix, measurement matrix, or measurement basis, of size M×N
(5) Ψ is called the transform matrix, transform basis, sparsifying matrix, sparse basis, or orthogonal dictionary, of size N×N
(6) A is called the sensing matrix or CS information operator, of size M×N
In the above, typically K << M << N. The names of the last three matrices vary across the literature; from here on I will call Φ the measurement matrix, Ψ the sparsifying matrix, and A the sensing matrix.
Note: the sparse representation model here is x = Ψθ, so the sensing matrix is A = ΦΨ. Some papers instead use the sparse model θ = Ψx, where Ψ is typically unitary (orthogonal in the real case), so Ψ⁻¹ = Ψᴴ (Ψ⁻¹ = Ψᵀ for real matrices); this gives x = Ψᴴθ and the sensing matrix A = ΦΨᴴ. Wei Sha's OMP example code, for instance, uses that convention.
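As a quick sanity check of the Ψ⁻¹ = Ψᵀ relation in the real case, here is a small NumPy snippet (not part of this post's MATLAB code; the orthogonal Ψ is generated via a QR decomposition purely for illustration):

```python
import numpy as np

# Build an arbitrary real orthogonal Psi: the Q factor of a QR decomposition.
rng = np.random.default_rng(0)
Psi, _ = np.linalg.qr(rng.standard_normal((4, 4)))

theta = np.array([0., 3., 0., 0.])  # a 1-sparse coefficient vector

# Convention used in this post: x = Psi @ theta (synthesis).
x = Psi @ theta

# Because Psi is orthogonal, Psi^{-1} = Psi^T recovers the coefficients.
theta_back = Psi.T @ x

print(np.allclose(theta_back, theta))  # True
```

For complex Ψ the transpose becomes the conjugate transpose Ψᴴ, which is exactly why A = ΦΨᴴ appears in codes that use the θ = Ψx convention.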
1. The gOMP reconstruction algorithm flow: initialize the residual to y; in each iteration, select the S columns of A most correlated with the residual, merge them into the support set, solve a least-squares problem on the support, and update the residual; stop after K iterations or once the residual is small enough.
2. gOMP MATLAB code (CS_gOMP.m)
This code is written purely to keep the same format as the earlier algorithms in this series; you could instead use islsp_EstgOMP.m from the code package [2] provided on the group's website.
function [ theta ] = CS_gOMP( y,A,K,S )
%CS_gOMP Summary of this function goes here
%Version: 1.0 written by jbb0523 @2015-05-08
% Detailed explanation goes here
% y = Phi * x
% x = Psi * theta
% y = Phi*Psi * theta
% Let A = Phi*Psi, then y = A*theta
% Given y and A, solve for theta
% Reference: Jian Wang, Seokbeop Kwon, Byonghyo Shim. Generalized
% orthogonal matching pursuit, IEEE Transactions on Signal Processing,
% vol. 60, no. 12, pp. 6202-6216, Dec. 2012.
% Available at: http://islab.snu.ac.kr/paper/tsp_gOMP.pdf
    if nargin < 4
        S = round(max(K/4, 1));
    end
    [y_rows,y_columns] = size(y);
    if y_rows<y_columns
        y = y';%y should be a column vector
    end
    [M,N] = size(A);%the sensing matrix A is M x N
    theta = zeros(N,1);%stores the recovered theta (column vector)
    Pos_theta = [];%stores the indices of the columns of A selected so far
    r_n = y;%initialize the residual to y
    for ii=1:K%iterate at most K times, K being the sparsity level
        product = A'*r_n;%inner products of the columns of A with the residual
        [val,pos]=sort(abs(product),'descend');%sort in descending order
        Sk = union(Pos_theta,pos(1:S));%merge in the S largest
        if length(Sk)==length(Pos_theta)
            if ii == 1
                theta_ls = 0;%nothing selected; avoid an undefined variable
            end
            break;%no new columns were added, so stop
        end
        if length(Sk)>M
            if ii == 1
                theta_ls = 0;
            end
            break;%more columns than rows: the LS problem is underdetermined
        end
        At = A(:,Sk);%matrix formed by the selected columns of A
        %y = At*theta; solve for theta in the least-squares (LS) sense
        theta_ls = (At'*At)^(-1)*At'*y;%LS solution (At\y is numerically preferable)
        %At*theta_ls is the orthogonal projection of y onto the column space of At
        r_n = y - At*theta_ls;%update the residual
        Pos_theta = Sk;
        if norm(r_n)<1e-6
            break;%quit the iteration
        end
    end
    theta(Pos_theta)=theta_ls;%the recovered theta
end
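For readers without MATLAB, the same loop can be sketched in Python/NumPy. This is my own translation of CS_gOMP.m above, not part of the original package; indices are 0-based and the least-squares step uses lstsq instead of an explicit inverse:

```python
import numpy as np

def gomp(y, A, K, S=None, tol=1e-6):
    """Greedy gOMP loop mirroring CS_gOMP.m (0-based indices)."""
    if S is None:
        S = max(round(K / 4), 1)
    M, N = A.shape
    theta = np.zeros(N)
    support = np.array([], dtype=int)  # indices of selected columns of A
    theta_ls = 0.0                     # guard for the degenerate no-selection case
    r = y.astype(float).copy()         # residual starts as y
    for _ in range(K):                 # at most K iterations
        corr = A.T @ r                               # correlations with the residual
        picks = np.argsort(-np.abs(corr))[:S]        # S largest |correlations|
        new_support = np.union1d(support, picks)
        if len(new_support) == len(support) or len(new_support) > M:
            break                      # nothing new, or LS would be underdetermined
        support = new_support
        At = A[:, support]
        theta_ls = np.linalg.lstsq(At, y, rcond=None)[0]  # LS estimate on support
        r = y - At @ theta_ls          # update the residual
        if np.linalg.norm(r) < tol:
            break
    theta[support] = theta_ls
    return theta

# Toy check: y = 2 * A[:, 1], so theta should be recovered with theta[1] == 2.
A = np.array([[1., 0., 0., 0., .5,  .5],
              [0., 1., 0., 0., .5, -.5],
              [0., 0., 1., 0., .5,  .5],
              [0., 0., 0., 1., .5, -.5]])
y = np.array([0., 2., 0., 0.])
theta = gomp(y, A, K=1)   # theta[1] == 2, all other entries 0
```

The toy matrix and signal are of course contrived; the MATLAB test script below exercises the real Gaussian-measurement setting.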
3. gOMP single-trial reconstruction test code (CS_Reconstuction_Test.m)
The following test code is essentially the same as the single-trial test code for OMP. You can also refer to Test_gOMP.m in the code package [2] from the group's website.
%Compressed sensing reconstruction algorithm test
clear all;close all;clc;
M = 128;%number of measurements
N = 256;%length of the signal x
K = 30;%sparsity level of the signal x
Index_K = randperm(N);
x = zeros(N,1);
x(Index_K(1:K)) = 5*randn(K,1);%x is K-sparse with random support
Psi = eye(N);%x is sparse itself, so the sparsifying matrix is the identity: x=Psi*theta
Phi = randn(M,N)/sqrt(M);%Gaussian measurement matrix
A = Phi * Psi;%sensing matrix
y = Phi * x;%measurement vector y
%% Recover the signal x
tic
theta = CS_gOMP( y,A,K);
x_r = Psi * theta;% x=Psi * theta
toc
%% Plot
figure;
plot(x_r,'k.-');%plot the recovered signal
hold on;
plot(x,'r');%plot the original signal x
hold off;
legend('Recovery','Original')
fprintf('\nRecovery residual:');
norm(x_r-x)%recovery residual
The output is as follows (the signal is randomly generated, so every run differs):
1) Figure:
2) Command window:
Elapsed time is 0.155937 seconds.
Recovery residual:
ans =
2.3426e-014
4. Code for plotting exact-reconstruction probability versus sparsity level K
The following test code is meant for comparison with Fig. 1 of [1]. Since I have not yet studied the LP algorithm, the LP curve from Fig. 1 of [1] is missing; the SP algorithm is added instead. This test code is essentially the same as the corresponding SAMP test code; the two can be merged and run together simply by adding a few more reconstruction algorithms inside the main loop.
%Compressed sensing reconstruction algorithm test: CS_Reconstuction_KtoPercentagegOMP.m
% Reproduces Fig. 1 of the reference
% Reference: Jian Wang, Seokbeop Kwon, Byonghyo Shim. Generalized
% orthogonal matching pursuit, IEEE Transactions on Signal Processing,
% vol. 60, no. 12, pp. 6202-6216, Dec. 2012.
% Available at: http://islab.snu.ac.kr/paper/tsp_gOMP.pdf
% Elapsed time is 798.718246 seconds.(@20150509pm)
clear all;close all;clc;
%% Parameter initialization
CNT = 1000;%number of trials for each (K,M,N)
N = 256;%length of the signal x
Psi = eye(N);%x is sparse itself, so the sparsifying matrix is the identity: x=Psi*theta
M_set = [128];%set of measurement counts
KIND = ['OMP      ';'ROMP     ';'StOMP    ';'SP       ';'CoSaMP   ';...
    'gOMP(s=3)';'gOMP(s=6)';'gOMP(s=9)'];%rows padded to equal length for the char array
Percentage = zeros(N,length(M_set),size(KIND,1));%stores the exact-recovery rates
%% Main loop over each (K,M,N)
tic
for mm = 1:length(M_set)
    M = M_set(mm);%number of measurements for this pass
    K_set = 5:5:70;%no need to sweep every sparsity level K; a step of 5 suffices
    %recovery rates for the different K at this measurement count M
    PercentageM = zeros(size(KIND,1),length(K_set));
    for kk = 1:length(K_set)
        K = K_set(kk);%sparsity level K for this pass
        P = zeros(1,size(KIND,1));
        fprintf('M=%d,K=%d\n',M,K);
        for cnt = 1:CNT %run CNT trials per configuration
            Index_K = randperm(N);
            x = zeros(N,1);
            x(Index_K(1:K)) = 5*randn(K,1);%x is K-sparse with random support
            Phi = randn(M,N)/sqrt(M);%Gaussian measurement matrix
            A = Phi * Psi;%sensing matrix
            y = Phi * x;%measurement vector y
            %(1)OMP
            theta = CS_OMP(y,A,K);%reconstruct theta
            x_r = Psi * theta;% x=Psi * theta
            if norm(x_r-x)<1e-6%count as exact recovery if the error is below 1e-6
                P(1) = P(1) + 1;
            end
            %(2)ROMP
            theta = CS_ROMP(y,A,K);
            x_r = Psi * theta;
            if norm(x_r-x)<1e-6
                P(2) = P(2) + 1;
            end
            %(3)StOMP
            theta = CS_StOMP(y,A);
            x_r = Psi * theta;
            if norm(x_r-x)<1e-6
                P(3) = P(3) + 1;
            end
            %(4)SP
            theta = CS_SP(y,A,K);
            x_r = Psi * theta;
            if norm(x_r-x)<1e-6
                P(4) = P(4) + 1;
            end
            %(5)CoSaMP
            theta = CS_CoSaMP(y,A,K);
            x_r = Psi * theta;
            if norm(x_r-x)<1e-6
                P(5) = P(5) + 1;
            end
            %(6)gOMP, S=3
            theta = CS_gOMP(y,A,K,3);
            x_r = Psi * theta;
            if norm(x_r-x)<1e-6
                P(6) = P(6) + 1;
            end
            %(7)gOMP, S=6
            theta = CS_gOMP(y,A,K,6);
            x_r = Psi * theta;
            if norm(x_r-x)<1e-6
                P(7) = P(7) + 1;
            end
            %(8)gOMP, S=9
            theta = CS_gOMP(y,A,K,9);
            x_r = Psi * theta;
            if norm(x_r-x)<1e-6
                P(8) = P(8) + 1;
            end
        end
        for iii = 1:size(KIND,1)
            PercentageM(iii,kk) = P(iii)/CNT*100;%recovery rate in percent
        end
    end
    for jjj = 1:size(KIND,1)
        Percentage(1:length(K_set),mm,jjj) = PercentageM(jjj,:);
    end
end
toc
save KtoPercentage1000gOMP %a full run takes a while, so save every variable
%% Plot
S = ['-ks';'-ko';'-yd';'-gv';'-b*';'-r.';'-rx';'-r+'];%one line style per algorithm
figure;
for mm = 1:length(M_set)
    M = M_set(mm);
    K_set = 5:5:70;
    L_Kset = length(K_set);
    for ii = 1:size(KIND,1)
        plot(K_set,Percentage(1:L_Kset,mm,ii),S(ii,:));%one curve per algorithm
        hold on;
    end
end
hold off;
xlim([5 70]);
legend('OMP','ROMP','StOMP','SP','CoSaMP',...
    'gOMP(s=3)','gOMP(s=6)','gOMP(s=9)');
xlabel('Sparsity level K');
ylabel('The Probability of Exact Reconstruction');
title('Prob. of exact recovery vs. the signal sparsity K(M=128,N=256)(Gaussian)');
This program took 798.718246 seconds in total on a Lenovo ThinkPad E430c laptop (4 GB DDR3 RAM, i5-3210). All variables are stored via "save KtoPercentage1000gOMP", so the data can be re-analyzed later with a simple "load KtoPercentage1000gOMP".
Output of this program:
Fig. 1 of [1]:
5. Closing remarks
I am curious: compared with OMP, gOMP merely selects a few more columns per iteration, so why is the reconstruction so much better? It even outperforms the more elaborate ROMP, CoSaMP, and StOMP...
The same group also proposed the MMP algorithm; see [3]. More information about the group is available on its website, http://islab.snu.ac.kr/, and its publications are listed at http://islab.snu.ac.kr/publication.html.
Reference [1] ends with two tables, TABLE I and TABLE II, summarizing the procedures and complexities of the algorithms.
Can anyone tell me what DELETE means in the header of TABLE I? Doesn't it normally mean removal?
6. References
[1] Jian Wang, Seokbeop Kwon, Byonghyo Shim. Generalized orthogonal matching pursuit, IEEE Transactions on Signal Processing, vol. 60, no. 12, pp. 6202-6216, Dec. 2012.
Available at: http://islab.snu.ac.kr/paper/tsp_gOMP.pdf
[2] http://islab.snu.ac.kr/paper/gOMP.zip
[3] S. Kwon, J. Wang and B. Shim, Multipath matching pursuit, IEEE Transactions on Information Theory, vol. 60, no. 5, pp. 2986-3001, May 2014.
Available at: http://islab.snu.ac.kr/paper/TIT_MMP2014.pdf