Iterative Hard Thresholding (IHT) MATLAB Code

Original article: https://blog.csdn.net/jbb0523/article/details/52079687

Topic: Compressed Sensing Reconstruction Algorithms: Iterative Hard Thresholding (IHT)

        This post introduces the IHT reconstruction algorithm. When IHT is mentioned in the compressed sensing literature, reference [1] is usually the one cited, but IHT was actually proposed in reference [2]. IHT is not a convex-optimization algorithm; like OMP, it is an iterative algorithm, but it is derived from an optimization problem. References [1] and [2] have the same authors, affiliated with the University of Edinburgh; the first author's personal homepage is given in reference [3]. Judging from that homepage, the author has since moved to the University of Southampton, and all of his published papers can be downloaded from there.

        The contribution of [1] is a theoretical analysis of IHT when it is applied to the compressed sensing reconstruction problem (the screenshot of that result is not reproduced here).


1. The Proposal of Iterative Hard Thresholding (IHT)

        It is worth mentioning that when IHT was proposed in [2] it was not called Iterative Hard Thresholding but rather the M-Sparse Algorithm.

        The algorithm was proposed to solve the M-sparse problem (3.1); after some derivation one obtains the iteration formula (3.2), in which the meaning of H_M(·) is given by (3.3).
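        Since the original screenshot is not reproduced here, the three formulas, restated from [2] in standard notation, are:

$$\min_{y}\ \|x-\Phi y\|_2^2 \quad \text{subject to} \quad \|y\|_0\le M \tag{3.1}$$

$$y^{n+1}=H_M\!\left(y^{n}+\Phi^{T}\left(x-\Phi y^{n}\right)\right) \tag{3.2}$$

where the nonlinear operator $H_M(a)$ of (3.3) sets all but the $M$ largest-in-magnitude elements of $a$ to zero.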

        The crucial question here is: how is the iteration formula (3.2) derived?

        The derivation in Steps 1-4 below is expanded in the companion post "Supplementary Notes on Iterative Hard Thresholding (IHT)". To understand IHT thoroughly, one needs to know the Majorization-Minimization (MM) optimization framework and the hard thresholding function.

2. Step 1: The Surrogate Objective Function

        First, the objective function of (3.1) is replaced by the surrogate objective function of (3.5):

Here, in $C_M^S$, the M presumably refers to M-sparse and the S to Surrogate. The following condition is required:
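        Restated from [2], since the screenshot is missing:

$$C_M^S(y,z)=\|x-\Phi y\|_2^2-\|\Phi y-\Phi z\|_2^2+\|y-z\|_2^2 \tag{3.5}$$

together with the operator-norm condition $\|\Phi\|_2<1$.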


        Why can the objective function of (3.1) be replaced by (3.5)? To see this, we have to look back a little…

        In fact, [2] discusses two separate optimization problems, and this post is mainly concerned with the second one. Because the two problems are quite similar, the paper abbreviates parts of the derivation for the second problem, so here is a brief review of the necessary material on the first problem (the problem-statement screenshot is not reproduced).

Its objective function is defined as (1.5).

        To derive the iteration formula (see (2.2) and (2.3)), (1.5) is replaced by the following surrogate objective function (2.5):
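        Since the screenshots are missing, the objective and its surrogate, as best reconstructed here from [2] (with the equation numbers used throughout this post), read:

$$C(y)=\|x-\Phi y\|_2^2+\lambda\|y\|_0 \tag{1.5}$$

$$C^S(y,z)=\|x-\Phi y\|_2^2-\|\Phi y-\Phi z\|_2^2+\|y-z\|_2^2+\lambda\|y\|_0 \tag{2.5}$$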

        Note the wavy underline around "[29]" (reference [4] here): the idea of the surrogate objective function comes from that paper. Then note the constraint on Φ (the first red box). As to why this constraint is imposed, my view is that it is there to make the latter part of (2.5) non-negative, i.e., to ensure that

$$\|y-z\|_2^2-\|\Phi y-\Phi z\|_2^2\ \ge\ 0$$

(this part equals zero when y = z). From this, the relation between the two objective functions (2.5) and (1.5) follows naturally (the second red box); it is easy to see by substituting y = z into (2.5), which gives C^S(y, y) = C(y).

        By now it should be clear why (2.5) can replace (1.5)…

        And the reasoning for replacing the objective function $\|x-\Phi y\|_2^2$ of (3.1) with the surrogate (3.5) is exactly the same.

        One additional remark: regarding the constraint ||Φ||_2 < 1, there is one passage in [2] that discusses it (the screenshot is not reproduced here).


3. Step 2: Transforming the Surrogate Objective Function

        Next, (3.5) is transformed.

        Where does the transformed expression come from? Let us derive it from (3.5):

        Here the last three squared ℓ2-norm terms do not involve y, so they can be treated as constants; they do not affect the minimization over y and can therefore be dropped, which yields the transformed result. The symbol "∝" denotes proportionality.
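        A sketch of the derivation, expanding the three squared norms and keeping only the terms that depend on y (here $\phi_j$ denotes the j-th column of Φ):

$$\begin{aligned} C_M^S(y,z) &= \|x-\Phi y\|_2^2-\|\Phi y-\Phi z\|_2^2+\|y-z\|_2^2\\ &= \sum_j\left[y_j^2-2y_j\left(z_j+\phi_j^Tx-\phi_j^T\Phi z\right)\right]+\|x\|_2^2+\|z\|_2^2-\|\Phi z\|_2^2\\ &\propto \sum_j\left[y_j^2-2y_j\left(z_j+\phi_j^Tx-\phi_j^T\Phi z\right)\right]. \end{aligned}$$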

4. Step 3: Obtaining the Minimizer

        Next, [2] directly gives the minimizer:

        Note that the paper mentions "Landweber" here; a quick search shows that what usually turns up is the "Landweber iteration", which we set aside for now (see Section 8.4). So how is the minimizer derived? It is nothing more than completing the square, the kind taught in secondary school. Writing $y_j^*=z_j+\phi_j^T(x-\Phi z)$, each summand above becomes

$$y_j^2-2y_jy_j^*=\left(y_j-y_j^*\right)^2-\left(y_j^*\right)^2.$$

Let $y_j=y_j^*$ for every j, i.e. $y^*=z+\Phi^T(x-\Phi z)$; then each term, and hence the whole sum, attains its minimum value $-\sum_j\left(y_j^*\right)^2$.

5. Step 4: Obtaining the Iteration Formula

        The minimizer has been found, and with it the minimum of the surrogate objective function: up to the dropped constant it equals $-\sum_j\left(y_j^*\right)^2$. So how do we obtain the iteration formula (3.2)? Notice that one constraint has been ignored throughout the derivation, namely the constraint of (3.1):

$$\|y\|_0\le M,$$

that is, the sparsity of the vector y must not exceed M. Putting everything together: the unconstrained minimum of the surrogate function is $-\sum_j\left(y_j^*\right)^2$. To make this value as small as possible while y has at most M non-zero entries, we clearly keep the M entries $y_j^*$ that are largest in absolute value (the terms are squared, so we compare absolute values) and set the remaining ones to zero (note the minus sign in front of the sum, which is why the largest M terms are the ones to keep).

        This yields exactly the iteration formula (3.2).
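        To make the thresholding operator concrete, here is a minimal MATLAB sketch of H_M(·); the helper name hard_threshold is mine (not from [1] or [2]), and the same two lines appear inside the IHT_Basic code below:

function y = hard_threshold(y, M)
%HARD_THRESHOLD Keep the M largest-magnitude entries of y, zero the rest.
%   A minimal sketch of the operator H_M(.) described above.
    [~, inds] = sort(abs(y), 'descend'); % rank coordinates by magnitude
    y(inds(M+1:end)) = 0;                % zero all but the M largest
end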

6. MATLAB Code for the IHT Algorithm

         Three versions of the IHT implementation are given here:

First version:

        The official IHT MATLAB code is available on the author's homepage, but it is somewhat involved, so here is a simplified version that is easier to follow:

function [ y ] = IHT_Basic( x,Phi,M,mu,epsilon,loopmax )
%IHT_Basic Summary of this function goes here
%Version: 1.0 written by jbb0523 @2016-07-30
%Reference: Blumensath T, Davies M E. Iterative Thresholding for Sparse
%Approximations[J]. Journal of Fourier Analysis & Applications, 2008,
%14(5):629-654.
%(Available at: http://link.springer.com/article/10.1007%2Fs00041-008-9035-z)
%   Detailed explanation goes here
if nargin < 6
    loopmax = 3000;
end
if nargin < 5
    epsilon = 1e-3;
end
if nargin < 4
    mu = 1;
end
[x_rows,x_columns] = size(x);
if x_rows < x_columns
    x = x'; % x should be a column vector
end
n = size(Phi,2);
y = zeros(n,1); % initialize y = 0
loop = 0;
while(norm(x-Phi*y)>epsilon && loop < loopmax)
    y = y + Phi'*(x-Phi*y)*mu; % update y (gradient step)
    % the following two lines of code realize the functionality of H_M(.)
    % 1st: sort the absolute values of y in descending order
    [ysorted, inds] = sort(abs(y), 'descend');
    % 2nd: set all but the M largest coordinates to zero
    y(inds(M+1:n)) = 0;
    loop = loop + 1;
end
end

Second version (the official code provided by the author):

        File: hard_l0_Mterm.m (\sparsify_0_5\HardLab)

        Link: http://www.personal.soton.ac.uk/tb1m08/sparsify/sparsify_0_5.zip

function [s, err_mse, iter_time]=hard_l0_Mterm(x,A,m,M,varargin)
% hard_l0_Mterm: Hard thresholding algorithm that keeps exactly M elements
% in each iteration.
%
% This algorithm has certain performance guarantees as described in [1],
% [2] and [3].
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Usage
%
%   [s, err_mse, iter_time]=hard_l0_Mterm(x,P,m,M,'option_name','option_value')
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Input
%
%   Mandatory:
%     x   Observation vector to be decomposed
%     P   Either:
%           1) An nxm matrix (n must be dimension of x)
%           2) A function handle (type "help function_format" for more
%              information); also requires specification of P_trans option
%           3) An object handle (type "help object_format" for more
%              information)
%     m   length of s
%     M   non-zero elements to keep in each iteration
%
%   Possible additional options:
%   (specify as many as you want using 'option_name','option_value' pairs)
%   See below for explanation of options:
%__________________________________________________________________________
%   option_name  |  available option_values            |  default
%--------------------------------------------------------------------------
%   stopTol      |  number (see below)                 |  1e-16
%   P_trans      |  function_handle (see below)        |
%   maxIter      |  positive integer (see below)       |  n^2
%   verbose      |  true, false                        |  false
%   start_val    |  vector of length m                 |  zeros
%   step_size    |  number                             |  0 (auto)
%
%   stopping criteria used : (OldRMS-NewRMS)/RMS(x) < stopTol
%
%   stopTol: Value for stopping criterion.
%
%   P_trans: If P is a function handle, then P_trans has to be specified
%            and must be a function handle.
%
%   maxIter: Maximum number of allowed iterations.
%
%   verbose: Logical value to allow algorithm progress to be displayed.
%
%   start_val: Allows algorithms to start from partial solution.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Outputs
%
%   s          Solution vector
%   err_mse    Vector containing mse of approximation error for each
%              iteration
%   iter_time  Vector containing computation times for each iteration
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Description
%
%   Implements the M-sparse algorithm described in [1], [2] and [3].
%   This algorithm takes a gradient step and then thresholds to only
%   retain M non-zero elements. It allows the step-size to be calculated
%   automatically as described in [3] and is therefore now independent
%   from a rescaling of P.
%
% References
%   [1] T. Blumensath and M.E. Davies, "Iterative Thresholding for Sparse
%       Approximations", submitted, 2007
%   [2] T. Blumensath and M. Davies; "Iterative Hard Thresholding for
%       Compressed Sensing", to appear in Applied and Computational
%       Harmonic Analysis
%   [3] T. Blumensath and M. Davies; "A modified Iterative Hard
%       Thresholding algorithm with guaranteed performance and stability",
%       in preparation (title may change)
%
% See Also
%   hard_l0_reg
%
% Copyright (c) 2007 Thomas Blumensath
%
% The University of Edinburgh
% Comments and bug reports welcome
%
% This file is part of sparsity Version 0.4
% Created: April 2007
% Modified January 2009
%
% Part of this toolbox was developed with the support of EPSRC Grant
% D000246/1
%
% Please read COPYRIGHT.m for terms and conditions.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                    Default values and initialisation
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
[n1, n2]=size(x);
if n2 == 1
    n=n1;
elseif n1 == 1
    x=x';
    n=n2;
else
    error('x must be a vector.');
end

sigsize   = x'*x/n;
oldERR    = sigsize;
err_mse   = [];
iter_time = [];
STOPTOL   = 1e-16;
MAXITER   = n^2;
verbose   = false;
initial_given = 0;
s_initial = zeros(m,1);
MU = 0;

if verbose
    display('Initialising...')
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                           Output variables
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
switch nargout
    case 3
        comp_err=true;
        comp_time=true;
    case 2
        comp_err=true;
        comp_time=false;
    case 1
        comp_err=false;
        comp_time=false;
    case 0
        error('Please assign output variable.')
    otherwise
        error('Too many output arguments specified')
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                       Look through options
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Put options into nice format
Options={};
OS=nargin-4;
c=1;
for i=1:OS
    if isa(varargin{i},'cell')
        CellSize=length(varargin{i});
        ThisCell=varargin{i};
        for j=1:CellSize
            Options{c}=ThisCell{j};
            c=c+1;
        end
    else
        Options{c}=varargin{i};
        c=c+1;
    end
end
OS=length(Options);
if rem(OS,2)
    error('Something is wrong with argument name and argument value pairs.')
end

for i=1:2:OS
    switch Options{i}
        case {'stopTol'}
            if isa(Options{i+1},'numeric'); STOPTOL = Options{i+1};
            else error('stopTol must be number. Exiting.'); end
        case {'P_trans'}
            if isa(Options{i+1},'function_handle'); Pt = Options{i+1};
            else error('P_trans must be function_handle. Exiting.'); end
        case {'maxIter'}
            if isa(Options{i+1},'numeric'); MAXITER = Options{i+1};
            else error('maxIter must be a number. Exiting.'); end
        case {'verbose'}
            if isa(Options{i+1},'logical'); verbose = Options{i+1};
            else error('verbose must be a logical. Exiting.'); end
        case {'start_val'}
            if isa(Options{i+1},'numeric') && length(Options{i+1}) == m
                s_initial = Options{i+1};
                initial_given=1;
            else error('start_val must be a vector of length m. Exiting.'); end
        case {'step_size'}
            if isa(Options{i+1},'numeric') && (Options{i+1}) > 0
                MU = Options{i+1};
            else error('Stepsize must be a positive number. Exiting.'); end
        otherwise
            error('Unrecognised option. Exiting.')
    end
end

if nargout >=2
    err_mse = zeros(MAXITER,1);
end
if nargout ==3
    iter_time = zeros(MAXITER,1);
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                       Make P and Pt functions
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
if isa(A,'float'),  P =@(z) A*z;  Pt =@(z) A'*z;
elseif isobject(A), P =@(z) A*z;  Pt =@(z) A'*z;
elseif isa(A,'function_handle')
    try
        if isa(Pt,'function_handle'); P=A;
        else error('If P is a function handle, Pt also needs to be a function handle. Exiting.'); end
    catch
        error('If P is a function handle, Pt needs to be specified. Exiting.');
    end
else
    error('P is of unsupported type. Use matrix, function_handle or object. Exiting.');
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                    Do we start from zero or not?
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
if initial_given ==1
    if length(find(s_initial)) > M
        display('Initial vector has more than M non-zero elements. Keeping only M largest.')
    end
    s = s_initial;
    [ssort, sortind] = sort(abs(s),'descend');
    s(sortind(M+1:end)) = 0;
    Ps = P(s);
    Residual = x-Ps;
    oldERR = Residual'*Residual/n;
else
    s_initial = zeros(m,1);
    Residual = x;
    s = s_initial;
    Ps = zeros(n,1);
    oldERR = sigsize;
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%            Random check to see if dictionary norm is below 1
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
x_test=randn(m,1);
x_test=x_test/norm(x_test);
nP=norm(P(x_test));
if abs(MU*nP)>1
    display('WARNING! Algorithm likely to become unstable.')
    display('Use smaller step-size or || P ||_2 < 1.')
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%                           Main algorithm
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
if verbose
    display('Main iterations...')
end
tic
t=0;
done = 0;
iter=1;

while ~done
    if MU == 0
        % Calculate optimal step size and do line search
        olds = s;
        oldPs = Ps;
        IND = s~=0;
        d = Pt(Residual);
        % If the current vector is zero, we take the largest elements in d
        if sum(IND)==0
            [dsort, sortdind] = sort(abs(d),'descend');
            IND(sortdind(1:M)) = 1;
        end
        id = (IND.*d);
        Pd = P(id);
        mu = id'*id/(Pd'*Pd);
        s = olds + mu * d;
        [ssort, sortind] = sort(abs(s),'descend');
        s(sortind(M+1:end)) = 0;
        Ps = P(s);
        % Calculate step-size requirement
        omega = (norm(s-olds)/norm(Ps-oldPs))^2;
        % As long as the support changes and mu > omega, we decrease mu
        while mu > (0.99)*omega && sum(xor(IND,s~=0))~=0 && sum(IND)~=0
            % display(['decreasing mu'])
            % We use a simple line search, halving mu in each step
            mu = mu/2;
            s = olds + mu * d;
            [ssort, sortind] = sort(abs(s),'descend');
            s(sortind(M+1:end)) = 0;
            Ps = P(s);
            % Calculate step-size requirement
            omega = (norm(s-olds)/norm(Ps-oldPs))^2;
        end
    else
        % Use fixed step size
        s = s + MU * Pt(Residual);
        [ssort, sortind] = sort(abs(s),'descend');
        s(sortind(M+1:end)) = 0;
        Ps = P(s);
    end

    Residual = x-Ps;
    ERR=Residual'*Residual/n;
    if comp_err
        err_mse(iter)=ERR;
    end
    if comp_time
        iter_time(iter)=toc;
    end

    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    %                        Are we done yet?
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    if comp_err && iter >=2
        if ((err_mse(iter-1)-err_mse(iter))/sigsize<STOPTOL)
            if verbose
                display(['Stopping. Approximation error changed less than ' num2str(STOPTOL)])
            end
            done = 1;
        elseif verbose && toc-t>10
            display(sprintf('Iteration %i. --- %i mse change',iter ,(err_mse(iter-1)-err_mse(iter))/sigsize))
            t=toc;
        end
    else
        if ((oldERR - ERR)/sigsize < STOPTOL) && iter >=2
            if verbose
                display(['Stopping. Approximation error changed less than ' num2str(STOPTOL)])
            end
            done = 1;
        elseif verbose && toc-t>10
            display(sprintf('Iteration %i. --- %i mse change',iter ,(oldERR - ERR)/sigsize))
            t=toc;
        end
    end

    % Also stop if residual gets too small or maxIter reached
    if comp_err
        if err_mse(iter)<1e-16
            display('Stopping. Exact signal representation found!')
            done=1;
        end
    elseif iter>1
        if ERR<1e-16
            display('Stopping. Exact signal representation found!')
            done=1;
        end
    end
    if iter >= MAXITER
        display('Stopping. Maximum number of iterations reached!')
        done = 1;
    end

    % If not done, take another round
    if ~done
        iter=iter+1;
        oldERR=ERR;
    end
end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%            Only return as many elements as iterations
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
if nargout >=2
    err_mse = err_mse(1:iter);
end
if nargout ==3
    iter_time = iter_time(1:iter);
end
if verbose
    display('Done')
end

Third version:

        File: Demo_CS_IHT.m (excerpt)

        Link: http://www.pudn.com/downloads518/sourcecode/math/detail2151378.html

function hat_x=cs_iht(y,T_Mat,m)
% y=T_Mat*x, T_Mat is n-by-m
% y - measurements
% T_Mat - combination of random matrix and sparse representation basis
% m - size of the original signal
% the sparsity is length(y)/4
hat_x_tp=zeros(m,1);          % initialization with the size of original
s=floor(length(y)/4);         % sparsity
u=0.5;                        % impact factor
% T_Mat=T_Mat/sqrt(sum(sum(T_Mat.^2))); % normalize the whole matrix
for times=1:s
    x_increase=T_Mat'*(y-T_Mat*hat_x_tp);
    hat_x=hat_x_tp+u*x_increase;
    [val,pos]=sort((hat_x),'descend'); % why? worse performance with abs()
    hat_x(pos(s+1:end))=0;    % thresholding, keeping the largest s elements
    hat_x_tp=hat_x;           % update
end

7. Single-Run Reconstruction Code

% Test of compressed sensing reconstruction algorithms (single run)

clear all; close all; clc;
M = 64;                 % number of measurements
N = 256;                % length of the signal x
K = 10;                 % sparsity of the signal x
Index_K = randperm(N);
x = zeros(N,1);
x(Index_K(1:K)) = 5*randn(K,1); % x is K-sparse, with random support
Psi = eye(N);           % x is sparse itself, so the sparsifying basis is the identity: x = Psi*theta
Phi = randn(M,N);       % Gaussian measurement matrix
Phi = orth(Phi')';
A = Phi * Psi;          % sensing matrix
% sigma = 0.005;
% e = sigma*randn(M,1);
% y = Phi * x + e;      % noisy measurement vector y
y = Phi * x;            % measurement vector y
%% Recover the signal x
tic
theta = IHT_Basic(y,A,K);
% theta = cs_iht(y,A,size(A,2));
% theta = hard_l0_Mterm(y,A,size(A,2),round(1.5*K),'verbose',true);
x_r = Psi * theta;      % x = Psi * theta
toc
%% Plot
figure;
plot(x_r,'k.-');        % recovered signal
hold on;
plot(x,'r');            % original signal x
hold off;
legend('Recovery','Original')
fprintf('\nRecovery residual:');
norm(x_r-x)             % recovery residual

        The reconstruction plot is not shown here; I only give the conclusions of the simulation. My basic IHT implementation works correctly but occasionally fails to reconstruct; the second version, hard_l0_Mterm.m, reconstructs very well; the third version, Demo_CS_IHT.m, reconstructs poorly, presumably, as its own puzzled comment suggests ("why? worse performance with abs()"), because the absolute value is not taken before thresholding…
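        If one wants the thresholding in the third version to match the definition of H_M(·) literally, the sort line would read as follows (note, however, that the original comment claims abs() gave worse performance in that author's tests):

[val,pos]=sort(abs(hat_x),'descend'); % threshold by magnitude, as H_M(.) prescribes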

8. Closing Remarks

8.1 On the Algorithm's Name

        It is worth noting that [2] calls (2.2) the iterative hard-thresholding algorithm and calls (3.2) the M-sparse algorithm, while [1] then calls (3.2) the Iterative Hard Thresholding algorithm (IHTs); the plain abbreviation IHT is the most common, and the extra "s" refers to s-sparse. Evidently even an algorithm's name is the product of gradual refinement…

8.2 Relation to the GraDeS Algorithm

        If you have studied the GraDeS algorithm (see http://blog.csdn.net/jbb0523/article/details/52059296) before studying this one, does it not feel like déjà vu?

        Indeed, the iteration formulas of the two algorithms are almost identical; in particular, equation (12) of [1] (the second red box above) further generalizes the definition of the algorithm. The situation is much like that of CoSaMP versus SP. The paper that proposed GraDeS [5] mentions IHT in its opening but not thereafter, and I do not know how its authors view the matter. If one insists on a difference, it is that GraDeS fixes the parameter γ = 1 + δ_2s with δ_2s < 1/3.

        So if you have an idea, write it up and publish it quickly; otherwise, if someone beats you to it…

8.3 On Reconstruction Performance

        Also, the GraDeS post mentioned that GraDeS does not reconstruct well; in this regard, note a relevant passage in [2] (the screenshot is not reproduced here).

        In other words, the IHT authors were themselves aware of this problem with the algorithm and proposed two application strategies ("two strategies for a successful application of the methods").

8.4 The Landweber Iteration

        Searching the web for "Landweber iteration" turned up the following program [6]:

function [x,k]=Landweber(A,b,x0)
% Landweber iteration for A*x = b, started from x0
alfa=1/sum(diag(A*A'));             % step size: 1/trace(A*A')
k=1;
L=200;                              % maximum number of iterations
x=x0;
while k<L
    x1=x;
    x=x+alfa*A'*(b-A*x);            % gradient (Landweber) step
    if norm(b-A*x)/norm(b)<0.005    % stop when the relative residual is small
        break;
    elseif norm(x1-x)/norm(x)<0.001 % or when the iterates stagnate
        break;
    end
    k=k+1;
end

Note the iteration step of this program, x=x+alfa*A'*(b-A*x); apart from the alfa coefficient, is this not essentially the same as IHT's update? And how, for that matter, does it differ from GraDeS?
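        Side by side, using the hypothetical hard_threshold helper sketched in Section 5 (note the two listings use different variable names: b is the measurement in Landweber, x in IHT):

x = x + alfa*A'*(b - A*x);                      % Landweber: a pure gradient step
y = hard_threshold(y + mu*Phi'*(x - Phi*y), M); % IHT: the same step followed by H_M(.)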

        For more on the Landweber iteration, see: Landweber L. An iteration formula for Fredholm integral equations of the first kind[J]. American Journal of Mathematics, 1951, 73(3): 615-624. No more on it here.

8.5 Improved Algorithms

        The authors later proposed two improved versions of IHT: RIHT (Normalized IHT) [7] and AIHT (Accelerated IHT) [8].

        RIHT was proposed mainly because IHT has several drawbacks [7] (the excerpted list is not reproduced here).

        The new RIHT algorithm was designed to offer a number of advantages over plain IHT (the excerpted list is likewise not reproduced).


        The reason the author's own toolbox (the second IHT version above) reconstructs better is that the current hard_l0_Mterm.m (\sparsify_0_5\HardLab) has already been updated to RIHT.

        The RIHT algorithm flow is as follows (the flow-chart screenshot is not reproduced here).
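        In its place, note that the MU == 0 branch of the second version above appears to implement exactly this normalized scheme: an adaptive step size with a backtracking safeguard whenever the support changes, via the two lines (quoted from the hard_l0_Mterm.m listing)

mu = id'*id/(Pd'*Pd);                    % optimal step size on the current support
omega = (norm(s-olds)/norm(Ps-oldPs))^2; % stability bound; mu is halved while mu > 0.99*omega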

 

        Improving IHT into AIHT brings further advantages [8] (the excerpt is not reproduced here).

        It is worth noting that AIHT should be understood as the umbrella name for a family of algorithms (although the author elaborates only two implementation strategies), much as FFT is the umbrella name for all fast DFT algorithms.


8.6 The Effect of the Sparsity Parameter on IHT

        You can try it yourself: the sparsity level among IHT's input parameters is not critical. If the true sparsity is K, the sparsity parameter only needs to be no smaller than K for the reconstruction to be quite good; the third IHT version, for example, simply sets the sparsity to a quarter of the length of the measurement vector y. An illustrative call is given below.
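        For instance, in the test script of Section 7 (true sparsity K = 10) one can pass an overestimate; this is only an illustrative call consistent with the observation above, not a new experiment:

theta = IHT_Basic(y, A, 2*K); % sparsity parameter 20 >= true sparsity 10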

8.7 Where Is the Author Now?

        Careful readers will notice that the affiliation signed in [8] is the University of Oxford, not the University of Southampton where the author's homepage resides, and that at the very end of [8] the following is mentioned (screenshot not reproduced):

"Previous position"? Has the author moved to Oxford?

9. References

[1] Blumensath T, Davies M E. Iterative hard thresholding for compressed sensing[J]. Applied & Computational Harmonic Analysis, 2009, 27(3): 265-274. (Available at: http://www.sciencedirect.com/science/article/pii/S1063520309000384)

[2] Blumensath T, Davies M E. Iterative Thresholding for Sparse Approximations[J]. Journal of Fourier Analysis & Applications, 2008, 14(5): 629-654. (Available at: http://link.springer.com/article/10.1007%2Fs00041-008-9035-z)

[3] Homepage of Blumensath T: http://www.personal.soton.ac.uk/tb1m08/index.html

[4] Lange K, Hunter D R, Yang I. Optimization Transfer Using Surrogate Objective Functions[J]. Journal of Computational & Graphical Statistics, 2000, 9(1): 1-20. (Available at: http://sites.stat.psu.edu/~dhunter/papers/ot.pdf)

[5] Garg R, Khandekar R. Gradient descent with sparsification: an iterative algorithm for sparse recovery with restricted isometry property[C]//Proceedings of the 26th Annual International Conference on Machine Learning. ACM, 2009: 337-344.

[6] shasying2. Landweber iteration method. http://download.csdn.net/detail/shasying2/5092828

[7] Blumensath T, Davies M E. Normalized Iterative Hard Thresholding: Guaranteed Stability and Performance[J]. IEEE Journal of Selected Topics in Signal Processing, 2010, 4(2): 298-309.

[8] Blumensath T. Accelerated iterative hard thresholding[J]. Signal Processing, 2012, 92(3): 752-756.
