Paper Study: A Decision Variable Clustering-Based Evolutionary Algorithm for Large-Scale Many-Objective Optimization

A Decision Variable Clustering-Based Evolutionary Algorithm for Large-Scale Many-Objective Optimization

If you find this useful, feel free to discuss it with me so we can learn from each other~


  • This post is a study note on the paper X. Zhang, Y. Tian, R. Cheng and Y. Jin, "A Decision Variable Clustering-Based Evolutionary Algorithm for Large-Scale Many-Objective Optimization," in IEEE Transactions on Evolutionary Computation, vol. 22, no. 1, pp. 97-112, Feb. 2018, doi: 10.1109/TEVC.2016.2600642. It is intended for study purposes only, not for commercial use, and will be taken down upon request in case of infringement. My academic background is limited, so if any of the reasoning is incorrect, corrections are very welcome!

  • The paper makes two main contributions:

  1. A clustering-based (angle-based) decision variable classification method.
  2. T-ENS, a fast tree-based nondominated sorting algorithm.
    This post focuses on the first contribution, i.e., the clustering-based (angle-based) decision variable classification method.
    For the second contribution, please refer to the original paper and reference [54]: X. Zhang, Y. Tian, R. Cheng, and Y. Jin, "An efficient approach to nondominated sorting for evolutionary multiobjective optimization," IEEE Trans. Evol. Comput., vol. 19, no. 2, pp. 201–213, Apr. 2015.

Abstract

  • Most of the existing literature on multiobjective optimization focuses on the number of objectives, while little attention has been paid to the number of decision variables; however, many real-world problems are not only many-objective but also involve a large number of decision variables.
  • To solve such large-scale many-objective optimization problems (MaOPs), an algorithm based on decision variable clustering is proposed.
  • First, the decision variables are divided into two categories, 1) convergence-related and 2) diversity-related, and different evolutionary strategies are applied to the two kinds of variables.
  • In addition, a fast tree-based nondominated sorting method is proposed to improve computational efficiency.
  • Finally, to demonstrate the effectiveness of the algorithm, experiments are conducted on test instances with 10 objectives and 5000 decision variables; the results show that the proposed algorithm significantly outperforms several state-of-the-art algorithms.

Keywords

  • Clustering, evolutionary multiobjective optimization, large-scale optimization, many-objective optimization, nondominated sorting, tree

Introduction

  • Many-objective optimization problems (MaOPs) are problems involving more than three conflicting objectives to be optimized simultaneously. They are widespread in real-world applications such as engineering design [1], air traffic control [2], groundwater monitoring [3], and molecular design [4]. In general, MaOPs cannot be handled well by most multiobjective evolutionary algorithms (MOEAs) designed for traditional multiobjective optimization problems (MOPs), which usually involve only two or three objectives [5]-[9]. This is mainly due to two issues. The first is the loss of convergence pressure, which is largely caused by the phenomenon known as dominance resistance [1]: in many-objective optimization most candidate solutions become nondominated with respect to each other, so the dominance-based selection strategies of traditional MOEAs fail. The second issue is diversity management: in many-objective optimization the candidate solutions are sparsely distributed in the high-dimensional objective space, which renders traditional diversity maintenance methods ineffective. To address these two issues, a number of approaches have been proposed [10], [11], which can roughly be divided into the following categories:
  1. Enhancing convergence pressure
  • The most straightforward idea for enhancing convergence pressure is to modify the definition of traditional Pareto dominance, as in ε-dominance [12], [13], L-optimality [14], fuzzy dominance [15], preference order ranking [16], and θ-dominance [17]. Another idea is to combine an additional convergence-related metric with traditional dominance, including the substitute distance assignment based NSGA-II [18], the grid-based evolutionary algorithm [19], the preference-inspired coevolutionary algorithm [20], the many-objective evolutionary algorithm based on directional diversity and favorable convergence [21], and the knee point driven evolutionary algorithm (KnEA) [22].
  2. Indicator-based approaches
  • The second category directly adopts a performance indicator as the selection criterion, in order to discriminate between nondominated solutions that traditional Pareto dominance cannot distinguish. Among many others [23]-[26], some representative approaches of this category are the indicator-based evolutionary algorithm [27], the S-metric selection based evolutionary multiobjective optimization algorithm [28], and the hypervolume (HV)-based evolutionary algorithm [29].
  3. Decomposition-based approaches
  • Decomposition-based algorithms decompose a many-objective optimization problem into a set of simpler subproblems and solve them in a collaborative manner. More precisely, one decomposition approach turns the multiobjective problem into a set of single-objective optimization problems (SOPs). The most popular decomposition-based algorithm is MOEA/D [30], together with its variants such as the stable matching model based MOEA/D [31], MOEA/D with adaptive weight vector adjustment [32], the external archive guided MOEA/D [33], MOEA/D with a distance-based updating strategy [34], and the online diversity metric based MOEA/D [35]. The other decomposition approach decomposes a many-objective optimization problem into a set of simpler multiobjective optimization problems. Some representative MOEAs of this type are MOEA/D-M2M [36], the reference-point based many-objective NSGA-II (NSGA-III) [37], the dominance and decomposition based MOEA [38], and the recently proposed reference vector guided evolutionary algorithm [39].
  4. Reducing a many-objective optimization problem to a multiobjective optimization problem
    This direction has two branches:
  • Using objective reduction to remove redundant and irrelevant objectives, such as dominance relation preservation based algorithms [40], [41], unsupervised feature selection based algorithms [42], Pareto corner search based algorithms [43], machine learning based objective reduction [44]-[46], and the recently proposed nonlinear correlation information entropy based objective reduction [47].
  • Replacing the original objectives with two or three newly defined objectives. Representative approaches of this type include bi-goal evolution [48] and the summation of normalized objective values and diversified selection based MOEA [49].
  5. Improving computational efficiency
  • (My own take: improving search performance and improving computational efficiency are not the same thing. Search performance concerns how much the population's quality improves per evaluation, while computational efficiency refers to the time, or amount of computation, spent on a single evaluation or on computing an indicator, essentially the time complexity; it is an optimization at the algorithmic level.)
  • For example, Bringmann et al. [50] suggested using the Monte Carlo method to improve the computational efficiency of HV calculations in the multiobjective covariance matrix adaptation evolution strategy. Some fast nondominated sorting approaches were also developed to reduce the computational cost of Pareto-based MOEAs for solving MaOPs, such as deductive sort [51], corner sort [52], M-front [53], efficient nondominated sort (ENS) [54], and approximate nondominated sort [55].
  • It is worth noting, however, that little attention has been paid to the number of decision variables in many-objective algorithms. Large-scale decision variables have been studied extensively in single-objective optimization [56]-[62], but are still rarely addressed in many-objective optimization [63].
  • Recently, Ma et al. [64] proposed MOEA/DVA, an MOEA that uses a decision variable analysis method to classify the decision variables for solving large-scale MOPs. In MOEA/DVA, the dominance-based decision variable analysis divides the variables into 1) convergence-related variables, 2) diversity-related variables, and 3) variables related to both convergence and diversity. The convergence-related and diversity-related variables are optimized with different evolutionary operators, and the variables related to both are treated as diversity-related. The paper argues that, even though this approach works for problems with two or three objectives, its ability to handle many-objective problems has not been verified.
  • Following the basic idea of MOEA/DVA proposed in [64], the paper proposes an evolutionary algorithm for large-scale MaOPs based on a decision variable clustering method. The main new contributions are summarized as follows.
  1. The proposed k-means based clustering method classifies the decision variables into convergence-related and diversity-related variables according to the angles between the sampled solutions and the convergence direction: a smaller angle means the variable contributes more to convergence, and a larger angle means it contributes more to diversity. By contrast, MOEA/DVA is based on dominance relations; moreover, the proposed method classifies the variables into only two groups (convergence-related and diversity-related), whereas MOEA/DVA also has a group of variables related to both convergence and diversity.
  2. The proposed algorithm, LMEA, adopts two different strategies for the convergence-related and diversity-related variables. Both strategies use nondominated sorting as the first step; in the second step, the selection for convergence-related variables is based on the Euclidean distance to the ideal point (here the origin of the objective space is taken as the ideal point, assuming all objectives are to be minimized), while the selection for diversity-related variables relies on the angles between candidate solutions.
  3. A fast tree-based nondominated sorting algorithm, T-ENS, is proposed as an improved version of ENS [54]. In T-ENS, the information used to identify nondominance relations is recorded in the nodes of a tree, from which a large number of nondominance relations between solutions can be inferred. As a result, a solution only needs to be compared with a subset of the solutions in each front rather than all of them. Theoretical analysis shows that the time complexity of T-ENS is O(MN log N / log M), where N denotes the population size and M the number of objectives, which is much lower than the O(MN^2) complexity of most existing approaches.
  4. Experiments are conducted on MaOPs and large-scale MOPs, where the large-scale instances have up to 5000 decision variables.
  • The rest of the paper is organized as follows. Section II briefly reviews related work on MOEAs for solving large-scale MOPs and elaborates the motivation of the paper with respect to the recently proposed MOEA/DVA. Section III describes the details of the proposed LMEA for large-scale MaOPs. Section IV presents simulation results to assess the performance of LMEA on large-scale MaOPs. Finally, conclusions and future work are given in Section V.

Related Work and Motivation

  • In this section, we first review some representative MOEAs for solving large-scale MOPs and then elaborate the motivation of the paper with two illustrative examples. Note that only a small amount of existing work addresses large-scale MOPs; most MOEAs are not designed for MOPs with large-scale decision variables.
  • Antonio and Coello [63] proposed a cooperative coevolutionary framework to solve large-scale MOPs. The main idea of the algorithm is to randomly divide the large number of decision variables into several small subcomponents of equal size and to coevolve these subcomponents cooperatively for a predefined number of cycles. Experimental results confirmed the competitiveness of the algorithm on large-scale MOPs with two and three objectives. Following this idea, cooperative coevolution was also used in [65] to handle large-scale multiobjective capacitated arc routing problems.
  • More recently, Ma et al. [64] proposed an MOEA for solving large-scale MOPs, termed MOEA/DVA, in which a decision variable analysis strategy is adopted to divide the decision variables into different groups by examining the dominance relations between solutions generated by perturbing the variable values. Specifically, the variables are identified as follows.
  1. Convergence-related variables: the solutions generated by the perturbations dominate each other.
  2. Diversity-related variables: the solutions generated by the perturbations are nondominated with each other.
  3. Variables related to both convergence and diversity.
  • A two-stage optimization scheme is then used for the different kinds of variables: the convergence-related variables are optimized until the population approaches the Pareto front, while the diversity-related variables are optimized to produce a distribution that is as wide as possible.
  • Note that the paper argues that treating the variables related to both convergence and diversity in MOEA/DVA as diversity-related is inappropriate. The proposed algorithm classifies the decision variables into only two categories, which considerably improves both convergence and diversity.
  • An example is given below:
  • As another example, a variable that would clearly be regarded as diversity-related in MOEA/DVA sometimes also needs to be treated as a convergence-related variable to ensure fast convergence of the algorithm. For instance, in Example 2, treating x2 as a convergence-related variable drives the algorithm toward the Pareto front more effectively.
  • To address the above issues, the paper proposes a decision variable clustering-based MOEA for large-scale MaOPs, termed LMEA. In LMEA, a decision variable clustering method is proposed to distinguish convergence-related variables from diversity-related ones by separately measuring their contributions to convergence and diversity. With the proposed clustering method, variables similar to x2 in the two examples above can be classified correctly.

The Proposed Algorithm: LMEA

  • This section first presents the main procedure of LMEA and then its two key components: the decision variable clustering method and the tree-based fast nondominated sorting.

Main Framework of LMEA

  • Algorithm 1 presents the main framework of LMEA, which consists of the following components. First, a population of N candidate solutions is randomly initialized. Second, the proposed decision variable clustering method is used to divide the variables into two groups, convergence-related and diversity-related. Third, based on the interactions between them, the convergence-related variables (note that only the convergence-related variables are grouped here; nothing is required for the diversity-related variables) are further divided into several subgroups, where the variables interact with each other within a subgroup but do not interact with those in any other subgroup. This step adopts the strategy proposed in [64]: the convergence-related variables are grouped by variable analysis so that variables within a subgroup interact with each other, while variables in different subgroups are independent and can be optimized separately. The variables in each subgroup are also called interacting variables, since they cannot be optimized independently due to the interactions between them. Algorithm 2 gives the details of the interaction analysis for the convergence-related variables, which adopts the strategy developed in [64]. Note that similar strategies have also been widely used for variable interaction detection in single-objective large-scale optimization [56], [66]; in other words, this kind of approach is already well established. A rough sketch of the overall loop is given below.
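
To make the flow of Algorithm 1 concrete, here is a minimal Python sketch under my own reading of the paper. The helper names `variable_clustering`, `interaction_analysis`, `convergence_optimization`, and `diversity_optimization` are hypothetical placeholders for the components discussed in the following sections, not the authors' code, and the evaluation bookkeeping is only approximate.

```python
import numpy as np

def lmea(evaluate, lb, ub, N=100, n_sel=2, n_per=8, max_evals=100000):
    """Hypothetical sketch of LMEA's main loop (Algorithm 1 of the paper).

    evaluate : maps a decision vector to its vector of objective values
    lb, ub   : lower/upper bound vectors of the decision variables
    N        : population size
    n_sel    : solutions sampled per variable in the clustering step
    n_per    : perturbations applied per sampled solution
    """
    D = len(lb)
    # 1. Randomly initialize a population of N candidate solutions.
    pop = lb + np.random.rand(N, D) * (ub - lb)
    objs = np.array([evaluate(x) for x in pop])

    # 2. Cluster the decision variables into convergence- and diversity-related groups.
    conv_vars, div_vars = variable_clustering(evaluate, pop, lb, ub, n_sel, n_per)

    # 3. Group interacting convergence-related variables into subgroups (Algorithm 2 / [64]).
    subgroups = interaction_analysis(evaluate, pop, conv_vars, lb, ub)

    # 4. Alternately optimize the two kinds of variables until the budget is spent.
    evals = 0
    while evals < max_evals:
        pop, objs = convergence_optimization(evaluate, pop, objs, subgroups)
        pop, objs = diversity_optimization(evaluate, pop, objs, div_vars)
        evals += N * (len(subgroups) + 1)   # rough bookkeeping of evaluations used
    return pop, objs
```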

The convergence-related variables are then grouped using variable analysis

  • It is worth noting that this is the approach proposed in [64]: variable analysis is used to place interacting decision variables into the same subgroup, while variables in different subgroups do not interact and can thus be optimized independently. For the details of the variable analysis method, please refer to [64]. A simplified sketch of the underlying interaction test is given below.
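
The exact interaction test is the one from [64] (Algorithm 2). As a rough illustration only, the sketch below uses a separability check in the spirit of differential grouping [57]: two variables are treated as interacting if, for some objective, the effect of perturbing one of them depends on the value of the other. The function name `variables_interact`, the perturbation values, and the tolerance `eps` are my own placeholders, not the paper's definitions.

```python
import numpy as np

def variables_interact(evaluate, x, i, j, lb, ub, eps=1e-6):
    """Rough separability test between decision variables x_i and x_j.

    evaluate : maps a decision vector to its vector of M objective values
    x        : a base candidate solution (1-D array)
    i, j     : indices of the two variables under investigation
    lb, ub   : lower/upper bound vectors of the decision variables
    """
    a2 = 0.5 * (lb[i] + ub[i])               # perturbed value for x_i
    b2 = 0.5 * (lb[j] + ub[j])               # perturbed value for x_j

    x_i = x.copy();   x_i[i] = a2            # only x_i changed
    x_j = x.copy();   x_j[j] = b2            # only x_j changed
    x_ij = x_i.copy(); x_ij[j] = b2          # both changed

    # Effect of perturbing x_i when x_j keeps its original value ...
    delta1 = np.asarray(evaluate(x_i)) - np.asarray(evaluate(x))
    # ... versus the effect of the same perturbation when x_j is also changed.
    delta2 = np.asarray(evaluate(x_ij)) - np.asarray(evaluate(x_j))

    # If the two effects differ on any objective, treat the variables as interacting.
    return bool(np.any(np.abs(delta1 - delta2) > eps))
```

Variables whose pairwise tests fire would then be merged into the same subgroup (directly or transitively), so that each subgroup can be optimized as a unit while different subgroups are handled independently.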

Optimization of the Convergence-Related Variables

  1. Perform nondominated sorting.
  2. Compute the distance between each solution and the ideal point (the origin).
  3. Evolve the convergence-related variables in each subgroup to produce offspring (see the sketch after this list).
    3.1. For each subgroup of variables, pick two parents from P by binary tournament selection and cross over only the variables in this subgroup, leaving all other dimensions unchanged, to generate an offspring. The offspring then competes with its parent based on convergence performance: solutions in different fronts are compared by front number (the lower the better), and solutions in the same front are compared by their distance to the ideal point (the closer the better).
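
Below is a minimal sketch of this convergence-oriented step, assuming a simplified SBX-style crossover [67] as the variation operator and the origin as the ideal point. The helpers `nondominated_rank`, `binary_tournament`, and `sbx_crossover` are naive stand-ins written here for readability; in particular, the paper uses T-ENS instead of the quadratic nondominated sorting shown below.

```python
import numpy as np

def nondominated_rank(objs):
    """Naive O(N^2) nondominated sorting (the paper uses T-ENS to speed this up)."""
    N = len(objs)
    ranks = np.zeros(N, dtype=int)
    remaining = set(range(N))
    front = 0
    while remaining:
        front += 1
        current = [i for i in remaining
                   if not any(np.all(objs[j] <= objs[i]) and np.any(objs[j] < objs[i])
                              for j in remaining if j != i)]
        for i in current:
            ranks[i] = front
        remaining -= set(current)
    return ranks

def binary_tournament(ranks, dist):
    """Return the better of two random individuals: lower front first, then smaller distance."""
    a, b = np.random.randint(len(ranks)), np.random.randint(len(ranks))
    return a if (ranks[a], dist[a]) < (ranks[b], dist[b]) else b

def sbx_crossover(p1, p2, eta=20.0):
    """Simplified SBX producing a single child segment (no bounds handling)."""
    u = np.random.rand(len(p1))
    beta = np.where(u <= 0.5, (2 * u) ** (1 / (eta + 1)),
                    (1 / (2 * (1 - u))) ** (1 / (eta + 1)))
    return 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)

def convergence_optimization(evaluate, pop, objs, subgroups):
    """Sketch of LMEA's convergence-related variable optimization step."""
    for group in subgroups:                              # evolve one subgroup at a time
        ranks = nondominated_rank(objs)                  # step 1: nondominated sorting
        dist = np.linalg.norm(objs, axis=1)              # step 2: distance to the ideal point (origin)
        for _ in range(len(pop)):
            # step 3.1: binary tournament selection of two parents
            p1 = binary_tournament(ranks, dist)
            p2 = binary_tournament(ranks, dist)
            # Cross over only the variables of this subgroup; all other variables are copied.
            child = pop[p1].copy()
            child[group] = sbx_crossover(pop[p1][group], pop[p2][group])
            c_obj = np.asarray(evaluate(child))
            # The offspring replaces the parent if it has a better front level, or the
            # same level but a smaller distance to the ideal point.
            cand_ranks = nondominated_rank(np.vstack([objs, c_obj]))
            if (cand_ranks[-1], np.linalg.norm(c_obj)) < (cand_ranks[p1], dist[p1]):
                pop[p1], objs[p1] = child, c_obj
                ranks = nondominated_rank(objs)
                dist[p1] = np.linalg.norm(c_obj)
    return pop, objs
```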

Optimization of the Diversity-Related Variables

  1. Treat all diversity-related variables as a whole and cross them over in one shot (unlike the convergence-related variables, which are optimized one subgroup at a time) to generate |P| offspring from the population P; then combine the offspring population with the parent population and select the survivors from the combined population.
  2. Select the survivors from the combined population front by front according to nondominated sorting. If the total number of retained individuals would exceed |P| when front K is reached, the solutions with better diversity are selected from front K. Note that diversity here is not measured by crowding distance but by the angles between candidate solutions in the objective space; see [39] for the details of this diversity rule. A rough sketch of this step is given after this list.
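
A rough sketch of this diversity-oriented step follows, reusing `nondominated_rank` and `sbx_crossover` from the previous sketch. The angle-based truncation below simply keeps, one at a time, the member of the critical front whose minimum angle to the already selected solutions is largest; this is my reading of the rule, and the precise procedure follows [39].

```python
import numpy as np

def vector_angle(u, v):
    """Angle between two objective vectors; a larger angle indicates better spread."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def diversity_optimization(evaluate, pop, objs, div_vars):
    """Sketch of LMEA's diversity-related variable optimization step."""
    N = len(pop)
    # Step 1: cross over all diversity-related variables in one shot to get |P| offspring.
    offspring = pop.copy()
    for i in range(N):
        mate = np.random.randint(N)
        offspring[i, div_vars] = sbx_crossover(pop[i, div_vars], pop[mate, div_vars])
    off_objs = np.array([evaluate(x) for x in offspring])

    # Step 2: merge parents and offspring and pick N survivors front by front.
    all_pop = np.vstack([pop, offspring])
    all_objs = np.vstack([objs, off_objs])
    ranks = nondominated_rank(all_objs)
    selected = []
    for front in range(1, int(ranks.max()) + 1):
        members = list(np.where(ranks == front)[0])
        if len(selected) + len(members) <= N:
            selected.extend(members)
            continue
        # Critical front: fill the remaining slots with the members whose minimum angle
        # to the already selected solutions is largest (angle-based diversity instead
        # of crowding distance).
        while len(selected) < N:
            def min_angle(m):
                return min((vector_angle(all_objs[m], all_objs[s]) for s in selected),
                           default=np.inf)
            best = max(members, key=min_angle)
            selected.append(best)
            members.remove(best)
        break
    return all_pop[selected], all_objs[selected]
```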

Clustering-Based Decision Variable Classification

  • Fig. 3(a) illustrates the sampling used by this clustering-based classification. Lines of different colors correspond to the different variables under investigation, and the number of lines of the same color equals the number of individuals sampled from the population for that variable; for example, there are two white lines, so two individuals are sampled for x2. The number of points on each line equals the number of perturbations applied to that variable on a single individual; here each investigated variable is perturbed eight times per individual, so each line contains eight points. All other variables are kept unchanged during the perturbation.
  • Fig. 3(b) shows the clustering criterion: for each investigated variable and each of the two sampled solutions, the angle between the line formed by the perturbed solutions and the convergence direction is computed. A larger angle indicates that the variable is more related to diversity, while a smaller angle indicates that it is more related to convergence.
  • Fig. 3(c) shows how the clustering is carried out. Since two solutions are sampled per variable, each variable is described by a two-dimensional point (one angle per sampled solution), and k-means is used to divide the variables into two clusters based on the distribution of these points. Variables whose angles are large across the sampled solutions are assigned to the diversity-related group, and variables whose angles are small across the sampled solutions are assigned to the convergence-related group. In this scheme, the more solutions sampled per variable, the higher the dimension of the angle space and the better the clustering result.
  • Fig. 3. Example illustrating how the convergence-related and diversity-related variables are identified in LMEA for a biobjective minimization problem with four decision variables x1, x2, x3, and x4. In this example, x1 and x2 are identified as diversity-related variables and x3 and x4 as convergence-related variables. (a) Objective values of the sampled solutions generated by the perturbations. (b) Angles between the fitted lines and the normal of the hyperplane. (c) Clustering result of the four decision variables x1, x2, x3, and x4.
  • Fig. 3 gives an example to illustrate the main idea of the proposed decision variable clustering method, where a biobjective minimization problem with four decision variables x1, x2, x3, and x4 is considered. To determine whether a decision variable is convergence-related or diversity-related, nSel candidate solutions (two in this example) are first randomly selected from the population. Then, nPer perturbations (eight in this example) are performed on each of the four variables of each selected candidate solution. Fig. 3(a) depicts the objective values of the sampled solutions generated by the perturbations.
  • Next, the sampled solutions obtained by perturbing each variable are normalized, and a line L is generated to fit each group of normalized sampled solutions. Using the normalized sampled solutions and the fitted lines, the angle between each fitted line L and the normal of the hyperplane f1 + ... + fM = 1 is computed, where the normal represents the convergence direction and M is the number of objectives. In this way, each variable is associated with several angles, one per sampled solution (two in this example).
  • Finally, Fig. 3(c) shows the clustering of the variables based on these angles. A minimal sketch of the whole procedure is given below.
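
Putting the pieces together, here is a minimal sketch of the clustering procedure as I read it: perturb each variable on a few sampled solutions, normalize the resulting objective vectors, fit a line via PCA (SVD), measure the acute angle between that line and the hyperplane normal (1, ..., 1), and run k-means with two clusters on the per-variable angle vectors. The evenly spaced perturbation values, the normalization, and the use of scikit-learn's KMeans are my own simplifications, not the paper's exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def variable_clustering(evaluate, pop, lb, ub, n_sel=2, n_per=8):
    """Sketch of LMEA's angle-based decision variable clustering.

    evaluate : maps a decision vector to its M objective values
    pop      : (N, D) population used for sampling
    lb, ub   : lower/upper bound vectors of the decision variables
    Returns the index arrays of convergence- and diversity-related variables.
    """
    N, D = pop.shape
    angles = np.zeros((D, n_sel))
    for i in range(D):
        samples = pop[np.random.choice(N, n_sel, replace=False)]
        for s, base in enumerate(samples):
            # Perturb only variable i, keeping all other variables fixed.
            points = []
            for v in np.linspace(lb[i], ub[i], n_per):
                x = base.copy()
                x[i] = v
                points.append(evaluate(x))
            points = np.array(points, dtype=float)
            # Normalize the sampled objective values to [0, 1] per objective.
            span = points.max(axis=0) - points.min(axis=0)
            points = (points - points.min(axis=0)) / np.where(span > 0, span, 1.0)
            # Fit a line through the normalized points: its direction is the
            # first principal component of the centered point cloud.
            centered = points - points.mean(axis=0)
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            direction = vt[0]
            # Acute angle between the fitted line and the normal of the hyperplane
            # f1 + ... + fM = 1, i.e. the direction (1, ..., 1): a small angle means
            # the variable mainly pushes solutions along the convergence direction.
            normal = np.ones(points.shape[1]) / np.sqrt(points.shape[1])
            cos = abs(np.dot(direction, normal))
            angles[i, s] = np.arccos(np.clip(cos, 0.0, 1.0))

    # k-means with two clusters over the per-variable angle vectors; the cluster
    # with the smaller mean angle is taken as the convergence-related variables.
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(angles)
    conv_label = 0 if angles[labels == 0].mean() < angles[labels == 1].mean() else 1
    conv_vars = np.where(labels == conv_label)[0]
    div_vars = np.where(labels != conv_label)[0]
    return conv_vars, div_vars
```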

Differences From MOEA/DVA

  • MOEA/DVA classifies the variables into three categories, one of which contains variables related to both diversity and convergence. The proposed method classifies the variables into only two categories, convergence-related and diversity-related; the third category in MOEA/DVA is assigned to either the convergence-related or the diversity-related group. Moreover, some variables identified as diversity-related in MOEA/DVA may be identified as convergence-related by the proposed method.
  • The decision variable analysis in MOEA/DVA determines the category of each variable according to dominance relations. By contrast, the proposed decision variable clustering method uses k-means to determine the category of each variable. Compared with the decision variable analysis in MOEA/DVA, the proposed clustering method is more robust, because its performance does not depend on dominance relations, which may become ineffective in many-objective optimization due to the dominance resistance problem.

References

[1] P. J. Fleming, R. C. Purshouse, and R. J. Lygoe, “Many-objective optimization: An engineering design perspective,” in Proc. 3rd Int. Conf. Evol. Multi Criterion Optim., Guanajuato, Mexico, 2005, pp. 14–32.
[2] J. Garcia, A. Berlanga, and J. M. M. López, “Effective evolutionary algorithms for many-specifications attainment: Application to air traffic control tracking filters,” IEEE Trans. Evol. Comput., vol. 13, no. 1, pp. 151–168, Feb. 2009.
[3] J. B. Kollat, P. M. Reed, and R. M. Maxwell, “Many-objective groundwater monitoring network design using bias-aware ensemble Kalman filtering, evolutionary optimization, and visual analytics,” Water Resources Res., vol. 47, no. 2, pp. 1–18, 2011.
[4] J. W. Kruisselbrink et al., “Combining aggregation with Pareto optimization: A case study in evolutionary molecular design,” in Proc. 5th Int. Conf. Evol. Multi Criterion Optim., Nantes, France, 2009, pp. 453–467.
[5] K. Ikeda, H. Kita, and S. Kobayashi, “Failure of Pareto-based MOEAs: Does non-dominated really mean near to optimal?” in Proc. IEEE Congr. Evol. Comput., Seoul, South Korea, 2001, pp. 957–962.
[6] V. Khare, X. Yao, and K. Deb, “Performance scaling of multiobjective evolutionary algorithms,” in Proc. 2nd Int. Conf. Evol. Multi Criterion Optim., Faro, Portugal, 2003, pp. 376–390.
[7] D. Brockhoff et al., “On the effects of adding objectives to plateau functions,” IEEE Trans. Evol. Comput., vol. 13, no. 3, pp. 591–603, Jun. 2009.
[8] R. C. Purshouse and P. J. Fleming, “Evolutionary many-objective optimisation: An exploratory analysis,” in Proc. Congr. Evol. Comput., Canberra, ACT, Australia, 2003, pp. 2066–2073.
[9] R. C. Purshouse and P. J. Fleming, “On the evolutionary optimization of many conflicting objectives,” IEEE Trans. Evol. Comput., vol. 11, no. 6, pp. 770–784, Dec. 2007.
[10] S. Chand and M. Wagner, “Evolutionary many-objective optimization: A quick-start guide,” Surveys Oper. Res. Manag. Sci., vol. 20, no. 2, pp. 35–42, 2015.
[11] B. Li, J. Li, K. Tang, and X. Yao, “Many-objective evolutionary algorithms: A survey,” ACM Comput. Surveys, vol. 48, no. 1, pp. 1–37, 2015.
[12] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, “Combining convergence and diversity in evolutionary multiobjective optimization,” Evol. Comput., vol. 10, no. 3, pp. 263–282, 2002.
[13] D. Hadka and P. Reed, “Borg: An auto-adaptive many-objective evolutionary computing framework,” Evol. Comput., vol. 21, no. 2, pp. 231–259, 2013.
[14] X. Zou, Y. Chen, M. Liu, and L. Kang, “A new evolutionary algorithm for solving many-objective optimization problems,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 38, no. 5, pp. 1402–1412, Oct. 2008.
[15] G. Wang and H. Jiang, “Fuzzy-dominance and its application in evolutionary many objective optimization,” in Proc. Int. Conf. Comput. Intell. Security Workshops, Harbin, China, 2007, pp. 195–198.
[16] F. di Pierro, S.-T. Khu, and D. A. Savić, “An investigation on preference order ranking scheme for multiobjective evolutionary optimization,” IEEE Trans. Evol. Comput., vol. 11, no. 1, pp. 17–45, Feb. 2007.
[17] Y. Yuan, H. Xu, B. Wang, and X. Yao, “A new dominance relation-based evolutionary algorithm for many-objective optimization,” IEEE Trans. Evol. Comput., vol. 20, no. 1, pp. 16–37, Feb. 2016.
[18] M. Köppen and K. Yoshida, “Substitute distance assignments in NSGA-II for handling many-objective optimization problems,” in Proc. 4th Int. Conf. Evol. Multi Criterion Optim., Matsushima, Japan, 2007, pp. 727–741.
[19] S. Yang, M. Li, X. Liu, and J. Zheng, “A grid-based evolutionary algorithm for many-objective optimization,” IEEE Trans. Evol. Comput., vol. 17, no. 5, pp. 721–736, Oct. 2013.
[20] R. Wang, R. C. Purshouse, and P. J. Fleming, “Preference-inspired coevolutionary algorithms for many-objective optimization,” IEEE Trans. Evol. Comput., vol. 17, no. 4, pp. 474–494, Aug. 2013.
[21] J. Cheng, G. G. Yen, and G. Zhang, “A many-objective evolutionary algorithm with enhanced mating and environmental selections,” IEEE Trans. Evol. Comput., vol. 19, no. 4, pp. 592–605, Aug. 2015.
[22] X. Zhang, Y. Tian, and Y. Jin, “A knee point-driven evolutionary algorithm for many-objective optimization,” IEEE Trans. Evol. Comput., vol. 19, no. 6, pp. 761–776, Dec. 2015.
[23] C. A. R. Villalobos and C. A. C. Coello, “A new multi-objective evolutionary algorithm based on a performance assessment indicator,” in Proc. 14th Annu. Conf. Genet. Evol. Comput. (GECCO), Philadelphia, PA, USA, 2012, pp. 505–512.
[24] T. Ulrich, J. Bader, and L. Thiele, “Defining and optimizing indicator-based diversity measures in multiobjective search,” in Proc. 11th Int. Conf. Parallel Problem Solving Nat., Kraków, Poland, 2010, pp. 707–717.
[25] H. Trautmann, T. Wagner, and D. Brockhoff, “R2-EMOA: Focused multiobjective search using R2-indicator-based selection,” in Proc. 7th Int. Conf. Learn. Intell. Optim., Catania, Italy, 2013, pp. 70–74.
[26] G. Rudolph, O. Schütze, C. Grimme, and H. Trautmann, “An aspiration set EMOA based on averaged hausdorff distances,” in Proc. 8th Int. Conf. Learn. Intell. Optim., Gainesville, FL, USA, 2014, pp. 153–156.
[27] E. Zitzler and S. Künzli, “Indicator-based selection in multiobjective search,” in Proc. 8th Int. Conf. Parallel Problem Solving Nat., Birmingham, U.K., 2004, pp. 832–842.
[28] N. Beume, B. Naujoks, and M. Emmerich, “SMS-EMOA: Multiobjective selection based on dominated hypervolume,” Eur. J. Oper. Res., vol. 181, no. 3, pp. 1653–1669, 2007.
[29] J. Bader and E. Zitzler, “HypE: An algorithm for fast hypervolume-based many-objective optimization,” Evol. Comput., vol. 19, no. 1, pp. 45–76, 2011.
[30] Q. Zhang and H. Li, “MOEA/D: A multiobjective evolutionary algorithm based on decomposition,” IEEE Trans. Evol. Comput., vol. 11, no. 6, pp. 712–731, Dec. 2007.
[31] K. Li, Q. Zhang, S. Kwong, M. Li, and R. Wang, “Stable matching-based selection in evolutionary multiobjective optimization,” IEEE Trans. Evol. Comput., vol. 18, no. 6, pp. 909–923, Dec. 2014.
[32] Y. Qi et al., “MOEA/D with adaptive weight adjustment,” Evol. Comput., vol. 22, no. 2, pp. 231–264, 2014.
[33] X. Cai, Y. Li, Z. Fan, and Q. Zhang, “An external archive guided multiobjective evolutionary algorithm based on decomposition for combinatorial optimization,” IEEE Trans. Evol. Comput., vol. 19, no. 4, pp. 508–523, Aug. 2015.
[34] Y. Yuan, H. Xu, B. Wang, B. Zhang, and X. Yao, “Balancing convergence and diversity in decomposition-based many-objective optimizers,” IEEE Trans. Evol. Comput., vol. 20, no. 2, pp. 180–198, Apr. 2016.
[35] S. B. Gee, K. C. Tan, V. A. Shim, and N. R. Pal, “Online diversity assessment in evolutionary multiobjective optimization: A geometrical perspective,” IEEE Trans. Evol. Comput., vol. 19, no. 4, pp. 542–559, Aug. 2015.
[36] H.-L. Liu, F. Gu, and Q. Zhang, “Decomposition of a multiobjective optimization problem into a number of simple multiobjective subproblems,” IEEE Trans. Evol. Comput., vol. 18, no. 3, pp. 450–455, Jun. 2014.
[37] K. Deb and H. Jain, “An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints,” IEEE Trans. Evol. Comput., vol. 18, no. 4, pp. 577–601, Aug. 2014.
[38] K. Li, K. Deb, Q. Zhang, and S. Kwong, “An evolutionary many-objective optimization algorithm based on dominance and decomposition,” IEEE Trans. Evol. Comput., vol. 19, no. 5, pp. 694–716, Oct. 2015.
[39] R. Cheng, Y. Jin, M. Olhofer, and B. Sendhoff, “A reference vector guided evolutionary algorithm for many-objective optimization,” IEEE Trans. Evol. Comput., to be published.
[40] D. Brockhoff and E. Zitzler, “Are all objectives necessary? On dimensionality reduction in evolutionary multiobjective optimization,” in Proc. 9th Int. Conf. Parallel Problem Solving Nat., Reykjavik, Iceland, 2006, pp. 533–542.
[41] D. Brockhoff and E. Zitzler, “Objective reduction in evolutionary multiobjective optimization: Theory and applications,” Evol. Comput., vol. 17, no. 2, pp. 135–166, 2009.
[42] A. L. Jaimes, C. A. C. Coello, and D. Chakraborty, “Objective reduction using a feature selection technique,” in Proc. 10th Annu. Conf. Genet. Evol. Comput., Atlanta, GA, USA, 2008, pp. 673–680.
[43] H. K. Singh, A. Isaacs, and T. Ray, “A Pareto corner search evolutionary algorithm and dimensionality reduction in many-objective optimization problems,” IEEE Trans. Evol. Comput., vol. 15, no. 4, pp. 539–556, Aug. 2011.
[44] K. Deb and D. Saxena, “Searching for Pareto-optimal solutions through dimensionality reduction for certain large-dimensional multi-objective optimization problems,” in Proc. World Congr. Comput. Intell., Vancouver, BC, Canada, 2006, pp. 3352–3360.
[45] D. K. Saxena and K. Deb, “Non-linear dimensionality reduction procedures for certain large-dimensional multi-objective optimization problems: Employing correntropy and a novel maximum variance unfolding,” in Proc. 4th Int. Conf. Evol. Multi Criterion Optim., Matsushima, Japan, 2007, pp. 772–787.
[46] D. K. Saxena, J. A. Duro, A. Tiwari, K. Deb, and Q. Zhang, “Objective reduction in many-objective optimization: Linear and nonlinear algorithms,” IEEE Trans. Evol. Comput., vol. 17, no. 1, pp. 77–99, Feb. 2013.
[47] H. Wang and X. Yao, “Objective reduction based on nonlinear correlation information entropy,” Soft Comput., vol. 20, no. 6, pp. 2393–2407, 2016.
[48] M. Li, S. Yang, and X. Liu, “Bi-goal evolution for many-objective optimization problems,” Artif. Intell., vol. 228, pp. 45–65, Nov. 2015.
[49] B.-Y. Qu and P. N. Suganthan, “Multi-objective evolutionary algorithms based on the summation of normalized objectives and diversified selection,” Inf. Sci., vol. 180, no. 17, pp. 3170–3181, 2010.
[50] K. Bringmann, T. Friedrich, C. Igel, and T. Voß, “Speeding up many-objective optimization by Monte Carlo approximations,” Artif. Intell., vol. 204, pp. 22–29, Nov. 2013.
[51] K. McClymont and E. Keedwell, “Deductive sort and climbing sort: New methods for non-dominated sorting,” Evol. Comput., vol. 20, no. 1, pp. 1–26, Mar. 2012.
[52] H. Wang and X. Yao, “Corner sort for Pareto-based many-objective optimization,” IEEE Trans. Cybern., vol. 44, no. 1, pp. 92–102, Jan. 2014.
[53] M. Drozdík, Y. Akimoto, H. Aguirre, and K. Tanaka, “Computational cost reduction of nondominated sorting using the M-front,” IEEE Trans. Evol. Comput., vol. 19, no. 5, pp. 659–678, Oct. 2015.
[54] X. Zhang, Y. Tian, R. Cheng, and Y. Jin, “An efficient approach to nondominated sorting for evolutionary multiobjective optimization,” IEEE Trans. Evol. Comput., vol. 19, no. 2, pp. 201–213, Apr. 2015.
[55] X. Zhang, Y. Tian, and Y. Jin, “Approximate non-dominated sorting for evolutionary many-objective optimization,” Inf. Sci., Jun. 2016.
[56] Z. Yang, K. Tang, and X. Yao, “Large scale evolutionary optimization using cooperative coevolution,” Inf. Sci., vol. 178, no. 15, pp. 2985–2999, 2008.
[57] M. N. Omidvar, X. Li, Y. Mei, and X. Yao, “Cooperative co-evolution with differential grouping for large scale optimization,” IEEE Trans. Evol. Comput., vol. 18, no. 3, pp. 378–393, Jun. 2014.
[58] X. Li and X. Yao, “Cooperatively coevolving particle swarms for large scale optimization,” IEEE Trans. Evol. Comput., vol. 16, no. 2, pp. 210–224, Apr. 2012.
[59] R. Cheng and Y. Jin, “A competitive swarm optimizer for large scale optimization,” IEEE Trans. Cybern., vol. 45, no. 2, pp. 191–204, Feb. 2015.
[60] K. H. Hajikolaei, Z. Pirmoradi, G. H. Cheng, and G. G. Wang, “Decomposition for large-scale global optimization based on quantified variable correlations uncovered by metamodelling,” Eng. Optim., vol. 47, no. 4, pp. 429–452, 2015.
[61] J. Fan, J. Wang, and M. Han, “Cooperative coevolution for large-scale optimization based on kernel fuzzy clustering and variable trust region methods,” IEEE Trans. Fuzzy Syst., vol. 22, no. 4, pp. 829–839, Aug. 2014.
[62] R. Cheng and Y. Jin, “A social learning particle swarm optimization algorithm for scalable optimization,” Inf. Sci., vol. 291, pp. 43–60, Jan. 2015.
[63] L. M. Antonio and C. A. C. Coello, “Use of cooperative coevolution for solving large scale multiobjective optimization problems,” in Proc. IEEE Congr. Evol. Comput., Cancún, Mexico, 2013, pp. 2758–2765.
[64] X. Ma et al., “A multiobjective evolutionary algorithm based on decision variable analyses for multiobjective optimization problems with largescale variables,” IEEE Trans. Evol. Comput., vol. 20, no. 2, pp. 275–298, Apr. 2016.
[65] R. Shang, K. Dai, L. Jiao, and R. Stolkin, “Improved memetic algorithm based on route distance grouping for multiobjective large scale capacitated arc routing problems,” IEEE Trans. Cybern., vol. 46, no. 4, pp. 1000–1013, Apr. 2016.
[66] W. Chen, T. Weise, Z. Yang, and K. Tang, “Large-scale global optimization using cooperative coevolution with variable interaction learning,” in Proc. 11th Int. Conf. Parallel Problem Solving Nat., Kraków, Poland, 2010, pp. 300–309.
[67] K. Deb and R. B. Agrawal, “Simulated binary crossover for continuous search space,” Complex Systems, vol. 9, no. 4, pp. 115–148, 1995.
[68] K. Deb and M. Goyal, “A combined genetic adaptive search (GeneAS) for engineering design,” Comput. Sci. Informat., vol. 26, no. 4, pp. 30–45, 1996.
[69] B. V. Babu and M. M. L. Jehan, “Differential evolution for multi-objective optimization,” in Proc. Congr. Evol. Comput., vol. 4. Canberra, ACT, Australia, 2003, pp. 2696–2703.
