Paper (ICML 2018):
https://ml.informatik.uni-freiburg.de/papers/18-AUTOML-AutoChallenge.pdf
Code:
http://ml.informatik.uni-freiburg.de/downloads/automl_competition_2018.zip
Data (on the CodaLab platform; registration required):
https://competitions.codalab.org/competitions/17767#participate-get_data
TODO LIST
- How does PoSH handle time-series data?
Particularly hacky manual design decisions:
- If the number of features > 500, univariate feature selection is applied (sounds impressive; the code shows what it actually does).
- If the number of samples < 1000, SuccessiveHalving is not used, and cross-validation is used instead of holdout.
Appendices A.3 and A.4 describe these requirements in more detail.
Feature selection
logic.project_data_via_feature_selection
imp = sklearn.preprocessing.Imputer(strategy='median')
pca = sklearn.feature_selection.SelectKBest(k=n_keep)
pipeline = sklearn.pipeline.Pipeline((('imp', imp), ('pca', pca)))
If the number of features exceeds 500, it is forcibly reduced to 500.
lib/logic.py:114
D.feat_type = [
ft for i, ft in enumerate(D.feat_type) if rval[2][i] == True
]
feat_type is updated accordingly.
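Note that the snippet above uses sklearn.preprocessing.Imputer, which was removed in scikit-learn 0.22. A runnable sketch of the same median-impute + SelectKBest step with the current API, on synthetic data (the data and n_keep here are illustrative, not from the repo):

```python
import numpy as np
from sklearn.impute import SimpleImputer       # replaces the removed sklearn.preprocessing.Imputer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 600))                # 600 features, above the 500-feature threshold
X[rng.random(X.shape) < 0.05] = np.nan         # inject some missing values
y = rng.integers(0, 2, size=100)               # synthetic binary labels

n_keep = 500
pipeline = Pipeline([
    ('imp', SimpleImputer(strategy='median')),  # median imputation, as in the source
    ('sel', SelectKBest(f_classif, k=n_keep)),  # univariate selection down to 500 features
])
X_reduced = pipeline.fit_transform(X, y)
print(X_reduced.shape)  # (100, 500)
```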
Skipping SH when the sample count is small
Back in logic:
lib/logic.py:227
if min_budget == max_budget:
res = SSB.run(len(autosklearn_portfolio), min_n_workers=1)
else:
res = SSB.run(1, min_n_workers=1)
If the number of samples is < 1000, min_budget is set equal to max_budget, so SH is skipped and every configuration is forced to run at the full budget (16 times the minimum).
Computing the budget
max_budget = 1.0
min_budget = 1.0 / 16
eta = 4
What does eta mean? See my earlier article, HpBandSter源碼分析 (an analysis of the HpBandSter source code).
n_iterations is specified by the user, while the number of stages is determined by max_budget, min_budget, and eta; see hpbandster.optimizers.bohb.BOHB#__init__:
hpbandster/optimizers/bohb.py:101
self.max_SH_iter = -int(np.log(min_budget/max_budget)/np.log(eta)) + 1
max_budget=243
min_budget=9
eta=3
eta is in fact SuccessiveHalving's budget scaling factor: from 9 to 243, the budget grows by a factor of 3 per rung, i.e.:
9, 27, 81, 243
Back to the PoSH code:
max_budget = 1.0
min_budget = 1.0 / 16
eta = 4
hp_util.SideShowBOHB
self.max_SH_iter = -int(
np.log(min_budget / max_budget) / np.log(eta)) + 1
self.budgets = max_budget * np.power(eta,
-np.linspace(self.max_SH_iter - 1,
0, self.max_SH_iter))
self.max_SH_iter
Out[5]: 3
self.budgets
Out[6]: array([0.0625, 0.25 , 1. ])
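These two values are easy to check by hand; a standalone sketch of the same computation with the PoSH defaults:

```python
import numpy as np

min_budget = 1.0 / 16
max_budget = 1.0
eta = 4

# Same formula as hpbandster/optimizers/bohb.py:101:
# number of SH rungs, then a geometric ladder of budgets ending at max_budget.
max_SH_iter = -int(np.log(min_budget / max_budget) / np.log(eta)) + 1
budgets = max_budget * np.power(eta, -np.linspace(max_SH_iter - 1, 0, max_SH_iter))

print(max_SH_iter)  # 3
print(budgets)      # [0.0625 0.25   1.    ]
```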
lib/hp_util.py:62
self.budget_converter = {
'libsvm_svc': lambda b: b,
'random_forest': lambda b: int(b*128),
'sgd': lambda b: int(b*512),
'xgradient_boosting': lambda b: int(b*512),
'extra_trees': lambda b: int(b*1024)
}
The budget-to-iteration conversion factors also match the paper.
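A self-contained reproduction of that mapping (the constants are copied from the snippet above; the interpretation in the comments is my reading of the paper, not taken from the code):

```python
# A fractional budget in (0, 1] is converted into a model-specific resource:
# a tree/iteration count for the ensemble and SGD-style models, while for
# libsvm_svc the fraction is presumably a dataset subsample ratio (assumption).
budget_converter = {
    'libsvm_svc': lambda b: b,
    'random_forest': lambda b: int(b * 128),
    'sgd': lambda b: int(b * 512),
    'xgradient_boosting': lambda b: int(b * 512),
    'extra_trees': lambda b: int(b * 1024),
}

for b in (0.0625, 0.25, 1.0):
    print(b, budget_converter['random_forest'](b), budget_converter['sgd'](b))
# 0.0625 -> 8 trees / 32 iterations; 0.25 -> 32 / 128; 1.0 -> 128 / 512
```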
lib/logic.py:202
worker.run(background=True)
This runs the worker in a background thread.
portfolio.get_hydra_portfolio
fetches the meta-learned portfolio of configurations; the balancing strategy is still in place.
SSB = hp_util.SideShowBOHB(
configspace=worker.get_config_space(),
initial_configs=autosklearn_portfolio,
run_id=run_id,
eta=eta, min_budget=min_budget, max_budget=max_budget,
SH_only=True, # suppresses Hyperband's outer loop and runs SuccessiveHalving only
nameserver=ns_host,
nameserver_port=ns_port,
ping_interval=sleep,
job_queue_sizes=(-1, 0),
dynamic_queue_size=True,
)
eta
Out[4]: 4
min_budget
Out[5]: 0.0625
max_budget
Out[6]: 1.0
sleep
Out[7]: 5
run_id
Out[8]: '0'
SideShowBOHB is the Master; AutoMLWorker is the Worker.
class PortfolioBOHB(BOHB):
""" subclasses the config_generator BOHB"""
def __init__(self, initial_configs=None, *args, **kwargs):
super().__init__(*args, **kwargs)
if initial_configs is None:
# dummy initial portfolio
self.initial_configs = [self.configspace.sample_configuration().get_dictionary() for i in range(5)]
else:
self.initial_configs = initial_configs
It subclasses BOHB and merely adds initial_configs.
cg = PortfolioBOHB(
initial_configs=initial_configs,
configspace=configspace,
min_points_in_model=min_points_in_model,
top_n_percent=top_n_percent,
num_samples=num_samples,
random_fraction=random_fraction,
bandwidth_factor=bandwidth_factor,
)
cg is the ConfigGenerator. The arguments passed in:
min_points_in_model = None
top_n_percent = 15
num_samples = 64
random_fraction = 0.5
bandwidth_factor = 3
hp_util.SideShowBOHB#get_next_iteration
iteration
Out[2]: 0
s
Out[3]: 2
n0
Out[4]: 16
ns
Out[5]: [16, 4, 1]
self.budgets[(-s - 1):]
Out[6]: array([0.0625, 0.25 , 1. ])
s is the HyperBand bracket index and determines the number of stages. ns gives the number of configurations per stage, decreasing from stage to stage, while the budgets increase from stage to stage.
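The per-stage counts follow the usual SuccessiveHalving rule; a minimal reconstruction of how ns comes out of the values above (this mirrors what hpbandster's get_next_iteration computes, but is not the actual source):

```python
eta = 4
n0 = 16          # configurations entering the first stage
s = 2            # bracket index: s + 1 stages in total

# Each stage keeps 1/eta of the configurations, never dropping below 1.
ns = [max(int(n0 * eta ** (-i)), 1) for i in range(s + 1)]
print(ns)  # [16, 4, 1]
```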
hpbandster.iterations.base.BaseIteration#add_configuration
self.config_sampler
Out[7]: <bound method PortfolioBOHB.get_config of <hp_util.PortfolioBOHB object at 0x7f1670208208>>
hp_util.PortfolioBOHB#get_config
def get_config(self, budget):
# return a portfolio member first
if len(self.initial_configs) > 0 and True:
c = self.initial_configs.pop()
return (c, {'portfolio_member': True})
return (super().get_config(budget))
The meta-learned portfolio replaces random sampling for the initial configurations.
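The pattern is easy to isolate: exhaust the portfolio members first, then fall back to another sampling strategy. A toy sketch of this idea (standalone, not the real hpbandster classes):

```python
import random

class PortfolioFirstSampler:
    """Toy stand-in for PortfolioBOHB: serve meta-learned portfolio
    members first, then fall back to another sampling strategy."""
    def __init__(self, portfolio, fallback):
        self.initial_configs = list(portfolio)
        self.fallback = fallback

    def get_config(self, budget):
        if self.initial_configs:
            # pop a meta-learned configuration first
            return self.initial_configs.pop(), {'portfolio_member': True}
        # portfolio exhausted: delegate to the fallback sampler
        return self.fallback(budget), {'portfolio_member': False}

sampler = PortfolioFirstSampler(
    portfolio=[{'C': 1.0}, {'C': 10.0}],
    fallback=lambda budget: {'C': random.uniform(0.1, 100.0)},
)
print(sampler.get_config(1.0))     # ({'C': 10.0}, {'portfolio_member': True})
print(sampler.get_config(1.0))     # ({'C': 1.0}, {'portfolio_member': True})
print(sampler.get_config(1.0)[1])  # {'portfolio_member': False}
```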
Finally, SuccessiveHalving is run for 1000 iterations.