Ways to update the learning rate in MXNet
Reference code (uses mxnet.gluon): https://github.com/gengyanlei/insightFace-gluon/blob/master/main.py
1. Use MXNet's official scheduler API to change the learning rate automatically (update per epoch, mxnet.gluon)
The schedulers live under the mxnet.lr_scheduler module, but they all step by num_update, i.e. per iteration. To update once per epoch, the way PyTorch's lr_scheduler.step() is called inside the epoch loop, drive the scheduler yourself and push the result into the Trainer, as in the code below:
import mxnet as mx
from mxnet import gluon

# Do not put 'lr_scheduler': lr_scheduler into optimizer_params here: the optimizer
# would then call the scheduler once per weight update (per batch), not per epoch.
optimizer = gluon.Trainer(params=model.collect_params(), optimizer='sgd',
                          optimizer_params={'learning_rate': args.lr, 'wd': 0.0001, 'momentum': 0.9})
lr_scheduler = mx.lr_scheduler.FactorScheduler(step=10, factor=0.1, base_lr=0.1)
for epoch in range(40):
    lr = lr_scheduler(epoch)  # FactorScheduler.__call__(num_update) returns the decayed lr
    print(lr)
    optimizer.set_learning_rate(lr)  # push the new lr into the Trainer
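For context, here is a minimal sketch of how this per-epoch update slots into a complete gluon training loop; the tiny network, the loss, and train_loader below are illustrative assumptions, not part of the original code:

import mxnet as mx
from mxnet import gluon, autograd

net = gluon.nn.Dense(10)  # hypothetical stand-in for the real model
net.initialize()
loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd',
                        {'learning_rate': 0.1, 'wd': 0.0001, 'momentum': 0.9})
scheduler = mx.lr_scheduler.FactorScheduler(step=10, factor=0.1, base_lr=0.1)
for epoch in range(40):
    trainer.set_learning_rate(scheduler(epoch))  # per-epoch lr update, as above
    for data, label in train_loader:             # train_loader: an assumed DataLoader
        with autograd.record():
            loss = loss_fn(net(data), label)
        loss.backward()
        trainer.step(batch_size=data.shape[0])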
2. Use a self-defined learning-rate update function (update per epoch, mxnet.gluon)
The code is as follows:
def lr_scheduler(base_lr, epoch, optimizer, steps, rate=0.1):
    '''
    :param base_lr: float, baseline learning rate
    :param epoch: int, current epoch
    :param optimizer: gluon.Trainer instance to update
    :param steps: int, decay the lr once every `steps` epochs
    :param rate: float, lr decay rate
    :return: None (the Trainer is updated in place)
    '''
    # I personally prefer a self-defined lr function, though MXNet's built-in
    # schedulers work too; the same applies to PyTorch.
    lr = base_lr * (rate ** (epoch // steps))
    optimizer.set_learning_rate(lr)
    return  # no need to return the Trainer; set_learning_rate already mutates it
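A short usage sketch, reusing the Trainer setup and epoch count from section 1 (both carried over as assumptions, not part of the original snippet):

trainer = gluon.Trainer(model.collect_params(), 'sgd',
                        {'learning_rate': 0.1, 'wd': 0.0001, 'momentum': 0.9})
for epoch in range(40):
    lr_scheduler(base_lr=0.1, epoch=epoch, optimizer=trainer, steps=10)
    # ... run this epoch's training batches with `trainer` ...

With steps=10 and rate=0.1, the lr is 0.1 for epochs 0-9, 0.01 for epochs 10-19, and so on.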
3. Use a half-official, half-custom callback (update per epoch, mxnet.symbol)
Update the learning rate in model.fit(epoch_end_callback=...); the same hook is also where model parameters get saved.
from mxnet import optimizer

# base_lr, base_mom, base_wd, rate and epoch_steps are defined elsewhere
opt = optimizer.SGD(learning_rate=base_lr, momentum=base_mom, wd=base_wd)

def epoch_end_callback(epoch, symbol, arg_params, aux_params):
    '''
    model.fit invokes epoch_end_callback(epoch, symbol, arg_params, aux_params)
    once per epoch; only `epoch` is needed here. (batch_end_callback, by contrast,
    receives a BatchEndParam namedtuple with fields epoch, nbatch, eval_metric, locals.)
    '''
    opt.lr = base_lr * rate ** (epoch // epoch_steps)
    # The optimizer is mutated through its reference (Python passes object
    # references, not copies of the value), so nothing needs to be returned.
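For completeness, a hedged sketch of wiring this into the symbolic Module API; sym, train_iter, the checkpoint prefix, and num_epoch are placeholders, not from the original post:

import mxnet as mx

mod = mx.mod.Module(symbol=sym)  # sym: the network symbol
mod.fit(train_iter,
        optimizer=opt,  # the SGD instance the callback mutates
        epoch_end_callback=[epoch_end_callback,
                            mx.callback.do_checkpoint('model_prefix')],  # also saves params each epoch
        num_epoch=40)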
4. If you want to schedule per step (per weight update) instead, simply put the scheduler into optimizer_params and pass that to the optimizer; no manual loop is needed:
optimizer_params = {'wd': wd, 'momentum': momentum, 'lr_scheduler': lr_scheduler}
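An end-to-end sketch of this per-step variant; the MultiFactorScheduler boundaries and the Trainer wiring are illustrative assumptions (the base_lr keyword matches the usage in section 1, but is version-dependent):

# step boundaries are counted in weight updates (batches), not epochs
lr_scheduler = mx.lr_scheduler.MultiFactorScheduler(step=[30000, 60000], factor=0.1, base_lr=0.1)
optimizer_params = {'wd': wd, 'momentum': momentum, 'lr_scheduler': lr_scheduler}
trainer = gluon.Trainer(model.collect_params(), 'sgd', optimizer_params)
# every trainer.step() advances num_update, and the optimizer queries the scheduler itself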