[Reposted from] https://blog.csdn.net/u012862372/article/details/80367607
When using callbacks.ModelCheckpoint() together with multi-GPU training, the callback raises an error of the form:

TypeError: can't pickle ...(different text in different situations) objects
This error closely resembles the one you get when saving a multi-GPU model the wrong way:

To save the multi-gpu model, use .save(fname) or .save_weights(fname) with the template model (the argument you passed to multi_gpu_model), rather than the model returned by multi_gpu_model.
I covered that problem in an earlier post: [Keras] Using Keras with multiple GPUs and saving the model. Evidently, when a checkpoint fires, paralleled_model.save() is still being called by default, which triggers the error. To work around it, we need to define our own callback.
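To see why the default checkpoint ends up saving the wrong model: before training starts, Keras binds the model passed to fit() to every callback via set_model(), and ModelCheckpoint then saves self.model. With multi_gpu_model, the model passed to fit() is the parallel wrapper. The wiring can be sketched with plain stand-in classes (FakeModel and FakeCheckpoint are illustrative stubs, not real Keras classes):

```python
# Stand-ins illustrating how Keras wires callbacks to the training
# model; these are NOT the real Keras classes.
class FakeModel:
    def __init__(self, name):
        self.name = name
        self.saved = []

    def save(self, path):
        # Record which model instance performed the save.
        self.saved.append(path)

class FakeCheckpoint:
    def set_model(self, model):
        # Keras calls this before training with the model passed to
        # fit(), i.e. the *parallel* model when multi_gpu_model is used.
        self.model = model

    def on_epoch_end(self, epoch, logs=None):
        self.model.save('model_%d.h5' % epoch)

original = FakeModel('original')
parallel = FakeModel('parallel')   # stands in for multi_gpu_model(original)

cbk = FakeCheckpoint()
cbk.set_model(parallel)            # what fit() effectively does
cbk.on_epoch_end(0)

print(parallel.saved)  # ['model_0.h5'] -- the parallel model was saved
print(original.saved)  # []             -- the template model was not
```

This is exactly the situation the warning above describes, so any fix has to make the callback save the template model instead.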
Solutions
Method 1
import keras
from keras.utils import multi_gpu_model

original_model = ...
parallel_model = multi_gpu_model(original_model, gpus=n)

# Save the template model, not the parallel wrapper, at each epoch.
class MyCbk(keras.callbacks.Callback):

    def __init__(self, model):
        self.model_to_save = model

    def on_epoch_end(self, epoch, logs=None):
        self.model_to_save.save('model_at_epoch_%d.h5' % epoch)

cbk = MyCbk(original_model)
parallel_model.fit(..., callbacks=[cbk])
Method 2
from keras.callbacks import ModelCheckpoint

class ParallelModelCheckpoint(ModelCheckpoint):

    def __init__(self, model, filepath, monitor='val_loss', verbose=0,
                 save_best_only=False, save_weights_only=False,
                 mode='auto', period=1):
        self.single_model = model
        super(ParallelModelCheckpoint, self).__init__(
            filepath, monitor, verbose,
            save_best_only, save_weights_only, mode, period)

    def set_model(self, model):
        # Ignore the parallel model Keras passes in; always checkpoint
        # the single-GPU template model instead.
        super(ParallelModelCheckpoint, self).set_model(self.single_model)

check_point = ParallelModelCheckpoint(single_model, 'best.h5')
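Method 2 works because set_model() is the single hook through which Keras attaches the training model to a callback; by pinning it to the template model, every save path inherited from ModelCheckpoint operates on the single-GPU model. The redirection can be sketched with stand-in classes (FakeModel, FakeCheckpoint, and FakeParallelCheckpoint are illustrative stubs, not real Keras classes):

```python
# Stubs mimicking the callback/model wiring; NOT real Keras classes.
class FakeModel:
    def __init__(self, name):
        self.name = name
        self.saved = []

    def save(self, path):
        self.saved.append(path)

class FakeCheckpoint:
    def set_model(self, model):
        self.model = model

    def on_epoch_end(self, epoch, logs=None):
        self.model.save('best.h5')

# The fix: ignore whatever model Keras passes in, keep the template.
class FakeParallelCheckpoint(FakeCheckpoint):
    def __init__(self, model):
        self.single_model = model

    def set_model(self, model):
        super(FakeParallelCheckpoint, self).set_model(self.single_model)

single = FakeModel('single')
parallel = FakeModel('parallel')

cbk = FakeParallelCheckpoint(single)
cbk.set_model(parallel)   # fit() passes the parallel model...
cbk.on_epoch_end(0)

print(single.saved)    # ['best.h5'] -- ...but the template model is saved
print(parallel.saved)  # []
```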
Method 3
import numpy as np
import keras

class CustomModelCheckpoint(keras.callbacks.Callback):

    def __init__(self, model, path):
        # Keep the template model under its own attribute name, so that
        # Keras's set_model() (which assigns self.model) cannot clobber it.
        self.model_to_save = model
        self.path = path
        self.best_loss = np.inf

    def on_epoch_end(self, epoch, logs=None):
        val_loss = logs['val_loss']
        if val_loss < self.best_loss:
            print("\nValidation loss decreased from {} to {}, saving model".format(
                self.best_loss, val_loss))
            self.model_to_save.save_weights(self.path, overwrite=True)
            self.best_loss = val_loss

model.fit(X_train, y_train,
          batch_size=batch_size * G, epochs=nb_epoch, verbose=0, shuffle=True,
          validation_data=(X_valid, y_valid),
          callbacks=[CustomModelCheckpoint(model, '/path/to/save/model.h5')])