Continual Learning — Automatic Recall Machines: Internal Replay, Continual Learning and the Brain — arXiv 2020-06

Author information

Abstract

A replay-based method where the auxiliary samples are generated on the fly (the starting point is reducing memory overhead); neuroscience-inspired arguments are also brought in to strengthen the motivation.

Introduction

Contrasts humans with neural networks on the ability to learn from sequential or non-stationary data, then discusses the family of replay methods. The goal of this work, Automatic Recall Machines, is to optimally exploit the implicit memory in the task model for not forgetting, by using its parameters for both inference and generation. (The core idea is to generate replay samples from the current task model's parameters alone: earlier generative-replay methods require a separate generative model that is hard to train, while directly storing samples for replay has a large memory cost.) For each batch, some of the most dissonant related samples are generated from the current real samples.
The paper provides a formal explanation for why training with the most dissonant related samples is optimal for not forgetting, building on the intuition used for buffer selection in "Online continual learning with maximal interfered retrieval" (NeurIPS 2019).

Method

The Method section is very short, about one page.
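The generation step described above can be sketched as follows: starting from the current real batch, optimize the inputs by gradient ascent so that a frozen copy of the model (before the current update) and the current model disagree as much as possible, i.e. a proxy for "most dissonant related samples". This is a minimal illustrative sketch, not the paper's exact procedure; the function name, the KL-divergence objective, and all hyperparameters here are assumptions.

```python
import torch
import torch.nn.functional as F

def generate_dissonant_samples(model_old, model_new, x_real, steps=10, lr=0.1):
    """Illustrative sketch: optimize inputs (initialized from the real batch)
    to maximize the disagreement between a frozen pre-update model and the
    current model. Names and objective are assumptions, not the paper's method."""
    x = x_real.clone().detach().requires_grad_(True)
    opt = torch.optim.SGD([x], lr=lr)  # optimize the inputs, not the weights
    for _ in range(steps):
        opt.zero_grad()
        log_p_old = F.log_softmax(model_old(x), dim=1)
        log_p_new = F.log_softmax(model_new(x), dim=1)
        # KL(old || new): large where the two models disagree most
        kl = F.kl_div(log_p_new, log_p_old, log_target=True,
                      reduction="batchmean")
        (-kl).backward()  # gradient *ascent* on the disagreement
        opt.step()
    return x.detach()
```

The generated samples would then be mixed into the current batch so that training on them pulls the new model back toward the old model's predictions where they diverge most.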

Conclusion

Mentions conditional replay.
Key points: the paper's writing is mediocre. The idea is somewhat similar to "Dreaming to Distill: Data-free Knowledge Transfer via Deep Inversion". Skimmed once with the three-pass method. The most important insight is that training with the most dissonant related samples is optimal for not forgetting; the authors' replay is then a method designed to generate such samples using only the current model.
