华为云AAAI 2021论文:一站式AI平台ModelArts联邦学习服务技术揭秘

{"type":"doc","content":[{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"人工智能顶级会议AAAI 2021将于2月2日-9 日在线上召开,本次会议,华为云AI最新联邦学习成果“Personalized Cross-Silo Federated Learning on Non-IID Data”成功入选。这篇论文首创自分组个性化联邦学习框架,该框架让拥有相似数据分布的客户进行更多合作,并对每个客户的模型进行个性化定制,从而有效处理普遍存在的数据分布不一致问题,并大幅度提高联邦学习性能。该框架已被集成至华为云一站式AI开发管理平台ModelArts联邦学习服务中。"}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"背景介绍"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"联邦学习机制以其独有的隐私保护机制受到很多拥有高质量数据的客户青睐。通过联邦学习,能有效地打破数据孤岛,使数据发挥更大的作用,实现多方客户在保证隐私的情况下共赢。但与此同时,在实际应用中各个客户的数据分布非常不一致,对模型的需求也不尽相同,这些在很大程度上制约了传统联邦学习方法的性能和应用范围。为此, 在客户数据分布不一致的情况下如何提高模型的鲁棒性成为了当前学术界与工业界对联邦学习算法优化的核心目标,也就是希望通过联邦学习得到的模型能满足不同客户的需求。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"传统的联邦学习的目的是为了获得一个全局共享的模型,供所有参与者使用。但当各个参与者数据分布不一致时,全局模型却无法满足每个联邦学习参与者对性能的需求,有的参与者甚至无法获得一个比仅采用本地数据训练模型更优的模型。这大大降低了部分用户参与联邦学习的积极性。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"为了解决上述问题,让每个参与方都在联邦学习过程中获益,个性化联邦学习在最近获得了极大的关注。与传统联邦学习要求所有参与方最终使用同一个模型不同,个性化联邦学习允许每个参与方生成适合自己数据分布的个性化模型。为了生成这样的个性化的模型,常见的方法是通过对一个统一的全局模型在本地进行定制化。而这样的方法仍然依赖一个高效可泛化的全局模型,然而这样的模型在面对每个客户拥有不同分布数据时却是经常可遇而不可求的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"为此,华为云EI温哥华大数据与人工智能实验室自研了一套个性化联邦学习框架FedAMP。该框架使用独特的自适应分组学习机制,让拥有相似数据分布的客户进行更多的合作,并对每个客户的模型进行个性化定制,从而有效地处理普遍存在的数据分布不一致问题,并大幅度提高联邦学习性能。下面我们来具体看下这一新的框架FedAMP是怎么提升联邦学习性能的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"论文地址:https:\/\/arxiv.org\/abs\/2007.03797"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/1c\/07\/1cd1a010869fb3f49a75ae3b47a73a07.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图一: FedAMP的注意消息传递机制"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"算法介绍"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在这个新的联邦学习框架FedAMP中,作者首先引入了一种新颖的注意消息传递机制(Attentive message passing mechanism)。如图一所示,这种机制允许每个客户在拥有本地个性化模型, 同时在云端维持一个个性化的云端模型。FedAMP通过计算本地个性化模型两两之间的相似度来实现注意消息传递机制,从而使云端可以利用注意消息传递机制聚合本地个性化模型,得到云端个性化模型, 然后再通过本地个性化训练拉近本地个性化模型与云端个性化模型之间的距离。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/a7\/34\/a7e50fdf660d6e490081251c57043434.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图二:FedAMP伪代码"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"基于上述描述,图二给出了FedAMP伪代码。不难看出,在FedAMP的迭代中实现了一种正反馈循环,即拥有相似模型参数的客户将逐步形成越来越紧密合作。这样的合作将自适应地隐性地将相似的客户组合起来并因此形成更为高效的合作。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"文章在此基础上给出了FedAMP框架的收敛性证明,并进一步针对深度学习网络提出了一套启发式个性化联邦学习框架HeurFedAMP。"}]},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/98\/e3\/982cd785d06128a2fbc3bffdaf850ee3.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图三:最优平均测试准确率"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"结果展示"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"为了评估FedAMP及HeurFedAMP的性能,作者设计了一套更为符合实际应用场景的非均匀数据分布。如图三所示,FedAMP及HeurFedAMP在四个常见数据集上展示了比现有七种SOTA算法更高的最优平均测试准确率。相比 Google 提出的原始联邦学习框架FedAvg,FedAMP及HeurFedAMP所获得的最优平均测试准确率更是大幅提升,表现非常亮眼。"}]},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/f7\/89\/f7f6cfd4f2da72570aee2d4dc4e85e89.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图四:所有客户测试准确率分布"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"通过分析进一步统计的结果(如图四),作者发现通过FedAMP和HeurFedAMP所得到的模型对于每个客户的测试精度在统计上显著高于其他方法获得的结果。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/89\/48\/89d70f99c00bf23ee219fd6660d2ac48.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图五:对于EMNIST数据集的可视化分组结果"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"为了更好的理解FedAMP及HeurFedAMP的机理, 作者进一步分析了注意消息传递机制(如图五)。作者发现FedAMP和HeurFedAMP均成功发现了蕴含在客户之间的真实分组关系。这一发现进一步解释了FedAMP及HeurFedAMP在数据分布不均匀时性能卓越的原因。联邦学习三步骤,降低使用门槛基于华为云ModelArts平台,实现联邦学习仅需简单的三步操作:第一步:发起者创建一个联邦学习团队,定义联邦任务,并邀请参与者,如图六所示(其中更新策略可配置FedAVG,FedAMP等):"}]},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/32\/80\/32cb8af8ba15eb2dd09804721be2a780.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图六:基于ModelArts的联邦训练任务创建"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第二步:参与者同意加入联邦团队,并配置数据及资源类型,如图七所示:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/15\/9c\/15f9606e8a8107cff1ed3f1691e2319c.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/3b\/5e\/3b0e30d59da1dc7a7c2f3b74fb0c1f5e.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图七:基于ModelArts的联邦学习团队加入"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第三步:联邦训练发起者启动联邦训练,直至训练完成,如图八所示:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/a0\/95\/a004e5ce9cc8982fec11cc48e9a2b295.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图八:基于ModelArts的联邦学习训练"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"总结"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"FedAMP\/HeurFedAMP是两种简单高效的个性化联邦学习框架。通过注意消息传递机制,FedAMP\/HeurFedAMP还将天然拥有抗投毒潜力。其在数据分布不均匀时的优异表现,将为云产商吸引更多拥有高质量数据的客户参与联邦学习。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"基于上述框架,华为云一站式AI开发 ModelArts提供联邦学习特性,用户各自利用本地数据训练,不交换数据本身,只用加密方式交换更新的模型参数,实现联合建模。"},{"type":"link","attrs":{"href":"https:\/\/console.huaweicloud.com\/modelarts\/?region=cn-north-4#\/notebook\/loading?share-url-b64=aHR0cHM6Ly9jbm5vcnRoNC1tb2RlbGFydHMtc2RrLm9icy5jbi1ub3J0aC00Lm15aHVhd2VpY2xvdWQuY29tL3NuYXBzaG90L3B5dG9yY2hfZmVkYW1wX2VtbmlzdF9jbGFzc2lmaWNhdGlvbi5pcHluYg==","title":"xxx","type":null},"content":[{"type":"text","text":"算法体验链接"}]}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章