Why Big Machine Learning Models Must Shrink

Bigger is not always better in machine learning. Yet deep learning models, and the datasets used to train them, keep growing as researchers race to top state-of-the-art benchmarks. However groundbreaking they may be, bigger models have serious consequences for budgets and for the environment.

Take GPT-3, the wildly popular natural language processing model released last summer, which reportedly cost [$12 million](https://venturebeat.com/2020/06/01/ai-machine-learning-openai-gpt-3-size-isnt-everything/) to train. Worse still, [researchers at UMass Amherst found](https://arxiv.org/abs/1906.02243) that the computing power needed to train a large AI model can produce more than 600,000 pounds of CO2 emissions, five times the lifetime emissions of an average car.

At the pace the machine learning industry is moving, there is no sign that this compute-intensive work will slow down. Research from OpenAI shows that the compute used to train deep learning models grew an astonishing 300,000-fold between 2012 and 2018, outpacing Moore's Law. And the problem is not just training these algorithms, but running them in production, that is, at the inference stage. For many teams, practical deployment of deep learning models remains out of reach due to sheer cost and resource constraints.

Fortunately, researchers have found new ways to shrink deep learning models and to optimize training datasets with smarter algorithms, so that models run faster in production with less compute. There is even an industry summit dedicated to low-power, [tiny machine learning](https://www.tinyml.org/summit/). Pruning, quantization, and transfer learning are three concrete techniques. They can democratize machine learning for organizations that cannot afford to spend millions of dollars moving models into production. This matters especially for "edge" use cases, where large, specialized AI hardware is physically impractical.

The first technique, pruning, has been a hot research topic in recent years. Highly cited papers, including "[Deep Compression](https://arxiv.org/abs/1510.00149)" and "[The Lottery Ticket Hypothesis](https://arxiv.org/abs/1803.03635)", have shown that some of the unnecessary connections between the "neurons" in a neural network can be removed without losing accuracy, effectively making the model smaller and easier to run on resource-constrained devices. [More recent papers](https://arxiv.org/abs/2004.14340) have further validated and refined these early techniques, producing smaller models that reach even higher speed and accuracy. For some models, such as [ResNet](https://www.geeksforgeeks.org/residual-networks-resnet-deep-learning/), roughly 90% of the weights can be pruned without hurting accuracy.

The second technique, quantization, is also gaining popularity. [Quantization](https://www.qualcomm.com/news/onq/2019/03/12/heres-why-quantization-matters-ai) covers a range of techniques that convert large input values into smaller output values. Running a neural network on hardware can involve millions of multiply-and-accumulate operations; reducing the numerical precision of these operations shrinks memory requirements and computation cost, which can dramatically improve performance.

Finally, although it is not strictly a model-shrinking technique, [transfer learning](https://www.amazon.science/blog/when-does-transfer-learning-work) can help train a new model when data is limited. Transfer learning starts from a pre-trained model. With a limited dataset, the model's knowledge can be "transferred" to a new task without retraining the original model from scratch. It is an important way to cut the compute, energy, and money required to train a model.

The most important takeaway is that models can, and should, be optimized to run with as little compute as possible. Finding ways to reduce model size and the associated compute requirements, without sacrificing performance or accuracy, will be the next big breakthrough in machine learning.

If more people can use deep learning models in production at low cost, we will truly see innovative new applications in the real world, applications that can run anywhere, even on the smallest devices, at the speed and accuracy needed for real-time decisions. Perhaps the best outcome of smaller models is that the industry as a whole can shrink its environmental footprint, instead of multiplying it 300,000-fold every six years.

**About the author:**

Sasa Zelenovic is a member of the Neural Magic team, where he helps data scientists discover open-source, inexpensive alternatives to hardware accelerators for deep learning performance.

**Original link:**

https://www.datasciencecentral.com/profiles/blogs/honey-i-shrunk-the-model-why-big-machine-learning-models-must-go
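To make the pruning idea concrete, here is a minimal sketch of unstructured magnitude pruning in NumPy. This is an illustration of the core idea (drop the smallest-magnitude weights), not the exact method of the papers above; results like 90%-sparse ResNet come from iterative prune-and-retrain pipelines, and the function and variable names here are purely illustrative.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights so that `sparsity`
    fraction of the entries become zero (unstructured pruning)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # Threshold = magnitude of the k-th smallest weight (flattened sort).
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)   # a toy weight matrix
pruned = magnitude_prune(w, sparsity=0.9)
print(f"fraction of weights removed: {np.mean(pruned == 0):.2%}")  # ~90%
```

In a real pipeline the pruned mask is applied between fine-tuning rounds so the remaining weights can recover the lost accuracy; sparse inference runtimes then skip the zeroed connections entirely.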
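Quantization can likewise be sketched in a few lines. The snippet below shows one common scheme, affine (asymmetric) quantization of float32 values to int8, which cuts memory for the tensor by 4x; it is a simplified stand-in for what frameworks do, and all names are illustrative.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Affine quantization of float32 values to int8. Returns the
    quantized tensor plus the scale and zero-point needed to map back."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 or 1.0              # guard against a constant tensor
    zero_point = round(-lo / scale) - 128         # so that x == lo maps to -128
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Map int8 codes back to approximate float32 values."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(1)
x = rng.normal(size=1000).astype(np.float32)
q, scale, zp = quantize_int8(x)
x_hat = dequantize(q, scale, zp)
print("max abs reconstruction error:", np.max(np.abs(x - x_hat)))
print(f"memory: float32 = {x.nbytes} bytes, int8 = {q.nbytes} bytes")
```

The reconstruction error is bounded by roughly one quantization step (`scale`), which is why accuracy often survives int8 inference, while every multiply-accumulate now runs on cheap 8-bit integer units.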
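The transfer-learning workflow described above, freeze a pre-trained backbone and train only a small new head, can be sketched as follows. The "backbone" here is just a fixed random matrix standing in for a real pre-trained network (in practice it would be, say, a ResNet trained on ImageNet); everything in this snippet is a hypothetical toy setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained backbone: its weights are FROZEN and are
# never updated below -- this is the "knowledge" being transferred.
W_backbone = rng.normal(size=(16, 32))

def features(x: np.ndarray) -> np.ndarray:
    """Frozen feature extractor (ReLU of a fixed projection)."""
    return np.maximum(x @ W_backbone, 0.0)

# A small labeled dataset for the NEW task.
X = rng.normal(size=(200, 16))
y = (X[:, 0] > 0).astype(np.float64)

# Train only a tiny linear head on the frozen features. Closed-form
# least squares keeps the sketch short; in practice this would be a
# few cheap epochs of gradient descent.
F = np.hstack([features(X), np.ones((len(X), 1))])   # features + bias column
theta = np.linalg.lstsq(F, 2 * y - 1, rcond=None)[0]

acc = np.mean(((F @ theta) > 0) == y)
print(f"trainable parameters: {theta.size} (backbone has {W_backbone.size}, untouched)")
print(f"training accuracy with a frozen backbone: {acc:.0%}")
```

The point is the parameter count: only the 33-weight head is fit, while the backbone is reused as-is, which is exactly where the savings in compute, energy, and money come from.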