Why Must Large Machine Learning Models Shrink?

Bigger is not necessarily better in machine learning. Yet as researchers compete to top state-of-the-art benchmarks, deep learning models and the datasets used to train them keep growing. Whatever breakthroughs they deliver, larger models take a serious toll on budgets and on the environment.

Take GPT-3, the wildly popular natural language processing model released last summer: it reportedly cost $12 million to train (https://venturebeat.com/2020/06/01/ai-machine-learning-openai-gpt-3-size-isnt-everything/). Worse still, researchers at the University of Massachusetts Amherst found (https://arxiv.org/abs/1906.02243) that the computing power required to train a large AI model can produce more than 600,000 pounds of carbon dioxide emissions, five times the lifetime emissions of an average car.

At the pace the machine learning industry is moving, there is no sign that this compute-intensive work will slow down. OpenAI research shows that the computing power consumed by deep learning models grew an astonishing 300,000-fold between 2012 and 2018, outpacing Moore's Law. And the problem is not just training these algorithms but also running them in production, that is, at the inference stage. For many teams, practical deployment of deep learning remains out of reach simply because of cost and resource constraints.

Fortunately, researchers have found new ways to shrink deep learning models and to optimize training datasets with smarter algorithms, so that models run faster in production with less compute. There is even an industry summit dedicated to low-power, tiny machine learning (https://www.tinyml.org/summit/). Pruning, quantization, and transfer learning are three concrete techniques. They can democratize machine learning for organizations that cannot afford to spend millions of dollars moving models into production. This matters especially for "edge" use cases, where large, specialized AI hardware is physically impractical.

The first technique, pruning, has been a hot research topic in recent years. Highly cited work, including "Deep Compression" (https://arxiv.org/abs/1510.00149) and "The Lottery Ticket Hypothesis" (https://arxiv.org/abs/1803.03635), has shown that many unnecessary connections between the "neurons" of a neural network can be removed without losing accuracy, effectively making the model smaller and easier to run on resource-constrained devices. More recent papers (https://arxiv.org/abs/2004.14340) have further validated and refined these early techniques to produce smaller models with higher speed and accuracy. Some models, such as ResNet (https://www.geeksforgeeks.org/residual-networks-resnet-deep-learning/), can be pruned by roughly 90% without hurting accuracy.

The second technique, quantization, is also gaining ground. Quantization (https://www.qualcomm.com/news/onq/2019/03/12/heres-why-quantization-matters-ai) covers a range of techniques that convert large input values into smaller output values. Put differently, running a neural network on hardware can involve millions of multiply-and-accumulate operations; reducing the complexity of this arithmetic cuts memory requirements and computational cost, which can greatly improve performance.

Finally, although it is not a technique for shrinking a model, transfer learning (https://www.amazon.science/blog/when-does-transfer-learning-work) can help train a new model when data is limited. Transfer learning uses a pretrained model as a starting point: with a limited dataset, the model's knowledge can be "transferred" to a new task without retraining the original model from scratch. It is an important way to cut the compute, energy, and money spent on training models.

The key takeaway is that models can, and should, be optimized wherever possible to run with less compute. Finding ways to reduce model size and the associated compute demand without sacrificing performance or accuracy will be the next big breakthrough for machine learning.

If more people can use deep learning models in production at low cost, we will truly see innovative new applications in the real world. These applications could run anywhere, even on the smallest devices, with the speed and accuracy needed to make real-time decisions. Perhaps the best outcome of smaller models is that the industry as a whole could shrink its environmental footprint, rather than multiplying it 300,000-fold every six years.

About the author:

Sasa Zelenovic is a member of the Neural Magic team, helping data scientists discover open-source, low-cost alternatives to hardware accelerators for deep learning performance.

Original link:

https://www.datasciencecentral.com/profiles/blogs/honey-i-shrunk-the-model-why-big-machine-learning-models-must-go
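To make the pruning idea above concrete, here is a minimal numpy sketch of magnitude pruning, one common variant of the technique: the smallest-magnitude weights in a layer are zeroed out, on the assumption that they contribute least to the output. The 90% sparsity level echoes the ResNet figure cited in the article; the function name and setup are illustrative, not from any particular library.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly `sparsity`
    (a fraction in [0, 1]) of the weights become zero."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask

# Toy example: a single 64x64 weight matrix pruned to ~90% sparsity.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pruned = magnitude_prune(w, sparsity=0.9)
print(f"zeroed fraction: {np.mean(pruned == 0):.2f}")
```

In a real pipeline, pruning is typically applied iteratively during or after training, with fine-tuning between rounds to recover accuracy, and the resulting sparse weights only pay off at inference time on runtimes that exploit sparsity.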
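The quantization paragraph can likewise be sketched. Below is a simple uniform affine quantization of float32 values to int8, the kind of "large input values to small output values" mapping the article describes: each float is mapped to one of 256 levels via a scale and zero point, shrinking storage 4x and enabling cheaper integer arithmetic. This is a from-scratch illustration with assumed names, not the API of any specific framework.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Map float values onto the int8 range [-128, 127] with a uniform grid."""
    scale = (x.max() - x.min()) / 255.0          # width of one quantization step
    zero_point = np.round(-x.min() / scale) - 128  # int8 code representing 0-ish
    q = np.clip(np.round(x / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: float) -> np.ndarray:
    """Recover approximate float values from int8 codes."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(1)
x = rng.normal(size=1000).astype(np.float32)
q, scale, zp = quantize_int8(x)
x_hat = dequantize(q, scale, zp)
print("max abs reconstruction error:", np.max(np.abs(x - x_hat)))
```

The reconstruction error is bounded by half a quantization step, which is why well-calibrated 8-bit quantization usually costs little accuracy while cutting memory and compute substantially.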
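Finally, the transfer-learning recipe ("start from a pretrained model, train only a small task head on limited data") can be sketched as follows. Everything here is a stand-in: the "backbone" is a frozen random projection rather than a genuinely pretrained network, and all names are illustrative. In practice you would load a real pretrained feature extractor and fit only the new head, which is exactly what saves compute, energy, and money.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a pretrained backbone: a frozen projection + tanh.
# (A real project would load an actual pretrained network here and
# keep its weights fixed.)
W_backbone = rng.normal(size=(20, 32)) / np.sqrt(20)  # scaled to avoid saturating tanh

def features(x: np.ndarray) -> np.ndarray:
    """Frozen feature extractor: these weights are never updated."""
    return np.tanh(x @ W_backbone)

# A small labeled dataset for the *new* task.
X = rng.normal(size=(100, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Only the lightweight task head is trained: a ridge-regression
# classifier fit on the frozen features; the backbone is untouched.
F = features(X)
targets = 2.0 * y - 1.0  # map labels {0, 1} -> {-1, +1}
w_head = np.linalg.solve(F.T @ F + 1e-3 * np.eye(F.shape[1]), F.T @ targets)

acc = float(np.mean((F @ w_head > 0) == (y == 1)))
print(f"training accuracy of the new head: {acc:.2f}")
```

Because only the small head is fit, training reduces to a cheap linear solve over the limited dataset instead of a full backpropagation run through the large model.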