绕过硬件瓶颈,成倍提升芯片算力,软件层面深挖芯片性能可行吗?

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如果盘点近两年的行业热词和社会热词排行榜,“芯片”一定榜上有名。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"随着AI 技术在各行各业的广泛实践,应用层对深度学习模型的通用性和复杂性要求越来越高。与之相应,深度学习对芯片算力的要求随之增加。信息时代,处处都需要芯片,但是芯片却属于稀缺资源。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"业内的解决办法有两种,一种是定制芯片,一种是针对模型进行修改。通过使用小模型或者压缩模型,降低到算力的要求。两种方式各有优劣,定制芯片性能强悍但成本、周期、风险都很大;小模型或者压缩模型成本较低、周期较短,但是会导致准确度下降,很难在高精度和高性能之间取得较好的平衡。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"现有AI计算中的过多的冗余计算和运行引擎的能力有限,制约了对芯片性能的挖掘。在芯片资源供需不平衡的情况下,目前主流的做法是攻坚生产力的难题。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"也有技术团队另辟蹊径。一家叫做CoCoPIE 的 AI 公司,宣布可以通过压缩和编译协同设计技术,从软件层面挖掘现有芯片算力,有望让现有芯片性能成倍提升。于是我们找到了CoCoPIE公司负责人李晓峰。据他介绍,目前CoCoPIE 已经搭建了 CoCo-Gen 和 CoCo-Tune 等产品。这些产品能够在不额外增加人工智能专用硬件的情况下,让现有处理器实时地处理人工智能应用。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"他告诉InfoQ:“CoCoPIE 独有的 AI 软件技术栈,解决了端侧AI发展和普及的瓶颈问题,这在业界目前还是独一无二。测试数据和客户反馈都表明,与其它方案的比较优势十分明显,有较大的机会在端侧设备智慧化的浪潮中胜出。”"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"绕过硬件瓶颈,成倍提升芯片算力,软件层面提升芯片性能是否可行?为了进一步了解CoCoPIE 采用的技术,得到这个问题的答案,InfoQ 日前采访了李晓峰。"}]},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"从软件层面榨出芯片算力"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q:通过优化压缩和编译协同设计,解决性能问题,具体的技术实现和学术论文支持是什么?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"李晓峰:CoCoPIE 的技术核心是创始团队中的三位教授,他们都是天分很高又异常勤奋的人,在各自的领域都是佼佼者。其中王言治教授侧重AI 模型算法,任彬教授侧重AI模型编译,慎熙鹏教授侧重 AI 的系统引擎。这几个研究领域在技术上是一个很好的互补,构成了 AI 计算优化技术的铁三角,相互不可或缺,共同打造公司的核心竞争力,也算是一种天作之合。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"先介绍一下AI 模型优化执行的基本技术。一个 AI 任务在设备上进行运行,实际上就是把 AI 模型在映射为芯片指令序列的过程。压缩和编译是执行的两个关键步骤。先通过权重剪枝、权值量化的方式对模型进行结构层面的压缩优化,减少模型本身的复杂度。再针对压缩后的模型优化编译,生成执行代码。这样一方面AI任务的执行效率更高,另一方面可以充分利用芯片能力。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"但是压缩和编译这两步,目前在业内并没有做得非常好的。现有的技术要么只能压缩,要么只能编译,或者虽然两者都有,但它们在设计上相互隔离,没有很好的协同设计,所以很难达到既保证推理精度又保证运行效率的效果。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"CoCoPIE 技术的核心在于压缩和编译两个步骤的“协同设计”,即在设计压缩的时候考虑编译器及硬件的偏好从而选择压缩的方式,在设计编译器的时候利用压缩模型的特点来设计相应的编译优化方法。对应压缩和编译两个步骤,我们为 CoCoPIE 框架设计了两个组件:CoCo-Gen 和 CoCo-Tune。CoCo-Gen 通过将基于模式的神经网络剪枝与基于模式的代码生成相协同,生成高效的执行代码;CoCo-Tune 则能够显著缩短 DNN 模型压缩及训练的过程。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"CoCoPIE 的技术是通用的,可广泛地应用于各种 CPU、GPU、DSP及AI专用芯片,如NPU、APU、TPU 等。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"CoCoPIE 在相关领域发表了大量的顶级国际会议论文,从上层 AI 应用优化技术,AI 模型设计技术,到编译器优化技术,底层硬件相关优化技术。特别是 CoCoPIE 的技术介绍文章发表在今年6月份的 Communications of ACM 上,这是美国计算机学会的旗舰刊物,与今年的图灵奖同期发布,这说明学术界对 CoCoPIE 的工作的高度认可。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q:目前的核心产品 CoCo-Gen 和 CoCo-Tune 可以单独使用吗?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"李晓峰:这两个产品提供了我们AI 模型优化的关键技术,CoCo-Gen 通过将基于模式的神经网络剪枝与基于模式的代码生成相协同,生成高效的执行代码;CoCo-Tune 则能够显著缩短 DNN 模型压缩及训练的过程。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"CoCo-Gen 和 CoCo-Tune 可以单独使用。它们构成了 CoCoPIE 工具链的核心,所以优先推出。作为连接上层AI任务和下层硬件的桥梁,CoCoPIE 的产品体系会不断增添新成员。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q:从软件层面解决芯片荒问题,行业内是否有类似的软件技术?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"李晓峰:目前的端侧AI 技术栈,只有 CoCoPIE 的优化技术可以在主流芯片上达到或超过 AI 专用芯片的性能,这是通过大量实测验证得到的结论。目前已知的技术,要么侧重压缩,要么侧重编译,没有见到二者协同设计的技术,这是CoCoPIE 的专利技术。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"因为虽然当前主流芯片已经具有很好的潜力,但要发挥它们的这个潜力,必须通过压缩和编译的协同设计,通过精巧的算法,把AI 任务转换为合适的矢量计算,并很好地控制总体计算量。这个正是 CoCoPIE 的技术关键所在。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q:尺有所短、寸有所长,这种技术当下的优势和局限是什么?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"李晓峰:CoCoPIE 的优势在于,一方面是使得大量原来在端侧设备上无法正常运行的AI 任务也可以运行,另一方面原来在端侧必须通过专用AI芯片才能运行的 AI 任务,现在通过主流芯片也可以运行。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"AI 任务的执行总是会受到芯片算力的制约,CoCoPIE 技术的能力总有自己的局限,解放出来的 AI 算力也不是无限的。另外,CoCoPIE 技术目前侧重的是 AI 推理任务,至于专门的 AI 训练任务的加速不是我们的重点。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q:CoCoPIE 的技术能够让芯片算力提高 3-4 倍,让芯片效能最高可提升 5-10 倍,衡量标准是什么?对于不同芯片都能实现这种水平吗?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"李晓峰:这些数据是实测出来,通过了同行评审,也通过了客户的认定。也就是说,在技术上有理论支撑,在实践上有产品落地。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"例如,用通用芯片和谷歌TPU-V2 的对比:使用 CoCoPIE,VGG-16 神经网络在移动设备 Samsung Galaxy S10上比在 TPU-V2 上效能提升了近18倍,ResNet-50 则取得了4.7 倍的效能提升。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在同样的Samsung Galaxy S10 平台上,运行行为识别的 C3D 和 S3D 两个任务,CoCoPIE 的速度比 Pytorch Mobile分别提高了 17 倍和 22 倍。运行 MobileNetV3,CoCoPIE的速度比 TensorFlow Lite 和 Pytorch Mobile分别提升了近 3 倍和 4 倍。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"另外,对功耗测试(Qualcomm Trepn power profiler)的结果显示,CoCoPIE 与 TVM相比,执行时间缩短了 9 倍以上,功率却仅多消耗了不到 10%。在基于 AQFP 超导的DNN 推理加速方面的工作中,通过低温测试验证,我们的研究在所有硬件设备中也是迄今为止能量效率最高的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q:效能的提升不会凭空得来,这项软件技术的运行对硬件环境有哪些要求?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"李晓峰:是的。CoCoPIE 技术对硬件环境的要求不高,主流芯片都可以满足,具体来说就是芯片需要有矢量计算能力,比如ARM的NEON指令集,Intel的SSE、AVX指令集,RISC-V的向量扩展,等,都是当前CPU普遍存在的,GPU和APU\/NPU就更不用说了。当然如果没有矢量计算能力,CoCoPIE 的技术仍然可以发挥作用,但是会受较大限制。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q:技术实践过程中遇到的主要挑战是什么?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"李晓峰:CoCoPIE 的技术在实践中遇到的主要挑战是,我们目前的产品体系还不是很完善,而客户的需求也是多种多样,具体的服务方式千差万别,因此目前我们还没有进行大规模的商业推广,主要是针对一些关键领域、关键客户,比如选择有代表性的主流芯片提供商、设备提供商、软件服务提供商等,按照我们的产品发展策略有选择地提供服务。我们会通过这个过程,与各种客户需求进行磨合,不断探索最佳的产品服务体系。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q:这项技术目前是否有实际的落地案例?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"李晓峰:目前合作客户已经有十几个,这些客户中有多个领域的,比如腾讯、滴滴、某著名芯片平台提供商、某著名手机厂商、还有美国交通部、全球知名服务提供商Cognizant 等。"}]},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"CoCoPIE 不是技术过渡期产品"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q:主流处理器是实时人工智能的更优解,您认同这个观点吗?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"李晓峰:是的。对于端侧设备来说,主流处理器是实时人工智能的更优解。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"1、从功能上说,端侧设备资源受限,应用场景千变万化,而专用的AI处理器功能相对固化,应对端侧的异常灵活的功能需求有较大的挑战。主流处理器如果通过软件技术已经可以处理AI问题了,当然就没有必要再节外生枝。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"2、从技术上说,为解决AI问题的专用芯片的做法其实都是加大矢量计算的处理能力、提高内存访问效率,有的称为张量计算单元。如前面所说,当前的主流芯片其实都已经有了矢量计算单元。这些矢量计算单元相比专用张量处理芯片虽然能力可能要弱一些,但执行当前的AI 任务一般也足够了,前提是必须有优异的模型压缩和编译工具,能够通过精巧的设计,把AI 任务转换为合适的矢量计算,并能很好地控制总体计算量。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"3、从成本上说,主流处理器大规模量产,价格相对专用芯片要便宜很多,而且供货渠道的选择也更多。除了采购 AI 芯片本身的成本,还有一些隐性成本。多一个芯片会造成 PCB 、散热等的重新设计,封装也是额外成本。很多设备比如智能耳机、微型医疗设备等对这些因素很敏感。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"4、需要强调的是,CoCoPIE 的技术并不排斥专用 AI 芯片。作为 AI 全栈的软件优化技术,CoCoPIE 也支持 AI 处理器,让 AI 处理器发挥出更大的效能。因此,我们也乐见 AI 处理器在特定的应用领域发挥重要作用。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q:CoCoPIE 是一个技术过渡期的产品吗?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"李晓峰:恰恰相反,因为端侧AI 的泛在普及化才刚刚开始,CoCoPIE 作为该领域先进技术的引领者,我们认为它的未来发展空间非常广大。我们内部有一整套的产品发展战略,未来的产品形态会和现在有所不同,但核心技术是一脉相承的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"另外假设未来有朝一日AI 专用芯片得到普及,也不会对 CoCoPIE 的生存空间带来不利影响。首先 AI 专用芯片永远不会比通用芯片更普及,对于通用芯片已经能做好的工作,可能还是会在通用芯片上做更有效、也更灵活;其次, AI 芯片即使发展了,也还是离不开编译优化技术。我们的技术会让 AI 芯片的能力进一步提升。其实通用芯片也一样,比如 CPU 或 GPU ,不论多便宜、性能多高,仍然需要高性能的编译器支持,比如 LLVM 或 NVCC 等。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"长期来看,AI 技术栈的发展对 CoCoPIE 软件技术的需求只会越来越大,就像手机 SoC芯片随着时间的发展,功能不是越来越简单了,而是越来越强大了,8 核手机很常见,对软件技术的要求也越来越高。事实上,AI计算对算力的要求远远超过AI硬件能力的发展速度。据美国MIT大学的研究报告,近年来AI计算的算力需求发展是每两年700倍,这个发展速度只通过硬件能力的提升是根本无法满足要求的,必须在软件技术上有所突破。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q:您怎么看待如今热门的大模型技术?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"李晓峰:大模型在训练数据足够多的情况下,往往能取得更高的AI 能力。这个是人类对未知世界探索的必经之路。这个就像在高能物理学界,人们为了做出新的发现,不断建设能量更高的粒子对撞机。但是大模型这个事情也需要从两方面看,如果一味地追求更大的模型,所需要的训练数据量、训练时间、算力支持、能量消耗等等都不断提高,边际效益会越来越小,这个趋势显然不可持续。可能未来只会在个别重大挑战的工作上持续这种增大模型的做法。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这里给一个关于大模型收益方面的数字,ResNet 是2015 年发布的一个著名计算机视觉模型。该模型的改进版本称为 ResNeXt,于 2017 年问世。与 ResNet 相比,ResNeXt 所需的计算资源要多 35%(以总浮点运算来衡量),准确度却只提高了 0.5%。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"再给一个关于碳排放方面的数字,据福布斯杂志去年的一篇报道,自从深度学习在2012 年开始大发展后,产生一流的人工智能模型所需的计算资源,平均每 3.4 个月翻一番;这意味着,训练 AI 模型所需的能量从 2012 年到 2018 年就已经增加了 30 万倍。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如果做一个对比,深度学习的能力即使和一个婴儿相比,在很多方面还仍有很大差距,更别说与成人大脑相比了。而我们成人的大脑运转,也只需要20 瓦左右的能量,这只能够给一个灯泡供电。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们显然不可能只是通过扩大模型来提高机器的智能,而学界也在不断探索新的方法。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"Q:可否谈谈如今正在上浮的基础软件行业?简单聊聊您对芯片行业的判断?"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"李晓峰:基础软件的重要性越来越大。这有两方面的原因,一个是近年来技术发展很快,对基础软件有实际需求,提出了必要性;另一个是过去已经培养了大量高质量的工程师,提供了可能性。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"芯片行业还会继续蓬勃发展。科技发展趋势就是不断地将数字世界渗透到物理世界的各个方面,而数字化的根本体现就是芯片在各种设备的不断植入。上一波设备智能化的核心手段是在设备上植入芯片、能跑应用,而这一波智能化的核心手段则是设备上能跑深度神经网络,这是浩浩荡荡的发展大势。这也是CoCoPIE 的根本机会所在。"}]},{"type":"heading","attrs":{"align":null,"level":1},"content":[{"type":"text","text":"写在最后"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"李晓峰告诉InfoQ, CoCoPIE 的技术领先优势至少有几年的时间,足够在计算机领域争得一席之地。CoCoPIE 的技术并非是为了解决芯片荒的问题,而是为了实现AI任务的普及化,遇到芯片荒只是恰巧的事情,这算是 CoCoPIE 技术能力的副产品。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在李晓峰看来,面对如今复杂多样的场景和终端,现有技术水平无法完全发挥主流芯片的能力,所以才有了 CoCoPIE 的发展空间。可以确认的是,CoCoPIE 的发展为将芯片能力“物尽其用”,提供了一种新思路。"}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章