Which Platform Accelerates On-Device Models Best? Inside the Technology of Baidu's EasyEdge Platform

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"近年来,深度学习技术在诸多领域大放异彩,因此广受学术界和工业界的青睐。随着深度学习的发展,神经网络结构变得越来越复杂。复杂的模型固然具有更好的性能,但是高额的存储空间与计算资源消耗使其难以有效地应用在各硬件平台上。因此深度学习模型在端上部署和加速成为了学术界和工业界都重点关注的研究领域。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"一方面,有许多深度学习框架可以让开发者和研究者用于设计模型,每个框架具备各自独特的网络结构定义和模型保存格式。AI工程师和研究者希望自己的模型能够在不同的框架之间转换,但框架之间的差距阻碍了模型之间的交互操作。另一方面,由于深度学习模型庞大的参数量,直接在边缘端部署模型会产生较高的时延。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"百度EasyEdge端与边缘AI服务平台可以很好地解决上述问题。EasyEdge可以支持多种主流深度学习框架的模型输入,提供了方便的部署功能,针对业内各类主流芯片与操作系统进行了适配,省去了繁杂的代码过程,可以轻松将模型部署到端设备上。EasyEdge在集成了多种加速技术的同时对外提供多个等级的加速服务,以平衡模型推理时间和精度,一方面可以最大限度的减小模型在端上部署的延时,另一方面可以匹配更广泛的使用场景。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"EasyEdge支持多种不同类型深度学习模型的部署,包括常见的模型类型包括图像分类、检测、分割以及部分人脸检测、姿态估计。目前EasyEdge支持的经典网络种类超过60种以及多种自定义的网络类型。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"同时EasyEdge支持接入多种深度学习框架,包括飞桨PaddlePaddle、Pytorch、Tensorflow、MxNet等。为了更方便的实现部署,"},{"type":"text","marks":[{"type":"strong"}],"text":"目前EasyEdge支持部分深度学习框架模型的互转换"},{"type":"text","text":",如图1所示。例如用户想要在Intel的CPU上使用OpenVINO部署一个Pytorch模型,EasyEdge可以实现经过多次模型转换,将torch模型格式转换成OpenVINO 
IR格式,最后基于OpenVINO部署框架完成模型部署。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/7a\/9f\/7a33d79684f72cfccba4cefyy071ba9f.jpg","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图1 EasyEdge支持多种模型框架转换"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"EasyEdge对于端设备的支持也是很广泛的,既支持常见的通用芯片CPU、GPU以及通用arm设备,也支持市面上主流的专用芯片,如Intel Movidius系列,海思NNIE等,如图2所示,EasyEdge目前已建设为业界适配最广泛的端与边缘服务平台。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/0e\/8f\/0e3163fbe4ce9fb9192d61d88d98398f.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图2 EasyEdge支持多种硬件设备部署"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":" "}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"解析EasyEdge中的模型压缩技术"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 
"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"为了能实现多种网络在不同芯片的高效部署,"},{"type":"text","marks":[{"type":"strong"}],"text":"EasyEdge后台提供了多种优化操作,如模型格式转换、图优化、芯片优化、模型低精度计算优化、模型裁剪和蒸馏等"},{"type":"text","text":"。其中模型压缩技术是至关重要的"},{"type":"text","marks":[{"type":"strong"}],"text":"一环,"},{"type":"text","text":"EasyEdge中用到的模型压缩技术包括常见的"},{"type":"text","marks":[{"type":"strong"}],"text":"模型低精度计算"},{"type":"text","text":","},{"type":"text","marks":[{"type":"strong"}],"text":"结构化裁剪"},{"type":"text","text":"以及"},{"type":"text","marks":[{"type":"strong"}],"text":"模型蒸馏"},{"type":"text","text":"等。如图3所示,为了更好的适配端设备,EasyEdge 集成了多种模型压缩库,可根据实际部署情况灵活调用。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/ac\/10\/acf94d1c69571f4fa89bd59cce8f8910.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图3 EasyEdge中的模型压缩技术"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 
"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"模型低精度计算旨在通过少量的比特去表示原本32bit的浮点数据。一方面是为了压缩模型体积大小,对于较大的模型可以使端侧设备更快地将模型load到内存中,减小IO时延,另一方面,通常处理器对于定点的计算能力会强于浮点,因此量化后的模型往往可以被更快的推理计算。如图4所示,分布不规则的浮点数据被量化到少数几个定点。"},{"type":"text","marks":[{"type":"strong"}],"text":"EasyEdge支持包括常见低精度类型包括FP16和INT8"},{"type":"text","text":",其中INT8量化技术能提供最大限度的无损压缩。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/24\/29\/2485669891b5a937b60211b635df8229.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图4 模型量化"},{"type":"sup","content":[{"type":"text","text":"["}]},{"type":"sup","content":[{"type":"text","text":"1]"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"INT8量化技术的实现方法大致分为两种,训练后量化和训练中量化"},{"type":"text","marks":[{"type":"strong"}],"text":"。"},{"type":"text","text":"顾名思义训练后量化就是在已经训练好的FP32模型中插入量化节点,通过统计学方法尽可能通过少量定点数去还原原来的浮点数据,而训练中量化会在训练之前就插入模拟量化节点,在训练过程中就模拟量化后的数据去计算各个节点的output,这样模型最终会拟合收敛到模型量化后最优。如图5所示。相比之下训练中量化具有更好的精度,但是需要耗费更长的时间。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 
"}]},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/37\/78\/37ea582882cc45165b49ea87091c7178.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图5 训练量化原理"},{"type":"sup","content":[{"type":"text","text":"["}]},{"type":"sup","content":[{"type":"text","text":"2]"}]},{"type":"sup"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"EasyEdge同时具备训练中量化和离线训练量化的能力,并且会根据不同的实际情况选择不一样的量化方法"},{"type":"text","text":"。深度学习模型中,分类模型最终一般会以计算最终Layer的topK最为最终的输出结果,这种性质就决定了模型更注重最终输出的排序关系而非数值本身的大小,因此分类模型相比于基于数值回归的检测模型具有更强的量化鲁棒性。基于这一点,"},{"type":"text","marks":[{"type":"strong"}],"text":"EasyEdge的量化策略会根据模型类型灵活调整"},{"type":"text","text":",在分类相关任务中会倾向于使用离线量化技术,以缩短发布时长,而基于anchor回归的一系列检测模型中则更倾向于通过再训练来保证精度。另一方面,根据端侧设备、部署框架不同,EasyEdge采取的量化策略也会有所区别。例如在使用PaddleFluid框架将模型部署到CPU上时,较敏感的OP在量化之后会极大的影响最终精度,因此在EasyEdge中这些OP的输入输出数据类型采用FP32,而其余OP的计算会采用INT8。"},{"type":"text","marks":[{"type":"strong"}],"text":"这种Layer级别的混合精度量化策略可以很好的平衡推理速度和精度。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 
"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在离线量化过程中,会出现部分outlier数据点距离中心分布太远的情况,这会导致传统的量化策略会过大的预估量化range,而导致最终量化精度较低,如图13所示。为了应对这种情况,"},{"type":"text","marks":[{"type":"strong"}],"text":"EasyEdge集成了后校准技术"},{"type":"text","text":",通过多次迭代以寻找更合适的阈值,使量化后INT8数据分布和量化前FP32数据分布具有最小的KL散度,以此来降低量化误差。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"模型裁剪通常指的是结构化裁剪。结构化裁剪是通道级别的裁剪,如图6所示,旨在删除多余的计算通道。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/76\/cd\/76ec737f4abd755fb1yy712e920400cd.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":" 图6 模型结构化裁剪"},{"type":"sup","content":[{"type":"text","text":"["}]},{"type":"sup","content":[{"type":"text","text":"3]"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"对于某一个卷积核的裁剪,如图7所示,在中间的kernel同时裁剪掉input和output的一个通道时,其输入输出tensor对应的通道将减小,这带来两方面好处,一方面是在减小卷积核大小之后,模型体积得以减小,减少了推理过程中的IO时间,另一方面tensor本身体积被压缩,因此相比压缩之前只需要更少的内存开销。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 
"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"EasyEdge目前采取的就是这种通道裁剪技术"},{"type":"text","text":"。同时在裁剪通道的选择上,封装了基于"},{"type":"text","marks":[{"type":"strong"}],"text":"L1-norm、L2-norm和FPGM"},{"type":"sup","content":[{"type":"text","marks":[{"type":"strong"}],"text":"[8]"}],"marks":[{"type":"strong"}]},{"type":"text","text":"等多种方法,并且会根据实际情况灵活选择。另一方面,裁剪后的模型由于更改了部分Layer的shape,因此可能会影响到网络拓扑结构的合理性,"},{"type":"text","marks":[{"type":"strong"}],"text":"EasyEdge平台集成了通道调整方法"},{"type":"text","text":",实现通过广度优先查找算法,逐个矫正通道数,并且对于部分特殊难以调整的block会配置跳过,保证裁剪算法的合理性。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/0a\/fc\/0a21886f7e7b3082bd05986b364cyyfc.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":" 图7 针对一个卷积核的结构化裁剪"},{"type":"sup","content":[{"type":"text","text":"["}]},{"type":"sup","content":[{"type":"text","text":"4]"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":" 
"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"strong"}],"text":"对于部分模型的裁剪,EasyEdge采用通道敏感度分析技术"},{"type":"text","text":",通过在每个Layer上多次裁剪推理计算最终精度损失来分析各个Layer对于通道裁剪的敏感度。另一方面,"},{"type":"text","marks":[{"type":"strong"}],"text":"EasyEdge还集成了Layer级别的配置裁剪策略"},{"type":"text","text":",通过阈值过滤的方法,在相同压缩率目标下,尽可能多的保留更敏感的层,以达到最小的精度影响。举个例子,如图8所示,一个ResNet50网络中,通过敏感度分析得出结论,起始层和终止层对裁剪更敏感,因此实施更低的裁剪率,而中间层具有更多的冗余,因此采用更高的裁剪率。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"不仅如此,"},{"type":"text","marks":[{"type":"strong"}],"text":"EasyEdge在上层融合了一些简单的超参搜索的技术,一方面需要尽可能保留敏感Layer的参数信息,另一方面需要找出最匹配设定压缩率的模型"},{"type":"text","text":"。例如一个120M大小的模型,在配置裁剪率为50%的时候,可以精确裁剪到60M左右,这种技术使EasyEdge平台在模型压缩层面可以提供更差异化的服务。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/39\/e2\/39502cedcc639498d0dd5ab248d7dee2.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图8 基于敏感度的裁剪,精准的裁剪率控制"},{"type":"sup","content":[{"type":"text","text":"["}]},{"type":"sup","content":[{"type":"text","text":"5]"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 
"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"对于部分模型的加速,"},{"type":"text","marks":[{"type":"strong"}],"text":"EasyEdge使用了基于Hinton"},{"type":"sup","content":[{"type":"text","marks":[{"type":"strong"}],"text":"[9]"}],"marks":[{"type":"strong"}]},{"type":"text","marks":[{"type":"strong"}],"text":"的蒸馏技术"},{"type":"text","text":"。模型蒸馏的目的是利用大模型学习到的知识去调教更小的模型,目的是为了让小模型的精度能够逼近大模型的精度。如图9所示,一般蒸馏的方法是在同一个session中,将大模型的某些层输出和小模型的部分输出以一定的形式关联,这样在训练小模型的过程中,大模型所学到的知识会作用于小模型的梯度反向传播,促进小模型的收敛。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/86\/71\/868109710c5250bb12fec37c0eac1f71.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":" 图9 知识蒸馏功能"},{"type":"sup","content":[{"type":"text","text":"["}]},{"type":"sup","content":[{"type":"text","text":"6]"}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"本次新上功能,主要功能基于模型压缩框架"},{"type":"text","marks":[{"type":"strong"}],"text":"PaddleSlim"},{"type":"text","text":"开发,EasyEdge平台基于其中的压缩功能做了进一步的封装和优化。想了解更多相关信息可以登录github搜索PaddleSlim。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"(在CPU GPU 
We published an ultra-high-accuracy detection model on the three most common kinds of edge devices, CPU, GPU, and ARM. The exact hardware:

CPU: [Intel® Xeon® Processor E5-2630 v4](http://ark.intel.com/products/92981/Intel-Xeon-Processor-E5-2630-v4-25M-Cache-2_20-GHz)

GPU: NVIDIA Tesla V100

ARM: [Firefly-RK3399](https://www.t-firefly.com/product/rk3399.html)

The results are shown in Figure 10. In the bar chart, acc1 through acc3 denote increasing acceleration levels (the higher the level, the more channels are pruned), and the vertical axis is the network's inference latency on a single image. EasyEdge's model compression delivers clear speed gains on all three devices. The general-purpose CPU benefits the most, more than doubling in speed; this reflects the acceleration methods EasyEdge applies on each device, since combining several techniques yields the largest gains. The GPU has far more raw compute, so reducing FLOPs helps it somewhat less than the CPU and the general-purpose ARM device.

![](https://static001.infoq.cn/resource/image/7d/96/7d936a51f1c807aa41b20a7307789596.png)
Figure 10: Acceleration on different edge devices

Next, compare the speedups of different model types on the same hardware. We measured several detection models of different accuracy tiers on a Jetson device (jetson4.4-xavier): MobileNetV1-SSD, MobileNetV1-YOLOv3, and YOLOv3. As shown in Figure 11 (acc1 through acc3 as above), the new compression feature yields up to roughly a **40%** speed gain at the cost of a small accuracy loss. The faster, lightweight models gain somewhat less, because they are already built for speed, with smaller size and fewer FLOPs, leaving less room for improvement.

![](https://static001.infoq.cn/resource/image/c8/42/c8d6066d13639dff234d6470ec99c342.png)

Figure 11: Inference latency of different detection models on Jetson

The actual speedup in practice varies with the edge device and model type, and the EasyEdge platform's compression capabilities will keep being optimized and updated in future iterations.
"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"现在可以体验一下新功能,在发布模型的时候可以根据自身需求选择合适的加速方案,如图12所示。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/c5\/30\/c5c2efb51ff29287999db73facb51130.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图12 EasyEdge提供多种加速方案"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"发布模型后可以在评测页面观看sdk在端上的推理效果,如图13所示,最快的加速方案伴随着较少的精度损失,可将模型速度提升30%。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https:\/\/static001.infoq.cn\/resource\/image\/9a\/58\/9a66a4df4yy9cb7ed189e33f09d6eb58.png","alt":null,"title":"","style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":"","fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":"center","origin":null},"content":[{"type":"text","text":"图13 EasyEdge提供模型评测功能"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" 
"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"EasyEdge的能力也全面集成于飞桨企业版EasyDL和BML中,使用这两大平台,可以一站式完成数据处理、模型训练、服务部署全流程,实现AI模型的高效开发和部署。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"近期,飞桨企业版开展了2021万有引力计划"},{"type":"link","attrs":{"href":"https:\/\/ai.baidu.com\/easydl\/universal-gravitation","title":"xxx","type":null},"content":[{"type":"text","text":"https:\/\/ai.baidu.com\/easydl\/universal-gravitation"}]},{"type":"text","text":",为企业提供AI基金,可用于购买飞桨企业版EasyDL和BML公有云的线上服务,最高可兑换:6000+ 小时的自定义模型训练时长;590+ 小时的脚本调参;公有云部署400+ 小时配额;或者兑换50 个设备端的 SDK。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":" "}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"italic"}],"text":"注释:"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"italic"}],"text":"[1] Fang J, Shafiee A, Abdel-Aziz H, et al. Near-lossless post-training quantization of deep neural networks via a piecewise linear approximation[J]. arXiv preprint arXiv:2002.00104, 2020."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"italic"}],"text":"[2] Jacob B, Kligys S, Chen B, et al. Quantization and training of neural networks for efficient integer-arithmetic-only inference[C]\/\/Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 
2018: 2704-2713."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"italic"}],"text":"[3] Han S, Pool J, Tran J, et al. Learning both weights and connections for efficient neural networks[J]. arXiv preprint arXiv:1506.02626, 2015."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"italic"}],"text":"[4] Li H, Kadav A, Durdanovic I, et al. Pruning filters for efficient convnets[J]. arXiv preprint arXiv:1608.08710, 2016."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"italic"}],"text":"[5] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]\/\/Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 770-778."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"italic"}],"text":"[6] Gou J, Yu B, Maybank S J, et al. Knowledge distillation: A survey[J]. International Journal of Computer Vision, 2021, 129(6): 1789-1819."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"italic"}],"text":"[7] Wu H, Judd P, Zhang X, et al. Integer quantization for deep learning inference: Principles and empirical evaluation[J]. arXiv preprint arXiv:2004.09602, 2020."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"italic"}],"text":"[8] He Y, Liu P, Wang Z, et al. Filter pruning via geometric median for deep convolutional neural networks acceleration[C]\/\/Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition. 
2019: 4340-4349."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"italic"}],"text":"[9] Hinton G, Vinyals O, Dean J. Distilling the knowledge in a neural network[J]. arXiv preprint arXiv:1503.02531, 2015."}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","marks":[{"type":"italic"}],"text":" "}]}]}