Objective Video Clarity Assessment in 全民K歌 (WeSing): A Technical Share

When we talk about video clarity, what are we really talking about?

## 1. Background

Clarity is often equated with resolution and bitrate, and in the PGC era that was largely true: films, TV series, and news content are recorded, edited, and compressed on professional equipment, so the carefully produced source video represents the highest attainable clarity, and operations such as downsampling to a lower resolution or raising the QP to squeeze the bitrate discard useful information and degrade the picture. In such scenarios we can measure the subjective quality of the video a user receives with metrics such as peak signal-to-noise ratio (PSNR) or SSIM, which models characteristics of human vision: the closer the received video is to the source, the higher its clarity. In the UGC era, however, users record with a wide variety of devices and very uneven levels of skill, so there is no longer a high-quality source to serve as a reference. A video may look bad because of compression and transmission, or because of capture problems such as low-light noise and camera shake; clarity can then only be judged against the everyday viewing experience of the Human Visual System (HVS). Moreover, increasingly diverse media content such as game video and screen sharing makes it even harder to build a general yet effective clarity assessment algorithm. This article presents the no-reference clarity assessment algorithm that the Tencent Media Lab and the 全民K歌 (WeSing) team developed jointly for the specific livestream-host scenario, covering how we collected useful annotations in this niche scenario, how we trained the model, and how we aggregate and analyze the data reported after deployment:

- How building a subjective-score dataset differs from common CV annotation
- The problems the clarity algorithm focuses on, with result analysis
- A discussion and analysis of low-quality videos

Sample output of the objective no-reference quality algorithm: predicted scores, bitrate, and resolution for recently collected videos (since at most three videos can be uploaded, they were transcoded to GIF, which may change quality slightly):

![Clarity score: 84.91, bitrate: 2014 kb/s, resolution: 720x1280](https://static001.infoq.cn/resource/image/1d/e5/1df171fd6cba745ab8d288c3773ecfe5.gif)

![Clarity score: 40.74, bitrate: 1906 kb/s, resolution: 720x1280](https://static001.infoq.cn/resource/image/eb/f4/eb7e71c0cyye1ebf6d6e5627faf2eff4.gif)

![Clarity score: 90.89, bitrate: 9096 kb/s, resolution: 720x1280](https://static001.infoq.cn/resource/image/03/6d/03bbe9d579b3dcfc77694b0af2f8606d.gif)

![Clarity score: 55.20, bitrate: 2019 kb/s, resolution: 720x1280](https://static001.infoq.cn/resource/image/3c/8b/3c19e3a835550a007c7384059b6b428b.gif)
## 2. Building the Dataset

In recent years, rank-learning-based quality assessment algorithms have shown clear gains on public IQA datasets such as TID2013, LIVE Challenge, and KonIQ-10K. The unsupervised idea behind rank learning alleviates, to some extent, the dependence on subjectively annotated data by synthesizing ranked training pairs, and thus achieves good performance on subjective datasets that remain small compared with those of other CV tasks. It still cannot close the gap between the training distribution and the distribution of real application samples, however, so a sufficiently large dataset for the target scenario remains an indispensable step in deploying a quality assessment algorithm.

Typical CV annotation targets objective information about the content, such as object classes, locations, and masks, so a small number of trained annotators suffices. QA datasets, in contrast, capture the average subjective opinion of a broad population on the same media content, i.e. information about the viewers rather than the content. One must collect enough ratings from enough subjects and filter out individual preferences to approximate the true opinion of the general audience, the so-called mean opinion score (MOS). Subjective datasets are generally built by following ITU recommendations. The widely used TID2013 dataset contains 3,000 distorted images rated in a laboratory setting by 971 distinct participants, for a total of 524,340 pair-wise comparisons, about 170 ratings per image. These numbers show how time- and labor-intensive subjective dataset construction is, which is the main reason existing subjective datasets are limited in size.

With sufficient funding, subjective scoring can be crowdsourced, but handing the task directly to crowd testers outside the company introduces many uncontrolled factors and therefore noise: rest intervals between sessions cannot be enforced, and the same user may participate in scoring repeatedly. For this reason we began building a subjective testing platform for video and audio in 2018. Through mechanisms such as an anti-abuse wall and whitelists, and after long-term screening across many different tasks, we filtered out freeloaders who gamed the tasks for rewards and gradually retained a pool of relatively stable raters who participate in our subjective scoring tasks regularly. This lets us obtain fairly reliable subjective scores through crowdsourcing.

Dataset construction details:

- Content: 2,595 videos of 5 s each, 2,135 from WeSing and 460 from Weishi (微视)
- Scoring scheme: three categories (good / fair / bad)
- Participants: 134 distinct users
- Ratings: 30 sessions × 100 videos × 60 raters = (2,595 + 405) × 60 = 180,000, where the 405 redundant videos are used for consistency checks
- Valid ratio: 85.3%; 264 session groups of invalid ratings were discarded based on score bias, consistency, and outlier detection

Scoring scheme: instead of the common five-category scale, we chose a simpler three-category scale to reduce the complexity of crowdsourced rating. This avoids confusion to some extent and also simplifies validation against the embedded anchor videos, making the data easier to clean.

A sample of the Ksong Dataset; the last row shows clips sourced from Weishi:

![](https://static001.infoq.cn/resource/image/87/5a/87425b107bf043664521a580b442565a.png)

Data distribution: to keep livestreams smooth, the WeSing livestream scenario sacrifices some bitrate compared with short-video scenarios, which limits the proportion of high-quality videos. We therefore used face detection plus a second round of manual filtering to select about 460 higher-bitrate Weishi short-video clips similar to the WeSing scenario, most of them high quality, making the source distribution more balanced.
Raw user scores, where options 1, 2, and 3 correspond to low, medium, and high quality:

![](https://static001.infoq.cn/resource/image/80/24/809bc95a8b2225629163cc47ae10ac24.png)

Data cleaning: although the participating raters are vetted, all scores still need cleaning. Each session a user completes contains 100 videos and takes roughly 15 minutes; about 13 randomly chosen videos in each session appear twice. If a user's two scores for such an embedded anchor video differ too much, say 3 on one viewing and 1 on the other, all of that user's data for the session is discarded as invalid. A session is also discarded if the user gives nearly the same score throughout, e.g. 80% of the ratings are 2. In addition, each video receives about 60 ratings; any single rating that deviates too far from the mean is dropped as an outlier.
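The cleaning rules above can be sketched in a few lines. This is a minimal illustration, not the production pipeline: the exact thresholds (anchor tolerance, dominant-score ratio, outlier sigmas) are not given in the article, so the values here are assumptions.

```python
import statistics

# Illustrative thresholds -- assumptions, not the production values.
ANCHOR_MAX_DIFF = 1      # anchor rated 3 then 1 (diff 2) invalidates the session
DOMINANT_RATIO = 0.8     # e.g. 80% of a session's ratings being the same score
OUTLIER_SIGMAS = 2.0     # per-video deviation allowed from the mean rating

def session_is_valid(scores, anchor_pairs):
    """scores: all ratings (1/2/3) in one 100-video session.
    anchor_pairs: [(first_score, second_score)] for the ~13 repeated videos."""
    # Rule 1: repeated anchor videos must be rated consistently.
    if any(abs(a - b) > ANCHOR_MAX_DIFF for a, b in anchor_pairs):
        return False
    # Rule 2: near-constant scoring suggests an inattentive rater.
    if max(scores.count(v) for v in (1, 2, 3)) / len(scores) >= DOMINANT_RATIO:
        return False
    return True

def video_mos(ratings):
    """Drop per-video outlier ratings, then average the remaining ~60."""
    mean = statistics.mean(ratings)
    std = statistics.pstdev(ratings)
    kept = [r for r in ratings
            if std == 0 or abs(r - mean) <= OUTLIER_SIGMAS * std]
    return statistics.mean(kept)

# A rater who scores the same anchor video 3 and then 1 is rejected outright.
assert not session_is_valid([2] * 100, anchor_pairs=[(3, 1)])
# 60 ratings of one video collapse to a single MOS; the two stray 1s are
# outliers and get dropped before averaging.
print(round(video_mos([3] * 40 + [2] * 18 + [1] * 2), 2))  # -> 2.69
```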
The final MOS distribution of the dataset is shown below. Most videos fall into the low-quality interval [1, 2], which again confirms the need to add higher-bitrate sources.

![Subjective score distribution of the dataset](https://static001.infoq.cn/resource/image/c5/d8/c514cf7053b9bb6824fd9e56628a58d8.png)

## 3. Algorithm and Analysis

Our video clarity assessment algorithm takes individual video frames as input and does not use additional temporal information. The overall improvements, covering preprocessing, the model, and the training procedure, can be summarized in three points:

- Larger input: a 672x448 input size, much closer to 720p
- Hyper-column structure: raises the influence of low-level features on the quality prediction
- Rank learning: uses rank order to reinforce training

Comparison of different scaling functions; here, 4x downsampling followed by nearest-neighbor upsampling:

![](https://static001.infoq.cn/resource/image/aa/08/aa76ff0015228ba302959e3eeae27b08.png)

Larger input: traditional quality metrics such as SSIM and BRISQUE achieve decent performance relying only on low-level features, and unless run multi-scale they usually take the original resolution as input. Common CNN input sizes such as 224 discard much of the image's information, hurting downstream performance. Our experiments show that feeding larger images through a fully convolutional network (FCN) clearly improves prediction. Since the aspect ratios in our scenario are mostly 16:9 or 4:3, and to avoid artifacts from non-uniform stretching, we chose an input size of (224x3)x(224x2) = 672x448 to make fuller use of a bounded input. As illustrated below, each input frame is scaled, preserving aspect ratio, until its long or short side reaches 672 or 448, then padded to 672x448 with zero-valued black bars; frames in landscape orientation are first rotated by 90 degrees.
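The letterbox preprocessing above can be sketched as follows. This is an assumption-laden illustration: the interpolation method and the padding position are not specified in the article, so nearest-neighbor index sampling and top-left alignment are used here for a dependency-light sketch.

```python
import numpy as np

TARGET_H, TARGET_W = 672, 448  # portrait target size from the article

def letterbox(frame: np.ndarray) -> np.ndarray:
    """frame: HxWx3 uint8. Rotate landscape input, scale preserving aspect
    ratio to fit 672x448, and zero-pad the remainder (black bars)."""
    h, w = frame.shape[:2]
    if w > h:                      # landscape -> rotate 90 degrees first
        frame = np.rot90(frame)
        h, w = w, h
    scale = min(TARGET_H / h, TARGET_W / w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbor resampling via index maps (interpolation is assumed).
    ys = np.clip((np.arange(new_h) / scale).astype(int), 0, h - 1)
    xs = np.clip((np.arange(new_w) / scale).astype(int), 0, w - 1)
    resized = frame[ys][:, xs]
    out = np.zeros((TARGET_H, TARGET_W, 3), dtype=frame.dtype)
    out[:new_h, :new_w] = resized  # padding alignment is an assumption
    return out

frame = np.full((1280, 720, 3), 255, dtype=np.uint8)  # a 720x1280 portrait frame
print(letterbox(frame).shape)  # -> (672, 448, 3)
```

For a 720x1280 portrait frame the scaled content occupies 672x378, and the remaining 70 columns are black padding.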
![Core network structure](https://static001.infoq.cn/resource/image/5d/ce/5d12cc6777abaf7505540afcea4764ce.png)

Hyper-column structure: as shown in the feature-extraction module above, we borrow the hyper-column structure from semantic segmentation. Global average pooling is applied to the last layer of each block, the feature vectors extracted at the different levels are concatenated, and an FC layer produces the final score. Compared with using only the last layer, hyper-columns provide more local positional detail for segmentation; for quality assessment they contribute more low-level distortion information, such as gradient changes.

Rank learning: besides the usual L1 loss for regression training, we also use a hinged ranking loss that exploits the score difference between pairs of videos to reinforce learning of their rank order:

- L1 loss: L_reg = sum |mos - pred|
- Hinged rank loss: L_rank = sum max(0, thres - (mos_a - mos_b) * (pred_a - pred_b))
- Overall loss: L = L_rank + lambda * sum(|mos_a - pred_a| + |mos_b - pred_b|)
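A numeric sketch of the combined loss above; `thres` and `lam` are hyperparameters whose production values the article does not give, so the defaults here are placeholders.

```python
import numpy as np

def hinged_rank_loss(mos_a, pred_a, mos_b, pred_b, thres=0.5):
    # Penalize pairs whose predicted order disagrees with (or is less
    # separated than) the subjective order.
    return np.maximum(0.0, thres - (mos_a - mos_b) * (pred_a - pred_b)).sum()

def overall_loss(mos_a, pred_a, mos_b, pred_b, thres=0.5, lam=1.0):
    # L = L_rank + lambda * (|mos_a - pred_a| + |mos_b - pred_b|)
    l1 = np.abs(mos_a - pred_a).sum() + np.abs(mos_b - pred_b).sum()
    return hinged_rank_loss(mos_a, pred_a, mos_b, pred_b, thres) + lam * l1

mos_a, mos_b = np.array([3.0]), np.array([1.0])          # a is clearly better
good = overall_loss(mos_a, np.array([3.0]), mos_b, np.array([1.0]))
bad = overall_loss(mos_a, np.array([1.0]), mos_b, np.array([3.0]))  # order flipped
print(good, bad)  # -> 0.0 8.5
```

Accurate regression with the correct ordering costs nothing; flipping the predicted order is penalized by both terms.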
Training proceeds in two steps: pre-training on the public KonIQ-10K dataset, then fine-tuning on the Ksong Dataset. Both runs use the same hyperparameters:

```json
{
  "arch": "resnet18 or other backbones",
  "epochs": 100,
  "batch_size": 256,
  "opt_id": "Adam",
  "lr": 1e-4,
  "loss_type": "reg+rank",
  "workers": 24,
  "shuffle": 1,
  "fixsize": [672, 448]
}
```

![KonIQ-10K: comparison with SOTA methods](https://static001.infoq.cn/resource/image/ba/7c/ba45f53953b0ba8c1e770b81d6aced7c.png)

On KonIQ-10K, BIQI, BRISQUE, DIIVINE, and HOSA are methods using traditional hand-crafted features; the best of them, HOSA, reaches PLCC and SRCC around 0.8 (*PLCC and SRCC range from -1 to 1: the closer to 1, the stronger the positive correlation; -1 is perfect negative correlation and 0 means no correlation. PLCC measures linear agreement, SRCC measures monotonicity*). Against recent CNN-based methods such as DIQA and learning-from-rank IQA, our algorithm is on par with the state of the art, with both SRCC and PLCC above 0.9.

![Performance on KsongDataset using different backbone CNN models](https://static001.infoq.cn/resource/image/96/ba/96d71fda9c76c863de248e07dcb91aba.png)
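For reference, the two correlation metrics can be computed in a few lines; this dependency-free sketch mirrors what `scipy.stats.pearsonr` and `spearmanr` do (tie handling in the rank step is omitted for brevity).

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation coefficient between MOS and predictions."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

def srcc(x, y):
    """Spearman rank correlation = Pearson correlation of the rank values
    (ties ignored for brevity)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return plcc(rank(x), rank(y))

mos = [1.2, 2.8, 2.1, 1.6, 2.9]                 # toy subjective scores
pred = [30.0, 88.0, 60.0, 45.0, 95.0]           # same ordering, different scale
print(round(plcc(mos, pred), 3), srcc(mos, pred))
```

Because the toy predictions preserve the ordering of the MOS values exactly, SRCC is 1.0 even though the scales differ, while PLCC stays slightly below 1.0 unless the relation is perfectly linear.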
Scatter plot of Hyper-resnet18 on the Ksong Dataset; the closer the points hug the central solid line, the better the algorithm correlates with human subjective judgment:

![](https://static001.infoq.cn/resource/image/46/cf/468b0c3f48677186b6ac664fd13caacf.png)

Comparing backbone models, ResNet50/ResNeXt50 achieve somewhat higher PLCC/SRCC on the Ksong Dataset, but ResNet18 comes very close while being smaller and faster at inference, so our deployed quality assessment algorithm is currently based on ResNet18.

## 4. Discussion

Our algorithm has been integrated into the QUAlity Standalone Interface (QUASI) SDK and scans WeSing's online video clips daily. After about three months of online monitoring, from January 2020 to the time of writing, we have further validated the algorithm's reliability for clarity assessment and collected a batch of low-quality video sequences. As the key step in closing the no-reference clarity feedback loop, we analyzed the causes of these low-quality videos and iterated on a root-cause analysis algorithm, in order to further raise the overall quality of WeSing livestreams.

Low-quality clips from 2020-04-13, typically showing obvious distortions such as strong noise and overexposure:
![](https://static001.infoq.cn/resource/image/eb/8f/eb346yy526a147628fd4abdd9047c88f.gif)

![](https://static001.infoq.cn/resource/image/a7/58/a7115ba4cba1532d75a9fe4822b89458.gif)

Low-quality video analysis: we first checked whether the quality gap between head (top) and tail (bottom) hosts correlates clearly with hardware such as network and device. As shown below, we collected network and device information from about 450 hosts. The network types are mainly 3G, 4G, and Wi-Fi, with no clear difference in distribution between head and tail hosts. For devices, we split users into three groups: iPhone X (iPhone 10) or newer, iPhone 8 or earlier, and Android. Since Android models are numerous and our records of them incomplete, we focused on the iPhone comparison, and found that the share of hosts on relatively old devices (iPhone 8 or earlier) is smaller among head hosts than among tail hosts.

![Hosts' network and device information](https://static001.infoq.cn/resource/image/3e/39/3e6a34ef12f03f2cfe8864848d934839.png)

Since most WeSing hosts capture their stream with the phone's front camera, let's look at the front-camera details of different iPhone models:
- iPhone 7 front camera: 7 MP, f/2.2 aperture, 1080p HD video recording
- iPhone 8 front camera: 7 MP, f/2.2 aperture, 1080p HD video recording
- iPhone 11 Pro Max front camera: 12 MP, f/2.2 aperture, 1080p HD at 30 or 60 fps

![DXOMARK selfie reviews](https://static001.infoq.cn/resource/image/62/db/627b14b64269057238f7b50eea3e19db.png)

On paper, the main hardware upgrade from the iPhone 7 to the iPhone 11 is the jump from 7 MP to 12 MP, with little difference in other spec-sheet details. Yet from the (incomplete) scores of the professional review site DXOMARK (iPhone X, Xs, and 11 Pro) we can see that each yearly iPhone refresh brings a fairly noticeable improvement in front-camera recording. We can thus draw a preliminary conclusion: a larger share of head hosts use relatively new hardware, which helps the quality of their recordings.
While we cannot ensure that every user has the best device, we can still take measures to help improve users' video clarity. Compared with rear cameras, the front cameras of phones and tablets have a much smaller CMOS sensor area and limited lens light intake, so in common indoor or nighttime streaming environments they are sensitive to the lighting conditions: weak or strong light sources easily cause under- or overexposure, such as an overall degraded picture from dim background lighting, or local overexposure and uneven lighting from a lamp shining straight into the lens. In low light, raising the camera ISO also introduces very visible white-noise distortion. Based on the collected low-quality videos, we developed and tuned overexposure and noise detection algorithms built on low-level features, which can monitor the lighting environment of a livestream in real time and offer suggestions and adjustment strategies in the moment, such as adjusting room lighting, adding a fill light, or changing the camera angle.

Bitrate & quality: we can also examine how well the clarity algorithm predicts the quality trend of high-quality videos transcoded to different bitrates. We transcoded the high-quality demo video shown above to bitrates between 500 and 8000 kb/s. Above 2000 kb/s the predicted quality decreases only slowly, but below 2000 kb/s it begins to drop off noticeably. Interested readers can compare the videos at different bitrates in the attachment to confirm the effect (attachment: [https://share.weiyun.com/gipQeGlS](https://share.weiyun.com/gipQeGlS)).

No-reference clarity scores of the demo video across the 500-8000 kb/s bitrate range:

![](https://static001.infoq.cn/resource/image/75/9b/75831918yy87bf988f5b6c972c5b7d9b.png)
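To make the low-level exposure check mentioned above concrete, here is a minimal histogram-based sketch. The production detector is not described in detail, so the luminance thresholds and clipped-pixel ratio here are purely illustrative assumptions.

```python
import numpy as np

def exposure_check(luma: np.ndarray,
                   clip_ratio: float = 0.05,   # assumed: >5% clipped pixels
                   dark_mean: float = 60.0,    # assumed: mean luma threshold
                   bright_level: int = 250) -> str:
    """luma: HxW uint8 luminance plane of one frame.
    Flags frames whose statistics suggest over- or underexposure."""
    if (luma >= bright_level).mean() > clip_ratio:  # many clipped highlights
        return "overexposed: lower lighting or change camera angle"
    if luma.mean() < dark_mean:                     # overall too dark
        return "underexposed: add a fill light"
    return "ok"

dark = np.full((720, 1280), 20, dtype=np.uint8)  # a very dim frame
print(exposure_check(dark))  # -> underexposed: add a fill light
```

Running such a check per frame is cheap enough to drive the real-time lighting suggestions described above.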
## 5. Conclusion

This article walked through the development of our clarity algorithm for the specific livestream-host scenario, covering dataset construction, algorithm details, and the feedback analysis after deployment. In future work we will extend the clarity algorithm to more scenarios such as games and video conferencing, and we welcome anyone with similar needs to collaborate with us on new algorithms.

P.S. To answer the opening question with a personal view: a video's clarity is how faithfully it reproduces the captured scene, with the human eye as the yardstick; there is no necessary link between higher resolution or bitrate and higher clarity. Whether the content is natural imagery such as architecture, landscapes, and portraits, or non-natural imagery such as screen recordings, CG animation, and games, we judge whether a video looks clear against the prior knowledge of everyday observation. Capture, encoding, transmission, and other processing act like panes of glass between the source scene and the eye, and quality interference at each stage reduces the clarity the end user perceives. The ever-changing recording conditions of UGC, the non-natural screen content spreading with video conferencing (Screen Content Coding), and the diverse game scenes of 5G plus cloud gaming not only place higher demands on video coding and transmission, but also call for more broadly applicable no-reference quality assessment algorithms to help optimize the user's video experience.

---

**Banner image**: Unsplash

**Author**: 张亚彬

**Original article**: [全民K歌客观清晰度评估算法技术分享](https://mp.weixin.qq.com/s/VhS-jnJTVo5-4cqoMcMXiw)

**Source**: Tencent Media Lab, WeChat official account [ID: TencentAVLab]

**Reposting**: Copyright belongs to the author. For commercial reposts, obtain the author's authorization; for non-commercial reposts, credit the source.