webRTC探索音视频的录制的实现

{"type":"doc","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"一直心痒于“在线视频通话”却无从下手,直到最近接触了webRTC技术。虽然还未有此功力,但却“误打误撞”解决了困扰我的另一个问题:音视频录制!","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"什么是webRTC","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https://developer.mozilla.org/zh-CN/docs/Web/API/WebRTC_API","title":"","type":null},"content":[{"type":"text","text":"MDN","attrs":{}}]},{"type":"text","text":"描述说:“WebRTC (Web Real-Time Communications) 是一项实时通讯技术,它允许网络应用或者站点,在不借助中间媒介的情况下,建立浏览器之间点对点(Peer-to-Peer)的连接,实现视频流和(或)音频流或者其他任意数据的传输。WebRTC包含的这些标准使用户在无需安装任何插件或者第三方的软件的情况下,创建点对点(Peer-to-Peer)的数据分享和电话会议成为可能。”总结下来,其实有四点:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"numberedlist","attrs":{"start":1,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"跨平台","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"(主要)用于浏览器","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","text":"实时传输","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":4,"align":null,"origin":null},"content":[{"type":"text","text":"音视频引擎","attrs":{}}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在稍稍深入学习了webRTC之后,我发现其不仅可以用于「音视频录制、视频通话」,还可以用在「照相机、音乐播放器、共享远程桌面、即时通信工具、P2P网络加速、文件传输、实时人脸识别」等场景上 —— 当然,是结合了其他众多技术的基础上完成。不过这些中有几项似乎让我们很“熟悉”:照相机、人脸识别、共享桌面。这是因为RTC使用的是基于音视频流的API!","attrs":{}}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"webRTC音视频数据采集","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"实现数据传输最重要的就是数据采集了,这里有一个非常重要的API:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"javascript"},"content":[{"type":"text","text":"let promise=navigator.mediaDevices.getUserMedia(containts);\n","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这个API会提示用户给予使用媒体输入的许可,媒体输入会产生一个","attrs":{}},{"type":"codeinline","content":[{"type":"text","text":"MediaStream","attrs":{}}],"attrs":{}},{"type":"text","text":",里面包含了请求的媒体类型的","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"轨道","attrs":{}},{"type":"text","text":"。此流可以包含一个视频轨道(来自硬件或者虚拟视频源,比如相机、视频采集设备和屏幕共享服务等等)、一个音频轨道(同样来自硬件或虚拟音频源,比如麦克风、A/D转换器等等),也可能是其它轨道类型。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"它返回一个 Promise 对象,成功后会resolve回调一个 MediaStream 对象。若用户拒绝了使用权限,或者需要的媒体源不可用,promise会reject回调一个 PermissionDeniedError 或者 NotFoundError 。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这里最重要的就是“轨道”了:因为它是基于“流”的。它得到的是一个MediaSource 对象(这是一个stream对象)!也就意味着:如果要使用此结果,要么用","attrs":{}},{"type":"codeinline","content":[{"type":"text","text":"srcObject","attrs":{}}],"attrs":{}},{"type":"text","text":"属性,要么就得用 ","attrs":{}},{"type":"codeinline","content":[{"type":"text","text":"URL.createObjectURL()","attrs":{}}],"attrs":{}},{"type":"text","text":" 转为url!","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/fe/fe3f626198bfb015af04008f8da0f471.png","alt":null,"title":null,"style":[{"key":"width","value":"75%"},{"key":"bordertype","value":"none"}],"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"回到API本身,我们可以这么理解:通过它我们可以得到当前页面所有的音视频通道的集合,根据不同的接口我们将它们分为不同的“轨道”,我们可以对它们进行设置和输出配置项。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在 ","attrs":{}},{"type":"link","attrs":{"href":"https://blog.csdn.net/qq_43624878/article/details/107575226#id=input_30","title":"","type":null},"content":[{"type":"text","text":"这篇文章","attrs":{}}]},{"type":"text","text":" 中我们可以看到实现“拍照”的部分就用到了这个API","attrs":{}}]}],"attrs":{}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们先在页面上写个video元素:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"html"},"content":[{"type":"text","text":"\n","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这里为什么要用video元素?我们需要获取音视频流,而HTML5中相关的元素也就video和audio了,而能同时获取这两个的就只有video元素了(如果你只需要音频流,你完全可以用audio元素)。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"根据上面getUserMedia API 的用法,我们不难写出如下的语法:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"javascript"},"content":[{"type":"text","text":"navigator.mediaDevices.getUserMedia(constraints)\n .then(gotMediaStream)\n .catch(handleError);\n","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"啊,我突然想起来,还没有介绍 ","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"contraints","attrs":{}},{"type":"text","text":" 参数呢!","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"horizontalrule","attrs":{}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"webRTC获取约束","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"“约束”是指:控制对象的一些配置项。约束分为两类:视频约束和音频约束。在webRTC中,常用的视频约束有:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"width","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"height","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"aspectRatio:宽高比","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"frameRate","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"facingMode:摄像头翻转(前后摄像头)(这个配置主要是对于移动端来说,因为浏览器只有前置摄像头)","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"resizeMode:是否进行剪裁","attrs":{}}]}]}],"attrs":{}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"而常用的音频约束有:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"volume:音量","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"sampleRate:采样率","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"sampleSize:采样大小","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"echoCancellation:回音消除设置","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"autoGainControl:自动增益","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"noiseSuppression:降噪","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"latency:延迟(根据不同场景,一般来说越小延迟越小体验越好)","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"channelCount:单/双声道","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"...","attrs":{}}]}]}],"attrs":{}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"horizontalrule","attrs":{}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"contraints是什么?官方把它称作“MediaStreamContraints”。通过代码我们可以更清晰地看到它:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"javascript"},"content":[{"type":"text","text":"dictionary MediaStreamContraints{\n (boolean or MediaTrackContaints) video = false;\n (boolean or MediaTrackContaints) audio = false;\n}\n","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"它是负责","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"对音视频约束的采集","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如果video/audio是简单的设置为bool类型,则只是简单决定是否要采集","attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"如果video/audio是Media Track,则可进一步设置比如:视频的分辨率、帧率、音频的音量、采样率等","attrs":{}}]}]}],"attrs":{}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"根据上面说明,我们可以在代码中完善一下配置项 —— 当然,通常使用这种API第一件事是要判断用户当前浏览器是否支持:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"javascript"},"content":[{"type":"text","text":"if(!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia){\n console.log('getUserMedia is not supported!');\n return;\n}else{\n var constraints={\n // 这里也可以直接:video:false/true,则知识简单的表示不采集/采集视频\n video:{\n width:640,\n height:480,\n frameRate:60\n },\n // 这里也可以直接:audio:false/true,则只是简单的表示不采集/采集音频\n audio:{\n noiseSuppression:true,\n echoCancellation:true\n }\n }\n navigator.mediaDevices.getUserMedia(constraints)\n .then(gotMediaStream)\n .catch(handleError);\n\n}\n","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"下面就该实现getUserMedia API 成功和失败时的回调函数了,这里我们要先搞懂“在回调时要干什么” —— 其实无非是将“获取到的流”交出去:给全局变量保存起来或者直接作为video/audio的","attrs":{}},{"type":"codeinline","content":[{"type":"text","text":"srcObject","attrs":{}}],"attrs":{}},{"type":"text","text":"值(成功)或者抛出错误(失败)","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"javascript"},"content":[{"type":"text","text":"var videoplay=document.querySelector('#player');\nfunction gotMediaStream(stream){\n // 【1】\n videoplay.srcObject=stream;\n // 【2】\n // return navigator.mediaDevices.enumerateDevices();\n}\nfunction handleError(err){\n console.log('getUserMedia error:',err);\n}\n","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"注释中用了一个API:","attrs":{}},{"type":"codeinline","content":[{"type":"text","text":"mediaDevices.enumerateDevices()","attrs":{}}],"attrs":{}},{"type":"text","text":"。MediaDevices方法列举了所有可用的媒体输入和输出设备(如麦克风、相机、耳机等)。返回的承诺通过描述设备的MediaDevice 信息阵列得到解决。我们也可以把它看做上面说的“轨道集合”。比如这里如果你把注释按promise方式写出,就会发现它是一个数组,里面包含了三个“轨道”。两个audio(输入、输出)一个video:","attrs":{}}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/69/69dcb6e0780844d4d570817f18ef8788.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"而当你输出stream参数后你会发现","attrs":{}}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/4b/4b0d016381cefe977857a3f9ef9df64d.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":false,"pastePass":false}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在一些场景下,通过这个API我们可以将一些设备配置暴露给用户","attrs":{}}]}],"attrs":{}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"至此视频中的第一个效果——实时捕获音视频就完成了。这也是下面的基础:不能捕获,又怎么录制呢?","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"我们先再加一些需要的节点:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"html"},"content":[{"type":"text","text":"\n\n\n\n","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"前面说了,经过捕获后得到的是一个“流”对象,那么接收时也一定需要一个能拿到流并进行操作的API :MediaStream API!","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"link","attrs":{"href":"https://developer.mozilla.org/zh-CN/docs/Web/API/MediaRecorder","title":"","type":null},"content":[{"type":"text","text":"MDN","attrs":{}}]},{"type":"text","text":"上是这样介绍的:“MediaRecorder 是 MediaStream Recording API 提供的用来进行媒体轻松录制的接口, 他需要通过调用 ","attrs":{}},{"type":"codeinline","content":[{"type":"text","text":"MediaRecorder()","attrs":{}}],"attrs":{}},{"type":"text","text":" 构造方法进行实例化。”因为是 MediaStream Recording API 提供的接口,所以它有一个构造函数:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"bulletedlist","content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"codeinline","content":[{"type":"text","text":"MediaRecorder.MediaRecorder()","attrs":{}}],"attrs":{}},{"type":"text","text":":创建一个新的MediaRecorder对象,对指定的MediaStream 对象进行录制,支持的配置项包括设置容器的MIME 类型 (例如\"video/webm\" 或者 \"video/mp4\")和音频及视频的码率或者二者同用一个码率","attrs":{}}]}]}],"attrs":{}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"根据MDN上提供的方法,我们可以得到一个比较清晰的思路:先调用构造函数,将前面获取到的流传入,经过API的解析后拿到一个对象,在规定的时间切片下将他们依次传入数组中(因为至此工作基本就已经完成了,用数组接收方便以后转为Blob对象操作):","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"javascript"},"content":[{"type":"text","text":"function startRecord(){\n // 定义一个接收数组\n buffer=[];\n\n var options={\n mimeType:'video/webm;codecs=vp8'\n }\n // 返回一个Boolean值,来表示设置的MIME type 是否被当前用户的设备支持.\n if(!MediaRecorder.isTypeSupported(options.mimeType)){\n console.error(`${options.mimeType} is not supported`);\n return;\n }\n\n try{\n mediaRecorder=new MediaRecorder(window.stream,options);\n }catch(e){\n console.error('Fail to create');\n return;\n }\n mediaRecorder.ondataavailable=handleDataAvailable;\n // 时间片\n // 开始录制媒体,这个方法调用时可以通过给timeslice参数设置一个毫秒值,如果设置这个毫秒值,那么录制的媒体会按照你设置的值进行分割成一个个单独的区块\n mediaRecorder.start(10);\n}\n","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"这里面有两点需要注意的地方:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"numberedlist","attrs":{"start":1,"normalizeStart":1},"content":[{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"获取的stream流:因为之前捕获音视频的函数必然要封装起来,所以这里如果再要使用就一定要把这个流对象设置为全局对象 —— 笔者直接将其挂载到了window对象上,","attrs":{}},{"type":"text","marks":[{"type":"strong","attrs":{}}],"text":"在前面代码中注释[1]的地方接收对象:","attrs":{}},{"type":"text","text":" ","attrs":{}},{"type":"codeinline","content":[{"type":"text","text":"window.stream=stream;","attrs":{}}],"attrs":{}}]}]},{"type":"listitem","attrs":{"listStyle":null},"content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"这里定义了一个buffer数组,和mediaRecorder对象一样,考虑到要在","attrs":{}},{"type":"codeinline","content":[{"type":"text","text":"ondataavailable","attrs":{}}],"attrs":{}},{"type":"text","text":" 方法中的使用以及在完成后需要停止继续捕获录制音视频流,也放在全局下声明变量","attrs":{}}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"javascript"},"content":[{"type":"text","text":"var buffer;\nvar mediaRecorder;\n","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"ondataavailable方法就是在录制数据准备完成后才触发的。在里面我们需要做的就是按照之前的思路将切片好的数据依次存入","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"javascript"},"content":[{"type":"text","text":"function handleDataAvailable(e){\n // console.log(e)\n if(e && e.data && e.data.size>0){\n buffer.push(e.data);\n }\n}\n\n//结束录制\nfunction stopRecord(){\n mediaRecorder.stop();\n}\n","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"然后调用:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"javascript"},"content":[{"type":"text","text":"let recvideo=document.querySelector('#recplayer');\nlet btnRecord=document.querySelector('#record');\nlet btnPlay=document.querySelector('#recplay');\nlet btnDownload=document.querySelector('#download');\n// 开始/停止录制\nbtnRecord.onclick=()=>{\n if(btnRecord.textContent==='Start Record'){\n startRecord();\n btnRecord.textContent='Stop Record';\n btnPlay.disabled=true;\n btnDownload.disabled=true;\n }else{\n stopRecord();\n btnRecord.textContent='Start Record';\n btnPlay.disabled=false;\n btnDownload.disabled=false;\n }\n}\n// 播放\nbtnPlay.onclick=()=>{\n var blob=new Blob(buffer,{type: 'video/webm'});\n recvideo.src=window.URL.createObjectURL(blob);\n recvideo.srcObject=null;\n recvideo.controls=true;\n recvideo.play();\n}\n// 下载\nbtnDownload.onclick=()=>{\n var blob=new Blob(buffer,{type:'video/webm'});\n var url=window.URL.createObjectURL(blob);\n var a=document.createElement('a');\n a.href=url;\n a.style.display='none';\n a.download='recording.webm';\n a.click();\n}\n","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"文章到这里功能就基本实现了,但是还有一点问题:如果只想要录制音频,这时候在你说话的时候因为捕获流一直在工作所以其实这时候有两个声源 —— 你的声音和捕获到的你的声音。所以我们需要将捕获时的声音关掉!但是如果你根据前面说的在","attrs":{}},{"type":"codeinline","content":[{"type":"text","text":"getUserMedia","attrs":{}}],"attrs":{}},{"type":"text","text":" API中添加 ","attrs":{}},{"type":"codeinline","content":[{"type":"text","text":"volume:0","attrs":{}}],"attrs":{}},{"type":"text","text":" 是不会有任何效果的 —— 因为至今还没有任何浏览器对这个属性支持。","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"但是联想到我们承载音视频的是一个video/audio容器,所以我们其实可以直接用其对应DOM节点的volume属性控制:","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"codeblock","attrs":{"lang":"javascript"},"content":[{"type":"text","text":"// 在前面代码中注释为【2】的地方添加代码\nvideoplay.volume=0;\n","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"大功告成!","attrs":{}}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"horizontalrule","attrs":{}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"本文完整代码已开源到","attrs":{}},{"type":"link","attrs":{"href":"https://github.com/1314mxc/study_for_webRTC","title":"","type":null},"content":[{"type":"text","text":"GitHub:study_for_webRTC","attrs":{}}]},{"type":"text","text":" ,欢迎阅读、下载 & star!","attrs":{}}]}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章