1. The project does face recognition: the user records a video of themselves reading digits, which is then uploaded for processing. Videos recorded on phones are very large — on Android a 3-second clip is around 5 MB — so I tried compressing on the front end with JavaScript. Video compression is normally done server-side with FFmpeg, but that reportedly puts heavy demands on the server. Compression is awkward to do on the front end, though not impossible; the performance is just poor. I searched GitHub and tried several front-end compression components, and finally settled on one that is easy to use, lightweight, and well suited to H5: https://github.com/ffmpegjs/ffmpeg.js
2. Usage is already documented on GitHub. Below is a backup copy of the HTML file from the repository's examples directory that I used as a reference this time.
<html>
  <head>
    <script src="/dist/ffmpeg.dev.js"></script>
    <style>
      html, body {
        margin: 0;
        width: 100%;
        height: 100%;
      }
      body {
        display: flex;
        flex-direction: column;
        align-items: center;
      }
    </style>
  </head>
  <body>
    <h3>Record video from webcam and transcode to mp4 (x264) and play!</h3>
    <div>
      <video id="webcam" width="320px" height="180px"></video>
      <video id="output-video" width="320px" height="180px" controls></video>
    </div>
    <button id="record" disabled>Start Recording</button>
    <p id="message"></p>
    <script>
      const { createWorker } = FFmpeg;
      const worker = createWorker({
        corePath: '../../node_modules/@ffmpeg/core/ffmpeg-core.js',
        logger: ({ message }) => console.log(message),
      });
      const webcam = document.getElementById('webcam');
      const recordBtn = document.getElementById('record');
      const startRecording = () => {
        const rec = new MediaRecorder(webcam.srcObject);
        const chunks = [];
        recordBtn.textContent = 'Stop Recording';
        recordBtn.onclick = () => {
          rec.stop();
          recordBtn.textContent = 'Start Recording';
          recordBtn.onclick = startRecording;
        };
        rec.ondataavailable = e => chunks.push(e.data);
        rec.onstop = async () => {
          transcode(new Uint8Array(await (new Blob(chunks)).arrayBuffer()));
        };
        rec.start();
      };
      (async () => {
        webcam.srcObject = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
        await webcam.play();
        recordBtn.disabled = false;
        recordBtn.onclick = startRecording;
      })();
      const transcode = async (webcamData) => {
        const message = document.getElementById('message');
        const name = 'record.webm';
        message.innerHTML = 'Loading ffmpeg-core.js';
        await worker.load();
        message.innerHTML = 'Start transcoding';
        await worker.write(name, webcamData);
        await worker.transcode(name, 'output.mp4');
        message.innerHTML = 'Complete transcoding';
        const { data } = await worker.read('output.mp4');
        const video = document.getElementById('output-video');
        video.src = URL.createObjectURL(new Blob([data.buffer], { type: 'video/mp4' }));
      };
    </script>
  </body>
</html>
3. My summary. There are two ways to include this front-end ffmpeg.js. If you are writing plain-JS H5, you can include it directly like this:
<script src="https://unpkg.com/@ffmpeg/ffmpeg/dist/ffmpeg.min.js"></script>
const { createWorker } = FFmpeg;
If you are using ES modules, install it through npm:
npm install @ffmpeg/ffmpeg
then:
const { createWorker } = require('@ffmpeg/ffmpeg');
const worker = createWorker();
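Whichever way the library is loaded, the worker flow boils down to the same four calls used in the example above: load, write, transcode, read. A minimal sketch of a reusable wrapper (the function name `compressWebmToMp4` is my own, not part of the library; the worker methods are the @ffmpeg/ffmpeg v0.8 API shown above):

```javascript
// Minimal sketch of the v0.8 worker flow: write the recorded webm into
// ffmpeg's virtual file system, transcode it, and read the mp4 bytes back.
async function compressWebmToMp4(worker, webmData) {
  await worker.load();                                  // fetch and init ffmpeg-core
  await worker.write('record.webm', webmData);          // put input into the virtual FS
  await worker.transcode('record.webm', 'output.mp4');  // webm -> mp4 (x264)
  const { data } = await worker.read('output.mp4');     // read result back out
  return data;                                          // Uint8Array of mp4 bytes
}
```

From there the bytes can be wrapped in a Blob for playback or upload, exactly as the example above does.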
4. For compression in my project, I initially borrowed the code above, using only the transcode part: call uni.chooseVideo to open the local camera, read the recorded video as webm, and transcode it to mp4 to achieve the compression.
const { createWorker } = FFmpeg;
const worker = createWorker();
const self = this; // keep a reference, since the callback has its own `this`
uni.chooseVideo({
  sourceType: ['camera'],
  camera: 'front',
  // the callback must be async because it awaits the worker calls
  async success(chooseRes) {
    // chooseRes.tempFilePath is the blob URL of the locally recorded video
    await worker.load();
    const name = 'record.webm';
    await worker.write(name, chooseRes.tempFilePath);
    await worker.transcode(name, 'output.mp4');
    const { data } = await worker.read('output.mp4');
    const src = URL.createObjectURL(new Blob([data.buffer], { type: 'video/mp4' }));
    self.uploadVideo(src); // upload the compressed result, not the original
  }
})
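Since the transcode turned out to be expensive, it is worth logging how much each run actually saves before deciding it is worthwhile. A small illustrative helper for that (the name `compressionSummary` is my own, not part of any library):

```javascript
// Illustrative helper: summarize one compression run for logging.
// originalBytes / compressedBytes are sizes in bytes; elapsedMs is wall time.
function compressionSummary(originalBytes, compressedBytes, elapsedMs) {
  const toMB = (b) => (b / (1024 * 1024)).toFixed(2);
  const ratio = originalBytes > 0
    ? ((1 - compressedBytes / originalBytes) * 100).toFixed(1)
    : '0.0';
  return `${toMB(originalBytes)} MB -> ${toMB(compressedBytes)} MB ` +
         `(${ratio}% smaller) in ${(elapsedMs / 1000).toFixed(1)} s`;
}

// e.g. a 5 MB input compressed to 1 MB in 40 s:
// compressionSummary(5 * 1024 * 1024, 1024 * 1024, 40000)
// -> "5.00 MB -> 1.00 MB (80.0% smaller) in 40.0 s"
```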
5. That method works, but it introduces two new problems. [1] It is very slow: compressing a 4-second video takes close to 40 seconds on a phone, which is far too long, and videos recorded in the WeChat browser are so large they cannot be compressed at all — the performance of this method is clearly poor. [2] uni-app's uni.chooseVideo does not support selecting the front camera in H5, which this project requires, so the front camera cannot be used this way. I therefore reworked the HTML example above instead. Testing showed that if you read the front-camera stream and render it in a video element, the resulting video src is already small and needs no compression. This avoids the huge locally recorded files and also lets H5 use the front camera. I wrote my own camera recording component in uni-app's HBuilder; the code follows.
<template>
  <view class="pageContent">
    <view>
      <video id="webcam" :class="cameraVisible?'recordSrc':'hiddenClass'" muted :controls="false"></video>
    </view>
    <video class="recordSrc playvideo" controls :src="recordBlob" :class="cameraVisible?'hiddenClass':'recordSrc'"></video>
    <button class="button" @click="startRecording" v-if="step==0">Start Recording</button>
    <button class="button" @click="stopRecording" v-if="step==1">Stop Recording</button>
    <button class="button restart" @click="restart" v-if="step==2">Record Again</button>
    <button class="button upload" @click="uploadVideo" v-if="step==2">Upload Video</button>
  </view>
</template>
<script>
import { uploadVideo } from '../../api/global.js'
import { doVideo } from '../../api/smz.js'
export default {
  data() {
    return {
      message: "",
      mediaObject: '',
      rec: '',
      chunks: [],
      recordBlob: '',
      step: "0", // 0: ready to record, 1: recording, 2: recording stopped
      cameraVisible: true, // show/hide the camera preview
    }
  },
  onLoad() {
  },
  onReady() {
    this.init();
  },
  components: {
  },
  methods: {
    async init() {
      this.videoContext = uni.createVideoContext('webcam');
      // uni-app renders its <video> as an inner element with this class
      const dom = document.getElementsByClassName("uni-video-video")[0]
      await navigator.mediaDevices.getUserMedia({ video: true, audio: true }).then(function(media) {
        console.log('getUserMedia completed successfully.');
        dom.srcObject = media
      }).catch(function(error) {
        console.log(error.name + ": " + error.message);
        alert(error.name + ": " + error.message)
      });
      console.log(dom.srcObject)
      this.mediaObject = dom.srcObject;
      await this.videoContext.play();
    },
    startRecording() {
      this.step = "1"
      this.rec = new MediaRecorder(this.mediaObject);
      this.chunks = [];
      this.rec.start();
    },
    stopRecording() {
      this.step = "2";
      this.cameraVisible = false;
      // attach the handlers before stopping so no data is missed
      this.rec.ondataavailable = e => this.chunks.push(e.data);
      this.rec.onstop = () => {
        this.recordBlob = URL.createObjectURL(new Blob(this.chunks, { type: 'video/mp4' }));
        console.log(this.recordBlob)
      };
      this.rec.stop();
    },
    async restart() {
      this.step = "0";
      this.cameraVisible = true;
      console.log(this.mediaObject)
      // await this.videoContext.play();
    },
    uploadVideo() {
      // turn off the camera
      if (this.mediaObject) {
        this.mediaObject.getTracks().forEach(track => track.stop());
      }
      const self = this
      console.log(self.recordBlob)
      // continue with the upload here......
    },
  }
}
</script>
<style scoped>
uni-page-body {
  height: 100%;
}
uni-view {
  display: contents;
}
html, body {
  margin: 0;
  width: 100%;
  height: 100%;
}
body {
  display: flex;
  flex-direction: column;
  align-items: center;
}
.recordSrc {
  width: 100%;
  height: 100%;
  position: absolute;
}
.playvideo {
  left: 0;
}
.button {
  position: absolute;
  bottom: 11%;
  left: 50%;
  margin-left: -50px;
  width: 100px;
  border-radius: 42px;
  background-color: red;
}
.restart {
  left: 25%;
}
.upload {
  left: 75%;
}
.hiddenClass {
  visibility: hidden;
}
</style>
This approach works in Chrome on my Android phone and in the WeChat browser, but it is not supported by the native Huawei browser and does not work on iOS. How to make it compatible with iOS is still under investigation; suggestions from anyone with a good solution are welcome.
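Part of the iOS trouble is container support: where Safari exposes MediaRecorder at all, it records mp4 rather than webm, so hard-coding one format breaks somewhere. A common workaround is to probe MediaRecorder.isTypeSupported before recording. A sketch of that probe, with the selection logic factored into a pure function so it can be tested (`pickMimeType` and `CANDIDATES` are my own names, not a standard API):

```javascript
// Candidate containers, ordered by preference. Chrome/Android typically
// support webm; Safari typically only supports mp4.
const CANDIDATES = [
  'video/webm;codecs=vp9',
  'video/webm;codecs=vp8',
  'video/webm',
  'video/mp4',
];

// Pure selection logic: return the first candidate the predicate accepts,
// or '' to let MediaRecorder fall back to its platform default.
function pickMimeType(isSupported, candidates = CANDIDATES) {
  for (const type of candidates) {
    if (isSupported(type)) return type;
  }
  return '';
}

// In the browser it would be used roughly like this:
// const mimeType = pickMimeType(t => MediaRecorder.isTypeSupported(t));
// const rec = new MediaRecorder(stream, mimeType ? { mimeType } : {});
```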