I recently built an Android video player whose basic features (play, pause, seek, and so on) are implemented in C/C++ through JNI.
The open-source FFmpeg library is used for decoding to obtain the audio and video data. FFmpeg is a powerful audio/video decoding and encoding library; to use it on Android, it must first be cross-compiled to produce .so files that run on the ARM platform.
Download the latest FFmpeg source from the official site https://ffmpeg.org/.
I built ffmpeg-3.0.1 from source on Ubuntu. Unpack the source and run the following script in the source directory:
#!/bin/bash
make clean
export NDK=/home/zsy/setup/android-ndk-r10
export PREBUILT=$NDK/toolchains/arm-linux-androideabi-4.9/prebuilt
export PLATFORM=$NDK/platforms/android-L/arch-arm
export PREFIX=../fftoollib
build_one(){
./configure --target-os=linux \
    --prefix=$PREFIX \
    --enable-cross-compile \
    --enable-libfreetype \
    --enable-runtime-cpudetect \
    --disable-asm \
    --arch=arm \
    --cc=$PREBUILT/linux-x86_64/bin/arm-linux-androideabi-gcc \
    --cross-prefix=$PREBUILT/linux-x86_64/bin/arm-linux-androideabi- \
    --disable-stripping \
    --nm=$PREBUILT/linux-x86_64/bin/arm-linux-androideabi-nm \
    --sysroot=$PLATFORM \
    --enable-gpl \
    --enable-shared \
    --enable-static \
    --enable-nonfree \
    --enable-small \
    --enable-version3 \
    --disable-vda \
    --disable-iconv \
    --enable-encoder=libx264 \
    --enable-encoder=libfaac \
    --enable-zlib \
    --enable-ffprobe \
    --enable-sdl \
    --enable-ffplay \
    --enable-ffmpeg \
    --enable-ffserver \
    --disable-debug \
    --pkg-config-flags="pkg-config --cflags freetype2 --libs freetype2" \
    --extra-cflags="-fPIC -DANDROID -D__thumb__ -mthumb -Wfatal-errors -Wno-deprecated -mfloat-abi=softfp -marm -march=armv7-a"
}
build_one
make
make install
After the build succeeds, eight shared libraries are generated: libavcodec-57.so, libavdevice-57.so, libavfilter-6.so, libavformat-57.so, libavutil-55.so, libpostproc-54.so, libswresample-2.so, and libswscale-4.so.
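To call these libraries from JNI code, they must be declared to the NDK build system. Below is a minimal Android.mk sketch; the paths, module names, and the player.c source file are assumptions for illustration (I am assuming the libraries were installed under the fftoollib prefix set above), so adjust them to your project layout:

```makefile
LOCAL_PATH := $(call my-dir)

# prebuilt FFmpeg shared library (repeat this block for each of the eight .so files)
include $(CLEAR_VARS)
LOCAL_MODULE := avcodec
LOCAL_SRC_FILES := fftoollib/lib/libavcodec-57.so
include $(PREBUILT_SHARED_LIBRARY)

# the player's own JNI code
include $(CLEAR_VARS)
LOCAL_MODULE := player
LOCAL_SRC_FILES := player.c
LOCAL_C_INCLUDES := $(LOCAL_PATH)/fftoollib/include
LOCAL_SHARED_LIBRARIES := avcodec   # plus avformat, avutil, swscale, swresample, ...
LOCAL_LDLIBS := -llog -lGLESv2 -lOpenSLES
include $(BUILD_SHARED_LIBRARY)
```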
The main idea of the player code: two queues hold the demuxed audio and video data (as AVPacket), and two arrays hold the decoded audio and video frames (as AVFrame). pthread_create starts four threads: read_thread() reads the demuxed packets and puts them into the audio queue audio_q and the video queue video_q; video_thread() takes packets from video_q, decodes them, and stores the frames in the AVFrame array video_array; audio_thread() reads and decodes the packets in audio_q; and video_fresh() loops over video_array and displays the frames.
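The packet queues above can be sketched as a pthread-protected linked list. This is a simplified stand-in I wrote for illustration, not the project's actual code: the node holds a void pointer where the real player would hold an AVPacket, and the function names are my own.

```c
#include <pthread.h>
#include <stdlib.h>

// simplified stand-in for an AVPacket queue node
typedef struct PacketNode {
    void *pkt;                  // would be an AVPacket* in the real player
    struct PacketNode *next;
} PacketNode;

typedef struct PacketQueue {
    PacketNode *first, *last;
    int nb_packets;
    pthread_mutex_t mutex;
    pthread_cond_t cond;
} PacketQueue;

void packet_queue_init(PacketQueue *q) {
    q->first = q->last = NULL;
    q->nb_packets = 0;
    pthread_mutex_init(&q->mutex, NULL);
    pthread_cond_init(&q->cond, NULL);
}

// called by read_thread(): append a packet and wake up a waiting decoder thread
void packet_queue_put(PacketQueue *q, void *pkt) {
    PacketNode *node = malloc(sizeof(*node));
    node->pkt = pkt;
    node->next = NULL;
    pthread_mutex_lock(&q->mutex);
    if (q->last) q->last->next = node; else q->first = node;
    q->last = node;
    q->nb_packets++;
    pthread_cond_signal(&q->cond);
    pthread_mutex_unlock(&q->mutex);
}

// called by audio_thread()/video_thread(): block until a packet is available
void *packet_queue_get(PacketQueue *q) {
    pthread_mutex_lock(&q->mutex);
    while (!q->first)
        pthread_cond_wait(&q->cond, &q->mutex);
    PacketNode *node = q->first;
    q->first = node->next;
    if (!q->first) q->last = NULL;
    q->nb_packets--;
    pthread_mutex_unlock(&q->mutex);
    void *pkt = node->pkt;
    free(node);
    return pkt;
}
```

The condition variable is what lets the decoder threads sleep when their queue is empty instead of spinning, while read_thread() keeps filling both queues.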
So, during playback, how do we display the images and play the sound?
The decoded video data can be uploaded as texture data and displayed through OpenGL ES. The main code is as follows:
void drawFrame(const void *pixels_Y, const void *pixels_U, const void *pixels_V, int pixel_w, int pixel_h) {
    glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
    checkGlError("glClear");

    // texture coordinates
    glVertexAttribPointer(maTexCoorHandle, 2, GL_FLOAT, GL_FALSE, 0, textureVertices);
    checkGlError("glVertexAttribPointer tex");
    glEnableVertexAttribArray(maTexCoorHandle);
    checkGlError("glEnableVertexAttribArray tex");

    // vertex positions
    glEnableVertexAttribArray(maPositionHandle);
    checkGlError("glEnableVertexAttribArray position");
    glVertexAttribPointer(maPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, vertexVertices);
    checkGlError("glVertexAttribPointer position");

    // upload the frame (an RGB24 buffer, passed in pixels_Y) as a 2D texture
    glActiveTexture(GL_TEXTURE0);
    checkGlError("glActiveTexture");
    glBindTexture(GL_TEXTURE_2D, text);
    checkGlError("glBindTexture");
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, pixel_w, pixel_h, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels_Y);
    checkGlError("glTexImage2D");
    glUniform1i(textureUniformY, 0);
    checkGlError("glUniform1i");
}
The texture data here is in RGB24 format, so each decoded AVFrame must be converted to RGB24 before it can be displayed. The main conversion code:
av_image_fill_arrays(vp->frame->data, vp->frame->linesize, out_buffer,
                     AV_PIX_FMT_RGB24, width, height, 1);
is->img_convert_ctx = sws_getCachedContext(is->img_convert_ctx,
                                           vp->width, vp->height, src_frame->format,
                                           width, height, AV_PIX_FMT_RGB24,
                                           SWS_BICUBIC, NULL, NULL, NULL);
sws_scale(is->img_convert_ctx, (uint8_t const * const *)src_frame->data,
          src_frame->linesize, 0, src_frame->height,
          vp->frame->data, vp->frame->linesize);
// vertex coordinates
static const GLfloat vertexVertices[] = {
-1.0f, -1.0f,0,
1.0f, -1.0f,0,
-1.0f, 1.0f,0,
1.0f, 1.0f,0,
};
// texture coordinates
static const GLfloat textureVertices[] = {
0.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
};
Before any of this, the shader sources must be loaded and the GL program initialized.
The shader sources are:
static const char g_vertextShader[] = {
"attribute vec4 vertexIn;\n"
"attribute vec2 textureIn;\n"
"varying vec2 textureOut;\n"
"void main() {\n"
" gl_Position = vertexIn;\n"
" textureOut = textureIn;\n"
"}\n"
};
static const char g_fragmentShader[] = {
    "precision mediump float;\n"   // a default float precision is required in GLES fragment shaders
    "uniform sampler2D tex;\n"
    "varying vec2 textureOut;\n"
    "void main(void) {\n"
    "  gl_FragColor = texture2D(tex, textureOut.st);\n"
    "}\n"
};
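Loading the program from these sources follows the usual GLES2 pattern: compile each shader, attach both to a program, link it, then look up the attribute and uniform locations used by drawFrame. A rough sketch of that initialization (error handling trimmed; the handle variables are the ones drawFrame references, and the exact structure of the real init code is an assumption):

```c
GLuint loadShader(GLenum type, const char *source) {
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);
    GLint compiled = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
    return compiled ? shader : 0;   // a real loader would log glGetShaderInfoLog here
}

void initProgram() {
    GLuint vs = loadShader(GL_VERTEX_SHADER, g_vertextShader);
    GLuint fs = loadShader(GL_FRAGMENT_SHADER, g_fragmentShader);
    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);
    glUseProgram(program);

    // locations consumed by drawFrame()
    maPositionHandle = glGetAttribLocation(program, "vertexIn");
    maTexCoorHandle  = glGetAttribLocation(program, "textureIn");
    textureUniformY  = glGetUniformLocation(program, "tex");

    // the texture object bound in drawFrame()
    glGenTextures(1, &text);
    glBindTexture(GL_TEXTURE_2D, text);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
```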
The rendered frame looks like this:
For sound, OpenSL ES can be used for playback; the native-audio sample in the NDK is a good reference. This project uses a buffer queue player.
Initialization code:
SLDataFormat_PCM format_pcm;
format_pcm.formatType    = SL_DATAFORMAT_PCM;
format_pcm.numChannels   = channel;
format_pcm.samplesPerSec = rate * 1000;   // samplesPerSec is expressed in milliHz
format_pcm.bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_16;
format_pcm.containerSize = SL_PCMSAMPLEFORMAT_FIXED_16;
if (channel == 2) {
    format_pcm.channelMask = SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT;
} else {
    format_pcm.channelMask = SL_SPEAKER_FRONT_CENTER;
}
format_pcm.endianness = SL_BYTEORDER_LITTLEENDIAN;
SLDataSource audioSrc = {&loc_bufq, &format_pcm};

// configure audio sink
SLDataLocator_OutputMix loc_outmix = {SL_DATALOCATOR_OUTPUTMIX, outputMixObject};
SLDataSink audioSnk = {&loc_outmix, NULL};

// create audio player
const SLInterfaceID ids[3] = {SL_IID_BUFFERQUEUE, SL_IID_EFFECTSEND,
                              /*SL_IID_MUTESOLO,*/ SL_IID_VOLUME};
const SLboolean req[3] = {SL_BOOLEAN_TRUE, SL_BOOLEAN_TRUE,
                          /*SL_BOOLEAN_TRUE,*/ SL_BOOLEAN_TRUE};
result = (*engineEngine)->CreateAudioPlayer(engineEngine, &bqPlayerObject,
                                            &audioSrc, &audioSnk, 3, ids, req);
if (SL_RESULT_SUCCESS != result) {
    LOGI("CreateAudioPlayer failed\n");
    return result;
}
The playback callback:
void bqPlayerCallback(SLAndroidSimpleBufferQueueItf bq, void *context)
{
    assert(bq == bqPlayerBufferQueue);
    assert(NULL == context);
    // read data from the decoded audio AVFrames
    audio_decode_callback(&audio_buf, &size);
    SLresult result;
    result = (*bqPlayerBufferQueue)->Enqueue(bqPlayerBufferQueue, audio_buf, size);
    assert(SL_RESULT_SUCCESS == result);
    (void) result;
}
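The buffer queue only starts draining once the player has been set to the playing state and a first buffer has been enqueued; after that, each completed buffer fires bqPlayerCallback again, forming the playback loop. Roughly, the kick-off looks like this (my own sketch; bqPlayerPlay stands for the player's SLPlayItf, which is not shown in the snippets above):

```c
// register the callback and start playback (error checks omitted)
(*bqPlayerBufferQueue)->RegisterCallback(bqPlayerBufferQueue, bqPlayerCallback, NULL);
(*bqPlayerPlay)->SetPlayState(bqPlayerPlay, SL_PLAYSTATE_PLAYING);
// prime the queue: the first call kicks off the callback chain
bqPlayerCallback(bqPlayerBufferQueue, NULL);
```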
With that, a video file with both picture and sound can be played through JNI.
We can also add watermarks to the video.
Adding an image watermark:
Edge detection:
Adding a text watermark:
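Effects like these can be produced with FFmpeg's filters: overlay for an image watermark, drawtext for a text watermark (which is why --enable-libfreetype was included in the build above), and edgedetect for edge detection. For illustration, the command-line equivalents (file names are placeholders):

```
# image watermark in the top-left corner
ffmpeg -i input.mp4 -i logo.png -filter_complex "overlay=10:10" out_logo.mp4

# text watermark
ffmpeg -i input.mp4 -vf "drawtext=text='hello':x=10:y=10:fontsize=24:fontcolor=white" out_text.mp4

# edge detection
ffmpeg -i input.mp4 -vf "edgedetect" out_edge.mp4
```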
Clicking the VR button shows the following image: