Displaying YUV with SDL2.0


This article describes how to take the YUV frames decoded by ffmpeg and display them with SDL2.0. The focus is on converting the YUV format produced by ffmpeg's decoder into a format SDL2.0 can display, and on how SDL displays it.


Why use YUV for display

When implementing a video player with software decoding, if the display side requires an RGB format, then because ffmpeg decodes to YUV420, every frame needs a conversion from YUV420 to RGB32. That conversion involves a large amount of floating-point computation, so performance is relatively poor and the playback frame rate suffers. If YUV can be used for display directly, performance should in theory be better than with RGB.

convert YUV420 to YUV420P (YV12)

1. YUV formats supported by SDL2.0

According to the official SDL2.0 documentation, the YUV formats supported by SDL2.0 are as follows:

YUV format               comment
SDL_PIXELFORMAT_YV12     planar mode: Y + V + U (3 planes)
SDL_PIXELFORMAT_IYUV     planar mode: Y + U + V (3 planes)
SDL_PIXELFORMAT_YUY2     packed mode: Y0+U0+Y1+V0 (1 plane)
SDL_PIXELFORMAT_UYVY     packed mode: U0+Y0+V0+Y1 (1 plane)
SDL_PIXELFORMAT_YVYU     packed mode: Y0+V0+Y1+U0 (1 plane)

2. ffmpeg output YUV format

The ffmpeg decode output format is YUV420.

From the above it is clear that ffmpeg's decode output cannot be handed to SDL2.0 for display directly; the YUV420 data has to be converted to one of the YUV formats SDL2.0 supports. YUV420 and YV12 (YUV420P) are both 4:2:0 formats and differ only in how the samples are arranged, so we choose YV12 as the conversion target. Note that FFmpeg's PIX_FMT_YUV420P keeps the planes in Y, U, V order (the same layout as SDL's IYUV), while YV12 stores Y, V, U; this plane order matters when the data is uploaded to an SDL texture, as discussed in the display section below.

3. sample code

// Allocate a frame to hold the converted YUV420P picture
AVFrame* pFrameYUV = av_frame_alloc();
if( pFrameYUV == NULL )
    return -1;

// Allocate a buffer large enough for one YUV420P frame and attach it to pFrameYUV
int numBytes = avpicture_get_size( PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height );
uint8_t* buffer = ( uint8_t* )av_malloc( numBytes * sizeof( uint8_t ) );

avpicture_fill( ( AVPicture* )pFrameYUV, buffer, PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height );

// Scaling/conversion context: decoder pixel format -> YUV420P, same resolution
struct SwsContext* sws_ctx = sws_getContext( pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
                                             pCodecCtx->width, pCodecCtx->height, PIX_FMT_YUV420P,
                                             SWS_BILINEAR, NULL, NULL, NULL );

// use av_read_frame & avcodec_decode_video2 to get a complete frame in pFrame
sws_scale( sws_ctx, ( uint8_t const* const* )pFrame->data, pFrame->linesize,
           0, pCodecCtx->height, pFrameYUV->data, pFrameYUV->linesize );
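
For context, here is a minimal sketch of the decode loop that fills pFrame before the sws_scale call above. It assumes pFormatCtx, videoStream, pCodecCtx and pFrame have already been set up with the usual avformat_open_input / avcodec_find_decoder / avcodec_open2 sequence; the names are illustrative.

AVPacket packet;
int frameFinished = 0;

while( av_read_frame( pFormatCtx, &packet ) >= 0 ) {
    if( packet.stream_index == videoStream ) {
        // Decode one packet; frameFinished is set when a complete picture is available
        avcodec_decode_video2( pCodecCtx, pFrame, &frameFinished, &packet );
        if( frameFinished ) {
            // Convert the decoded frame to YUV420P in pFrameYUV
            sws_scale( sws_ctx, ( uint8_t const* const* )pFrame->data, pFrame->linesize,
                       0, pCodecCtx->height, pFrameYUV->data, pFrameYUV->linesize );
            // pFrameYUV now holds one YUV420P picture, ready to save or display
        }
    }
    av_free_packet( &packet );
}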

save YUV420P to files

The YUV420P layout stores all of the Y data for the whole picture first, followed by all of the U data and then all of the V data (for PIX_FMT_YUV420P the order is Y, U, V, i.e. data[0], data[1], data[2]).
So to save a YUV420P frame to a file, the data referenced by data[0], data[1] and data[2] of the AVFrame is written out in that order. Keep in mind that the U and V planes are only half the height (and half the width) of the Y plane.

void SaveYuvFrame( AVFrame* pFrame, int width, int height, int iFrame ) {
    FILE* pFile;
    char szFilename[32];

    // Open file
    sprintf( szFilename, "frame%d.yuv", iFrame );
    pFile = fopen( szFilename, "wb" );
    if( pFile == NULL ) {
        return;
    }

    // Write the Y plane (height rows), then the U and V planes (height/2 rows each).
    // Each row is linesize[i] bytes, which may be larger than the visible width
    // if the decoder adds alignment padding.
    fwrite( pFrame->data[0], 1, pFrame->linesize[0] * height,     pFile );
    fwrite( pFrame->data[1], 1, pFrame->linesize[1] * height / 2, pFile );
    fwrite( pFrame->data[2], 1, pFrame->linesize[2] * height / 2, pFile );

    // Close file
    fclose( pFile );
}

In the code above the linesize values are not saved (linesize holds 3 values: the number of bytes per row of the Y, U and V planes). For 1920x1080 data, for example, linesize = {1920, 960, 960}. Note that the decoder may pad each row for alignment, in which case linesize is larger than the plane width; the padding can be stripped by writing row by row, as sketched below.
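
If linesize does differ from the visible width, a row-by-row variant like the following strips the padding; WritePlane is just an illustrative helper, with width and height describing the visible picture:

// Write 'width' bytes from each of 'height' rows, skipping any linesize padding
static void WritePlane( FILE* pFile, const uint8_t* data, int linesize, int width, int height ) {
    int y;
    for( y = 0; y < height; y++ ) {
        fwrite( data + y * linesize, 1, width, pFile );
    }
}

// Inside SaveYuvFrame: the Y plane is width x height, the U/V planes are width/2 x height/2
// WritePlane( pFile, pFrame->data[0], pFrame->linesize[0], width,     height     );
// WritePlane( pFile, pFrame->data[1], pFrame->linesize[1], width / 2, height / 2 );
// WritePlane( pFile, pFrame->data[2], pFrame->linesize[2], width / 2, height / 2 );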

load YUV420P from files and input to SDL2.0

FILE* pFile = fopen( "frame0.yuv", "rb" );
if( pFile == NULL )
    return -1;

// Plane pointers and bytes-per-row for a 1920x1080 YUV420P frame
char* yuvdata[3];
int linesize[3] = {1920, 960, 960};

// One buffer holding Y (full height) followed by U and V (half height each)
yuvdata[0] = ( char* )malloc( linesize[0] * pCodecCtx->height + ( linesize[1] + linesize[2] ) * pCodecCtx->height / 2 );
yuvdata[1] = yuvdata[0] + linesize[0] * pCodecCtx->height;
yuvdata[2] = yuvdata[1] + linesize[1] * pCodecCtx->height / 2;

int size = fread( yuvdata[0], 1, linesize[0] * pCodecCtx->height,     pFile );
size = fread( yuvdata[1], 1, linesize[1] * pCodecCtx->height / 2, pFile );
size = fread( yuvdata[2], 1, linesize[2] * pCodecCtx->height / 2, pFile );
fclose( pFile );

In the code above, since the yuv file does not store linesize, a fixed linesize is assumed. In real use, the linesize values can be written into the file as a small header when the yuv data is saved, as sketched below.
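
A minimal sketch of such a header; the layout (width, height and the three linesize values as 32-bit integers, written right before the plane data) is purely illustrative, not a standard format:

#include <stdint.h>

// Illustrative private header written in front of the plane data
typedef struct {
    int32_t width;
    int32_t height;
    int32_t linesize[3];
} YuvFileHeader;

// When saving, before writing the planes:
//   YuvFileHeader hdr = { width, height, { pFrame->linesize[0], pFrame->linesize[1], pFrame->linesize[2] } };
//   fwrite( &hdr, sizeof( hdr ), 1, pFile );

// When loading, before reading the planes:
//   YuvFileHeader hdr;
//   fread( &hdr, sizeof( hdr ), 1, pFile );
//   ...then use hdr.width, hdr.height and hdr.linesize to size the buffers and reads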

SDL2.0 image display flow

// SDL init
SDL_Window *window;
SDL_Renderer *renderer;
SDL_RendererInfo info;
SDL_Rect    rect;
SDL_Texture *texture;
SDL_Event event;
SDL_bool done = SDL_FALSE;

if (SDL_Init(SDL_INIT_VIDEO) < 0) {
    fprintf(stderr, "Couldn't initialize SDL: %s\n", SDL_GetError());
    return 2;
}

/* Create the window and renderer */
window = SDL_CreateWindow("YUV speed test",
                          SDL_WINDOWPOS_UNDEFINED,
                          SDL_WINDOWPOS_UNDEFINED,
                          pCodecCtx->width, pCodecCtx->height,
                          SDL_WINDOW_SHOWN|SDL_WINDOW_RESIZABLE);
if (!window) {
    fprintf(stderr, "Couldn't set create window: %s\n", SDL_GetError());
    quit(5);
}

renderer = SDL_CreateRenderer(window, -1, 0);
if (!renderer) {
    fprintf(stderr, "Couldn't set create renderer: %s\n", SDL_GetError());
    quit(6);
}
SDL_GetRendererInfo(renderer, &info);
printf("Using %s rendering\n", info.name);

// Create a streaming texture in YV12 format, matching the video dimensions
texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_YV12, SDL_TEXTUREACCESS_STREAMING, pCodecCtx->width, pCodecCtx->height);
if (!texture) {
    fprintf(stderr, "Couldn't create texture: %s\n", SDL_GetError());
    quit(7);
}

rect.x = 0;
rect.y = 0;
rect.w = pCodecCtx->width;
rect.h = pCodecCtx->height;
// end

// get YUV420P frame data (yuvdata / linesize come from the loading code above)

// Upload the frame and draw it. Note that SDL_UpdateTexture on a YV12 texture
// expects the planes packed contiguously in Y, V, U order, so with data stored
// in Y, U, V order the chroma planes end up swapped; see the SDL_UpdateYUVTexture
// sketch below for uploading the planes explicitly.
SDL_UpdateTexture( texture, &rect, yuvdata[0], linesize[0] );
SDL_RenderClear( renderer );
SDL_RenderCopy( renderer, texture, &rect, &rect );
SDL_RenderPresent( renderer );
SDL_Delay( 1000 );
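
Because the planes in the file are stored in Y, U, V order, a more robust upload path is SDL_UpdateYUVTexture (available since SDL 2.0.1), which takes the three planes and their pitches explicitly and places them correctly for both YV12 and IYUV textures. A minimal sketch, reusing yuvdata, linesize, event and done from above, and replacing the fixed delay with a simple event loop so the window stays responsive:

// Upload the Y, U and V planes explicitly so the chroma planes are not swapped
SDL_UpdateYUVTexture(texture, &rect,
                     (const Uint8 *)yuvdata[0], linesize[0],
                     (const Uint8 *)yuvdata[1], linesize[1],
                     (const Uint8 *)yuvdata[2], linesize[2]);

while (!done) {
    // Drain pending events so the window can be moved and closed normally
    while (SDL_PollEvent(&event)) {
        if (event.type == SDL_QUIT) {
            done = SDL_TRUE;
        }
    }

    SDL_RenderClear(renderer);
    SDL_RenderCopy(renderer, texture, &rect, &rect);
    SDL_RenderPresent(renderer);
    SDL_Delay(10);
}

// Clean up SDL objects before exiting
SDL_DestroyTexture(texture);
SDL_DestroyRenderer(renderer);
SDL_DestroyWindow(window);
SDL_Quit();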