Building a Cross-Platform Multifunctional Player with Qt + FFmpeg

Overview

This program uses a QWidget for video rendering, Qt Multimedia for audio rendering, and FFmpeg as the audio/video codec core, with CMake for cross-platform builds.

Build parameters:
DepsPath: CMake path of the FFmpeg libraries
QT_Dir: CMake path of Qt
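
A configure step might then look like this (hypothetical paths; adjust them to your local FFmpeg and Qt installs):

cmake -DDepsPath=/path/to/ffmpeg/cmake -DQT_Dir=/path/to/Qt/lib/cmake ..
cmake --build .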

The program is split into two parts: input and rendering. The input part opens the stream, decodes audio and video frame data, and stores each kind in its own data queue. The rendering part takes data from the queues, converts it to the target image/audio format, and finally renders it through Qt.

The program flow is shown in the diagram below:

[Flow diagram]

Data Module

The data module encapsulates the data queues together with image and audio format conversion. It contains the AudioConvert, ImageConvert, and DataContext classes, plus a wrapper around AVFrame data.
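
As a rough sketch, the queue inside DataContext can be pictured as a small blocking queue like the one below. This is an illustrative assumption, not the repository's exact implementation:

// Minimal sketch of a thread-safe blocking queue built on Qt primitives.
#include <QMutex>
#include <QWaitCondition>
#include <QQueue>

template <typename T>
class FrameQueue
{
public:
    void push(T item)
    {
        QMutexLocker lock(&mutex);
        queue.enqueue(item);
        notEmpty.wakeOne();        // wake a consumer blocked in pop()
    }

    T pop()                        // blocks until an item is available
    {
        QMutexLocker lock(&mutex);
        while (queue.isEmpty())
            notEmpty.wait(&mutex); // releases the mutex while waiting
        return queue.dequeue();
    }

private:
    QMutex         mutex;
    QWaitCondition notEmpty;
    QQueue<T>      queue;
};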

Image Conversion

ImageConvert::ImageConvert(AVPixelFormat in_pixelFormat, int in_width, int in_height,
                                 AVPixelFormat out_pixelFormat, int out_width, int out_height)
{
    this->in_pixelFormat    = in_pixelFormat;
    this->out_pixelFormat   = out_pixelFormat;
    this->in_width          = in_width;
    this->in_height         = in_height;
    this->out_width         = out_width;
    this->out_height        = out_height;
    // SWS_POINT (nearest neighbour) is the cheapest scaler; use
    // SWS_BILINEAR or SWS_BICUBIC for better quality.
    this->swsContext        = sws_getContext(in_width, in_height, in_pixelFormat,
                                              out_width, out_height, out_pixelFormat,
                                              SWS_POINT, nullptr, nullptr, nullptr);
    this->frame             = av_frame_alloc();
    // avpicture_get_size/avpicture_fill are deprecated; their replacements
    // live in libavutil/imgutils.h (align = 1 matches the old behaviour).
    this->buffer            = (uint8_t*)av_malloc(av_image_get_buffer_size(out_pixelFormat, out_width, out_height, 1));

    av_image_fill_arrays(this->frame->data, this->frame->linesize, this->buffer,
                         out_pixelFormat, out_width, out_height, 1);
}

ImageConvert::~ImageConvert()
{
    sws_freeContext(this->swsContext);
    av_frame_free(&this->frame);
    // The pixel buffer was allocated with av_malloc and is not owned by the
    // frame, so it must be released separately to avoid a leak.
    av_freep(&this->buffer);
}

void ImageConvert::convertImage(AVFrame *frame)
{
    // Scale/convert the source frame into the preallocated output buffer.
    sws_scale(this->swsContext, (const uint8_t *const *) frame->data, frame->linesize, 0,
              this->in_height, this->frame->data, this->frame->linesize);

    this->frame->width  = this->out_width;
    this->frame->height = this->out_height;
    this->frame->format = this->out_pixelFormat;
    this->frame->pts    = frame->pts;
}

Audio Conversion

AudioConvert::AudioConvert(AVSampleFormat in_sample_fmt, int in_sample_rate, int in_channels,
                                 AVSampleFormat out_sample_fmt, int out_sample_rate, int out_channels)
{
    this->in_sample_fmt     = in_sample_fmt;
    this->out_sample_fmt    = out_sample_fmt;
    this->in_channels       = in_channels;
    this->out_channels      = out_channels;
    this->in_sample_rate    = in_sample_rate;
    this->out_sample_rate   = out_sample_rate;
    this->swrContext        = swr_alloc_set_opts(nullptr,
                                                 av_get_default_channel_layout(out_channels),
                                                 out_sample_fmt,
                                                 out_sample_rate,
                                                 av_get_default_channel_layout(in_channels),
                                                 in_sample_fmt,
                                                 in_sample_rate, 0, nullptr);

    this->invalidated       = false;

    swr_init(this->swrContext);

    // Array of per-channel data pointers: each element holds a pointer, so
    // the element size is sizeof(*this->buffer), not sizeof(**this->buffer).
    this->buffer = (uint8_t**)calloc(out_channels, sizeof(*this->buffer));
}

AudioConvert::~AudioConvert()
{
    swr_free(&this->swrContext);

    // av_samples_alloc() puts all channel data in one block anchored at
    // buffer[0]; free that block, then the pointer array itself.
    av_freep(&this->buffer[0]);
    free(this->buffer);
}

int AudioConvert::allocContextSamples(int nb_samples)
{
    // Allocate the output sample buffer once, on first use.
    if(!this->invalidated)
    {
        this->invalidated = true;

        return av_samples_alloc(this->buffer, nullptr, this->out_channels,
                                nb_samples, this->out_sample_fmt, 0);
    }

    return 0;
}

int AudioConvert::convertAudio(AVFrame *frame)
{
    // swr_convert returns the number of samples produced per channel.
    int len = swr_convert(this->swrContext, this->buffer, frame->nb_samples,
                          (const uint8_t **) frame->extended_data, frame->nb_samples);

    this->bufferLen  =  this->out_channels * len * av_get_bytes_per_sample(this->out_sample_fmt);

    return this->bufferLen;
}
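
A hypothetical call sequence, assuming a decoded planar-float frame and an outputDevice QIODevice obtained from the audio output (every name outside AudioConvert is an assumption, and buffer/bufferLen are treated as accessible here):

// Hypothetical usage sketch: resample one decoded frame to packed S16
// stereo and write the resulting bytes to the audio device.
AudioConvert cvt(AV_SAMPLE_FMT_FLTP, 48000, 2,     // input: planar float
                 AV_SAMPLE_FMT_S16,  44100, 2);    // output: packed s16
cvt.allocContextSamples(frame->nb_samples);        // one-time buffer allocation
int bytes = cvt.convertAudio(frame);               // packed byte count
outputDevice->write((const char *) cvt.buffer[0], bytes);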

Input

The input part consists of a single decode thread, which decodes the audio/video data and stores it in the corresponding queues. It contains two classes: InputThread and InputFormat. InputFormat wraps FFmpeg's audio/video decoding process; InputThread instantiates InputFormat, reads frames from it, and stores them in the matching audio or video queue.
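
As a rough illustration, the demux/decode loop that InputFormat wraps might look like the following. This is a minimal sketch under assumed names (formatContext, videoIndex/audioIndex, videoCodecCtx/audioCodecCtx, and the two queues); the repository's actual implementation may differ:

// Minimal sketch of a demux/decode loop; error handling omitted.
AVPacket *pkt = av_packet_alloc();
AVFrame  *frm = av_frame_alloc();

while (av_read_frame(formatContext, pkt) >= 0)
{
    // Skip streams we do not play (e.g. subtitles).
    if (pkt->stream_index != videoIndex && pkt->stream_index != audioIndex)
    {
        av_packet_unref(pkt);
        continue;
    }

    AVCodecContext *dec = (pkt->stream_index == videoIndex) ? videoCodecCtx
                                                            : audioCodecCtx;
    if (avcodec_send_packet(dec, pkt) == 0)
    {
        while (avcodec_receive_frame(dec, frm) == 0)
        {
            // Hand a reference-counted copy of the frame to the matching
            // queue; the render threads consume from the other end.
            if (pkt->stream_index == videoIndex)
                videoQueue->push(av_frame_clone(frm));
            else
                audioQueue->push(av_frame_clone(frm));
        }
    }
    av_packet_unref(pkt);
}

av_frame_free(&frm);
av_packet_free(&pkt);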


Video Rendering

A video is, in essence, a sequence of images played on screen in a fixed time order. To render video with Qt, each captured AVFrame only needs to be converted into an image Qt can display. A QWidget can render an image (QImage/QPixmap) by obtaining a QPainter inside its paintEvent handler, so rendering video with a QWidget comes down to taking an AVFrame from the queue, converting it to QImage format, and drawing that QImage onto the widget. The relevant code follows:

Rendering the QImage

void VideoRender::paintEvent(QPaintEvent *event)
{
    QPainter painter(this);

    painter.setRenderHint(QPainter::Antialiasing, true);

    // Fill the widget with a grey background before drawing the frame.
    painter.setBrush(QColor(0xcccccc));
    painter.drawRect(0, 0, this->width(), this->height());

    if(!frame.isNull())
    {
        painter.drawImage(QPoint(0,0), frame);
    }
}
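
On the widget side, the onFrame signal can be connected to a slot that stores the image and schedules a repaint. A minimal sketch (the slot name updateFrame is an assumption):

// Assumed receiving slot: keep the latest frame and repaint. A queued
// signal/slot connection makes this safe to invoke from VideoThread.
void VideoRender::updateFrame(const QImage &img)
{
    this->frame = img;   // the QImage painted in paintEvent()
    this->update();      // schedules an asynchronous repaint
}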

Image Conversion and Sync

void VideoThread::run()
{
    AvFrameContext  *videoFrame     = nullptr;
    ImageConvert    *imageContext   = nullptr;
    int64_t         realTime        = 0;
    int64_t         lastPts         = 0;
    int64_t         delay           = 0;
    int64_t         lastDelay       = 0;

    while (!isInterruptionRequested())
    {
        videoFrame = dataContext->getFrame();

        if(videoFrame == nullptr)
            break;

        // Rebuild the converter whenever the source resolution or the
        // target widget size changes.
        if(imageContext != nullptr && (imageContext->in_width != videoFrame->frame->width ||
                                       imageContext->in_height != videoFrame->frame->height ||
                                       imageContext->out_width != size.width() ||
                                       imageContext->out_height != size.height()))
        {
            delete imageContext;
            imageContext = nullptr;
        }

        if(imageContext == nullptr)
            imageContext = new ImageConvert(videoFrame->pixelFormat,
                                               videoFrame->frame->width,
                                               videoFrame->frame->height,
                                               AV_PIX_FMT_RGB32,
                                               size.width(),
                                               size.height());

        imageContext->convertImage(videoFrame->frame);

        if(audioRender != nullptr)
        {
            // Sync against the audio clock: estimate the inter-frame delay
            // from consecutive pts values and sleep when video runs ahead.
            realTime = audioRender->getCurAudioTime();

            if(lastPts == 0)
                lastPts = videoFrame->pts;

            lastDelay   = delay;
            delay       = videoFrame->pts - lastPts;

            lastPts = videoFrame->pts;

            // Guard against bad pts jumps (seek, stream discontinuity):
            // fall back to the previous delay.
            if(delay < 0 || delay > 1000000)
            {
                delay = lastDelay != 0 ? lastDelay : 0;
            }

            if(delay != 0)
            {
                if(videoFrame->pts > realTime)
                    QThread::usleep(delay * 1.5);
//                else
//                    QThread::usleep(delay / 1.5);
            }
        }

        QImage img(imageContext->buffer, size.width(), size.height(), QImage::Format_RGB32);

        emit onFrame(img);

        delete videoFrame;
    }

    delete imageContext;
}

Audio Rendering

The audio renderer uses QAudioOutput from the Qt Multimedia module. It opens an audio output; the audio conversion thread then takes data from the queue, converts its format, and writes it into the audio output's buffer. The current playback time is the pts of the frame being written minus the time represented by the audio data still sitting in the buffer. This time serves as the sync clock: the video render thread synchronizes audio and video against the audio renderer's current time.
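
Opening the output might look like this with Qt 5's QAudioOutput. This is a minimal sketch; the format values match the constants hard-coded in getCurAudioTime() below, but the repository's actual setup may differ:

// Assumed setup sketch: 44.1 kHz, stereo, signed 16-bit PCM.
QAudioFormat format;
format.setSampleRate(44100);
format.setChannelCount(2);
format.setSampleSize(16);
format.setCodec("audio/pcm");
format.setByteOrder(QAudioFormat::LittleEndian);
format.setSampleType(QAudioFormat::SignedInt);

audioOutput  = new QAudioOutput(format);
outputBuffer = audioOutput->start();  // push mode: returns the QIODevice to write PCM into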

The time is computed as follows:

int64_t AudioRender::getCurAudioTime()
{
    // Bytes still queued in the output buffer: written but not yet played.
    int64_t size = audioOutput->bufferSize() - outputBuffer->bytesAvailable();

    // 44.1 kHz * 2 channels * 2 bytes per sample (S16).
    int bytes_per_sec = 44100 * 2 * 2;

    // Current play position = pts of the last write minus the duration
    // (in microseconds) of the data still buffered.
    int64_t pts = this->curPts - static_cast<double>(size) / bytes_per_sec * 1000000;

    return pts;
}

Source code for this article: https://github.com/Keanhe/QtVideoPlayer
