Capturing Camera Image Data with the FFmpeg API


Cameras are one of the most common peripherals, used in many scenarios such as live video streaming and video surveillance. There are several ways to capture camera image data: you can use Qt's built-in API, or the APIs provided by DirectShow, OpenCV, or FFmpeg (which, on Windows, captures through DirectShow under the hood). This article covers capturing camera data with the FFmpeg API.

Below is what a simple camera-display example looks like when running:
[Screenshot: CameraCapture demo]
Capturing camera image data with FFmpeg basically breaks down into the following steps:

  1. Register all formats and devices with av_register_all() and avdevice_register_all() (see the registration sketch after this list).
  2. Query the camera's information.
  3. Open the camera.
  4. Read the camera data, then render and display it.
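
For the first step, note that av_register_all() was deprecated in FFmpeg 4.0 (newer versions register formats automatically), while avdevice_register_all() is still required to expose capture devices such as dshow. A minimal registration sketch:

extern "C" {
#include <libavformat/avformat.h>
#include <libavdevice/avdevice.h>
}

static void initFFmpeg(void)
{
    av_register_all();        // deprecated no-op on FFmpeg >= 4.0, needed on older versions
    avdevice_register_all();  // registers capture devices (dshow, v4l2, avfoundation, ...)
}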

1. Getting the Camera Information

First, get the list of cameras available on the current machine, mainly to obtain each camera's name. Here we use Qt's API:

// Get the list of available cameras
QList<QCameraInfo> cameras = QCameraInfo::availableCameras();
foreach (const QCameraInfo &cameraInfo, cameras)
{
    // Get the camera's name
    QString cameraName = cameraInfo.description();
    // Add it to the combo box
    m_pComboBox->addItem(cameraName);
}

If you want to inspect a camera's detailed capabilities, you can use the FFmpeg command line:

ffmpeg -list_options true -f dshow -i video="BisonCam, NB Pro"

Here my camera's name is BisonCam, NB Pro.

The output looks like this:
[Screenshot: CameraInfo — the camera's supported options]
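
If you are not sure of the exact device name, FFmpeg can also list all DirectShow devices:

ffmpeg -list_devices true -f dshow -i dummy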

2. Opening and Initializing the Camera

Here is the code:

// Open the camera
bool CameraCapture::open(const QString& deviceName)
{
    m_avFrame = av_frame_alloc();

    AVInputFormat *inputFormat = av_find_input_format("dshow");

    AVDictionary *format_opts =  nullptr;
    //av_dict_set_int(&format_opts, "rtbufsize", 3041280 * 10, 0);
    av_dict_set(&format_opts, "avioflags", "direct", 0);
    av_dict_set(&format_opts, "video_size", "1280x720", 0);
    av_dict_set(&format_opts, "framerate", "30", 0);
    av_dict_set(&format_opts, "vcodec", "mjpeg", 0);

    m_pFormatContent = avformat_alloc_context();
    QString urlString = QString("video=") + deviceName;
    // Open the input; options the demuxer does not consume remain in the dictionary
    int result = avformat_open_input(&m_pFormatContent, urlString.toLocal8Bit().data(), inputFormat, &format_opts);
    av_dict_free(&format_opts);
    if (result < 0)
    {
        qDebug() << "AVFormat Open Input Error!";
        return false;
    }

    result = avformat_find_stream_info(m_pFormatContent, nullptr);
    if (result < 0)
    {
        qDebug() << "AVFormat Find Stream Info Error!";
        return false;
    }

    // find Video Stream Index
    int count = m_pFormatContent->nb_streams;
    for (int i=0; i<count; ++i)
    {
        if (m_pFormatContent->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
        {
            m_nVideoStreamIndex = i;
            break;
        }
    }

    if (m_nVideoStreamIndex < 0)
        return false;

    // Find the decoder (streams[i]->codec is the old pre-FFmpeg-4 API;
    // newer code would use streams[i]->codecpar with avcodec_alloc_context3)
    m_pCaptureContext = m_pFormatContent->streams[m_nVideoStreamIndex]->codec;
    AVCodec* codec = avcodec_find_decoder(m_pCaptureContext->codec_id);
    if (codec == nullptr)
        return false;

    // Open the decoder
    if (avcodec_open2(m_pCaptureContext, codec, nullptr) != 0)
        return false;

    // Record the size, pixel format, and related info
    m_pCameraData.m_nWidth = m_pCaptureContext->width;
    m_pCameraData.m_nHeight = m_pCaptureContext->height;
    AVPixelFormat format = m_pCaptureContext->pix_fmt;
    format = convertDeprecatedFormat(format);

    if (m_isUsedSwsScale)
        return true;

    m_isUsedSwsScale = false;
    if (format == AV_PIX_FMT_YUV420P)
        m_pCameraData.m_pixelFormat = CameraData::PIXFORMAT_YUV420P;
    else if (format == AV_PIX_FMT_YUV422P)
        m_pCameraData.m_pixelFormat = CameraData::PIXFORMAT_YUV422P;
    else if (format == AV_PIX_FMT_YUV444P)
        m_pCameraData.m_pixelFormat = CameraData::PIXFORMAT_YUV444P;
    else {
        m_pCameraData.m_pixelFormat = CameraData::PIXFORMAT_RGB24;
        m_isUsedSwsScale = true;
    }

    return true;
}

A few points to note here:

  1. Before opening the camera, you can configure parameters such as the resolution and frame rate through the options dictionary.
  2. The URL must use the format video=<your camera name>; otherwise the open call will fail.
  3. Everything after opening the camera is standard FFmpeg usage, much like opening a video or audio file, so it is not covered in detail here. If you are interested, see my article: Using FFmpeg to Decode Audio Files.
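
As a side note on failure handling: the qDebug messages in open() are fairly generic. FFmpeg can report the concrete reason through av_strerror; here is a small helper sketch (printAVError is my own illustrative name):

#include <QDebug>
extern "C" {
#include <libavutil/error.h>
}

// Turn an FFmpeg error code into a readable message
static void printAVError(const char* what, int errnum)
{
    char buf[AV_ERROR_MAX_STRING_SIZE] = {0};
    av_strerror(errnum, buf, sizeof(buf));
    qDebug() << what << ":" << buf;
}

// Usage inside open():
//   if (result < 0) { printAVError("avformat_open_input", result); return false; }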

3. Capturing and Rendering Camera Data

Again, here is the code:

// Capture one frame of data
bool CameraCapture::capture(void)
{
    AVPacket pkt;
    // Read one packet
    int result = av_read_frame(m_pFormatContent, &pkt);
    if (result)
        return false;

    if (pkt.stream_index != m_nVideoStreamIndex)
    {
        av_packet_unref(&pkt);
        return false;
    }

    // Decode the video packet
    result = avcodec_send_packet(m_pCaptureContext, &pkt);
    if (result)
    {
        av_packet_unref(&pkt);
        return false;
    }

    result = avcodec_receive_frame(m_pCaptureContext, m_avFrame);
    if (result)
    {
        av_packet_unref(&pkt);
        return false;
    }

    // Convert to RGB24
    if (m_isUsedSwsScale)
    {
        // Lazily create the RGB frame
        if (m_pRGBFrame == nullptr)
        {
            m_pRGBFrame = av_frame_alloc();
            m_pRGBFrame->width = m_avFrame->width;
            m_pRGBFrame->height = m_avFrame->height;
            // av_image_alloc fills data[] and linesize[] for us
            av_image_alloc(m_pRGBFrame->data, m_pRGBFrame->linesize,
                m_pRGBFrame->width, m_pRGBFrame->height, AV_PIX_FMT_RGB24, 1);
        }

        // Convert the frame to RGB24
        frameToRgbImage(m_pRGBFrame, m_avFrame);

        // Publish the data under the lock
        m_pMutex.lock();
        m_pCameraData.m_cameraData.clear();
        m_pCameraData.m_cameraData.append((char*)m_pRGBFrame->data[0], \
                m_pRGBFrame->width * m_pRGBFrame->height * 3);
        m_pCameraData.m_pixelFormat = CameraData::PIXFORMAT_RGB24;
        m_pMutex.unlock();
    }
    else
    {
        disposeYUVData();
    }

    av_packet_unref(&pkt);
    return true;
}
  1. av_read_frame reads one packet of data.
  2. avcodec_send_packet and avcodec_receive_frame decode that packet into a video frame.
  3. The resulting AVFrame holds the decoded image data, ready for processing.
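
One caveat about step 2: capture() above treats every nonzero return as a failure, but avcodec_receive_frame returning AVERROR(EAGAIN) only means the decoder needs more packets before it can emit a frame. A more tolerant decode step might look like this (a sketch with illustrative names, not the article's exact code):

extern "C" {
#include <libavcodec/avcodec.h>
}

// Returns true only when a complete frame was produced.
static bool decodeOnePacket(AVCodecContext* ctx, AVPacket* pkt, AVFrame* frame)
{
    if (avcodec_send_packet(ctx, pkt) < 0)
        return false;

    int ret = avcodec_receive_frame(ctx, frame);
    if (ret == AVERROR(EAGAIN))
        return false; // not an error: feed the next packet and try again
    return ret == 0;
}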

About rendering:
The data in an AVFrame may be RGB24 or one of the YUV formats; the frame's format field records which one. I handle the two cases separately: YUV data is rendered directly with OpenGL, while other formats are converted to RGB via SWS before display (decoding and rendering YUV on the GPU is more efficient). SDL could also be used for rendering.

Using SWS is also straightforward:

void CameraCapture::frameToRgbImage(AVFrame* pDest, AVFrame* frame)
{
    // Create the SWS context (lazily, on first use)
    if (m_pSwsContext == nullptr)
    {
        m_pSwsContext = sws_getContext(frame->width, frame->height, convertDeprecatedFormat((AVPixelFormat)(frame->format)), \
            frame->width, frame->height, AV_PIX_FMT_RGB24, \
            SWS_BILINEAR, nullptr, nullptr, nullptr);
    }

    sws_scale(m_pSwsContext, frame->data, frame->linesize, 0, frame->height, \
        pDest->data, pDest->linesize);
}

One thing to watch out for with YUV data: the row size recorded in linesize can be larger than the visible width. That is, for a 1280*720 frame the stored data may be bigger than expected, because each of the height rows can carry extra padding bytes at its end. linesize is the byte size of one row. Here is my handling code (an FFmpeg helper that performs this padding-stripping copy is shown after the listing):

void CameraCapture::disposeYUVData(void)
{
    QMutexLocker locker(&m_pMutex);
    m_pCameraData.m_cameraData.clear();

    AVPixelFormat pixFormat = convertDeprecatedFormat((AVPixelFormat)m_avFrame->format);
    // Append the Y plane
    if (m_avFrame->linesize[0] == m_avFrame->width)
    {
        m_pCameraData.m_cameraData.append((char*)m_avFrame->data[0], \
                m_avFrame->linesize[0] * m_avFrame->height);
    }
    else
    {
        // Rows are padded: copy row by row, skipping the padding
        for (int i=0; i<m_avFrame->height; ++i)
        {
            m_pCameraData.m_cameraData.append((char*)m_avFrame->data[0] + i * m_avFrame->linesize[0], \
                    m_avFrame->width);
        }
    }

    // Append the U plane
    int uDataWidth = m_avFrame->width;
    int uDataHeight = m_avFrame->height;
    if (pixFormat == AV_PIX_FMT_YUV420P)
    {
        uDataWidth = uDataWidth / 2;
        uDataHeight = uDataHeight / 2;
    }
    else if (pixFormat == AV_PIX_FMT_YUV422P)
        uDataWidth = uDataWidth / 2;

    if (m_avFrame->linesize[1] == uDataWidth)
    {
        m_pCameraData.m_cameraData.append((char*)m_avFrame->data[1], \
                m_avFrame->linesize[1] * uDataHeight);
    }
    else
    {
        for (int i=0; i<uDataHeight; ++i)
        {
            m_pCameraData.m_cameraData.append((char*)m_avFrame->data[1] + i * m_avFrame->linesize[1], \
                    uDataWidth);
        }
    }

    // Append the V plane (same dimensions as the U plane)
    int vDataWidth = uDataWidth;
    int vDataHeight = uDataHeight;
    if (m_avFrame->linesize[2] == vDataWidth)
    {
        m_pCameraData.m_cameraData.append((char*)m_avFrame->data[2], \
                m_avFrame->linesize[2] * vDataHeight);
    }
    else
    {
        for (int i=0; i<vDataHeight; ++i)
        {
            m_pCameraData.m_cameraData.append((char*)m_avFrame->data[2] + i * m_avFrame->linesize[2], \
                    vDataWidth);
        }
    }
}
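
As an alternative to the hand-rolled copies above, libavutil provides av_image_copy_to_buffer, which strips the per-line padding while packing all of a frame's planes into one contiguous buffer. A minimal sketch (packFrame is my own illustrative name):

#include <QByteArray>
extern "C" {
#include <libavutil/frame.h>
#include <libavutil/imgutils.h>
}

// Pack an AVFrame's planes into a contiguous, padding-free buffer.
// Returns the number of bytes written, or a negative AVERROR code.
static int packFrame(const AVFrame* frame, QByteArray& out)
{
    const AVPixelFormat fmt = (AVPixelFormat)frame->format;
    int size = av_image_get_buffer_size(fmt, frame->width, frame->height, 1);
    if (size < 0)
        return size;

    out.resize(size);
    return av_image_copy_to_buffer((uint8_t*)out.data(), size,
        frame->data, frame->linesize, fmt, frame->width, frame->height, 1);
}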

data[0] and linesize[0] hold the Y plane's data and the byte size of one of its rows;
data[1] and linesize[1] hold the same for the U plane;
data[2] and linesize[2] hold the same for the V plane.
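
For example, with a 1280*720 YUV420P frame the appends above produce 1280*720 + 640*360 + 640*360 = 1,382,400 bytes in total; for YUV422P each chroma plane is 640*720 bytes, giving 1,843,200 bytes.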

The function convertDeprecatedFormat maps deprecated pixel formats to their current equivalents; for example, AV_PIX_FMT_YUVJ420P and AV_PIX_FMT_YUV420P share the same memory layout (the J variant just marks full-range color).

Its implementation:

AVPixelFormat CameraCapture::convertDeprecatedFormat(enum AVPixelFormat format)
{
    switch (format)
    {
    case AV_PIX_FMT_YUVJ420P:
        return AV_PIX_FMT_YUV420P;
    case AV_PIX_FMT_YUVJ422P:
        return AV_PIX_FMT_YUV422P;
    case AV_PIX_FMT_YUVJ444P:
        return AV_PIX_FMT_YUV444P;
    case AV_PIX_FMT_YUVJ440P:
        return AV_PIX_FMT_YUV440P;
    default:
        return format;
    }
}

The complete code:
Header file CameraCapture.h

#ifndef CAMERACAPTURE_H
#define CAMERACAPTURE_H

#include <QObject>
#include <atomic>
#include <QMutex>
#include <QMutexLocker>
#include "audiovideocore_global.h"
extern "C"
{
#include <libavdevice/avdevice.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libswscale/swscale.h>
#include <libavutil/imgutils.h>
#include <libavutil/parseutils.h>
}

class CameraCapture;
class AUDIOVIDEOCORESHARED_EXPORT CameraData
{
public:
    enum PixelFormat
    {
        PIXFORMAT_YUV420P,
        PIXFORMAT_YUV422P,
        PIXFORMAT_YUV444P,

        PIXFORMAT_RGB24
    };

public:
    CameraData(QMutex *mutex)
        :m_pMutex(mutex){
        //qRegisterMetaType<CameraData>("CameraData");
    }
    ~CameraData(){}

    int getWidth(void) const {return m_nWidth;}
    int getHeight(void) const {return m_nHeight;}
    QByteArray getCameraData(void) {
        QMutexLocker locker(m_pMutex);
        return m_cameraData;
    }
    PixelFormat getPixelFormat(void) const {return m_pixelFormat;}

    friend class CameraCapture;

private:
    QByteArray m_cameraData;
    std::atomic<int> m_nWidth;
    std::atomic<int> m_nHeight;
    PixelFormat m_pixelFormat;

    QMutex* m_pMutex = nullptr;
};

class CameraCapture : public QObject
{
public:
    CameraCapture(QObject* parent = nullptr);
    virtual ~CameraCapture();

    // Open the camera
    bool open(const QString& deviceName);
    // Close the camera
    void close(void);

    // Capture one frame of data
    bool capture(void);

    // Whether to use sws_scale to convert frames to RGB
    void setUsedSwsScaleEnabled(bool isEnabled);

    // Get the captured data
    const CameraData& getCameraData(void) {return m_pCameraData;}

private:
    AVFrame* m_avFrame = nullptr;
    AVFrame* m_pRGBFrame = nullptr;

    int m_nVideoStreamIndex = -1;
    AVFormatContext* m_pFormatContent = nullptr;
    AVCodecContext* m_pCaptureContext = nullptr;

    SwsContext* m_pSwsContext = nullptr;

    CameraData m_pCameraData;
    AVPixelFormat m_pixelFormat;
    QMutex m_pMutex;

    bool m_isUsedSwsScale = false;

    void frameToRgbImage(AVFrame* pDest, AVFrame* frame);
    AVPixelFormat convertDeprecatedFormat(enum AVPixelFormat format);

    // Combine the YUV planes into a single buffer
    void disposeYUVData(void);
};

#endif

Source file CameraCapture.cpp

#include "CameraCapture.h"
#include <QDebug>

CameraCapture::CameraCapture(QObject* parent)
    :QObject(parent),
     m_pCameraData(&m_pMutex)
{
    av_register_all();
    avdevice_register_all();
}

CameraCapture::~CameraCapture()
{

}

// Open the camera
bool CameraCapture::open(const QString& deviceName)
{
    m_avFrame = av_frame_alloc();

    AVInputFormat *inputFormat = av_find_input_format("dshow");

    AVDictionary *format_opts =  nullptr;
    //av_dict_set_int(&format_opts, "rtbufsize", 3041280 * 10, 0);
    av_dict_set(&format_opts, "avioflags", "direct", 0);
    av_dict_set(&format_opts, "video_size", "1280x720", 0);
    av_dict_set(&format_opts, "framerate", "30", 0);
    av_dict_set(&format_opts, "vcodec", "mjpeg", 0);

    m_pFormatContent = avformat_alloc_context();
    QString urlString = QString("video=") + deviceName;
    // Open the input; options the demuxer does not consume remain in the dictionary
    int result = avformat_open_input(&m_pFormatContent, urlString.toLocal8Bit().data(), inputFormat, &format_opts);
    av_dict_free(&format_opts);
    if (result < 0)
    {
        qDebug() << "AVFormat Open Input Error!";
        return false;
    }

    result = avformat_find_stream_info(m_pFormatContent, nullptr);
    if (result < 0)
    {
        qDebug() << "AVFormat Find Stream Info Error!";
        return false;
    }

    // find Video Stream Index
    int count = m_pFormatContent->nb_streams;
    for (int i=0; i<count; ++i)
    {
        if (m_pFormatContent->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
        {
            m_nVideoStreamIndex = i;
            break;
        }
    }

    if (m_nVideoStreamIndex < 0)
        return false;

    // Find the decoder (streams[i]->codec is the old pre-FFmpeg-4 API;
    // newer code would use streams[i]->codecpar with avcodec_alloc_context3)
    m_pCaptureContext = m_pFormatContent->streams[m_nVideoStreamIndex]->codec;
    AVCodec* codec = avcodec_find_decoder(m_pCaptureContext->codec_id);
    if (codec == nullptr)
        return false;

    // Open the decoder
    if (avcodec_open2(m_pCaptureContext, codec, nullptr) != 0)
        return false;

    // Record the size, pixel format, and related info
    m_pCameraData.m_nWidth = m_pCaptureContext->width;
    m_pCameraData.m_nHeight = m_pCaptureContext->height;
    AVPixelFormat format = m_pCaptureContext->pix_fmt;
    format = convertDeprecatedFormat(format);

    if (m_isUsedSwsScale)
        return true;

    m_isUsedSwsScale = false;
    if (format == AV_PIX_FMT_YUV420P)
        m_pCameraData.m_pixelFormat = CameraData::PIXFORMAT_YUV420P;
    else if (format == AV_PIX_FMT_YUV422P)
        m_pCameraData.m_pixelFormat = CameraData::PIXFORMAT_YUV422P;
    else if (format == AV_PIX_FMT_YUV444P)
        m_pCameraData.m_pixelFormat = CameraData::PIXFORMAT_YUV444P;
    else {
        m_pCameraData.m_pixelFormat = CameraData::PIXFORMAT_RGB24;
        m_isUsedSwsScale = true;
    }

    return true;
}

// Close the camera
void CameraCapture::close(void)
{
    sws_freeContext(m_pSwsContext);
    m_pSwsContext = nullptr;     // so a later open() recreates it
    av_frame_free(&m_avFrame);   // av_frame_free also nulls the pointer
    av_frame_free(&m_pRGBFrame);

    avcodec_close(m_pCaptureContext);
    avformat_close_input(&m_pFormatContent);
}

void CameraCapture::setUsedSwsScaleEnabled(bool isEnabled)
{
    m_isUsedSwsScale = isEnabled;
}

void CameraCapture::frameToRgbImage(AVFrame* pDest, AVFrame* frame)
{
    // Create the SWS context (lazily, on first use)
    if (m_pSwsContext == nullptr)
    {
        m_pSwsContext = sws_getContext(frame->width, frame->height, convertDeprecatedFormat((AVPixelFormat)(frame->format)), \
            frame->width, frame->height, AV_PIX_FMT_RGB24, \
            SWS_BILINEAR, nullptr, nullptr, nullptr);
    }

    sws_scale(m_pSwsContext, frame->data, frame->linesize, 0, frame->height, \
        pDest->data, pDest->linesize);
}

AVPixelFormat CameraCapture::convertDeprecatedFormat(enum AVPixelFormat format)
{
    switch (format)
    {
    case AV_PIX_FMT_YUVJ420P:
        return AV_PIX_FMT_YUV420P;
    case AV_PIX_FMT_YUVJ422P:
        return AV_PIX_FMT_YUV422P;
    case AV_PIX_FMT_YUVJ444P:
        return AV_PIX_FMT_YUV444P;
    case AV_PIX_FMT_YUVJ440P:
        return AV_PIX_FMT_YUV440P;
    default:
        return format;
    }
}

void CameraCapture::disposeYUVData(void)
{
    QMutexLocker locker(&m_pMutex);
    m_pCameraData.m_cameraData.clear();

    AVPixelFormat pixFormat = convertDeprecatedFormat((AVPixelFormat)m_avFrame->format);
    // Append the Y plane
    if (m_avFrame->linesize[0] == m_avFrame->width)
    {
        m_pCameraData.m_cameraData.append((char*)m_avFrame->data[0], \
                m_avFrame->linesize[0] * m_avFrame->height);
    }
    else
    {
        // Rows are padded: copy row by row, skipping the padding
        for (int i=0; i<m_avFrame->height; ++i)
        {
            m_pCameraData.m_cameraData.append((char*)m_avFrame->data[0] + i * m_avFrame->linesize[0], \
                    m_avFrame->width);
        }
    }

    // Append the U plane
    int uDataWidth = m_avFrame->width;
    int uDataHeight = m_avFrame->height;
    if (pixFormat == AV_PIX_FMT_YUV420P)
    {
        uDataWidth = uDataWidth / 2;
        uDataHeight = uDataHeight / 2;
    }
    else if (pixFormat == AV_PIX_FMT_YUV422P)
        uDataWidth = uDataWidth / 2;

    if (m_avFrame->linesize[1] == uDataWidth)
    {
        m_pCameraData.m_cameraData.append((char*)m_avFrame->data[1], \
                m_avFrame->linesize[1] * uDataHeight);
    }
    else
    {
        for (int i=0; i<uDataHeight; ++i)
        {
            m_pCameraData.m_cameraData.append((char*)m_avFrame->data[1] + i * m_avFrame->linesize[1], \
                    uDataWidth);
        }
    }

    // Append the V plane (same dimensions as the U plane)
    int vDataWidth = uDataWidth;
    int vDataHeight = uDataHeight;
    if (m_avFrame->linesize[2] == vDataWidth)
    {
        m_pCameraData.m_cameraData.append((char*)m_avFrame->data[2], \
                m_avFrame->linesize[2] * vDataHeight);
    }
    else
    {
        for (int i=0; i<vDataHeight; ++i)
        {
            m_pCameraData.m_cameraData.append((char*)m_avFrame->data[2] + i * m_avFrame->linesize[2], \
                    vDataWidth);
        }
    }
}

// Capture one frame of data
bool CameraCapture::capture(void)
{
    AVPacket pkt;
    // Read one packet
    int result = av_read_frame(m_pFormatContent, &pkt);
    if (result)
        return false;

    if (pkt.stream_index != m_nVideoStreamIndex)
    {
        av_packet_unref(&pkt);
        return false;
    }

    // Decode the video packet
    result = avcodec_send_packet(m_pCaptureContext, &pkt);
    if (result)
    {
        av_packet_unref(&pkt);
        return false;
    }

    result = avcodec_receive_frame(m_pCaptureContext, m_avFrame);
    if (result)
    {
        av_packet_unref(&pkt);
        return false;
    }

    // Convert to RGB24
    if (m_isUsedSwsScale)
    {
        // Lazily create the RGB frame
        if (m_pRGBFrame == nullptr)
        {
            m_pRGBFrame = av_frame_alloc();
            m_pRGBFrame->width = m_avFrame->width;
            m_pRGBFrame->height = m_avFrame->height;
            // av_image_alloc fills data[] and linesize[] for us
            av_image_alloc(m_pRGBFrame->data, m_pRGBFrame->linesize,
                m_pRGBFrame->width, m_pRGBFrame->height, AV_PIX_FMT_RGB24, 1);
        }

        // Convert the frame to RGB24
        frameToRgbImage(m_pRGBFrame, m_avFrame);

        // Publish the data under the lock
        m_pMutex.lock();
        m_pCameraData.m_cameraData.clear();
        m_pCameraData.m_cameraData.append((char*)m_pRGBFrame->data[0], \
                m_pRGBFrame->width * m_pRGBFrame->height * 3);
        m_pCameraData.m_pixelFormat = CameraData::PIXFORMAT_RGB24;
        m_pMutex.unlock();
    }
    else
    {
        disposeYUVData();
    }

    av_packet_unref(&pkt);
    return true;
}

Calling it from a thread:
Header file CameraCaptureThread.h

#ifndef CAMERACAPTURETHREAD_H
#define CAMERACAPTURETHREAD_H

#include <QThread>
#include "CameraCapture.h"
#include "audiovideocore_global.h"
class AUDIOVIDEOCORESHARED_EXPORT CameraCaptureThread : public QThread
{
    Q_OBJECT

public:
    CameraCaptureThread(QObject* parent = nullptr);
    virtual ~CameraCaptureThread();

    void run(void) override;

    // Open the camera
    bool openCamera(const QString& cameraName);
    // Close the camera
    void closeCamera(void);

    // Whether to use sws_scale to convert frames to RGB
    void setUsedSwsScaleEnabled(bool isEnabled){
        m_pCameraCapture->setUsedSwsScaleEnabled(isEnabled);
    }

    // Get the captured data
    const CameraData& getCameraData(void) {return m_pCameraCapture->getCameraData();}

private:
    CameraCapture* m_pCameraCapture = nullptr;

signals:
    void needUpdate();
};

#endif

Source file CameraCaptureThread.cpp

#include "CameraCaptureThread.h"

CameraCaptureThread::CameraCaptureThread(QObject* parent)
    :QThread(parent)
{
    m_pCameraCapture = new CameraCapture(this);
}

CameraCaptureThread::~CameraCaptureThread()
{
    closeCamera();
}

void CameraCaptureThread::run(void)
{
    while (!this->isInterruptionRequested())
    {
        // Capture one frame of camera data (blocks until a frame arrives)
        m_pCameraCapture->capture();

        // Notify the UI to refresh
        emit needUpdate();
    }
}

bool CameraCaptureThread::openCamera(const QString& cameraName)
{
    bool isOpened = m_pCameraCapture->open(cameraName);
    if (isOpened)
        this->start();

    return isOpened;
}

void CameraCaptureThread::closeCamera(void)
{
    if (this->isRunning())
    {
        this->requestInterruption();
        this->wait();

        m_pCameraCapture->close();
    }
}

To open the camera, simply call openCamera, then connect the needUpdate signal to refresh the display.
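
Since needUpdate is emitted from the worker thread, Qt's default Qt::AutoConnection delivers it as a queued call on the GUI thread, so the slot can safely touch widgets. A sketch of the wiring (assuming m_pCameraCaptureControl is the CameraCaptureThread instance used in the slot below):

// Cross-thread signal: Qt::AutoConnection resolves to a queued connection here
connect(m_pCameraCaptureControl, &CameraCaptureThread::needUpdate,
        this, &Widget::onNeedUpdate);

if (!m_pCameraCaptureControl->openCamera("BisonCam, NB Pro"))
    qDebug() << "Failed to open the camera!";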

Here is the slot function I connected:

void Widget::onNeedUpdate(void)
{
    const CameraData& cameraData = m_pCameraCaptureControl->getCameraData();
    // Cast away const so the mutex-guarded getter can be called
    CameraData& tempCameraData = (CameraData&)cameraData;

    QByteArray byteArray = tempCameraData.getCameraData();
    int width = cameraData.getWidth();
    int height = cameraData.getHeight();

    // Set up the YUV plane pointers (these offsets match a planar 4:2:2 layout)
    uchar* pData[4] = {0};
    pData[0] = (uchar*)byteArray.constData();
    pData[1] = (uchar*)byteArray.constData() + width * height;
    pData[2] = (uchar*)byteArray.constData() + width * height + width / 2 * height;
    // Render the YUV data
    m_pImageViewer->setYUVData(pData, width, height);
}

Rendering YUV data with OpenGL will be covered in a later article.
