Using OpenCV to Load a Video as a Texture in Ogre

Augmented reality frequently combines real 2D video frames with virtual objects (such as 3D models) to produce the augmented-reality effect. Here this is implemented with Ogre together with OpenCV.

Before starting, be aware that texture dimensions must be powers of two, so if the loaded video frames do not meet this requirement they must first be resized.
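
For example, a frame can be rounded up to the next power-of-two size before it is used. A small sketch of that idea (the helper names here are mine, not from the original code; in the steps below the post simply resizes every frame into a fixed 1024x1024 image):

#include <opencv/cv.h>

// Round a dimension up to the next power of two (1024 stays 1024, 720 becomes 1024, ...)
static int NextPowerOfTwo(int v)
{
    int p = 1;
    while (p < v)
        p <<= 1;
    return p;
}

// Create a power-of-two sized copy of a frame by stretching it with cvResize
IplImage* CreatePow2Copy(IplImage* src)
{
    CvSize size = cvSize(NextPowerOfTwo(src->width), NextPowerOfTwo(src->height));
    IplImage* dst = cvCreateImage(size, src->depth, src->nChannels);
    cvResize(src, dst);
    return dst;
}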

 

1. Create the dynamic-texture material and the background rectangle, and add them to the scene

   Add the following code to CreateScene:

void CreateScene()
{
    MaterialPtr material = MaterialManager::getSingleton().create(
        "DynamicTextureMaterial", // name
        ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME);
    // The texture unit references the "DynamicBg" texture that will be created in step 2
    material->getTechnique(0)->getPass(0)->createTextureUnitState("DynamicBg");
    // The background pass should ignore the depth buffer and lighting
    material->getTechnique(0)->getPass(0)->setDepthCheckEnabled(false);
    material->getTechnique(0)->getPass(0)->setDepthWriteEnabled(false);
    material->getTechnique(0)->getPass(0)->setLightingEnabled(false);

    // Create a background rectangle covering the whole screen
    Rectangle2D* rect = new Rectangle2D(true);
    rect->setCorners(-1.0, 1.0, 1.0, -1.0);
    rect->setMaterial("DynamicTextureMaterial");

    // Render the background before everything else
    rect->setRenderQueueGroup(RENDER_QUEUE_BACKGROUND);

    // Hacky, but we need to set the bounding box to something big
    // so the background rectangle is never culled away
    rect->setBoundingBox(AxisAlignedBox(-100000.0*Vector3::UNIT_SCALE, 100000.0*Vector3::UNIT_SCALE));

    // Attach background to the scene
    SceneNode* node = mSceneMgr->getRootSceneNode()->createChildSceneNode("Background");
    node->attachObject(rect);

    // add other scene nodes here
}


 

2. Initialize the video capture and related data
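
The code in this and the next step uses a few member variables whose declarations are not shown in the post. Their types can be inferred from how they are used; an assumed declaration block:

// Assumed member declarations (inferred from the calls below; not part of the original post)
char*       m_szVideoPath;  // path of the video file, kept so the capture can be reopened
CvCapture*  m_pFileCamer;   // OpenCV file capture
IplImage*   m_pVideoImage;  // frame returned by cvQueryFrame (owned by the capture, do not release)
IplImage*   m_pDrawImage;   // 1024x1024 power-of-two image the frames are resized into
TexturePtr  m_pTex;         // the manually created "DynamicBg" texture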

void InitVideoQuery(char *szVideoPath)
{
    m_szVideoPath = szVideoPath;

    // Open the video file and grab the first frame
    m_pFileCamer = cvCreateFileCapture(szVideoPath);
    m_pVideoImage = cvQueryFrame(m_pFileCamer);

    // Power-of-two sized image that every video frame is resized into
    m_pDrawImage = cvCreateImage(cvSize(1024, 1024), 8, 3);

    // Create the dynamic texture referenced by "DynamicTextureMaterial".
    // Note: UpdateTextureImage() below packs pixels as 0x00RRGGBB words, so the
    // pixel format chosen here must match that layout (if the colours come out
    // swapped, PF_X8R8G8B8 is the format that matches this packing).
    m_pTex = TextureManager::getSingleton().createManual("DynamicBg", "General",
        TEX_TYPE_2D, m_pDrawImage->width, m_pDrawImage->height, 1, 0,
        PF_A8B8G8R8, TU_WRITE_ONLY);
}



 

3. Update the texture's hardware pixel buffer

void UpdateTextureImage()
{
    // Grab the next frame; when the video reaches its end cvQueryFrame returns 0,
    // so reopen the file and start over (loop the video)
    m_pVideoImage = cvQueryFrame(m_pFileCamer);
    if (m_pVideoImage == 0)
    {
        cvReleaseCapture(&m_pFileCamer);
        m_pFileCamer = cvCreateFileCapture(m_szVideoPath);
        m_pVideoImage = cvQueryFrame(m_pFileCamer);
    }

    // Optional debug preview of the raw frame in an OpenCV window
    cvShowImage("img", m_pVideoImage);

    // Stretch the frame to the power-of-two image that backs the texture
    cvResize(m_pVideoImage, m_pDrawImage);

    HardwarePixelBufferSharedPtr buffer = m_pTex->getBuffer(0, 0);
    buffer->lock(HardwareBuffer::HBL_DISCARD);
    const PixelBox &pb = buffer->getCurrentLock();
    uint32 *data = static_cast<uint32*>(pb.data);
    size_t pitch = pb.rowPitch; // number of pixels between the starts of consecutive rows
    for (int y = 0; y < m_pDrawImage->height; ++y)
    {
        unsigned char *pImgLine = (unsigned char *)(m_pDrawImage->imageData + y * m_pDrawImage->widthStep);
        for (int x = 0; x < m_pDrawImage->width; ++x)
        {
            // OpenCV stores pixels as B,G,R bytes; pack them into a 0x00RRGGBB word
            unsigned char B = pImgLine[3 * x];
            unsigned char G = pImgLine[3 * x + 1];
            unsigned char R = pImgLine[3 * x + 2];
            uint32 pixel = (R << 16) | (G << 8) | B;
            data[pitch * y + x] = pixel;
        }
    }

    buffer->unlock();
}
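
To tie things together, UpdateTextureImage has to be called once per frame. A minimal wiring sketch (the class name, video file name, and Setup method are hypothetical; it assumes the three functions above are members of the application class, which also acts as a FrameListener):

class VideoBackgroundApp : public FrameListener
{
public:
    bool frameStarted(const FrameEvent& evt)
    {
        UpdateTextureImage();              // push the next video frame into "DynamicBg"
        return true;                       // keep the render loop running
    }

    void Setup()
    {
        InitVideoQuery("media/video.avi"); // open the video and create the texture first
        CreateScene();                     // the material and background rectangle then use it
        mRoot->addFrameListener(this);     // refresh the texture every frame
    }

    // CreateScene(), InitVideoQuery(), UpdateTextureImage() and the member
    // variables they use go here, as shown above.

private:
    Root*         mRoot;
    SceneManager* mSceneMgr;
};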


 

 

It is worth noting that writing to the hardware buffer between lock and unlock is slow, so the amount of work done inside that region should be kept to a minimum. One option is to pack the pixel data into the target layout beforehand and then simply memcpy it into the buffer while it is locked.
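
A sketch of that approach (the 4-channel image m_pPackedImage and the function name are hypothetical; it assumes the packed byte layout matches the texture's pixel format):

void UpdateTextureImageFast()
{
    m_pVideoImage = cvQueryFrame(m_pFileCamer);
    if (m_pVideoImage == 0)
        return;                                      // or reopen the file as in step 3

    // All of the expensive work happens outside the lock:
    cvResize(m_pVideoImage, m_pDrawImage);
    // m_pPackedImage is a pre-allocated 4-channel image, e.g. cvCreateImage(cvSize(1024, 1024), 8, 4)
    cvCvtColor(m_pDrawImage, m_pPackedImage, CV_BGR2BGRA);

    HardwarePixelBufferSharedPtr buffer = m_pTex->getBuffer(0, 0);
    buffer->lock(HardwareBuffer::HBL_DISCARD);
    const PixelBox &pb = buffer->getCurrentLock();

    unsigned char *dst = static_cast<unsigned char*>(pb.data);
    size_t dstRowBytes = pb.rowPitch * 4;            // rowPitch is counted in pixels, 4 bytes each
    size_t srcRowBytes = m_pPackedImage->width * 4;
    for (int y = 0; y < m_pPackedImage->height; ++y)
    {
        // Only a straight row copy remains inside the lock
        memcpy(dst + y * dstRowBytes,
               m_pPackedImage->imageData + y * m_pPackedImage->widthStep,
               srcRowBytes);
    }
    buffer->unlock();
}

With the fixed 1024x1024 texture both the resize and the colour conversion run outside the lock, and only the memcpy touches the hardware buffer.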


