Player Plugin Implementation Series — DirectShow

DirectShow's documentation is fairly thorough. What we actually need to build here is a DirectShow source filter. The DirectShow SDK ships with sample code (under Extras\DirectShow\Samples\C++\DirectShow\); our project is a copy of Filters\PushSource with the modifications below.

The main changes are as follows:

1. Register our filter in setup.cpp

The original PushSource registers three filters, each with one pin. We change it to register a single filter with two pins, one for video and one for audio.

const AMOVIESETUP_MEDIATYPE sudOpPinTypes[] =
{
    {
        &MEDIATYPE_Video,       // Major type
        &MEDIASUBTYPE_NULL      // Minor type
    },
    {
        &MEDIATYPE_Audio,       // Major type
        &MEDIASUBTYPE_NULL      // Minor type
    }
};

const AMOVIESETUP_PIN sudMylibPin[] =
{
    {
        L"Output",          // Obsolete, not used.
        FALSE,              // Is this pin rendered?
        TRUE,               // Is it an output pin?
        FALSE,              // Can the filter create zero instances?
        TRUE,               // Does the filter create multiple instances?
        &CLSID_NULL,        // Obsolete.
        NULL,               // Obsolete.
        1,                  // Number of media types.
        &sudOpPinTypes[0]   // Pointer to media types.
    },
    {
        L"Output",          // Obsolete, not used.
        FALSE,              // Is this pin rendered?
        TRUE,               // Is it an output pin?
        FALSE,              // Can the filter create zero instances?
        TRUE,               // Does the filter create multiple instances?
        &CLSID_NULL,        // Obsolete.
        NULL,               // Obsolete.
        1,                  // Number of media types.
        &sudOpPinTypes[1]   // Pointer to media types.
    }
};

const AMOVIESETUP_FILTER sudMylibSource =
{
    &CLSID_MylibSource,     // Filter CLSID
    g_wszMylib,             // String name
    MERIT_DO_NOT_USE,       // Filter merit
    2,                      // Number pins
    sudMylibPin             // Pin details
};

// List of class IDs and creator functions for the class factory. This
// provides the link between the OLE entry point in the DLL and an object
// being created. The class factory will call the static CreateInstance.
CFactoryTemplate g_Templates[] =
{
    {
        g_wszMylib,                     // Name
        &CLSID_MylibSource,             // CLSID
        CMylibSource::CreateInstance,   // Method to create an instance
        NULL,                           // Initialization function
        &sudMylibSource                 // Set-up information (for filters)
    },
};

int g_cTemplates = sizeof(g_Templates) / sizeof(g_Templates[0]);

Note: change the original CFactoryTemplate g_Templates[3] to CFactoryTemplate g_Templates[]. I forgot this at first, and registering our filter with the system kept failing. Only after stepping through the registration did I find that the CFactoryTemplate being supplied was invalid, which finally exposed this bug.
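The pitfall can be reproduced in plain C++ (Entry below is a hypothetical stand-in for CFactoryTemplate): with the array bound left at 3 but only one initializer, the sizeof-based count still reports 3, so the two zero-filled entries reach COM registration as invalid templates.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical stand-in for CFactoryTemplate; only the counting matters here.
struct Entry { const wchar_t* name; const void* clsid; };

// Bound left at 3 after deleting two of the original entries: the compiler
// zero-fills the remaining two slots, and sizeof counts them anyway.
const Entry g_BadTemplates[3] = { { L"Mylib", nullptr } };

// Bound deduced from the initializer list: the count is always right.
const Entry g_GoodTemplates[] = { { L"Mylib", nullptr } };

const int g_cBad  = sizeof(g_BadTemplates)  / sizeof(g_BadTemplates[0]);
const int g_cGood = sizeof(g_GoodTemplates) / sizeof(g_GoodTemplates[0]);
```

With the explicit bound, g_cTemplates would report 3 and registration would be handed two templates whose name and CLSID pointers are null.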

2. Implement the source filter

Our source filter needs to open a URL, and the sample code has nothing on this. The documentation shows that the filter must expose an IFileSourceFilter interface; only then will the graph builder pass the URL through.

STDMETHODIMP CMylibSource::Load(LPCOLESTR pszFileName,
                                const AM_MEDIA_TYPE *pmt)
{
    Mylib_Open(pszFileName);
    return S_OK;
}

STDMETHODIMP CMylibSource::GetCurFile(LPOLESTR *ppszFileName, AM_MEDIA_TYPE *pmt)
{
    return E_FAIL;
}

STDMETHODIMP CMylibSource::NonDelegatingQueryInterface(REFIID riid,
                                                       void **ppv)
{
    /* Do we have this interface */
    if (riid == IID_IFileSourceFilter) {
        return GetInterface((IFileSourceFilter *) this, ppv);
    } else {
        return CSource::NonDelegatingQueryInterface(riid, ppv);
    }
}

IFileSourceFilter has two methods: Load and GetCurFile. Load simply calls our open function, and GetCurFile just returns an error. Then, in NonDelegatingQueryInterface, we expose the newly added interface when it is queried.

After adding IFileSourceFilter to the inheritance list, the build complains that some IUnknown methods are unimplemented; adding DECLARE_IUNKNOWN to the class definition in the header fixes this.

3. Implement the pin

Because our media SDK delivers audio and video through one unified interface, a single class is enough for the pin; at run time two instances are created, one for audio and one for video.

If seeking is not needed, implementing GetMediaType, DecideBufferSize, and FillBuffer is enough. Supporting seeking is more involved, so let us look at the non-seeking case first.

HRESULT CMylibPin::GetMediaType(CMediaType *pMediaType)
{
    CAutoLock cAutoLock(m_pFilter->pStateLock());
    CheckPointer(pMediaType, E_POINTER);
    return FillMediaType(pMediaType);
}

GetMediaType must fill in pMediaType, describing our media's encoding format in DirectShow terms. FillMediaType calls FillAVC1 or FillAAC1 depending on whether the current stream is video or audio. AVC and AAC are the encoded formats; here is how they are described to DirectShow:

static HRESULT FillAVC1(CMediaType *pMediaType, Mylib_StreamInfoEx const * m_info)
{
    MPEG2VIDEOINFO * pmvi =  // maybe sizeof(MPEG2VIDEOINFO) + m_info->format_size - 7 - 4
        (MPEG2VIDEOINFO *)pMediaType->AllocFormatBuffer(sizeof(MPEG2VIDEOINFO) + m_info->format_size - 11);
    if (pmvi == 0)
        return(E_OUTOFMEMORY);
    ZeroMemory(pmvi, pMediaType->cbFormat);

    pMediaType->SetSubtype(&MEDIASUBTYPE_AVC1);
    pMediaType->SetFormatType(&FORMAT_MPEG2Video);
    pMediaType->SetTemporalCompression(TRUE);
    pMediaType->SetVariableSize();

    VIDEOINFOHEADER2 * pvi = &pmvi->hdr;
    SetRectEmpty(&(pvi->rcSource));
    SetRectEmpty(&(pvi->rcTarget));
    pvi->AvgTimePerFrame = UNITS / m_info->video_format.frame_rate;

    BITMAPINFOHEADER * bmi = &pvi->bmiHeader;
    bmi->biSize = sizeof(BITMAPINFOHEADER);
    bmi->biWidth = m_info->video_format.width;
    bmi->biHeight = m_info->video_format.height;
    bmi->biBitCount = 0;
    bmi->biPlanes = 1;
    bmi->biCompression = 0x31435641; // 'AVC1'

    //pmvi->dwStartTimeCode = 0;
    pmvi->cbSequenceHeader = m_info->format_size - 7;
    BYTE * s = (BYTE *)pmvi->dwSequenceHeader;
    My_uchar const * p = m_info->format_buffer;
    //My_uchar const * e = p + stream_info.format_size;
    My_uchar Version = *p++;
    My_uchar Profile = *p++;
    My_uchar Profile_Compatibility = *p++;
    My_uchar Level = *p++;
    My_uchar Nalu_Length = 1 + ((*p++) & 3);

    // Copy the length-prefixed SPS entries.
    size_t n = (*p++) & 31;
    My_uchar const * q = p;
    for (size_t i = 0; i < n; ++i) {
        size_t l = (*p++);
        l = (l << 8) + (*p++);
        p += l;
    }
    memcpy(s, q, p - q);
    s += p - q;

    // Copy the length-prefixed PPS entries.
    n = (*p++) & 31;
    q = p;
    for (size_t i = 0; i < n; ++i) {
        size_t l = (*p++);
        l = (l << 8) + (*p++);
        p += l;
    }
    memcpy(s, q, p - q);
    s += p - q;

    pmvi->dwProfile = Profile;
    pmvi->dwLevel = Level;
    pmvi->dwFlags = Nalu_Length;
    return S_OK;
}
static HRESULT FillAAC1(CMediaType *pMediaType, Mylib_StreamInfoEx const * m_info)
{
    WAVEFORMATEX * wf = (WAVEFORMATEX *)pMediaType->AllocFormatBuffer(sizeof(WAVEFORMATEX) + m_info->format_size);
    if (wf == 0)
        return(E_OUTOFMEMORY);
    ZeroMemory(wf, pMediaType->cbFormat);

    pMediaType->SetSubtype(&MEDIASUBTYPE_RAW_AAC1);
    pMediaType->SetFormatType(&FORMAT_WaveFormatEx);
    pMediaType->SetTemporalCompression(TRUE);
    pMediaType->SetVariableSize();

    wf->cbSize = (WORD)m_info->format_size; // size of the extra format data appended below
    wf->nChannels = m_info->audio_format.channel_count;
    wf->nSamplesPerSec = m_info->audio_format.sample_rate;
    wf->wBitsPerSample = m_info->audio_format.sample_size;
    wf->wFormatTag = WAVE_FORMAT_RAW_AAC1;
    memcpy(wf + 1, m_info->format_buffer, m_info->format_size);
    return S_OK;
}
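The pointer walk in FillAVC1 is parsing the H.264 AVCDecoderConfigurationRecord ("avcC") that the codec hands us. As a sanity check, here is the same logic extracted into a self-contained, hypothetical helper (portable C++, no DirectShow types), exercised against a synthetic record:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical standalone version of the avcC walk in FillAVC1. The input is
// an AVCDecoderConfigurationRecord (the "format buffer"); the output is the
// length-prefixed SPS/PPS blob that goes into MPEG2VIDEOINFO::dwSequenceHeader.
struct AvcConfig {
    uint8_t profile;
    uint8_t level;
    uint8_t nalu_length_size;        // 1 + (lengthSizeMinusOne & 3)
    std::vector<uint8_t> seq_header; // [len_hi len_lo payload]* for SPS, then PPS
};

static AvcConfig ParseAvcC(const uint8_t* p, size_t size) {
    AvcConfig cfg{};
    const uint8_t* end = p + size;
    ++p;                                  // configurationVersion
    cfg.profile = *p++;                   // AVCProfileIndication
    ++p;                                  // profile_compatibility
    cfg.level = *p++;                     // AVCLevelIndication
    cfg.nalu_length_size = 1 + ((*p++) & 3);
    for (int pass = 0; pass < 2; ++pass) {   // pass 0: SPS sets, pass 1: PPS sets
        size_t n = (*p++) & 31;              // & 31 mirrors the article's code
        const uint8_t* q = p;
        for (size_t i = 0; i < n && p < end; ++i) {
            size_t l = (*p++);
            l = (l << 8) + (*p++);           // 16-bit big-endian NALU length
            p += l;
        }
        // Keep the two length bytes: AVC1 sequence headers stay length-prefixed.
        cfg.seq_header.insert(cfg.seq_header.end(), q, p);
    }
    return cfg;
}
```

The 5-byte fixed header plus the two one-byte set counts account for the "format_size - 7" the article uses for cbSequenceHeader.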

DecideBufferSize is simpler: we use a frame buffer size of 2 MB for video and 2 KB for audio, with 20 buffers in both cases.

HRESULT CMylibPin::DecideBufferSize(IMemAllocator *pAlloc, ALLOCATOR_PROPERTIES *pRequest)
{
    HRESULT hr;
    CAutoLock cAutoLock(m_pFilter->pStateLock());
    CheckPointer(pAlloc, E_POINTER);
    CheckPointer(pRequest, E_POINTER);

    // Ensure a minimum number of buffers
    if (pRequest->cBuffers == 0)
    {
        pRequest->cBuffers = 20;
    }
    if (m_info->type == mylib_video)
        pRequest->cbBuffer = 2 * 1024 * 1024;
    else
        pRequest->cbBuffer = 2 * 1024;

    ALLOCATOR_PROPERTIES Actual;
    hr = pAlloc->SetProperties(pRequest, &Actual);
    if (FAILED(hr))
    {
        return hr;
    }
    // Is this allocator unsuitable?
    if (Actual.cbBuffer < pRequest->cbBuffer)
    {
        return E_FAIL;
    }
    return S_OK;
}

FillBuffer is a bit trickier. Our media SDK outputs audio and video interleaved, but FillBuffer is called separately on each pin, so we add a sample cache: when we need a video frame and read an audio frame instead, we queue it until a video frame arrives; the next time an audio frame is needed, it can be served straight from the cache. Only the FillBuffer implementation is shown here; the cache code is omitted.

HRESULT CMylibPin::FillBuffer(IMediaSample *pSample)
{
    CheckPointer(pSample, E_POINTER);
    CAutoLock cAutoLockShared(&m_cSharedState);

    Mylib_Sample sample;
    sample.stream_index = m_nIndex;
    HRESULT ret = m_SampleCache->ReadSample(sample, &m_bCancel);
    if (ret == S_OK) {
        BYTE *pData;
        long cbData;
        pSample->GetPointer(&pData);
        cbData = pSample->GetSize();
        if (cbData > sample.buffer_length)
            cbData = sample.buffer_length;
        memcpy(pData, sample.buffer, cbData);
        pSample->SetActualDataLength(cbData);

        if (sample.start_time * UINTS_MICROSECOND < m_rtStart)
            m_rtStart = sample.start_time * UINTS_MICROSECOND;
        REFERENCE_TIME rtStart = sample.start_time * UINTS_MICROSECOND - m_rtStart;
        REFERENCE_TIME rtStop  = rtStart + m_nSampleDuration;
        pSample->SetTime(&rtStart, &rtStop);
        pSample->SetSyncPoint(sample.is_sync);
        pSample->SetDiscontinuity(m_bDiscontinuty);
        m_bDiscontinuty = FALSE;
    }
    return ret;
}

4. Seeking support

To support seeking, the pin (note: the pin, not the source filter) must inherit CSourceSeeking and implement these three virtual functions:

virtual HRESULT ChangeStart();
virtual HRESULT ChangeStop() { return S_OK; }
virtual HRESULT ChangeRate() { return E_FAIL; }

And don't forget to expose the new interface:

STDMETHODIMP CMylibPin::NonDelegatingQueryInterface(REFIID riid, void **ppv)
{
    if (riid == IID_IMediaSeeking && m_bSeekable)
    {
        return CSourceSeeking::NonDelegatingQueryInterface(riid, ppv);
    }
    return CSourceStream::NonDelegatingQueryInterface(riid, ppv);
}

ChangeRate changes the playback rate; we don't support it, so it just returns an error. ChangeStop changes the stop position, which is meaningless for normal viewing, so returning OK is fine. As DirectShow requires, after the position changes we must flush the buffered audio/video downstream (every filter must be notified), stop, and then restart from the new position:

HRESULT CMylibPin::ChangeStart()
{
    if (ThreadExists())
    {
        OutputDebugString(_T("UpdateFromSeek Cancel\r\n"));
        m_bCancel = TRUE;
        DeliverBeginFlush();
        // Shut down the thread and stop pushing data.
        Stop();
        m_SampleCache->Seek(m_rtStart);
        m_bDiscontinuty = TRUE;
        OutputDebugString(_T("UpdateFromSeek Resume\r\n"));
        m_bCancel = FALSE;
        DeliverEndFlush();
        // Restart the thread and start pushing data again.
        Pause();
    }
    return S_OK;
}
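The ordering here matters: begin the flush before stopping the thread, and end the flush before restarting it. A toy harness (plain C++, not DirectShow; all names illustrative) can record the required sequence:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy harness (not DirectShow) recording the call order ChangeStart must
// follow: flush downstream, stop the worker, reposition, end the flush,
// and only then resume.
struct SeekTrace {
    std::vector<std::string> calls;
    bool cancel = false;

    void DeliverBeginFlush() { calls.push_back("BeginFlush"); }
    void Stop()              { calls.push_back("Stop"); }
    void Seek()              { calls.push_back("Seek"); }
    void DeliverEndFlush()   { calls.push_back("EndFlush"); }
    void Pause()             { calls.push_back("Pause"); }

    void ChangeStart() {
        cancel = true;        // wake FillBuffer out of any blocking read
        DeliverBeginFlush();  // downstream filters discard queued samples
        Stop();               // shut down the streaming thread
        Seek();               // reposition the demuxer/cache
        cancel = false;
        DeliverEndFlush();    // downstream accepts samples again
        Pause();              // restart the streaming thread
    }
};
```

If the flush bracket does not enclose the stop/seek, downstream renderers can block on stale samples and the stop never completes.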

5. Build and test

With the code done, the build script already registers the filter with the system automatically. One thing remains: the system must know to open our vod:// protocol with this source filter. This is done through the registry; import the following:

[HKEY_CLASSES_ROOT\vod]
"SourceFilter"="{6A881765-07FA-404b-B9B8-6ED429385ECC}"

Now we can test with GraphEdit: open the URL vod://xxxxxxx. Of course, any other DirectShow-based player can play such a URL too. Try Windows Media Player — it plays it as well.
