A semaphore-lock usage problem from work

I recently ran into a crash in skymtc when sending dual streams: every time dual-stream encoding started, the very first frame would crash.
WinDbg analysis of the dump call stack:


Microsoft (R) Windows Debugger Version 10.0.10586.567 X86
Copyright (c) Microsoft Corporation. All rights reserved.


Loading Dump File [C:\Users\lijian\Desktop\skymtcdump\skymtcscrash.dmp]
User Mini Dump File: Only registers, stack and portions of memory are available

Symbol search path is: srv*
Executable search path is: 
Windows 10 Version 18363 MP (12 procs) Free x86 compatible
Product: WinNt, suite: SingleUserTS
Built by: 18362.1.amd64fre.19h1_release.190318-1202
Machine Name:
Debug session time: Fri Apr 24 17:19:07.000 2020 (UTC + 8:00)
System Uptime: not available
Process Uptime: 0 days 1:31:03.000
................................................................
................................................................
................................................................
....
This dump file has an exception of interest stored in it.
The stored exception information can be accessed via .ecxr.
(1b8c.5b74): Access violation - code c0000005 (first/second chance not available)
*** WARNING: Unable to verify timestamp for ntdll.dll
*** ERROR: Module load completed but symbols could not be loaded for ntdll.dll
eax=00000000 ebx=22ccd0d0 ecx=00000000 edx=00000000 esi=22ccd080 edi=22ccd090
eip=77772cec esp=0f94d7d8 ebp=0f94d7e4 iopl=0         nv up ei pl nz na pe nc
cs=0023  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00000206
ntdll+0x72cec:
77772cec c20800          ret     8



0:032> .ecxr
*** ERROR: Symbol file could not be found.  Defaulted to export symbols for IntelHwWrapper.dll - 
eax=00000000 ebx=00000780 ecx=00000000 edx=00000029 esi=38a22fa0 edi=353efc20
eip=7aff996c esp=0f94f2b4 ebp=0f94f2c0 iopl=0         nv up ei pl nz na po nc
cs=0023  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00010202
IntelHwWrapper!CInstance::InstanceDump+0x11c8c:
7aff996c 660f6f7660      movdqa  xmm6,xmmword ptr [esi+60h] ds:002b:38a23000=????????????????????????????????



0:032> lmvm IntelHwWrapper
Browse full module list
start    end        module name
7afe0000 7b020000   IntelHwWrapper   (export symbols)       IntelHwWrapper.dll
    Loaded symbol image file: IntelHwWrapper.dll
    Mapped memory image file: C:\Program Files (x86)\Kedacom\SkyMTC\IntelHwWrapper.dll
    Image path: C:\Program Files (x86)\Kedacom\SkyMTC\IntelHwWrapper.dll
    Image name: IntelHwWrapper.dll
    Browse all global symbols  functions  data
    Timestamp:        Thu Feb 21 11:07:03 2019 (5C6E15D7)
    CheckSum:         0004A0E2
    ImageSize:        00040000
    File version:     0.0.0.0
    Product version:  0.0.0.0
    File flags:       0 (Mask 0)
    File OS:          0 Unknown Base
    File type:        0.0 Unknown
    File date:        00000000.00000000
    Translations:     0000.04b0 0000.04e4 0409.04b0 0409.04e4
    
************* Symbol Path validation summary **************
Response                         Time (ms)     Location
OK                                             C:\Users\lijian\Desktop\skymtcdump
DBGHELP: Symbol Search Path: c:\users\lijian\desktop\skymtcdump
DBGHELP: Symbol Search Path: c:\users\lijian\desktop\skymtcdump



0:032> .reload
................................................................
................................................................
................................................................
....
DBGHELP: C:\Program Files (x86)\Windows Kits\10\Debuggers\ntdll.dll - file not found
DBGHELP: C:\Program Files (x86)\Windows Kits\10\Debuggers\ntdll.dll - file not found
DBGHELP: ntdll.dll not found in c:\users\lijian\desktop\skymtcdump
DBGHELP: ntdll.dll not found in c:\users\lijian\desktop\skymtcdump
DBGENG:  C:\Windows\System32\ntdll.dll image header does not match memory image header.
DBGENG:  C:\Windows\System32\ntdll.dll - Couldn't map image from disk.
DBGENG:  ntdll.dll - Partial symbol image load missing image info
DBGHELP: Module is not fully loaded into memory.
DBGHELP: Searching for symbols using debugger-provided data.
DBGHELP: c:\users\lijian\desktop\skymtcdump\wntdll.pdb - file not found
DBGHELP: c:\users\lijian\desktop\skymtcdump\dll\wntdll.pdb - file not found
DBGHELP: c:\users\lijian\desktop\skymtcdump\symbols\dll\wntdll.pdb - file not found
DBGHELP: wntdll.pdb - file not found
*** WARNING: Unable to verify timestamp for ntdll.dll
*** ERROR: Module load completed but symbols could not be loaded for ntdll.dll
DBGHELP: ntdll - no symbols loaded

************* Symbol Loading Error Summary **************
Module name            Error
ntdll                  PDB not found : c:\users\lijian\desktop\skymtcdump\symbols\dll\wntdll.pdb
				Unable to locate the .pdb file in this location



0:032> .ecxr
DBGHELP: C:\Program Files (x86)\Windows Kits\10\Debuggers\IntelHwWrapper.dll - file not found
DBGHELP: C:\Program Files (x86)\Windows Kits\10\Debuggers\IntelHwWrapper.dll - file not found
DBGHELP: IntelHwWrapper.dll not found in c:\users\lijian\desktop\skymtcdump
DBGHELP: IntelHwWrapper.dll not found in c:\users\lijian\desktop\skymtcdump
DBGENG:  C:\Program Files (x86)\Kedacom\SkyMTC\IntelHwWrapper.dll - Mapped image memory

DBGHELP: IntelHwWrapper - private symbols & lines 
        c:\users\lijian\desktop\skymtcdump\IntelHwWrapper.pdb
DBGHELP: C:\Program Files (x86)\Windows Kits\10\Debuggers\msvcr100.dll - file not found
DBGHELP: C:\Program Files (x86)\Windows Kits\10\Debuggers\msvcr100.dll - file not found
DBGHELP: msvcr100.dll not found in c:\users\lijian\desktop\skymtcdump
DBGHELP: msvcr100.dll not found in c:\users\lijian\desktop\skymtcdump
DBGENG:  C:\Program Files (x86)\Kedacom\SkyMTC\msvcr100.dll image header does not match memory image header.
DBGENG:  C:\Program Files (x86)\Kedacom\SkyMTC\msvcr100.dll - Couldn't map image from disk.
DBGHELP: C:\Program Files (x86)\Windows Kits\10\Debuggers\igd9dxva32.dll - file not found
DBGHELP: C:\Program Files (x86)\Windows Kits\10\Debuggers\igd9dxva32.dll - file not found
DBGHELP: igd9dxva32.dll not found in c:\users\lijian\desktop\skymtcdump
DBGHELP: igd9dxva32.dll not found in c:\users\lijian\desktop\skymtcdump
DBGENG:  C:\Windows\System32\DriverStore\FileRepository\ki127018.inf_amd64_0f67ff47e9e30716\igd9dxva32.dll - Couldn't map image from disk.
DBGHELP: C:\Program Files (x86)\Windows Kits\10\Debuggers\mediasdkvc10.dll - file not found
DBGHELP: C:\Program Files (x86)\Windows Kits\10\Debuggers\mediasdkvc10.dll - file not found
DBGHELP: mediasdkvc10.dll not found in c:\users\lijian\desktop\skymtcdump
DBGHELP: mediasdkvc10.dll not found in c:\users\lijian\desktop\skymtcdump
DBGENG:  C:\Program Files (x86)\Kedacom\SkyMTC\mediasdkvc10.dll - Mapped image memory
DBGHELP: C:\Program Files (x86)\Windows Kits\10\Debuggers\HwCodecWrapper.dll - file not found
DBGHELP: C:\Program Files (x86)\Windows Kits\10\Debuggers\HwCodecWrapper.dll - file not found
DBGHELP: HwCodecWrapper.dll not found in c:\users\lijian\desktop\skymtcdump
DBGHELP: HwCodecWrapper.dll not found in c:\users\lijian\desktop\skymtcdump
DBGENG:  C:\Program Files (x86)\Kedacom\SkyMTC\HwCodecWrapper.dll - Mapped image memory
eax=00000000 ebx=00000780 ecx=00000000 edx=00000029 esi=38a22fa0 edi=353efc20
eip=7aff996c esp=0f94f2b4 ebp=0f94f2c0 iopl=0         nv up ei pl nz na po nc
cs=0023  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00010202
IntelHwWrapper!_VEC_memcpy+0x50:
7aff996c 660f6f7660      movdqa  xmm6,xmmword ptr [esi+60h] ds:002b:38a23000=????????????????????????????????



0:032> kn
  *** Stack trace for last set context - .thread/.cxr resets it
 # ChildEBP RetAddr  
00 0f94f2c0 7afe6d1f IntelHwWrapper!_VEC_memcpy+0x50
01 0f94f2e0 7afe6eec IntelHwWrapper!CEncodingPipeline::LoadFrame+0x11f [k:\cbb\media\mediacontrol_sky\40-saturn\mediasdk\3rdparty\intelhwwrapper\source\pipeline_encode.cpp @ 1201]
02 0f94f34c 7afe1c04 IntelHwWrapper!CEncodingPipeline::EncodeFrame+0x1bc [k:\cbb\media\mediacontrol_sky\40-saturn\mediasdk\3rdparty\intelhwwrapper\source\pipeline_encode.cpp @ 1268]
03 0f94f378 609419d6 IntelHwWrapper!CIntelHwEncoderWrapper::HwEncodeFrame+0x94 [k:\cbb\media\mediacontrol_sky\40-saturn\mediasdk\3rdparty\intelhwwrapper\source\intelhwencoderwrapper.cpp @ 131]
WARNING: Stack unwind information not available. Following frames may be wrong.
04 0f94f390 7a872521 HwCodecWrapper!HWEnc_EncodeFrame+0x26
05 0f94f4a4 7a86fafd mediasdkvc10!MediaSDKInitial+0xd721
06 0f94f664 7a8517de mediasdkvc10!MediaSDKInitial+0xacfd
07 0f94f900 7a7d8d13 mediasdkvc10!SetRecoderDiskCheckPartitionAlarm+0x3004e
08 0f94f9fc 60eec556 mediasdkvc10!CInstance::operator=+0x5593
09 0f94fa34 60eec600 msvcr100+0x5c556
0a 0f94fa40 74f06359 msvcr100+0x5c600
0b 0f94fa50 77767c14 kernel32+0x16359
0c 0f94faac 77767be4 ntdll+0x67c14
0d 0f94fabc 00000000 ntdll+0x67be4

The WinDbg stack analysis shows that the crash happens inside CEncodingPipeline::LoadFrame in the Intel hardware-encoding library (IntelHwWrapper.dll). The crashing code:

mfxStatus CEncodingPipeline::LoadFrame(mfxFrameSurface1* pSurface, 
	const TVidRawData *ptVidoRAwData)
{
	// check if reader is initialized
	if (!pSurface)
	{
		return MFX_ERR_NULL_PTR;
	}

	mfxU32 nBytesRead;
	mfxU16 w, h, pitch;
	mfxU8 *ptr, *ptr2;
	mfxFrameInfo& pInfo = pSurface->Info;
	mfxFrameData& pData = pSurface->Data;

	mfxU32 vid = pInfo.FrameId.ViewId;

	// this reader supports only NV12 mfx surfaces for code transparency,
	// other formats may be added if application requires such functionality
	if (MFX_FOURCC_NV12 != pInfo.FourCC && MFX_FOURCC_YV12 != pInfo.FourCC)
	{
		return MFX_ERR_UNSUPPORTED;
	}

	if (pInfo.CropH > 0 && pInfo.CropW > 0)
	{
		w = pInfo.CropW;
		h = pInfo.CropH;
	}
	else
	{
		w = pInfo.Width;
		h = pInfo.Height;
	}

	pitch = pData.Pitch;
	nBytesRead = w * h;
	ptr = pData.Y + pInfo.CropX + pInfo.CropY * pData.Pitch;
	//y
	memcpy(ptr, ptVidoRAwData->pDataY, nBytesRead);
	switch(pInfo.FourCC)
	{
	case MFX_FOURCC_YV12:
		pitch /= 2;
		ptr  = pData.U + (pInfo.CropX / 2) + (pInfo.CropY / 2) * pitch;
		ptr2 = pData.V + (pInfo.CropX / 2) + (pInfo.CropY / 2) * pitch;
		nBytesRead = w * h >> 2;
		//u
		memcpy(ptr, ptVidoRAwData->pDataU, nBytesRead);
		//v
		memcpy(ptr2, ptVidoRAwData->pDataV, nBytesRead);
		break;
	case MFX_FOURCC_NV12:
		ptr  = pData.UV + pInfo.CropX + (pInfo.CropY / 2) * pitch;
		nBytesRead = w * h >> 1;
		memcpy(ptr, ptVidoRAwData->pDataU, nBytesRead); // Crash !!!
		break;
	default:
		return MFX_ERR_UNSUPPORTED;
	}

	return MFX_ERR_NONE;
}

The crash ultimately lands on this memcpy:

memcpy(ptr, ptVidoRAwData->pDataU, nBytesRead);

My guess is that during this memcpy either the source buffer or the destination buffer is accessed out of bounds. Whether it is a write overrun of the destination or a read overrun of the source still needs to be confirmed.
A quick aside:

"There are two kinds of out-of-bounds memory access. One is a read overrun, i.e. reading data that does not belong to you: if the address being read is invalid, the program crashes immediately; if the address is valid, the read itself succeeds, but because the data read back is essentially random, the consequences are unpredictable. The other is a write overrun, also called a buffer overflow. Its consequences are also unpredictable; for example, it may overwrite the function's return address, so that the function jumps to unknown memory on return and the program crashes."

An out-of-bounds memcpy here means the source and destination buffer sizes do not match. Both sizes are derived from the frame width and height (plus the pitch), because both buffers are allocated from those dimensions.
For the destination buffer, pInfo.CropX, pInfo.CropY, pInfo.CropW, pInfo.CropH, pInfo.Width and pInfo.Height all need to be checked.
For the source buffer, the width and height of the input I420 data (TVidRawData *ptVidoRAwData) need to be checked as well.
In the end, the check is whether the source and destination data have the same width and height; if they differ, the copy runs out of bounds.
My strongest suspicion is therefore that mismatched source and destination dimensions are what make this memcpy overrun and crash.
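To make that concrete, here is a minimal sketch (my own illustration with hypothetical numbers, not project code) of how the chroma copy size in LoadFrame is derived from the destination surface dimensions, and how it overruns the source buffer when the frame handed in was produced for a different resolution:

#include <cstddef>
#include <cstdint>
#include <cstdio>

int main()
{
	// Destination surface dimensions, fixed when the encoder was initialized
	// (hypothetical values; pitch assumed equal to width for simplicity).
	uint32_t dstW = 1920, dstH = 1080;        // pInfo.CropW / pInfo.CropH
	// Source frame dimensions, scaled from the *current* encode parameters,
	// which may have changed since initialization.
	uint32_t srcW = 1280, srcH = 720;

	// LoadFrame computes the NV12 chroma copy size from the destination surface:
	//     nBytesRead = w * h >> 1;
	size_t nBytesRead = (size_t)dstW * dstH / 2;

	// The I420 source buffer only holds srcW*srcH/2 bytes of chroma.
	size_t srcChroma = (size_t)srcW * srcH / 2;

	printf("copy %u bytes, source chroma has %u bytes -> overrun by %u bytes\n",
		(unsigned)nBytesRead, (unsigned)srcChroma, (unsigned)(nBytesRead - srcChroma));
	return 0;
}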

The destination buffer's width/height and crop information all come from mfxFrameSurface1* pSurface, and pSurface comes from the following function:

mfxStatus CEncodingPipeline::EncodeFrame(const TVidRawData *pTVidRawData)
{
	MSDK_CHECK_POINTER(m_pmfxENC, MFX_ERR_NOT_INITIALIZED);

	mfxStatus sts = MFX_ERR_NONE;

	mfxFrameSurface1* pSurf = NULL; // dispatching pointer

	sTask *pCurrentTask = NULL; // a pointer to the current task
	mfxU16 nEncSurfIdx = 0;     // index of free surface for encoder input (vpp output)
	mfxU16 nVppSurfIdx = 0;     // index of free surface for vpp input

	mfxSyncPoint VppSyncPoint = NULL; // a sync point associated with an asynchronous vpp call
	bool bVppMultipleOutput = false;  // this flag is true if VPP produces more frames at output
	// than consumes at input. E.g. framerate conversion 30 fps -> 60 fps


	// Since in sample we support just 2 views
	// we will change this value between 0 and 1 in case of MVC
	mfxU16 currViewNum = 0;

	sts = MFX_ERR_NONE;

	// main loop, preprocessing and encoding
	int nIndex = 0;
	
	{
		// get a pointer to a free task (bit stream and sync point for encoder)
		sts = GetFreeTask(&pCurrentTask);
		MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);

		// find free surface for encoder input
		nEncSurfIdx = GetFreeSurface(m_pEncSurfaces, m_EncResponse.NumFrameActual);
		MSDK_CHECK_ERROR(nEncSurfIdx, MSDK_INVALID_SURF_IDX, MFX_ERR_MEMORY_ALLOC);

		// point pSurf to encoder surface
		pSurf = &m_pEncSurfaces[nEncSurfIdx];
		if (!bVppMultipleOutput)
		{
			// if vpp is enabled find free surface for vpp input and point pSurf to vpp surface
			if (m_pmfxVPP)
			{
				nVppSurfIdx = GetFreeSurface(m_pVppSurfaces, m_VppResponse.NumFrameActual);
				MSDK_CHECK_ERROR(nVppSurfIdx, MSDK_INVALID_SURF_IDX, MFX_ERR_MEMORY_ALLOC);

				pSurf = &m_pVppSurfaces[nVppSurfIdx];
			}

			// load frame from file to surface data
			// if we share allocator with Media SDK we need to call Lock to access surface data and...
			if (m_bExternalAlloc)
			{
				// get YUV pointers
				sts = m_pMFXAllocator->Lock(m_pMFXAllocator->pthis, pSurf->Data.MemId, &(pSurf->Data));
				MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);
			}

			pSurf->Info.FrameId.ViewId = currViewNum;
			sts = LoadFrame(pSurf, pTVidRawData);  // load one frame of data  // !!! crash stack points here

			MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);

			// ... after we're done call Unlock
			if (m_bExternalAlloc)
			{
				sts = m_pMFXAllocator->Unlock(m_pMFXAllocator->pthis, pSurf->Data.MemId, &(pSurf->Data));
				MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);
			}
		}

		// perform preprocessing if required
		if (m_pmfxVPP)
		{
			bVppMultipleOutput = false; // reset the flag before a call to VPP
			for (;;)
			{
				sts = m_pmfxVPP->RunFrameVPPAsync(&m_pVppSurfaces[nVppSurfIdx], &m_pEncSurfaces[nEncSurfIdx],
					NULL, &VppSyncPoint);

				if (MFX_ERR_NONE < sts && !VppSyncPoint) // repeat the call if warning and no output
				{
					if (MFX_WRN_DEVICE_BUSY == sts)
						MSDK_SLEEP(1); // wait if device is busy
				}
				else if (MFX_ERR_NONE < sts && VppSyncPoint)
				{
					sts = MFX_ERR_NONE; // ignore warnings if output is available
					break;
				}
				else
					break; // not a warning
			}

			// process errors
			if (MFX_ERR_MORE_DATA == sts)
			{
				MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);
			}
			else if (MFX_ERR_MORE_SURFACE == sts)
			{
				bVppMultipleOutput = true;
			}
			else
			{
				MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);
			}
		}

		// save the id of preceding vpp task which will produce input data for the encode task
		if (VppSyncPoint)
		{
			pCurrentTask->DependentVppTasks.push_back(VppSyncPoint);
			VppSyncPoint = NULL;
		}

		mfxEncodeCtrl EncodeCtrl;
		memset(&EncodeCtrl, 0, sizeof(mfxEncodeCtrl)); 
		if (m_bKeyFlags)  
		{
			EncodeCtrl.FrameType = MFX_FRAMETYPE_I | MFX_FRAMETYPE_IDR | MFX_FRAMETYPE_REF; /* request a key frame */
			m_bKeyFlags = false;
		}

		//EncFrame
		{
			// at this point surface for encoder contains either a frame from file or a frame processed by vpp
			sts = m_pmfxENC->EncodeFrameAsync(&EncodeCtrl, &m_pEncSurfaces[nEncSurfIdx], &pCurrentTask->mfxBS, &pCurrentTask->EncSyncP); // encode one frame
			if (MFX_ERR_NONE < sts && !pCurrentTask->EncSyncP) // repeat the call if warning and no output
			{
				if (MFX_WRN_DEVICE_BUSY == sts)
					MSDK_SLEEP(1); // wait if device is busy
			}
			else if (MFX_ERR_NONE < sts && pCurrentTask->EncSyncP)
			{
				sts = MFX_ERR_NONE; // ignore warnings if output is available
				MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);
			}
			else if (MFX_ERR_NOT_ENOUGH_BUFFER == sts)
			{
				sts = AllocateSufficientBuffer(&pCurrentTask->mfxBS);
				MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);
			}
			else
			{
				// get next surface and new task for 2nd bitstream in ViewOutput mode
				MSDK_IGNORE_MFX_STS(sts, MFX_ERR_MORE_BITSTREAM);
				MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);
			}
		}
	}

	// means that the input file has ended, need to go to buffering loops
	MSDK_IGNORE_MFX_STS(sts, MFX_ERR_MORE_DATA);
	// exit in case of other errors
	MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);

	if (m_pmfxVPP)
	{
		// loop to get buffered frames from vpp
		while (MFX_ERR_NONE <= sts || MFX_ERR_MORE_DATA == sts || MFX_ERR_MORE_SURFACE == sts)
			// MFX_ERR_MORE_SURFACE can be returned only by RunFrameVPPAsync
			// MFX_ERR_MORE_DATA is accepted only from EncodeFrameAsync
		{
			// find free surface for encoder input (vpp output)
			nEncSurfIdx = GetFreeSurface(m_pEncSurfaces, m_EncResponse.NumFrameActual);
			MSDK_CHECK_ERROR(nEncSurfIdx, MSDK_INVALID_SURF_IDX, MFX_ERR_MEMORY_ALLOC);

			for (;;)
			{
				sts = m_pmfxVPP->RunFrameVPPAsync(NULL, &m_pEncSurfaces[nEncSurfIdx], NULL, &VppSyncPoint);

				if (MFX_ERR_NONE < sts && !VppSyncPoint) // repeat the call if warning and no output
				{
					if (MFX_WRN_DEVICE_BUSY == sts)
						MSDK_SLEEP(1); // wait if device is busy
				}
				else if (MFX_ERR_NONE < sts && VppSyncPoint)
				{
					sts = MFX_ERR_NONE; // ignore warnings if output is available
					break;
				}
				else
					break; // not a warning
			}

			if (MFX_ERR_MORE_SURFACE == sts)
			{
				continue;
			}
			else
			{
				MSDK_BREAK_ON_ERROR(sts);
			}

			// get a free task (bit stream and sync point for encoder)
			sts = GetFreeTask(&pCurrentTask);
			MSDK_BREAK_ON_ERROR(sts);

			// save the id of preceding vpp task which will produce input data for the encode task
			if (VppSyncPoint)
			{
				pCurrentTask->DependentVppTasks.push_back(VppSyncPoint);
				VppSyncPoint = NULL;
			}

			for (;;)
			{
				sts = m_pmfxENC->EncodeFrameAsync(NULL, &m_pEncSurfaces[nEncSurfIdx], &pCurrentTask->mfxBS, &pCurrentTask->EncSyncP);

				if (MFX_ERR_NONE < sts && !pCurrentTask->EncSyncP) // repeat the call if warning and no output
				{
					if (MFX_WRN_DEVICE_BUSY == sts)
						MSDK_SLEEP(1); // wait if device is busy
				}
				else if (MFX_ERR_NONE < sts && pCurrentTask->EncSyncP)
				{
					sts = MFX_ERR_NONE; // ignore warnings if output is available
					break;
				}
				else if (MFX_ERR_NOT_ENOUGH_BUFFER == sts)
				{
					sts = AllocateSufficientBuffer(&pCurrentTask->mfxBS);
					MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);
				}
				else
				{
					// get next surface and new task for 2nd bitstream in ViewOutput mode
					MSDK_IGNORE_MFX_STS(sts, MFX_ERR_MORE_BITSTREAM);
					break;
				}
			}
		}

		// MFX_ERR_MORE_DATA is the correct status to exit buffering loop with
		// indicates that there are no more buffered frames
		MSDK_IGNORE_MFX_STS(sts, MFX_ERR_MORE_DATA);
		// exit in case of other errors
		MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);
	}

	// MFX_ERR_MORE_DATA is the correct status to exit buffering loop with
	// indicates that there are no more buffered frames
	MSDK_IGNORE_MFX_STS(sts, MFX_ERR_MORE_DATA);
	// exit in case of other errors
	MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);


	// MFX_ERR_NOT_FOUND is the correct status to exit the loop with
	// EncodeFrameAsync and SyncOperation don't return this status
	MSDK_IGNORE_MFX_STS(sts, MFX_ERR_NOT_FOUND);
	// report any errors that occurred in asynchronous part
	MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);

	return sts;
}

Since I do not use the VPP path, the destination buffer comes from this assignment:

// point pSurf to encoder surface
pSurf = &m_pEncSurfaces[nEncSurfIdx];

So the destination buffer comes from the m_pEncSurfaces array, and that array is filled in by the following function:

mfxStatus CEncodingPipeline::AllocFrames()
{
    MSDK_CHECK_POINTER(m_pmfxENC, MFX_ERR_NOT_INITIALIZED);

    mfxStatus sts = MFX_ERR_NONE;
    mfxFrameAllocRequest EncRequest;
    mfxFrameAllocRequest VppRequest[2];

    mfxU16 nEncSurfNum = 0; // number of surfaces for encoder
    mfxU16 nVppSurfNum = 0; // number of surfaces for vpp

    MSDK_ZERO_MEMORY(EncRequest);
    MSDK_ZERO_MEMORY(VppRequest[0]);
    MSDK_ZERO_MEMORY(VppRequest[1]);

    // Calculate the number of surfaces for components.
    // QueryIOSurf functions tell how many surfaces are required to produce at least 1 output.
    // To achieve better performance we provide extra surfaces.
    // 1 extra surface at input allows to get 1 extra output.
    sts = m_pmfxENC->QueryIOSurf(&m_mfxEncParams, &EncRequest);
    MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);

	printf("QueryIOSurf [OUT]: EncRequest.NumFrameSuggested=%u ,m_mfxEncParams.AsyncDepth=%u\n", EncRequest.NumFrameSuggested, m_mfxEncParams.AsyncDepth);
	if (EncRequest.NumFrameSuggested < m_mfxEncParams.AsyncDepth)
	{
		EncRequest.NumFrameSuggested = m_mfxEncParams.AsyncDepth;
	}
	printf(">>>>> EncRequest.Type = 0x%p\n", EncRequest.Type);
	printf(">>>>> EncRequest.NumFrameSuggested = %u\n", EncRequest.NumFrameSuggested);
	printf(">>>>> EncRequest.NumFrameMin = %u, \n", EncRequest.NumFrameMin);
	printf(">>>>> EncRequest.AllocId = %u, \n", EncRequest.AllocId);
    // The number of surfaces shared by vpp output and encode input.
    nEncSurfNum = EncRequest.NumFrameSuggested;

    if (m_pmfxVPP)
    {
		printf(">>>>> use m_pmfxVPP\n");
        // VppRequest[0] for input frames request, VppRequest[1] for output frames request
        sts = m_pmfxVPP->QueryIOSurf(&m_mfxVppParams, VppRequest);
        MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);

        // The number of surfaces for vpp input - so that vpp can work at async depth = m_nAsyncDepth
        nVppSurfNum = VppRequest[0].NumFrameSuggested;
        // If surfaces are shared by 2 components, c1 and c2. NumSurf = c1_out + c2_in - AsyncDepth + 1
        nEncSurfNum += nVppSurfNum - m_mfxEncParams.AsyncDepth + 1;
    }

    // prepare allocation requests
    EncRequest.NumFrameSuggested = EncRequest.NumFrameMin = nEncSurfNum;
    MSDK_MEMCPY_VAR(EncRequest.Info, &(m_mfxEncParams.mfx.FrameInfo), sizeof(mfxFrameInfo));
    if (m_pmfxVPP)
    {
        EncRequest.Type |= MFX_MEMTYPE_FROM_VPPOUT; // surfaces are shared between vpp output and encode input
    }
	printf(">>>>> EncRequest.Type = 0x%p\n", EncRequest.Type);
	printf(">>>>> EncRequest.NumFrameSuggested = %u\n", EncRequest.NumFrameSuggested);
	printf(">>>>> EncRequest.NumFrameMin = %u, \n", EncRequest.NumFrameMin);
	printf(">>>>> EncRequest.AllocId = %u, \n", EncRequest.AllocId);
    // alloc frames for encoder
    sts = m_pMFXAllocator->Alloc(m_pMFXAllocator->pthis, &EncRequest, &m_EncResponse);
    MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);

    // alloc frames for vpp if vpp is enabled
    if (m_pmfxVPP)
    {
        VppRequest[0].NumFrameSuggested = VppRequest[0].NumFrameMin = nVppSurfNum;
        MSDK_MEMCPY_VAR(VppRequest[0].Info, &(m_mfxVppParams.mfx.FrameInfo), sizeof(mfxFrameInfo));

        sts = m_pMFXAllocator->Alloc(m_pMFXAllocator->pthis, &(VppRequest[0]), &m_VppResponse);
        MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);
    }

    // prepare mfxFrameSurface1 array for encoder
    m_pEncSurfaces = new mfxFrameSurface1 [m_EncResponse.NumFrameActual];
    MSDK_CHECK_POINTER(m_pEncSurfaces, MFX_ERR_MEMORY_ALLOC);

    for (int i = 0; i < m_EncResponse.NumFrameActual; i++)
    {
        memset(&(m_pEncSurfaces[i]), 0, sizeof(mfxFrameSurface1));
        MSDK_MEMCPY_VAR(m_pEncSurfaces[i].Info, &(m_mfxEncParams.mfx.FrameInfo), sizeof(mfxFrameInfo));

        if (m_bExternalAlloc)
        {
            m_pEncSurfaces[i].Data.MemId = m_EncResponse.mids[i];
        }
        else
        {
            // get YUV pointers
            sts = m_pMFXAllocator->Lock(m_pMFXAllocator->pthis, m_EncResponse.mids[i], &(m_pEncSurfaces[i].Data));
            MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);
        }
    }

    // prepare mfxFrameSurface1 array for vpp if vpp is enabled
    if (m_pmfxVPP)
    {
        m_pVppSurfaces = new mfxFrameSurface1 [m_VppResponse.NumFrameActual];
        MSDK_CHECK_POINTER(m_pVppSurfaces, MFX_ERR_MEMORY_ALLOC);

        for (int i = 0; i < m_VppResponse.NumFrameActual; i++)
        {
            MSDK_ZERO_MEMORY(m_pVppSurfaces[i]);
            MSDK_MEMCPY_VAR(m_pVppSurfaces[i].Info, &(m_mfxVppParams.mfx.FrameInfo), sizeof(mfxFrameInfo));

            if (m_bExternalAlloc)
            {
                m_pVppSurfaces[i].Data.MemId = m_VppResponse.mids[i];
            }
            else
            {
                sts = m_pMFXAllocator->Lock(m_pMFXAllocator->pthis, m_VppResponse.mids[i], &(m_pVppSurfaces[i].Data));
                MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);
            }
        }
    }

    return MFX_ERR_NONE;
}

The statement that fills m_pEncSurfaces[i].Info is:

MSDK_MEMCPY_VAR(m_pEncSurfaces[i].Info, &(m_mfxEncParams.mfx.FrameInfo), sizeof(mfxFrameInfo));

So the data in m_pEncSurfaces[i].Info comes from m_mfxEncParams.mfx.FrameInfo, which in turn is set up in:

mfxStatus CEncodingPipeline::InitMfxEncParams(sInputParams *pInParams)
{
    m_mfxEncParams.mfx.CodecId           = pInParams->CodecId; // codec type (AVC or HEVC)
    m_mfxEncParams.mfx.TargetUsage       = pInParams->nTargetUsage; // trade-off between quality and speed
    m_mfxEncParams.mfx.TargetKbps        = pInParams->nBitRate; // in Kbps
	m_mfxEncParams.mfx.GopPicSize		 = pInParams->nGopPicSize;
	m_mfxEncParams.mfx.NumSlice          = pInParams->nNumSlice; // single slice
	m_mfxEncParams.mfx.EncodedOrder      = 0; // binary flag, 0 signals encoder to take frames in display order.

	m_mfxEncParams.mfx.NumRefFrame       = 1; // ref frame num
	m_mfxEncParams.mfx.NumThread         = 0; //2; //??? Deprecated; Always set this parameter to zero.
	m_mfxEncParams.mfx.GopRefDist        = 1; // do not encode B-frames // If GopRefDist = 1, there are no B-frames used.
	m_mfxEncParams.mfx.GopOptFlag        = MFX_GOP_CLOSED; //Frames in this GOP do not use frames in previous GOP as reference.
	if (m_mfxEncParams.mfx.CodecId == MFX_CODEC_AVC)
	{
		m_mfxEncParams.mfx.IdrInterval   = 0; //For H.264, if IdrInterval=0, then every I-frame is an IDR-frame. 
	}
	else if (m_mfxEncParams.mfx.CodecId == MFX_CODEC_HEVC)
	{
		m_mfxEncParams.mfx.IdrInterval   = 1; //For HEVC, If IdrInterval=1, then every I-frame is an IDR-frame.
	}
	//Frames in this GOP do not use frames in previous GOP as reference.
	// frame info parameters
	m_mfxEncParams.mfx.FrameInfo.FourCC       = MFX_FOURCC_NV12; // FourCC code of the color format;
	m_mfxEncParams.mfx.FrameInfo.ChromaFormat = MFX_CHROMAFORMAT_YUV420; // Color sampling method;ChromaFormat is not defined if FourCC is zero.
	m_mfxEncParams.mfx.FrameInfo.PicStruct    = pInParams->nPicStruct; // Picture type (Frame or Field)

	m_mfxEncParams.mfx.RateControlMethod = pInParams->nRateControlMethod; // CBR or VBR
    if (m_mfxEncParams.mfx.RateControlMethod == MFX_RATECONTROL_CQP) // fixed QP
    {
        m_mfxEncParams.mfx.QPI = pInParams->nQPI;
        m_mfxEncParams.mfx.QPP = pInParams->nQPP;
        m_mfxEncParams.mfx.QPB = pInParams->nQPB;
    }
	/*Specify the frame rate by the formula: FrameRateExtN/FrameRateExtD.
	For encoding, frame rate must be specified. 
	For decoding, frame rate may be unspecified (FrameRateExtN and FrameRateExtD are all zeros.) 
	In this case, the frame rate is default to 30 frames per second.*/
    ConvertFrameRate(pInParams->dFrameRate, &m_mfxEncParams.mfx.FrameInfo.FrameRateExtN, &m_mfxEncParams.mfx.FrameInfo.FrameRateExtD);

    // specify memory type
    if (D3D9_MEMORY == pInParams->memType || D3D11_MEMORY == pInParams->memType)
    {
        m_mfxEncParams.IOPattern = MFX_IOPATTERN_IN_VIDEO_MEMORY;
    }
    else
    {
        m_mfxEncParams.IOPattern = MFX_IOPATTERN_IN_SYSTEM_MEMORY;
    }

    // set frame size and crops
    // width must be a multiple of 16
    // height must be a multiple of 16 in case of frame picture and a multiple of 32 in case of field picture
    m_mfxEncParams.mfx.FrameInfo.Width  = MSDK_ALIGN16(pInParams->nDstWidth);
    m_mfxEncParams.mfx.FrameInfo.Height = (MFX_PICSTRUCT_PROGRESSIVE == m_mfxEncParams.mfx.FrameInfo.PicStruct)?
        MSDK_ALIGN16(pInParams->nDstHeight) : MSDK_ALIGN32(pInParams->nDstHeight);

    m_mfxEncParams.mfx.FrameInfo.CropX = 0;
    m_mfxEncParams.mfx.FrameInfo.CropY = 0;
    m_mfxEncParams.mfx.FrameInfo.CropW = pInParams->nDstWidth; // frame_cropping_flag
    m_mfxEncParams.mfx.FrameInfo.CropH = pInParams->nDstHeight; // frame_cropping_flag

	if (MFX_CODEC_AVC == pInParams->CodecId)
	{
		if (pInParams->nCodecProfile == MFX_PROFILE_AVC_HIGH || \
			pInParams->nCodecProfile == MFX_PROFILE_AVC_BASELINE || \
			pInParams->nCodecProfile == MFX_PROFILE_AVC_MAIN)
		{
			m_mfxEncParams.mfx.CodecProfile = pInParams->nCodecProfile; 
		}
		else
		{
			m_mfxEncParams.mfx.CodecProfile = MFX_PROFILE_AVC_HIGH;
		}
	}
	else if (MFX_CODEC_HEVC == pInParams->CodecId)
	{
		m_mfxEncParams.mfx.CodecProfile = MFX_PROFILE_HEVC_MAIN;
	}

	bool bCodingOption = false;
	bool bCodingOption2 = false;
	bool bCodingOption3 = false;
	m_EncExtParams.clear();

	/*	NalHrdConformance
	If this option is turned ON, then AVC encoder produces HRD conformant bitstream. 
	If it is turned OFF, then AVC encoder may, but not necessary does, violate HRD conformance.
	I.e. this option can force encoder to produce HRD conformant stream, but cannot force it to produce unconformant stream.
	
		VuiVclHrdParameters
	If set and VBR rate control method is used then VCL HRD parameters are written in bitstream with identical to NAL HRD parameters content. 
	See the CodingOptionValue enumerator for values of this option.
	
		AUDelimiter
	Set this flag to insert the Access Unit Delimiter NAL. See the CodingOptionValue enumerator for values of this option.
	
		PicTimingSEI 
	(SEI: Supplemental Enhancement Information)
	Set this flag to insert the picture timing SEI with pic_struct syntax element. 
	See the CodingOptionValue enumerator for values of this option. The default value is ON.
	
		VuiNalHrdParameters
	Set this flag to insert NAL HRD parameters in the VUI header. See the CodingOptionValue enumerator for values of this option.*/

	//close SEI info
	if (MFX_CODEC_AVC == pInParams->CodecId || MFX_CODEC_HEVC == pInParams->CodecId)
	{
		if (pInParams->nCodecProfile == MFX_PROFILE_AVC_HIGH)
		{
			m_CodingOption.CAVLC = MFX_CODINGOPTION_OFF;
		}
		else if (pInParams->nCodecProfile == MFX_PROFILE_AVC_BASELINE)
		{
			m_CodingOption.CAVLC = MFX_CODINGOPTION_ON;
		}
		m_CodingOption.VuiVclHrdParameters = MFX_CODINGOPTION_OFF;
		m_CodingOption.NalHrdConformance  = MFX_CODINGOPTION_OFF;
		m_CodingOption.AUDelimiter        = MFX_CODINGOPTION_OFF;
		m_CodingOption.PicTimingSEI       = MFX_CODINGOPTION_OFF;
		m_CodingOption.VuiNalHrdParameters = MFX_CODINGOPTION_OFF;
		m_CodingOption.EndOfStream        = MFX_CODINGOPTION_OFF; //Deprecated.

		bCodingOption = true;
	}
	
	/*	DisableVUI
	This option completely disables VUI in output bitstream.
	Not all codecs and SDK implementations support this value. Use Query function to check if this feature is supported.
		
		AdaptiveB
	This flag controls changing of frame type from B to P. Turn ON this flag to allow such changing. 
	This option is ignored if GopOptFlag in mfxInfoMFX structure is equal to MFX_GOP_STRICT. 
	See the CodingOptionValue enumerator for values of this option. 
	This parameter is valid only during initialization.*/
	
	// configure the depth of the look ahead BRC if specified in command line
	if (pInParams->nLADepth || pInParams->nMaxSliceSize || 
		MFX_CODEC_AVC == pInParams->CodecId || MFX_CODEC_HEVC == pInParams->CodecId)
	{
		m_CodingOption2.LookAheadDepth = pInParams->nLADepth;
		m_CodingOption2.MaxSliceSize   = pInParams->nMaxSliceSize;
		m_CodingOption2.BRefType = MFX_B_REF_OFF;
		m_CodingOption2.DisableVUI = MFX_CODINGOPTION_ON; // in practice this disable flag has no effect (verified by test)
		m_CodingOption2.DisableDeblockingIdc = MFX_CODINGOPTION_ON; /* the system-side HEVC decode requires, when filtering is enabled, that the cross-boundary filtering flag be */
		//m_CodingOption2.AdaptiveB = MFX_CODINGOPTION_ON; // must stay removed here, otherwise AVC initialization fails

		bCodingOption2 = true;
	}
	
	//kvh4 encoding has no VUI, so the HW encoder drops VUI as well    ---add lijian 2019/1/25
	if (MFX_CODEC_AVC == pInParams->CodecId || MFX_CODEC_HEVC == pInParams->CodecId)
	{
		// disable the VUI parameters // extension buffer 3 must be attached, it is actually what disables VUI ---  lijian 2019/1/25
		m_CodingOption3.AspectRatioInfoPresent = MFX_CODINGOPTION_OFF;
		m_CodingOption3.OverscanInfoPresent = MFX_CODINGOPTION_OFF;
		m_CodingOption3.TimingInfoPresent = MFX_CODINGOPTION_OFF;
		m_CodingOption3.BitstreamRestriction = MFX_CODINGOPTION_OFF;
		//m_CodingOption3.GPB = MFX_CODINGOPTION_OFF; // (no effect, HEVC still produces B-frames rather than I-frames)

		bCodingOption3 = true; 
	}

	// attach the addresses of all extension parameter buffers
	if (bCodingOption)
	{
		m_EncExtParams.push_back((mfxExtBuffer *)&m_CodingOption);
	}
	if (bCodingOption2)
	{
		m_EncExtParams.push_back((mfxExtBuffer *)&m_CodingOption2);
	}
	if (bCodingOption3)
	{
		m_EncExtParams.push_back((mfxExtBuffer *)&m_CodingOption3);
	}
    if (!m_EncExtParams.empty())
    {
        m_mfxEncParams.ExtParam = &m_EncExtParams[0]; // vector is stored linearly in memory
        m_mfxEncParams.NumExtParam = (mfxU16)m_EncExtParams.size();
    }

    // JPEG encoder settings overlap with other encoders settings in mfxInfoMFX structure
    if (MFX_CODEC_JPEG == pInParams->CodecId)
    {
        m_mfxEncParams.mfx.Interleaved = 1;
        m_mfxEncParams.mfx.Quality = pInParams->nQuality;
        m_mfxEncParams.mfx.RestartInterval = 0;
        MSDK_ZERO_MEMORY(m_mfxEncParams.mfx.reserved5);
    }

    m_mfxEncParams.AsyncDepth = pInParams->nAsyncDepth;

	//mfxStatus sts = m_pmfxENC->Query(&m_mfxEncParams, &m_mfxEncParams);
	//MSDK_CHECK_RESULT(sts, MFX_ERR_NONE, sts);

    return MFX_ERR_NONE;
}

The statements that set m_mfxEncParams.mfx.FrameInfo are:

// set frame size and crops
    // width must be a multiple of 16
    // height must be a multiple of 16 in case of frame picture and a multiple of 32 in case of field picture
    m_mfxEncParams.mfx.FrameInfo.Width  = MSDK_ALIGN16(pInParams->nDstWidth);
    m_mfxEncParams.mfx.FrameInfo.Height = (MFX_PICSTRUCT_PROGRESSIVE == m_mfxEncParams.mfx.FrameInfo.PicStruct)?
        MSDK_ALIGN16(pInParams->nDstHeight) : MSDK_ALIGN32(pInParams->nDstHeight);

    m_mfxEncParams.mfx.FrameInfo.CropX = 0;
    m_mfxEncParams.mfx.FrameInfo.CropY = 0;
    m_mfxEncParams.mfx.FrameInfo.CropW = pInParams->nDstWidth; // frame_cropping_flag
    m_mfxEncParams.mfx.FrameInfo.CropH = pInParams->nDstHeight; // frame_cropping_flag

From this we can see that Width and Height are simply the user-specified encode resolution rounded up to a multiple of 16,
and the Crop fields do not actually crop anything (CropW/CropH equal the requested size, with CropX = CropY = 0).

So far, then, the destination buffer depends only on the width and height passed into the init function InitMfxEncParams() (in my case I already adapt the encode width and height to a multiple of 16 before calling it).
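For reference, a minimal sketch of that frame-size setup (assuming MSDK_ALIGN16 rounds up to the next multiple of 16, as in the Media SDK samples; the requested size below is hypothetical):

#include <cstdint>
#include <cstdio>

// Round up to a multiple of 16, like the MSDK_ALIGN16 macro used above.
static uint16_t Align16(uint16_t v) { return (uint16_t)((v + 15) & ~15); }

int main()
{
	uint16_t nDstWidth = 1366, nDstHeight = 768;   // hypothetical requested encode size

	uint16_t Width  = Align16(nDstWidth);          // 1376: allocation width of the surface
	uint16_t Height = Align16(nDstHeight);         // 768 : allocation height (progressive)
	uint16_t CropW  = nDstWidth;                   // crop rectangle keeps the requested size
	uint16_t CropH  = nDstHeight;                  // CropX = CropY = 0, nothing is cut off

	printf("Width x Height = %ux%u, CropW x CropH = %ux%u\n",
		(unsigned)Width, (unsigned)Height, (unsigned)CropW, (unsigned)CropH);
	return 0;
}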

As for the source buffer, its width and height are simply those of the input data passed to the encode function EncodeFrame().

So in the end what we have to compare is whether the width/height passed into InitMfxEncParams() and into EncodeFrame() are the same.

Looking at the code that calls both of them:

u16 CVidEncoder::ResizeAndEncode(TEncInputStruct &tIn420Data, TEncOutputStruct& OutPutData)
{
	u16 wRet = 0;
	if (tIn420Data.pInData == NULL || OutPutData.pOutStreamData == NULL)
	{
		m_pRef->m_tErrorDescription.dwErrorId = ERROR_CODEC_PARAM;
		MError("[Encoder:%d] Input param is error. tIn420Data.pInData=0x%p, OutPutData.pOutStreamData=0x%p\n", 
			m_dwIndex, tIn420Data.pInData, OutPutData.pOutStreamData);
		return -1;
	}
	if (tIn420Data.pInData->pDataY == NULL)
	{
		MError("[Encoder:%d] Input param is error. tIn420Data.pInData->pDataY=NULL\n", m_dwIndex);
		return -1;
	}
	if (tIn420Data.pInData->dwWidth == 0 || tIn420Data.pInData->dwHeight == 0 )
	{
		MError("[Encoder:%d] Input param is error. WxH=%dx%d\n", 
			m_dwIndex, tIn420Data.pInData->dwWidth, tIn420Data.pInData->dwHeight == 0);
		m_pRef->m_tErrorDescription.dwErrorId  = ERROR_ENC_CAP_WIDTH;
		return -1;
	}
	if (((u32)en_VideoI420) != tIn420Data.pInData->dwCompression)
	{
		MError("[Encoder:%d] Input format is not I420. tIn420Data.pInData->dwCompression=%d (0-I420)\n", 
			m_dwIndex, tIn420Data.pInData->dwCompression);
		m_pRef->m_tErrorDescription.dwErrorId  = ERROR_ENC_CAP_FORMATE;
		return -1;
	}
	m_cSaveEncIn.SaveToFile(tIn420Data.pInData->pDataY, (u32)(tIn420Data.pInData->dwHeight*tIn420Data.pInData->dwWidth * 1.5), FALSE);
	
	//m_cLockParam.Take();
	//// adjust capture resolution   ----- redundant operation, commented out  ----lijian 2018/11/29
	//SizeChanged(tIn420Data.pInData->dwWidth, tIn420Data.pInData->dwHeight);
	
	m_cEncParamLock.Take(); //===================== take the lock ========================
	// must initialize first and scale afterwards, because the init call may adapt/modify the encode parameters
	//1. encode parameters changed ---> rebuild the encoder (close the previous one, then recreate)
	if (m_bEncParamChanged == TRUE)
	{	
		// close the previous encoder
		CloseAllEncoder();
		// re-initialize the encoder
		BOOL32 bInitRet = FALSE;
#ifdef  _INTEL_HW_ENC_
		if ( TRUE == m_bUseHwEncoder && 
			(m_tEncParamFromUsr.m_byEncType == MEDIA_TYPE_H265||
			(m_tEncParamFromUsr.m_byEncType == MEDIA_TYPE_H264 && 
			(m_tEncParamFromUsr.m_wBitRate > 1080 ||m_tEncParamFromUsr.m_wBitRate == 1080))))
		{
			MPrintf("[Encoder:%d][pre] init hw encoder: %dx%d\n", m_dwIndex,
				m_tEncParamFromUsr.m_wEncVideoWidth, m_tEncParamFromUsr.m_wEncVideoHeight);

			bInitRet = InitHwEncoder(m_tEncParamFromUsr); 
			if (FALSE == bInitRet)
			{
				MError("[Encoder:%d][HW] InitHwEncoder() failed, so close HW encoder and open SW encoder: m_bUseHwEncoder = FALSE\n", m_dwIndex);
				m_bUseHwEncoder = FALSE;
			}
			MPrintf("[Encoder:%d][post] init hw encoder: %dx%d\n", m_dwIndex,
				m_tEncParamFromUsr.m_wEncVideoWidth, m_tEncParamFromUsr.m_wEncVideoHeight);
		}
#endif // _INTEL_HW_ENC_
		BOOL32 bIsRebuilt = FALSE;

		if (bInitRet == TRUE)
		{
			m_byEncoderType = em_Codec_HW;
		}
		else if (TRUE == KedaSetRatio(bIsRebuilt, TRUE))
		{
			m_byEncoderType = em_Codec_VideoUnit;
			bInitRet = TRUE;
		}
		else if (TRUE == InitSoftEncoder(m_tEncParamFromUsr))
		{	
			bInitRet = TRUE;
			m_byEncoderType = em_Codec_VFW;
		}
#ifdef USE_X264_ENC     // X264 is no longer used   ---add lijian 2019/4/15
        else if (m_tEncParamFromUsr.m_byEncType == MEDIA_TYPE_H264)
        {           
			bInitRet = InitX264Encoder(m_tEncParamFromUsr);
			if (bInitRet == TRUE)
			{
				m_byEncoderType = em_Codec_X264;
			}           		
        } 
#endif
		if (FALSE == bInitRet)
		{
			m_byEncoderType = em_Codec_None;
			MError("[Encoder:%d] All kind of encoder have create failed. \n", m_dwIndex);
			m_cEncParamLock.Give(); //===================== release the lock ========================
			return -2;
		}
		else
		{
			MPrintf("[Encoder:%d][%s] Encoder is initialized successfully: m_bEncParamChanged = FALSE \n", m_dwIndex, GetCodecModeName(m_byEncoderType));
			m_bEncParamChanged = FALSE;
		}
		// record the current HW-encode status // moved here so that HW support is checked only after encoding has started
		if (m_pStatus->m_kdsEncStatus.m_emHwStatus != en_Unsupported)
		{
			m_pStatus->m_kdsEncStatus.m_emHwStatus = (m_byEncoderType==em_Codec_HW?en_SupportedAndOpened:en_SupportedAndClosed);
		}
	}
	m_cEncParamLock.Give(); //===================== release the lock ========================

	MPrintf("[Encoder:%d][pre] pre-process(MergeZoom):  %dx%d(input) -> %dx%d(enc param)\n",m_dwIndex, tIn420Data.pInData->dwWidth, tIn420Data.pInData->dwHeight,
		m_tEncParamFromUsr.m_wEncVideoWidth, m_tEncParamFromUsr.m_wEncVideoHeight);

	//2. scale ---- from the capture resolution to the encode resolution
	u8* pZoomOutYuv = tIn420Data.pInData->pDataY;
	if (tIn420Data.pInData->dwWidth != m_tEncParamFromUsr.m_wEncVideoWidth ||
		tIn420Data.pInData->dwHeight != m_tEncParamFromUsr.m_wEncVideoHeight)
	{
		ReInitRawYuvData(&m_tZoomYuvData, (u32)(m_tEncParamFromUsr.m_wEncVideoWidth * m_tEncParamFromUsr.m_wEncVideoHeight * 1.5));
		BOOL32 bMergeZoomResult = m_cImgZoom.MergeZoom(
			tIn420Data.pInData->pDataY, tIn420Data.pInData->dwWidth, tIn420Data.pInData->dwHeight, tIn420Data.pInData->dwWidth, 
			m_tZoomYuvData.pDataY, m_tEncParamFromUsr.m_wEncVideoWidth, m_tEncParamFromUsr.m_wEncVideoHeight, m_tEncParamFromUsr.m_wEncVideoWidth, 
			m_emZoomStyleCapToEnc);
		pZoomOutYuv = m_tZoomYuvData.pDataY;
		if (bMergeZoomResult == FALSE)
		{
			MError("[Encoder:%d] CVidScalerImgunit::MergeZoom() failed. Input_WxH=%dx%d\n", 
				m_dwIndex, tIn420Data.pInData->dwWidth, tIn420Data.pInData->dwHeight);
			return -3;
		}
	}
	MPrintf("[Encoder:%d][post] pre-process(MergeZoom):  %dx%d(input) -> %dx%d(enc param)\n",m_dwIndex, tIn420Data.pInData->dwWidth, tIn420Data.pInData->dwHeight,
		m_tEncParamFromUsr.m_wEncVideoWidth, m_tEncParamFromUsr.m_wEncVideoHeight);

	DrawLogoI420(pZoomOutYuv, m_tEncParamFromUsr.m_wEncVideoWidth, m_tEncParamFromUsr.m_wEncVideoHeight, ADDLOGO_ENC);
	
	// when encoding 720x576 as MP4: frame -> field
	if (m_tEncParamFromUsr.m_wEncVideoWidth == 720 && m_tEncParamFromUsr.m_wEncVideoHeight == 576 && m_tEncParamFromUsr.m_byEncType == MEDIA_TYPE_MP4)
	{
		TScalerParam tScalerParam;
		memset(&tScalerParam, 0, sizeof(TScalerParam));
		tScalerParam.tSrcRect.dwTop = 0;
		tScalerParam.tSrcRect.dwLeft = 0;
		tScalerParam.tSrcRect.dwBottom = m_tEncParamFromUsr.m_wEncVideoHeight;
		tScalerParam.tSrcRect.dwRight = m_tEncParamFromUsr.m_wEncVideoWidth;
		tScalerParam.tDestRect.dwTop = 0;
		tScalerParam.tDestRect.dwLeft = 0;
		tScalerParam.tDestRect.dwBottom = m_tEncParamFromUsr.m_wEncVideoHeight;
		tScalerParam.tDestRect.dwRight = m_tEncParamFromUsr.m_wEncVideoWidth;
		tScalerParam.emImgType = en_VideoI420;
		tScalerParam.dwMode = EN_ZOOM_FILLBLACK;
		wRet = m_cImgFrmToField.SetParam(tScalerParam, enMc_FrmToField);
		if (wRet == VIDEO_SUCCESS)
		{
			ReInitRawYuvData(&m_tFrmToFieldYuvData, (u32)(m_tEncParamFromUsr.m_wEncVideoWidth * m_tEncParamFromUsr.m_wEncVideoHeight * 1.5));
			TScalerInOutParam tInputParam = {0};
			TScalerInOutParam tOutputParam = {0};
			tInputParam.pData[0] = pZoomOutYuv;
			tInputParam.pData[1] = tInputParam.pData[0] + m_tEncParamFromUsr.m_wEncVideoWidth * m_tEncParamFromUsr.m_wEncVideoHeight;
			tInputParam.pData[2] = tInputParam.pData[1] + m_tEncParamFromUsr.m_wEncVideoWidth * m_tEncParamFromUsr.m_wEncVideoHeight / 4 ;
			tInputParam.szPitch[0] = m_tEncParamFromUsr.m_wEncVideoWidth;
			tInputParam.szPitch[1] = m_tEncParamFromUsr.m_wEncVideoWidth >> 1;
			
			tOutputParam.pData[0] = m_tFrmToFieldYuvData.pDataY;
			tOutputParam.pData[1] = tOutputParam.pData[0] + m_tEncParamFromUsr.m_wEncVideoWidth * m_tEncParamFromUsr.m_wEncVideoHeight;
			tOutputParam.pData[2] = tOutputParam.pData[1] + m_tEncParamFromUsr.m_wEncVideoWidth * m_tEncParamFromUsr.m_wEncVideoHeight / 4 ;
			tOutputParam.szPitch[0] = m_tEncParamFromUsr.m_wEncVideoWidth;
			tOutputParam.szPitch[1] = m_tEncParamFromUsr.m_wEncVideoWidth >> 1;
			
			wRet = m_cImgFrmToField.Process(tInputParam, tOutputParam);
			if (0 != wRet)
			{
				MError("[Encoder:%d] CVidScalerImgunit::Process() failed!:wRet=%d\n", m_dwIndex, (s16)wRet);
			}
			pZoomOutYuv = m_tFrmToFieldYuvData.pDataY;
		}
		else
		{
			MError("[Encoder:%d] CVidScalerImgunit::SetParam() failed!:wRet=%d\n", m_dwIndex, (s16)wRet);
		}	
	}
	MPrintf("[Encoder:%d][1] ===========> %dx%d(enc param)\n",m_dwIndex, m_tEncParamFromUsr.m_wEncVideoWidth, m_tEncParamFromUsr.m_wEncVideoHeight);

	//3. encode, three encoding paths (VFW, X264, HW)
	if (m_byEncoderType == em_Codec_VideoUnit)
	{
		if (FALSE == KedaConvert(pZoomOutYuv, 0, OutPutData.pOutStreamData, OutPutData.dwStreamLen, (BOOL32&)(OutPutData.IsKeyFrame), tIn420Data.dwSetKeyFrame))
		{
			MError("[Encoder:%d][VU] KedaConvert() failed.\n", m_dwIndex);
			return -3;
		}
	}
	else if (m_byEncoderType == em_Codec_VFW)
	{
		wRet = VFWEncode(pZoomOutYuv, 0, OutPutData.pOutStreamData, OutPutData.dwStreamLen, tIn420Data.dwSetKeyFrame, (BOOL32&)(OutPutData.IsKeyFrame)); // input length is 0 because the interface does not need it
		if (wRet != VIDEO_SUCCESS)
		{
			MError("[Encoder:%d][VFW] VFWEncode() failed. nRet=%d\n", m_dwIndex, (s16)wRet);
			return -3;
		}
	}
#ifdef  _INTEL_HW_ENC_
	else if(m_byEncoderType == em_Codec_HW)
	{
		wRet = HwEncode(pZoomOutYuv, 0, OutPutData.pOutStreamData, OutPutData.dwStreamLen, tIn420Data.dwSetKeyFrame, (BOOL32&)OutPutData.IsKeyFrame); // input length is 0 because the interface does not need it
		if (wRet != VIDEO_SUCCESS)
		{
			m_cEncParamLock.Take();
			MError("[Encoder:%d][HW] HwEncode() failed, so close HW encoder and open SW encoder: m_bUseHwEncoder = FALSE, m_bEncParamChanged = TRUE\n", m_dwIndex);
			CloseHwEncoder();
			m_bUseHwEncoder = FALSE;
			m_bEncParamChanged = TRUE;
			m_cEncParamLock.Give();
			return -3;
		}
	}
#endif
#ifdef USE_X264_ENC   // X264 is no longer used   ---add lijian 2019/4/15
	else if (m_byEncoderType == em_Codec_X264)
	{
		wRet = X264Encode(pZoomOutYuv, 0, OutPutData.pOutStreamData, OutPutData.dwStreamLen, tIn420Data.dwSetKeyFrame, (BOOL32&)(OutPutData.IsKeyFrame)); // input length is 0 because the interface does not need it
		if (wRet != VIDEO_SUCCESS)
		{
			MError("[Encoder:%d][X264] X264Encode() failed. nRet=%d\n", m_dwIndex, (s16)wRet);
			return -3;
		}
	}
#endif
	else
	{
		MError("[Encoder:%d] m_byEncoderType=%d (em_Codec_None:%d)\n", m_dwIndex, m_byEncoderType, em_Codec_None);
		return -4;
	}
	
	if (wRet == VIDEO_SUCCESS && OutPutData.IsKeyFrame == TRUE)
	{
		MPrintf("=======>[Encoder:%d][%s] This is Key frame! Current key frame interval=%d.IdrSize=%d\n",
			m_dwIndex, GetCodecModeName(m_byEncoderType), m_nLastKeyFrameInterval, OutPutData.dwStreamLen);
		if (m_dwSaveCount != 0)
		{
			m_cSaveEncOut.StartSave(m_dwSaveCount);
			m_dwSaveCount = 0;
		}
	}	
	//4. save the output bitstream to file and compute bitrate/framerate statistics
	if (OutPutData.dwStreamLen > 0)
	{
		m_cSaveEncOut.SaveToFile(OutPutData.pOutStreamData, OutPutData.dwStreamLen, TRUE);
		CalcStatis(OutPutData.dwStreamLen);
	}

	return wRet;
}

InitMfxEncParams() and EncodeFrame() are wrapped inside InitHwEncoder and HwEncode respectively, and both ultimately take m_tEncParamFromUsr.m_wEncVideoWidth and m_tEncParamFromUsr.m_wEncVideoHeight as their input width and height. So what we really need to check is whether the width and height in m_tEncParamFromUsr change between those two calls.

m_tEncParamFromUsr is updated in the encode-parameter setter called on the main thread, but it is consumed by InitHwEncoder and HwEncode on the encoding thread. Both threads do take the semaphore lock m_cEncParamLock to synchronize the data, but on the encoding thread the lock only covers InitHwEncoder; HwEncode runs after the lock has already been released.
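The window is easy to reproduce in isolation. A minimal, self-contained sketch of the pattern (std::mutex and a plain struct standing in for m_cEncParamLock and TVideoEncParam, not the real classes): the lock is taken only around the "init" step, so a concurrent parameter change lands between init and the later use.

#include <chrono>
#include <cstdio>
#include <mutex>
#include <thread>

struct EncParam { int w; int h; };

std::mutex g_lock;                        // stands in for m_cEncParamLock
EncParam   g_paramFromUsr = {1920, 1080}; // stands in for m_tEncParamFromUsr

int main()
{
	std::thread encoder([] {                             // encoding thread (ResizeAndEncode)
		int initW, initH;
		{
			std::lock_guard<std::mutex> lk(g_lock);      // the lock covers only "InitHwEncoder"
			initW = g_paramFromUsr.w;
			initH = g_paramFromUsr.h;
		}
		std::this_thread::sleep_for(std::chrono::milliseconds(10));  // race window
		std::lock_guard<std::mutex> lk(g_lock);
		printf("init used %dx%d, but the later \"HwEncode\" sees %dx%d\n",
			initW, initH, g_paramFromUsr.w, g_paramFromUsr.h);
	});
	std::thread setter([] {                              // main thread (SetVideoEncParam)
		std::this_thread::sleep_for(std::chrono::milliseconds(1));
		std::lock_guard<std::mutex> lk(g_lock);
		g_paramFromUsr = EncParam{1280, 720};            // the user changes the encode size
	});
	encoder.join();
	setter.join();
	return 0;
}

With this interleaving, the dimensions used at init time and the dimensions used afterwards disagree, which is exactly the width/height mismatch that LoadFrame turns into an out-of-bounds memcpy.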

Below is the main-thread encode-parameter setter SetVideoEncParam():

u16 CVidEncoder::SetVideoEncParam( TVideoEncParam &tVideoEncParam )
{
	//CAutoLockCom cAutoLock(m_cLockParam);

	if (tVideoEncParam.m_wEncVideoWidth == 0 || tVideoEncParam.m_wEncVideoHeight == 0)
	{
		MError("[Encoder:%d] Input param is error. EncSize: %ux%u\n", m_dwIndex, tVideoEncParam.m_wEncVideoWidth, tVideoEncParam.m_wEncVideoHeight);
		return -1;
	}
	
	//currently only mpeg4, mpeg2, h261, h263 and h264 are supported
	if( (tVideoEncParam.m_byEncType != MEDIA_TYPE_MP4)  &&
		(tVideoEncParam.m_byEncType != MEDIA_TYPE_H262)  &&
		(tVideoEncParam.m_byEncType != MEDIA_TYPE_H261) &&
		(tVideoEncParam.m_byEncType != MEDIA_TYPE_H263) &&
		(tVideoEncParam.m_byEncType != MEDIA_TYPE_H263PLUS) &&
		(tVideoEncParam.m_byEncType != MEDIA_TYPE_H264) &&
		(tVideoEncParam.m_byEncType != MEDIA_TYPE_H265))
	{
		m_pRef->m_tErrorDescription.dwErrorId = ERROR_CODEC_PARAM;
		MError("[Encoder:%d] Input MediaType is error. tVideoEncParam.m_byEncType=%u\n", m_dwIndex, tVideoEncParam.m_byEncType);
		return -2;
	}

	if(0 == tVideoEncParam.m_wBitRate)
	{
		m_pRef->m_tErrorDescription.dwErrorId = ERROR_CODEC_PARAM;
		MError("[Encoder:%d] Input BitRate is error. tVideoEncParam.m_wBitRate=0\n", m_dwIndex);
		return -3;
	}
	// check whether the incoming encode parameters are identical to the current ones
	if (memcmp(&tVideoEncParam, &m_tEncParamFromUsr, sizeof(TVideoEncParam)) == 0)
	{
		return 0;
	}

	m_cEncParamLock.Take(); //===================== take the lock ========================
	// the encode parameters differ from the previous ones
	MPrintf("[Encoder:%d] The encoding parameters that the user just set are changed compared to the previous one: m_bEncParamChanged = TRUE \n", m_dwIndex);
	m_bEncParamChanged = TRUE;
	//if (tVideoEncParam.m_wEncVideoWidth != m_tEncParamFromUsr.m_wEncVideoWidth ||
	//	tVideoEncParam.m_wEncVideoHeight != m_tEncParamFromUsr.m_wEncVideoHeight)
	//{
	//	m_bEncParamSizeChanged = TRUE;
	//}
	
	m_tEncParamFromUsr = tVideoEncParam; // update the encode parameters


	// for every format, width and height must be at least a multiple of 2
	//if (m_tEncParamFromUsr.m_byEncType == MEDIA_TYPE_H264)
	{
		m_tEncParamFromUsr.m_wEncVideoWidth = ((m_tEncParamFromUsr.m_wEncVideoWidth + 1)>>1<<1);
		m_tEncParamFromUsr.m_wEncVideoHeight= ((m_tEncParamFromUsr.m_wEncVideoHeight + 1)>>1<<1);
	}
	// for H265, width and height must be multiples of 64, otherwise the encoder internally adapts them to multiples of 64
	if (m_tEncParamFromUsr.m_byEncType == MEDIA_TYPE_H265)
	{
		m_tEncParamFromUsr.m_wEncVideoWidth = ((m_tEncParamFromUsr.m_wEncVideoWidth + 63)>>6<<6);
		m_tEncParamFromUsr.m_wEncVideoHeight= ((m_tEncParamFromUsr.m_wEncVideoHeight + 63)>>6<<6);
	}	
	// for MP2/MP4, width and height must be multiples of 16
	if (m_tEncParamFromUsr.m_byEncType == MEDIA_TYPE_H262 || m_tEncParamFromUsr.m_byEncType == MEDIA_TYPE_MP4)
	{
		m_tEncParamFromUsr.m_wEncVideoWidth = ((m_tEncParamFromUsr.m_wEncVideoWidth + 15)>>4<<4);
		m_tEncParamFromUsr.m_wEncVideoHeight= ((m_tEncParamFromUsr.m_wEncVideoHeight + 15)>>4<<4);
	}
	// H261 only supports CIF and QCIF
	if (m_tEncParamFromUsr.m_byEncType == MEDIA_TYPE_H261)
	{
		if ( (m_tEncParamFromUsr.m_wEncVideoWidth == 176 && m_tEncParamFromUsr.m_wEncVideoHeight == 144) ||
			 (m_tEncParamFromUsr.m_wEncVideoWidth == 352 && m_tEncParamFromUsr.m_wEncVideoHeight == 288))
		{
			//H261 only supports CIF and QCIF
		}
		else
		{
			if (m_tEncParamFromUsr.m_wEncVideoWidth <= 176)
			{
				m_tEncParamFromUsr.m_wEncVideoWidth = 176;
				m_tEncParamFromUsr.m_wEncVideoHeight = 144;
			}
			else
			{
				m_tEncParamFromUsr.m_wEncVideoWidth = 352;
				m_tEncParamFromUsr.m_wEncVideoHeight = 288;
			}
		}
	}
	//H263/H263+ width and height must be one of the following five resolutions
	if (m_tEncParamFromUsr.m_byEncType == MEDIA_TYPE_H263 || tVideoEncParam.m_byEncType == MEDIA_TYPE_H263PLUS)
	{
		if ( (m_tEncParamFromUsr.m_wEncVideoWidth == 352 && m_tEncParamFromUsr.m_wEncVideoHeight == 288) ||
			 (m_tEncParamFromUsr.m_wEncVideoWidth == 640 && m_tEncParamFromUsr.m_wEncVideoHeight == 480) ||
			 (m_tEncParamFromUsr.m_wEncVideoWidth == 800 && m_tEncParamFromUsr.m_wEncVideoHeight == 600) ||
			 (m_tEncParamFromUsr.m_wEncVideoWidth == 1024 && m_tEncParamFromUsr.m_wEncVideoHeight == 768) ||
			 (m_tEncParamFromUsr.m_wEncVideoWidth == 1280 && m_tEncParamFromUsr.m_wEncVideoHeight == 1024) )
		{
			//H263 is best adapted to one of these five resolutions; medianet restricts what H263 resolutions can be sent
		}
		else
		{
			if (m_tEncParamFromUsr.m_wEncVideoWidth <= 352)
			{
				m_tEncParamFromUsr.m_wEncVideoWidth = 352;
				m_tEncParamFromUsr.m_wEncVideoHeight = 288;
			}
			else if (m_tEncParamFromUsr.m_wEncVideoWidth <= 640)
			{
				m_tEncParamFromUsr.m_wEncVideoWidth = 640;
				m_tEncParamFromUsr.m_wEncVideoHeight = 480;
			}
			else if (m_tEncParamFromUsr.m_wEncVideoWidth <= 800)
			{
				m_tEncParamFromUsr.m_wEncVideoWidth = 800;
				m_tEncParamFromUsr.m_wEncVideoHeight = 600;
			}
			else if (m_tEncParamFromUsr.m_wEncVideoWidth <= 1024)
			{
				m_tEncParamFromUsr.m_wEncVideoWidth = 1024;
				m_tEncParamFromUsr.m_wEncVideoHeight = 768;
			}
			else
			{
				m_tEncParamFromUsr.m_wEncVideoWidth = 1280;
				m_tEncParamFromUsr.m_wEncVideoHeight = 1024;
			}
		}
	}

	if (m_pStatus)
	{
		m_pStatus->m_kdsEncStatus.m_tVideoEncParam = m_tEncParamFromUsr; // record the encode parameters
	}

	m_cEncParamLock.Give(); //===================== release the lock ========================
	return 0;
}

My hypothesis: within a single pass of the encoding thread, the parameters m_tEncParamFromUsr seen by InitHwEncoder and by HwEncode were different, because right after InitHwEncoder returned, the main thread's SetVideoEncParam changed them, and that is what ultimately caused the crash. To verify it, I printed the width and height in m_tEncParamFromUsr before and after the InitHwEncoder and HwEncode calls and ran the repro test.

And that is exactly what happened: right after the encoding thread finished InitHwEncoder, the main thread called SetVideoEncParam, so by the time the encoding thread went on to call HwEncode, the width and height in m_tEncParamFromUsr had been changed by SetVideoEncParam and no longer matched the values used by InitHwEncoder, which led to the out-of-bounds memory access and the crash.

To sum up, we confirmed that the source and destination buffers of the memcpy in LoadFrame really do have different widths and heights, which causes the out-of-bounds crash; the mismatch comes from incomplete multi-thread data synchronization, specifically of the encode width/height.

There are two possible fixes:

  1. On the encoding thread, protect InitHwEncoder and HwEncode together with the semaphore lock m_cEncParamLock.

  2. On the encoding thread, copy the encode parameters m_tEncParamFromUsr used by InitHwEncoder into a temporary variable, and replace every later use of m_tEncParamFromUsr with that temporary (see the sketch below).
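A minimal sketch of option 2 (hypothetical stand-in names except where they mirror the members shown above; an illustration of the idea, not the actual patch): snapshot the shared parameters once under the lock and use only the snapshot for the rest of the pass.

#include <mutex>

struct TVideoEncParamLite { unsigned short w, h; };     // stand-in for TVideoEncParam

class CVidEncoderSketch
{
public:
	int EncodeOnePass(const unsigned char* pYuv)
	{
		TVideoEncParamLite tSnap;                       // local copy for this pass
		{
			std::lock_guard<std::mutex> lk(m_lock);     // stands in for m_cEncParamLock.Take()/Give()
			tSnap = m_tParamFromUsr;                    // snapshot while holding the lock
			if (m_bParamChanged)
			{
				InitHwEncoder(tSnap);                   // init from the snapshot
				m_bParamChanged = false;
			}
		}
		// Scaling and encoding below use tSnap.w / tSnap.h, never the shared member,
		// so a concurrent SetVideoEncParam() can no longer change the size mid-pass.
		return HwEncode(pYuv, tSnap);
	}

private:
	bool InitHwEncoder(const TVideoEncParamLite&) { return true; }
	int  HwEncode(const unsigned char*, const TVideoEncParamLite&) { return 0; }

	std::mutex         m_lock;
	TVideoEncParamLite m_tParamFromUsr = {1920, 1080};
	bool               m_bParamChanged = true;
};

int main()
{
	CVidEncoderSketch enc;
	unsigned char dummy[1] = {0};
	return enc.EncodeOnePass(dummy);
}

Option 1 is simpler but holds the lock for the whole encode of a frame, which also blocks SetVideoEncParam for that long; option 2 keeps the critical section short while still guaranteeing that init and encode see the same width and height.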
