Image Stride (in-memory image row stride)

Reposted from: http://www.cnblogs.com/gamedes/p/4541765.html

If you are using the MSDN Library for Visual Studio 2008 SP1, you should be able to find the original article at the following address:

ms-help://MS.MSDNQTR.v90.chs/medfound/html/13cd1106-48b3-4522-ac09-8efbaab5c31d.htm

Reposting does not require citing the source, but the article must be kept intact.

====================================================================

When a video image is stored in memory, the memory buffer might contain extra padding bytes after each row of pixels. The padding bytes affect how the image is stored in memory, but do not affect how the image is displayed.

The stride is the number of bytes from one row of pixels in memory to the next row of pixels in memory. Stride is also called pitch. If padding bytes are present, the stride is wider than the width of the image, as shown in the following illustration.

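For example, RGB bitmaps in system memory (GDI DIBs) conventionally pad each row out to a multiple of 4 bytes. The following is a minimal sketch of that calculation; the 4-byte alignment is the DIB convention only, so for other formats or allocators the stride should be queried rather than computed:

#include <windows.h>

// Sketch: row stride of a DIB whose rows are padded to a 4-byte boundary.
// The DWORD alignment is a DIB convention; other buffers may pad differently.
LONG CalculateDibStride(DWORD dwWidthInPixels, WORD wBitsPerPixel)
{
    // Round the packed row width up to the next multiple of 4 bytes.
    return (LONG)((dwWidthInPixels * wBitsPerPixel + 31) / 32 * 4);
}

For a 101-pixel-wide 24-bit image, the packed row is 303 bytes but the computed stride is 304 bytes; the extra byte at the end of each row is padding.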

Two buffers that contain video frames with equal dimensions can have two different strides. If you process a video image, you must take the stride into account.

In addition, there are two ways that an image can be arranged in memory. In a top-down image, the top row of pixels in the image appears first in memory. In a bottom-up image, the last row of pixels appears first in memory. The following illustration shows the difference between a top-down image and a bottom-up image.

A bottom-up image has a negative stride, because stride is defined as the number of bytes needed to move down a row of pixels, relative to the displayed image. YUV images should always be top-down, and any image that is contained in a Direct3D surface must be top-down. RGB images in system memory are usually bottom-up.

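As an illustration, when a bottom-up buffer is described by a pointer to its first byte and a positive row pitch, the scan-line-0 pointer (the top row as displayed) and the negative stride expected by a routine such as the ProcessVideoImage example later in this article can be derived as in the sketch below; the helper name is hypothetical, not part of any API:

#include <windows.h>

// Sketch: derive the scan-line-0 pointer and stride for a bottom-up image.
// pBuffer points to the first byte of the buffer; lPitch is the positive
// distance in bytes between consecutive rows as they are stored.
void GetBottomUpScanLine0(BYTE* pBuffer, LONG lPitch, DWORD dwHeightInPixels,
                          BYTE** ppScanLine0, LONG* plStride)
{
    // The displayed top row is the last row in memory.
    *ppScanLine0 = pBuffer + (SIZE_T)lPitch * (dwHeightInPixels - 1);
    // Moving down one displayed row means moving backward in memory.
    *plStride = -lPitch;
}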

Video transforms in particular need to handle buffers with mismatched strides, because the input buffer might not match the output buffer. For example, suppose that you want to convert a source image and write the result to a destination image. Assume that both images have the same width and height, but might not have the same pixel format or the same image stride.

The following example code shows a generalized approach for writing this kind of function. This is not a complete working example, because it abstracts many of the specific details.

void ProcessVideoImage(
    BYTE*       pDestScanLine0,    // First scan line of the destination image.
    LONG        lDestStride,       // Stride of the destination image, in bytes.
    const BYTE* pSrcScanLine0,     // First scan line of the source image.
    LONG        lSrcStride,        // Stride of the source image, in bytes.
    DWORD       dwWidthInPixels,   // Image width, in pixels.
    DWORD       dwHeightInPixels   // Image height, in pixels.
    )
{
    for (DWORD y = 0; y < dwHeightInPixels; y++)
    {
        // Interpret the current source and destination rows as arrays of pixels.
        const SOURCE_PIXEL_TYPE *pSrcPixel = (const SOURCE_PIXEL_TYPE*)pSrcScanLine0;
        DEST_PIXEL_TYPE *pDestPixel = (DEST_PIXEL_TYPE*)pDestScanLine0;

        for (DWORD x = 0; x < dwWidthInPixels; x++)
        {
            pDestPixel[x] = TransformPixelValue(pSrcPixel[x]);
        }

        // Advance each pointer by its own stride to reach the next row.
        pDestScanLine0 += lDestStride;
        pSrcScanLine0 += lSrcStride;
    }
}

This function takes six parameters:

  • A pointer to the start of scan line 0 in the destination image.

  • The stride of the destination image.

  • A pointer to the start of scan line 0 in the source image.

  • The stride of the source image.

  • The width of the image in pixels.

  • The height of the image in pixels.

The general idea is to process one row at a time, iterating over each pixel in the row. Assume that SOURCE_PIXEL_TYPE and DEST_PIXEL_TYPE are structures representing the pixel layout for the source and destination images, respectively. (For example, 32-bit RGB uses the RGBQUAD structure. Not every pixel format has a pre-defined structure.) Casting the array pointer to the structure type enables you to access the RGB or YUV components of each pixel. At the start of each row, the function stores a pointer to the row. At the end of the row, it increments the pointer by the width of the image stride, which advances the pointer to the next row.

This example calls a hypothetical function named TransformPixelValue for each pixel. This could be any function that calculates a target pixel from a source pixel. Of course, the exact details will depend on the particular task. For example, if you have a planar YUV format, you must access the chroma planes independently from the luma plane; with interlaced video, you might need to process the fields separately; and so forth.
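
For example, with the planar NV12 format the interleaved Cb/Cr samples form a second plane that follows the luma plane and has half its height, so the chroma rows must be located and stepped through separately. The sketch below neutralizes the chroma (value 128), turning the frame grayscale, simply to show how the second plane is addressed; it assumes a contiguous NV12 buffer whose chroma plane begins immediately after dwHeightInPixels rows of luma, which is common but not guaranteed, so the real plane offset should come from the media type or the buffer object:

#include <windows.h>
#include <string.h>

// Sketch: address the chroma plane of a contiguous NV12 buffer and
// overwrite it with the neutral value 128 (the result is a grayscale frame).
// Assumes the Cb/Cr plane starts right after dwHeightInPixels rows of luma
// and shares the luma stride, which holds for typical NV12 buffers.
void NeutralizeChromaNV12(BYTE* pBuffer, LONG lStride,
                          DWORD dwWidthInPixels, DWORD dwHeightInPixels)
{
    BYTE* pChromaRow = pBuffer + (SIZE_T)lStride * dwHeightInPixels;

    for (DWORD y = 0; y < dwHeightInPixels / 2; y++)
    {
        // Each chroma row holds dwWidthInPixels / 2 Cb/Cr pairs = dwWidthInPixels bytes.
        memset(pChromaRow, 128, dwWidthInPixels);
        pChromaRow += lStride;   // only the first dwWidthInPixels bytes of each row are image data
    }
}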

To give a more concrete example, the following code converts a 32-bit RGB image into an AYUV image. The RGB pixels are accessed using an RGBQUAD structure, and the AYUV pixels are accessed using a DXVA2_AYUVSample8 structure.
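
The listing that followed is not reproduced in this repost. A rough sketch of such a conversion is given below; it assumes a top-down destination, uses the RGBQUAD and DXVA2_AYUVSample8 structures mentioned above, and folds the RGB-to-YCbCr math into the widely used BT.601 integer approximations rather than calling separate helper functions:

#include <windows.h>
#include <dxva2api.h>

// Sketch: convert a 32-bit RGB (RGB32) image to AYUV, one row at a time,
// honoring the stride of each buffer. The BT.601 integer approximations
// keep the 8-bit results in range, so no clipping is needed here.
void ConvertRGB32ToAYUV(
    BYTE*       pDestScanLine0,   // First scan line of the AYUV destination.
    LONG        lDestStride,      // Stride of the destination image, in bytes.
    const BYTE* pSrcScanLine0,    // First scan line of the RGB32 source.
    LONG        lSrcStride,       // Stride of the source image, in bytes.
    DWORD       dwWidthInPixels,
    DWORD       dwHeightInPixels
    )
{
    for (DWORD y = 0; y < dwHeightInPixels; y++)
    {
        const RGBQUAD     *pSrcPixel  = (const RGBQUAD*)pSrcScanLine0;
        DXVA2_AYUVSample8 *pDestPixel = (DXVA2_AYUVSample8*)pDestScanLine0;

        for (DWORD x = 0; x < dwWidthInPixels; x++)
        {
            int r = pSrcPixel[x].rgbRed;
            int g = pSrcPixel[x].rgbGreen;
            int b = pSrcPixel[x].rgbBlue;

            pDestPixel[x].Y     = (BYTE)((( 66 * r + 129 * g +  25 * b + 128) >> 8) +  16);
            pDestPixel[x].Cb    = (BYTE)(((-38 * r -  74 * g + 112 * b + 128) >> 8) + 128);
            pDestPixel[x].Cr    = (BYTE)(((112 * r -  94 * g -  18 * b + 128) >> 8) + 128);
            pDestPixel[x].Alpha = 0xFF;
        }

        // Advance both pointers by their own strides to reach the next row.
        pDestScanLine0 += lDestStride;
        pSrcScanLine0  += lSrcStride;
    }
}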

