Adding a custom filter to FFmpeg

http://blog.chinaunix.net/xmlrpc.php?r=blog/article&uid=26000296&id=3068068

 

Preface
FFmpeg's excellence comes from its powerful features and well-designed framework, and the filter system is a prime example. The built-in filters can crop video, overlay a logo, and be combined into filter chains. Better still, FFmpeg makes it easy to add filters of your own, and that extensibility is quite valuable in practice.

Enough preamble; down to business!
The first part of this article is my translation of, and commentary on, a tutorial from the multimedia wiki; that tutorial does not cover how to add a finished filter to the FFmpeg build and run it.
The second part describes how I actually added a filter to FFmpeg, compiled it, and ran it (version ffmpeg-0.8.5).
The last part is the source code of the example filter (version ffmpeg-0.8.5).


Chapter 1:
FFmpeg filter HOWTO

(This chapter is taken from http://wiki.multimedia.cx/index.php?title=FFmpeg_filter_howto)

This page is meant as an introduction of writing filters for libavfilter. This is a work in progress, but should at least point you in the right direction for writing simple filters.

Contents
1. Definition of a filter
      1.1 AVFilter
      1.2 AVFilterPad
2. Picture buffers
      2.1  Reference counting
      2.2 Permissions
3. Filter Links
4. Writing a simple filter
      4.1 Default filter entry points
      4.2 The vf_negate filter
Definition of a filter

AVFilter

All filters are described by an AVFilter structure. This structure gives information needed to initialize the filter, and information on the entry points into the filter code. This structure is declared in libavfilter/avfilter.h:


    typedef struct
    {
        char *name; ///< filter name

        int priv_size; ///< size of private data to allocate for the filter

        int (*init)(AVFilterContext *ctx, const char *args, void *opaque);
        void (*uninit)(AVFilterContext *ctx);

        int (*query_formats)(AVFilterContext *ctx);

        const AVFilterPad *inputs; ///< NULL terminated list of inputs. NULL if none
        const AVFilterPad *outputs; ///< NULL terminated list of outputs. NULL if none
    } AVFilter;

The query_formats function sets the in_formats member of connected output links, and the out_formats member of connected input links, described below under AVFilterLink.

(That is, query_formats() declares which pixel formats, such as YUV420P or YUV422P, the filter accepts and produces.)

AVFilterPad

Let's take a quick look at the AVFilterPad structure, which is used to describe the inputs and outputs of the filter. This is also defined in libavfilter/avfilter.h:


    typedef struct AVFilterPad
    {
        char *name;
        int type;

        int min_perms;
        int rej_perms;

        void (*start_frame)(AVFilterLink *link, AVFilterPicRef *picref);
        AVFilterPicRef *(*get_video_buffer)(AVFilterLink *link, int perms);
        void (*end_frame)(AVFilterLink *link);
        void (*draw_slice)(AVFilterLink *link, int y, int height);

        int (*request_frame)(AVFilterLink *link);

        int (*config_props)(AVFilterLink *link);
    } AVFilterPad;

The actual definition in the header file has doxygen comments describing each entry point, its purpose, and what type of pads it is relevant for. These fields are relevant for all pads:

name: Name of the pad. No two inputs should have the same name, and no two outputs should have the same name.
type: Only AV_PAD_VIDEO currently.
config_props: Handles configuration of the link connected to the pad.

Fields only relevant to input pads are:

min_perms: Minimum permissions required on a picture received as input.
rej_perms: Permissions not accepted on pictures received as input.
start_frame: Called when a frame is about to be given as input.
draw_slice: Called when a slice of frame data has been given as input. (This is typically where the filter's main processing happens.)
end_frame: Called when the input frame has been completely sent.
get_video_buffer: Called by the previous filter to request memory for a picture.

Fields only relevant to output pads are:

request_frame: Requests that the filter output a frame.

Picture buffers

Reference counting

All pictures in the filter system are reference counted. This means that there is a picture buffer with memory allocated for the image data, and various filters can own a reference to the buffer. When a reference is no longer needed, its owner frees the reference. When the last reference to a picture buffer is freed, the filter system automatically frees the picture buffer.


Permissions

The upshot of multiple filters having references to a single picture is that they will all want some level of access to the image data. It should be obvious that if one filter expects to be able to read the image data without it changing that no other filter should write to the image data. The permissions system handles this.


In most cases, when a filter prepares to output a frame, it will request a buffer from the filter to which it will be outputting. It specifies the minimum permissions it needs to the buffer, though it may be given a buffer with more permissions than the minimum it requested.


When it wants to pass this buffer to another filter as output, it creates a new reference to the picture, possibly with a reduced set of permissions. This new reference will be owned by the filter receiving it.


So, for example, for a filter which drops frames if they are similar to the last frame it output, it would want to keep its own reference to a picture after outputting it, and make sure that no other filter modified the buffer either. It would do this by requesting the permissions AV_PERM_READ|AV_PERM_WRITE|AV_PERM_PRESERVE for itself, and removing the AV_PERM_WRITE permission from any references it gave to other filters.

In other words, the frame-dropping filter asks for read, write, and preserve access to the buffer for its own reference, and strips AV_PERM_WRITE from every reference it hands to downstream filters, so no other filter can modify the picture while it still holds it.

The available permissions are:

AV_PERM_READ: Can read the image data.
AV_PERM_WRITE: Can write to the image data.
AV_PERM_PRESERVE: Can assume that the image data will not be modified by other filters. This means that no other filter should have the AV_PERM_WRITE permission.
AV_PERM_REUSE: The filter may output the same buffer multiple times, but the image data may not be changed between the different outputs.
AV_PERM_REUSE2: The filter may output the same buffer multiple times, and may modify the image data between outputs.
Filter Links

A filter's inputs and outputs are connected to those of another filter through the AVFilterLink structure:

    typedef struct AVFilterLink
    {
        AVFilterContext *src; ///< source filter
        unsigned int srcpad; ///< index of the output pad on the source filter

        AVFilterContext *dst; ///< dest filter
        unsigned int dstpad; ///< index of the input pad on the dest filter

        int w; ///< agreed upon image width
        int h; ///< agreed upon image height
        enum PixelFormat format; ///< agreed upon image colorspace

        AVFilterFormats *in_formats; ///< formats supported by source filter
        AVFilterFormats *out_formats; ///< formats supported by destination filter

        AVFilterPicRef *srcpic;

        AVFilterPicRef *cur_pic;
        AVFilterPicRef *outpic;
    } AVFilterLink;

The src and dst members indicate the filters at the source and destination ends of the link, respectively. The srcpad indicates the index of the output pad on the source filter to which the link is connected. Likewise, the dstpad indicates the index of the input pad on the destination filter.

The in_formats member points to a list of formats supported by the source filter, while the out_formats member points to a list of formats supported by the destination filter. The AVFilterFormats structure used to store the lists is reference counted, and in fact tracks its references (see the comments for the AVFilterFormats structure in libavfilter/avfilter.h for more information on how the colorspace negotiation works and why this is necessary). The upshot is that if a filter provides pointers to the same list on multiple input/output links, those links will be forced to use the same format as each other.

When two filters are connected, they need to agree upon the dimensions of the image data they'll be working with, and the format that data is in. Once this has been agreed upon, these parameters are stored in the link structure.

The srcpic member is used internally by the filter system, and should not be accessed directly.

The cur_pic member is for the use of the destination filter. When a frame is currently being sent over the link (ie. starting from the call to start_frame() and ending with the call to end_frame()), this contains the reference to the frame which is owned by the destination filter.

The outpic member is described in the following tutorial on writing a simple filter.

Writing a simple filter

Default filter entry points

Because the majority of filters that will be written take exactly one input, produce exactly one output, and output one frame for every frame received as input, the filter system provides a number of default entry points to ease the development of such filters.

request_frame(): Requests a frame from the previous filter in the chain.
query_formats(): Sets the list of supported formats on all input pads such that all links must use the same format, from a default list of formats containing most YUV and RGB/BGR formats.
start_frame(): Requests a buffer to store the output frame in. A reference to this buffer is stored in the outpic member of the link hooked to the filter's output. The next filter's start_frame() callback is called and given a reference to this buffer.
end_frame(): Calls the next filter's end_frame() callback. Frees the reference in the outpic member of the output link, if it was set (ie. if the default start_frame() is used). Frees the cur_pic reference in the input link.
get_video_buffer(): Returns a buffer with the AV_PERM_READ permission in addition to all the requested permissions.
config_props() on an output pad: Sets the image dimensions for the output link to the same as on the filter's input.
The vf_negate filter

Having looked at the data structures and callback functions involved, let's take a look at an actual filter. The vf_negate filter inverts the colors in a video. It has one input, and one output, and outputs exactly one frame for every input frame. In this way, it's fairly typical, and can take advantage of many of the default callback implementations offered by the filter system.

First, let's take a look at the AVFilter structure at the bottom of the libavfilter/vf_negate.c file:

    AVFilter avfilter_vf_negate =
    {
        .name = "negate",

        .priv_size = sizeof(NegContext),

        .query_formats = query_formats,

        .inputs = (AVFilterPad[]) {{ .name = "default",
                                     .type = AV_PAD_VIDEO,
                                     .draw_slice = draw_slice,
                                     .config_props = config_props,
                                     .min_perms = AV_PERM_READ, },
                                   { .name = NULL }},
        .outputs = (AVFilterPad[]) {{ .name = "default",
                                      .type = AV_PAD_VIDEO, },
                                    { .name = NULL }},
    };

Here, you can see that the filter is named "negate," and it needs sizeof(NegContext) bytes of data to store its context. In the list of inputs and outputs, a pad whose name is set to NULL indicates the end of the list, so this filter has exactly one input and one output. If you look closely at the pad definitions, you will see that fairly few callback functions are actually specified. Because of the simplicity of the filter, the defaults can do most of the work for us.

Let us take a look at the callback functions it does define.

query_formats()
    static int query_formats(AVFilterContext *ctx)
    {
        avfilter_set_common_formats(ctx,
            avfilter_make_format_list(10,
                    PIX_FMT_YUV444P, PIX_FMT_YUV422P, PIX_FMT_YUV420P,
                    PIX_FMT_YUV411P, PIX_FMT_YUV410P,
                    PIX_FMT_YUVJ444P, PIX_FMT_YUVJ422P, PIX_FMT_YUVJ420P,
                    PIX_FMT_YUV440P, PIX_FMT_YUVJ440P));
        return 0;
    }

This calls avfilter_make_format_list(). This function takes as its first parameter the number of formats which will follow as the remaining parameters. The return value is an AVFilterFormats structure containing the given formats. The avfilter_set_common_formats() function which this structure is passed to sets all connected links to use this same list of formats, which causes all the filters to use the same format after negotiation is complete. As you can see, this filter supports a number of planar YUV colorspaces, including JPEG YUV colorspaces (the ones with a 'J' in the names).

config_props() on an input pad

The config_props() on an input pad is responsible for verifying that the properties of the input pad are supported by the filter, and to make any updates to the filter's context which are necessary for the link's properties.

TODO: quick explanation of YUV colorspaces, chroma subsampling, difference in range of YUV and JPEG YUV.

Let's take a look at the way in which this filter stores its context: 

    typedef struct
    {
        int offY, offUV;
        int hsub, vsub;
    } NegContext;

That's right. The priv_size member of the AVFilter structure tells the filter system how many bytes to reserve for this structure. The hsub and vsub members are used for chroma subsampling, and the offY and offUV members are used for handling the difference in range between YUV and JPEG YUV. Let's see how these are set in the input pad's config_props: 

    static int config_props(AVFilterLink *link)
    {
        NegContext *neg = link->dst->priv;

        avcodec_get_chroma_sub_sample(link->format, &neg->hsub, &neg->vsub);

        switch(link->format) {
        case PIX_FMT_YUVJ444P:
        case PIX_FMT_YUVJ422P:
        case PIX_FMT_YUVJ420P:
        case PIX_FMT_YUVJ440P:
            neg->offY =
            neg->offUV = 0;
            break;
        default:
            neg->offY = -4;
            neg->offUV = 1;
        }

        return 0;
    }

This simply calls avcodec_get_chroma_sub_sample() to get the chroma subsampling shift factors, and stores those in the context. It then stores a set of offsets for compensating for different luma/chroma value ranges for JPEG YUV, and a different set of offsets for other YUV colorspaces. It returns zero to indicate success, because there are no possible input cases which this filter cannot handle.

draw_slice()

Finally, the function which actually does the processing for the filter, draw_slice():

    static void draw_slice(AVFilterLink *link, int y, int h)
    {
        NegContext *neg = link->dst->priv;
        AVFilterPicRef *in = link->cur_pic;
        AVFilterPicRef *out = link->dst->outputs[0]->outpic;
        uint8_t *inrow, *outrow;
        int i, j, plane;

        /* luma plane */
        inrow  = in-> data[0] + y * in-> linesize[0];
        outrow = out->data[0] + y * out->linesize[0];
        for(i = 0; i < h; i ++) {
            for(j = 0; j < link->w; j ++)
                outrow[j] = 255 - inrow[j] + neg->offY;
            inrow  += in-> linesize[0];
            outrow += out->linesize[0];
        }

        /* chroma planes */
        for(plane = 1; plane < 3; plane ++) {
            inrow  = in-> data[plane] + (y >> neg->vsub) * in-> linesize[plane];
            outrow = out->data[plane] + (y >> neg->vsub) * out->linesize[plane];

            for(i = 0; i < h >> neg->vsub; i ++) {
                for(j = 0; j < link->w >> neg->hsub; j ++)
                    outrow[j] = 255 - inrow[j] + neg->offUV;
                inrow  += in-> linesize[plane];
                outrow += out->linesize[plane];
            }
        }

        avfilter_draw_slice(link->dst->outputs[0], y, h);
    }

The y parameter indicates the top of the current slice, and the h parameter the slice's height. Areas of the image outside this slice should not be assumed to be meaningful (though a method to allow this assumption in order to simplify boundary cases for some filters is coming in the future).

This sets inrow to point to the beginning of the first row of the slice in the input, and outrow similarly for the output. Then, for each row, it loops through all the pixels, subtracting them from 255, and adding the offset which was determined in config_props() to account for different value ranges.

It then does the same thing for the chroma planes. Note how the width and height are shifted right to account for the chroma subsampling.

Once the drawing is completed, the slice is sent to the next filter by calling avfilter_draw_slice().

Chapter 2:

Adding the filter to the FFmpeg build and running it

1. Declare the filter's dependencies in configure

    # filters
    ...
    tnegate_filter_deps="gpl"

2. Register the custom filter in libavfilter/allfilters.c

    void avfilter_register_all(void)
    {
        ...

        REGISTER_FILTER (TNEGATE, tnegate, vf);
        ...
    }

3. Add the filter object to libavfilter/Makefile

    OBJS = allfilters.o \
           avfilter.o \
           avfiltergraph.o \
           defaults.o \
           drawutils.o \
           formats.o \
           graphparser.o \

    OBJS-$(CONFIG_AVCODEC) += avcodec.o
    ...
    OBJS-$(CONFIG_TNEGATE_FILTER) += vf_tnegate.o

4. Run configure

    ./configure \
    --enable-gpl --enable-nonfree --enable-version3 \
    ...
    --enable-avfilter --enable-filter=movie \
    --enable-filter=tnegate

5. Build and run

    # make
    # ./ffmpeg -i input.flv -vf "tnegate" -y output.flv

Chapter 3:

Source code of the negate filter

libavfilter/vf_tnegate.c:

    #include "libavutil/eval.h"
    #include "libavutil/opt.h"
    #include "libavutil/pixdesc.h"
    #include "libavcodec/avcodec.h"
    #include "avfilter.h"

    typedef struct
    {
      int hsub, vsub;  // chroma subsampling shift factors
      int offY, offUV; // range-compensation offsets for luma/chroma
    } NegContext;

    static int tnegate_config_props(AVFilterLink *link)
    {
      NegContext *neg = link->dst->priv;

      avcodec_get_chroma_sub_sample(link->format, &neg->hsub, &neg->vsub);
      switch(link->format)
      {
        case PIX_FMT_YUVJ444P:
        case PIX_FMT_YUVJ422P:
        case PIX_FMT_YUVJ420P:
        case PIX_FMT_YUVJ440P:
          neg->offY =
          neg->offUV = 0;
          break;
        default:
          neg->offY = -4;
          neg->offUV = 1;
      }
      return 0;
    }

    static void tnegate_draw_slice(AVFilterLink *link, int y, int h, int slice_dir)
    {
      NegContext *neg = link->dst->priv;
      AVFilterBufferRef *in = link->cur_buf;
      AVFilterBufferRef *out = link->dst->outputs[0]->out_buf;
      unsigned char *inrow, *outrow;
      int i, j, plane;

      /* luma plane */
      inrow = in-> data[0] + y * in-> linesize[0]; // first row of the slice
      outrow = out->data[0] + y * out->linesize[0];

      for(i = 0; i < h; i++)
      {
        for(j = 0; j < link->w; j++)
          outrow[j] = 255 - inrow[j] + neg->offY;
        inrow += in-> linesize[0];
        outrow += out->linesize[0];
      }

      /* chroma planes */
      for(plane = 1; plane < 3; plane++)
      {
        inrow  = in-> data[plane] + (y >> neg->vsub) * in-> linesize[plane];
        outrow = out->data[plane] + (y >> neg->vsub) * out->linesize[plane];

        for(i = 0; i < (h >> neg->vsub); i++)
        {
          for(j = 0; j < (link->w >> neg->hsub); j++)
            outrow[j] = 255 - inrow[j] + neg->offUV;

          inrow += in-> linesize[plane];
          outrow += out->linesize[plane];
        }
      }
      avfilter_draw_slice(link->dst->outputs[0], y, h, 1);
    }

    static int tnegate_query_formats(AVFilterContext *ctx)
    {
      static const enum PixelFormat pix_fmts[] = {
        PIX_FMT_YUV444P, PIX_FMT_YUV422P, PIX_FMT_YUV420P,
        PIX_FMT_YUV411P, PIX_FMT_YUV410P, PIX_FMT_YUV440P,
        PIX_FMT_YUVJ444P, PIX_FMT_YUVJ422P, PIX_FMT_YUVJ420P,
        PIX_FMT_YUVJ440P, PIX_FMT_NONE
      };
      avfilter_set_common_pixel_formats(ctx, avfilter_make_format_list(pix_fmts));
      return 0;
    }

    /* the filter's structure */
    AVFilter avfilter_vf_tnegate =
    {
      .name = "tnegate", ///< filter name
      .priv_size = sizeof(NegContext), ///< size of private data to allocate for the filter
      .query_formats = tnegate_query_formats, ///< set the format of in/outputs

      /* the inputs of the filter */
      .inputs = (AVFilterPad[]){{ .name = "default",
                                  .type = AVMEDIA_TYPE_VIDEO,
                                  .draw_slice = tnegate_draw_slice,
                                  .config_props = tnegate_config_props,
                                  .min_perms = AV_PERM_READ,},
                                { .name = NULL }},
      /* the outputs of the filter */
      .outputs = (AVFilterPad[]){{ .name = "default",
                                   .type = AVMEDIA_TYPE_VIDEO, },
                                 { .name = NULL }},
    };

Note: compared with the listing on the original blog, the NegContext structure here includes the offY/offUV members it uses in config_props(), the luma loop advances inrow as well as outrow, the chroma width is shifted by hsub (not vsub), and the supported format list is restricted to the planar YUV formats that draw_slice() actually handles (the original also listed GRAY8, NV12, and NV21, which this code cannot process correctly).


 
