ffmpeg video filters: a complete tour (work in progress)

As of November 13, 2019, the ffmpeg website lists 233 video filters.

 

This article uses ffmpeg 4.1 as the reference and describes the purpose and usage of each video filter. For every filter we try to run a working example on the command line and give reference code.

  1. addroi

Mark one or more regions of the video as regions of interest (ROI). The frame data itself is not changed; the ROI is only recorded in the frame metadata, which influences the later encoding stage.

This filter is supposed to be supported from ffmpeg 4.2.1.

// In theory this should work, but the downloaded 4.2.1 build still did not support it:
./ffmpeg42 -y -i ./video_8_24.mp4 -filter_complex addroi=x=0:y=0:w=200:h=200:qoffset=1 rec_addroi.mp4

Multiple regions can also be marked as ROI:

./ffmpeg42 -y -i ./video_8_24.mp4 -filter_complex "addroi=x=0:y=0:w=200:h=200:qoffset=1[out1];[out1]addroi=x=200:y=200:w=200:h=200:qoffset=1[out2]" -map "[out2]" rec_addroi.mp4

  1. alphaextract

Extract the alpha component from the input as a grayscale video. This filter is often used together with the alphamerge filter.

Make sure the input video actually has an alpha channel first.

  1. alphamerge

Add or replace the content of the alpha channel:

movie=in_alpha.mkv [alpha]; [in][alpha] alphamerge [out]

  1. amplify

Amplify the difference between the current pixel and the same pixel in neighboring frames.

./ffmpeg42 -t 10 -y -i ./video_8_24.mp4 -filter_complex amplify=radius=2:threshold=10:tolerance=5 rec_amplify.mp4

  1. atadenoise

Adaptive temporal averaging denoiser.

Supports timeline editing.

 

  1. avgblur

Average blur filter.

Supports timeline editing.

 

  1. bbox (not yet fully figured out)

Compute the bounding box for the non-black pixels in the input frame luminance plane.

  1. bilateral

Spatial smoothing that preserves edges (bilateral filtering).

Supports timeline editing.

 

  1. bitplanenoise

Show and measure the bit-plane noise of the pixel planes.

Supports timeline editing.

  1. blackdetect

Detect intervals where the video is almost completely black; a threshold can be set. This is useful for detecting chapter transitions.

  1. blackframe

Detect frames that are almost completely black; a threshold can be set.

  1. blend, tblend

(1) blend: takes two video streams; the first input is the top layer and the second the bottom layer, and the filter controls the weights with which the two layers are shown:

./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -i ./1.mp4 -filter_complex "blend=all_expr=if(eq(mod(X\,2)\,mod(Y\,2))\,A\,B)" rec_${name}.mp4

(2) tblend: takes a single video stream and blends successive frames:

./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -filter_complex "tblend=all_mode=multiply" rec_${name}.mp4

  1. bm3d

Denoise frames using the Block-Matching 3D (BM3D) algorithm (very slow).

./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4 -filter_complex "bm3d=sigma=3:block=4:bstep=2:group=1:estim=basic" rec_${name}.mp4

  1. boxblur

Apply a box blur algorithm to the input.

  1. bwdif

Deinterlace the input video ("Bob Weaver Deinterlacing Filter").

  1. chromahold

Remove all color information except for a certain specified color; a similarity range can be set.

  1. chromakey

YUV colorspace color/chroma keying.

  1. chromashift

Shift the chroma horizontally and/or vertically.

  1. ciescope

Plot the pixel value distribution of the input video on a CIE diagram and output it as a video. The white point, CIE diagram system, gamma, and source gamut can be configured.

 

  1. codecview

Visualize encoding information exported by some codecs, using side data carried in the stream.

So far only motion vector display has been verified to work:

ffplay -flags2 +export_mvs input.mp4 -vf codecview=mv_type=fp

  1. colorbalance

Adjust the intensity of the primary color components (red, green, blue) of the input.

./ffmpeg42 -hide_banner -t 2 -y -i ./1.mp4 -filter_complex colorbalance=rs=1:rh=1 ${color} rec_${name}.mp4

  1. colorchannelmixer

Adjust video input frames by re-mixing color channels.

  1. colorkey

RGB colorspace color keying.

  1. colorhold

Remove all color information for all RGB colors except for a certain one.

  1. colorlevels

Adjust video input frames using levels

  1. colormatrix

Convert between color matrices.

  1. colorspace

Convert the color space of the input.

  1. convolution

Apply convolution of 3x3, 5x5, 7x7 or horizontal/vertical up to 49 elements.

Process the input video with a convolution kernel:

// sharpen
./ffmpeg42 -hide_banner -t 3 -y -i ./6.mp4 -filter_complex convolution="0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0:0 -1 0 -1 5 -1 0 -1 0" ${color} rec_${name}.mp4
// blur
convolution="1 1 1 1 1 1 1 1 1:1 1 1 1 1 1 1 1 1:1 1 1 1 1 1 1 1 1:1 1 1 1 1 1 1 1 1:1/9:1/9:1/9:1/9"
// edge enhance
convolution="0 0 0 -1 1 0 0 0 0:0 0 0 -1 1 0 0 0 0:0 0 0 -1 1 0 0 0 0:0 0 0 -1 1 0 0 0 0:5:1:1:1:0:128:128:128"
// edge detect
convolution="0 1 0 1 -4 1 0 1 0:0 1 0 1 -4 1 0 1 0:0 1 0 1 -4 1 0 1 0:0 1 0 1 -4 1 0 1 0:5:5:5:1:0:128:128:128"
// Laplacian edge detector
convolution="1 1 1 1 -8 1 1 1 1:1 1 1 1 -8 1 1 1 1:1 1 1 1 -8 1 1 1 1:1 1 1 1 -8 1 1 1 1:5:5:5:1:0:128:128:0"
// emboss
convolution="-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2"

  1. convolve

Apply 2D convolution of video stream in frequency domain using second stream as impulse.

  1. copy

Copy the input unchanged to the output; useful for testing.

  1. coreimage

Video filtering on GPU using Apple’s CoreImage API on OSX

  1. cover_rect

Cover a rectangular object

  1. crop

Crop the input video to a given width and height, starting at a given position.
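
As a concrete sketch of the crop syntax (the 640x360 target and the file names are hypothetical, not from the article), the snippet below only builds and prints the filter string; the actual ffmpeg call is left commented out:

```shell
# Hypothetical example: center-crop the input to 640x360.
# input.mp4 / cropped.mp4 are placeholder names.
w=640; h=360
vf="crop=w=${w}:h=${h}:x=(iw-${w})/2:y=(ih-${h})/2"
echo "$vf"
# ffmpeg -y -i input.mp4 -vf "$vf" cropped.mp4
```

ffmpeg expands iw/ih to the input width/height, so the same string works for any input size.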

  1. cropdetect

Auto-detect crop parameters and print them in the log. It detects the non-black region, so the surrounding black borders can be cropped away.

  1. cue

Delay video filtering until a given wallclock timestamp. The filter first passes on preroll amount of frames, then it buffers at most buffer amount of frames and waits for the cue. After reaching the cue it forwards the buffered frames and also any subsequent frames coming in its input.

The filter can be used to synchronize the output of multiple ffmpeg processes for realtime output devices like decklink. By putting the delay in the filtering chain and pre-buffering frames the process can pass on data to output almost immediately after the target wallclock timestamp is reached.

  1. curves

Apply color adjustments using curves

  1. datascope

Video data analysis filter.

This filter shows the pixel values of part of the video in hexadecimal; the output frames contain the rendered pixel values.

 

  1. dctdnoiz

Denoise frames using a 2D DCT (frequency-domain filtering). It is very slow and unsuitable for real-time use.

  1. deband

Remove banding and ringing artifacts from the input by replacing banded pixels with the average of referenced pixels.

  1. deblock

Remove blocking artifacts from the input.

  1. decimate

Drop duplicated frames at regular intervals, judging by a similarity metric between adjacent frames; a threshold can be set.

  1. dedot

Reduce cross-luminance (dot-crawl) and cross-color (rainbows) from video.

  1. deflate

Apply deflate effect to the video.

  1. deflicker

Remove temporal frame luminance variations (flicker).

  1. dejudder

Remove the judder produced by partially telecined content.

  1. delogo

Hide a logo by specifying a rectangular region to be blurred away.

  1. derain

Remove the rain in the input image/video by applying the derain methods based on convolutional neural networks. Supported models:

  1. deshake

Attempt to fix small changes in horizontal and/or vertical shift. This filter helps remove camera shake from hand-holding a camera, bumping a tripod, moving on a vehicle, etc.

  1. detelecine

Apply an exact inverse of the telecine operation. It requires a predefined pattern specified using the pattern option which must be the same as that passed to the telecine filter.

That is, it undoes the top-field/bottom-field telecine pattern and recovers the original progressive frames.

  1. dilation

Apply a dilation (morphological maximum) effect to the video.

  1. drawbox

Draw a colored box on a given region of the input video.

 

  1. drawgrid

Draw a grid on the video frame.

 

  1. drawtext

Draw text on the video.
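
A small sketch of a drawtext filter string (the text, position, and file names are hypothetical; note that some builds also require a fontfile= option). Only the string construction is executed; the ffmpeg call is commented out:

```shell
# Hypothetical example: draw centered white text near the bottom of the frame.
vf="drawtext=text='hello':x=(w-text_w)/2:y=h-50:fontsize=36:fontcolor=white"
echo "$vf"
# ffmpeg -y -i input.mp4 -vf "$vf" titled.mp4
```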

  1. edgedetect

Detect and draw edges; several edge operators are available.

  1. entropy

Measure the graylevel entropy of the histograms of the color channels of video frames.

  1. erosion

Apply an erosion (morphological minimum) effect to the video.

 

  1. fade

Apply a fade-in/fade-out effect to the beginning and end of the video.
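
A sketch of chaining two fade instances for in/out (the timings and file names are hypothetical). The snippet builds the filter string; the ffmpeg call is commented out:

```shell
# Hypothetical example: 2 s fade-in at the start, 2 s fade-out starting at t=8 s.
vf="fade=t=in:st=0:d=2,fade=t=out:st=8:d=2"
echo "$vf"
# ffmpeg -y -i input.mp4 -vf "$vf" faded.mp4
```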

  1. fftdnoiz

Denoise frames using a 3D FFT (frequency-domain filtering).

  1. fftfilt

Apply arbitrary expressions to samples in the frequency domain.

  1. fillborders

Fill borders of the input video, without changing video stream dimensions. Sometimes video can have garbage at the four edges and you may not want to crop video input to keep size multiple of some number.

  1. find_rect

Find a rectangular object in the input.

  1. floodfill

Flood area with values of same pixel components with another values.

  1. format

Convert the pixel format of the input video to one of the specified formats.

  1. fps

Convert the video frame rate by duplicating or dropping frames as needed to achieve a constant output frame rate.
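
A minimal sketch of forcing a constant output rate (the 30 fps target and file names are hypothetical). Only the string construction runs; the ffmpeg call is commented out:

```shell
# Hypothetical example: convert to a constant 30 fps by dup/drop.
target=30
vf="fps=fps=${target}"
echo "$vf"
# ffmpeg -y -i input.mp4 -vf "$vf" cfr30.mp4
```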

  1. framepack

Pack two video streams into one stereoscopic video, supporting several layouts such as side-by-side or top-bottom (this can also be used to display two videos for comparison).

  1. framerate

Change the frame rate by interpolating new video output frames from the source frames.

  1. framestep

Select one frame every N frames.

  1. freezedetect

Detect frozen video.

  1. gblur

Gaussian blur filter.

  1. geq

Apply an expression to each pixel; this can implement horizontal flips and many other per-pixel operations.

geq=p(W-X\,Y) // horizontal flip

  1. gradfun

Fix the banding artifacts that are sometimes introduced into nearly flat regions by truncation to 8-bit color depth. Interpolate the gradients that should go where the bands are, and dither them.

  1. graphmonitor

Show various filtergraph stats; visualizes the relationships between the filters in a graph.

  1. greyedge

A color constancy variation filter which estimates scene illumination via grey edge algorithm and corrects the scene colors accordingly.

  1. haldclut

Apply a Hald CLUT to a video stream.

Create a color lookup table, then apply it to a video:

// create the color lookup table
ffmpeg -f lavfi -i haldclutsrc=8 -vf "hue=H=2*PI*t:s=sin(2*PI*t)+1, curves=cross_process" -t 10 -c:v ffv1 clut.nut
// process a video with the generated color lookup table
ffmpeg -f lavfi -i mandelbrot -i clut.nut -filter_complex '[0][1] haldclut' -t 20 mandelclut.mkv

  1. hflip

Flip the input video horizontally.

  1. histeq

This filter applies a global color histogram equalization on a per-frame basis.

  1. histogram

Compute and display a histogram of the pixel value distribution of the input.

  1. hqdn3d

This is a high precision/quality 3d denoise filter.It aims to reduce image noise, producing smooth images and making still images really still. It should enhance compressibility.

  1. hwdownload

Download hardware frames to system memory.

  1. hwmap

Map hardware frames to system memory or to another device.

  1. hwupload

Upload system memory frames to hardware surfaces.

  1. hwupload_cuda

Upload system memory frames to a CUDA device.

  1. hqx

Apply a high-quality magnification filter, scaling the input up by an integer factor.

  1. hstack

Stack several videos horizontally into one output; all inputs must have the same pixel format and height.
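
A sketch of the two-input filtergraph for horizontal stacking (left.mp4/right.mp4 are placeholder names). Only the graph string is built here; the ffmpeg call is commented out:

```shell
# Hypothetical example: place two inputs side by side.
fc="[0:v][1:v]hstack=inputs=2"
echo "$fc"
# ffmpeg -y -i left.mp4 -i right.mp4 -filter_complex "$fc" side_by_side.mp4
```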

  1. hue

Modify the hue and/or saturation of the input.

  1. hysteresis

Grow first stream into second stream by connecting components. This makes it possible to build more robust edge masks.

  1. idet

Detect video interlacing type.

  1. il

Deinterleave or interleave fields

  1. inflate

Apply inflate effect to the video

  1. interlace

Simple interlacing filter from progressive contents. This interleaves upper (or lower) lines from odd frames with lower (or upper) lines from even frames, halving the frame rate and preserving image height.

 

  1. kerndeint

Deinterlace input video by applying Donald Graft's adaptive kernel deinterlacing. Works on interlaced parts of a video to produce progressive frames.

 

 

  1. lagfun

Slowly update darker pixels

  1.  lenscorrection

Correct radial lens distortion

  1.  lensfun

Apply lens correction via the lensfun library (http://lensfun.sourceforge.net/).

  1. libvmaf

Compute VMAF between two inputs; it can also report PSNR and SSIM.

  1. limiter

Clamp pixel values to a given range.

  1. loop

Loop video frames. This is different from replaying the whole file (which uses the -loop option).

Options:

loop: number of times to loop; -1 loops forever; default is 0

size: number of frames in the loop; default is 0

start: first frame of the loop; default is 0

loop=loop=30:start=60:size=3 // starting at frame 60 of the video, loop the following 3 frames 30 times

  1. lut1d

Apply a 1D LUT to an input video

  1. lut3d

Apply a 3D LUT to an input video

  1. lumakey

Turn certain luma values into transparency

  1. lut, lutrgb, lutyuv

Compute a look-up table for binding each pixel component input value to an output value, and apply it to the input video.

  1. lut2, tlut2

The lut2 filter takes two input streams and outputs one stream.

 

  1. maskedclamp

Clamp the first input stream with the second input and third input stream.

 

  1. maskedmax

Merge the second and third input stream into output stream using absolute differences between second input stream and first input stream and absolute difference between third input stream and first input stream. The picked value will be from second input stream if second absolute difference is greater than first one or from third input stream otherwise.

 

  1. maskedmerge

Merge the first input stream with the second input stream using per pixel weights in the third input stream.

  1. maskedmin

Merge the second and third input stream into output stream using absolute differences between second input stream and first input stream and absolute difference between third input stream and first input stream. The picked value will be from second input stream if second absolute difference is less than first one or from third input stream otherwise.

 

  1. maskfun

Create mask from input video

  1. mcdeint

Apply motion-compensation deinterlacing.

 

  1. median

Pick the median pixel value within a rectangular window of a given radius; each pixel is replaced by the median of its neighborhood.

  1. mergeplanes

Merge color channel components from several video streams.

  1. mestimate

Estimate and export motion vectors using block matching algorithms. The motion vectors are stored in frame side data to be used by other filters.

  1. midequalizer

Apply Midway Image Equalization using two video streams. This filter adjusts a pair of input videos to have similar histograms, giving both streams the same dynamic range; it is especially useful for matching the exposures of a stereo camera pair. The filter takes two inputs and has one output; both inputs must have the same pixel format but may differ in size. The output is the first input adjusted using the midway histogram of both inputs.

  1. minterpolate

Convert the video to a specified frame rate using motion interpolation.

  1. mix

Mix several video input streams into one video stream; the weight of each input can be set.

  1. mpdecimate

Drop frames that differ little from the preceding frames, in order to reduce the frame rate.

  1. negate

Negate (invert) the pixel values of the input video.

 

  1. nlmeans

Denoise using the non-local means algorithm; it is rather slow.

  1. nnedi

Deinterlace video using neural network edge directed interpolation.

 

  1. noformat

Force libavfilter not to use any of the specified pixel formats for the input to the next filter.

noformat=pix_fmts=yuv420p|yuv444p|yuv410p,vflip // force libavfilter to use any format except yuv420p/yuv444p/yuv410p for the input, then pass it on to the vflip filter

noformat=pix_fmts=yuv420p,vflip // if the source is yuv420p, that format is now forbidden, so the encoded output ends up as yuvj420p

  1. noise

Add noise to the input. The pixel components and the noise type can be selected (temporally averaged noise, mixed random noise with a regular pattern, temporal noise whose pattern changes between frames, Gaussian noise).

  1. normalize

Normalize RGB video (also called histogram stretching or contrast stretching).

For each channel of each frame, the filter computes the input range and maps it linearly to the user-specified output range, which defaults to the full dynamic range from pure black to pure white. Temporal smoothing can be applied to the input range to reduce flickering caused by small dark or bright objects entering or leaving the frame; this works much like the auto-exposure of a camera.

  1. null

Pass the input to the output unchanged.

  1. ocr

Optical character recognition. To use this filter, ffmpeg must be configured with --enable-libtesseract.

  1. ocv

Transform the video using libopencv.

Supports dilate and smooth, among others.

  1. oscilloscope

Display the video signal as a 2D oscilloscope drawn inside the video; useful for measuring spatial impulse and step response, chroma delay, and so on.

The position and size of the scoped region can be configured.

  1. overlay

Overlay one stream on top of another. Two inputs, one output: the first input is the main video, on which the second input is overlaid.

 

  1. owdenoise

Apply overcomplete wavelet denoising. It is quite complex and very slow; it can also serve as a blur effect.

  1. pad

Pad the input with borders; the source is placed at the given x/y coordinates.

  1. palettegen

Generate a single palette for the whole video stream:

ffmpeg -i input.mkv -vf palettegen palette.png

  1. paletteuse

Downsample an input video using a palette, for example one produced by the palettegen filter:

ffmpeg -i input.mkv -i palette.png -lavfi paletteuse output.gif

  1. perspective

Correct perspective of video not recorded perpendicular to the screen.

 

  1. phase

Delay interlaced video by one field time so that the field order changes.

  1. photosensitivity

Reduce flashing in the video.

  1. pixdesctest

Pixel format descriptor test filter, mainly useful for internal testing. The output video should be equal to the input video.

  1. pixscope

Inspect the pixel values at a given position; useful for checking colors. The minimum supported resolution is 640x480.

./ffmpeg42 -hide_banner -y -i ./1.mp4 -filter_complex "pixscope=x=40/720:y=90/1280:w=80:h=80:wx=1:wy=0" ${color} rec_${name}.mp4

  1. pp

Enable the specified chain of postprocessing subfilters using libpostproc. This library should be automatically selected with a GPL build (--enable-gpl). Subfilters must be separated by ’/’ and can be disabled by prepending a ’-’. Each subfilter and some options have a short and a long name that can be used interchangeably, i.e. dr/dering are the same.

 

  1. pp7

Apply Postprocessing filter 7. It is variant of the spp filter, similar to spp = 6 with 7 point DCT, where only the center sample is used after IDCT.

 

  1. premultiply

Apply alpha premultiply effect to input video stream using first plane of second stream as alpha.

 

  1. prewitt

Apply the Prewitt operator to the input video stream.

 

  1. pseudocolor

Alter the colors of video frames with pseudocolor:

./ffmpeg42 -hide_banner -y -i ./1.mp4 -filter_complex pseudocolor="'if(between(val,10,200),20,-1)'" ${color} rec_${name}.mp4

  1. psnr

Compute the average, maximum and minimum PSNR between two input videos. The first input is the main video and is passed unchanged to the output; the second input is used as the reference. Both inputs must have the same resolution and pixel format.
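
A sketch of a two-input PSNR measurement graph (distorted.mp4/reference.mp4 and the stats file name are placeholders). The snippet only builds the graph string; the ffmpeg call, which discards the video with -f null, is commented out:

```shell
# Hypothetical example: first input = distorted (main), second = reference.
fc="[0:v][1:v]psnr=stats_file=psnr.log"
echo "$fc"
# ffmpeg -i distorted.mp4 -i reference.mp4 -filter_complex "$fc" -f null -
```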

  1. pullup

Pulldown reversal (inverse telecine) filter, capable of handling mixed hard-telecine, 24000/1001 fps progressive, and 30000/1001 fps progressive content.

 

 

  1. qp

Change the quantization parameters (QP) of the video. (Not yet clear how it takes effect.)

  1. random

Flush video frames from an internal cache of frames into a random order. No frame is discarded. Inspired by the frei0r nervous filter.

In other words, it shuffles the playback order of the frames.

  1. removegrain

Spatial denoiser for progressive video.

 

  1. removelogo

Suppress a TV station logo, using an image file to determine which pixels comprise the logo. It works by filling in the pixels that comprise the logo with neighboring pixels.

 

  1. reverse

Reverse (play backwards) a video. Use the trim filter first if possible: all frames must be kept in memory, so avoid feeding in too many frames.

  1. roberts

Apply the Roberts cross operator to the input video.

  1. rgbashift

Shift R/G/B/A pixels horizontally and/or vertically.

 

  1. rotate

Rotate the video by a given angle; the output width/height and the interpolation method can be specified.

  1. sab

Apply shape-adaptive blur.

  1. showinfo

Show a line of information in the log for each input video frame. The input video is not modified.

  1. scale

scale is a very important filter; it resizes the input video.
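
A minimal sketch of a common scale invocation (the 1280-wide target and file names are hypothetical). h=-2 keeps the aspect ratio while forcing an even height, which most encoders require. Only the string construction runs; the ffmpeg call is commented out:

```shell
# Hypothetical example: resize to width 1280, keep aspect, even height.
vf="scale=w=1280:h=-2"
echo "$vf"
# ffmpeg -y -i input.mp4 -vf "$vf" resized.mp4
```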

  1. scale2ref

Scale the input video based on a reference video; useful for inserting a logo or adapting it to the size of another stream.

  1. scroll

Scroll input video horizontally and/or vertically by constant speed.

 

  1. selectivecolor

Adjust cyan, magenta, yellow and black (CMYK) to certain ranges of colors (such as "reds", "yellows", "greens", "cyans", ...). The adjustment range is defined by the "purity" of the color (that is, how saturated it already is).

 

  1. separatefields

The separatefields takes a frame-based video input and splits each frame into its components fields, producing a new half height clip with twice the frame rate and twice the frame count.

 

  1. setdar, setsar

The setdar filter sets the Display Aspect Ratio for the filter output video.

 

  1. setfield

Force field for the output video frame.

 

  1. setparams

The setparams filter marks interlace and color range for the output frames. It does not change the input frame, but only sets the corresponding property, which affects how the frame is treated by filters/encoders.

 

  1. showpalette

Displays the 256 colors palette of each frame. This filter is only relevant for pal8 pixel format frames.

 

  1. shuffleframes

Reorder and/or duplicate and/or drop video frames.

 

 

  1. shuffleplanes

Reorder and/or duplicate video planes.

 

  1. signalstats

Evaluate various visual metrics that assist in determining issues associated with the digitization of analog video media.

  1. signature

Calculates the MPEG-7 Video Signature. The filter can handle more than one input. In this case the matching between the inputs can be calculated additionally. The filter always passes through the first input. The signature of each stream can be written into a file.

 

 

  1. smartblur

Blur or sharpen the input video without affecting its outlines.

 

  1. sobel

Apply the Sobel operator to the input video; the planes to process can be selected.

  1. spp

Apply a simple postprocessing filter that compresses and decompresses the image at several (or - in the case of quality level 6 - all) shifts and average the results.

 

  1. sr

Scale the input by applying one of the super-resolution methods based on convolutional neural networks (machine-learning super-resolution). Supported models:

  1. ssim

Compute the SSIM between two input videos. The first input is the main video, the second the reference; the results can be written to a file.

 

  1. stereo3d

Convert between different stereoscopic video formats.

  1. streamselect, astreamselect

Select video or audio streams.

  1. super2xsai

Scale the input by 2x using the Super2xSaI algorithm, which preserves edges while upscaling.

 

  1. swaprect

Swap two rectangular regions in the video: specify two rectangles and their contents are exchanged.

 

  1. swapuv

Swap the U and V planes.

  1. telecine

Apply the telecine process to the video.

 

  1. threshold

Apply a threshold effect to the video. It needs four input streams: the first is the stream to process and the second carries the threshold values; where a pixel of the first stream is below the threshold, the output takes the pixel from the third stream, otherwise from the fourth.

  1. thumbnail

Select the most representative frame from a given sequence of consecutive frames.
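
A sketch of picking one representative frame per batch (the batch size of 100 and the file names are hypothetical). Only the filter string is built; the ffmpeg call, which writes a single image, is commented out:

```shell
# Hypothetical example: pick the best frame out of each batch of 100.
vf="thumbnail=n=100"
echo "$vf"
# ffmpeg -i input.mp4 -vf "$vf" -frames:v 1 thumb.png
```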

 

 

  1. tile

Tile several successive video frames together into one picture.

  1. tinterlace

Perform various types of temporal field interlacing.

  1. tmix

Mix successive video frames.

  1. tonemap

Tone map colors from different dynamic ranges.

 

  1. tpad

Temporally pad the video (add frames at the start or end).

  1. transpose

Transpose rows with columns in the input video and optionally flip it.

 

  1. transpose_npp

Transpose rows with columns in the input video and optionally flip it, using CUDA/NPP for hardware frames.

  1. trim

Trim the input so that the output contains only a portion of the input stream.
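
A sketch of a trim filter string (the 10-15 s window and file names are hypothetical). setpts resets the timestamps so the trimmed clip starts at t=0. Only the string construction is executed; the ffmpeg call is commented out:

```shell
# Hypothetical example: keep seconds 10-15 and reset timestamps.
vf="trim=start=10:end=15,setpts=PTS-STARTPTS"
echo "$vf"
# ffmpeg -y -i input.mp4 -vf "$vf" clip.mp4
```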

  1. unpremultiply

Apply alpha unpremultiply effect to input video stream using first plane of second stream as alpha.

 

  1. unsharp

Sharpen or blur the input video.

  1. uspp

Apply ultra slow/simple postprocessing filter that compresses and decompresses the image at several (or - in the case of quality level 8 - all) shifts and average the results.

 

  1. v360

Convert between projection formats of 360° video.

  1. vaguedenoiser

Apply a wavelet-based denoiser.

  1. vectorscope

Display 2 color component values in the two dimensional graph (which is called a vectorscope).

 

  1. vidstabdetect

Analyze video stabilization/deshaking. Perform pass 1 of 2, see vidstabtransform for pass 2.

 

  1. vidstabtransform

Video stabilization/deshaking: pass 2 of 2, see vidstabdetect for pass 1.

 

  1. vflip

Flip the input video vertically.

  1. vfrdet

Detect whether the content has a variable frame rate.

  1. vignette

Make or reverse a natural vignetting effect.

  1. vmafmotion

Obtain the average VMAF motion score of a video. It is one of the component metrics of VMAF.

 

  1. vstack

Stack videos vertically into one output (the inputs must have the same pixel format and width); this filter is faster than overlay and pad.

  1. w3fdif

Deinterlace the input video (Weston 3-Field Deinterlacing Filter).

  1. waveform

Video waveform monitor.

This filter plots color component intensity; by default only the luma is plotted.

  1. weave, doubleweave

The weave takes a field-based video input and join each two sequential fields into single frame, producing a new double height clip with half the frame rate and half the frame count.

 

  1. xbr

Apply the high-quality xBR magnification filter, designed for scaling up pixel art; it follows edge-detection rules.

  1. xmedian

Pick the median pixel value across several input video streams.

  1. xstack

Stack video inputs into custom layout.

 

  1. yadif

Deinterlace the input video ("yadif" means "yet another deinterlacing filter").

 

  1. yadif_cuda

Deinterlace the input video using the yadif algorithm, but implemented in CUDA so that it can work as part of a GPU accelerated pipeline with nvdec and/or nvenc.

 

  1. zoompan

Apply a zoom and pan effect.
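
A sketch of a slow zoom-in (the zoom speed, duration, and output size are hypothetical values). Only the filter string is built; the ffmpeg call is commented out:

```shell
# Hypothetical example: zoom slowly up to 1.5x over 125 output frames.
vf="zoompan=z='min(zoom+0.0015,1.5)':d=125:s=1280x720"
echo "$vf"
# ffmpeg -y -i input.mp4 -vf "$vf" zoomed.mp4
```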

  1. zscale

Scale the input video using z.lib (zimg). It requires extra build configuration and supports colorspace conversion.

#addroi
#./ffmpeg42  -y -i  ./video_8_24.mp4 -filter_complex "addroi=x=0:y=0:w=200:h=200:qoffset=1[out1];[out1]addroi=x=200:y=200:w=200:h=200:qoffset=1[out2]" -map "[out2]" rec_addroi.mp4
color='-colorspace bt709 -color_range tv -color_primaries bt709 -color_trc bt709'
amplify(){
./ffmpeg42 -hide_banner -t 10 -y -i  ./video_8_24.mp4 -filter_complex "amplify=radius=2:threshold=10:tolerance=5" rec_amplify.mp4
ffplay -hide_banner -i rec_amplify.mp4
}

ass(){
./ffmpeg42 -hide_banner -t 10 -y -i  ./video_8_24.mp4 -filter_complex ass rec_ass.mp4
ffplay -hide_banner -i rec_ass.mp4
}
#ass

atadenoise(){
name=atadenoise
./ffmpeg42 -hide_banner -t 10 -y -i  ./6.mp4 -filter_complex "atadenoise=enable=between(n\,1\,50):0a=0.3:0b=1" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#atadenoise

avgblur(){
name=avgblur
./ffmpeg42 -hide_banner -t 10 -y -i  ./6.mp4 -filter_complex "avgblur=enable=between(n\,1\,50):sizeX=10" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#avgblur

bbox(){
name=bbox
./ffmpeg42 -hide_banner -t 10 -y -i  ./6.mp4 -filter_complex "bbox=enable=between(n\,1\,10):min_val=1" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4  
}
#bbox

bilateral(){
    name=bilateral
./ffmpeg42 -hide_banner -t 10 -y -i  ./6.mp4 -filter_complex "bilateral=enable=between(t\,2\,5)" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#bilateral

bitplanenoise(){
name=bitplanenoise
./ffmpeg42 -hide_banner -t 10 -y -i  ./6.mp4 -filter_complex "bitplanenoise=filter=1" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#bitplanenoise

blackdetect(){
name=blackdetect
./ffmpeg42 -hide_banner -t 10 -y -i  ./6.mp4 -filter_complex "blackdetect" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#blackdetect

blackframe(){
name=blackframe
  ./ffmpeg42 -hide_banner -t 10 -y -i  ./6.mp4 -filter_complex "blackframe" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}  
#blackframe

blend(){
name=blend
  ./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4  -i ./1.mp4  -filter_complex  "blend=all_mode=multiply" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#blend

bm3d(){
name=bm3d
  ./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4    -filter_complex  "bm3d=sigma=3:block=4:bstep=2:group=1:estim=basic" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#bm3d

boxblur(){
name=boxblur
  ./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4    -filter_complex  "boxblur=luma_radius=2:luma_power=1" rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#boxblur

bwdif(){
name=bwdif
  ./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4    -filter_complex  "bwdif"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#bwdif

chromahold(){
name=chromahold
  ./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4    -filter_complex  "chromahold"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#chromahold

chromakey(){
name=chromakey
  ./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4    -filter_complex  "chromakey=color=black:blend=0.01"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#chromakey

chromashift(){
name=chromashift
  ./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4    -filter_complex  "chromashift=edge=smear"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#chromashift

ciescope(){
name=ciescope
./ffmpeg42 -hide_banner -t 10 -y -i ./6.mp4   -filter_complex  ciescope=system=rec709:cie=xyy:gamuts=rec709:showwhite=1:gamma=2.2  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#ciescope

codecview(){
name=codecview
ffplay -hide_banner   -flags2 export_mvs -i ./1.mp4 -vf codecview=mv_type=fp:qp=1
} 
#codecview

colorbalance(){
name=colorbalance
./ffmpeg42 -hide_banner -t 2 -y -i ./1.mp4 -filter_complex colorbalance=rs=1:rh=1 ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#colorbalance

convolution(){
name=convolution
./ffmpeg42 -hide_banner -t 3 -y -i ./6.mp4   -filter_complex  convolution="-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2:-2 -1 0 -1 1 1 0 1 2" ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#convolution

convolve(){
name=convolve
./ffmpeg42 -hide_banner -t 3 -y -i ./6.mp4   -filter_complex  convolve ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#convolve

crop(){
name=crop
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  crop=w=240:h=240 ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#crop

cropdetect(){
name=cropdetect
./ffmpeg42 -hide_banner -t 3 -y -i ./bt709_2.mp4   -filter_complex  cropdetect ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#cropdetect


datascope(){
name=datascope
./ffmpeg42 -hide_banner -t 3 -y -i ./3.mp4   -filter_complex  datascope=mode=color2 ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#datascope

dctdnoiz(){
name=dctdnoiz
./ffmpeg42 -hide_banner -t 3 -y -i ./3.mp4   -filter_complex  dctdnoiz ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#dctdnoiz

deband(){
name=deband
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  deband ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#deband

decimate(){
name=decimate
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  decimate ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#decimate

dedot(){
name=dedot
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  dedot ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#dedot

deflicker(){
name=deflicker
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  deflicker ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#deflicker

dejudder(){
name=dejudder
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  dejudder ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#dejudder

delogo(){
name=delogo
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  delogo=x=1:y=1:w=100:h=100 ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#delogo

derain(){
name=derain
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  derain ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#derain

deshake(){
name=deshake
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  deshake ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#deshake

despill(){
name=despill
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  despill ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#despill

detelecine(){
name=detelecine
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  detelecine ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#detelecine

drawbox(){
name=drawbox
./ffmpeg42 -hide_banner -t 3 -y -i ./1.mp4   -filter_complex  drawbox=x=10:y=10:w=100:h=100:[email protected]:t=fill ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#drawbox 

edgedetect(){
name=edgedetect
./ffmpeg42 -hide_banner -t 3 -y -i ./3.mp4   -filter_complex  "edgedetect=enable=between(t\,0\,2)"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#edgedetect

framepack(){
name=framepack
./ffmpeg42 -hide_banner  -y -i ./1_smpte240m_no_cp_trc.mp4  -i ./1_smpte240m.mp4  -filter_complex  "framepack=frameseq"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#framepack 

framestep(){
name=framestep
./ffmpeg42 -hide_banner  -y -i ./1_smpte240m_no_cp_trc.mp4   -filter_complex  "framestep=step=10"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#framestep

frei0r(){
name=frei0r
./ffmpeg42 -hide_banner  -y -i ./1_smpte240m_no_cp_trc.mp4   -filter_complex  "frei0r"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#frei0r


hysteresis(){
name=hysteresis
./ffmpeg42 -hide_banner -t 2 -y -i ./1_smpte240m_no_cp_trc.mp4 -t 2 -i ./1_smpte240m_no_cp_trc.mp4  -filter_complex  "hysteresis"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#hysteresis

inflate(){
name=inflate
./ffmpeg42 -hide_banner -t 2 -y -i ./1_smpte240m_no_cp_trc.mp4   -filter_complex  "inflate"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
} 
#inflate
 
loop(){
name=loop
./ffmpeg42 -hide_banner  -y -i ./1_smpte240m_no_cp_trc.mp4   -filter_complex  "loop=loop=30:start=60:size=3"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#loop

lut1d(){
name=lut1d
./ffmpeg42 -hide_banner -t 2 -y -i ./1_smpte240m_no_cp_trc.mp4   -filter_complex  "lut1d"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#lut1d

maskfun(){
name=maskfun
./ffmpeg42 -hide_banner -t 2 -y -i ./1_smpte240m_no_cp_trc.mp4   -filter_complex  "maskfun=low=20:high=230:planes=1"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#maskfun

mcdeint(){
name=mcdeint
./ffmpeg42 -hide_banner -t 2 -y -i ./1_smpte240m_no_cp_trc.mp4   -filter_complex  "mcdeint"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#mcdeint

median(){
name=median
./ffmpeg42 -hide_banner -t 2 -y -i ./1_smpte240m_no_cp_trc.mp4   -filter_complex  "median=radius=50"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#median

minterpolate(){
name=minterpolate
./ffmpeg42 -hide_banner  -y -i ./1_smpte240m_no_cp_trc.mp4   -filter_complex  "minterpolate=fps=60:mi_mode=mci"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#minterpolate

mix(){
name=mix
./ffmpeg42 -hide_banner  -y -t 2 -i ./1_smpte240m_no_cp_trc.mp4 -t 2 -i ./1.mp4  -filter_complex  "mix=weights=2 4 "  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#mix

negate(){
name=negate
./ffmpeg42 -hide_banner  -y  -t 2 -i ./6.mp4  -filter_complex  "negate=negate_alpha=1"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#negate


nlmeans(){
name=nlmeans
./ffmpeg42 -hide_banner  -y  -t 2 -i ./6.mp4  -filter_complex  "nlmeans"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#nlmeans

noformat(){
name=noformat
./ffmpeg42 -hide_banner  -y  -t 2 -i ./6.mp4  -filter_complex  "noformat=yuv420p"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#noformat

noise(){
name=noise
./ffmpeg42 -hide_banner  -y   -i ./6.mp4  -filter_complex  "loop=loop=30:start=1:size=1,noise=c0_seed=123457:c0_strength=50:c0f=t"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#noise

null(){
name=null
./ffmpeg42 -hide_banner  -y   -i ./6.mp4  -filter_complex  "null"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#null


oscilloscope(){
name=Oscilloscope
./ffmpeg42 -hide_banner  -y   -i ./33_709_pix480.mp4  -filter_complex  "oscilloscope"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#oscilloscope

overlay(){
name=overlay
./ffmpeg42 -hide_banner  -y   -i ./33_709_pix480.mp4 -i ./1.mp4 -filter_complex  "overlay"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#overlay


owdenoise(){
name=owdenoise
./ffmpeg42 -hide_banner  -y  -i ./1.mp4 -filter_complex  "owdenoise=depth=15:ls=500"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#owdenoise

pad(){
name=pad
./ffmpeg42 -hide_banner  -y  -i ./1.mp4 -filter_complex  "scale=-2:480,pad=w=1080:h=720:x=30:y=30:color=red"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#pad

palettegen(){
name=palettegen
./ffmpeg42 -hide_banner  -y  -i ./1.mp4 -filter_complex  "palettegen"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4 
}
#palettegen

paletteuse(){
name=paletteuse
./ffmpeg42 -hide_banner  -y  -i ./6.mp4  -i rec_palettegen.png -filter_complex  "paletteuse"  ${color} rec_${name}.gif
ffplay -hide_banner -i rec_${name}.gif
}
#paletteuse
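The two steps above (palettegen writes a palette image, paletteuse consumes it) can also run in a single command by splitting the stream. A minimal sketch in the same function style, assuming the `./ffmpeg42` binary and `./1.mp4` input used elsewhere in this script; `palettegif` is a hypothetical helper name:

```shell
# Single-pass GIF: build the palette on one branch of the split stream,
# apply it on the other. Invocation left commented, matching this script.
palettegif(){
name=palettegif
./ffmpeg42 -hide_banner -y -t 5 -i ./1.mp4 -filter_complex "[0:v]split[a][b];[a]palettegen[p];[b][p]paletteuse" rec_${name}.gif
ffplay -hide_banner -i rec_${name}.gif
}
#palettegif
```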

perspective(){
name=perspective
./ffmpeg42 -hide_banner  -y  -i ./6.mp4  -filter_complex  "perspective=x0=50:y0=50"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#perspective

phase(){
name=phase
./ffmpeg42 -hide_banner  -y  -i ./6.mp4  -filter_complex  "phase"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#phase

photosensitivity(){
name=photosensitivity
./ffmpeg42 -hide_banner  -y  -i ./6.mp4   -filter_complex  "photosensitivity"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}

#photosensitivity

pixdesctest(){
name=pixdesctest
./ffmpeg42 -hide_banner  -y  -i ./1.mp4   -filter_complex  "format=monow,pixdesctest"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#pixdesctest

pixscope(){
name=pixscope
./ffmpeg42 -hide_banner  -y  -i ./1.mp4   -filter_complex  "pixscope=x=40/720:y=90/1280:w=80:h=80:wx=1:wy=0"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#pixscope

prewitt(){
name=prewitt
./ffmpeg42 -hide_banner  -y  -i ./1.mp4   -filter_complex  "prewitt=planes=0xf"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#prewitt

pseudocolor(){
name=pseudocolor
./ffmpeg42 -hide_banner  -y  -i ./1.mp4   -filter_complex  pseudocolor="'if(between(val,10,200),20,-1)'"  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#pseudocolor

qp(){
name=qp
./ffmpeg42 -hide_banner  -y  -i ./1.mp4   -filter_complex  qp=100  ${color} rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#qp

setparams(){
name=setparams
./ffmpeg42 -hide_banner  -y  -i ./1.mp4   -filter_complex  setparams=field_mode=prog:range=tv:color_primaries=bt470m:color_trc=bt470m:colorspace=bt470bg   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#setparams

showpalette(){
name=showpalette
./ffmpeg42 -hide_banner  -y  -i ./1.mp4   -filter_complex  showpalette   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#showpalette

random(){
name=random
./ffmpeg42 -hide_banner  -y  -i ./1.mp4   -filter_complex  random   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#random

removegrain(){
name=removegrain
./ffmpeg42 -hide_banner  -y  -i ./1.mp4   -filter_complex  removegrain   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#removegrain

reverse(){
name=reverse
./ffmpeg42 -hide_banner  -t 5 -y  -i ./112334.mp4    -filter_complex  trim=end=5,reverse   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#reverse

roberts(){
name=roberts
./ffmpeg42 -hide_banner  -t 5 -y  -i ./112334.mp4    -filter_complex  roberts   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#roberts

shuffleplanes(){
name=shuffleplanes
./ffmpeg42 -hide_banner  -t 5 -y  -i ./112334.mp4    -filter_complex  shuffleplanes=1   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#shuffleplanes

signature(){
name=signature
./ffmpeg42 -hide_banner  -t 5 -y  -i ./112334.mp4    -filter_complex  signature=filename=signature.bin  -f null -
}
#signature
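signature can also take two inputs and report in the ffmpeg log whether one video (partially) matches the other. A sketch in the same style, reusing input files from elsewhere in this script; `signature_cmp` is a hypothetical helper name:

```shell
# Compare two videos; with detectmode=full the match result is printed
# in the ffmpeg log. Invocation left commented, matching this script.
signature_cmp(){
name=signature_cmp
./ffmpeg42 -hide_banner -t 5 -y -i ./112334.mp4 -i ./6.mp4 -filter_complex "[0:v][1:v]signature=nb_inputs=2:detectmode=full" -f null -
}
#signature_cmp
```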

smartblur(){
name=smartblur
./ffmpeg42 -hide_banner  -t 5 -y  -i ./112334.mp4    -filter_complex  smartblur=lr=5:ls=-1,smartblur=lr=5:ls=0.2   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#smartblur

sobel(){
name=sobel
./ffmpeg42 -hide_banner  -t 5 -y  -i ./112334.mp4    -filter_complex  sobel=planes=1   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#sobel

spp(){
name=spp
./ffmpeg42 -hide_banner  -t 5 -y  -i ./112334.mp4    -filter_complex  spp   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#spp

sr(){
name=sr
./ffmpeg42 -hide_banner  -t 5 -y  -i ./112334.mp4    -filter_complex  sr=dnn_backend=native:scale_factor=2   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#sr

super2xsai(){
name=super2xsai
./ffmpeg42 -hide_banner  -t 5 -y  -i ./112334.mp4    -filter_complex  super2xsai   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#super2xsai

swaprect(){
name=swaprect
./ffmpeg42 -hide_banner  -t 5 -y  -i ./1.mp4    -filter_complex  swaprect=w=20:h=40:x1=120:y1=240:x2=150:y2=320   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#swaprect

swapuv(){
name=swapuv
./ffmpeg42 -hide_banner  -t 5 -y  -i ./1.mp4    -filter_complex  swapuv   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#swapuv

telecine(){
name=telecine
./ffmpeg42 -hide_banner  -t 5 -y  -i ./1.mp4    -filter_complex  telecine=first_field=t:pattern=24   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#telecine

threshold(){
name=threshold
./ffmpeg42 -hide_banner  -t 5 -y  -i ./1.mp4    -filter_complex  threshold   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#threshold

thumbnail(){
name=thumbnail
./ffmpeg42 -hide_banner  -t 5 -y  -i ./1.mp4    -filter_complex  thumbnail=20   rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#thumbnail

tile(){
name=tile
ffmpeg  -i ./1.mp4  -vf tile=3x2:nb_frames=5:padding=7:margin=2  -an -vsync 0 keyframes%03d.png
#ffplay -hide_banner -i rec_${name}.mp4
}
#tile

tinterlace(){
name=tinterlace
ffmpeg  -y -i ./1.mp4  -filter_complex  tinterlace=0  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#tinterlace

tmix(){
name=tmix
ffmpeg  -y -i ./1.mp4  -filter_complex  tmix=4  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#tmix

tpad(){
name=tpad
ffmpeg  -y -i ./1.mp4  -filter_complex  tpad=10  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#tpad

vfrdet(){
name=vfrdet
ffmpeg  -y -i ./1.mp4  -filter_complex  vfrdet  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#vfrdet

vignette(){
name=vignette
ffmpeg  -y -i ./1.mp4  -filter_complex  vignette  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#vignette

vmafmotion(){
name=vmafmotion
ffmpeg  -y -i ./1.mp4  -filter_complex  vmafmotion  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#vmafmotion

vstack(){
name=vstack
ffmpeg  -y -i ./1.mp4 -i 6.mp4  -filter_complex  vstack  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#vstack
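vstack stacks its inputs vertically (hstack is the horizontal counterpart); for a grid, xstack with an explicit layout does it in one filter. A sketch that reuses one input four times just to show the layout syntax (all inputs must share one resolution); `grid2x2` is a hypothetical helper name:

```shell
# 2x2 grid via xstack: layout entries give each input's x_y offset,
# expressed in terms of the other inputs' widths/heights (w0, h0, ...).
grid2x2(){
name=grid2x2
ffmpeg -y -i ./1.mp4 -i ./1.mp4 -i ./1.mp4 -i ./1.mp4 -filter_complex "xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0" -an rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#grid2x2
```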

waveform(){
name=waveform
ffmpeg  -y -i ./1.mp4   -filter_complex  waveform  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#waveform

xbr(){
name=xbr
ffmpeg  -y -i ./6.mp4   -filter_complex  xbr  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#xbr

xmedian(){
name=xmedian
ffmpeg  -y -i ./6.mp4 -i ./6.mp4 -i ./6.mp4  -filter_complex  xmedian=inputs=3  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#xmedian

zoompan(){
name=zoompan
ffmpeg  -y -i ./6.mp4   -filter_complex  zoompan  -an  rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#zoompan
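A bare zoompan mostly clones frames; the effect comes from its expressions. A sketch of a slow, centered zoom up to 1.5x, with d=1 keeping one output frame per input frame; `zoompan_center` is a hypothetical helper name:

```shell
# z grows by 0.0015 per frame up to 1.5; x/y keep the zoom window
# centered on the input. Invocation left commented, matching this script.
zoompan_center(){
name=zoompan_center
ffmpeg -y -i ./6.mp4 -filter_complex "zoompan=z='min(zoom+0.0015,1.5)':d=1:x='iw/2-(iw/zoom/2)':y='ih/2-(ih/zoom/2)'" -an rec_${name}.mp4
ffplay -hide_banner -i rec_${name}.mp4
}
#zoompan_center
```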