Image Matting: A Summary of Code and Algorithm Results

This post is reproduced from: http://blog.leanote.com/post/[email protected]/Image-Matting. The author collects links to most of the matting code out there, with fairly detailed and systematic notes; sincere thanks to the author. The original post follows:

Xiao Zhenzhong's blog: http://39.108.216.13:8090/display/~xiaozhenzhong/Image-Matting+and+Background+Blur

A walkthrough of the closed-form matting algorithm: http://blog.csdn.net/edesignerj/article/details/53349663    (it uses an input image and a scribble image; the scribble image can be made in Photoshop with brush hardness set to 100)

                      Paper: A. Levin, D. Lischinski and Y. Weiss. A Closed-Form Solution to Natural Image Matting. IEEE Trans. Pattern Analysis and Machine Intelligence, 2008.

                      A. Levin's homepage: http://webee.technion.ac.il/people/anat.levin/   (original code: MATLAB)

                      Python version: https://github.com/MarcoForte/closed-form-matting (Python 3.5+; scipy, numpy, matplotlib, sklearn, opencv-python)

                      Enter the folder and run: python closed_form_matting.py (7.5 s for this test image)

                

                      C++ version: https://github.com/Rnandani/Natural-image-matting

Background-blur algorithm and basics of bokeh: http://blog.csdn.net/edesignerj/article/details/53349663

Classic Bayesian matting explained: http://blog.csdn.net/baimafujinji/article/details/72863106?locationNum=2&fps=1    (MATLAB)

       This article modifies the code shipped with Michael Rubinstein's original implementation: it drops the GUI and keeps only the part that computes the alpha channel from the input image and trimap.

                       Paper: A Bayesian Approach to Digital Matting. CVPR, 2001.

                       Homepage: http://grail.cs.washington.edu/projects/digital-matting/image-matting/

                       Original code: the following describes a run of Michael Rubinstein's code; it still cannot handle the troll image.


                      (Strictly speaking it can handle the troll image; the image is just so large that a run would take a few hundred hours.)

   

                       Python version: https://github.com/MarcoForte/bayesian-matting (Python 3.5+)  (a Python port of the classic Bayesian algorithm)

          Runs fine on the gandolf image, but on the troll image it appears to enter an infinite loop. (It is probably not a true infinite loop, just an impractically long running time.)
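As a reminder of what the Bayesian solver iterates: with the foreground and background estimates F and B held fixed, the alpha update has a closed form. A minimal sketch (the function name and the fallback value for F = B are my own, not from the repo):

```python
import numpy as np

def alpha_given_fb(c, f, b):
    """Alpha estimate for observed color c with foreground f and
    background b held fixed:
    alpha = ((c - b) . (f - b)) / ||f - b||^2, clipped to [0, 1]."""
    c, f, b = (np.asarray(x, dtype=float) for x in (c, f, b))
    denom = np.dot(f - b, f - b)
    if denom < 1e-12:          # f == b: alpha is ill-defined, pick 0.5
        return 0.5
    return float(np.clip(np.dot(c - b, f - b) / denom, 0.0, 1.0))

# a pixel halfway between pure background and pure foreground
print(alpha_given_fb([0.5, 0.5, 0.5], [1, 1, 1], [0, 0, 0]))  # 0.5
```

The full algorithm alternates this step with re-estimating F and B from Gaussian color clusters, which is where the long running times on large images come from.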

  

Trent's column: http://blog.csdn.net/Trent1985

Background-blur Photoshop tutorial: http://www.ps-xxw.cn/tupianchuli/5930.html

Google AR project Tango: http://blog.csdn.net/a369414641/article/details/53437674

Simple background blur (circular, horizontal, vertical) with Java code: http://blog.csdn.net/a369414641/article/details/53437674

Code:

KNN matting: https://github.com/dingzeyuli/knn-matting  (Linux, MATLAB) (CVPR 2012)

                        On Linux, run install.sh to fetch the dependencies, then run run_demo.m. Test image GT04.png (800*563): time < 5 s.

                        (MATLAB 2016b)  (on test images such as 0103.png, average time about 2.4 s)

   

Paper: Shared Sampling for Real-Time Alpha Matting (2010) (MATLAB)

                Original code: http://inf.ufrgs.br/~eslgastal/SharedMatting/

                 (CUDA 3.2 + Linux 64-bit + GPU capability > 1.0 + Qt 4 + Boost 1.4)

                Run the prebuilt executable on Linux; MATLAB is used to refine the result. (The author only provides a binary, which apparently cannot be modified.)

                Enter the folder and run: ./SharedMatting -i GT04.png -t GT04_trimap.png -g GT04_gt.png -b moon.jpg (real time; the optimization step takes almost 9 seconds)

                 Or run ./SharedMatting and pick the input image and trimap by hand.

                Run: time ./SharedMatting -i GT04.png -t GT04_trimap.png -a GT04_ALPHA.png → real: 0.174 s

                 A modified C++ version of the source: https://github.com/np-csu/AlphaMatting      (C++ + OpenCV port of the paper); could not get it to run, some files appear to be missing.

                 Results of the author's prebuilt binary (the original post shows screenshots of the matte before and after optimization).

                The optimization step uses the getLaplacian function from the closed-form matting source.

     Alpha matting on macOS: https://github.com/volvet/AlphaMatting    (macOS, C++)  (not tried)

        (Blog on real-time matting: http://blog.csdn.net/volvet/article/details/51713766?locationNum=1&fps=1)

              C++ runtime: about 1.9 s for a 640*480 test image  (environment: CLion + Linux)

Global matting: https://github.com/atilimcetin/global-matting  (C++, Windows)

     Paper: He, Kaiming, et al. "A global sampling method for alpha matting." CVPR 2011, pages 2049-2056.

     Build a Visual Studio 2015 project with OpenCV 3.1 and download the guided-filter code. A Debug-mode run took 1501 s; in Release mode, a 640*480 portrait test image takes about 700 ms.
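The guided filter this repo depends on (He et al.) is simple enough to sketch. Below is a minimal gray-scale version with a brute-force box filter, not the repo's O(1) implementation; function names are mine:

```python
import numpy as np

def box(img, r):
    """Brute-force mean filter over an edge-clipped (2r+1)^2 window."""
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = img[max(i - r, 0):i + r + 1,
                            max(j - r, 0):j + r + 1].mean()
    return out

def guided_filter(I, p, r=2, eps=1e-4):
    """Gray-scale guided filter: smooth p using I as the guide.
    Locally fits q = a*I + b in each window, then averages a, b."""
    mean_I, mean_p = box(I, r), box(p, r)
    var_I = box(I * I, r) - mean_I * mean_I
    cov_Ip = box(I * p, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)          # eps regularizes flat regions
    b = mean_p - a * mean_I
    return box(a, r) * I + box(b, r)
```

With a small eps, filtering an image by itself returns the image almost unchanged (a ≈ 1, b ≈ 0), which is a handy sanity check; in the matting pipeline the guide is the color image and p is the rough alpha.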

    

Deep Image Matting: https://github.com/Joker316701882/Deep-Image-Matting (Python)

                        (TensorFlow implementation of the paper "Deep Image Matting")

                         Paper: Ning Xu, Brian Price, Scott Cohen, Thomas Huang. Deep Image Matting. 2017.

gSLICr: real-time super-pixel segmentation: https://github.com/carlren/gSLICr (C++; Ubuntu 14.04; Win8 + Visual Studio) (2015)

                    The camera would not open. (An Astra Pro can be opened as an ordinary external webcam, whereas an Astra only works through the OpenNI driver.)

                                                 

Robust matting: https://github.com/wangchuan/RobustMatting (OpenCV 3.2, Eigen, VS2015) (2017)

                         Download the source, create a project, add the required Eigen libraries to the resource files, and build the .exe in Release mode.

                         Run: Robust_Matting.exe GT04-image.png GT04-trimap.png troll_alpha.png   (about 58 s per image)

                        (On test images such as 0103.png, average time about 3 s.)

  

 

                   Reference: J. Wang and M. Cohen. Optimized color sampling for robust matting. CVPR, 2007.

Poisson matting: https://github.com/MarcoForte/poisson-matting (Python 3.5 or 2.7, Windows)

                        J. Sun, J. Jia, C.-K. Tang, H.-Y. Shum. Poisson matting. ACM Trans. Graph. 23, 3 (August 2004), 315-321.

                      Install the dependencies (scipy, numpy, matplotlib, opencv, numba, pillow) and run python poisson_matting.py; about 0.58 s per image.
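The core of Poisson matting is solving a Poisson equation: the alpha gradient is approximated by the image gradient divided by F − B, and alpha is recovered from its divergence under trimap boundary conditions. A minimal 1-D sketch of that idea, not the repo's 2-D solver; the function name and the Jacobi iteration count are my own:

```python
import numpy as np

def poisson_matte_1d(I, fb_diff, alpha0, alpha1, iters=2000):
    """1-D Poisson matting sketch: solve alpha'' = (I' / (F - B))'
    with fixed endpoint alphas, by Jacobi iteration."""
    n = len(I)
    g = np.gradient(np.asarray(I, dtype=float)) / fb_diff  # approx. alpha'
    div = np.gradient(g)                                   # approx. alpha''
    alpha = np.linspace(alpha0, alpha1, n)  # endpoints act as boundary values
    for _ in range(iters):
        # Jacobi update for the discrete Poisson equation
        alpha[1:-1] = 0.5 * (alpha[:-2] + alpha[2:] - div[1:-1])
    return np.clip(alpha, 0, 1)
```

For a linear intensity ramp from background (B = 0) to foreground (F = 1), the recovered alpha is the same linear ramp, which matches the intuition that alpha follows the image where F − B is constant.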

  

Mishima matting: https://github.com/MarcoForte/mishima-matting (Python 3.5) (2017)

                     Dependencies: scipy, numpy, matplotlib. Run python mishima_matting.py; runtime for an image: 82.47 s (no numba acceleration).

 

Automatic trimap generation, auto-portrait-matting: https://github.com/aromazyl/auto-portrait-matting (HOG + SVM + GrabCut for automatic trimap generation; Linux)

                    Download the source and build. Error: test.cc:8:25: fatal error: gtest/gtest.h: No such file or directory. The header really is missing:

                    apt-cache search gtest → install libgtest-dev

                    Run make → errors: cannot find -lbopencv-imgcodecs -lopencv_videoio -lopencv_shape -lopen_gtest

                    Fixing gtest:


 
  sudo apt-get install cmake libgtest-dev
  cd /usr/src/gtest
  sudo cmake CMakeLists.txt
  sudo make
  # copy or symlink libgtest.a and libgtest_main.a to your /usr/lib folder
  sudo cp *.a /usr/lib

             Or create a symlink: https://stackoverflow.com/questions/21201668/eclpse-cdt-gtest-setup-errorcannot-find-lgtest

              make then succeeds; running the executable pops up a window, but it is unclear how to use it.

Clothing recognition: https://github.com/LangYH/ClothingRecognition

              Qt build fails; the current version is 4.8 and needs upgrading to 5.4 (only the 5.4 or 5.6 libraries contain files this project calls).

GrabCut with RGBD (four channels): https://github.com/Morde-kaiser/GrabCut-RGBD (improved OpenCV GrabCut; Windows)

              An improvement on OpenCV's GrabCut that fuses depth information into the segmentation; the input is a 4-channel matrix whose fourth channel is the depth map.

              GrabCut explained: http://blog.csdn.net/zouxy09/article/details/8534954

ID-photo conversion module (portrait-master) (GrabCut + matting): https://github.com/EthanLauAL/portrait (Windows, VS C++; Linux) (2014)

               Running on Linux, the OpenCV libraries are not found:   main_camera.cc:(.text+0x19f): undefined reference to 'cv::VideoCapture::VideoCapture(int)'


 
  orbbec@orbbec:/usr/lib/pkgconfig$ pkg-config --cflags opencv-2.4.13
  -I/usr/local/include/opencv -I/usr/local/include
  orbbec@orbbec:/usr/lib/pkgconfig$ pkg-config --cflags opencv-3.3.1.pc
  -I/usr/local/OpenCV3/include/opencv -I/usr/local/OpenCV3/include

                 So pkg-config does find the package, yet the undefined references persist. The real problem is the Makefile: it passes the libraries to the linker in the wrong order. Linking each subdirectory against the libraries it needs fixes the build and produces the executable.
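This link-order failure is common with hand-written Makefiles: GNU ld resolves symbols left to right, so `-l` flags must come after the object files that reference them. A hedged build-command sketch (the object name is illustrative):

```shell
# Wrong: libraries listed before the objects that need them fails with
# "undefined reference to cv::VideoCapture::VideoCapture(int)":
#   g++ $(pkg-config --libs opencv) main_camera.o -o main_camera

# Right: objects first, libraries after, so the linker can resolve
# the OpenCV symbols the objects reference:
g++ main_camera.o -o main_camera $(pkg-config --libs opencv)
```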

              Result: plug in a camera, grab an arbitrary frame, and convert it into an ID photo (works poorly against complex backgrounds).

                   

Automatic Portrait Segmentation for Image Stylization:

              Paper and code: http://xiaoyongshen.me/webpage_portrait/index.html

               Paper: Automatic Portrait Segmentation for Image Stylization, CVPR 2016 (Caffe, MATLAB, matio 1.5.11)

              Download matio: https://github.com/tbeu/matio#21-dependencies  (install fails)

               Error:


 
  /media/orbbec/工作啊!!/PROJECT/Image_Matting/Automatic_Portrait_Segmentation/caffe-portraitseg/include/caffe/util/cudnn.hpp(60): error: identifier "cudnnTensor4dDescriptor_t" is undefined
  15 errors detected in the compilation of "/tmp/tmpxft_00003587_00000000-7_conv_layer.cpp1.ii".
  /media/orbbec/工作啊!!/PROJECT/Image_Matting/Automatic_Portrait_Segmentation/caffe-portraitseg/build/src/caffe/CMakeFiles/cuda_compile.dir/layers/./cuda_compile_generated_conv_layer.cu.o
  src/caffe/CMakeFiles/caffe.dir/build.make:552: recipe for target 'src/caffe/CMakeFiles/cuda_compile.dir/layers/cuda_compile_generated_conv_layer.cu.o' failed

              On the cuDNN version issue: https://github.com/BVLC/caffe/issues/1792

                 cuDNN downloads: https://developer.nvidia.com/rdp/cudnn-archive (wrong version installed; version 1 should be used)

                 TensorFlow version: https://github.com/PetroWu/AutoPortraitMatting

Research:  Image_matting_based_on_alpha_value:

     Paper homepage: http://www.cs.unc.edu/~lguan/Research.files/Research.htm#IM (MATLAB)

                   Paper: Li Guan, "Algorithms of Object Extraction in Digital Images based on Alpha value", Zhejiang University, Hangzhou, Jun. 2004.

               README:

                This is a GUI demo for four image matting algorithms.

     Four algorithms are in four separate .m files and are easy to extract for specific use. 

                Usage:    You need to have MATLAB 6.0 or above. (I tested the code in 6.0 and 7.0)

                               Just run "Matting.m" and the help window in the GUI will lead you step by step.

                Running the GUI fails: Error while evaluating UIControl Callback

Deep Automatic Portrait Matting, by the same author as Automatic Portrait Segmentation for Image Stylization:

http://xiaoyongshen.me/webpages/webpage_automatting/

deep auto = auto portrait +softmax
 

A short summary of image-matting methods:
Existing matting methods can be categorized as propagation-based or sampling-based.

Propagation-based methods treat the problem as interpolating the unknown alpha values from the known regions.

The interpolation can be done by solving an affinity matrix (Poisson matting; random walks for interactive alpha matting; the closed-form solution; new appearance models; fast matting using large-kernel matting Laplacian matrices),

by optimizing Markov Random Fields [18] (an iterative optimization approach for unified image segmentation and matting),

or by computing geodesic distance [2]. 

These methods mainly rely on the image’s continuity to estimate the alpha matte, and do not explicitly account for the foreground and background colors. They have shown success in many cases, but may fail when the foreground has long and thin structures or holes. Their performance can be improved when combined with sampling-based methods .
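The affinity-matrix interpolation these propagation methods share reduces to one sparse linear solve: pin the scribbled pixels and let alpha diffuse along image affinities. A toy dense sketch on a 5-pixel chain (the unit weights and λ are illustrative, not from any of the cited papers):

```python
import numpy as np

# 5-pixel chain: neighbors have affinity 1; pixel 0 is scribbled
# background (alpha = 0) and pixel 4 scribbled foreground (alpha = 1).
n, lam = 5, 100.0
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(1)) - W                  # graph Laplacian of affinities
known = np.array([1, 0, 0, 0, 1], float)   # scribble mask
s = np.array([0, 0, 0, 0, 1], float)       # scribbled alpha values
D = np.diag(known)

# Solve (L + lam*D) alpha = lam*D s: large lam pins the scribbles,
# and L makes the interior alphas harmonic (averages of neighbors).
alpha = np.linalg.solve(L + lam * D, lam * D @ s)
print(alpha.round(2))   # roughly [0, 0.25, 0.5, 0.75, 1]
```

On a real image the Laplacian is sparse and its weights come from local color statistics (e.g. the matting Laplacian of the closed-form method), but the structure of the solve is the same.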
 

Sampling-based methods first estimate the foreground and background colors and then compute the alpha matte.
Earlier methods like Ruzon and Tomasi's work [alpha estimation] and Bayesian matting [6] fit a parametric model to the color distributions. But they are less valid when the image does not satisfy the model.

Recent sampling-based methods [robust matting, improving color-model, shared matting] are mostly non-parametric: they pick out color samples from the known regions to estimate the unknown alpha values. These methods perform well provided that the true foreground and background colors are in the sample set. However, the true foreground/background colors are not always covered, because these methods only collect samples near each unknown pixel, and the number of samples is rather limited.
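The non-parametric idea can be sketched as a brute-force search over (F, B) sample pairs: each pair implies an alpha by projection, and the pair whose blend best reconstructs the observed color wins. The function name and the toy colors below are my own:

```python
import numpy as np

def best_pair_alpha(c, fg_samples, bg_samples):
    """Sampling-based estimate: try every (F, B) pair, compute the
    projected alpha, and keep the pair whose blend F*a + B*(1-a)
    best reconstructs the observed color c."""
    c = np.asarray(c, float)
    best_alpha, best_err = None, np.inf
    for f in np.asarray(fg_samples, float):
        for b in np.asarray(bg_samples, float):
            d = f - b
            a = np.clip(np.dot(c - b, d) / max(np.dot(d, d), 1e-12), 0, 1)
            err = np.linalg.norm(c - (a * f + (1 - a) * b))
            if err < best_err:
                best_alpha, best_err = float(a), err
    return best_alpha

fg = [[1, 0, 0], [1, 1, 1]]   # candidate foreground colors
bg = [[0, 0, 1], [0, 0, 0]]   # candidate background colors
print(best_pair_alpha([0.5, 0, 0.5], fg, bg))  # 0.5: red over blue
```

Real methods like robust matting and shared matting add confidence weights and smarter sample gathering, but this reconstruction-error criterion is the common core, and it fails in exactly the way the paragraph above describes when the true F or B is not among the samples.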

 
