OpenCV motion tracking can be used to capture moving objects or living things. Applied to a passive camera, it lets you record only while motion is occurring, saving valuable storage space.
A developer abroad used a Raspberry Pi camera and Python to build a simple motion-capture camera for catching coworkers who stole beer from his fridge during work hours. The original code contained some garbled escape characters and had a few problems under the 3.0 API; since findContours returns a different number of values in different versions, I made small modifications so that it runs cleanly on a PC's built-in webcam. Without infrared or any range-finding hardware, you need to stand some distance from the camera for detection and display to work well.
There are many core algorithms for motion detection: some complex, some simple, some accurate, some crude. The field changes every day as it merges with machine learning and computer vision, with talented mathematicians, physicists, and programmers joining constantly. For a more detailed explanation of the algorithm used here, see http://python.jobbole.com/81593/ (they prohibit reposting). The principle is as follows:
The background of a video stream is, most of the time, static across consecutive frames. So if we can build a background model, we can watch for significant changes against it. When a significant change occurs, we can detect it, and such changes usually correspond to motion in the video.
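Before the OpenCV version below, the principle can be sketched with plain NumPy: subtract the background frame from the current frame and threshold the absolute difference. This is a minimal illustration with toy 4x4 "frames", not part of the original script; the threshold value 25 matches the one used later.

```python
import numpy as np

def motion_mask(background, frame, thresh=25):
    """Return a boolean mask of pixels that changed significantly."""
    # Cast to a signed type so the subtraction cannot wrap around.
    delta = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return delta > thresh

bg = np.zeros((4, 4), dtype=np.uint8)   # static background: all black
cur = bg.copy()
cur[1, 1] = 200                         # one pixel lights up, simulating motion
mask = motion_mask(bg, cur)
print(int(mask.sum()))                  # → 1  (exactly one changed pixel)
```

In the real pipeline the same idea runs on grayscale, blurred camera frames, and the thresholded mask is cleaned up with dilation before contours are extracted.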
Here is the modified Python code. If you are on the 2.x API, you need to change the line below, because 2.x returns two values: remove the leading underscore and comma. No change is needed for 3.x:
(_, cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
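To avoid version-specific edits entirely, the return value can be normalized with a small helper; this is the same idea as `imutils.grab_contours`. A sketch, not part of the original script:

```python
def grab_contours(result):
    """Extract the contour list from any cv2.findContours return value."""
    if len(result) == 2:   # OpenCV 2.x / 4.x: (contours, hierarchy)
        return result[0]
    if len(result) == 3:   # OpenCV 3.x: (image, contours, hierarchy)
        return result[1]
    raise ValueError("unexpected findContours return value")

# usage:
# cnts = grab_contours(cv2.findContours(thresh.copy(),
#                      cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE))
```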
# import the necessary packages
import argparse
import datetime
import imutils
import time
import cv2

# build the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the video file")
ap.add_argument("-a", "--min-area", type=int, default=500, help="minimum area size")
args = vars(ap.parse_args())

# if the video argument is None, read from the webcam
if args.get("video", None) is None:
    camera = cv2.VideoCapture(0)
    time.sleep(0.25)
# otherwise, read from the given video file
else:
    camera = cv2.VideoCapture(args["video"])

# initialize the first frame of the video stream
firstFrame = None

# loop over the frames of the video
while True:
    # grab the current frame and initialize the occupied/unoccupied text
    (grabbed, frame) = camera.read()
    text = "Unoccupied"

    # if a frame could not be grabbed, we have reached the end of the video
    if not grabbed:
        break

    # resize the frame, convert it to grayscale, and apply a Gaussian blur
    frame = imutils.resize(frame, width=500)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)

    # if the first frame is None, initialize it
    if firstFrame is None:
        firstFrame = gray
        continue

    # compute the difference between the current frame and the first frame
    frameDelta = cv2.absdiff(firstFrame, gray)
    thresh = cv2.threshold(frameDelta, 25, 255, cv2.THRESH_BINARY)[1]

    # dilate the thresholded image to fill in holes, then find contours on it
    thresh = cv2.dilate(thresh, None, iterations=2)
    (_, cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)

    # loop over the contours
    for c in cnts:
        # if the contour is too small, ignore it
        if cv2.contourArea(c) < args["min_area"]:
            continue

        # compute the bounding box for the contour, draw it on the frame,
        # and update the text
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        text = "Occupied"

    # draw the text and the timestamp on the frame
    cv2.putText(frame, "Room Status: {}".format(text), (10, 20),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
    cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
                (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

    # show the frames and record whether the user presses a key
    cv2.imshow("Security Feed", frame)
    cv2.imshow("Thresh", thresh)
    cv2.imshow("Frame Delta", frameDelta)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key is pressed, break from the loop
    if key == ord("q"):
        break

# release the camera and close any open windows
camera.release()
cv2.destroyAllWindows()
Below is similar Java code; the algorithm is essentially a direct translation of the Python:
import java.awt.EventQueue;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.awt.image.WritableRaster;
import java.util.ArrayList;
import java.util.List;
import javax.swing.ImageIcon;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import org.opencv.videoio.VideoCapture;

public class CameraBasic {
    static {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    }

    private JFrame frame;
    static JLabel label;
    static int flag = 0;

    /**
     * Launch the application.
     */
    public static void main(String[] args) {
        EventQueue.invokeLater(new Runnable() {
            public void run() {
                try {
                    CameraBasic window = new CameraBasic();
                    window.frame.setVisible(true);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        VideoCapture camera = new VideoCapture();
        camera.open(0);
        if (!camera.isOpened()) {
            System.out.println("Camera Error");
        } else {
            Mat frame = new Mat();
            Mat firstFrame = null;
            while (flag == 0) {
                camera.read(frame);
                // detect motion: resize, grayscale, blur
                Imgproc.resize(frame, frame, new Size(500, 500));
                Mat gray = new Mat();
                Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);
                Imgproc.GaussianBlur(gray, gray, new Size(21, 21), 0);
                if (firstFrame == null) {
                    firstFrame = gray;
                    continue;
                }
                Mat frameDelta = new Mat();
                Core.absdiff(firstFrame, gray, frameDelta);
                Mat thresh = new Mat();
                Imgproc.threshold(frameDelta, thresh, 25, 255, Imgproc.THRESH_BINARY);
                // dilate first to fill in holes, then find the contours
                Imgproc.dilate(thresh, thresh, new Mat(), new Point(-1, -1), 2);
                List<MatOfPoint> contours = new ArrayList<>();
                Mat hierarchy = new Mat();
                Imgproc.findContours(thresh, contours, hierarchy, Imgproc.RETR_EXTERNAL,
                        Imgproc.CHAIN_APPROX_SIMPLE);
                for (MatOfPoint mf : contours) {
                    if (Imgproc.contourArea(mf) < 2000) {
                        continue;
                    }
                    Imgproc.drawContours(frame, contours, contours.indexOf(mf), new Scalar(0, 255, 255));
                    Imgproc.fillConvexPoly(frame, mf, new Scalar(0, 255, 255));
                    Rect r = Imgproc.boundingRect(mf);
                    Imgproc.rectangle(frame, r.tl(), r.br(), new Scalar(0, 255, 0), 2);
                }
                firstFrame = gray;
                label.setIcon(new ImageIcon(matToBufferedImage(frame)));
                try {
                    Thread.sleep(40);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    /**
     * Create the application.
     */
    public CameraBasic() {
        initialize();
    }

    /**
     * Initialize the contents of the frame.
     */
    private void initialize() {
        frame = new JFrame();
        frame.setBounds(100, 100, 800, 450);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.getContentPane().setLayout(null);
        JButton btnNewButton = new JButton("\u62CD\u7167"); // "Take photo"
        btnNewButton.addMouseListener(new MouseAdapter() {
            @Override
            public void mouseClicked(MouseEvent arg0) {
                flag = 1;
            }
        });
        btnNewButton.setBounds(33, 13, 113, 27);
        frame.getContentPane().add(btnNewButton);
        label = new JLabel("");
        label.setBounds(0, 0, 800, 450);
        frame.getContentPane().add(label);
    }

    public static BufferedImage matToBufferedImage(Mat mat) {
        if (mat.height() > 0 && mat.width() > 0) {
            BufferedImage image = new BufferedImage(mat.width(), mat.height(), BufferedImage.TYPE_3BYTE_BGR);
            WritableRaster raster = image.getRaster();
            DataBufferByte dataBuffer = (DataBufferByte) raster.getDataBuffer();
            byte[] data = dataBuffer.getData();
            mat.get(0, 0, data);
            return image;
        }
        return null;
    }
}
OpenCV actually ships with several motion-tracking algorithms that are both more accurate and simpler to use. Below is a Java implementation that calls the BackgroundSubtractorMOG2 algorithm. It is very simple; I also added a small effect that tints any person it captures.
package javaCv;
import java.awt.EventQueue;
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.awt.image.WritableRaster;
import java.util.ArrayList;
import java.util.List;
import javax.swing.ImageIcon;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.video.BackgroundSubtractorMOG2;
import org.opencv.video.Video;
import org.opencv.videoio.VideoCapture;
public class CameraBasic2 {
    static {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
    }

    private JFrame frame;
    static JLabel label;
    static int flag = 0;

    /**
     * Launch the application.
     */
    public static void main(String[] args) {
        EventQueue.invokeLater(new Runnable() {
            public void run() {
                try {
                    CameraBasic2 window = new CameraBasic2();
                    window.frame.setVisible(true);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        VideoCapture camera = new VideoCapture();
        camera.open(0);
        if (!camera.isOpened()) {
            System.out.println("Camera Error");
        } else {
            Mat frame = new Mat();
            BackgroundSubtractorMOG2 bs = Video.createBackgroundSubtractorMOG2();
            bs.setHistory(100);
            Mat tmp = new Mat();
            while (flag == 0) {
                camera.read(frame);
                // update the background model and get the foreground mask
                bs.apply(frame, tmp, 0.1f);
                // dilate the mask to fill in holes, then find the contours
                Imgproc.dilate(tmp, tmp, new Mat(), new Point(-1, -1), 2);
                List<MatOfPoint> contours = new ArrayList<>();
                Mat hierarchy = new Mat();
                Imgproc.findContours(tmp, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
                for (MatOfPoint mf : contours) {
                    if (Imgproc.contourArea(mf) < 1000) {
                        continue;
                    }
                    // Imgproc.drawContours(frame, contours,
                    //         contours.indexOf(mf), new Scalar(0, 255, 255));
                    Imgproc.fillConvexPoly(frame, mf, new Scalar(0, 255, 255));
                    Rect r = Imgproc.boundingRect(mf);
                    Imgproc.rectangle(frame, r.tl(), r.br(), new Scalar(0, 255, 0), 2);
                    // Imgcodecs.imwrite("E:\\work\\qqq\\camera2\\" + "img" + System.currentTimeMillis() + ".jpg", frame);
                }
                label.setIcon(new ImageIcon(matToBufferedImage(frame)));
                try {
                    Thread.sleep(40);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    }
    /**
     * Create the application.
     */
    public CameraBasic2() {
        initialize();
    }

    /**
     * Initialize the contents of the frame.
     */
    private void initialize() {
        frame = new JFrame();
        frame.setBounds(100, 100, 800, 450);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.getContentPane().setLayout(null);
        JButton btnNewButton = new JButton("\u62CD\u7167"); // "Take photo"
        btnNewButton.addMouseListener(new MouseAdapter() {
            @Override
            public void mouseClicked(MouseEvent arg0) {
                flag = 1;
            }
        });
        btnNewButton.setBounds(33, 13, 113, 27);
        frame.getContentPane().add(btnNewButton);
        label = new JLabel("");
        label.setBounds(0, 0, 800, 450);
        frame.getContentPane().add(label);
    }

    public static BufferedImage matToBufferedImage(Mat mat) {
        if (mat.height() > 0 && mat.width() > 0) {
            BufferedImage image = new BufferedImage(mat.width(), mat.height(), BufferedImage.TYPE_3BYTE_BGR);
            WritableRaster raster = image.getRaster();
            DataBufferByte dataBuffer = (DataBufferByte) raster.getDataBuffer();
            byte[] data = dataBuffer.getData();
            mat.get(0, 0, data);
            return image;
        }
        return null;
    }
}
Motion tracking is very useful for home cameras: it saves storage space as well as the time spent reviewing footage. This kind of project is genuinely fun. Once you can capture moving objects, you can raise an alarm when someone intrudes, or add face recognition with your friends enrolled, so that when you are away you know exactly who has come by; you can avoid the people you would rather not see and go meet the ones you want to. You could also add gesture recognition to the camera to switch on the air conditioner or TV, making full use of the hardware.
The result is shown below; I applied a tint effect to the person:
References:
http://python.jobbole.com/81593/
http://blog.csdn.net/jjddss/article/details/72674704