[Hands-on] License Plate Detection and Recognition with an OpenCV SVM (Part 2)

This installment continues the SVM hands-on project (license plate detection and recognition) and shares a few useful tips along the way.

A quick recap: the previous installment covered training the OpenCV SVM model; this one continues with the recognition process.

(Flowchart image: the original link is broken; it showed the overall detection-and-recognition pipeline.)

The flowchart is a classic and quite intuitive.

First, the tip promised last time (the illustration link in the original post is broken):

How to display Chinese text with OpenCV

I use the PIL approach. A quick walkthrough:

1. The font file simhei.ttf needs to be downloaded; then point ImageFont.truetype("./simhei.ttf", 20, encoding="utf-8") at its path. Put the font file somewhere the script can find it, e.g. in the same directory as the code.

2. Encode the text as UTF-8, otherwise the Chinese characters render as rectangles. (Under Python 2 that meant str1 = str1.decode('utf-8'); Python 3 strings are already Unicode.)

3. The code:

from PIL import Image, ImageDraw, ImageFont
import cv2
import numpy as np

# Read the image with cv2 (the file path must not contain Chinese characters)
img = cv2.imread(r'C:\Users\acer\Desktop\black.jpg')
cv2img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # cv2 stores channels as BGR, PIL as RGB
pilimg = Image.fromarray(cv2img)

# Draw Chinese text on the PIL image
draw = ImageDraw.Draw(pilimg)
font = ImageFont.truetype("simhei.ttf", 20, encoding="utf-8")  # arg 1: font file path; arg 2: font size
draw.text((0, 0), "Hi", (255, 0, 0), font=font)  # args: position, text, fill color, font

# Convert the PIL image back to a cv2 image
cv2charimg = cv2.cvtColor(np.array(pilimg), cv2.COLOR_RGB2BGR)
# cv2.imshow("圖片", cv2charimg)  # a Chinese window title would show up garbled
cv2.imshow("photo", cv2charimg)

cv2.waitKey(0)
cv2.destroyAllWindows()

Things worth noting:

1) OpenCV loads images with the color channels in BGR order, while PIL uses RGB, so convert with cv2.cvtColor(img, cv2.COLOR_BGR2RGB) before handing the image to PIL.

2) OpenCV stores images as NumPy arrays, while PIL uses its own image type. Before calling PIL methods, convert the array with pilimg = Image.fromarray(cv2img); conversely, after PIL is done, convert back to a NumPy array before calling OpenCV methods:

cv2charimg = cv2.cvtColor(np.array(pilimg), cv2.COLOR_RGB2BGR)

Skipping the conversion raises: TypeError: Expected cv::UMat for argument 'src'

There is also another common approach, based on FreeType.

Again, download a font first, e.g. the simhei.ttf above, or msyh.ttf (both are easy to find online):

# -*- coding: utf-8 -*-
import cv2
import ft2  # a small FreeType text-drawing helper module (not part of OpenCV itself)
 
img = cv2.imread('pic_url.jpg')
line = '你好'
 
color = (0, 255, 0)  # Green
pos = (3, 3)
text_size = 24
 
# ft = put_chinese_text('wqy-zenhei.ttc')
ft = ft2.put_chinese_text('msyh.ttf')
image = ft.draw_text(img, pos, line, text_size, color)
 
name = u'圖片展示'
 
cv2.imshow(name, image)
cv2.waitKey(0)

Personally, I recommend the first approach!


Now back to license plate detection.

Find the rectangular regions formed by the image edges. There may be many of them, and the plate sits in one of them (this is a weakness of the program/algorithm, though it does not affect the result):

		try:
			# OpenCV 4.x: findContours returns (contours, hierarchy)
			contours, hierarchy = cv2.findContours(img_edge2, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
		except ValueError:
			# OpenCV 3.x returned (image, contours, hierarchy)
			image, contours, hierarchy = cv2.findContours(img_edge2, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
		contours = [cnt for cnt in contours if cv2.contourArea(cnt) > Min_Area]

Note that cv2.findContours() expects a binary image, i.e. pure black and white (not grayscale), so the input image must first be converted to grayscale and then binarized!

Filtering the results (needed because of the many candidate regions mentioned above):

		car_contours = []
		for cnt in contours:
			rect = cv2.minAreaRect(cnt)
			area_width, area_height = rect[1]
			if area_width < area_height:
				area_width, area_height = area_height, area_width
			wh_ratio = area_width / area_height
			#print(wh_ratio)
			# Keep only rectangles whose aspect ratio is between 2 and 5.5, the typical range for a license plate; discard the rest
			if wh_ratio > 2 and wh_ratio < 5.5:
				car_contours.append(rect)
				box = cv2.boxPoints(rect)
				box = np.int0(box)

Next: a candidate rectangle may be tilted, so it needs to be deskewed before color-based localization can be used.

		
		for rect in car_contours:
			if rect[2] > -1 and rect[2] < 1:  # nudge the angle so that left/high/right/low below get the right values
				angle = 1
			else:
				angle = rect[2]
			rect = (rect[0], (rect[1][0]+5, rect[1][1]+5), angle)  # enlarge the box so the plate edge is not clipped

			box = cv2.boxPoints(rect)
			heigth_point = right_point = [0, 0]
			left_point = low_point = [pic_width, pic_hight]
			for point in box:
				if left_point[0] > point[0]:
					left_point = point
				if low_point[1] > point[1]:
					low_point = point
				if heigth_point[1] < point[1]:
					heigth_point = point
				if right_point[0] < point[0]:
					right_point = point

			if left_point[1] <= right_point[1]:  # positive angle
				new_right_point = [right_point[0], heigth_point[1]]
				pts2 = np.float32([left_point, heigth_point, new_right_point])  # only the character height needs to change
				pts1 = np.float32([left_point, heigth_point, right_point])
				M = cv2.getAffineTransform(pts1, pts2)
				dst = cv2.warpAffine(oldimg, M, (pic_width, pic_hight))
				point_limit(new_right_point)
				point_limit(heigth_point)
				point_limit(left_point)
				card_img = dst[int(left_point[1]):int(heigth_point[1]), int(left_point[0]):int(new_right_point[0])]
				card_imgs.append(card_img)

			elif left_point[1] > right_point[1]:  # negative angle
				new_left_point = [left_point[0], heigth_point[1]]
				pts2 = np.float32([new_left_point, heigth_point, right_point])  # only the character height needs to change
				pts1 = np.float32([left_point, heigth_point, right_point])
				M = cv2.getAffineTransform(pts1, pts2)
				dst = cv2.warpAffine(oldimg, M, (pic_width, pic_hight))
				point_limit(right_point)
				point_limit(heigth_point)
				point_limit(new_left_point)
				card_img = dst[int(right_point[1]):int(heigth_point[1]), int(new_left_point[0]):int(right_point[0])]
				card_imgs.append(card_img)

Now use color to locate the plate and discard the rectangles that are not plates. Only blue, green, and yellow plates are currently recognized:

		colors = []
		for card_index,card_img in enumerate(card_imgs):
			green = yello = blue = black = white = 0
			card_img_hsv = cv2.cvtColor(card_img, cv2.COLOR_BGR2HSV)
			# The HSV conversion can fail if the deskewing above went wrong
			if card_img_hsv is None:
				continue
			row_num, col_num= card_img_hsv.shape[:2]
			card_img_count = row_num * col_num

			for i in range(row_num):
				for j in range(col_num):
					H = card_img_hsv.item(i, j, 0)
					S = card_img_hsv.item(i, j, 1)
					V = card_img_hsv.item(i, j, 2)
					if 11 < H <= 34 and S > 34:  # thresholds may need tuning with image resolution
						yello += 1
					elif 35 < H <= 99 and S > 34:
						green += 1
					elif 99 < H <= 124 and S > 34:
						blue += 1

					if 0 < H < 180 and 0 < S < 255 and 0 < V < 46:
						black += 1
					elif 0 < H < 180 and 0 < S < 43 and 221 < V < 255:  # the usual white range tops out at V = 255
						white += 1
			color = "no"

			limit1 = limit2 = 0
			if yello*2 >= card_img_count:
				color = "yello"
				limit1 = 11
				limit2 = 34  # some images have a greenish color cast
			elif green*2 >= card_img_count:
				color = "green"
				limit1 = 35
				limit2 = 99
			elif blue*2 >= card_img_count:
				color = "blue"
				limit1 = 100
				limit2 = 124  # some images have a purplish color cast
			elif black + white >= card_img_count*0.7:#TODO
				color = "bw"
			print(color)
			colors.append(color)
			print(blue, green, yello, black, white, card_img_count)
			cv2.imshow("color", card_img)
			cv2.waitKey(1110)
			if limit1 == 0:
				continue
			# The plate color has now been determined.
			# Next, re-localize using that color to shrink away the non-plate border
			xl, xr, yh, yl = self.accurate_place(card_img_hsv, limit1, limit2, color)
			if yl == yh and xl == xr:
				continue
			need_accurate = False
			if yl >= yh:
				yl = 0
				yh = row_num
				need_accurate = True
			if xl >= xr:
				xl = 0
				xr = col_num
				need_accurate = True
			card_imgs[card_index] = card_img[yl:yh, xl:xr] if color != "green" or yl < (yh-yl)//4 else card_img[yl-(yh-yl)//4:yh, xl:xr]
			if need_accurate:  # the x or y direction may not have shrunk; try once more
				card_img = card_imgs[card_index]
				card_img_hsv = cv2.cvtColor(card_img, cv2.COLOR_BGR2HSV)
				xl, xr, yh, yl = self.accurate_place(card_img_hsv, limit1, limit2, color)
				print('size', xl,xr,yh,yl)
				if yl == yh and xl == xr:
					continue
				if yl >= yh:
					yl = 0
					yh = row_num
				if xl >= xr:
					xl = 0
					xr = col_num
			card_imgs[card_index] = card_img[yl:yh, xl:xr] if color != "green" or yl < (yh-yl)//4 else card_img[yl-(yh-yl)//4:yh, xl:xr]
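A side note on performance: the per-pixel double loop above is very slow in Python. The same counts can be computed in one shot with NumPy boolean masks. A sketch of my own (not the project's code), shown for the blue range only:

```python
import numpy as np

def count_blue(card_img_hsv):
    # card_img_hsv: HxWx3 uint8 array in OpenCV's HSV convention (H in 0..180).
    h = card_img_hsv[:, :, 0].astype(int)
    s = card_img_hsv[:, :, 1].astype(int)
    # Same condition as the loop above: 99 < H <= 124 and S > 34.
    return int(np.count_nonzero((h > 99) & (h <= 124) & (s > 34)))

# Tiny synthetic patch: two blue-ish pixels and one yellow-ish one.
patch = np.array([[[110, 200, 200], [120, 100, 100], [20, 200, 200]]], dtype=np.uint8)
print(count_blue(patch))  # 2
```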



Here comes the core part; let's go through it in detail:

		predict_result = []
		roi = None
		card_color = None
		for i, color in enumerate(colors):
			if color in ("blue", "yello", "green"):
				card_img = card_imgs[i]
				gray_img = cv2.cvtColor(card_img, cv2.COLOR_BGR2GRAY)
				# On yellow and green plates the characters are darker than the
				# background, the opposite of blue plates, so invert them first
				if color == "green" or color == "yello":
					gray_img = cv2.bitwise_not(gray_img)
				ret, gray_img = cv2.threshold(gray_img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
				# Find the peaks of the horizontal projection histogram
				x_histogram  = np.sum(gray_img, axis=1)
				x_min = np.min(x_histogram)
				x_average = np.sum(x_histogram)/x_histogram.shape[0]
				x_threshold = (x_min + x_average)/2
				wave_peaks = find_waves(x_threshold, x_histogram)
				if len(wave_peaks) == 0:
					print("peak less 0:")
					continue
				# Treat the widest horizontal peak as the plate region
				wave = max(wave_peaks, key=lambda x:x[1]-x[0])
				gray_img = gray_img[wave[0]:wave[1]]
				# Find the peaks of the vertical projection histogram
				row_num, col_num = gray_img.shape[:2]
				# Trim one pixel from the top and bottom edges so the white border does not skew the threshold
				gray_img = gray_img[1:row_num-1]
				y_histogram = np.sum(gray_img, axis=0)
				y_min = np.min(y_histogram)
				y_average = np.sum(y_histogram)/y_histogram.shape[0]
				y_threshold = (y_min + y_average)/5  # U and 0 need a smaller threshold, otherwise they get split in half

				wave_peaks = find_waves(y_threshold, y_histogram)

				#for wave in wave_peaks:
				#	cv2.line(card_img, pt1=(wave[0], 5), pt2=(wave[1], 5), color=(0, 0, 255), thickness=2) 
				# A plate should have more than 6 characters
				if len(wave_peaks) <= 6:
					print("peak less 1:", len(wave_peaks))
					continue
				
				wave = max(wave_peaks, key=lambda x:x[1]-x[0])
				max_wave_dis = wave[1] - wave[0]
				# Check whether the first peak is the plate's left border
				if wave_peaks[0][1] - wave_peaks[0][0] < max_wave_dis/3 and wave_peaks[0][0] == 0:
					wave_peaks.pop(0)
				
				# Re-join the strokes of the split-up Chinese character
				cur_dis = 0
				for i,wave in enumerate(wave_peaks):
					if wave[1] - wave[0] + cur_dis > max_wave_dis * 0.6:
						break
					else:
						cur_dis += wave[1] - wave[0]
				if i > 0:
					wave = (wave_peaks[0][0], wave_peaks[i][1])
					wave_peaks = wave_peaks[i+1:]
					wave_peaks.insert(0, wave)
				
				# Remove the separator dot on the plate
				point = wave_peaks[2]
				if point[1] - point[0] < max_wave_dis/3:
					point_img = gray_img[:,point[0]:point[1]]
					if np.mean(point_img) < 255/5:
						wave_peaks.pop(2)
				
				if len(wave_peaks) <= 6:
					print("peak less 2:", len(wave_peaks))
					continue
				part_cards = seperate_card(gray_img, wave_peaks)
				for i, part_card in enumerate(part_cards):
					# Probably a rivet fastening the plate
					if np.mean(part_card) < 255/5:
						print("a point")
						continue
					part_card_old = part_card
					w = abs(part_card.shape[1] - SZ)//2
					
					part_card = cv2.copyMakeBorder(part_card, 0, 0, w, w, cv2.BORDER_CONSTANT, value = [0,0,0])
					part_card = cv2.resize(part_card, (SZ, SZ), interpolation=cv2.INTER_AREA)
					
					#part_card = deskew(part_card)
					part_card = preprocess_hog([part_card])
					if i == 0:
						resp = self.modelchinese.predict(part_card)
						charactor = provinces[int(resp[0]) - PROVINCE_START]
					else:
						resp = self.model.predict(part_card)
						charactor = chr(resp[0])
					# Check whether the trailing "1" is actually the plate's right edge
					if charactor == "1" and i == len(part_cards)-1:
						if part_card_old.shape[0]/part_card_old.shape[1] >= 7:  # too thin for a real "1"; treat it as the edge
							continue
							continue
					predict_result.append(charactor)
				roi = card_img
				card_color = color
				break
				
		return predict_result, roi, card_color  # recognized characters, located plate image, plate color

Most of the code above is commented; in short, this is the part that recognizes the characters on the plate.

gray_img = cv2.bitwise_not(gray_img)

cv2.bitwise_not inverts every pixel (each value v becomes 255 - v), so the dark characters become bright before thresholding. Bitwise operations like this are also the basis of the classic masking trick (cut the logo-shaped region out of the target image, then paste the logo in), which I'll cover properly in a later post.

The following function finds the wave peaks in a projection histogram given a threshold; the peaks are used to split the characters apart:

def find_waves(threshold, histogram):
	up_point = -1  # index where the current peak started rising
	is_peak = False
	if histogram[0] > threshold:
		up_point = 0
		is_peak = True
	wave_peaks = []
	for i,x in enumerate(histogram):
		if is_peak and x < threshold:
			if i - up_point > 2:
				is_peak = False
				wave_peaks.append((up_point, i))
		elif not is_peak and x >= threshold:
			is_peak = True
			up_point = i
	if is_peak and up_point != -1 and i - up_point > 4:
		wave_peaks.append((up_point, i))
	return wave_peaks
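To see the behavior concretely, here is find_waves run on a toy histogram (the function is repeated so the snippet runs on its own; the numbers are made up):

```python
def find_waves(threshold, histogram):
	# Scan left to right, recording (start, end) spans that stay above
	# the threshold for more than 2 samples.
	up_point = -1  # index where the current peak started rising
	is_peak = False
	if histogram[0] > threshold:
		up_point = 0
		is_peak = True
	wave_peaks = []
	for i, x in enumerate(histogram):
		if is_peak and x < threshold:
			if i - up_point > 2:
				is_peak = False
				wave_peaks.append((up_point, i))
		elif not is_peak and x >= threshold:
			is_peak = True
			up_point = i
	if is_peak and up_point != -1 and i - up_point > 4:
		wave_peaks.append((up_point, i))
	return wave_peaks

# Toy column histogram: two "character" bands above the threshold of 5.
print(find_waves(5, [0, 0, 9, 9, 9, 9, 0, 0, 8, 8, 8, 8, 8, 0]))
# [(2, 6), (8, 13)]
```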

Using the peaks found above, slice the image to get one sub-image per character:

def seperate_card(img, waves):
	part_cards = []
	for wave in waves:
		part_cards.append(img[:, wave[0]:wave[1]])
	return part_cards

def deskew(img):
	m = cv2.moments(img)
	if abs(m['mu02']) < 1e-2:
		return img.copy()
	skew = m['mu11']/m['mu02']
	M = np.float32([[1, skew, -0.5*SZ*skew], [0, 1, 0]])
	img = cv2.warpAffine(img, M, (SZ, SZ), flags=cv2.WARP_INVERSE_MAP | cv2.INTER_LINEAR)
	return img
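And a quick usage sketch for seperate_card (the definition is repeated so the snippet runs on its own; the wave spans are made up):

```python
import numpy as np

def seperate_card(img, waves):
	# One vertical strip per (start, end) wave span.
	return [img[:, wave[0]:wave[1]] for wave in waves]

plate = np.zeros((20, 50), dtype=np.uint8)  # stand-in for a binarized plate
parts = seperate_card(plate, [(5, 10), (12, 20)])
print([p.shape for p in parts])  # [(20, 5), (20, 8)]
```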

Inside deskew, the line m = cv2.moments(img) computes the image moments; moments will be introduced in the next installment.

Finally, the result filtering. This is the tail end of the block shown in full above (the peak-count checks, character splitting, and per-character SVM prediction), so I won't paste the same code twice.

It returns the recognized characters, the located plate image, and the plate color.

The main function:

if __name__ == '__main__':
	c = CardPredictor()
	c.train_svm()
	r, roi, color = c.predict("test//car7.jpg")
	print(r, roi.shape[0],roi.shape[1],roi.shape[2])
	img = cv2.imread("test//car7.jpg")
	img = cv2.resize(img,(480,640),interpolation=cv2.INTER_LINEAR)
	r = ''.join(r)  # join the recognized characters into a single string
	print(r)

	from PIL import Image, ImageDraw, ImageFont
	cv2img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # cv2 stores channels as BGR, PIL as RGB
	pilimg = Image.fromarray(cv2img)

	# Draw the recognized characters on the PIL image
	draw = ImageDraw.Draw(pilimg)
	font = ImageFont.truetype("simhei.ttf", 30, encoding="utf-8")  # arg 1: font file path; arg 2: font size
	draw.text((0, 0), r, (255, 0, 0), font=font)  # args: position, text, fill color, font

	# Convert the PIL image back to a cv2 image
	cv2charimg = cv2.cvtColor(np.array(pilimg), cv2.COLOR_RGB2BGR)
	# cv2.imshow("圖片", cv2charimg)  # a Chinese window title would show up garbled
	cv2.imshow("photo", cv2charimg)

	cv2.waitKey(0)
	cv2.destroyAllWindows()

A final note: this code is not my own original work; it comes from a friend's graduation project and will be open-sourced in a while. Please follow the blog, thank you!

To sum up the pipeline:

train the SVM (SVC) model with OpenCV -> capture images / drive the camera with OpenCV -> preprocess the image (binarization, edge detection, etc.) -> locate the plate and deskew it -> determine the plate color -> re-localize using the plate color to shrink away the non-plate border -> recognize the characters on the plate -> predict returns the recognized characters, the located plate image, and the plate color -> display the result, using the PIL method to render the Chinese text

One last thing: my bug hunting has already turned up a pile of bugs, but this machine learning project is still very well written; I could not match it in the short term, although reproducing it is not that hard once the logic is clear. Just build it! The project is still weak on the SVM side and in data preprocessing; the biggest problems are accuracy and underfitting. Switching to deep learning would do much better.

Next installment: a first look at CVLIB. Don't forget to leave the blogger a like and a follow; writing this up takes effort, so let's improve together!


Finally, everyone is welcome to join my WeChat group for machine learning and deep learning discussion. Let's improve together!

Zhou Xiaoxia (CV調包俠), sophomore in Intelligent Science and Technology, Shanghai Polytechnic University
