Smile Dataset and Mask Dataset: Splitting, Training, and Testing (Jupyter Notebook)

I. Introduction to HOG, Dlib, and Convolutional Neural Networks

1. HOG
① Overview
The Histogram of Oriented Gradients (HOG) feature is a descriptor used for object detection in computer vision and image processing. It builds a feature by computing and accumulating histograms of gradient orientations over local regions of an image. HOG features combined with an SVM classifier have been widely used in image recognition, and have been particularly successful in pedestrian detection. Although many new pedestrian-detection algorithms continue to be proposed, most of them still follow the HOG + SVM approach.
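
As a quick, hedged illustration of the HOG + SVM pipeline mentioned above, OpenCV ships a HOG descriptor together with a default pedestrian-detection SVM; the image path below is a placeholder, not a file from this post.

import cv2

img = cv2.imread('street.jpg')                     # placeholder image to scan

# HOG descriptor with OpenCV's default people-detection SVM
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# Sliding-window detection over the image
rects, weights = hog.detectMultiScale(img, winStride=(8, 8), padding=(8, 8), scale=1.05)
for (x, y, w, h) in rects:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite('street_pedestrians.jpg', img)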

② Main idea
In an image, the appearance and shape of a local object can be described well by the density distribution of gradients or edge directions. In essence, HOG is a statistic of gradient information, and gradients mostly live where the edges are.

③ Implementation
In short, the image is first divided into small connected regions called cells. A histogram of the gradient or edge orientations of the pixels in each cell is then accumulated. Finally, these histograms are combined to form the feature descriptor.

④ Advantages
Compared with other feature description methods, HOG has several advantages. Because HOG operates on local cells of the image, it stays largely invariant to both geometric and photometric deformations, since such deformations only show up over larger spatial regions. In addition, under coarse spatial sampling, fine orientation sampling and strong local photometric normalization, pedestrians can be detected reliably as long as they remain roughly upright; small limb movements can be ignored without affecting the detection result. For these reasons, HOG features are particularly well suited to human detection in images.

The overall HOG feature extraction pipeline is roughly as follows:
1. Read the input image containing the detection target.
2. Convert the image to grayscale (the r, g, b values of the color image are combined into a gray value by a fixed formula).
3. Apply Gamma correction to normalize the color space of the input image.
4. Compute the gradient (magnitude and orientation) of every pixel to capture contour information.
5. Build a histogram of gradient orientations for each cell (counting pixels per orientation bin) to form the cell's descriptor.
6. Group several cells into a block (e.g. 3×3 cells); concatenating the descriptors of all cells in a block gives the block's HOG descriptor.
7. Concatenate the HOG descriptors of all blocks in the image to obtain the HOG descriptor of the whole image (the detection target); this is the final feature vector used for classification.

The following code snippet computes the per-pixel gradient magnitude and orientation on which the HoG feature is built:

import numpy as np
from scipy import signal

def s_x(img):
    # Horizontal gradient: convolve with the [-1, 0, 1] kernel
    kernel = np.array([[-1, 0, 1]])
    imgx = signal.convolve2d(img, kernel, boundary='symm', mode='same')
    return imgx

def s_y(img):
    # Vertical gradient: convolve with the transposed kernel
    kernel = np.array([[-1, 0, 1]]).T
    imgy = signal.convolve2d(img, kernel, boundary='symm', mode='same')
    return imgy

def grad(img):
    imgx = s_x(img)
    imgy = s_y(img)
    s = np.sqrt(imgx**2 + imgy**2)               # gradient magnitude
    theta = np.arctan2(imgy, imgx)               # gradient orientation
    theta[theta < 0] = np.pi + theta[theta < 0]  # fold orientations into [0, pi)
    return (s, theta)
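
The continuation below is only a sketch and not part of the original snippet: it builds per-cell orientation histograms from grad() above and L2-normalizes overlapping blocks, completing steps 5-7 of the pipeline. The 8-pixel cell size, 9 orientation bins and 2×2-cell blocks are assumed defaults, not values from the post.

def hog_descriptor(img, cell=8, bins=9, block=2):
    # Sketch: per-cell orientation histograms followed by L2-normalized blocks
    mag, theta = grad(img.astype(float))
    h, w = img.shape
    n_y, n_x = h // cell, w // cell
    hist = np.zeros((n_y, n_x, bins))
    bin_width = np.pi / bins
    for i in range(n_y):
        for j in range(n_x):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            t = theta[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            idx = np.minimum((t // bin_width).astype(int), bins - 1)
            for b in range(bins):
                hist[i, j, b] = m[idx == b].sum()   # magnitude-weighted vote
    # Concatenate block x block groups of cells and L2-normalize each group
    feats = []
    for i in range(n_y - block + 1):
        for j in range(n_x - block + 1):
            v = hist[i:i+block, j:j+block].ravel()
            feats.append(v / (np.linalg.norm(v) + 1e-6))
    return np.concatenate(feats)

Calling hog_descriptor(gray_image) on a grayscale image then returns the concatenated feature vector described in step 7 above.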

2. Dlib
Dlib is an open-source machine learning library that contains many machine learning algorithms. It is easy to use - you only need to include its headers - and it does not depend on other libraries (it even ships its own image codec sources). Dlib helps you build sophisticated machine-learning software to solve practical problems, and it is already widely used in industry and academia, including robotics, embedded devices, mobile phones and large high-performance computing environments. Dlib is also a common library for face recognition: it can detect and annotate 68 landmark points on a face, so it is often used to extract face data when preparing our own face datasets.
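
To make the 68-landmark idea concrete, here is a minimal hedged sketch; the image name face.jpg and the location of shape_predictor_68_face_landmarks.dat are placeholders that you need to supply yourself.

import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # placeholder path

img = cv2.imread("face.jpg")                       # placeholder image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
for face in detector(gray, 1):
    shape = predictor(gray, face)
    for i in range(68):                            # draw all 68 landmark points
        cv2.circle(img, (shape.part(i).x, shape.part(i).y), 2, (0, 255, 0), -1)
cv2.imwrite("face_landmarks.jpg", img)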
3. Convolutional neural networks
A convolutional neural network (CNN) is a class of feedforward neural networks that contains convolution operations and has a deep structure; it is one of the representative algorithms of deep learning. CNNs have representation-learning ability and can classify input data in a shift-invariant way according to their hierarchical structure, which is why they are also called "shift-invariant artificial neural networks" (SIANN). CNNs are modeled on the biological mechanism of visual perception and can be trained with supervised or unsupervised learning. The parameter sharing of the convolution kernels in the hidden layers and the sparse connections between layers allow a CNN to learn grid-like features such as pixels and audio with relatively little computation, give stable results, and require no extra feature engineering on the data.

II. Smile dataset: splitting, training and testing

Download the dataset: https://github.com/truongnmt/smile-detection
After downloading, unzip it and save it somewhere; I saved it on the D drive and renamed the folder smile:
Split the dataset:

import keras
import os, shutil
# The path to the directory where the original
# dataset was uncompressed
original_dataset_dir = 'D:\\smile\\datasets\\train_folder'
# The directory where we will
# store our smaller dataset
base_dir = 'D:\\smile2'
os.mkdir(base_dir)
# Directories for our training,
# validation and test splits
train_dir = os.path.join(base_dir, 'train')
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
os.mkdir(test_dir)
# Directory with our training smile pictures
train_smiles_dir = os.path.join(train_dir, 'smiles')
os.mkdir(train_smiles_dir)
# Directory with our training unsmile pictures
train_unsmiles_dir = os.path.join(train_dir, 'unsmiles')
os.mkdir(train_unsmiles_dir)
# Directory with our validation smile pictures
validation_smiles_dir = os.path.join(validation_dir, 'smiles')
os.mkdir(validation_smiles_dir)
# Directory with our validation unsmile pictures
validation_unsmiles_dir = os.path.join(validation_dir, 'unsmiles')
os.mkdir(validation_unsmiles_dir)
# Directory with our test smile pictures
test_smiles_dir = os.path.join(test_dir, 'smiles')
os.mkdir(test_smiles_dir)
# Directory with our test unsmile pictures
test_unsmiles_dir = os.path.join(test_dir, 'unsmiles')
os.mkdir(test_unsmiles_dir)

Then copy the images from the dataset into the corresponding folders. I did this by hand, but the hedged sketch below could automate it.
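
This sketch is not part of the original workflow: it assumes the unpacked train_folder contains one subfolder per class (the subfolder names 'smile' and 'non_smile' are assumptions to check against your download), shuffles each class, and copies the files into the train/validation/test directories created above in a roughly 70/15/15 split.

import random

def split_class(src_cls_dir, dst_dirs, ratios=(0.7, 0.15, 0.15)):
    # Copy the files of one class into the train/validation/test folders
    files = sorted(os.listdir(src_cls_dir))
    random.seed(0)
    random.shuffle(files)
    n = len(files)
    cut1 = int(n * ratios[0])
    cut2 = cut1 + int(n * ratios[1])
    parts = [files[:cut1], files[cut1:cut2], files[cut2:]]
    for dst, part in zip(dst_dirs, parts):
        for name in part:
            shutil.copyfile(os.path.join(src_cls_dir, name), os.path.join(dst, name))

# The class subfolder names below are assumptions; adjust them to your dataset
split_class(os.path.join(original_dataset_dir, 'smile'),
            [train_smiles_dir, validation_smiles_dir, test_smiles_dir])
split_class(os.path.join(original_dataset_dir, 'non_smile'),
            [train_unsmiles_dir, validation_unsmiles_dir, test_unsmiles_dir])

With the images in place, print the size of the new dataset: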

print('total training smile images:', len(os.listdir(train_smiles_dir)))
print('total training unsmile images:', len(os.listdir(train_unsmiles_dir)))
print('total validation smile images:', len(os.listdir(validation_smiles_dir)))
print('total validation unsmile images:', len(os.listdir(validation_unsmiles_dir)))
print('total test smile images:', len(os.listdir(test_smiles_dir)))
print('total test unsmile images:', len(os.listdir(test_unsmiles_dir)))


Build the convolutional neural network:

from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()


Data preprocessing:

from keras import optimizers

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])
from keras.preprocessing.image import ImageDataGenerator

# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
        # This is the target directory
        train_dir,
        # All images will be resized to 150x150
        target_size=(150, 150),
        batch_size=20,
        # Since we use binary_crossentropy loss, we need binary labels
        class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
        validation_dir,
        target_size=(150, 150),
        batch_size=20,
        class_mode='binary')


for data_batch, labels_batch in train_generator:
    print('data batch shape:', data_batch.shape)
    print('labels batch shape:', labels_batch.shape)
    break

Train the model:

history = model.fit_generator(
      train_generator,
      steps_per_epoch=100,
      epochs=30,
      validation_data=validation_generator,
      validation_steps=50)


model.save('D:/smile2/smiles_and_unsmiles_small_1.h5')

Plot the model's loss and accuracy:

import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()


Data augmentation:

datagen = ImageDataGenerator(
      rotation_range=40,
      width_shift_range=0.2,
      height_shift_range=0.2,
      shear_range=0.2,
      zoom_range=0.2,
      horizontal_flip=True,
      fill_mode='nearest')
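
Purely as an optional check that is not in the original post, the following sketch previews what this augmentation produces for one training image; picking the first file in train_smiles_dir is an arbitrary choice.

from keras.preprocessing import image

fnames = [os.path.join(train_smiles_dir, f) for f in os.listdir(train_smiles_dir)]
img = image.load_img(fnames[0], target_size=(150, 150))      # any training image
x = image.img_to_array(img).reshape((1, 150, 150, 3))

plt.figure(figsize=(8, 8))
for i, batch in enumerate(datagen.flow(x, batch_size=1)):
    plt.subplot(2, 2, i + 1)
    plt.imshow(image.array_to_img(batch[0]))
    if i == 3:                                               # show four augmented variants
        break
plt.show()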

To further fight overfitting, we also add a Dropout layer to the model:

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])

Start training:

train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
        # This is the target directory
        train_dir,
        # All images will be resized to 150x150
        target_size=(150, 150),
        batch_size=32,
        # Since we use binary_crossentropy loss, we need binary labels
        class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
        validation_dir,
        target_size=(150, 150),
        batch_size=32,
        class_mode='binary')
history = model.fit_generator(
      train_generator,
      steps_per_epoch=100,
      epochs=100,
      validation_data=validation_generator,
      validation_steps=50)

Save the model:

model.save('D:/smile2/smiles_unsmiles_small_2.h5')

Plot the model's loss and accuracy:

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
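
The test split created at the start of this section is never actually scored in the post; as a hedged sketch (evaluate_generator belongs to the same older Keras API as fit_generator), it could be evaluated like this:

test_generator = test_datagen.flow_from_directory(
        test_dir,
        target_size=(150, 150),
        batch_size=20,
        class_mode='binary')
# steps x batch_size should roughly cover the test set; adjust to your image counts
test_loss, test_acc = model.evaluate_generator(test_generator, steps=50)
print('test acc:', test_acc)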


III. Real-time smile detection with a webcam

Use the model trained above to classify each detected face:

# Detect faces in a video or webcam stream
import cv2
from keras.preprocessing import image
from keras.models import load_model
import numpy as np
import dlib
from PIL import Image
model = load_model('D:/smile2/smiles_unsmiles_small_2.h5')
detector = dlib.get_frontal_face_detector()
video=cv2.VideoCapture(0)
font = cv2.FONT_HERSHEY_SIMPLEX
def rec(img):
    gray=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
    dets=detector(gray,1)
    if dets is not None:
        for face in dets:
            left=face.left()
            top=face.top()
            right=face.right()
            bottom=face.bottom()
            cv2.rectangle(img,(left,top),(right,bottom),(0,255,0),2)
            img1=cv2.resize(img[top:bottom,left:right],dsize=(150,150))
            img1=cv2.cvtColor(img1,cv2.COLOR_BGR2RGB)
            img1 = np.array(img1)/255.
            img_tensor = img1.reshape(-1,150,150,3)
            prediction =model.predict(img_tensor)    
            if prediction[0][0]>0.5:
                result='unsmile'
            else:
                result='smile'
            cv2.putText(img, result, (left,top), font, 2, (0, 255, 0), 2, cv2.LINE_AA)
        cv2.imshow('Video', img)
while video.isOpened():
    res, img_rd = video.read()
    if not res:
        break
    rec(img_rd)
    if cv2.waitKey(5) & 0xFF == ord('q'):
        break
video.release()
cv2.destroyAllWindows()


Smile detection with OpenCV's built-in Haar cascade classifiers:

import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades+'haarcascade_frontalface_default.xml')

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades+'haarcascade_eye.xml')

smile_cascade = cv2.CascadeClassifier(cv2.data.haarcascades+'haarcascade_smile.xml')
# Open the webcam
cap = cv2.VideoCapture(0)

while True:
    # Grab a frame from the webcam
    ret, frame = cap.read()
    faces = face_cascade.detectMultiScale(frame, 1.3, 2)
    img = frame
    for (x, y, w, h) in faces:
        # Draw the face bounding box (color is arbitrary, line width 2)
        img = cv2.rectangle(img, (x, y), (x + w, y + h), (255, 225, 0), 2)
        # Crop the face region and run the eye/smile detectors inside it
        # rather than on the whole frame, to save computation
        face_area = img[y:y + h, x:x + w]

        ## Eye detection
        # Run the eye cascade on the face region; eyes is a list of eye boxes
        eyes = eye_cascade.detectMultiScale(face_area, 1.3, 10)
        for (ex, ey, ew, eh) in eyes:
            # Draw the eye boxes in green, line width 1
            cv2.rectangle(face_area, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 1)

        ## Smile detection
        # Run the smile cascade on the face region; smiles is a list of mouth boxes
        smiles = smile_cascade.detectMultiScale(face_area, scaleFactor=1.16, minNeighbors=65,
                                                minSize=(25, 25), flags=cv2.CASCADE_SCALE_IMAGE)
        for (ex, ey, ew, eh) in smiles:
            # Draw the smile box (BGR color), line width 1
            cv2.rectangle(face_area, (ex, ey), (ex + ew, ey + eh), (0, 0, 255), 1)
            cv2.putText(img, 'Smile', (x, y - 7), 3, 1.2, (222, 0, 255), 2, cv2.LINE_AA)

    # Show the annotated frame in real time
    cv2.imshow('frame2', img)
    # Check the keyboard every 5 ms; press q to quit
    if cv2.waitKey(5) & 0xFF == ord('q'):
        break

# Finally, release the camera and close all windows
cap.release()
cv2.destroyAllWindows()


Using Dlib:

import sys
import dlib              # face detection / landmark library dlib
import numpy as np       # numerical processing library numpy
import cv2               # image processing library OpenCV


class face_emotion():
    def __init__(self):
        # Frontal face detector: get_frontal_face_detector
        self.detector = dlib.get_frontal_face_detector()
        # Dlib's 68-point model, using the pretrained shape predictor
        self.predictor = dlib.shape_predictor("C:/data/data_dlib_model/shape_predictor_68_face_landmarks.dat")

        # Create the cv2 capture object; this uses the laptop's built-in camera,
        # and switches to an external camera automatically if one is connected
        self.cap = cv2.VideoCapture(0)
        # Set a video parameter: propId 3 is the frame width
        self.cap.set(3, 480)
        # Screenshot counter
        self.cnt = 0

    def learning_face(self):

        # Buffers for fitting a straight line to the eyebrow points
        line_brow_x = []
        line_brow_y = []

        # cap.isOpened() returns True/False and checks that initialization succeeded
        while self.cap.isOpened():

            # cap.read() returns two values:
            #   a boolean, True/False, telling whether the frame was read
            #   successfully / whether the end of the video was reached,
            #   and the frame itself as a 3-D image matrix
            flag, im_rd = self.cap.read()

            # Wait 1 ms per frame (a delay of 0 would show a static frame)
            k = cv2.waitKey(1)

            # Convert to grayscale (OpenCV frames are BGR)
            img_gray = cv2.cvtColor(im_rd, cv2.COLOR_BGR2GRAY)

            # Detect the faces in this frame; faces holds the detected rectangles
            faces = self.detector(img_gray, 0)

            # Font for the on-screen text
            font = cv2.FONT_HERSHEY_SIMPLEX

            # If at least one face was detected
            if len(faces) != 0:

                # Mark the 68 landmark points on every detected face;
                # enumerate returns both the index k and the rectangle d
                for k, d in enumerate(faces):
                    # Draw a red rectangle around the face
                    cv2.rectangle(im_rd, (d.left(), d.top()), (d.right(), d.bottom()), (0, 0, 255))
                    # Width of the face detection box
                    self.face_width = d.right() - d.left()

                    # Reset the eyebrow buffers for this face
                    line_brow_x.clear()
                    line_brow_y.clear()

                    # Use the predictor to get the 68 landmark coordinates
                    shape = self.predictor(im_rd, d)
                    # Draw a small circle at every landmark
                    for i in range(68):
                        cv2.circle(im_rd, (shape.part(i).x, shape.part(i).y), 2, (222, 222, 0), -1, 8)
                        # cv2.putText(im_rd, str(i), (shape.part(i).x, shape.part(i).y),
                        #             cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255))

                    # Use the relative positions of a few landmarks as the basis
                    # for expression recognition
                    mouth_width = (shape.part(54).x - shape.part(48).x) / self.face_width  # how wide the mouth is
                    mouth_higth = (shape.part(66).y - shape.part(62).y) / self.face_width  # how open the mouth is

                    # Use the 10 eyebrow landmarks to measure how raised and how
                    # furrowed the eyebrows are
                    brow_sum = 0   # sum of eyebrow heights
                    frown_sum = 0  # sum of distances between the two eyebrows
                    for j in range(17, 21):
                        brow_sum += (shape.part(j).y - d.top()) + (shape.part(j + 5).y - d.top())
                        frown_sum += shape.part(j + 5).x - shape.part(j).x
                        line_brow_x.append(shape.part(j).x)
                        line_brow_y.append(shape.part(j).y)

                    # Fit a straight line to the eyebrow points to estimate their tilt
                    tempx = np.array(line_brow_x)
                    tempy = np.array(line_brow_y)
                    z1 = np.polyfit(tempx, tempy, 1)
                    self.brow_k = -round(z1[0], 3)  # the fitted slope is opposite to the real eyebrow tilt

                    brow_hight = (brow_sum / 10) / self.face_width  # eyebrow height relative to face width
                    brow_width = (frown_sum / 5) / self.face_width  # eyebrow separation relative to face width

                    # How open the eyes are
                    eye_sum = (shape.part(41).y - shape.part(37).y + shape.part(40).y - shape.part(38).y +
                               shape.part(47).y - shape.part(43).y + shape.part(46).y - shape.part(44).y)
                    eye_hight = (eye_sum / 4) / self.face_width

                    # Case analysis
                    # Mouth open: could be happy or surprised
                    if mouth_higth >= 0.03:
                        if eye_hight >= 0.056:
                            cv2.putText(im_rd, "amazing", (d.left(), d.bottom() + 20),
                                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2, 4)
                        else:
                            cv2.putText(im_rd, "smile", (d.left(), d.bottom() + 20),
                                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2, 4)

                    # Mouth closed: could be neutral or angry
                    else:
                        if self.brow_k <= -0.3:
                            cv2.putText(im_rd, "angry", (d.left(), d.bottom() + 20),
                                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2, 4)
                        else:
                            cv2.putText(im_rd, "", (d.left(), d.bottom() + 20),
                                        cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2, 4)

                # Show the number of detected faces
                cv2.putText(im_rd, "Faces: " + str(len(faces)), (20, 50), font, 1, (0, 0, 255), 1, cv2.LINE_AA)
            else:
                # No face detected
                cv2.putText(im_rd, "No Face", (20, 50), font, 1, (0, 0, 255), 1, cv2.LINE_AA)

            # On-screen instructions
            # im_rd = cv2.putText(im_rd, "S: screenshot", (20, 400), font, 0.8, (0, 0, 255), 1, cv2.LINE_AA)
            im_rd = cv2.putText(im_rd, "Q: quit", (20, 450), font, 0.8, (0, 0, 255), 1, cv2.LINE_AA)

            # Press s to save a screenshot
            # if k == ord('s'):
            #     self.cnt += 1
            #     cv2.imwrite("screenshoot" + str(self.cnt) + ".jpg", im_rd)

            # Press q to quit
            if k == ord('q'):
                break

            # Show the window
            cv2.imshow("camera", im_rd)

        # Release the camera
        self.cap.release()

        # Close all windows
        cv2.destroyAllWindows()


if __name__ == "__main__":
    my_face = face_emotion()
    my_face.learning_face()


IV. Mask dataset: splitting, training and testing

Download the dataset:
The masked-face images are in the RWMFD_part_1 folder, but they are scattered across subfolders, so you need to gather them into one place. If file names clash, you can batch-rename them with numeric sequences using a tool such as FreeRename, or with the Python sketch shown below.
For images without masks, you can reuse the images from the smile dataset above.
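
As an alternative to a renaming tool, here is a sketch (an assumption, not part of the original post) that walks the RWMFD_part_1 subfolders, copies every image into one flat folder and renames the files with a numeric sequence; the source and destination paths are placeholders.

import os
import shutil

src_root = 'D:\\RWMFD_part_1'       # placeholder: unpacked masked-face dataset
dst_dir = 'D:\\mask\\train\\masks'  # placeholder: flat output folder
os.makedirs(dst_dir, exist_ok=True)

count = 0
for folder, _, files in os.walk(src_root):
    for name in sorted(files):
        if name.lower().endswith(('.jpg', '.jpeg', '.png')):
            count += 1
            # Copy under a sequential numeric name to avoid collisions
            ext = os.path.splitext(name)[1].lower()
            shutil.copyfile(os.path.join(folder, name),
                            os.path.join(dst_dir, '{:05d}{}'.format(count, ext)))
print('copied', count, 'images')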

Split the dataset:

import keras
import os, shutil
# The path to the directory where the original
# dataset was uncompressed
original_dataset_dir = 'D:\\mask\\train'
# The directory where we will
# store our smaller dataset
base_dir = 'D:\\mask1'
os.mkdir(base_dir)
# Directories for our training,
# validation and test splits
train_dir = os.path.join(base_dir, 'train')
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
os.mkdir(test_dir)
# Directory with our training masks pictures
train_masks_dir = os.path.join(train_dir, 'masks')
os.mkdir(train_masks_dir)
# Directory with our training unmasks pictures
train_unmasks_dir = os.path.join(train_dir, 'unmasks')
os.mkdir(train_unmasks_dir)
# Directory with our validation masks pictures
validation_masks_dir = os.path.join(validation_dir, 'masks')
os.mkdir(validation_masks_dir)
# Directory with our validation unmasks pictures
validation_unmasks_dir = os.path.join(validation_dir, 'unmasks')
os.mkdir(validation_unmasks_dir)
# Directory with our test masks pictures
test_masks_dir = os.path.join(test_dir, 'masks')
os.mkdir(test_masks_dir)
# Directory with our test unmasks pictures
test_unmasks_dir = os.path.join(test_dir, 'unmasks')
os.mkdir(test_unmasks_dir)

Copy the images into the corresponding folders by hand (the splitting sketch from Section II can be reused with the mask directories), then check the counts. Here is the dataset I organized myself: https://pan.baidu.com/s/1fr8mq7a-IBic2PGFb094JQ (extraction code: dtxa):

print('total training mask images:', len(os.listdir(train_masks_dir)))
print('total training unmask images:', len(os.listdir(train_unmasks_dir)))
print('total validation mask images:', len(os.listdir(validation_masks_dir)))
print('total validation unmask images:', len(os.listdir(validation_unmasks_dir)))
print('total test mask images:', len(os.listdir(test_masks_dir)))
print('total test unmask images:', len(os.listdir(test_unmasks_dir)))

Build the convolutional network:

from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()

Data preprocessing:

from keras import optimizers

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])
from keras.preprocessing.image import ImageDataGenerator

# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
        # This is the target directory
        train_dir,
        # All images will be resized to 150x150
        target_size=(150, 150),
        batch_size=20,
        # Since we use binary_crossentropy loss, we need binary labels
        class_mode='binary')

validation_generator = test_datagen.flow_from_directory(
        validation_dir,
        target_size=(150, 150),
        batch_size=20,
        class_mode='binary')


for data_batch, labels_batch in train_generator:
    print('data batch shape:', data_batch.shape)
    print('labels batch shape:', labels_batch.shape)
    break

Start training:

history = model.fit_generator(
      train_generator,
      steps_per_epoch=100,
      epochs=30,
      validation_data=validation_generator,
      validation_steps=50)


Save the model:

model.save('D:/mask1/masks_and_unmasks_small_1.h5')

Plot the model's loss and accuracy:

import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()

Data augmentation:

datagen = ImageDataGenerator(
      rotation_range=40,
      width_shift_range=0.2,
      height_shift_range=0.2,
      shear_range=0.2,
      zoom_range=0.2,
      horizontal_flip=True,
      fill_mode='nearest')

To further fight overfitting, we also add a Dropout layer to the model:

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])

Train the model:

train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
        # This is the target directory
        train_dir,
        # All images will be resized to 150x150
        target_size=(150, 150),
        batch_size=32,
        # Since we use binary_crossentropy loss, we need binary labels
        class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
        validation_dir,
        target_size=(150, 150),
        batch_size=32,
        class_mode='binary')
history = model.fit_generator(
      train_generator,
      steps_per_epoch=100,
      epochs=50,
      validation_data=validation_generator,
      validation_steps=50)

Save the model:

model.save('D:/mask1/masks_and_unmasks_small_2.h5')

Plot the model's loss and accuracy:

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
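
As with the smile model, the mask test split is never scored in the post; a hedged evaluation sketch using the same older Keras API might look like this:

test_generator = test_datagen.flow_from_directory(
        test_dir,
        target_size=(150, 150),
        batch_size=32,
        class_mode='binary')
# steps x batch_size should roughly cover the test set; adjust to your image counts
test_loss, test_acc = model.evaluate_generator(test_generator, steps=20)
print('test acc:', test_acc)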


V. Real-time mask detection with a webcam

# Detect faces in a video or webcam stream
import cv2
from keras.preprocessing import image
from keras.models import load_model
import numpy as np
import dlib
from PIL import Image
model = load_model('D:/mask1/masks_and_unmasks_small_2.h5')
detector = dlib.get_frontal_face_detector()
video=cv2.VideoCapture(0)
font = cv2.FONT_HERSHEY_SIMPLEX
def rec(img):
    gray=cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
    dets=detector(gray,1)
    if dets is not None:
        for face in dets:
            left=face.left()
            top=face.top()
            right=face.right()
            bottom=face.bottom()
            cv2.rectangle(img,(left,top),(right,bottom),(0,255,0),2)
            img1=cv2.resize(img[top:bottom,left:right],dsize=(150,150))
            img1=cv2.cvtColor(img1,cv2.COLOR_BGR2RGB)
            img1 = np.array(img1)/255.
            img_tensor = img1.reshape(-1,150,150,3)
            prediction =model.predict(img_tensor)    
            if prediction[0][0]>0.5:
                result='unmask'
            else:
                result='mask'
            cv2.putText(img, result, (left,top), font, 2, (0, 255, 0), 2, cv2.LINE_AA)
        cv2.imshow('Video', img)
while video.isOpened():
    res, img_rd = video.read()
    if not res:
        break
    rec(img_rd)
    if cv2.waitKey(5) & 0xFF == ord('q'):
        break
video.release()
cv2.destroyAllWindows()

