Design of a Fatigue (Dangerous) Driving Detection System Based on MTCNN + CNN

1. Design Principle

MTCNN (short for Multi-task Cascaded Convolutional Networks) is one of the most widely used face detection algorithms. It is a deep-learning-based method that performs face detection and face alignment jointly, and compared with traditional algorithms it achieves both better accuracy and faster detection.
The goal of this article is not to cover training the MTCNN model itself, but to show how to use MTCNN to extract the face region and its landmarks. Following the classical "three sections, five eyes" facial-proportion rule, the face is split into eye, mouth, and ear regions; each region is fed into a trained image-classification model to recognize closed eyes, open eyes, open mouth, closed mouth, smoking, and phone use. Fatigue is then judged with the PERCLOS criterion and the result is reported.

2. MTCNN Face Detection

MTCNN consists of three cascaded multi-task convolutional neural networks: the Proposal Network (P-Net), the Refine Network (R-Net), and the Output Network (O-Net). Each network learns three tasks jointly: face classification, bounding-box regression, and facial-landmark localization. For the MTCNN module I used an open-source project from GitHub. From the five detected facial landmarks, let θ be the angle between the line connecting the left- and right-eye centers and the horizontal direction; the eye region has width W and height H = W/2. From the nose tip C, drop a perpendicular to the line connecting the two mouth corners and denote its length by D; the upper and lower edges of the mouth region are taken at D/2 and 3D/2 along this perpendicular and its extension. See the code below:

from math import atan, cos

# five facial landmarks returned by MTCNN
left_eye = keypoints['left_eye']
right_eye = keypoints['right_eye']
nose = keypoints['nose']
mouth_left = keypoints['mouth_left']
mouth_right = keypoints['mouth_right']

# angle between the eye-center line and the horizontal direction
arc = atan(abs(right_eye[1] - left_eye[1]) / abs(right_eye[0] - left_eye[0]))
# eye-region width and height
W = abs(right_eye[0] - left_eye[0]) / (2 * cos(arc))
H = W / 2
# perpendicular distance from the nose tip to the mouth-corner line
D = (mouth_left[1] - nose[1]) / cos(arc) - (mouth_left[1] - mouth_right[1]) / (2 * cos(arc))
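
To make the geometry above concrete, here is a minimal sketch of obtaining the landmarks and cropping the eye patches. It assumes the pip-installable mtcnn package, whose detect_faces returns a list of dicts with 'box' and 'keypoints' entries matching the keys used above; the file name and the crop helper are illustrative assumptions, not the project's verbatim code:

from math import atan, cos

import cv2
from mtcnn import MTCNN  # pip install mtcnn

detector = MTCNN()
# "driver.jpg" is a placeholder name for a single camera frame
img = cv2.cvtColor(cv2.imread("driver.jpg"), cv2.COLOR_BGR2RGB)
faces = detector.detect_faces(img)

if faces:
    keypoints = faces[0]['keypoints']
    left_eye, right_eye = keypoints['left_eye'], keypoints['right_eye']
    arc = atan(abs(right_eye[1] - left_eye[1]) / abs(right_eye[0] - left_eye[0]))
    W = abs(right_eye[0] - left_eye[0]) / (2 * cos(arc))
    H = W / 2

    def crop(center, w, h):
        # crop a w x h patch centered on a landmark, clipped to the frame
        x, y = int(center[0]), int(center[1])
        return img[max(0, y - int(h / 2)):y + int(h / 2),
                   max(0, x - int(w / 2)):x + int(w / 2)]

    left_eye_patch = crop(left_eye, W, H)
    right_eye_patch = crop(right_eye, W, H)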

3. CNN Image Classification

The data are organized into six classes: open_eye, closed_eye, open_mouth, closed_mouth, smoke, and call. Training an image classifier on these classes and applying it to the target regions cropped by MTCNN yields the recognition result.

The network contains 3 convolutional layers, 3 pooling layers, and 2 fully connected layers. The convolutional layers have 32, 32, and 64 kernels respectively, with kernel sizes of 5×5, 5×5, and 3×3.
The model code is as follows:

from keras.models import Sequential
from keras.layers.normalization import BatchNormalization
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D
from keras.initializers import TruncatedNormal
from keras.layers.core import Activation
from keras.layers.core import Flatten
from keras.layers.core import Dropout
from keras.layers.core import Dense
from keras import backend as K

class EAMNET:
    @staticmethod
    def build(width, height, depth, classes):
        model = Sequential()
        inputShape = (height, width, depth)
        chanDim = -1

        if K.image_data_format() == "channels_first":
            inputShape = (depth, height, width)
            chanDim = 1

        # CONV => RELU => POOL
        model.add(Conv2D(32, (5, 5), padding="same",
            input_shape=inputShape,kernel_initializer=TruncatedNormal(mean=0.0, stddev=0.01)))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))

        # CONV => RELU => POOL (second block)
        model.add(Conv2D(32, (5, 5), padding="same",kernel_initializer=TruncatedNormal(mean=0.0, stddev=0.01)))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))

        # CONV => RELU => POOL (third block, 3x3 kernels)
        model.add(Conv2D(64, (3, 3), padding="same",kernel_initializer=TruncatedNormal(mean=0.0, stddev=0.01)))
        model.add(Activation("relu"))
        model.add(BatchNormalization(axis=chanDim))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(0.25))

        # FC layer
        model.add(Flatten())
        model.add(Dense(64,kernel_initializer=TruncatedNormal(mean=0.0, stddev=0.01)))
        model.add(Activation("relu"))
        model.add(BatchNormalization())
        model.add(Dropout(0.6))

        # softmax classifier
        model.add(Dense(classes,kernel_initializer=TruncatedNormal(mean=0.0, stddev=0.01)))
        model.add(Activation("softmax"))

        return model
# example: build for 32x32 grayscale input with 6 classes
model = EAMNET.build(32, 32, 1, 6)
model.summary()

The training code is as follows:

# set the matplotlib backend so figures can be saved in the background
import matplotlib
from keras.callbacks import ReduceLROnPlateau, EarlyStopping, ModelCheckpoint
from keras.models import load_model
from keras.utils import to_categorical
from sklearn.metrics import classification_report

# EAMNET is the classification network defined in the previous section
from EAMNet import EAMNET

matplotlib.use("Agg")

# import the necessary packages
from keras.preprocessing.image import ImageDataGenerator
from keras.optimizers import Adam
from keras.preprocessing.image import img_to_array
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from imutils import paths
import numpy as np
import random
import pickle
import cv2
import os

dataset = "./dataset/"
EPOCHS = 100
INIT_LR = 0.01
BS = 64
IMAGE_DIMS = (64, 64, 1)
classnum = 6  # open_eye, closed_eye, open_mouth, closed_mouth, smoke, call
# initialize the data and labels
data = []
labels = []

# grab the image paths and randomly shuffle them
print("[INFO] loading images...")
imagePaths = sorted(list(paths.list_images(dataset)))
# print(imagePaths)
random.seed(10010)
random.shuffle(imagePaths)

# loop over the input images
for imagePath in imagePaths:
    # load the image, pre-process it, and store it in the data list
    image = cv2.imread(imagePath,cv2.IMREAD_GRAYSCALE)
    image = cv2.resize(image, (IMAGE_DIMS[1], IMAGE_DIMS[0]))
    image = img_to_array(image)
    data.append(image)

    # extract the class label from the image path and update the
    # labels list
    label = imagePath.split(os.path.sep)[-2]
    labels.append(label)
# scale the raw pixel intensities to the range [0, 1]
data = np.array(data, dtype="float") / 255.0
labels = np.array(labels)
print("[INFO] data matrix: {:.2f}MB".format(
    data.nbytes / (1024 * 1000.0)))

# split the dataset into training and testing sets
(trainX, testX, trainY, testY) = train_test_split(data, labels, test_size=0.25, random_state=42)

# convert labels to one-hot encoding
lb = LabelBinarizer()
trainY = lb.fit_transform(trainY)
testY = lb.transform(testY)

# LabelBinarizer already one-hot encodes multi-class labels; only a
# binary problem yields a single column and needs to_categorical
if trainY.shape[1] == 1:
    trainY = to_categorical(trainY)
    testY = to_categorical(testY)
# construct the image generator for data augmentation
aug = ImageDataGenerator(rotation_range=25, width_shift_range=0.1,
                         height_shift_range=0.1, shear_range=0.2, zoom_range=0.2,
                         horizontal_flip=True, fill_mode="nearest")
# initialize the model
print("[INFO] compiling model...")
model = EAMNET.build(width=IMAGE_DIMS[1], height=IMAGE_DIMS[0],
                     depth=IMAGE_DIMS[2], classes=classnum)
# model=load_model('./model/best0428ep150.h5')
opt = Adam(lr=INIT_LR, decay=INIT_LR / EPOCHS)
model.summary()
model.compile(loss="categorical_crossentropy", optimizer=opt,
              metrics=["accuracy"])
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=5, verbose=1)
early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=10, verbose=1)
# train the network
filepath="./model/best0428ep150.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1, save_best_only=True,mode='max')

H = model.fit_generator(
    aug.flow(trainX, trainY, batch_size=BS),
    validation_data=(testX, testY),
    steps_per_epoch=len(trainX) // BS,
    callbacks=[reduce_lr, early_stopping, checkpoint],
    epochs=EPOCHS, verbose=1)
# save the model to disk
model.save('./model/best0428ep150.h5')
# evaluate the network
print("------ Evaluating network ------")
predictions = model.predict(testX, batch_size=32)
print(classification_report(testY.argmax(axis=1),
    predictions.argmax(axis=1), target_names=lb.classes_))

# plot the training loss and accuracy
plt.style.use("ggplot")
plt.figure()
N = len(H.history["loss"])  # epochs actually run (early stopping may end training sooner)
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.plot(np.arange(0, N), H.history["acc"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_acc"], label="val_acc")
plt.title("Training Loss and Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="upper left")
plt.savefig("./model/best0428ep150.png")
# serialize the label binarizer to disk
with open("./model/best0428ep150.pickle", "wb") as f:
    f.write(pickle.dumps(lb))
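
To apply the trained classifier to the regions cropped by MTCNN, a minimal inference sketch is shown below. The model and pickle paths follow the ones used above; the file name eye_patch.jpg and the classify_region helper are illustrative assumptions, and the preprocessing simply mirrors the training pipeline (grayscale, 64x64 resize, scaling to [0, 1]):

import pickle
import cv2
from keras.models import load_model

model = load_model("./model/best0428ep150.h5")
with open("./model/best0428ep150.pickle", "rb") as f:
    lb = pickle.load(f)

def classify_region(patch):
    # patch: a crop produced from the MTCNN eye/mouth regions
    if patch.ndim == 3:
        patch = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    patch = cv2.resize(patch, (64, 64)).astype("float") / 255.0
    patch = patch.reshape(1, 64, 64, 1)  # batch of one, channels_last
    probs = model.predict(patch)[0]
    return lb.classes_[probs.argmax()], float(probs.max())

label, confidence = classify_region(cv2.imread("eye_patch.jpg"))
print(label, confidence)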

4. Fatigue Judgment

PERCLOS measures the percentage of time within a unit period during which the eyes are closed beyond a given threshold (70% or 80%). Its common criteria are:
P70: the eye counts as closed when the eyelid covers more than 70% of the pupil; the proportion of closed-eye time within a given period is then computed.
P80: the eye counts as closed when the eyelid covers more than 80% of the pupil; the proportion of closed-eye time within a given period is then computed.
Fatigue can therefore be judged from the ratio of closed-eye frames to total frames over a time window, i.e. PERCLOS = closed-eye frames / total frames; likewise, combining this with normal yawning statistics, a yawn is reported when the proportion of open-mouth frames exceeds its threshold. A sketch of the sliding-window bookkeeping follows.
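
A minimal sketch of this sliding-window PERCLOS bookkeeping, assuming the per-frame labels come from the classifier above; the window length and the 0.4 fatigue threshold are illustrative values, not the project's tuned parameters:

from collections import deque

WINDOW = 150              # ~5 seconds at 30 fps (illustrative)
PERCLOS_THRESHOLD = 0.4   # fatigue threshold (illustrative)

eye_states = deque(maxlen=WINDOW)  # 1 = closed-eye frame, 0 = open-eye frame

def update_perclos(frame_label):
    # feed the classifier's label for the current frame;
    # returns the current PERCLOS value and whether it signals fatigue
    eye_states.append(1 if frame_label == "closed_eye" else 0)
    perclos = sum(eye_states) / len(eye_states)
    return perclos, perclos > PERCLOS_THRESHOLD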

5. Demo

(Demo screenshots omitted; see the GitHub repository linked below.)

6. Project Links

The project is available on GitHub: MTCNN_CNN_DangerDrivingDetection.
If you find it helpful, please give it a like and a star, Thanks♪(・ω・)ノ
There is also a fatigue-driving detection system built with the SSD object-detection algorithm that I have not written up yet; if you need it, you can go straight to that project.
The MTCNN module uses an open-source project; you can also train it yourself.
