Neural Network Learning Notes 32: FaceNet Explained and Its Keras Implementation

Preface

I recently studied my favorite MTCNN, but detecting a face by itself is not much use; we also need to know who it is. So let's start the journey of extracting face features with FaceNet.

What is FaceNet

FaceNet is Google's face recognition algorithm, published at CVPR 2015. It exploits the fact that photos of the same face taken from different angles and poses cluster tightly together (high cohesion), while photos of different faces stay far apart (low coupling). Using a CNN plus triplet mining, it reaches 99.63% accuracy on the LFW dataset.

A CNN maps each face to a feature vector in Euclidean space. The network is trained with the prior that the distance between embeddings of the same person's face is always smaller than the distance between embeddings of different people's faces.
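Concretely, training uses the triplet loss: for an anchor image a, a positive p (same person) and a negative n (different person), the loss forces the anchor-positive distance to be smaller than the anchor-negative distance by at least a margin alpha. A minimal NumPy sketch of this loss (it is not part of this article's code, which only covers inference with pretrained weights):

import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    # anchor / positive / negative: L2-normalized embeddings with shape (batch, 128)
    pos_dist = np.sum(np.square(anchor - positive), axis=1)   # ||f(a) - f(p)||^2
    neg_dist = np.sum(np.square(anchor - negative), axis=1)   # ||f(a) - f(n)||^2
    return np.mean(np.maximum(pos_dist - neg_dist + alpha, 0.0))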

At test time, we only need to compute the embedding of each face, then compute the distance between embeddings and apply a threshold to decide whether two face photos belong to the same person.
In short, at inference time FaceNet does the following (a minimal sketch of these steps is given right after this list):
1. Input a face image.
2. Extract features with the deep network.
3. Apply L2 normalization.
4. Obtain a 128-dimensional feature vector.
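Put together, the four steps look roughly like the sketch below (`model` is assumed to be the Keras network built later in this article; the actual preprocessing and model code used here are given further down):

import numpy as np

def embed_face(model, face_img):
    # face_img: an aligned 160x160x3 face crop
    x = np.expand_dims(face_img.astype('float32'), 0)                # 1. input one face image (add batch dim)
    x = (x - x.mean()) / np.maximum(x.std(), 1.0 / np.sqrt(x.size))  # standardize, as in pre_process below
    emb = model.predict(x)[0]                                        # 2. extract features with the deep network
    emb = emb / np.linalg.norm(emb)                                  # 3. L2 normalization
    return emb                                                       # 4. 128-d feature vector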

Code download:

Link: https://pan.baidu.com/s/1gAhikl9VwmefJbZeJd_UNw  extraction code: q86k

Inception-ResNetV1

Inception-ResNetV1 is the backbone network used by FaceNet.

Its structure is quite interesting!

The figure below shows the backbone structure of the whole network:
[Figure: overall Inception-ResNetV1 backbone structure]
You can see that the structure contains several important parts:
1. Stem
2. Inception-resnet-A
3. Inception-resnet-B
4. Inception-resnet-C
Between these blocks, two reduction blocks (Reduction-A and Reduction-B) shrink the spatial resolution; they appear in the full code below.

1. The Stem structure:

[Figure: Stem structure]
In FaceNet the input size is 160x160x3. The input then passes through:
three convolutions -> one max pooling -> three convolutions
The Python implementation is as follows:

inputs = Input(shape=input_shape)
# 160,160,3 -> 77,77,64
x = conv2d_bn(inputs, 32, 3, strides=2, padding='valid', name='Conv2d_1a_3x3')
x = conv2d_bn(x, 32, 3, padding='valid', name='Conv2d_2a_3x3')
x = conv2d_bn(x, 64, 3, name='Conv2d_2b_3x3')
# 77,77,64 -> 38,38,64
x = MaxPooling2D(3, strides=2, name='MaxPool_3a_3x3')(x)

# 38,38,64 -> 17,17,256
x = conv2d_bn(x, 80, 1, padding='valid', name='Conv2d_3b_1x1')
x = conv2d_bn(x, 192, 3, padding='valid', name='Conv2d_4a_3x3')
x = conv2d_bn(x, 256, 3, strides=2, padding='valid', name='Conv2d_4b_3x3')

2. The Inception-resnet-A structure:

[Figure: Inception-resnet-A structure]
Inception-resnet-A has four branches:
1. The input passed through unchanged (the identity shortcut).
2. One 1x1 convolution with 32 channels.
3. One 1x1 convolution with 32 channels, followed by one 3x3 convolution with 32 channels.
4. One 1x1 convolution with 32 channels, followed by two 3x3 convolutions with 32 channels.

The outputs of branches 2, 3 and 4 are concatenated, passed through a 1x1 convolution and scaled, then added to branch 1; this is essentially a residual structure.

The implementation is as follows:

branch_0 = conv2d_bn(x, 32, 1, name=name_fmt('Conv2d_1x1', 0))
branch_1 = conv2d_bn(x, 32, 1, name=name_fmt('Conv2d_0a_1x1', 1))
branch_1 = conv2d_bn(branch_1, 32, 3, name=name_fmt('Conv2d_0b_3x3', 1))
branch_2 = conv2d_bn(x, 32, 1, name=name_fmt('Conv2d_0a_1x1', 2))
branch_2 = conv2d_bn(branch_2, 32, 3, name=name_fmt('Conv2d_0b_3x3', 2))
branch_2 = conv2d_bn(branch_2, 32, 3, name=name_fmt('Conv2d_0c_3x3', 2))
branches = [branch_0, branch_1, branch_2]

mixed = Concatenate(axis=channel_axis, name=name_fmt('Concatenate'))(branches)
up = conv2d_bn(mixed,K.int_shape(x)[channel_axis],1,activation=None,use_bias=True,
                name=name_fmt('Conv2d_1x1'))
up = Lambda(scaling,
            output_shape=K.int_shape(up)[1:],
            arguments={'scale': scale})(up)
x = add([x, up])
if activation is not None:
    x = Activation(activation, name=name_fmt('Activation'))(x)

3. The Inception-resnet-B structure:

[Figure: Inception-resnet-B structure]
Inception-resnet-B has three branches:
1. The input passed through unchanged (the identity shortcut).
2. One 1x1 convolution with 128 channels.
3. One 1x1 convolution with 128 channels, followed by one 1x7 convolution with 128 channels and one 7x1 convolution with 128 channels.

The outputs of branches 2 and 3 are concatenated, passed through a 1x1 convolution and scaled, then added to branch 1; again, a residual structure.

The implementation is as follows:

branch_0 = conv2d_bn(x, 128, 1, name=name_fmt('Conv2d_1x1', 0))
branch_1 = conv2d_bn(x, 128, 1, name=name_fmt('Conv2d_0a_1x1', 1))
branch_1 = conv2d_bn(branch_1, 128, [1, 7], name=name_fmt('Conv2d_0b_1x7', 1))
branch_1 = conv2d_bn(branch_1, 128, [7, 1], name=name_fmt('Conv2d_0c_7x1', 1))
branches = [branch_0, branch_1]

mixed = Concatenate(axis=channel_axis, name=name_fmt('Concatenate'))(branches)
up = conv2d_bn(mixed,K.int_shape(x)[channel_axis],1,activation=None,use_bias=True,
                name=name_fmt('Conv2d_1x1'))
up = Lambda(scaling,
            output_shape=K.int_shape(up)[1:],
            arguments={'scale': scale})(up)
x = add([x, up])
if activation is not None:
    x = Activation(activation, name=name_fmt('Activation'))(x)

4. The Inception-resnet-C structure:

[Figure: Inception-resnet-C structure]
Inception-resnet-C has three branches:
1. The input passed through unchanged (the identity shortcut).
2. One 1x1 convolution with 192 channels.
3. One 1x1 convolution with 192 channels, followed by one 1x3 convolution with 192 channels and one 3x1 convolution with 192 channels.

The outputs of branches 2 and 3 are concatenated, passed through a 1x1 convolution and scaled, then added to branch 1; again, a residual structure.

The implementation is as follows:

branch_0 = conv2d_bn(x, 192, 1, name=name_fmt('Conv2d_1x1', 0))
branch_1 = conv2d_bn(x, 192, 1, name=name_fmt('Conv2d_0a_1x1', 1))
branch_1 = conv2d_bn(branch_1, 192, [1, 3], name=name_fmt('Conv2d_0b_1x3', 1))
branch_1 = conv2d_bn(branch_1, 192, [3, 1], name=name_fmt('Conv2d_0c_3x1', 1))
branches = [branch_0, branch_1]

mixed = Concatenate(axis=channel_axis, name=name_fmt('Concatenate'))(branches)
up = conv2d_bn(mixed,K.int_shape(x)[channel_axis],1,activation=None,use_bias=True,
                name=name_fmt('Conv2d_1x1'))
up = Lambda(scaling,
            output_shape=K.int_shape(up)[1:],
            arguments={'scale': scale})(up)
x = add([x, up])
if activation is not None:
    x = Activation(activation, name=name_fmt('Activation'))(x)

5. Full code

from functools import partial
from keras.models import Model
from keras.layers import Activation
from keras.layers import BatchNormalization
from keras.layers import Concatenate
from keras.layers import Conv2D
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import GlobalAveragePooling2D
from keras.layers import Input
from keras.layers import Lambda
from keras.layers import MaxPooling2D
from keras.layers import add
from keras import backend as K


def scaling(x, scale):
    return x * scale

def _generate_layer_name(name, branch_idx=None, prefix=None):
    if prefix is None:
        return None
    if branch_idx is None:
        return '_'.join((prefix, name))
    return '_'.join((prefix, 'Branch', str(branch_idx), name))


def conv2d_bn(x,filters,kernel_size,strides=1,padding='same',activation='relu',use_bias=False,name=None):
    x = Conv2D(filters,
               kernel_size,
               strides=strides,
               padding=padding,
               use_bias=use_bias,
               name=name)(x)
    if not use_bias:
        x = BatchNormalization(axis=3, momentum=0.995, epsilon=0.001,
                               scale=False, name=_generate_layer_name('BatchNorm', prefix=name))(x)
    if activation is not None:
        x = Activation(activation, name=_generate_layer_name('Activation', prefix=name))(x)
    return x


def _inception_resnet_block(x, scale, block_type, block_idx, activation='relu'):
    channel_axis = 3
    if block_idx is None:
        prefix = None
    else:
        prefix = '_'.join((block_type, str(block_idx)))
        
    name_fmt = partial(_generate_layer_name, prefix=prefix)

    if block_type == 'Block35':
        branch_0 = conv2d_bn(x, 32, 1, name=name_fmt('Conv2d_1x1', 0))
        branch_1 = conv2d_bn(x, 32, 1, name=name_fmt('Conv2d_0a_1x1', 1))
        branch_1 = conv2d_bn(branch_1, 32, 3, name=name_fmt('Conv2d_0b_3x3', 1))
        branch_2 = conv2d_bn(x, 32, 1, name=name_fmt('Conv2d_0a_1x1', 2))
        branch_2 = conv2d_bn(branch_2, 32, 3, name=name_fmt('Conv2d_0b_3x3', 2))
        branch_2 = conv2d_bn(branch_2, 32, 3, name=name_fmt('Conv2d_0c_3x3', 2))
        branches = [branch_0, branch_1, branch_2]
    elif block_type == 'Block17':
        branch_0 = conv2d_bn(x, 128, 1, name=name_fmt('Conv2d_1x1', 0))
        branch_1 = conv2d_bn(x, 128, 1, name=name_fmt('Conv2d_0a_1x1', 1))
        branch_1 = conv2d_bn(branch_1, 128, [1, 7], name=name_fmt('Conv2d_0b_1x7', 1))
        branch_1 = conv2d_bn(branch_1, 128, [7, 1], name=name_fmt('Conv2d_0c_7x1', 1))
        branches = [branch_0, branch_1]
    elif block_type == 'Block8':
        branch_0 = conv2d_bn(x, 192, 1, name=name_fmt('Conv2d_1x1', 0))
        branch_1 = conv2d_bn(x, 192, 1, name=name_fmt('Conv2d_0a_1x1', 1))
        branch_1 = conv2d_bn(branch_1, 192, [1, 3], name=name_fmt('Conv2d_0b_1x3', 1))
        branch_1 = conv2d_bn(branch_1, 192, [3, 1], name=name_fmt('Conv2d_0c_3x1', 1))
        branches = [branch_0, branch_1]

    mixed = Concatenate(axis=channel_axis, name=name_fmt('Concatenate'))(branches)
    up = conv2d_bn(mixed,K.int_shape(x)[channel_axis],1,activation=None,use_bias=True,
                   name=name_fmt('Conv2d_1x1'))
    up = Lambda(scaling,
                output_shape=K.int_shape(up)[1:],
                arguments={'scale': scale})(up)
    x = add([x, up])
    if activation is not None:
        x = Activation(activation, name=name_fmt('Activation'))(x)
    return x


def InceptionResNetV1(input_shape=(160, 160, 3),
                      classes=128,
                      dropout_keep_prob=0.8):
    channel_axis = 3
    inputs = Input(shape=input_shape)
    # 160,160,3 -> 77,77,64
    x = conv2d_bn(inputs, 32, 3, strides=2, padding='valid', name='Conv2d_1a_3x3')
    x = conv2d_bn(x, 32, 3, padding='valid', name='Conv2d_2a_3x3')
    x = conv2d_bn(x, 64, 3, name='Conv2d_2b_3x3')
    # 77,77,64 -> 38,38,64
    x = MaxPooling2D(3, strides=2, name='MaxPool_3a_3x3')(x)

    # 38,38,64 -> 17,17,256
    x = conv2d_bn(x, 80, 1, padding='valid', name='Conv2d_3b_1x1')
    x = conv2d_bn(x, 192, 3, padding='valid', name='Conv2d_4a_3x3')
    x = conv2d_bn(x, 256, 3, strides=2, padding='valid', name='Conv2d_4b_3x3')

    # 5x Block35 (Inception-ResNet-A block):
    for block_idx in range(1, 6):
        x = _inception_resnet_block(x,scale=0.17,block_type='Block35',block_idx=block_idx)

    # Reduction-A block:
    # 17,17,256 -> 8,8,896
    name_fmt = partial(_generate_layer_name, prefix='Mixed_6a')
    branch_0 = conv2d_bn(x, 384, 3,strides=2,padding='valid',name=name_fmt('Conv2d_1a_3x3', 0))
    branch_1 = conv2d_bn(x, 192, 1, name=name_fmt('Conv2d_0a_1x1', 1))
    branch_1 = conv2d_bn(branch_1, 192, 3, name=name_fmt('Conv2d_0b_3x3', 1))
    branch_1 = conv2d_bn(branch_1,256,3,strides=2,padding='valid',name=name_fmt('Conv2d_1a_3x3', 1))
    branch_pool = MaxPooling2D(3,strides=2,padding='valid',name=name_fmt('MaxPool_1a_3x3', 2))(x)
    branches = [branch_0, branch_1, branch_pool]
    x = Concatenate(axis=channel_axis, name='Mixed_6a')(branches)

    # 10x Block17 (Inception-ResNet-B block):
    for block_idx in range(1, 11):
        x = _inception_resnet_block(x,
                                    scale=0.1,
                                    block_type='Block17',
                                    block_idx=block_idx)

    # Reduction-B block
    # 8,8,896 -> 3,3,1792
    name_fmt = partial(_generate_layer_name, prefix='Mixed_7a')
    branch_0 = conv2d_bn(x, 256, 1, name=name_fmt('Conv2d_0a_1x1', 0))
    branch_0 = conv2d_bn(branch_0,384,3,strides=2,padding='valid',name=name_fmt('Conv2d_1a_3x3', 0))
    branch_1 = conv2d_bn(x, 256, 1, name=name_fmt('Conv2d_0a_1x1', 1))
    branch_1 = conv2d_bn(branch_1,256,3,strides=2,padding='valid',name=name_fmt('Conv2d_1a_3x3', 1))
    branch_2 = conv2d_bn(x, 256, 1, name=name_fmt('Conv2d_0a_1x1', 2))
    branch_2 = conv2d_bn(branch_2, 256, 3, name=name_fmt('Conv2d_0b_3x3', 2))
    branch_2 = conv2d_bn(branch_2,256,3,strides=2,padding='valid',name=name_fmt('Conv2d_1a_3x3', 2))
    branch_pool = MaxPooling2D(3,strides=2,padding='valid',name=name_fmt('MaxPool_1a_3x3', 3))(x)
    branches = [branch_0, branch_1, branch_2, branch_pool]
    x = Concatenate(axis=channel_axis, name='Mixed_7a')(branches)

    # 5x Block8 (Inception-ResNet-C block):
    for block_idx in range(1, 6):
        x = _inception_resnet_block(x,
                                    scale=0.2,
                                    block_type='Block8',
                                    block_idx=block_idx)
    x = _inception_resnet_block(x,scale=1.,activation=None,block_type='Block8',block_idx=6)

    # Global average pooling
    x = GlobalAveragePooling2D(name='AvgPool')(x)
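    # Keras Dropout takes a drop rate, while dropout_keep_prob is a keep probability, hence 1.0 - dropout_keep_prob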
    x = Dropout(1.0 - dropout_keep_prob, name='Dropout')(x)
    # Fully connected layer down to a 128-d embedding
    x = Dense(classes, use_bias=False, name='Bottleneck')(x)
    bn_name = _generate_layer_name('BatchNorm', prefix='Bottleneck')
    x = BatchNormalization(momentum=0.995, epsilon=0.001, scale=False,
                           name=bn_name)(x)

    # Build the model
    model = Model(inputs, x, name='inception_resnet_v1')

    return model
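A quick sanity check of the model (assuming the code above is saved as net/inception.py, which is how the demo below imports it; the weights path matches the one used in the demo):

from net.inception import InceptionResNetV1

model = InceptionResNetV1()
model.load_weights('./model/facenet_keras.h5')  # pretrained FaceNet weights
print(model.output_shape)                       # expected: (None, 128)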

Detecting faces and comparing them:

We use OpenCV's built-in cv2.CascadeClassifier to detect faces and then compare the resulting face embeddings.
The root directory is laid out as follows:
[Figure: directory layout, with the model/, img/ and net/ folders used by the demo below]
The demo file is as follows:

import numpy as np
import cv2
from net.inception import InceptionResNetV1
from keras.models import load_model
#---------------------------------#
#   Image preprocessing:
#   standardize to zero mean / unit variance
#---------------------------------#
def pre_process(x):
    if x.ndim == 4:
        axis = (1, 2, 3)
        size = x[0].size
    elif x.ndim == 3:
        axis = (0, 1, 2)
        size = x.size
    else:
        raise ValueError('Dimension should be 3 or 4')

    mean = np.mean(x, axis=axis, keepdims=True)
    std = np.std(x, axis=axis, keepdims=True)
    std_adj = np.maximum(std, 1.0/np.sqrt(size))
    y = (x - mean) / std_adj
    return y
#---------------------------------#
#   L2 normalization
#---------------------------------#
def l2_normalize(x, axis=-1, epsilon=1e-10):
    output = x / np.sqrt(np.maximum(np.sum(np.square(x), axis=axis, keepdims=True), epsilon))
    return output
#---------------------------------#
#   Compute the 128-d face embedding
#---------------------------------#
def calc_128_vec(model,img):
    face_img = pre_process(img)
    pre = model.predict(face_img)
    pre = l2_normalize(np.concatenate(pre))
    pre = np.reshape(pre,[1,128])
    return pre
#---------------------------------#
#   Detect and crop the face region
#---------------------------------#
def get_face_img(cascade,filepaths,margin):

    aligned_images = []

    img = cv2.imread(filepaths)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

    faces = cascade.detectMultiScale(img,
                                        scaleFactor=1.1,
                                        minNeighbors=3)
    (x, y, w, h) = faces[0]
    print(x, y, w, h)
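    # note: the crop below is not clamped to the image border; if the detected box plus margin
    # extends past the edge, y-margin//2 can go negative and the slice will be wrong (possibly empty)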
    cropped = img[y-margin//2:y+h+margin//2,
                    x-margin//2:x+w+margin//2, :]
    aligned = cv2.resize(cropped, (160, 160))
    aligned_images.append(aligned)
        
    return np.array(aligned_images)
#---------------------------------#
#   Compute distances between face embeddings
#---------------------------------#
def face_distance(face_encodings, face_to_compare):
    if len(face_encodings) == 0:
        return np.empty((0))

    return np.linalg.norm(face_encodings - face_to_compare, axis=1)

if __name__ == "__main__":
    cascade_path = './model/haarcascade_frontalface_alt2.xml'
    cascade = cv2.CascadeClassifier(cascade_path)
    
    image_size = 160
    model = InceptionResNetV1()
    # model.summary()
    model_path = './model/facenet_keras.h5'
    model.load_weights(model_path)
    
    img1 = get_face_img(cascade,r"img/Larry_Page_0000.jpg",10)
    img2 = get_face_img(cascade,r"img/Larry_Page_0001.jpg",10)
    img3 = get_face_img(cascade,r"img/Mark_Zuckerberg_0000.jpg",10)
    
    print(face_distance(calc_128_vec(model,img1),calc_128_vec(model,img2)))
    print(face_distance(calc_128_vec(model,img2),calc_128_vec(model,img3)))

The output is:

[0.6534328]
[1.3536944]
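The distance between the two Larry Page photos (about 0.65) is clearly smaller than the distance between Larry Page and Mark Zuckerberg (about 1.35), so a simple threshold separates same from different. A minimal sketch of that decision (the threshold of 1.0 is only an illustrative assumption; tune it on your own validation data):

def is_same_person(dist, threshold=1.0):
    # embeddings closer than the threshold are treated as the same identity
    return dist < threshold

print(is_same_person(0.6534328))  # True  -> same person
print(is_same_person(1.3536944))  # False -> different people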