Deep Learning: Face Verification and Face Recognition with FaceNet, a Case Study
I. Overview
The field of face recognition covers two main problems: face verification and face recognition.
1. Face verification: a 1:1 matching problem. Given an input image together with a person's ID or name,
the system must verify whether the input photo shows that person.
In face verification you are shown two images and must decide whether they belong to the same person.
The simplest approach is to compare the two images element by element: if the distance between the raw images is below a chosen threshold, they may be the same person!
Instead of using raw images, we can learn an encoding f(img) of each image; comparing the elements of these encodings judges far more reliably whether two pictures belong to the same person.
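To make the contrast concrete, here is a minimal numpy sketch. All numbers are made up for illustration, and f_a / f_b merely stand in for the encodings f(img) a trained network would produce: raw-pixel distances swing wildly with lighting, while encoding distances stay small for the same person.

```python
import numpy as np

# Toy contrast between raw-pixel comparison and encoding comparison.
img_a = np.full((96, 96, 3), 0.50)   # a face under normal lighting
img_b = np.full((96, 96, 3), 0.80)   # the same face, brighter lighting
raw_dist = np.linalg.norm(img_a - img_b)   # large, although the identity is the same

f_a = np.array([0.1, 0.9, 0.3])      # toy low-dimensional "encodings"
f_b = np.array([0.1, 0.8, 0.3])
enc_dist = np.linalg.norm(f_a - f_b)       # small: the encodings agree

threshold = 0.7                       # an illustrative decision threshold
print(raw_dist > threshold, enc_dist < threshold)  # True True
```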
2. Face recognition: a 1:K matching problem. Given a database of K people's faces and an input image,
output the corresponding ID if the person is among the K faces in the database, or report a recognition failure otherwise.
3. What this case study covers:
(1) Building the triplet loss function
(2) Using a pre-trained model to map face images to 128-dimensional encoding vectors
(3) Using these encodings to perform face verification and face recognition
II. FaceNet in Brief
FaceNet trains a neural network that encodes a face image into a 128-dimensional numeric vector, which serves as the extracted feature.
By comparing two such vectors with the Euclidean distance, we can decide whether two pictures belong to the same person.
As the figure shows, with a threshold of 1.1 it is easy to tell whether two photos are of the same person.
Network architecture:
The figure above shows the architecture used in the paper. Its first half is an ordinary convolutional neural network, but unlike typical deep learning architectures, FaceNet does not use a softmax loss; instead it appends an L2 embedding layer.
An embedding can be understood as a mapping: features are mapped from their original feature space into a new one, and the new features are called an embedding of the original ones. Here, the features output by the fully connected layer at the end of the CNN are mapped onto a hypersphere, that is, their L2 norm is normalized to 1; the triplet loss then serves as the supervision signal, yielding the network's loss and gradients.
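The L2 embedding step can be sketched in a few lines of numpy (an illustrative re-implementation, not the FaceNet code itself): dividing each feature vector by its own L2 norm places every embedding on the unit hypersphere.

```python
import numpy as np

# Sketch of the L2 embedding step: each feature vector is divided by its own
# L2 norm, so every embedding lands on the unit hypersphere (||f(x)||_2 == 1).
def l2_embed(features):
    return features / np.linalg.norm(features, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
batch = rng.normal(size=(4, 128))   # 4 raw feature vectors from a "network"
emb = l2_embed(batch)
print(np.allclose(np.linalg.norm(emb, axis=1), 1.0))  # True
```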
III. The Triplet Loss
The triplet loss is the distinctive feature of the paper, so let us examine it in detail.
As the name suggests, the triplet loss is a loss computed from a triplet of three images, consisting of an Anchor (A), a Negative (N), and a Positive (P). Any image can serve as an anchor; an image of the same person is its positive (P), and an image of a different person is its negative (N).
The learning objective of the triplet loss can be pictured as in the figure below:
Before the network has learned anything, the Euclidean distance between A and P may be large while the distance between A and N may be small, as on the left of the figure. As training proceeds, the A-P distance steadily shrinks while the A-N distance steadily grows.
In other words, the network directly learns separability between features: distances within the same class should be as small as possible, while distances between different classes should be as large as possible.
Put simply, training makes inter-class distances larger than intra-class distances.
The loss function is:
L(A, P, N) = max(||f(A) - f(P)||^2 - ||f(A) - f(N)||^2 + α, 0)
The first squared norm is the intra-class distance and the second is the inter-class distance; α is a constant. Optimization uses gradient descent to keep lowering the loss, i.e. to keep shrinking intra-class distances and growing inter-class distances.
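For a single triplet, the loss above can be sketched in numpy as follows (toy 4-dimensional embeddings for readability; a real FaceNet uses 128-dimensional vectors):

```python
import numpy as np

# Per-triplet sketch of the loss defined above:
#     L(A, P, N) = max(||f(A) - f(P)||^2 - ||f(A) - f(N)||^2 + alpha, 0)
def triplet_loss_single(a, p, n, alpha=0.2):
    pos = np.sum((a - p) ** 2)   # intra-class squared distance
    neg = np.sum((a - n) ** 2)   # inter-class squared distance
    return max(pos - neg + alpha, 0.0)

a = np.array([1.0, 0.0, 0.0, 0.0])       # anchor
p = np.array([0.9, 0.1, 0.0, 0.0])       # same person: close to a
n = np.array([0.0, 1.0, 0.0, 0.0])       # different person: far from a
print(triplet_loss_single(a, p, n))      # 0.0, the margin is satisfied
print(triplet_loss_single(a, n, p) > 0)  # True: a bad triplet is penalized
```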
With the loss defined, one practical problem remains: selecting triplets from the training set that are actually useful for training.
Selecting the best triplets
In theory, to train the network most effectively we should pick hard positives and hard negatives to form our triplets.
In practice, however, always choosing the hardest triplets leads to bad local minima, and the network may fail to converge to a good solution.
The approach taken by the Google authors is therefore to use all positive pairs within a mini-batch, which makes training more stable, and for negatives to use semi-hard negatives: those whose distance to the anchor is greater than the anchor-to-positive distance, rather than the very hardest negatives.
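The semi-hard selection rule can be sketched as below. This is a simplified reading of the strategy, and the helper pick_semi_hard_negative with its fallback is illustrative, not from the original code: a semi-hard negative n satisfies ||a - p||^2 < ||a - n||^2 < ||a - p||^2 + α, i.e. it lies farther from the anchor than the positive does, yet still violates the margin.

```python
import numpy as np

# Sketch of semi-hard negative selection inside a mini-batch (simplified,
# illustrative helper): keep only negatives that are farther than the
# positive but still inside the margin, then take the hardest of those.
def pick_semi_hard_negative(a, p, negatives, alpha=0.2):
    pos_d = np.sum((a - p) ** 2)
    semi_hard = [n for n in negatives
                 if pos_d < np.sum((a - n) ** 2) < pos_d + alpha]
    if not semi_hard:                 # fallback: hardest available negative
        return min(negatives, key=lambda n: np.sum((a - n) ** 2))
    return min(semi_hard, key=lambda n: np.sum((a - n) ** 2))

a = np.array([0.0, 0.0])              # anchor
p = np.array([0.3, 0.0])              # positive, squared distance 0.09
negs = [np.array([0.4, 0.0]),         # squared distance 0.16: semi-hard
        np.array([2.0, 0.0])]         # squared distance 4.0: too easy
chosen = pick_semi_hard_negative(a, p, negs)
print(chosen[0])                      # 0.4, the semi-hard negative wins
```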
IV. Face Verification: Step-by-Step
(I) Encoding a face image into a 128-dimensional vector
1. Encoding images with a convolutional neural network
The FaceNet model takes a great deal of data and time to train, so, following common practice in applied deep learning,
we load weights that others have already pre-trained. The architecture follows the Inception model of Szegedy et al.
An implementation of this Inception network is provided; you can see how it is built in the file inception_v2.py.
The code of inception_v2.py is as follows:
import numpy as np
import tensorflow as tf
import os
from numpy import genfromtxt
from keras import backend as K
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.models import Model
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
import fr_utils
from keras.layers.core import Lambda, Flatten, Dense
import tensorflow.contrib.slim as slim
def inception_block_1a(X):
    """
    Implementation of an inception block
    """
    X_3x3 = Conv2D(96, (1, 1), data_format='channels_first', name='inception_3a_3x3_conv1')(X)
    X_3x3 = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3a_3x3_bn1')(X_3x3)
    X_3x3 = Activation('relu')(X_3x3)
    X_3x3 = ZeroPadding2D(padding=(1, 1), data_format='channels_first')(X_3x3)
    X_3x3 = Conv2D(128, (3, 3), data_format='channels_first', name='inception_3a_3x3_conv2')(X_3x3)
    X_3x3 = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3a_3x3_bn2')(X_3x3)
    X_3x3 = Activation('relu')(X_3x3)

    X_5x5 = Conv2D(16, (1, 1), data_format='channels_first', name='inception_3a_5x5_conv1')(X)
    X_5x5 = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3a_5x5_bn1')(X_5x5)
    X_5x5 = Activation('relu')(X_5x5)
    X_5x5 = ZeroPadding2D(padding=(2, 2), data_format='channels_first')(X_5x5)
    X_5x5 = Conv2D(32, (5, 5), data_format='channels_first', name='inception_3a_5x5_conv2')(X_5x5)
    X_5x5 = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3a_5x5_bn2')(X_5x5)
    X_5x5 = Activation('relu')(X_5x5)

    X_pool = MaxPooling2D(pool_size=3, strides=2, data_format='channels_first')(X)
    X_pool = Conv2D(32, (1, 1), data_format='channels_first', name='inception_3a_pool_conv')(X_pool)
    X_pool = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3a_pool_bn')(X_pool)
    X_pool = Activation('relu')(X_pool)
    X_pool = ZeroPadding2D(padding=((3, 4), (3, 4)), data_format='channels_first')(X_pool)

    X_1x1 = Conv2D(64, (1, 1), data_format='channels_first', name='inception_3a_1x1_conv')(X)
    X_1x1 = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3a_1x1_bn')(X_1x1)
    X_1x1 = Activation('relu')(X_1x1)

    # CONCAT
    inception = concatenate([X_3x3, X_5x5, X_pool, X_1x1], axis=1)
    return inception
def inception_block_1b(X):
    X_3x3 = Conv2D(96, (1, 1), data_format='channels_first', name='inception_3b_3x3_conv1')(X)
    X_3x3 = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3b_3x3_bn1')(X_3x3)
    X_3x3 = Activation('relu')(X_3x3)
    X_3x3 = ZeroPadding2D(padding=(1, 1), data_format='channels_first')(X_3x3)
    X_3x3 = Conv2D(128, (3, 3), data_format='channels_first', name='inception_3b_3x3_conv2')(X_3x3)
    X_3x3 = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3b_3x3_bn2')(X_3x3)
    X_3x3 = Activation('relu')(X_3x3)

    X_5x5 = Conv2D(32, (1, 1), data_format='channels_first', name='inception_3b_5x5_conv1')(X)
    X_5x5 = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3b_5x5_bn1')(X_5x5)
    X_5x5 = Activation('relu')(X_5x5)
    X_5x5 = ZeroPadding2D(padding=(2, 2), data_format='channels_first')(X_5x5)
    X_5x5 = Conv2D(64, (5, 5), data_format='channels_first', name='inception_3b_5x5_conv2')(X_5x5)
    X_5x5 = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3b_5x5_bn2')(X_5x5)
    X_5x5 = Activation('relu')(X_5x5)

    X_pool = AveragePooling2D(pool_size=(3, 3), strides=(3, 3), data_format='channels_first')(X)
    X_pool = Conv2D(64, (1, 1), data_format='channels_first', name='inception_3b_pool_conv')(X_pool)
    X_pool = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3b_pool_bn')(X_pool)
    X_pool = Activation('relu')(X_pool)
    X_pool = ZeroPadding2D(padding=(4, 4), data_format='channels_first')(X_pool)

    X_1x1 = Conv2D(64, (1, 1), data_format='channels_first', name='inception_3b_1x1_conv')(X)
    X_1x1 = BatchNormalization(axis=1, epsilon=0.00001, name='inception_3b_1x1_bn')(X_1x1)
    X_1x1 = Activation('relu')(X_1x1)

    inception = concatenate([X_3x3, X_5x5, X_pool, X_1x1], axis=1)
    return inception
def inception_block_1c(X):
    X_3x3 = fr_utils.conv2d_bn(X,
                               layer='inception_3c_3x3',
                               cv1_out=128,
                               cv1_filter=(1, 1),
                               cv2_out=256,
                               cv2_filter=(3, 3),
                               cv2_strides=(2, 2),
                               padding=(1, 1))
    X_5x5 = fr_utils.conv2d_bn(X,
                               layer='inception_3c_5x5',
                               cv1_out=32,
                               cv1_filter=(1, 1),
                               cv2_out=64,
                               cv2_filter=(5, 5),
                               cv2_strides=(2, 2),
                               padding=(2, 2))
    X_pool = MaxPooling2D(pool_size=3, strides=2, data_format='channels_first')(X)
    X_pool = ZeroPadding2D(padding=((0, 1), (0, 1)), data_format='channels_first')(X_pool)

    inception = concatenate([X_3x3, X_5x5, X_pool], axis=1)
    return inception
def inception_block_2a(X):
    X_3x3 = fr_utils.conv2d_bn(X,
                               layer='inception_4a_3x3',
                               cv1_out=96,
                               cv1_filter=(1, 1),
                               cv2_out=192,
                               cv2_filter=(3, 3),
                               cv2_strides=(1, 1),
                               padding=(1, 1))
    X_5x5 = fr_utils.conv2d_bn(X,
                               layer='inception_4a_5x5',
                               cv1_out=32,
                               cv1_filter=(1, 1),
                               cv2_out=64,
                               cv2_filter=(5, 5),
                               cv2_strides=(1, 1),
                               padding=(2, 2))
    X_pool = AveragePooling2D(pool_size=(3, 3), strides=(3, 3), data_format='channels_first')(X)
    X_pool = fr_utils.conv2d_bn(X_pool,
                                layer='inception_4a_pool',
                                cv1_out=128,
                                cv1_filter=(1, 1),
                                padding=(2, 2))
    X_1x1 = fr_utils.conv2d_bn(X,
                               layer='inception_4a_1x1',
                               cv1_out=256,
                               cv1_filter=(1, 1))
    inception = concatenate([X_3x3, X_5x5, X_pool, X_1x1], axis=1)
    return inception
def inception_block_2b(X):
    # inception4e
    X_3x3 = fr_utils.conv2d_bn(X,
                               layer='inception_4e_3x3',
                               cv1_out=160,
                               cv1_filter=(1, 1),
                               cv2_out=256,
                               cv2_filter=(3, 3),
                               cv2_strides=(2, 2),
                               padding=(1, 1))
    X_5x5 = fr_utils.conv2d_bn(X,
                               layer='inception_4e_5x5',
                               cv1_out=64,
                               cv1_filter=(1, 1),
                               cv2_out=128,
                               cv2_filter=(5, 5),
                               cv2_strides=(2, 2),
                               padding=(2, 2))
    X_pool = MaxPooling2D(pool_size=3, strides=2, data_format='channels_first')(X)
    X_pool = ZeroPadding2D(padding=((0, 1), (0, 1)), data_format='channels_first')(X_pool)

    inception = concatenate([X_3x3, X_5x5, X_pool], axis=1)
    return inception
def inception_block_3a(X):
    X_3x3 = fr_utils.conv2d_bn(X,
                               layer='inception_5a_3x3',
                               cv1_out=96,
                               cv1_filter=(1, 1),
                               cv2_out=384,
                               cv2_filter=(3, 3),
                               cv2_strides=(1, 1),
                               padding=(1, 1))
    X_pool = AveragePooling2D(pool_size=(3, 3), strides=(3, 3), data_format='channels_first')(X)
    X_pool = fr_utils.conv2d_bn(X_pool,
                                layer='inception_5a_pool',
                                cv1_out=96,
                                cv1_filter=(1, 1),
                                padding=(1, 1))
    X_1x1 = fr_utils.conv2d_bn(X,
                               layer='inception_5a_1x1',
                               cv1_out=256,
                               cv1_filter=(1, 1))
    inception = concatenate([X_3x3, X_pool, X_1x1], axis=1)
    return inception
def inception_block_3b(X):
    X_3x3 = fr_utils.conv2d_bn(X,
                               layer='inception_5b_3x3',
                               cv1_out=96,
                               cv1_filter=(1, 1),
                               cv2_out=384,
                               cv2_filter=(3, 3),
                               cv2_strides=(1, 1),
                               padding=(1, 1))
    X_pool = MaxPooling2D(pool_size=3, strides=2, data_format='channels_first')(X)
    X_pool = fr_utils.conv2d_bn(X_pool,
                                layer='inception_5b_pool',
                                cv1_out=96,
                                cv1_filter=(1, 1))
    X_pool = ZeroPadding2D(padding=(1, 1), data_format='channels_first')(X_pool)

    X_1x1 = fr_utils.conv2d_bn(X,
                               layer='inception_5b_1x1',
                               cv1_out=256,
                               cv1_filter=(1, 1))
    inception = concatenate([X_3x3, X_pool, X_1x1], axis=1)
    return inception
def faceRecoModel(input_shape):
    """
    The FaceNet network, built from Inception blocks.
    Arguments:
    input_shape -- shape of the images of the dataset
    Returns:
    model -- a Keras functional Model() instance
    """
    # Define the input as a tensor with shape input_shape
    X_input = Input(input_shape)

    # Zero-Padding
    X = ZeroPadding2D((3, 3))(X_input)

    # First Block
    X = Conv2D(64, (7, 7), strides=(2, 2), name='conv1')(X)
    X = BatchNormalization(axis=1, name='bn1')(X)
    X = Activation('relu')(X)

    # Zero-Padding + MAXPOOL
    X = ZeroPadding2D((1, 1))(X)
    X = MaxPooling2D((3, 3), strides=2)(X)

    # Second Block
    X = Conv2D(64, (1, 1), strides=(1, 1), name='conv2')(X)
    X = BatchNormalization(axis=1, epsilon=0.00001, name='bn2')(X)
    X = Activation('relu')(X)

    # Zero-Padding
    X = ZeroPadding2D((1, 1))(X)

    # Third Block
    X = Conv2D(192, (3, 3), strides=(1, 1), name='conv3')(X)
    X = BatchNormalization(axis=1, epsilon=0.00001, name='bn3')(X)
    X = Activation('relu')(X)

    # Zero-Padding + MAXPOOL
    X = ZeroPadding2D((1, 1))(X)
    X = MaxPooling2D(pool_size=3, strides=2)(X)

    # Inception 1: a/b/c
    X = inception_block_1a(X)
    X = inception_block_1b(X)
    X = inception_block_1c(X)

    # Inception 2: a/b
    X = inception_block_2a(X)
    X = inception_block_2b(X)

    # Inception 3: a/b
    X = inception_block_3a(X)
    X = inception_block_3b(X)

    # Top layer
    X = AveragePooling2D(pool_size=(3, 3), strides=(1, 1), data_format='channels_first')(X)
    X = Flatten()(X)
    X = Dense(128, name='dense_layer')(X)

    # L2 normalization
    X = Lambda(lambda x: K.l2_normalize(x, axis=1))(X)

    # Create model instance
    model = Model(inputs=X_input, outputs=X, name='FaceRecoModel')
    return model
The conv2d_bn function in fr_utils.py is as follows:
def conv2d_bn(x,
              layer=None,
              cv1_out=None,
              cv1_filter=(1, 1),
              cv1_strides=(1, 1),
              cv2_out=None,
              cv2_filter=(3, 3),
              cv2_strides=(1, 1),
              padding=None):
    num = '' if cv2_out is None else '1'
    tensor = Conv2D(cv1_out, cv1_filter, strides=cv1_strides, data_format='channels_first',
                    name=layer + '_conv' + num)(x)
    tensor = BatchNormalization(axis=1, epsilon=0.00001, name=layer + '_bn' + num)(tensor)
    tensor = Activation('relu')(tensor)
    if padding is None:
        return tensor
    tensor = ZeroPadding2D(padding=padding, data_format='channels_first')(tensor)
    if cv2_out is None:
        return tensor
    tensor = Conv2D(cv2_out, cv2_filter, strides=cv2_strides, data_format='channels_first',
                    name=layer + '_conv' + '2')(tensor)
    tensor = BatchNormalization(axis=1, epsilon=0.00001, name=layer + '_bn' + '2')(tensor)
    tensor = Activation('relu')(tensor)
    return tensor
Note: this FaceNet network takes 96x96 RGB color images as input. Specifically, a batch of m face images forms a tensor
of shape (m, n_C, n_H, n_W) = (m, 3, 96, 96), and the output is a matrix of shape (m, 128): each input face image is converted into a 128-dimensional vector.
The final fully connected layer has 128 units, which ensures the output is a 128-dimensional encoding vector. Two encoded face images are then compared as follows:
compute the Euclidean distance between the two encodings and compare it against a chosen threshold to decide whether the two pictures show the same person.
Training the system ensures that the encodings of two images of the same person are very similar (small distance), while the encodings of images of two different people differ greatly (distance as large as possible).
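The comparison step can be sketched as follows. This is purely illustrative: the encodings are random unit vectors rather than output of the actual model, and the 0.7 threshold is a toy value.

```python
import numpy as np

# Two 128-d encodings are judged "same person" iff their Euclidean distance
# falls below a tuned threshold (value here is illustrative).
def is_same_person(enc1, enc2, threshold=0.7):
    return np.linalg.norm(enc1 - enc2) < threshold

rng = np.random.default_rng(0)
enc_a = rng.normal(size=128)
enc_a = enc_a / np.linalg.norm(enc_a)          # unit norm, as FaceNet outputs
enc_b = enc_a + 0.01 * rng.normal(size=128)    # near-duplicate encoding
enc_c = -enc_a                                 # very different encoding
print(is_same_person(enc_a, enc_b), is_same_person(enc_a, enc_c))  # True False
```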
(II) Defining the triplet loss
The triplet loss "pulls" the encodings of two images of the same person (anchor and positive) closer together (minimizing their distance),
while "pushing" the encodings of images of two different people (anchor and negative) further apart (maximizing their distance).
The schematic below illustrates the idea; from left to right the pictures are called Anchor (A), Positive (P), and Negative (N).
For an image x, we write its encoding as f(x), where f is the function computed by the neural network.
Training uses triplets of images (A, P, N), where:
A: the "anchor" image, a picture of some person; P: the "positive" image, a picture of the same person as the anchor;
N: the "negative" image, a picture of a different person.
These triplets are drawn from our training set; we write (A^(i), P^(i), N^(i)) for the i-th training example.
Training must ensure that the anchor "A" is closer to the "positive" than to the "negative"
by at least a margin α (a hyperparameter), expressed by the following inequality:
||f(A^(i)) - f(P^(i))||^2 + α < ||f(A^(i)) - f(N^(i))||^2
We therefore minimize the following triplet objective:
J = Σ_i max(||f(A^(i)) - f(P^(i))||^2 - ||f(A^(i)) - f(N^(i))||^2 + α, 0)
The first squared norm (term 1) is the squared Euclidean distance between a triplet's anchor "A" and positive "P"; we want this value to be small.
The second squared norm (term 2) is the squared Euclidean distance between the anchor "A" and negative "N"; we want this value to be fairly large, which is why it carries a minus sign.
α is called the margin, a hyperparameter that must be tuned by hand; here we choose α = 0.2.
To summarize, the triplet loss is computed in the following four steps:
(1) Compute the Euclidean distance between the encoded "anchor" and "positive" images
(2) Compute the Euclidean distance between the encoded "anchor" and "negative" images
(3) For each training example, take the difference of the two distances plus the margin α
(4) Take the maximum of that quantity and zero, and sum over the training examples to obtain the objective
Building and testing the triplet loss:
# Stack multiple layers: a Sequential model can be built by passing it a list of layers
from keras.models import Sequential
from keras.models import Model
from keras.layers.merge import Concatenate
from keras.layers import Conv2D, ZeroPadding2D, Activation, Input, concatenate
from keras.layers.normalization import BatchNormalization
from keras.layers.pooling import MaxPooling2D, AveragePooling2D
# Flatten turns a multi-dimensional input into 1-D, typically between conv and dense layers
# Dense is the standard fully connected layer
from keras.layers.core import Lambda, Flatten, Dense
from keras.initializers import glorot_uniform  # initializer based on the uniform distribution
from keras.engine.topology import Layer
from keras import backend as K
K.set_image_data_format("channels_first")  # set the image dimension ordering
# import cv2
import os
import numpy as np
from numpy import genfromtxt  # load array data from text files
import pandas as pd
import tensorflow as tf
# from fr_utils import *
# from inception_v2 import *
# When an ndarray holds too much data, the console abbreviates the printed
# output with ellipses; raise the print threshold so arrays print in full.
np.set_printoptions(threshold=np.nan)
# FRmodel = faceRecoModel(input_shape=(3, 96, 96))
# Building the triplet loss
def triplet_loss(y_true, y_pred, alpha=0.2):
    """
    Arguments:
    y_true -- true labels; required by Keras's loss signature but unused here
    y_pred -- Python list containing three objects:
        anchor -- encodings of the anchor images, shape (None, 128)
        positive -- encodings of the positive images, shape (None, 128)
        negative -- encodings of the negative images, shape (None, 128)
    alpha -- the margin (a hyperparameter)
    Returns:
    loss -- a real number, the loss value
    """
    anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
    # 1. Distance between the encoded "anchor" and "positive" images
    # tf.subtract: subtraction; tf.square: element-wise square;
    # reduce_sum(x, axis=-1): sum over the feature dimension
    pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis=-1)
    # 2. Distance between the encoded "anchor" and "negative" images
    neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis=-1)
    # 3. Subtract the two distances and add alpha
    basic_loss = pos_dist - neg_dist + alpha
    # 4. Take the maximum of basic_loss and 0, then sum over the training examples
    loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))
    return loss
# Test
with tf.Session() as sess:
    tf.set_random_seed(1)
    y_true = (None, None, None)
    y_pred = (tf.random_normal([3, 128], mean=6, stddev=0.1, seed=1),
              tf.random_normal([3, 128], mean=1, stddev=1, seed=1),
              tf.random_normal([3, 128], mean=3, stddev=4, seed=1))
    loss = triplet_loss(y_true, y_pred)
    print("loss = ", sess.run(loss))
# Output:
# loss = 528.1432
(III) Loading the pre-trained model
FaceNet is trained by minimizing the triplet loss, which requires a great deal of data and computation. Here, instead of training it ourselves,
we load a pre-trained model as follows (this takes a few minutes to run):
FRmodel.compile(optimizer = 'adam', loss = triplet_loss, metrics = ['accuracy'])
load_weights_from_FaceNet(FRmodel)
The load_weights_from_FaceNet function is implemented as follows:
WEIGHTS = [
    'conv1', 'bn1', 'conv2', 'bn2', 'conv3', 'bn3',
    'inception_3a_1x1_conv', 'inception_3a_1x1_bn',
    'inception_3a_pool_conv', 'inception_3a_pool_bn',
    'inception_3a_5x5_conv1', 'inception_3a_5x5_conv2', 'inception_3a_5x5_bn1', 'inception_3a_5x5_bn2',
    'inception_3a_3x3_conv1', 'inception_3a_3x3_conv2', 'inception_3a_3x3_bn1', 'inception_3a_3x3_bn2',
    'inception_3b_3x3_conv1', 'inception_3b_3x3_conv2', 'inception_3b_3x3_bn1', 'inception_3b_3x3_bn2',
    'inception_3b_5x5_conv1', 'inception_3b_5x5_conv2', 'inception_3b_5x5_bn1', 'inception_3b_5x5_bn2',
    'inception_3b_pool_conv', 'inception_3b_pool_bn',
    'inception_3b_1x1_conv', 'inception_3b_1x1_bn',
    'inception_3c_3x3_conv1', 'inception_3c_3x3_conv2', 'inception_3c_3x3_bn1', 'inception_3c_3x3_bn2',
    'inception_3c_5x5_conv1', 'inception_3c_5x5_conv2', 'inception_3c_5x5_bn1', 'inception_3c_5x5_bn2',
    'inception_4a_3x3_conv1', 'inception_4a_3x3_conv2', 'inception_4a_3x3_bn1', 'inception_4a_3x3_bn2',
    'inception_4a_5x5_conv1', 'inception_4a_5x5_conv2', 'inception_4a_5x5_bn1', 'inception_4a_5x5_bn2',
    'inception_4a_pool_conv', 'inception_4a_pool_bn',
    'inception_4a_1x1_conv', 'inception_4a_1x1_bn',
    'inception_4e_3x3_conv1', 'inception_4e_3x3_conv2', 'inception_4e_3x3_bn1', 'inception_4e_3x3_bn2',
    'inception_4e_5x5_conv1', 'inception_4e_5x5_conv2', 'inception_4e_5x5_bn1', 'inception_4e_5x5_bn2',
    'inception_5a_3x3_conv1', 'inception_5a_3x3_conv2', 'inception_5a_3x3_bn1', 'inception_5a_3x3_bn2',
    'inception_5a_pool_conv', 'inception_5a_pool_bn',
    'inception_5a_1x1_conv', 'inception_5a_1x1_bn',
    'inception_5b_3x3_conv1', 'inception_5b_3x3_conv2', 'inception_5b_3x3_bn1', 'inception_5b_3x3_bn2',
    'inception_5b_pool_conv', 'inception_5b_pool_bn',
    'inception_5b_1x1_conv', 'inception_5b_1x1_bn',
    'dense_layer'
]
def load_weights():
    # Set weights path
    dirPath = './weights'
    fileNames = filter(lambda f: not f.startswith('.'), os.listdir(dirPath))
    paths = {}
    weights_dict = {}
    for n in fileNames:
        paths[n.replace('.csv', '')] = dirPath + '/' + n
    for name in WEIGHTS:
        if 'conv' in name:
            conv_w = genfromtxt(paths[name + '_w'], delimiter=',', dtype=None)
            # conv_shape is a dict mapping each conv layer name to its filter
            # shape; it is defined elsewhere in fr_utils
            conv_w = np.reshape(conv_w, conv_shape[name])
            conv_w = np.transpose(conv_w, (2, 3, 1, 0))
            conv_b = genfromtxt(paths[name + '_b'], delimiter=',', dtype=None)
            weights_dict[name] = [conv_w, conv_b]
        elif 'bn' in name:
            bn_w = genfromtxt(paths[name + '_w'], delimiter=',', dtype=None)
            bn_b = genfromtxt(paths[name + '_b'], delimiter=',', dtype=None)
            bn_m = genfromtxt(paths[name + '_m'], delimiter=',', dtype=None)
            bn_v = genfromtxt(paths[name + '_v'], delimiter=',', dtype=None)
            weights_dict[name] = [bn_w, bn_b, bn_m, bn_v]
        elif 'dense' in name:
            dense_w = genfromtxt(dirPath + '/dense_w.csv', delimiter=',', dtype=None)
            dense_w = np.reshape(dense_w, (128, 736))
            dense_w = np.transpose(dense_w, (1, 0))
            dense_b = genfromtxt(dirPath + '/dense_b.csv', delimiter=',', dtype=None)
            weights_dict[name] = [dense_w, dense_b]
    return weights_dict
def load_weights_from_FaceNet(FRmodel):
    # Load weights from csv files (exported from the OpenFace torch model)
    weights = WEIGHTS
    weights_dict = load_weights()
    # Set layer weights of the model
    for name in weights:
        if FRmodel.get_layer(name) is not None:
            FRmodel.get_layer(name).set_weights(weights_dict[name])
(IV) Applying the face verification model
We build a database containing an encoding vector for each person, using the function img_to_encoding to obtain each image's encoding;
it essentially runs a forward pass of the model on the specified image. The code is as follows:
def img_to_encoding(image_path, model):
    img1 = cv2.imread(image_path, 1)
    img = img1[..., ::-1]  # BGR -> RGB
    img = np.around(np.transpose(img, (2, 0, 1)) / 255.0, decimals=12)
    x_train = np.array([img])
    embedding = model.predict_on_batch(x_train)
    return embedding
Running this code builds the database (a Python dictionary) that maps each person's name to the 128-dimensional encoding of their face image.
Now, when someone shows up at your front door and swipes their ID card (thereby giving you their name), you can look up their encoding
in the database and use it to verify whether the person standing at the door matches that identity.
The function verify() checks whether the front-door camera image (image_path) matches the claimed identity in the database, in the following steps:
(1) Compute the encoding of the image at image_path.
(2) Compute the Euclidean distance between this encoding and the stored encoding of the person being verified. (3) If the distance is below the threshold (0.75 here), the identity is verified; otherwise verification fails.
# Face verification
def verify(image_path, identity, database, model):
    """
    Verify whether the encoded face image at "image_path" matches the claimed identity.
    Arguments:
    image_path -- path to the image
    identity -- string, name of the person whose identity is being verified
    database -- Python dictionary mapping names (strings) to image encodings (vectors)
    model -- the Keras Inception model
    Returns:
    dist -- distance between the encoding of image_path and the stored encoding
    door_open -- True if the door opens, False otherwise
    """
    # 1: Compute the image's encoding with img_to_encoding()
    encoding = img_to_encoding(image_path, model)
    # 2: Compute the Euclidean distance between the two encodings being compared
    dist = np.linalg.norm(encoding - database[identity], ord=2)  # L2 norm / Euclidean distance
    # print("dist=", dist)
    # 3: Open the door if the distance is below 0.75, otherwise keep it shut
    if dist < 0.75:
        print("It's " + str(identity) + ", welcome home!")
        door_open = True
    else:
        print("It's not " + str(identity) + ", please go away")
        door_open = False
    return dist, door_open
database = {}
dist_list = []
siml_list = []
# Loop over the paired images in the two folders, encode them, and judge by Euclidean distance
for i in range(1, 51):
    filename1 = str(i)
    filename2 = str(i) + "a"
    database[filename1] = img_to_encoding("images1/" + filename1 + ".jpg", FRmodel)
    dist, door_open = verify("images2/" + filename2 + ".jpg", filename1, database, FRmodel)
    # Append the distance and the door status to the two lists
    dist_list.append(dist)
    siml_list.append(door_open)
print("dist_list = ", dist_list)
print("siml_list = ", siml_list)
print("Accuracy:", siml_list.count(True) / len(siml_list))
# Save the distances and door statuses to a csv file with pandas
distance = pd.DataFrame(data={"similar": siml_list, "distance": dist_list})
distance.to_csv("E:/Image_Similarity/dist_siml.csv")
V. Face Recognition: Step-by-Step
To be continued