Artificial Intelligence and Machine Learning: Training Smile and Mask Recognition Models in Keras, with Real-Time Smile and Mask Detection
1. Environment Setup
Install Anaconda3
Anaconda is a distribution that ships 180+ scientific packages and their dependencies, including conda, NumPy, SciPy, and IPython Notebook. conda is the tool that manages packages, dependencies, and environments. Supported languages: Python, R, Ruby, Lua, Scala, Java, JavaScript, C/C++, FORTRAN. Supported platforms: Windows, macOS, Linux. It lets you quickly install, run, and upgrade packages and their dependencies, and easily create, save, load, and switch environments on your machine.
Without further ado, here is a download mirror for Anaconda: Anaconda mirror usage guide
Once the download finishes, just run the installer.
Install the dlib library
Dlib is a modern C++ toolkit containing machine learning algorithms and tools for building complex C++ software that solves real-world problems. It is widely used in industry and academia, including robotics, embedded devices, mobile phones, and large high-performance computing environments. Dlib's open-source license lets you use it free of charge in any application.
Download links for the various dlib versions
Make sure to download the version that matches your Python installation.
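For example (a hedged sketch: the wheel filename below is only illustrative, so match it to your own Python version and platform):
python --version    # check your Python version first, e.g. 3.7 -> cp37
pip install dlib-<version>-cp37-cp37m-win_amd64.whl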
Install the OpenCV library
OpenCV is a cross-platform computer vision and machine learning library released under the BSD license (open source) that runs on Linux, Windows, Android, and macOS. It is lightweight and efficient, consisting of a set of C functions and a small number of C++ classes, provides interfaces for Python, Ruby, MATLAB, and other languages, and implements many general-purpose algorithms in image processing and computer vision.
Install it directly with pip install opencv-python.
Install TensorFlow
Run the following command in the Anaconda Prompt:
pip install --index-url https://pypi.douban.com/simple tensorflow
Install Keras
Run the following command:
conda install keras
Verify the environment
Test whether everything is installed correctly.
If the message "Using TensorFlow backend" appears when importing Keras (in my case caused by a TensorFlow version issue), it can be resolved by selecting the backend explicitly:
import os
os.environ['KERAS_BACKEND'] = 'tensorflow'
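To confirm the stack works end to end, a minimal check such as the following can be run (the module names are standard; your version numbers will differ):
import tensorflow, keras, cv2, dlib
print('TensorFlow:', tensorflow.__version__)
print('Keras:', keras.__version__)
print('OpenCV:', cv2.__version__)
print('dlib:', dlib.__version__)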
2. Smile Detection
Download the smile dataset (GENKI-4K)
Split it into training, validation, and test sets:
import tensorflow
import keras
import os,shutil
os.environ['KERAS_BACKEND']='tensorflow'
# The path to the directory where the original
# dataset was uncompressed
original_dataset_dir = 'genki4k'
# The directory where we will
# store our smaller dataset
base_dir = '笑脸数据'
os.mkdir(base_dir)
# Directories for our training,
# validation and test splits
train_dir = os.path.join(base_dir, 'train')
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
os.mkdir(test_dir)
# Directory with our training smile pictures
train_smile_dir = os.path.join(train_dir, 'smile')
os.mkdir(train_smile_dir)
# Directory with our training unsmile pictures
train_unsmile_dir = os.path.join(train_dir, 'unsmile')
os.mkdir(train_unsmile_dir)
# Directory with our validation smile pictures
validation_smile_dir = os.path.join(validation_dir, 'smile')
os.mkdir(validation_smile_dir)
# Directory with our validation unsmile pictures
validation_unsmile_dir = os.path.join(validation_dir, 'unsmile')
os.mkdir(validation_unsmile_dir)
# Directory with our test smile pictures
test_smile_dir = os.path.join(test_dir, 'smile')
os.mkdir(test_smile_dir)
# Directory with our test unsmile pictures
test_unsmile_dir = os.path.join(test_dir, 'unsmile')
os.mkdir(test_unsmile_dir)
This creates the folder structure for the smile dataset; next, sort the smiling and non-smiling images into the corresponding smile/unsmile folders, either by hand or with a script like the sketch below.
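Sorting 4000 images by hand is tedious. If your copy of GENKI-4K keeps its images under genki4k/files and ships a labels.txt whose i-th line starts with the smile flag for the i-th image (1 = smile, 0 = non-smile) — both assumptions about your particular copy — a sketch like this can do the split automatically:
import os, shutil
# assumed layout: genki4k/files/file0001.jpg ... plus genki4k/labels.txt
with open(os.path.join(original_dataset_dir, 'labels.txt')) as f:
    labels = [int(line.split()[0]) for line in f]
files = sorted(os.listdir(os.path.join(original_dataset_dir, 'files')))
for i, (fname, lab) in enumerate(zip(files, labels)):
    cls = 'smile' if lab == 1 else 'unsmile'
    if i < 0.6 * len(files):      # first 60% -> train (arbitrary split)
        dst_dir = os.path.join(train_dir, cls)
    elif i < 0.8 * len(files):    # next 20% -> validation
        dst_dir = os.path.join(validation_dir, cls)
    else:                         # last 20% -> test
        dst_dir = os.path.join(test_dir, cls)
    shutil.copy(os.path.join(original_dataset_dir, 'files', fname), dst_dir)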
Print the number of images in each folder:
print('total training smile images:', len(os.listdir(train_smile_dir)))
print('total training unsmile images:', len(os.listdir(train_unsmile_dir)))
print('total testing smile images:', len(os.listdir(test_smile_dir)))
print('total testing unsmile images:', len(os.listdir(test_unsmile_dir)))
print('total validation smile images:', len(os.listdir(validation_smile_dir)))
print('total validation unsmile images:', len(os.listdir(validation_unsmile_dir)))
Build the model
Build the model and inspect its summary:
from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
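With the layer sizes above, model.summary() should report 3,453,121 trainable parameters, the vast majority of them in the Dense layer that maps the flattened 7×7×128 feature map (6272 values) to 512 units.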
Normalize the images
from keras import optimizers
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])
from keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
validation_datagen=ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
        # the target directory
        train_dir,
        # all images will be resized to 150x150
        target_size=(150, 150),
        batch_size=20,
        # Since we use binary_crossentropy loss, we need binary labels
        class_mode='binary')
validation_generator = validation_datagen.flow_from_directory(
        validation_dir,
        target_size=(150, 150),
        batch_size=20,
        class_mode='binary')
test_generator = test_datagen.flow_from_directory(
        test_dir,
        target_size=(150, 150),
        batch_size=20,
        class_mode='binary')
for data_batch, labels_batch in train_generator:
    print('data batch shape:', data_batch.shape)
    print('labels batch shape:', labels_batch.shape)
    break
train_generator.class_indices
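flow_from_directory assigns class indices in alphabetical order of the folder names, so this should print {'smile': 0, 'unsmile': 1}; that is why, in the prediction code further down, an output below 0.5 is read as smile and above 0.5 as unsmile.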
Train the model
You can tune the epochs value yourself: the larger it is, the longer training takes, but training accuracy tends to be higher.
history = model.fit_generator(
      train_generator,
      steps_per_epoch=100,
      epochs=10,
      validation_data=validation_generator,
      validation_steps=50)
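With batch_size=20, steps_per_epoch=100 means 2000 images per epoch. If your split holds a different number of images, the step counts can be derived from the generators rather than hard-coded (a small sketch using the samples and batch_size attributes that the directory iterators expose):
steps_per_epoch = train_generator.samples // train_generator.batch_size
validation_steps = validation_generator.samples // validation_generator.batch_size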
Save the trained model
model.save('smileAndUnsmile_1.h5')
Plot the accuracy and loss curves for the training and validation sets:
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
Apply data augmentation and view the effect of the transformations:
datagen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
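For reference: rotation_range is in degrees; width_shift_range and height_shift_range are fractions of the image width and height; shear_range and zoom_range are likewise fractional intensities; horizontal_flip randomly mirrors images; and fill_mode='nearest' fills in pixels exposed by a transform with the nearest original pixel.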
import matplotlib.pyplot as plt
# This module contains image preprocessing utilities
from keras.preprocessing import image
fnames = [os.path.join(train_smile_dir, fname) for fname in os.listdir(train_smile_dir)]
img_path = fnames[8]
img = image.load_img(img_path, target_size=(150, 150))
x = image.img_to_array(img)
x = x.reshape((1,) + x.shape)
i = 0
for batch in datagen.flow(x, batch_size=1):
    plt.figure(i)
    imgplot = plt.imshow(image.array_to_img(batch[0]))
    i += 1
    if i % 4 == 0:
        break
plt.show()
Build the network again and retrain the model (this version adds a Dropout layer and trains on augmented data):
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])
# rescale and augment the training data
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
        # This is the target directory
        train_dir,
        # All images will be resized to 150x150
        target_size=(150, 150),
        batch_size=32,
        # Since we use binary_crossentropy loss, we need binary labels
        class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
        validation_dir,
        target_size=(150, 150),
        batch_size=32,
        class_mode='binary')
history = model.fit_generator(
      train_generator,
      steps_per_epoch=100,
      epochs=100,
      validation_data=validation_generator,
      validation_steps=50)
Save the trained model
model.save('smileAndUnsmile_2.h5')
Plot the accuracy and loss curves for the training and validation sets after data augmentation:
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
Smile recognition
Recognition on a single photo:
import cv2
from keras.preprocessing import image
from keras.models import load_model
import numpy as np
model = load_model('smileAndUnsmile_1.h5')
img_path = '笑脸数据/test/smile/file1533.jpg'
img = image.load_img(img_path, target_size=(150, 150))
img_tensor = image.img_to_array(img) / 255.0
img_tensor = np.expand_dims(img_tensor, axis=0)
prediction = model.predict(img_tensor)
print(prediction)
if prediction[0][0] < 0.5:
    result = 'smile'
else:
    result = 'unsmile'
print(result)
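The test split created earlier is never scored as a whole. Back in the training session, while test_generator is still defined, the Keras 2.x generator API can report overall test accuracy like this (a quick sketch, not part of the original workflow):
test_loss, test_acc = model.evaluate_generator(
    test_generator, steps=test_generator.samples // test_generator.batch_size)
print('test acc:', test_acc)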
Real-time smile recognition from the webcam:
import cv2
from keras.preprocessing import image
from keras.models import load_model
import numpy as np
import dlib
from PIL import Image
model = load_model('smileAndUnsmile_1.h5')
detector = dlib.get_frontal_face_detector()
video=cv2.VideoCapture(0)
font = cv2.FONT_HERSHEY_SIMPLEX
def rec(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    dets = detector(gray, 1)
    if dets is not None:
        for face in dets:
            left = face.left()
            top = face.top()
            right = face.right()
            bottom = face.bottom()
            cv2.rectangle(img, (left, top), (right, bottom), (0, 255, 0), 2)
            img1 = cv2.resize(img[top:bottom, left:right], dsize=(150, 150))
            img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2RGB)
            img1 = np.array(img1) / 255.
            img_tensor = img1.reshape(-1, 150, 150, 3)
            prediction = model.predict(img_tensor)
            print(prediction)
            if prediction[0][0] > 0.5:
                result = 'unsmile'
            else:
                result = 'smile'
            cv2.putText(img, result, (left, top), font, 2, (0, 255, 0), 2, cv2.LINE_AA)
    cv2.imshow('Video', img)

while video.isOpened():
    res, img_rd = video.read()
    if not res:
        break
    rec(img_rd)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video.release()
cv2.destroyAllWindows()
3. Mask Detection
Download the mask dataset
Split it into training, validation, and test sets:
import tensorflow
import keras
import os, shutil
os.environ['KERAS_BACKEND'] = 'tensorflow'
# The path to the directory where the original
# dataset was uncompressed
original_dataset_dir = '人脸口罩数据集,正样本加负样本'
# The directory where we will
# store our smaller dataset
base_dir = '口罩数据'
os.mkdir(base_dir)
# Directories for our training,
# validation and test splits
train_dir = os.path.join(base_dir, 'train')
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
os.mkdir(test_dir)
# Directory with our training have_mask pictures
train_havemask_dir = os.path.join(train_dir, 'have_mask')
os.mkdir(train_havemask_dir)
# Directory with our training no_mask pictures
train_nomask_dir = os.path.join(train_dir, 'no_mask')
os.mkdir(train_nomask_dir)
# Directory with our validation have_mask pictures
validation_havemask_dir = os.path.join(validation_dir, 'have_mask')
os.mkdir(validation_havemask_dir)
# Directory with our validation no_mask pictures
validation_nomask_dir = os.path.join(validation_dir, 'no_mask')
os.mkdir(validation_nomask_dir)
# Directory with our test have_mask pictures
test_havemask_dir = os.path.join(test_dir, 'have_mask')
os.mkdir(test_havemask_dir)
# Directory with our test no_mask pictures
test_nomask_dir = os.path.join(test_dir, 'no_mask')
os.mkdir(test_nomask_dir)
Running this code creates the train, test, and validation folders, each containing have_mask and no_mask subfolders.
import keras
import os, shutil
train_havemask_dir="口罩数据/train/have_mask/"
train_nomask_dir="口罩数据/train/no_mask/"
test_havemask_dir="口罩数据/test/have_mask/"
test_nomask_dir="口罩数据/test/no_mask/"
validation_havemask_dir="口罩数据/validation/have_mask/"
validation_nomask_dir="口罩数据/validation/no_mask/"
train_dir="口罩数据/train/"
test_dir="口罩数据/test/"
validation_dir="口罩数据/validation/"
print('total training havemask images:', len(os.listdir(train_havemask_dir)))
print('total training nomask images:', len(os.listdir(train_nomask_dir)))
print('total testing havemask images:', len(os.listdir(test_havemask_dir)))
print('total testing nomask images:', len(os.listdir(test_nomask_dir)))
print('total validation havemask images:', len(os.listdir(validation_havemask_dir)))
print('total validation nomask images:', len(os.listdir(validation_nomask_dir)))
Sort the mask dataset into these folders by hand; the print statements above show the image counts for each split.
Build the model
Build the model and inspect its summary:
from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
Normalize the images
from keras import optimizers
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])
from keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
validation_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
        # the target directory
        train_dir,
        # all images will be resized to 150x150
        target_size=(150, 150),
        batch_size=20,
        # Since we use binary_crossentropy loss, we need binary labels
        class_mode='binary')
validation_generator = validation_datagen.flow_from_directory(
        validation_dir,
        target_size=(150, 150),
        batch_size=20,
        class_mode='binary')
test_generator = test_datagen.flow_from_directory(
        test_dir,
        target_size=(150, 150),
        batch_size=20,
        class_mode='binary')
for data_batch, labels_batch in train_generator:
    print('data batch shape:', data_batch.shape)
    print('labels batch shape:', labels_batch.shape)
    break
train_generator.class_indices
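As with the smile model, class indices are assigned alphabetically, so this should print {'have_mask': 0, 'no_mask': 1}; that is exactly the 0 = mask, 1 = no-mask convention used in the prediction code below.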
Train the model
Data augmentation:
datagen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
import matplotlib.pyplot as plt
from keras.preprocessing import image
fnames = [os.path.join(train_havemask_dir, fname) for fname in os.listdir(train_havemask_dir)]
img_path = fnames[5]
img = image.load_img(img_path, target_size=(150, 150))
x = image.img_to_array(img)
x = x.reshape((1,) + x.shape)
i = 0
for batch in datagen.flow(x, batch_size=1):
    plt.figure(i)
    imgplot = plt.imshow(image.array_to_img(batch[0]))
    i += 1
    if i % 4 == 0:
        break
plt.show()
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
                        input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-4),
              metrics=['acc'])
# rescale and augment the training data
train_datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
        # This is the target directory
        train_dir,
        # All images will be resized to 150x150
        target_size=(150, 150),
        batch_size=32,
        # Since we use binary_crossentropy loss, we need binary labels
        class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
        validation_dir,
        target_size=(150, 150),
        batch_size=32,
        class_mode='binary')
history = model.fit_generator(
      train_generator,
      steps_per_epoch=100,
      epochs=10,
      validation_data=validation_generator,
      validation_steps=50)
Save the model
model.save('maskAndUnmask_1.h5')
Only a few epochs were trained, so the accuracy is not very high; to improve it, increase epochs.
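Rather than simply raising epochs, one common option is to train long and let early stopping and checkpointing keep the best model. This is an illustrative sketch using standard Keras 2.x callbacks; the patience value and checkpoint filename are arbitrary choices, not part of the original workflow:
from keras.callbacks import EarlyStopping, ModelCheckpoint
callbacks = [
    # stop once validation loss has not improved for 10 epochs
    EarlyStopping(monitor='val_loss', patience=10),
    # keep only the weights with the best validation loss
    ModelCheckpoint('maskAndUnmask_best.h5', monitor='val_loss', save_best_only=True),
]
history = model.fit_generator(
      train_generator,
      steps_per_epoch=100,
      epochs=100,
      validation_data=validation_generator,
      validation_steps=50,
      callbacks=callbacks)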
Plot the accuracy and loss curves for the training and validation sets after data augmentation:
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
Mask recognition
Recognition on a single photo
0 means wearing a mask and 1 means not wearing one. With 0.5 as the decision threshold, a prediction above 0.5 is read as "no mask" and below 0.5 as "mask".
import cv2
from keras.preprocessing import image
from keras.models import load_model
import numpy as np
model = load_model('maskAndUnmask_1.h5')
img_path = '口罩数据/test/no_mask/226.jpg'
img = image.load_img(img_path, target_size=(150, 150))
img_tensor = image.img_to_array(img) / 255.0
img_tensor = np.expand_dims(img_tensor, axis=0)
prediction = model.predict(img_tensor)
print(prediction)
if prediction[0][0] > 0.5:
    result = 'no mask'
else:
    result = 'mask'
print(result)
Real-time recognition from the webcam:
import cv2
from keras.preprocessing import image
from keras.models import load_model
import numpy as np
import dlib
from PIL import Image
model = load_model('maskAndUnmask_1.h5')
detector = dlib.get_frontal_face_detector()
# alternatively, run on a video file:
# video = cv2.VideoCapture('media/video.mp4')
# video = cv2.VideoCapture('data/face_recognition.mp4')
video = cv2.VideoCapture(0)
font = cv2.FONT_HERSHEY_SIMPLEX

def rec(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    dets = detector(gray, 1)
    if dets is not None:
        for face in dets:
            left = face.left()
            top = face.top()
            right = face.right()
            bottom = face.bottom()
            cv2.rectangle(img, (left, top), (right, bottom), (0, 255, 0), 2)

def mask(img):
    img1 = cv2.resize(img, dsize=(150, 150))
    img1 = cv2.cvtColor(img1, cv2.COLOR_BGR2RGB)
    img1 = np.array(img1) / 255.
    img_tensor = img1.reshape(-1, 150, 150, 3)
    prediction = model.predict(img_tensor)
    if prediction[0][0] > 0.5:
        result = 'no-mask'
    else:
        result = 'have-mask'
    cv2.putText(img, result, (100, 200), font, 2, (0, 255, 0), 2, cv2.LINE_AA)
    cv2.imshow('Video', img)

while video.isOpened():
    res, img_rd = video.read()
    if not res:
        break
    # pass each frame to the two functions: one draws the face box,
    # the other decides mask / no mask
    rec(img_rd)
    mask(img_rd)
    # press q to close the window
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video.release()
cv2.destroyAllWindows()
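Note a design difference from the smile script: mask() classifies the whole resized frame rather than the detected face. A hedged variant that crops each detected face before predicting (mirroring the smile code above, and merging both functions into one) could look like this:
def rec_and_mask(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for face in detector(gray, 1):
        # clamp the box: dlib may return coordinates outside the frame
        left, top = max(face.left(), 0), max(face.top(), 0)
        right, bottom = face.right(), face.bottom()
        cv2.rectangle(img, (left, top), (right, bottom), (0, 255, 0), 2)
        # classify only the face crop, not the whole frame
        crop = cv2.resize(img[top:bottom, left:right], dsize=(150, 150))
        crop = cv2.cvtColor(crop, cv2.COLOR_BGR2RGB) / 255.
        prediction = model.predict(crop.reshape(-1, 150, 150, 3))
        result = 'no-mask' if prediction[0][0] > 0.5 else 'have-mask'
        cv2.putText(img, result, (left, top), font, 2, (0, 255, 0), 2, cv2.LINE_AA)
    cv2.imshow('Video', img)
This also labels each face individually instead of printing one result at a fixed position.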
4. References
The "Using TensorFlow backend" message that appears after importing Keras
Original post: https://blog.csdn.net/czs0303/article/details/107259305