
Deep Learning Assignment: Course4-Week2


This post is adapted from *this author's article*; I have only added my own summary.


  • Calling classes and functions defined in another .py file from Python

1. Calling a function

A.py contains:

def add(x, y):
    print('The sum is: %d' % (x + y))

To call the add function from A.py inside B.py:

import A
A.add(1,2)
# or
from A import add
add(1,2)

2. Calling a class

A.py contains:

class A:
    def __init__(self, xx, yy):
        self.x = xx
        self.y = yy
    def add(self):
        print("The sum of x and y is: %d" % (self.x + self.y))

To use the class A from A.py inside B.py:

from A import A
a=A(2,3)
a.add()
# or
import A
a=A.A(2,3)
a.add()

3. Calling across different folders

The path of A.py is: C:\AmyPython\Test1

To call A.py from B.py:

import sys
sys.path.append(r'C:\AmyPython\Test1')
# When importing a module, Python searches the directories listed in sys.path in order.
# sys.path is a list of path strings, so the directory containing A.py must be added
# to it before A.py can be imported.
import A

a = A.A(2, 3)
a.add()
  • np.expand_dims: expands the shape of an array by inserting a new axis of length 1 (see the short sketch below)
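
A minimal sketch of what np.expand_dims does (the array contents here are arbitrary): inserting a new axis of length 1 at position 0 is exactly how a single image of shape (64, 64, 3) becomes a batch of shape (1, 64, 64, 3) before being passed to model.predict, as in the smile-recognition code below.

import numpy as np

a = np.random.randn(64, 64, 3)   # a single image-like array
b = np.expand_dims(a, axis=0)    # insert a new axis at position 0
print(a.shape)                   # (64, 64, 3)
print(b.shape)                   # (1, 64, 64, 3)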



Smile Recognition

Complete code:

import numpy as np
from keras import layers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
#import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
import kt_utils 

import keras.backend as K
K.set_image_data_format('channels_last')
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow

%matplotlib inline

X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = kt_utils.load_dataset()

# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.

# Reshape
Y_train = Y_train_orig.T
Y_test = Y_test_orig.T
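
# Optional sanity check (my addition, not part of the original listing): print the
# dataset shapes; the training log below shows 600 training and 150 test examples.
print("X_train shape: " + str(X_train.shape))
print("Y_train shape: " + str(Y_train.shape))
print("X_test shape: " + str(X_test.shape))
print("Y_test shape: " + str(Y_test.shape))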

def HappyModel(input_shape):
    """
    Build a model that detects whether a face is smiling.

    Arguments:
        input_shape - shape of the input data (not counting the batch dimension)
    Returns:
        model - the Keras Model instance

    """

    # Define the input placeholder as a tensor with shape input_shape
    X_input = Input(input_shape)

    # Zero-padding: pad the border of X_input with zeros
    X = ZeroPadding2D((3, 3))(X_input)

    # Apply a CONV -> BN -> RELU block to X
    X = Conv2D(32, (7, 7), strides=(1, 1), name='conv0')(X)
    X = BatchNormalization(axis=3, name='bn0')(X)
    X = Activation('relu')(X)

    # Max pooling layer
    X = MaxPooling2D((2, 2), name='max_pool')(X)

    # Flatten the feature maps into a vector + fully connected layer
    X = Flatten()(X)
    X = Dense(1, activation='sigmoid', name='fc')(X)

    # Create the Keras Model instance; this is the object we train and evaluate.
    model = Model(inputs=X_input, outputs=X, name='HappyModel')

    return model

# Create the model
happy_model = HappyModel(X_train.shape[1:])
# Compile the model
happy_model.compile("adam", "binary_crossentropy", metrics=['accuracy'])
# Train the model
# Note: this takes roughly 6-10 minutes.
happy_model.fit(X_train, Y_train, epochs=40, batch_size=50)
# Evaluate the model
preds = happy_model.evaluate(X_test, Y_test, batch_size=32, verbose=1, sample_weight=None)
print("Loss = " + str(preds[0]))
print("Accuracy = " + str(preds[1]))

Output:

Epoch 1/40
600/600 [==============================] - 20s 33ms/step - loss: 2.0368 - acc: 0.5200
Epoch 2/40
600/600 [==============================] - 16s 27ms/step - loss: 0.6331 - acc: 0.7467
Epoch 3/40
600/600 [==============================] - 16s 26ms/step - loss: 0.2890 - acc: 0.8800
Epoch 4/40
600/600 [==============================] - 15s 25ms/step - loss: 0.2059 - acc: 0.9067
Epoch 5/40
600/600 [==============================] - 15s 25ms/step - loss: 0.1621 - acc: 0.9500
Epoch 6/40
600/600 [==============================] - 15s 25ms/step - loss: 0.1119 - acc: 0.9650
Epoch 7/40
600/600 [==============================] - 15s 25ms/step - loss: 0.1020 - acc: 0.9700
Epoch 8/40
600/600 [==============================] - 14s 24ms/step - loss: 0.0887 - acc: 0.9667
Epoch 9/40
600/600 [==============================] - 16s 27ms/step - loss: 0.0704 - acc: 0.9800
Epoch 10/40
600/600 [==============================] - 17s 28ms/step - loss: 0.0578 - acc: 0.9850
Epoch 11/40
600/600 [==============================] - 17s 28ms/step - loss: 0.0651 - acc: 0.9817
Epoch 12/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0691 - acc: 0.9750
Epoch 13/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0541 - acc: 0.9817
Epoch 14/40
600/600 [==============================] - 14s 24ms/step - loss: 0.0522 - acc: 0.9833
Epoch 15/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0543 - acc: 0.9833
Epoch 16/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0464 - acc: 0.9917
Epoch 17/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0415 - acc: 0.9900
Epoch 18/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0348 - acc: 0.9917
Epoch 19/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0355 - acc: 0.9883
Epoch 20/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0334 - acc: 0.9883
Epoch 21/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0309 - acc: 0.9917
Epoch 22/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0247 - acc: 0.9950
Epoch 23/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0287 - acc: 0.9900
Epoch 24/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0202 - acc: 0.9967
Epoch 25/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0192 - acc: 0.9950
Epoch 26/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0171 - acc: 0.9967
Epoch 27/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0186 - acc: 0.9917
Epoch 28/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0161 - acc: 0.9967
Epoch 29/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0166 - acc: 0.9983
Epoch 30/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0143 - acc: 0.9950
Epoch 31/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0189 - acc: 0.9950
Epoch 32/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0121 - acc: 0.9983
Epoch 33/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0164 - acc: 0.9933
Epoch 34/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0210 - acc: 0.9917
Epoch 35/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0191 - acc: 0.9933
Epoch 36/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0150 - acc: 0.9967
Epoch 37/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0150 - acc: 0.9950
Epoch 38/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0134 - acc: 0.9983
Epoch 39/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0154 - acc: 0.9950
Epoch 40/40
600/600 [==============================] - 14s 23ms/step - loss: 0.0109 - acc: 0.9983
150/150 [==============================] - 1s 9ms/step
Loss = 0.14801872968673707
Accuracy = 0.9333333373069763
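
To inspect the architecture, you can optionally print a layer-by-layer summary or draw the model graph with plot_model, which is already imported in the listing above (it additionally needs the pydot and graphviz packages, which is why the listing carries a commented-out import pydot). A minimal sketch; the file name 'HappyModel.png' is my own choice:

happy_model.summary()
plot_model(happy_model, to_file='HappyModel.png', show_shapes=True)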

Test with your own image:

# An arbitrary image found online
img_path = './smile.jpg'

img = image.load_img(img_path, target_size=(64, 64))
imshow(img)

x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

print(happy_model.predict(x))
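
The model outputs a single sigmoid probability. A minimal sketch of turning that probability into a label (the 0.5 threshold is my own choice, not part of the original code):

pred = happy_model.predict(x)
print("smiling" if pred[0][0] > 0.5 else "not smiling")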

Result: (screenshot of the displayed test image and the printed prediction omitted)

ResNet-50

Complete code:

import numpy as np
import tensorflow as tf

from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from keras.initializers import glorot_uniform

#import pydot
from IPython.display import SVG
import scipy.misc
from matplotlib.pyplot import imshow
import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)

import resnets_utils 

def identity_block(X, f, filters, stage, block):
    """
    Implements the identity block (Figure 3 in the original assignment).

    Arguments:
        X - input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
        f - integer, size of the middle CONV window of the main path
        filters - list of integers, the numbers of filters in the CONV layers of the main path
        stage - integer, used together with block to name the layers according to their position in the network
        block - string, used together with stage to name the layers according to their position in the network

    Returns:
        X - output of the identity block, a tensor of shape (n_H, n_W, n_C)

    """

    # Define the naming convention
    conv_name_base = "res" + str(stage) + block + "_branch"
    bn_name_base   = "bn"  + str(stage) + block + "_branch"

    # Retrieve the filters
    F1, F2, F3 = filters

    # Save the input value; it is added back to the main path as the shortcut
    X_shortcut = X

    # First component of the main path
    ## Convolution
    X = Conv2D(filters=F1, kernel_size=(1,1), strides=(1,1), padding="valid",
               name=conv_name_base+"2a", kernel_initializer=glorot_uniform(seed=0))(X)
    ## Batch normalization
    X = BatchNormalization(axis=3, name=bn_name_base+"2a")(X)
    ## ReLU activation
    X = Activation("relu")(X)

    # Second component of the main path
    ## Convolution
    X = Conv2D(filters=F2, kernel_size=(f,f), strides=(1,1), padding="same",
               name=conv_name_base+"2b", kernel_initializer=glorot_uniform(seed=0))(X)
    ## Batch normalization
    X = BatchNormalization(axis=3, name=bn_name_base+"2b")(X)
    ## ReLU activation
    X = Activation("relu")(X)


    # Third component of the main path
    ## Convolution
    X = Conv2D(filters=F3, kernel_size=(1,1), strides=(1,1), padding="valid",
               name=conv_name_base+"2c", kernel_initializer=glorot_uniform(seed=0))(X)
    ## Batch normalization
    X = BatchNormalization(axis=3, name=bn_name_base+"2c")(X)
    ## No ReLU activation here

    # Final step:
    ## Add the shortcut to the main path
    X = Add()([X, X_shortcut])
    ## ReLU activation
    X = Activation("relu")(X)

    return X
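
# Optional sanity check (my addition, not part of the original assignment code):
# the identity block preserves the input shape, so applying it to a dummy input of
# shape (4, 4, 256) with filters [64, 64, 256] should return a tensor of the same
# shape. Uncomment to try it:
# _inp = Input(shape=(4, 4, 256))
# _out = identity_block(_inp, f=3, filters=[64, 64, 256], stage=1, block="a")
# print(_out.shape)   # expected: (?, 4, 4, 256)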

def convolutional_block(X, f, filters, stage, block, s=2):
    """
    Implements the convolutional block (Figure 5 in the original assignment).

    Arguments:
        X - input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
        f - integer, size of the middle CONV window of the main path
        filters - list of integers, the numbers of filters in the CONV layers of the main path
        stage - integer, used together with block to name the layers according to their position in the network
        block - string, used together with stage to name the layers according to their position in the network
        s - integer, the stride to use

    Returns:
        X - output of the convolutional block, a tensor of shape (n_H, n_W, n_C)
    """

    # Define the naming convention
    conv_name_base = "res" + str(stage) + block + "_branch"
    bn_name_base   = "bn"  + str(stage) + block + "_branch"

    # Retrieve the number of filters
    F1, F2, F3 = filters

    # Save the input value
    X_shortcut = X

    # Main path
    ## First component of the main path
    X = Conv2D(filters=F1, kernel_size=(1,1), strides=(s,s), padding="valid",
               name=conv_name_base+"2a", kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base+"2a")(X)
    X = Activation("relu")(X)

    ## Second component of the main path
    X = Conv2D(filters=F2, kernel_size=(f,f), strides=(1,1), padding="same",
               name=conv_name_base+"2b", kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base+"2b")(X)
    X = Activation("relu")(X)

    ## Third component of the main path
    X = Conv2D(filters=F3, kernel_size=(1,1), strides=(1,1), padding="valid",
               name=conv_name_base+"2c", kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base+"2c")(X)
    # Shortcut path
    X_shortcut = Conv2D(filters=F3, kernel_size=(1,1), strides=(s,s), padding="valid",
               name=conv_name_base+"1", kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
    X_shortcut = BatchNormalization(axis=3, name=bn_name_base+"1")(X_shortcut)

    # Final step
    X = Add()([X, X_shortcut])
    X = Activation("relu")(X)

    return X
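
# Optional sanity check (my addition): with s=2 the convolutional block halves the
# spatial dimensions and changes the channel count to F3, e.g. a dummy input of shape
# (8, 8, 64) with filters [64, 64, 256] should give an output of shape (4, 4, 256).
# _inp = Input(shape=(8, 8, 64))
# _out = convolutional_block(_inp, f=3, filters=[64, 64, 256], stage=1, block="b", s=2)
# print(_out.shape)   # expected: (?, 4, 4, 256)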

def ResNet50(input_shape=(64,64,3), classes=6):
    """
    Implements ResNet-50 with the following architecture:
    CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
    -> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER

    Arguments:
        input_shape - shape of the images in the dataset
        classes - integer, number of classes

    Returns:
        model - the Keras Model instance

    """

    # Define the input as a tensor with shape input_shape
    X_input = Input(input_shape)

    # Zero-padding
    X = ZeroPadding2D((3,3))(X_input)

    # Stage 1
    X = Conv2D(filters=64, kernel_size=(7,7), strides=(2,2), name="conv1",
               kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name="bn_conv1")(X)
    X = Activation("relu")(X)
    X = MaxPooling2D(pool_size=(3,3), strides=(2,2))(X)

    # Stage 2
    X = convolutional_block(X, f=3, filters=[64,64,256], stage=2, block="a", s=1)
    X = identity_block(X, f=3, filters=[64,64,256], stage=2, block="b")
    X = identity_block(X, f=3, filters=[64,64,256], stage=2, block="c")

    # Stage 3
    X = convolutional_block(X, f=3, filters=[128,128,512], stage=3, block="a", s=2)
    X = identity_block(X, f=3, filters=[128,128,512], stage=3, block="b")
    X = identity_block(X, f=3, filters=[128,128,512], stage=3, block="c")
    X = identity_block(X, f=3, filters=[128,128,512], stage=3, block="d")

    # Stage 4
    X = convolutional_block(X, f=3, filters=[256,256,1024], stage=4, block="a", s=2)
    X = identity_block(X, f=3, filters=[256,256,1024], stage=4, block="b")
    X = identity_block(X, f=3, filters=[256,256,1024], stage=4, block="c")
    X = identity_block(X, f=3, filters=[256,256,1024], stage=4, block="d")
    X = identity_block(X, f=3, filters=[256,256,1024], stage=4, block="e")
    X = identity_block(X, f=3, filters=[256,256,1024], stage=4, block="f")

    # Stage 5
    X = convolutional_block(X, f=3, filters=[512,512,2048], stage=5, block="a", s=2)
    X = identity_block(X, f=3, filters=[512,512,2048], stage=5, block="b")
    X = identity_block(X, f=3, filters=[512,512,2048], stage=5, block="c")

    # Average pooling
    X = AveragePooling2D(pool_size=(2,2), padding="same")(X)

    # Output layer
    X = Flatten()(X)
    X = Dense(classes, activation="softmax", name="fc"+str(classes),
              kernel_initializer=glorot_uniform(seed=0))(X)


    # Create the model
    model = Model(inputs=X_input, outputs=X, name="ResNet50")

    return model

model = ResNet50(input_shape=(64,64,3),classes=6)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = resnets_utils.load_dataset()

# Normalize image vectors
X_train = X_train_orig / 255.
X_test = X_test_orig / 255.

# Convert training and test labels to one hot matrices
Y_train = resnets_utils.convert_to_one_hot(Y_train_orig, 6).T
Y_test = resnets_utils.convert_to_one_hot(Y_test_orig, 6).T

model.fit(X_train,Y_train,epochs=2,batch_size=32)
preds = model.evaluate(X_test,Y_test)

print("误差值 = " + str(preds[0]))
print("准确率 = " + str(preds[1]))

Output:

Since the model is trained for only two epochs, the accuracy is not high yet, but it is clearly rising.
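
If you continue training for more epochs, it is worth saving the trained model so it can be reused without retraining; a minimal sketch using model.save and the load_model function already imported in the listing above (the file name 'ResNet50.h5' is my own choice):

model.save("ResNet50.h5")          # saves architecture, weights and optimizer state
model = load_model("ResNet50.h5")  # reload later without retraining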

