Tensorflow 02: Convolutional Neural Network - MNIST

Introduction

TensorFlow is a library for large-scale numerical computation. Its backend relies on an efficient C++ implementation, and the bridge connecting Python to that backend is called a session.
This post shows how to implement MNIST handwritten digit recognition with a convolutional neural network.
Environment: TensorFlow 1.0, Ubuntu 14.04, Python 2.7
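
As a minimal illustration of the graph/session idea (hypothetical constants, not part of the MNIST model): Python code only builds the computation graph, and the session is what hands it to the C++ backend for execution.

import tensorflow as tf

a = tf.constant(2.0)
b = tf.constant(3.0)
c = a + b                     # only builds a graph node; nothing runs yet
with tf.Session() as sess:    # the session is the bridge to the C++ backend
    print(sess.run(c))        # 5.0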

Loading the Data

# coding=utf-8
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

For details on what mnist contains (training set, test set, validation set), see the previous post 《Tensorflow 01: mnist-softmax》: http://blog.csdn.net/u012609509/article/details/72897535
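
As a quick sanity check (a minimal sketch; the shapes below assume the standard 55,000/5,000/10,000 split and one_hot=True), the loaded arrays can be inspected directly:

# Inspect the dataset splits returned by read_data_sets
print(mnist.train.images.shape)       # (55000, 784)
print(mnist.train.labels.shape)       # (55000, 10)
print(mnist.validation.images.shape)  # (5000, 784)
print(mnist.test.images.shape)        # (10000, 784)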

Parameter Initialization, Convolution, and Pooling

# Initialize convolution kernel weights
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

# Initialize biases
def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

# Convolution
def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

# 2x2 max pooling
def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

[Note] A few tricks for parameter initialization:
Weight initialization: initialize the weights with a small amount of noise to break symmetry and avoid zero gradients. ("One should generally initialize weights with a small amount of noise for symmetry breaking, and to prevent 0 gradients.")
Bias initialization: when using the ReLU activation function, it is good practice to initialize the biases with a small positive value to avoid "dead neurons". ReLU computes max(0, activation_val); if activation_val is always negative, the output after ReLU is always 0. ("Since we're using ReLU neurons, it is also good practice to initialize them with a slightly positive initial bias to avoid dead neurons.")
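
The dead-neuron effect is easy to see in a tiny sketch (hypothetical values, not from the original post): wherever the pre-activation is negative, ReLU outputs 0 and the gradient flowing back is also 0.

# Hypothetical example: ReLU output and gradient for negative pre-activations
z = tf.constant([-3.0, -0.5, 0.2])
a = tf.nn.relu(z)
grad = tf.gradients(a, z)[0]
with tf.Session() as sess:
    print(sess.run(a))     # [0.  0.  0.2]
    print(sess.run(grad))  # [0.  0.  1. ]  -- zero gradient where z < 0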

Building the Computation Graph

x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
x_image = tf.reshape(x, [-1, 28, 28, 1])

# Conv layer 1 --- pooling layer 1
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)

# Conv layer 2 --- pooling layer 2
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)

# Fully connected layer
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

# Dropout layer
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# Softmax (output) layer
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2

# Loss function: cross entropy
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
# Compute the model's prediction accuracy
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

The network consists of two convolutional layers, two pooling layers, one fully connected layer, one dropout layer, and one softmax output layer, and its parameters are trained with the AdamOptimizer.
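
To verify that the 7 * 7 * 64 flatten size matches the layers above, the static shapes can be printed once the graph is built (a minimal sketch; the first dimension is the unknown batch size):

# Trace the static tensor shapes through the network
print(h_conv1.get_shape())  # (?, 28, 28, 32)  SAME convolution keeps 28x28
print(h_pool1.get_shape())  # (?, 14, 14, 32)  2x2 pooling halves height and width
print(h_conv2.get_shape())  # (?, 14, 14, 64)
print(h_pool2.get_shape())  # (?, 7, 7, 64)    flattened to 7*7*64 = 3136
print(h_fc1.get_shape())    # (?, 1024)
print(y_conv.get_shape())   # (?, 10)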

Training the Network

sess = tf.InteractiveSession()
init = tf.global_variables_initializer()
sess.run(init)

# Training loop
# Record the loss value every 100 iterations
loss = []
# Record the accuracy on the current training batch every 100 iterations
acc = []
for idx in range(20000):
    batch = mnist.train.next_batch(50)
    if idx % 100 == 0:
        train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
        print('step %d, training accuracy %g' % (idx, train_accuracy))
        loss_tmp = sess.run(cross_entropy, feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
        acc.append(train_accuracy)
        loss.append(loss_tmp)
    sess.run(train_step, feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
print('test accuracy %g' % accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))


# Plot the loss and accuracy curves
plt.figure()
plt.plot(loss)
plt.xlabel('iteration')
plt.ylabel('loss value')

plt.figure()
plt.plot(acc)
plt.xlabel('iteration')
plt.ylabel('acc')
plt.show()

[Note] In the computation graph, feed_dict can be used to replace any tensor, not just placeholders (see the small sketch after this list).
There are two ways to fetch a tensor's value in TensorFlow:
(1) eval: accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})
(2) sess.run: sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})
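
A minimal sketch of the first point (hypothetical constants, not from the post): feed_dict can override the value of any tensor in the graph, not only a placeholder.

# Hypothetical example: feeding a value for a non-placeholder tensor
a = tf.constant(3.0)
b = a * 2.0
with tf.Session() as sess:
    print(sess.run(b))                       # 6.0, computed from the constant
    print(sess.run(b, feed_dict={a: 10.0}))  # 20.0, the fed value overrides a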

Using dropout: dropout is normally enabled during training and disabled (keep_prob = 1.0) at test time.
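
A small sketch of what tf.nn.dropout does with keep_prob (hypothetical input): kept entries are scaled by 1/keep_prob so the expected activation stays the same, and keep_prob=1.0 turns dropout off, which is why the test-time feed uses 1.0.

# Hypothetical example: dropout keeps each element with probability keep_prob
# and scales the kept ones by 1/keep_prob; keep_prob=1.0 is a no-op
v = tf.ones([10])
with tf.Session() as sess:
    print(sess.run(tf.nn.dropout(v, keep_prob=0.5)))  # a random mix of 0.0 and 2.0
    print(sess.run(tf.nn.dropout(v, keep_prob=1.0)))  # all 1.0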

Results

Loss curve: convergence is very fast.
(figure: loss value vs. iteration)

Accuracy curve:
(figure: per-batch training accuracy vs. iteration)

Overview of the TensorFlow APIs used

(1)tf.nn.conv2d(input, filter, strides, padding, use_cudnn_on_gpu=None, data_format=None, name=None)
Computes the convolution between the input and the convolution kernel (filter).
Note the dimension ordering of the input and filter tensors:
 input: [batch, in_height, in_width, in_channels]
 filter: [filter_height, filter_width, in_channels, out_channels]
Computing the output dimensions of the convolution:
When padding='SAME':
 out_height = ceil(float(in_height) / float(strides[1]))
 out_width = ceil(float(in_width) / float(strides[2]))
When padding='VALID':
 out_height = ceil(float(in_height - filter_height + 1) / float(strides[1]))
 out_width = ceil(float(in_width - filter_width + 1) / float(strides[2]))
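
Applying these formulas to the network in this post (both convolutions use padding='SAME' with stride 1, and each 2x2 max-pool uses stride 2):
After conv1: ceil(28 / 1) = 28, so the feature maps stay 28x28
After pool1: ceil(28 / 2) = 14
After conv2: ceil(14 / 1) = 14
After pool2: ceil(14 / 2) = 7, which gives the 7 * 7 * 64 flatten size used for W_fc1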

(2)tf.nn.max_pool(value, ksize, strides, padding, data_format='NHWC', name=None)
Performs max pooling on the input value. For the theory behind pooling, see the UFLDL tutorial:
http://ufldl.stanford.edu/wiki/index.php/%E6%B1%A0%E5%8C%96
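
A tiny sketch of the max-pool call used in this post (hypothetical 4x4 single-channel input): each 2x2 window is reduced to its maximum.

# Hypothetical example: 2x2 max pooling with stride 2 on a 4x4, 1-channel image
img = tf.reshape(tf.constant([float(i) for i in range(16)]), [1, 4, 4, 1])
pooled = tf.nn.max_pool(img, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
with tf.Session() as sess:
    print(sess.run(pooled)[0, :, :, 0])  # [[ 5.  7.] [13. 15.]]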

(3)tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv)
This function internally performs both the softmax computation and the cross-entropy computation; it is equivalent to the following two steps used previously:

y = tf.nn.softmax(tf.matmul(x, W) + b)
-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1])
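
A minimal sketch of this equivalence (hypothetical values; the fused op is also more numerically stable than the manual two-step version):

# Hypothetical example: the fused op matches manual softmax + cross-entropy
logits = tf.constant([[2.0, 1.0, 0.1]])
labels = tf.constant([[1.0, 0.0, 0.0]])
fused = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)
manual = -tf.reduce_sum(labels * tf.log(tf.nn.softmax(logits)), reduction_indices=[1])
with tf.Session() as sess:
    print(sess.run([fused, manual]))  # both approximately [0.417]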

References

https://www.tensorflow.org/get_started/mnist/pros --- official TensorFlow tutorial
http://ufldl.stanford.edu/wiki/index.php/UFLDL%E6%95%99%E7%A8%8B --- UFLDL tutorial