CNN: 10-class classification of the CIFAR-10 dataset with a convolutional neural network
Dataset download: http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
Dataset description: http://www.cs.toronto.edu/~kriz/cifar.html
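As background for the loading code below: each batch file in the extracted archive is a Python pickle of a dict whose b'data' entry is a (10000, 3072) uint8 array (one row per image, red/green/blue planes laid out consecutively) and whose b'labels' entry is a list of 10,000 integers in [0, 9]. A minimal sketch for inspecting one batch, assuming the same local path used in the listing below:

import pickle

with open(r'G:\A_深度学习1\tensorflow\cifar-10-batches-py\data_batch_1', 'rb') as fo:
    batch = pickle.load(fo, encoding='bytes')

print(batch.keys())           # typically dict_keys([b'batch_label', b'labels', b'data', b'filenames'])
print(batch[b'data'].shape)   # (10000, 3072) = 10000 images x (3 channels * 32 * 32 pixels)
print(len(batch[b'labels']))  # 10000 integer labels in [0, 9]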
import tensorflow as tf
import pickle
import numpy
import matplotlib.pyplot as plt
import random
tf.set_random_seed(1)
with open(r'G:\A_深度学习1\tensorflow\cifar-10-batches-py\data_batch_1', 'rb') as fo:
    b_data = pickle.load(fo, encoding='bytes')
# Reshape the flat rows to NHWC [-1, 32, 32, 3] and divide by 255 to normalize to [0, 1]
data = b_data[b'data'].reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1) / 255
# One-hot encode the labels by indexing a 10x10 identity matrix, e.g. label 3 -> [0 0 0 1 0 0 0 0 0 0]
y_data = numpy.eye(10)[b_data[b'labels']]
# Split into training and test sets
all_nums = data.shape[0]
train_num = int(all_nums * 0.9)
test_num = all_nums - train_num
train_x = data[:train_num]
train_y = y_data[:train_num]
test_x = data[-test_num:]
test_y = y_data[-test_num:]
g_b = 0
def next_batch(X, Y, size):
    # Return the next mini-batch of `size` samples in order; the global offset g_b
    # is reset to 0 at the start of every epoch in the training loop below
    global g_b
    x_batch = X[g_b:g_b + size]
    y_batch = Y[g_b:g_b + size]
    g_b += size
    return x_batch, y_batch
# Placeholders: X for input images, Y for one-hot labels
X, Y = tf.placeholder('float', shape=[None, 32, 32, 3]), tf.placeholder('float', shape=[None, 10])
# Conv layer 1, input image tensor (?, 32, 32, 3)
W1 = tf.Variable(tf.random_normal([3, 3, 3, 32]))  # 3x3 kernel, 3 input channels, 32 output channels
L1 = tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding='SAME')  # conv output (?, 32, 32, 32)
L1 = tf.nn.relu(L1)
L1 = tf.nn.max_pool(L1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')  # pooled output (?, 16, 16, 32)
# Conv layer 2, input feature map (?, 16, 16, 32)
W2 = tf.Variable(tf.random_normal([3, 3, 32, 64], stddev=0.01))  # 3x3 kernel, 32 input channels, 64 output channels
L2 = tf.nn.conv2d(L1, W2, strides=[1, 1, 1, 1], padding='SAME')  # conv output (?, 16, 16, 64)
L2 = tf.nn.relu(L2)
L2 = tf.nn.max_pool(L2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')  # pooled output (?, 8, 8, 64)
# Flatten for the fully connected layer: dim = 8 * 8 * 64 = 4096
dim = L2.get_shape()[1].value * L2.get_shape()[2].value * L2.get_shape()[3].value
L2_flat = tf.reshape(L2, [-1, dim])
# Fully connected layer: dim -> 10 class logits
W3 = tf.get_variable("W3", shape=[dim, 10], initializer=tf.contrib.layers.xavier_initializer())
b = tf.Variable(tf.random_normal([10]))
logits = tf.matmul(L2_flat, W3) + b
# Cost (loss) function: softmax cross-entropy averaged over the batch
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)  # Adam optimizer
# Accuracy: compare predicted class against the true label
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Hyperparameters
training_epochs = 15  # total number of training epochs
batch_size = 100  # samples per mini-batch
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # initialize all variables
    # Training loop
    print('Training started...')
    for epoch in range(training_epochs):
        avg_cost = 0
        total_batch = int(train_num / batch_size)  # number of mini-batches per epoch
        g_b = 0  # reset the batch offset at the start of each epoch
        for i in range(total_batch):
            batch_xs, batch_ys = next_batch(train_x, train_y, batch_size)
            c, _ = sess.run([cost, optimizer], feed_dict={X: batch_xs, Y: batch_ys})
            avg_cost += c / total_batch
        acc = sess.run(accuracy, feed_dict={X: train_x, Y: train_y})
        print('Epoch:', (epoch + 1), 'cost =', avg_cost, 'acc =', acc)
    print('Training finished')
    # Evaluate accuracy on the held-out test split
    print('Accuracy:', sess.run(accuracy, feed_dict={X: test_x, Y: test_y}))
    # Predict a single randomly chosen test sample
    r = random.randint(0, test_num - 1)
    print("Label: ", sess.run(tf.argmax(test_y[r:r + 1], 1)))
    print("Prediction: ", sess.run(tf.argmax(logits, 1), feed_dict={X: test_x[r:r + 1]}))
    plt.imshow(test_x[r:r + 1].reshape(32, 32, 3), interpolation='nearest')
    plt.show()
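Note that the listing above trains on data_batch_1 only (10,000 images), while the full CIFAR-10 training set consists of five such files (data_batch_1 through data_batch_5, 50,000 images in total). A minimal sketch of how the loading step could be extended to concatenate all five batches; the helper name and local path are assumptions, mirroring the directory used above:

import os
import pickle
import numpy

def load_cifar10_batches(folder):
    # Hypothetical helper: unpickle data_batch_1..data_batch_5 and stack them
    xs, ys = [], []
    for i in range(1, 6):
        with open(os.path.join(folder, 'data_batch_%d' % i), 'rb') as fo:
            batch = pickle.load(fo, encoding='bytes')
        xs.append(batch[b'data'].reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1) / 255)
        ys.append(numpy.eye(10)[batch[b'labels']])
    return numpy.concatenate(xs), numpy.concatenate(ys)

# Example usage (assumed local path, same folder as above):
# data, y_data = load_cifar10_batches(r'G:\A_深度学习1\tensorflow\cifar-10-batches-py')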