CIFAR10 + Fully Connected Network in TensorFlow
Classifying the CIFAR10 dataset with a fully connected network.
Platform: Linux
Python version: Python 3.6
TensorFlow version: 1.15.2
IDE: Colab
The CIFAR10 Dataset
The CIFAR-10 dataset is a collection of images commonly used to train machine learning and computer vision algorithms, and it is one of the most widely used datasets in machine learning research. It contains 60,000 32x32 color images in 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck, with 6,000 images per class. (From Wikipedia: CIFAR-10)
Download: official website link
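If you prefer to fetch the archive directly in Colab rather than uploading it to Drive, a minimal sketch (assuming the dataset's well-known University of Toronto download URL, which is not named in the original post):
!wget https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
!tar -zxvf cifar-10-python.tar.gz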
Colab
Because my own machine is not powerful enough, I use the free GPU provided by Google Colab for the computation and upload the downloaded CIFAR10 data to Google Drive.
Code
%tensorflow_version 1.x
import tensorflow as tf
import numpy as np
print(tf.__version__)
!/opt/bin/nvidia-smi
1.15.2
Thu May 7 06:35:43 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.67 Driver Version: 418.67 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla P4 Off | 00000000:00:04.0 Off | 0 |
| N/A 58C P0 25W / 75W | 497MiB / 7611MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
import os
from google.colab import drive
drive.mount('/content/drive')
path = "/content/drive/My Drive"
os.chdir(path)
os.listdir(path)
# Extract the dataset (skip if already extracted)
# !tar -zxvf cifar-10-python.tar.gz
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
['UIDlg.cpp',
'Colab Notebooks',
'cifar-10-python.tar.gz',
'drive',
'cifar-10-batches-py']
# Load the training set; to keep computation light, only 10,000 samples are used for training
import pickle
with open('./cifar-10-batches-py/data_batch_1', 'rb') as f:
    datadict = pickle.load(f, encoding="bytes")
    data = datadict[b'data']
    labels = datadict[b'labels']
print(data.shape)
train_data = np.reshape(data, [10000, 3072])
train_labels = np.array(labels)
(10000, 3072)
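Each row of data is a flattened 3072-value vector in channel-first order: the first 1024 values are the red channel, then green, then blue. As a quick sanity check on the loading code, here is a small sketch (my addition; matplotlib is preinstalled on Colab) that recovers and displays one row as a 32x32 RGB image:
import matplotlib.pyplot as plt
# Rows are laid out (channel, height, width); transpose to (height, width, channel) for display.
img = train_data[0].reshape(3, 32, 32).transpose(1, 2, 0).astype(np.uint8)
plt.imshow(img)
plt.show()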
# Load the test set; to keep computation light, only the first 2,000 samples are used for testing
with open('./cifar-10-batches-py/test_batch', 'rb') as f:
    datadict = pickle.load(f, encoding='bytes')
    data = datadict[b'data']
    labels = datadict[b'labels']
test_data = np.reshape(data, [10000, 3072])
test_labels = np.array(labels)
print(test_labels.shape)
test_data = test_data[:2000]
test_labels = test_labels[:2000]
(10000,)
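Note that the pixel values are fed to the network raw, in the 0-255 range; this is why the losses reported below start out in the hundreds. An optional preprocessing step, not used in this post, would be to rescale both splits into [0, 1]:
# Optional (not done in the original code): rescaling tends to stabilize training.
train_data = train_data.astype(np.float32) / 255.0
test_data = test_data.astype(np.float32) / 255.0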
Single-Layer Fully Connected Network
images_placeholder = tf.placeholder(tf.float32, [None,3072])
labels_placeholder = tf.placeholder(tf.int64, [None])
weight = tf.Variable(tf.truncated_normal([3072,10], stddev=0.1))
bias = tf.Variable(tf.truncated_normal([10], stddev=0.1))
h = tf.matmul(images_placeholder, weight) + bias
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels_placeholder, logits=h))
train_step = tf.train.GradientDescentOptimizer(5e-5).minimize(loss)
init = tf.global_variables_initializer()
correct_prediction = tf.equal(tf.argmax(h, 1), labels_placeholder)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(1000):
        _, cross_loss = sess.run([train_step, loss], feed_dict={images_placeholder: train_data, labels_placeholder: train_labels})
        acc = sess.run(accuracy, feed_dict={images_placeholder: test_data, labels_placeholder: test_labels})
        if epoch % 100 == 0:
            print('epoch {0}: accuracy={1}, loss={2}'.format(epoch, acc, cross_loss))
epoch 0: accuracy=0.10450000315904617, loss=779.140625
epoch 100: accuracy=0.20900000631809235, loss=234.6305694580078
epoch 200: accuracy=0.2224999964237213, loss=205.3774871826172
epoch 300: accuracy=0.2434999942779541, loss=196.984619140625
epoch 400: accuracy=0.2370000034570694, loss=179.68875122070312
epoch 500: accuracy=0.24300000071525574, loss=167.76573181152344
epoch 600: accuracy=0.2549999952316284, loss=175.16751098632812
epoch 700: accuracy=0.24500000476837158, loss=174.78662109375
epoch 800: accuracy=0.2619999945163727, loss=163.79600524902344
epoch 900: accuracy=0.26899999380111694, loss=161.523681640625
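For calibration: random guessing over 10 balanced classes gives 10% accuracy, and if the initial logits were near zero the starting cross-entropy would be -ln(1/10) = ln(10) ≈ 2.303 per sample. So 26-27% is clearly better than chance, and the starting loss of ~779 simply reflects the very large initial logits produced by the unnormalized 0-255 inputs, not a problem with the loss function itself.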
Two-Layer Fully Connected Network
# Two-layer fully connected network
images_placeholder = tf.placeholder(tf.float32, [None,3072])
labels_placeholder = tf.placeholder(tf.int64, [None])
weights1 = tf.Variable(tf.truncated_normal([3072,200], stddev=0.1))
biases1 = tf.Variable(tf.truncated_normal([200], stddev=0.1))
h1 = tf.matmul(images_placeholder, weights1) + biases1
weights2 = tf.Variable(tf.truncated_normal([200,10], stddev=0.1))
biases2 = tf.Variable(tf.truncated_normal([10], stddev=0.1))
h2 = tf.matmul(h1, weights2) + biases2
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels_placeholder, logits=h2))
train_step = tf.train.GradientDescentOptimizer(3e-5).minimize(loss)
init = tf.global_variables_initializer()
correct_prediction = tf.equal(tf.argmax(h2, 1), labels_placeholder)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(1000):
        _, cross_loss = sess.run([train_step, loss], feed_dict={images_placeholder: train_data, labels_placeholder: train_labels})
        acc = sess.run(accuracy, feed_dict={images_placeholder: test_data, labels_placeholder: test_labels})
        if epoch % 100 == 0:
            print('epoch {0}: accuracy={1}, loss={2}'.format(epoch, acc, cross_loss))
epoch 0: accuracy=0.11699999868869781, loss=893.0167236328125
epoch 100: accuracy=0.1809999942779541, loss=304.99505615234375
epoch 200: accuracy=0.21150000393390656, loss=286.3709411621094
epoch 300: accuracy=0.21850000321865082, loss=185.3872833251953
epoch 400: accuracy=0.24699999392032623, loss=149.6310577392578
epoch 500: accuracy=0.23350000381469727, loss=149.06405639648438
epoch 600: accuracy=0.24549999833106995, loss=126.36512756347656
epoch 700: accuracy=0.273499995470047, loss=106.9766845703125
epoch 800: accuracy=0.265500009059906, loss=90.20166015625
epoch 900: accuracy=0.2680000066757202, loss=83.52161407470703
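One caveat worth flagging about the code above: there is no activation function between the two layers, so the composition of two linear maps is still a linear map, and the model has no more expressive power than the single-layer version, which matches the nearly identical final accuracies. The usual fix (my addition, not part of the original post) is to wrap the hidden layer in a nonlinearity:
# Adding a ReLU makes the hidden layer genuinely nonlinear.
h1 = tf.nn.relu(tf.matmul(images_placeholder, weights1) + biases1)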
Problems encountered:
- Reading the data: the CIFAR10 dataset is serialized with pickle, so it has to be loaded with pickle.load.
- Organizing the data: an incorrect shape conversion made the program fail.
- Using the wrong softmax loss, tf.nn.softmax_cross_entropy_with_logits. I had always used it with the MNIST dataset, whose labels are one-hot; CIFAR10's labels are plain class indices, so tf.nn.sparse_softmax_cross_entropy_with_logits is required, together with labels_placeholder = tf.placeholder(tf.int64, [None]). See the sketch below.
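To make the last point concrete, here is a small sketch (shapes assumed from the code above) of the two interchangeable formulations: the sparse loss takes plain class indices, while the dense loss expects one-hot rows, which tf.one_hot can produce:
# Sparse form: labels are int class indices of shape [batch].
loss_sparse = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels_placeholder, logits=h)
# Equivalent dense form: convert indices to one-hot rows of shape [batch, 10] first.
# (In TF 1.15 this name is deprecated in favor of softmax_cross_entropy_with_logits_v2.)
one_hot = tf.one_hot(labels_placeholder, depth=10)
loss_dense = tf.nn.softmax_cross_entropy_with_logits(
    labels=one_hot, logits=h)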
Summary
After about 1,000 training steps, the fully connected networks reach only about 27% accuracy! Next time I will train with a convolutional network.