Training and testing MNIST, CIFAR-10, and your own dataset with Caffe from the command line
The previous post, "Configuring and using Caffe on Win10 with VS2015, a compute-capability-7.5 GPU, and Python 3.5.2 (Part 1)", covered building the Caffe project and its Python and MATLAB interfaces. This post shows how to use Caffe from the command line to train and test on the MNIST dataset, on the CIFAR-10 dataset, and on a dataset of your own.
(1) Training on MNIST
Place the MNIST dataset in the examples/mnist folder under the main directory.
The MNIST dataset can be converted with the two bat files below.
This one converts to the leveldb format:
This one converts to the lmdb format:
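The screenshots of the two bat files are not reproduced here; as a sketch, a typical lmdb conversion call, assuming the stock convert_mnist_data tool built from examples/mnist/convert_mnist_data.cpp, the standard MNIST file names, and a Build\x64\Release output directory, looks like:

```bat
REM Convert the raw MNIST files into an lmdb database (assumed paths).
.\Build\x64\Release\convert_mnist_data.exe ^
    examples\mnist\train-images-idx3-ubyte ^
    examples\mnist\train-labels-idx1-ubyte ^
    examples\mnist\mnist_train_lmdb --backend=lmdb
pause
```

The leveldb variant is the same call with --backend=leveldb and a different output folder; the test set is converted the same way from t10k-images-idx3-ubyte and t10k-labels-idx1-ubyte.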
Create a bat file named my_add_mnist_run_train.bat (the name is arbitrary) in the main directory, with the following content:
Open lenet_solver.prototxt and you will see:
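The original bat content is shown only as a screenshot; a minimal sketch, assuming the caffe binary sits under Build\x64\Release as in a standard Windows build, would be:

```bat
REM Launch training with the LeNet solver configuration.
.\Build\x64\Release\caffe.exe train --solver=examples/mnist/lenet_solver.prototxt
pause
```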
# The train/test net protocol buffer definition
net: "examples/mnist/lenet_train_test.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100
# Carry out testing every 500 training iterations.
test_interval: 500
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005
# The learning rate policy
lr_policy: "inv"
gamma: 0.0001
power: 0.75
# Display every 100 iterations
display: 100
# The maximum number of iterations
max_iter: 10000
# snapshot intermediate results
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
# solver mode: CPU or GPU
solver_mode: GPU
This file holds the solver settings: the learning rate, the number of iterations, whether to run on the GPU or the CPU, and so on.
Next, open the lenet_train_test.prototxt file:
You can paste a prototxt file into the following site to visualize the network structure: http://ethereon.github.io/netscope/#/editor
name: "LeNet"
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
This file defines the network itself: the filter size and stride of each layer, plus the dataset settings (source, backend, batch size), which you adapt to match your own data.
Run the training bat file; when training finishes, the output looks like this:
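One detail worth noting in the data layers above is transform_param { scale: 0.00390625 }: that constant is exactly 1/256, so raw pixel bytes in [0, 255] are rescaled into roughly [0, 1) before reaching conv1. A quick check:

```python
# Caffe's transform_param scale is applied as: pixel_value * scale.
scale = 0.00390625
assert scale == 1 / 256  # the constant is exactly 1/256

# Pixel 0 stays 0.0; pixel 255 maps to just under 1.0.
print(0 * scale)    # 0.0
print(255 * scale)  # 0.99609375
```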
(2) Testing MNIST with the trained model
First, compute mean.binaryproto.
The bat file content is as follows:
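As a sketch (the original bat content is a screenshot), the mean image is typically computed with Caffe's stock compute_image_mean tool over the training lmdb; the binary path below assumes the standard Windows build layout:

```bat
REM Compute the per-pixel mean over the training set (assumed paths).
.\Build\x64\Release\compute_image_mean.exe ^
    examples\mnist\mnist_train_lmdb ^
    examples\mnist\mean.binaryproto --backend=lmdb
pause
```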
Make a copy of lenet_train_test.prototxt and rename it my_lenet_test.prototxt (any name works).
Compared with lenet_train_test.prototxt, add the statements marked in red below:
Create a bat file in the main directory with the following content:
Run the bat file; the result is as follows:
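A sketch of such a test bat file, assuming the standard build path and that the snapshot name follows from the solver settings shown earlier (snapshot_prefix "examples/mnist/lenet" with max_iter 10000 yields lenet_iter_10000.caffemodel):

```bat
REM Evaluate the trained model; 100 iterations x batch_size 100 = 10,000 test images.
.\Build\x64\Release\caffe.exe test ^
    --model=examples/mnist/my_lenet_test.prototxt ^
    --weights=examples/mnist/lenet_iter_10000.caffemodel ^
    --iterations=100
pause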
Create a bat file that classifies a single MNIST image; its content is as follows:
The contents of the test image folder are as follows:
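As a sketch, single-image prediction is usually done with Caffe's cpp_classification example (classification.exe), which takes a deploy prototxt, the trained weights, the mean file, a label file, and the image. The deploy file name and image name below are hypothetical; the label file is the result.txt described next:

```bat
REM Classify one image (deploy.prototxt and the image name are placeholders).
.\Build\x64\Release\classification.exe ^
    examples\mnist\deploy.prototxt ^
    examples\mnist\lenet_iter_10000.caffemodel ^
    examples\mnist\mean.binaryproto ^
    examples\mnist\result.txt ^
    "test image\2.jpg"
pause
```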
result.txt contains the following. (Caffe's prediction is a 10-dimensional vector; the image being classified here is a 2. The index of the largest value in that vector is taken, and that index is looked up in result.txt to find the corresponding class label.)
The output of running that bat file is as follows:
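The index-of-maximum step described above can be illustrated in a few lines of Python (the vector values here are made up; the real ones come from Caffe's softmax output):

```python
# Hypothetical 10-dimensional output for a handwritten "2".
probs = [0.01, 0.02, 0.90, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01]

# The predicted class is the index of the largest value,
# which is then looked up in the label file (result.txt).
predicted = max(range(len(probs)), key=lambda i: probs[i])
print(predicted)  # 2
```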
(3) Training on CIFAR-10
First convert CIFAR-10 into a dataset format Caffe supports, here lmdb. The original dataset layout is as follows:
The conversion bat file contains the following commands:
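As a sketch, the conversion is typically done with Caffe's stock convert_cifar_data tool (built from examples/cifar10/convert_cifar_data.cpp), which takes the input folder, the output folder, and the db type; the paths below are assumptions:

```bat
REM Convert the CIFAR-10 binary batches into lmdb databases (assumed paths).
.\Build\x64\Release\convert_cifar_data.exe ^
    data\cifar10 examples\cifar10 lmdb
pause
```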