
Kaggle: Handwritten Digit Recognition


This post is my personal summary of an earlier hands-on attempt at the Kaggle handwritten digit recognition starter competition.

The approach combines what I learned from studying several strong kernels on Kaggle with my own understanding, plus perhaps one percent luck.

The highlights are the dimensionality reduction experiments and the convolutional neural network model from deep learning, whose results genuinely dazzled me.

I ran this analysis two or three months ago, when my submission ranked in the top 5%. The leaderboard has no doubt moved on since then, but the point of this post is to summarize the approach; the ranking itself is secondary.

0 Introduction

For a detailed description of this competition, see the Kaggle page; it is quite thorough. The data Kaggle provides has already been preprocessed: each image is converted into a uniform row of pixel values suitable for mining. In practice, though, turning unstructured data such as grayscale (or even color) images of varying sizes into a uniform structured table is itself worth discussing, and I plan to share that part in a later post.

The competition page on the Kaggle site: (link)
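
As a taste of what that preprocessing can look like, here is a minimal sketch (assuming Pillow is installed; the file name and the 28x28 target size are hypothetical placeholders) that resizes a grayscale image and flattens it into a 784-value row, the same layout as the competition data:

import numpy as np
from PIL import Image

def image_to_row(path, size=(28, 28)):
    img = Image.open(path).convert('L')   # force single-channel grayscale
    img = img.resize(size)                # normalize every image to one size
    return np.asarray(img, dtype=np.float32).reshape(-1)  # flatten to 784 values

row = image_to_row('digit.png')  # 'digit.png' is a placeholder path
print(row.shape)  # (784,)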

1 Baseline Modeling (with Dimensionality Reduction)

Import modules

import pandas as pd
import numpy as np
from pandas import DataFrame, Series

import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import matplotlib

import plotly.offline as py
import plotly.graph_objs as go
import plotly.tools as tls
py.init_notebook_mode(connected=True)
from sklearn import svm
import warnings 
warnings.filterwarnings('ignore')

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

%matplotlib inline

1.1 No Dimensionality Reduction + SVM

Read and inspect the data

train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
print('train shape:',train.shape) # train shape: (42000, 785)
print('test shape:',test.shape) # test shape: (28000, 784)
y_train = train['label']
data1 = train.copy(deep = True)
data1.drop(['label'],axis=1,inplace=True)
data1.head()

Normalization

data1 = (data1/255) # scale pixel values to [0, 1]
test1 = (test/255)  # scale pixel values to [0, 1]

Fit an SVM

from sklearn import svm
import time
start = time.perf_counter()  # time.clock() in the original was removed in Python 3.8
clf = svm.SVC()              # named clf so the svm module is not shadowed
clf.fit(data1, y_train)
end = time.perf_counter()
print(clf)
print(end-start) # 409.9348428827064

start = time.perf_counter()
predictions = clf.predict(test1.values)
result = pd.DataFrame({'ImageId':test1.index+1, 'Label':predictions.astype(np.int32)})
result.to_csv("resultsvm.csv", index=False) # Kaggle score: 0.93600
end = time.perf_counter()
print(end-start) # 430.289612269331
Output:

SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
    decision_function_shape='ovr', degree=3, gamma='auto', kernel='rbf',
    max_iter=-1, probability=False, random_state=None, shrinking=True,
    tol=0.001, verbose=False)

Final Kaggle accuracy: 0.93600
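
Since every score above comes from an actual Kaggle submission, a quick local estimate can save submission quota. A minimal sketch (the 0.25 hold-out fraction and random_state are arbitrary choices, not what produced the scores in this post):

from sklearn import svm
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# hold out a quarter of the training data to approximate the leaderboard score
tr_x, va_x, tr_y, va_y = train_test_split(data1, y_train, test_size=0.25, random_state=0)
local_clf = svm.SVC()
local_clf.fit(tr_x, tr_y)
print(accuracy_score(va_y, local_clf.predict(va_x)))  # local hold-out accuracy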

1.2 With Dimensionality Reduction

1.2.1 LDA+SVM

# Linear Discriminant Analysis (LDA): a supervised linear transformation

Normalization

Method 1:

data1 = (data1/255) # scale to [0, 1]
test1 = (test/255)  # scale to [0, 1]
# compute the mean vector of the samples
mean_vec = np.mean(data1.values, axis=0) 

# compute the covariance matrix and its eigenvalues/eigenvectors
cov_mat = np.cov(data1.values.T) # cov_mat.shape == (784, 784)
eig_vals, eig_vecs = np.linalg.eig(cov_mat)
# build a list of (eigenvalue, eigenvector) tuples
eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:,i]) for i in range(len(eig_vals))] # len(eig_vals) == 784
# sort by eigenvalue, high to low
eig_pairs.sort(key=lambda x: x[0], reverse=True) # or sorted(eig_pairs, key=lambda x: x[0])

Method 2:

# Method 2: standardization. StandardScaler keeps the training-set parameters
# (mean, variance), so the fitted object can transform the test set directly.
# from sklearn.preprocessing import StandardScaler
# X = data1.values
# X_ = StandardScaler().fit_transform(X) # X_.shape == (42000, 784)

# # standardize the test matrix column-wise
# T = test.values
# T_ = StandardScaler().fit_transform(T)

# # compute the mean vector of the samples
# mean_vec = np.mean(X_, axis=0) 

# # compute the covariance matrix and its eigenvalues/eigenvectors
# cov_mat = np.cov(X_.T) # cov_mat.shape == (784, 784)
# eig_vals, eig_vecs = np.linalg.eig(cov_mat)
# compute each eigenvalue's contribution to the total variance
tot = sum(eig_vals)
var_exp = [(i/tot)*100 for i in sorted(eig_vals, reverse=True)] # individual explained variance (%); the larger, the more information the component carries
cum_var_exp = np.cumsum(var_exp) # cumulative explained variance
trace1 = go.Scatter(
    x=list(range(784)),
    y=cum_var_exp,
    mode='lines+markers',
    name='Cumulative Explained Variance',
    hoverinfo='y',  # hoverinfo expects flag strings such as 'y', not a data array
    line=dict(
        shape='spline',
        color='goldenrod'
    )
)
trace2 = go.Scatter(
    x=list(range(784)),
    y=var_exp,
    mode='lines+markers',
    name='Individual Explained Variance',
    hoverinfo='y',
    line=dict(
        shape='linear',
        color='black'
    )
)
fig = tls.make_subplots(insets=[{'cell': (1,1), 'l': 0.7, 'b': 0.5}],
                        print_grid=True)

fig.append_trace(trace1, 1, 1)
fig.append_trace(trace2, 1, 1)
fig.layout.title = 'Explained Variance plots - Full and Zoomed-in'
fig.layout.xaxis = dict(range=[0, 80], title='Feature columns')
fig.layout.yaxis = dict(range=[0, 90], title='Explained Variance')
# the inset (x2/y2 axes) repeats both curves over all 784 components
fig['data'] += [go.Scatter(x=list(range(784)), y=cum_var_exp, xaxis='x2', yaxis='y2', name='Cumulative Explained Variance')]
fig['data'] += [go.Scatter(x=list(range(784)), y=var_exp, xaxis='x2', yaxis='y2', name='Individual Explained Variance')]

py.iplot(fig, filename='inset example')


The figure above is an interactive plot of the individual and cumulative explained variance.

n_components = 784  # LDA keeps at most n_classes - 1 = 9 components, so this is capped at 9

lda = LDA(n_components=n_components).fit(data1.values, y_train.values)  # fit on the [0, 1]-scaled data from Method 1
after_value = lda.transform(data1.values)

# lda = LDA(n_components=n_components).fit_transform(data1.values, y_train.values)
print(after_value.shape)

(42000, 9)

X_train = pd.DataFrame(after_value, columns=['eig'+str(i) for i in range(0, after_value.shape[1])])
# apply the fitted transformation to the test data
X_test = lda.transform(test1.values)
print(X_test.shape)

Fit an SVM

from sklearn import svm
import time
start = time.perf_counter()
clf = svm.SVC()
clf.fit(X_train, y_train)
end = time.perf_counter()
print(clf)
print(end-start) # 10.662566411408122

start = time.perf_counter()
predictions = clf.predict(X_test)
result = pd.DataFrame({'ImageId':test.index+1, 'Label':predictions.astype(np.int32)})
result.to_csv("resultlda_svm.csv", index=False) # Kaggle score: 0.92028
end = time.perf_counter()
print(end-start) # 13.689182972164819

Final Kaggle accuracy: 0.92028

1.2.2 TSNE+SVM

# A nonlinear, probabilistic dimensionality reduction method (t-SNE), compared with PCA:

  • Because the underlying principles differ, the attributes t-SNE preserves are more representative, i.e. they best capture the differences between samples;
  • t-SNE runs extremely slowly, while PCA is comparatively fast.

For these reasons, especially when visualizing high-dimensional data, a common approach is to reduce with PCA first and then run t-SNE (a sketch of that combination follows); in this notebook, however, t-SNE is applied directly to the pixel data.
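
A minimal sketch of that two-stage reduction, under the assumption that 50 intermediate PCA components are a reasonable (untuned) choice:

from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# PCA first: compress the 784 pixels to ~50 dimensions, which denoises and makes t-SNE far cheaper
reduced = PCA(n_components=50).fit_transform(data1.values)
# t-SNE then embeds the PCA output into 2 dimensions for visualization
embedded = TSNE(n_components=2).fit_transform(reduced)
print(embedded.shape)  # (42000, 2)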

from sklearn.manifold import TSNE

Normalization

# scale pixel values to [0, 1]
data1 = (data1/255) 
test1 = (test/255)
# t-SNE has no transform() for unseen data, so train and test are embedded together
alldata = pd.concat([data1, test1])
s = time.perf_counter()

n_components = 2
after_value = TSNE(n_components=n_components).fit_transform(alldata.values)

e = time.perf_counter()
print(e-s)
print(after_value)
X_train = pd.DataFrame(after_value[:len(data1),:], columns=['eig'+str(i) for i in range(0, n_components)])
# split off the embedded test rows
X_test = pd.DataFrame(after_value[len(data1):,:], columns=['eig'+str(i) for i in range(0, n_components)])
print(X_test.shape)

Fit an SVM

from sklearn import svm
import time
start = time.perf_counter()
clf = svm.SVC()
clf.fit(X_train, y_train)
end = time.perf_counter()
print(clf)
print(end-start) # 250.67887711523684


start = time.perf_counter()
predictions = clf.predict(X_test)
result = pd.DataFrame({'ImageId':test.index+1, 'Label':predictions.astype(np.int32)})
result.to_csv("resulttsne_svm.csv", index=False) # Kaggle score: 0.97228
end = time.perf_counter()
print(end-start) # 183.61847787915266

Final Kaggle accuracy: 0.97228

1.2.3 PCA+SVM

# PCA: an unsupervised linear transformation
from sklearn.decomposition import PCA

Normalization

# scale pixel values to [0, 1]
data1 = (data1/255) # Kaggle score with this preprocessing: 0.94685
test1 = (test/255)

# # column-wise standardization; Kaggle score with this preprocessing: 0.97228
# from sklearn.preprocessing import StandardScaler
# X = data1.values
# X_ = StandardScaler().fit_transform(X) # X_.shape == (42000, 784)


# T = test.values
# T_ = StandardScaler().fit_transform(T)

# compute the mean vector of the samples
mean_vec = np.mean(data1.values, axis=0) 

# compute the covariance matrix and its eigenvalues/eigenvectors
cov_mat = np.cov(data1.values.T) # cov_mat.shape == (784, 784)
eig_vals, eig_vecs = np.linalg.eig(cov_mat)
# build a list of (eigenvalue, eigenvector) tuples
eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:,i]) for i in range(len(eig_vals))] # len(eig_vals) == 784
# sort by eigenvalue, high to low
eig_pairs.sort(key=lambda x: x[0], reverse=True)
# compute each eigenvalue's contribution to the total variance
tot = sum(eig_vals)
var_exp = [(i/tot)*100 for i in sorted(eig_vals, reverse=True)] # individual explained variance (%)
cum_var_exp = np.cumsum(var_exp) # cumulative explained variance
(The explained-variance chart here is generated by the same plotting code as in section 1.2.1.)

The resulting figure is similar to the previous one.

from sklearn import model_selection  # needed for train_test_split and cross_val_score
train_x, test_x, train_y, test_y = model_selection.train_test_split(data1, y_train)

Define an evaluation helper

def rmsl(clf):
    # 5-fold cross-validated accuracy (despite the name, not an RMSLE metric)
    s = model_selection.cross_val_score(clf, data1, y_train, cv=5)
    print(s.mean(), s.std())
    return (s.mean(), s.std())

Choose the optimal number of components

# cross-validate the effect of different numbers of principal components
n_components = [40,50,60,80,90,120,150,200,400]
result = {}
for i in n_components:
    start = time.perf_counter()
    sv = svm.SVC()
    
    pca = PCA(n_components=i).fit(data1.values) # data1.values: 0.97228; standardized X_: 0.95985
    X_train = pca.transform(data1.values)
    
    s = model_selection.cross_val_score(sv, X_train, y_train, cv=5)
    r = (s.mean(), s.std())
    result[i] = r
    end = time.perf_counter()
    print(end - start)
print(result)
index = 0
com_compare = pd.DataFrame(columns=['n_components','mean','std'])
for i in result:
    com_compare.loc[index,'n_components'] = i
    com_compare.loc[index,'mean'] = result[i][0]
    com_compare.loc[index,'std'] = result[i][1]
    index += 1
com_compare


value = [];keys = []
for key in result.keys():
    value.append(result[key][0])
    keys.append(key)
plt.plot(keys,value,'ro--')
plt.xlabel('components')
plt.ylabel('accuracy')
plt.xticks(keys, rotation=90)
plt.ylim(0.9,1)
# plt.tight_layout()
plt.show()


The chart shows that the optimal value is 40 components.

n_components = 40 
pca = PCA(n_components=n_components).fit(data1.values) # data1.values:0.97228  X_:0.95985
after_value = pca.transform(data1.values)
print(pca)
print(after_value.shape)


X_train = pd.DataFrame(after_value,columns=['eig'+str(i) for i in range(0,n_components)])
# apply the fitted PCA to the test data
X_test = pca.transform(test1)
print(X_test.shape)

Fit an SVM

from sklearn import svm
import time
start = time.perf_counter()
clf = svm.SVC()
clf.fit(X_train, y_train)
end = time.perf_counter()
print(clf)
print(end-start) # 47.16571434436082

start = time.perf_counter()
predictions = clf.predict(X_test)
result = pd.DataFrame({'ImageId':test.index+1, 'Label':predictions.astype(np.int32)})
result.to_csv("resultpca_svm.csv", index=False) # Kaggle score: 0.97228 with 400 components, 0.97857 with 40
end = time.perf_counter()
print(end-start) # 30.386505566902088

Final Kaggle accuracy: 0.97857
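
As an aside, scikit-learn's Pipeline plus GridSearchCV can express this component search more compactly. The sketch below is an alternative, not what produced the scores above; the grid values simply mirror the manual loop:

from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# chaining PCA and SVC lets each CV fold fit the PCA on its own training split
pipe = Pipeline([('pca', PCA()), ('svc', SVC())])
grid = GridSearchCV(pipe,
                    param_grid={'pca__n_components': [40, 50, 60, 80]},
                    cv=5)
grid.fit(data1.values, y_train)
print(grid.best_params_, grid.best_score_)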


2 Neural Networks: DNN

Import modules

import pandas as pd
import numpy as np
from pandas import DataFrame, Series

import warnings 
warnings.filterwarnings('ignore')

import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

from keras.models import Sequential
from keras.layers.core import Dense,Activation
from keras.utils import np_utils
np.random.seed(1671)
from keras.optimizers import SGD

from keras.optimizers import RMSprop, Adam 

from sklearn import model_selection

Read the data

train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
print('train shape:',train.shape) # train shape: (42000, 785)
print('test shape:',test.shape) # test shape: (28000, 784)

y_train = train['label']
data1 = train.copy(deep = True)

data1.drop(['label'],axis=1,inplace=True)

2.1 DNN: A Simple Model

Simple Keras net and establishing a baseline

# network and training
NB_EPOCH = 200
BATCH_SIZE = 128
VERBOSE = 1 # print training progress
NB_CLASSES = 10
OPTIMIZER = SGD()
N_HIDDEN = 128
VALIDATION_SPLIT = 0.2

y_train = np_utils.to_categorical(y_train, NB_CLASSES) # one-hot encode, e.g. label 3 -> [0,0,0,1,0,0,0,0,0,0]

Split the data into training and hold-out test sets

train_x, test_x, train_y, test_y = model_selection.train_test_split(data1, y_train,random_state=10)

Set up containers to store the results of the different network versions for comparison

resultList = [] # evaluation scores of the different network versions
versionList = [] # names of the different network versions
model = Sequential()
model.add(Dense(units=NB_CLASSES, input_dim = 784))
model.add(Activation('softmax'))
# model.add(Dense(units=NB_CLASSES, input_dim = 784, activation = "softmax"))
model.compile(loss='categorical_crossentropy', optimizer = OPTIMIZER, metrics = ['accuracy'])
history = model.fit(train_x, train_y, batch_size = BATCH_SIZE, epochs = NB_EPOCH, \
                    verbose = VERBOSE, validation_split = VALIDATION_SPLIT)
score = model.evaluate(test_x, test_y, verbose = VERBOSE)
print(score) 

# [0.30661305522918703, 0.91266666666666663]

print('Test score:', score[0])
print('Test accuracy:', score[1])

resultList.append(score[1])
versionList.append('raw')


2.2 DNN: Adding Hidden Layers

Improving the simple net in Keras with hidden layers

# add two hidden layers
model1 = Sequential()
model1.add(Dense(units=N_HIDDEN, input_shape=(784,)))
model1.add(Activation('relu'))
model1.add(Dense(N_HIDDEN))
model1.add(Activation('relu'))
model1.add(Dense(NB_CLASSES))
model1.add(Activation('softmax'))
# equivalent to:
# model1.add(Dense(units=N_HIDDEN, input_shape=(784,),activation = "relu"))
# model1.add(Dense(N_HIDDEN,activation = "relu"))
# model1.add(Dense(NB_CLASSES,activation = "softmax"))

print(model1.summary())

model1.compile(loss='categorical_crossentropy', optimizer=OPTIMIZER, metrics=['accuracy'])
history = model1.fit(train_x, train_y, batch_size = BATCH_SIZE, epochs = NB_EPOCH,
                    verbose = VERBOSE, validation_split = VALIDATION_SPLIT)
score1 = model1.evaluate(test_x, test_y, verbose = VERBOSE)
print(score1) # [0.11911841866444974, 0.96276190476190471]
print('Test score1:', score1[0])
print('Test accuracy1:', score1[1])

resultList.append(score1[1])
versionList.append('raw_hidden')

Test score1: 0.119118418664
Test accuracy1: 0.962761904762

2.3 DNN: Adding Dropout (Regularization)

# add dropout

from keras.layers.core import Dropout
DROPOUT = 0.3

model2 = Sequential()
model2.add(Dense(units=N_HIDDEN, input_shape=(784,)))
model2.add(Activation('relu'))
model2.add(Dropout(DROPOUT))

model2.add(Dense(N_HIDDEN))
model2.add(Activation('relu'))
model2.add(Dropout(DROPOUT))

model2.add(Dense(NB_CLASSES))
model2.add(Activation('softmax'))
model2.summary()


model2.compile(loss='categorical_crossentropy', optimizer=OPTIMIZER, metrics=['accuracy'])
history = model2.fit(train_x, train_y, batch_size = BATCH_SIZE, epochs = NB_EPOCH,
                    verbose = VERBOSE, validation_split = VALIDATION_SPLIT)
score2 = model2.evaluate(test_x, test_y, verbose = VERBOSE)
print(score2)

[0.10954098956180471, 0.96685714285714286]

print('Test score2:', score2[0])
print('Test accuracy2:', score2[1])

resultList.append(score2[1])
versionList.append('drop')


2.4 DNN: Training for 250 Epochs

# increase the number of training epochs to 250

model3 = Sequential()
model3.add(Dense(units=N_HIDDEN, input_shape=(784,)))
model3.add(Activation('relu'))
model3.add(Dropout(DROPOUT))

model3.add(Dense(N_HIDDEN))
model3.add(Activation('relu'))
model3.add(Dropout(DROPOUT))

model3.add(Dense(NB_CLASSES))
model3.add(Activation('softmax'))
model3.summary()

NB_EPOCH1 = 250  
model3.compile(loss='categorical_crossentropy', optimizer=OPTIMIZER, metrics=['accuracy'])
history = model3.fit(train_x, train_y, batch_size = BATCH_SIZE, epochs = NB_EPOCH1,
                    verbose = VERBOSE, validation_split = VALIDATION_SPLIT)
score3 = model3.evaluate(test_x, test_y, verbose = VERBOSE)
print(score3)

[0.10397297187930062, 0.96809523809523812]

print('Test score3:', score3[0])
print('Test accuracy3:', score3[1])

resultList.append(score3[1])
versionList.append('drop_epoch250')

Test score3: 0.103972971879
Test accuracy3: 0.968095238095

print(history.history.keys())

dict_keys(['loss', 'val_loss', 'val_acc', 'acc'])

fig = plt.figure(figsize=(14,5))
fig.set(alpha=0.2)  # figure alpha
plt.subplot2grid((1,2),(0,0))
# accuracy curves
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
# plt.tight_layout()
# plt.show()
plt.subplot2grid((1,2),(0,1))
# loss curves
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()


2.5 DNN: Comparing Optimizers

# RMSprop
model4 = Sequential()
model4.add(Dense(units=N_HIDDEN, input_shape=(784,)))
model4.add(Activation('relu'))
model4.add(Dropout(DROPOUT))

model4.add(Dense(N_HIDDEN))
model4.add(Activation('relu'))
model4.add(Dropout(DROPOUT))

model4.add(Dense(NB_CLASSES))
model4.add(Activation('softmax'))
model4.summary()

NB_EPOCH2 = 50
OPTIMIZER = RMSprop()
model4.compile(loss='categorical_crossentropy', optimizer=OPTIMIZER, metrics=['accuracy'])
history = model4.fit(train_x, train_y, batch_size = BATCH_SIZE, epochs = NB_EPOCH2,
                    verbose = VERBOSE, validation_split = VALIDATION_SPLIT)
score4 = model4.evaluate(test_x, test_y, verbose = VERBOSE)
print(score4)

[0.16108144360456017, 0.9711428571428572]

print('Test score4:', score4[0])
print('Test accuracy4:', score4[1])

resultList.append(score4[1])
versionList.append('RMSprop')
fig = plt.figure(figsize=(14,5))
fig.set(alpha=0.2)  # figure alpha
plt.subplot2grid((1,2),(0,0))
# accuracy curves
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
# plt.tight_layout()
# plt.show()
plt.subplot2grid((1,2),(0,1))
# loss curves
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()


# Adam

model5 = Sequential()
model5.add(Dense(units=N_HIDDEN, input_shape=(784,)))
model5.add(Activation('relu'))
model5.add(Dropout(DROPOUT))

model5.add(Dense(N_HIDDEN))
model5.add(Activation('relu'))
model5.add(Dropout(DROPOUT))

model5.add(Dense(NB_CLASSES))
model5.add(Activation('softmax'))
model5.summary()

NB_EPOCH2 = 50
OPTIMIZER = Adam()
model5.compile(loss='categorical_crossentropy', optimizer=OPTIMIZER, metrics=['accuracy'])
history = model5.fit(train_x, train_y, batch_size = BATCH_SIZE, epochs = NB_EPOCH2,
                    verbose = VERBOSE, validation_split = VALIDATION_SPLIT)
score5 = model5.evaluate(test_x, test_y, verbose = VERBOSE)
print(score5)

[0.11124677303965048, 0.97542857142857142]

print('Test score5:', score5[0])
print('Test accuracy5:', score5[1])

resultList.append(score5[1])
versionList.append('Adam')
fig = plt.figure(figsize=(14,5))
fig.set(alpha=0.2)  # figure alpha
plt.subplot2grid((1,2),(0,0))
# accuracy curves
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
# plt.tight_layout()
# plt.show()
plt.subplot2grid((1,2),(0,1))
# loss curves
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()


2.6 DNN: Effect of the Dropout Rate on the Final Result

scoredict = {}
DROPOUTLIST = [0.1,0.15,0.2,0.25,0.3,0.35,0.4] # dropout rates to test
# trainx, valx, trainy, valy = model_selection.train_test_split(train_x, train_y,test_size = 0.2)
trainx, valx, trainy, valy = model_selection.train_test_split(train_x, train_y,test_size = 0.2,random_state =1671)
for i in DROPOUTLIST:
    model6 = Sequential()
    model6.add(Dense(units=N_HIDDEN, input_shape=(784,)))
    model6.add(Activation('relu'))
    model6.add(Dropout(i))

    model6.add(Dense(N_HIDDEN))
    model6.add(Activation('relu'))
    model6.add(Dropout(i))

    model6.add(Dense(NB_CLASSES))
    model6.add(Activation('softmax'))
    model6.summary()

    NB_EPOCH2 = 50
    OPTIMIZER = Adam()
    model6.compile(loss='categorical_crossentropy', optimizer=OPTIMIZER, metrics=['accuracy'])
    history = model6.fit(trainx, trainy, batch_size = BATCH_SIZE, epochs = NB_EPOCH2,
                        verbose = VERBOSE, validation_data = [valx,valy])
    score6 = model6.evaluate(test_x, test_y, verbose = VERBOSE)
    print(score6)

    scoredict[i] = score6[1]
scoredict


value = [];keys = []
for key in scoredict.keys():
    value.append(scoredict[key])
    keys.append(key)
df = DataFrame(value,index=keys,columns=['resultRate'])
df['num'] = range(len(keys))

df['resultRate'].plot()
plt.xlabel('dropoutrate')
plt.ylabel('accuracy')
plt.xticks(keys)
plt.show()
# conclusion: the best dropout rate is 0.3
print(keys)
print(value)
df = DataFrame(resultList,index=versionList,columns=['resultRate'])
df['num'] = range(len(resultList))
df


df['resultRate'].plot(style='-.bo')
plt.grid(axis='y')
# add numeric labels to each point
for a, b in zip(df['num'], df['resultRate']):
    plt.text(a, b+0.001, '%.4f' % b, ha='center', va='bottom', fontsize=9)
plt.show()

2.7 DNN: Finding the Best Learning Rate

scoredict1 = {}
LrLIST = ['0.1','0.01','0.001','0.0001']

for i in LrLIST:
    start = time.perf_counter()
    model7 = Sequential()
    model7.add(Dense(units=N_HIDDEN, input_shape=(784,)))
    model7.add(Activation('relu'))
    model7.add(Dropout(DROPOUT))

    model7.add(Dense(N_HIDDEN))
    model7.add(Activation('relu'))
    model7.add(Dropout(DROPOUT))

    model7.add(Dense(NB_CLASSES))
    model7.add(Activation('softmax'))
    model7.summary()

    NB_EPOCH2 = 50
    OPTIMIZER = Adam(lr = float(i))
    model7.compile(loss='categorical_crossentropy', optimizer=OPTIMIZER, metrics=['accuracy'])
    history = model7.fit(train_x, train_y, batch_size = BATCH_SIZE, epochs = NB_EPOCH2,
                        verbose = VERBOSE, validation_split = VALIDATION_SPLIT)
    score7 = model7.evaluate(test_x, test_y, verbose = VERBOSE)
    end = time.perf_counter()
    
    usetime = end-start
    scoredict1[i] = (score7[1], usetime)
value = [];keys = [];times= []
for key in scoredict1.keys():
    value.append(scoredict1[key][0])
    times.append(scoredict1[key][1])
    keys.append(key)


df = DataFrame(value,index=range(len(value)),columns=['resultRate'])
df['lr'] = keys
df.sort_values('lr',ascending=False,inplace=True)
df['lr'] = df['lr'].astype(str)

df = df.set_index('lr')
df['num'] = range(len(value))
df['times'] = np.round(times,2)
df


df['resultRate'].plot(style='-.bo')
plt.grid(axis='y')
# add numeric labels to each point
for a, b in zip(df['num'], df['resultRate']):
    plt.text(a, b+0.01, '%.4f' % b, ha='center', va='bottom', fontsize=9)
plt.show()


2.8 DNN: Finding the Best Batch Size

Increasing the size of the batch computation

# test the effect of batch_size
scoredict2 = {}
batchLIST = ['64','128','256','512']

for i in batchLIST:
    model8 = Sequential()
    model8.add(Dense(units=N_HIDDEN, input_shape=(784,)))
    model8.add(Activation('relu'))
    model8.add(Dropout(DROPOUT))

    model8.add(Dense(N_HIDDEN))
    model8.add(Activation('relu'))
    model8.add(Dropout(DROPOUT))

    model8.add(Dense(NB_CLASSES))
    model8.add(Activation('softmax'))
    model8.summary()


    model8.compile(loss='categorical_crossentropy', optimizer=OPTIMIZER, metrics=['accuracy'])
    history = model8.fit(train_x, train_y, batch_size = int(i), epochs = NB_EPOCH,
                        verbose = VERBOSE, validation_split = VALIDATION_SPLIT)
    score8 = model8.evaluate(test_x, test_y, verbose = VERBOSE)
    scoredict2[i] = score8[1]
value = [];keys = [] 
for key in scoredict2.keys():
    value.append(scoredict2[key] ) 
    keys.append(int(key))
    
df = DataFrame(value,index=range(len(value)),columns=['resultRate'])
df['batch'] = keys
df.sort_values('batch',ascending=True,inplace=True)
df['batch'] = df['batch'].astype(str)

df = df.set_index('batch')
df['num'] = range(len(value))
df


df['resultRate'].plot(style='-.bo')
plt.grid(axis='y')
# add numeric labels to each point
for a, b in zip(df['num'], df['resultRate']):
    plt.text(a, b+0.0001, '%.4f' % b, ha='center', va='bottom', fontsize=9)
plt.show()


3 Neural Networks: CNN

3.1 A Minimal CNN

X_train = data1.values.reshape(-1,28,28,1)
# test = test.values.reshape(-1,28,28,1)
train_x1, test_x1, train_y1, test_y1 = model_selection.train_test_split(X_train, y_train, random_state=10)  # the default test fraction is 0.25
X_test = test1.values.reshape(-1,28,28,1)
X_test.shape
from keras.layers.convolutional import Conv2D
from keras.layers.convolutional import MaxPooling2D 
from keras.layers import Flatten
INPUT_SHAPE = (28,28,1)
OPTIMIZER = Adam()
NB_EPOCH = 50
BATCH_SIZE = 128
VERBOSE = 1 # print training progress
NB_CLASSES = 10
VALIDATION_SPLIT = 0.2
model9 = Sequential()
model9.add(Conv2D(20, kernel_size=5, padding='same', input_shape=INPUT_SHAPE))
model9.add(Activation('relu'))
model9.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))


model9.add(Conv2D(50, kernel_size=5, padding='same'))  # padding= replaces the old Keras 1 border_mode= argument
model9.add(Activation('relu'))
model9.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))

model9.add(Flatten())
model9.add(Dense(500))
model9.add(Activation('relu'))
model9.add(Dense(NB_CLASSES))
model9.add(Activation('softmax'))

model9.compile(loss='categorical_crossentropy', optimizer=OPTIMIZER, metrics=['accuracy'])
history = model9.fit(train_x1, train_y1, epochs = NB_EPOCH, batch_size=BATCH_SIZE,\
                     verbose=VERBOSE, validation_split=VALIDATION_SPLIT)
score9 = model9.evaluate(test_x1, test_y1, verbose = VERBOSE)
print(score9)


3.2 Adding Dropout

from keras.layers.core import Dropout
DROPOUT = 0.3
# conv + maxpool + dropout, repeated twice, then dense layers
model10 = Sequential()
model10.add(Conv2D(20, kernel_size=5, padding='same', input_shape=INPUT_SHAPE))
model10.add(Activation('relu'))
model10.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model10.add(Dropout(DROPOUT))

model10.add(Conv2D(50, kernel_size=5, padding='same'))
model10.add(Activation('relu'))
model10.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model10.add(Dropout(DROPOUT))

model10.add(Flatten())
model10.add(Dense(500))
model10.add(Activation('relu'))
model10.add(Dropout(DROPOUT))
model10.add(Dense(NB_CLASSES))
model10.add(Activation('softmax'))

model10.compile(loss='categorical_crossentropy', optimizer=OPTIMIZER, metrics=['accuracy'])
history = model10.fit(train_x1, train_y1, epochs = NB_EPOCH, batch_size=BATCH_SIZE,\
                     verbose=VERBOSE, validation_split=VALIDATION_SPLIT)
score10 = model10.evaluate(test_x1, test_y1, verbose = VERBOSE)
print(score10)

[0.038056954765370916, 0.99076190476190473]

fig = plt.figure(figsize=(14,5))
fig.set(alpha=0.2)  # figure alpha
plt.subplot2grid((1,2),(0,0))
# accuracy curves
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
# plt.tight_layout()
# plt.show()
plt.subplot2grid((1,2),(0,1))
# loss curves
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()


3.3 Adding a Convolutional Layer

# conv + conv + maxpool + dropout, repeated twice
model12 = Sequential()
# model.add(Conv2D(filters = 32, kernel_size = (5,5),padding = 'Same', 
#                  activation ='relu', input_shape = (28,28,1)))
model12.add(Conv2D(20, kernel_size=5, padding='same', input_shape=INPUT_SHAPE))
model12.add(Activation('relu'))
model12.add(Conv2D(20, kernel_size=5, padding='same' ))
model12.add(Activation('relu'))
model12.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model12.add(Dropout(DROPOUT))

model12.add(Conv2D(50, kernel_size=5, padding='same'))
model12.add(Activation('relu'))
model12.add(Conv2D(50, kernel_size=5, padding='same'))
model12.add(Activation('relu'))
model12.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model12.add(Dropout(DROPOUT))

model12.add(Flatten())
model12.add(Dense(500))
model12.add(Activation('relu'))
model12.add(Dropout(DROPOUT))
model12.add(Dense(NB_CLASSES))
model12.add(Activation('softmax'))

model12.compile(loss='categorical_crossentropy', optimizer=OPTIMIZER, metrics=['accuracy'])
history12 = model12.fit(train_x1, train_y1, epochs = NB_EPOCH, batch_size=BATCH_SIZE,\
                     verbose=2, validation_split=VALIDATION_SPLIT)
score12 = model12.evaluate(test_x1, test_y1, verbose = VERBOSE)
print(score12)

[0.03799814764385312, 0.99228571428571433]

fig = plt.figure(figsize=(14,5))
fig.set(alpha=0.2)  # figure alpha
plt.subplot2grid((1,2),(0,0))
# accuracy curves
plt.plot(history12.history['acc'])
plt.plot(history12.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
# plt.tight_layout()
# plt.show()
plt.subplot2grid((1,2),(0,1))
# loss curves
plt.plot(history12.history['loss'])
plt.plot(history12.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()


3.4 Adding Data Augmentation

from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ReduceLROnPlateau
model11 = Sequential()
model11.add(Conv2D(20, kernel_size=5, padding='same', input_shape=INPUT_SHAPE))
model11.add(Activation('relu'))
model11.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model11.add(Dropout(DROPOUT))

model11.add(Conv2D(50, kernel_size=5, padding='same'))
model11.add(Activation('relu'))
model11.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model11.add(Dropout(DROPOUT))

model11.add(Flatten())
model11.add(Dense(500))
model11.add(Activation('relu'))
model11.add(Dropout(DROPOUT))
model11.add(Dense(NB_CLASSES))
model11.add(Activation('softmax'))

model11.compile(loss='categorical_crossentropy', optimizer=OPTIMIZER, metrics=['accuracy'])
# history = model11.fit(train_x1, train_y1, epochs = NB_EPOCH, batch_size=BATCH_SIZE,\
#                      verbose=VERBOSE, validation_split=VALIDATION_SPLIT)
datagen = ImageDataGenerator(
        featurewise_center=False,  # set input mean to 0 over the dataset
        samplewise_center=False,  # set each sample mean to 0
        featurewise_std_normalization=False,  # divide inputs by std of the dataset
        samplewise_std_normalization=False,  # divide each input by its std
        zca_whitening=False,  # apply ZCA whitening
        rotation_range=10,  # randomly rotate images in the range (degrees, 0 to 180)
        zoom_range = 0.1, # Randomly zoom image 
        width_shift_range=0.1,  # randomly shift images horizontally (fraction of total width)
        height_shift_range=0.1,  # randomly shift images vertically (fraction of total height)
        horizontal_flip=False,  # randomly flip images
        vertical_flip=False)  # randomly flip images

For the data augmentation, I chose to:

  • randomly rotate some training images by 10 degrees
  • randomly zoom some training images by 10%
  • randomly shift images horizontally by 10% of the width
  • randomly shift images vertically by 10% of the height

train_x2, test_x2, train_y2, test_y2 = model_selection.train_test_split(train_x1, train_y1,test_size = 0.2)
datagen.fit(train_x2)
learning_rate_reduction = ReduceLROnPlateau(monitor='val_acc', 
                                            patience=3, 
                                            verbose=1, 
                                            factor=0.5, 
                                            min_lr=0.00001)
# Fit the model
# By default, fit loads the whole dataset at once; fit_generator instead pulls batches from a generator on demand.
history11 = model11.fit_generator(datagen.flow(train_x2, train_y2, batch_size=BATCH_SIZE),\
                              epochs = 30, validation_data =(test_x2, test_y2),\
                              verbose = 2, steps_per_epoch=train_x1.shape[0] // BATCH_SIZE, callbacks=[learning_rate_reduction])  # note: steps_per_epoch is based on train_x1 here and in the later fits, although train_x2 is the split actually trained on
score11 = model11.evaluate(test_x1, test_y1, verbose = VERBOSE)
print(score11)

[0.01490157852299966, 0.99514285714285711]

results = model11.predict(X_test)

# select the index with the maximum probability
results = np.argmax(results,axis = 1)

results = pd.Series(results,name="Label")

submission = pd.concat([pd.Series(range(1,28001),name = "ImageId"),results],axis = 1)

submission.to_csv("resultmodel11.csv",index=False) # Kaggle score: 0.99471

Kaggle accuracy at this point: 0.99471

fig = plt.figure(figsize=(14,5))
fig.set(alpha=0.2)  # figure alpha
plt.subplot2grid((1,2),(0,0))
# accuracy curves
plt.plot(history11.history['acc'])
plt.plot(history11.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
# plt.tight_layout()
# plt.show()
plt.subplot2grid((1,2),(0,1))
# loss curves
plt.plot(history11.history['loss'])
plt.plot(history11.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()


3.5 Extra Convolutional Layer + Data Augmentation

model13 = Sequential()
model13.add(Conv2D(20, kernel_size=5, padding='same', input_shape=INPUT_SHAPE))
model13.add(Activation('relu'))
model13.add(Conv2D(20, kernel_size=5, padding='same' ))
model13.add(Activation('relu'))
model13.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model13.add(Dropout(DROPOUT))

model13.add(Conv2D(50, kernel_size=5, padding='same'))
model13.add(Activation('relu'))
model13.add(Conv2D(50, kernel_size=5, padding='same'))
model13.add(Activation('relu'))
model13.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model13.add(Dropout(DROPOUT))

model13.add(Flatten())
model13.add(Dense(500))
model13.add(Activation('relu'))
model13.add(Dropout(DROPOUT))
model13.add(Dense(NB_CLASSES))
model13.add(Activation('softmax'))

model13.compile(loss='categorical_crossentropy', optimizer=OPTIMIZER, metrics=['accuracy'])
# Fit the model with the generator, as above
history13 = model13.fit_generator(datagen.flow(train_x2,train_y2, batch_size=BATCH_SIZE),\
                              epochs = 30, validation_data =(test_x2, test_y2),\
                              verbose = 2, steps_per_epoch=train_x1.shape[0] // BATCH_SIZE, callbacks=[learning_rate_reduction])
score13 = model13.evaluate(test_x1, test_y1, verbose = VERBOSE)
print(score13)

[0.014365676611257922, 0.99580952380952381]

fig = plt.figure(figsize=(14,5))
fig.set(alpha=0.2)  # figure alpha
plt.subplot2grid((1,2),(0,0))
# accuracy curves
plt.plot(history13.history['acc'])
plt.plot(history13.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
# plt.tight_layout()
# plt.show()
plt.subplot2grid((1,2),(0,1))
# loss curves
plt.plot(history13.history['loss'])
plt.plot(history13.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()


results = model13.predict(X_test)

# select the index with the maximum probability
results = np.argmax(results,axis = 1)

results = pd.Series(results,name="Label")

submission = pd.concat([pd.Series(range(1,28001),name = "ImageId"),results],axis = 1)

submission.to_csv("resultmodel13.csv",index=False) # Kaggle score: 0.99457

Kaggle accuracy at this point: 0.99457

3.6 Using the RMSprop Optimizer

model13_3 = Sequential()
model13_3.add(Conv2D(20, kernel_size=5, padding='same', input_shape=INPUT_SHAPE))
model13_3.add(Activation('relu'))
model13_3.add(Conv2D(20, kernel_size=5, padding='same' ))
model13_3.add(Activation('relu'))
model13_3.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model13_3.add(Dropout(DROPOUT))

model13_3.add(Conv2D(50, kernel_size=5, padding='same'))
model13_3.add(Activation('relu'))
model13_3.add(Conv2D(50, kernel_size=5, padding='same'))
model13_3.add(Activation('relu'))
model13_3.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model13_3.add(Dropout(DROPOUT))

model13_3.add(Flatten())
model13_3.add(Dense(500))
model13_3.add(Activation('relu'))
model13_3.add(Dropout(DROPOUT))
model13_3.add(Dense(NB_CLASSES))
model13_3.add(Activation('softmax'))

OPTIMIZER1 = RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)
model13_3.compile(loss='categorical_crossentropy', optimizer=OPTIMIZER1, metrics=['accuracy'])

history13_3 = model13_3.fit_generator(datagen.flow(train_x2,train_y2, batch_size=BATCH_SIZE),\
                              epochs = 30, validation_data =(test_x2, test_y2),\
                              verbose = 2, steps_per_epoch=train_x1.shape[0] // BATCH_SIZE, callbacks=[learning_rate_reduction])

score13_3 = model13_3.evaluate(test_x1, test_y1, verbose = VERBOSE)
print(score13_3)  

[0.017972873184371393, 0.99476190476190474]

print('acc is:', history13_3.history['acc'][-1])
print('val_acc is:', history13_3.history['val_acc'][-1])

acc is: 0.992561037761
val_acc is: 0.994761904762

3.7 Adding Yet Another Layer

model13_2 = Sequential()
model13_2.add(Conv2D(20, kernel_size=5, padding='same', input_shape=INPUT_SHAPE))
model13_2.add(Activation('relu'))
model13_2.add(Conv2D(20, kernel_size=5, padding='same' ))
model13_2.add(Activation('relu'))
model13_2.add(Conv2D(20, kernel_size=5, padding='same' ))
model13_2.add(Activation('relu'))
model13_2.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model13_2.add(Dropout(DROPOUT))

model13_2.add(Conv2D(50, kernel_size=5, padding='same'))
model13_2.add(Activation('relu'))
model13_2.add(Conv2D(50, kernel_size=5, padding='same'))
model13_2.add(Activation('relu'))
model13_2.add(Conv2D(50, kernel_size=5, padding='same'))
model13_2.add(Activation('relu'))
model13_2.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model13_2.add(Dropout(DROPOUT))

model13_2.add(Flatten())
model13_2.add(Dense(500))
model13_2.add(Activation('relu'))
model13_2.add(Dropout(DROPOUT))
model13_2.add(Dense(NB_CLASSES))
model13_2.add(Activation('softmax'))

model13_2.compile(loss='categorical_crossentropy', optimizer=OPTIMIZER, metrics=['accuracy'])
history13_2 = model13_2.fit_generator(datagen.flow(train_x2,train_y2, batch_size=BATCH_SIZE),\
                              epochs = 30, validation_data =(test_x2, test_y2),\
                              verbose = 2, steps_per_epoch=train_x1.shape[0] // BATCH_SIZE, callbacks=[learning_rate_reduction])
score13_2 = model13_2.evaluate(test_x1, test_y1, verbose = VERBOSE)
print(score13_2)

fig = plt.figure(figsize=(14,5))
fig.set(alpha=0.2)  # figure alpha
plt.subplot2grid((1,2),(0,0))
# accuracy curves
plt.plot(history13_2.history['acc'])
plt.plot(history13_2.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
# plt.tight_layout()
# plt.show()
plt.subplot2grid((1,2),(0,1))
# loss curves
plt.plot(history13_2.history['loss'])
plt.plot(history13_2.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()


3.8 Predicting on the Test Set

This run trains on the full split train_x1, test_x1, train_y1, test_y1.

model13_1 = Sequential()
model13_1.add(Conv2D(20, kernel_size=5, padding='same', input_shape=INPUT_SHAPE))
model13_1.add(Activation('relu'))
model13_1.add(Conv2D(20, kernel_size=5, padding='same' ))
model13_1.add(Activation('relu'))
model13_1.add(Conv2D(20, kernel_size=5, padding='same' ))
model13_1.add(Activation('relu'))
model13_1.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model13_1.add(Dropout(DROPOUT))

model13_1.add(Conv2D(50, kernel_size=5, padding='same'))
model13_1.add(Activation('relu'))
model13_1.add(Conv2D(50, kernel_size=5, padding='same'))
model13_1.add(Activation('relu'))
model13_1.add(Conv2D(50, kernel_size=5, padding='same'))
model13_1.add(Activation('relu'))
model13_1.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
model13_1.add(Dropout(DROPOUT))

model13_1.add(Flatten())
model13_1.add(Dense(256))
model13_1.add(Activation('relu'))
model13_1.add(Dropout(DROPOUT))
model13_1.add(Dense(NB_CLASSES))
model13_1.add(Activation('softmax'))

model13_1.compile(loss='categorical_crossentropy', optimizer=OPTIMIZER, metrics=['accuracy'])

history13_1 = model13_1.fit_generator(datagen.flow(train_x1,train_y1, batch_size=BATCH_SIZE),\
                              epochs = 30, validation_data =(test_x1, test_y1),\
                              verbose = 2, steps_per_epoch=train_x1.shape[0] // BATCH_SIZE, callbacks=[learning_rate_reduction])
fig = plt.figure(figsize=(14,5))
fig.set(alpha=0.2)  # figure alpha
plt.subplot2grid((1,2),(0,0))
# accuracy curves
plt.plot(history13_1.history['acc'])
plt.plot(history13_1.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
# plt.tight_layout()
# plt.show()
plt.subplot2grid((1,2),(0,1))
# loss curves
plt.plot(history13_1.history['loss'])
plt.plot(history13_1.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()


results = model13_1.predict(X_test)

# select the index with the maximum probability
results = np.argmax(results,axis = 1)

results = pd.Series(results,name="Label")

submission = pd.concat([pd.Series(range(1,28001),name = "ImageId"),results],axis = 1)

submission.to_csv("resultmodel13_1.csv",index=False) # Kaggle score: 0.98

Kaggle accuracy at this point: 0.98

4 CNN: Parameters Borrowed from a Published Kernel

4.1 The Original Author's Code:

model14 = Sequential()
# convolution
model14.add(Conv2D(filters = 32, kernel_size = (5,5),padding = 'Same', 
                 activation ='relu', input_shape = (28,28,1)))
model14.add(Conv2D(filters = 32, kernel_size = (5,5),padding = 'Same', 
                 activation ='relu'))
# pooling
model14.add(MaxPooling2D(pool_size=(2,2)))
# dropout
model14.add(Dropout(0.25))

# convolution
model14.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'Same', 
                 activation ='relu'))
model14.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'Same', 
                 activation ='relu'))
# pooling
model14.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
# dropout
model14.add(Dropout(0.25))

# fully connected layers
model14.add(Flatten())
model14.add(Dense(256, activation = "relu"))
model14.add(Dropout(0.5))
model14.add(Dense(10, activation = "softmax"))

model14.compile(loss='categorical_crossentropy', optimizer=OPTIMIZER, metrics=['accuracy'])
# Fit the model with the generator, as above
history14 = model14.fit_generator(datagen.flow(train_x2,train_y2, batch_size=BATCH_SIZE),\
                              epochs = 30, validation_data =(test_x2, test_y2),\
                              verbose = 2, steps_per_epoch=train_x1.shape[0] // BATCH_SIZE, callbacks=[learning_rate_reduction])
score14 = model14.evaluate(test_x1, test_y1, verbose = VERBOSE)
print(score14)

[0.046259992936537382, 0.98457142857142854]

fig = plt.figure(figsize=(14,5))
fig.set(alpha=0.2)  # figure alpha
plt.subplot2grid((1,2),(0,0))
# accuracy curves
plt.plot(history14.history['acc'])
plt.plot(history14.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
# plt.tight_layout()
# plt.show()
plt.subplot2grid((1,2),(0,1))
# loss curves
plt.plot(history14.history['loss'])
plt.plot(history14.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()


4.2 Using the RMSprop Optimizer

OPTIMIZER1 = RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)
model14.compile(loss='categorical_crossentropy', optimizer=OPTIMIZER1, metrics=['accuracy'])

history14 = model14.fit_generator(datagen.flow(train_x2,train_y2, batch_size=BATCH_SIZE),\
                              epochs = 30, validation_data =(test_x2, test_y2),\
                              verbose = 2, steps_per_epoch=train_x1.shape[0] // BATCH_SIZE, callbacks=[learning_rate_reduction])

score14 = model14.evaluate(test_x1, test_y1, verbose = VERBOSE)
print(score14)  # Kaggle score: 0.99571
results = model14.predict(X_test)

# select the index with the maximum probability
results = np.argmax(results,axis = 1)

results = pd.Series(results,name="Label")

submission = pd.concat([pd.Series(range(1,28001),name = "ImageId"),results],axis = 1)

submission.to_csv("resultmodel14.csv",index=False) # Kaggle score: 0.99571

Kaggle result: 0.99571 (the best result so far)

4.3 Tuning the Batch_size Value

batchList = [70,75,110,256] # [80,86,100,128]
batchdic = {}  # collects the score for each batch size
OPTIMIZER1 = RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)
for i in batchList:
    model15 = Sequential()
    # convolution
    model15.add(Conv2D(filters = 32, kernel_size = (5,5),padding = 'Same', 
                     activation ='relu', input_shape = (28,28,1)))
    model15.add(Conv2D(filters = 32, kernel_size = (5,5),padding = 'Same', 
                     activation ='relu'))
    # pooling
    model15.add(MaxPooling2D(pool_size=(2,2)))
    # dropout
    model15.add(Dropout(0.25))

    # convolution
    model15.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'Same', 
                     activation ='relu'))
    model15.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'Same', 
                     activation ='relu'))
    # pooling
    model15.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
    # dropout
    model15.add(Dropout(0.25))

    # fully connected layers
    model15.add(Flatten())
    model15.add(Dense(256, activation = "relu"))
    model15.add(Dropout(0.5))
    model15.add(Dense(10, activation = "softmax"))

  
    model15.compile(loss='categorical_crossentropy', optimizer=OPTIMIZER1, metrics=['accuracy'])

    history15 = model15.fit_generator(datagen.flow(train_x2,train_y2, batch_size= i ),\
                                  epochs = 30, validation_data =(test_x2, test_y2),\
                                  verbose = 2, steps_per_epoch=train_x2.shape[0] // i, callbacks=[learning_rate_reduction])

    score15 = model15.evaluate(test_x1, test_y1, verbose = VERBOSE)
    batchdic[i] = score15[1]
batchdic  # 80 performed best


fig = plt.figure(figsize=(14,5))
fig.set(alpha=0.2)  # figure alpha
plt.subplot2grid((1,2),(0,0))
# accuracy curves (history15 holds the last batch size tried)
plt.plot(history15.history['acc'])
plt.plot(history15.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
# plt.tight_layout()
# plt.show()
plt.subplot2grid((1,2),(0,1))
# loss curves
plt.plot(history15.history['loss'])
plt.plot(history15.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='best')
plt.show()


4.4 Adjusting the Augmentation

# compare against model14
model17 = Sequential()
# convolution
model17.add(Conv2D(filters = 32, kernel_size = (5,5),padding = 'Same', 
                 activation ='relu', input_shape = (28,28,1)))
model17.add(Conv2D(filters = 32, kernel_size = (5,5),padding = 'Same', 
                 activation ='relu'))
# pooling
model17.add(MaxPooling2D(pool_size=(2,2)))
# dropout
model17.add(Dropout(0.25))

# convolution
model17.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'Same', 
                 activation ='relu'))
model17.add(Conv2D(filters = 64, kernel_size = (3,3),padding = 'Same', 
                 activation ='relu'))
# pooling
model17.add(MaxPooling2D(pool_size=(2,2), strides=(2,2)))
# dropout
model17.add(Dropout(0.25))

# fully connected layers
model17.add(Flatten())
model17.add(Dense(256, activation = "relu"))
model17.add(Dropout(0.5))
model17.add(Dense(10, activation = "softmax"))
                  
model17.compile(loss='categorical_crossentropy', optimizer=OPTIMIZER1, metrics=['accuracy'])

                  
datagen = ImageDataGenerator(
        featurewise_center=False,  # set input mean to 0 over the dataset
        samplewise_center=False,  # set each sample mean to 0
        featurewise_std_normalization=False,  # divide inputs by std of the dataset
        samplewise_std_normalization=False,  # divide each input by its std
        zca_whitening=False,  # apply ZCA whitening
        rotation_range=10,  # randomly rotate images in the range (degrees, 0 to 180)
        zoom_range = 0.1, # Randomly zoom image 
        width_shift_range=0.1,  # randomly shift images horizontally (fraction of total width)
        height_shift_range=0.1,  # randomly shift images vertically (fraction of total height)
        horizontal_flip=False,  # randomly flip images
        vertical_flip=False)  # randomly flip images


history17 = model17.fit_generator(datagen.flow(train_x2,train_y2, batch_size= 128),\
                              epochs = 30, validation_data =(test_x2, test_y2),\
                              verbose = 2, steps_per_epoch=train_x1.shape[0] // 128, callbacks=[learning_rate_reduction])

score17 = model17.evaluate(test_x1, test_y1, verbose = VERBOSE)
print(score17)

[0.07702009039975348, 0.97599999999999998]

results = model17.predict(X_test)

# select the index with the maximum probability
results = np.argmax(results,axis = 1)

results = pd.Series(results,name="Label")

submission = pd.concat([pd.Series(range(1,28001),name = "ImageId"),results],axis = 1)

submission.to_csv("resultmodel17.csv",index=False)

Kaggle result: to be honest, I forget this one's exact score; I only remember it was not the best.