[Machine Learning, Chapter 5: Neural Networks] The neuron model, perceptrons and multi-layer networks, the error backpropagation algorithm, and global vs. local minima
程序员文章站
2024-03-14 11:49:52
Neural networks are inspired by the nervous system of the human brain.
The M-P neuron
The M-P neuron models a single biological neuron. By itself it forms a one-layer network and is known as a "simple unit".
From logistic regression to the neuron
Logistic regression consists of two parts, a linear transformation followed by a nonlinear transformation; in a neuron, the linear and nonlinear parts can be viewed as one integrated unit.
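This correspondence can be sketched in a few lines: a neuron is a weighted sum minus a threshold, passed through a nonlinearity (here a sigmoid, the same function used in logistic regression; the weights and threshold below are illustrative values, not learned ones).

```python
import numpy as np

def sigmoid(z):
    # logistic (sigmoid) nonlinearity, as in logistic regression
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, theta):
    # An M-P neuron: linear combination of inputs minus a threshold,
    # passed through a nonlinear activation function.
    return sigmoid(np.dot(w, x) - theta)

# With weights [1, 1] and threshold 1.5 the neuron acts like a soft AND gate:
print(neuron(np.array([1, 1]), np.array([1.0, 1.0]), 1.5))  # > 0.5
print(neuron(np.array([0, 1]), np.array([1.0, 1.0]), 1.5))  # < 0.5
```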
The perceptron
A perceptron consists of two layers of neurons: the input layer receives external inputs, and the output layer is an M-P neuron.
For a two-class problem in a two-dimensional plane, the perceptron corrects itself point by point. First pick an arbitrary separating line and find the misclassified points; then pick one misclassified point at random and move the line so that this point becomes correctly classified; then pick another misclassified point and repeat. The decision boundary keeps shifting until every point is classified correctly, at which point a separating line has been found (the perceptron stops at the first line that works, which need not be a unique "best" one).
Normalize the samples by multiplying every negative-class sample by (-1); the goal then becomes:

$\hat{y} = \hat{W}^T X > 0$

The perceptron algorithm learns from a set of labeled training samples to find a weight vector satisfying the inequality above.
See the worked perceptron example in the lecture slides.
# Perceptron algorithm
import numpy as np
import matplotlib.pyplot as plt

X0 = np.array([[1, 0],
               [0, 1],
               [2, 0],
               [2, 2]])
X1 = np.array([[-1, -1],
               [-1, 0],
               [-2, -1],
               [0, -2]])
plt.grid()
plt.scatter(X0[:, 0], X0[:, 1], c='r', marker='o', s=500)
plt.scatter(X1[:, 0], X1[:, 1], c='g', marker='*', s=500)
plt.show()

# Turn the samples into augmented vectors (prepend a -1 bias component)
ones = -np.ones((X0.shape[0], 1))
X0 = np.hstack((ones, X0))
ones = -np.ones((X1.shape[0], 1))
X1 = np.hstack((ones, X1))

# Normalize: multiply the negative-class samples by -1
X = np.vstack((X0, -X1))

plt.grid()
plt.scatter(X0[:, 1], X0[:, 2], c='r', marker='o', s=500)
plt.scatter(X1[:, 1], X1[:, 2], c='g', marker='*', s=500)

W = np.ones((X.shape[1], 1))
p1 = [-2.0, 2.0]
p2 = [(W[0] + 2 * W[1]) / W[2], (W[0] - 2 * W[1]) / W[2]]
plt.plot(p1, p2)

flag = True
while flag:
    flag = False
    for i in range(len(X)):
        x = X[i, :].reshape(-1, 1)
        if np.dot(W.T, x) <= 0:      # misclassified: W^T x should be > 0
            W = W + x                # fixed-increment correction
            p2 = [(W[0] + 2 * W[1]) / W[2], (W[0] - 2 * W[1]) / W[2]]
            print(W)
            plt.plot(p1, p2)
            flag = True
plt.show()
The perceptron and logic operations
Logical AND
# Logical AND
import numpy as np
import matplotlib.pyplot as plt

X0 = np.array([[0, 0],
               [0, 1],
               [1, 0]])
X1 = np.array([[1, 1]])

# Augment with a -1 bias component
ones = -np.ones((X0.shape[0], 1))
X0 = np.hstack((ones, X0))
ones = -np.ones((X1.shape[0], 1))
X1 = np.hstack((ones, X1))

# Normalize: here X0 is the negative class, so multiply it by -1
X = np.vstack((-X0, X1))

W = np.ones((X.shape[1], 1))
flag = True
while flag:
    flag = False
    for i in range(len(X)):
        x = X[i, :].reshape(-1, 1)
        if np.dot(W.T, x) <= 0:
            W = W + x
            flag = True

plt.grid()
plt.scatter(X0[:, 1], X0[:, 2], c='r', marker='o', s=500)
plt.scatter(X1[:, 1], X1[:, 2], c='g', marker='*', s=500)
p1 = [0, 1]
p2 = [(W[0] - p1[0] * W[1]) / W[2], (W[0] - p1[1] * W[1]) / W[2]]
plt.plot(p1, p2)
plt.show()
print(W)
Logical OR
# Logical OR
import numpy as np
import matplotlib.pyplot as plt

X0 = np.array([[0, 0]])
X1 = np.array([[0, 1],
               [1, 0],
               [1, 1]])

# Augment with a -1 bias component
ones = -np.ones((X0.shape[0], 1))
X0 = np.hstack((ones, X0))
ones = -np.ones((X1.shape[0], 1))
X1 = np.hstack((ones, X1))

# Normalize: X0 is the negative class
X = np.vstack((-X0, X1))

W = np.ones((X.shape[1], 1))
flag = True
while flag:
    flag = False
    for i in range(len(X)):
        x = X[i, :].reshape(-1, 1)
        if np.dot(W.T, x) <= 0:
            W = W + x
            flag = True

plt.grid()
plt.scatter(X0[:, 1], X0[:, 2], c='r', marker='o', s=500)
plt.scatter(X1[:, 1], X1[:, 2], c='g', marker='*', s=500)
p1 = [0, 1]
p2 = [(W[0] - p1[0] * W[1]) / W[2], (W[0] - p1[1] * W[1]) / W[2]]
plt.plot(p1, p2)
plt.show()
print(W)
Logical NOR
# Logical NOR
import numpy as np
import matplotlib.pyplot as plt

X1 = np.array([[0, 0]])
X0 = np.array([[0, 1],
               [1, 0],
               [1, 1]])

# Augment with a -1 bias component
ones = -np.ones((X0.shape[0], 1))
X0 = np.hstack((ones, X0))
ones = -np.ones((X1.shape[0], 1))
X1 = np.hstack((ones, X1))

# Normalize: X0 is the negative class
X = np.vstack((-X0, X1))

W = np.ones((X.shape[1], 1))
flag = True
while flag:
    flag = False
    for i in range(len(X)):
        x = X[i, :].reshape(-1, 1)
        if np.dot(W.T, x) <= 0:
            W = W + x
            flag = True

plt.grid()
plt.scatter(X0[:, 1], X0[:, 2], c='r', marker='o', s=500)
plt.scatter(X1[:, 1], X1[:, 2], c='g', marker='*', s=500)
p1 = [0, 1]
p2 = [(W[0] - p1[0] * W[1]) / W[2], (W[0] - p1[1] * W[1]) / W[2]]
plt.plot(p1, p2)
plt.show()
print(W)
A single-layer perceptron cannot implement the XOR operation, but a two-layer network of neurons can; this requires inserting a hidden layer between the input layer and the output layer.
Logical XOR
# XOR: the four points are not linearly separable
import numpy as np
import matplotlib.pyplot as plt

X0 = np.array([[0, 0],
               [1, 1]])
X1 = np.array([[0, 1],
               [1, 0]])
plt.grid()
plt.scatter(X0[:, 0], X0[:, 1], c='r', marker='o', s=500)
plt.scatter(X1[:, 0], X1[:, 1], c='g', marker='*', s=500)
plt.show()
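To illustrate that a hidden layer does solve XOR, here is a hand-wired two-layer network (the weights are chosen by hand for this sketch, not learned): the hidden layer computes OR and NAND with two threshold units, and the output unit takes the AND of the two.

```python
import numpy as np

def step(z):
    # hard-threshold activation: fires (1) when the input exceeds 0
    return (z > 0).astype(int)

def xor_net(x1, x2):
    x = np.array([x1, x2])
    # hidden unit 1: OR   (fires when x1 + x2 > 0.5)
    h1 = step(x @ np.array([1.0, 1.0]) - 0.5)
    # hidden unit 2: NAND (fires when x1 + x2 < 1.5)
    h2 = step(-(x @ np.array([1.0, 1.0])) + 1.5)
    # output unit: AND of the two hidden units
    return step(h1 + h2 - 1.5)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor_net(a, b))  # prints 0, 1, 1, 0
```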
Multi-layer feedforward neural networks
The more common form of neural network consists of an input layer, one or more hidden layers, and an output layer, with full connections between neurons in adjacent layers. Such a network is called a feedforward neural network.
Training a neural network means adjusting the connection weights and thresholds of its neurons according to the training data; in other words, what a network learns is stored in its connection weights and thresholds.
Error backpropagation (the BP algorithm)
The BP algorithm is used to train multi-layer feedforward networks.
Suppose the input is a d-dimensional feature vector, there are l output values, and the hidden layer has q neurons. The number of parameters to learn is then d*q + q + q*l + l (this counts a network with a single hidden layer).
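The count can be checked directly against the shapes of the weight matrices and threshold vectors. A small sketch with illustrative values d = 3, q = 4, l = 2 (chosen arbitrarily for the example):

```python
import numpy as np

d, q, l = 3, 4, 2  # input dim, hidden units, output units

# input-to-hidden weights and hidden-layer thresholds
V = np.zeros((d, q))
gamma = np.zeros(q)
# hidden-to-output weights and output-layer thresholds
W = np.zeros((q, l))
theta = np.zeros(l)

n_params = V.size + gamma.size + W.size + theta.size
print(n_params)                # 3*4 + 4 + 4*2 + 2 = 26
print(d*q + q + q*l + l)       # same formula as in the text, also 26
```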
The BP algorithm updates the parameters by gradient descent.
Derivation of the BP algorithm
Fixed-increment vs. batch updates
Batch gradient descent: each update uses all of the training data; the error is computed from the forward-pass results over the entire training set.
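The difference between the two update schemes can be sketched on a simple squared-error linear model instead of a full network (the data and learning rate below are invented for illustration): batch descent makes one update per pass from the mean gradient, while the fixed-increment style updates once per sample.

```python
import numpy as np

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([[0.0], [1.0], [2.0]])   # true relation: y = 1.0 * x
lr = 0.1

# Batch gradient descent: one update per pass,
# gradient averaged over ALL training samples.
w = np.zeros((1, 1))
for _ in range(100):
    grad = X.T @ (X @ w - y) / len(X)
    w -= lr * grad

# Fixed-increment (per-sample) style: one update PER SAMPLE.
w_sgd = np.zeros((1, 1))
for _ in range(100):
    for xi, yi in zip(X, y):
        xi = xi.reshape(1, -1)
        w_sgd -= lr * xi.T @ (xi @ w_sgd - yi.reshape(1, -1))

print(w, w_sgd)  # both approach the true weight 1.0
```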
Output-layer activation functions
The soft-max activation function
For multi-class problems, the class labels need to be one-hot encoded.
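A minimal numpy sketch of soft-max and one-hot encoding (hand-rolled here rather than using a library helper): soft-max turns a score vector into a probability distribution, and one-hot encoding turns each class label into a vector with a single 1.

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability; output sums to 1
    e = np.exp(z - np.max(z))
    return e / e.sum()

def one_hot(labels, num_classes):
    # one row per label, with a single 1 in the column of its class
    out = np.zeros((len(labels), num_classes))
    out[np.arange(len(labels)), labels] = 1
    return out

print(softmax(np.array([2.0, 1.0, 0.1])))  # three probabilities summing to 1
print(one_hot([0, 2, 1], 3))
```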
Loss function
Code
# BP neural network solving XOR
import numpy as np

X = np.array([[-1, 0, 0], [-1, 0, 1], [-1, 1, 0], [-1, 1, 1]])  # -1 column is the bias input
Y = np.array([[0], [1], [1], [0]])

# Random initialization of input-to-hidden (V) and hidden-to-output (W) weights
V = (np.random.rand(3, 2) - 0.5) * 2 / np.sqrt(2)
W = (np.random.rand(3, 1) - 0.5) * 2 / np.sqrt(2)

i = 1
while i < 2000:
    B = 1 / (1 + np.exp(-np.dot(X, V)))      # hidden-layer outputs (sigmoid)
    ones = -np.ones((4, 1))
    B = np.hstack((B, ones))                 # append the bias input for the output layer
    Y_h = 1 / (1 + np.exp(-np.dot(B, W)))    # network output
    G = (Y - Y_h) * Y_h * (1 - Y_h)          # output-layer gradient term
    E = B * (1 - B) * np.dot(G, W.T)         # hidden-layer gradient term
    W = W + np.dot(B.T, G)                   # update hidden-to-output weights
    V = V + np.dot(X.T, E[:, :-1])           # update input-to-hidden weights
    i = i + 1
print(Y_h)
Global minima and local minima
- Local minimum: the error-function values of all points in a neighborhood are no smaller than the value at that point.
- Global minimum: the error-function values of all points in the parameter space are no smaller than the value at that point.
Strategies for escaping local minima:
- Start from several different initial parameter values
- Simulated annealing
- Random perturbation
- Genetic algorithms
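The first strategy can be sketched on a one-dimensional error surface with both a local and a global minimum (the function below is invented for illustration; its local minimum is near w ≈ 1 and its global minimum near w ≈ -1): run gradient descent from several random starting points and keep the best result.

```python
import numpy as np

def E(w):
    # a toy "error surface" with a local minimum near w = 1
    # and a deeper global minimum near w = -1
    return w**4 - 2*w**2 + 0.3*w

def grad_descent(w, lr=0.01, steps=2000):
    for _ in range(steps):
        g = 4*w**3 - 4*w + 0.3   # dE/dw
        w -= lr * g
    return w

# run gradient descent from several random initial values, keep the best
rng = np.random.default_rng(0)
starts = rng.uniform(-2, 2, size=10)
results = [grad_descent(w0) for w0 in starts]
best = min(results, key=E)
print(best, E(best))  # the deepest minimum found across the restarts
```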
Keras
Building a model in Keras involves four steps:
- Define the model
- Compile the model
- Fit (train) the model
- Make predictions with the model
Code
Binary classification with a Keras MLP
import pandas as pd
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense

# Load the data
filename = 'pima_data.csv'
names = ['preg', 'plas', 'blood', 'skin', 'insulin', 'bmi', 'pedi', 'age', 'class']
data = pd.read_csv(filename, names=names)
array = data.values
X = array[:, :-1]
Y = array[:, -1]
test_size = 0.3
seed = 4
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=test_size, random_state=seed)

# Build the network
model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, Y_train, epochs=10, batch_size=20, verbose=True)
score = model.evaluate(X_test, Y_test, verbose=False)
print('accuracy: %.2f%%' % (score[1] * 100))
Multi-class classification with a Keras MLP
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_iris
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils

# Load the data
dataset = load_iris()
X = dataset.data
Y = dataset.target
Y = np_utils.to_categorical(Y)   # one-hot encode the labels
test_size = 0.3
seed = 4
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=test_size, random_state=seed)

# Build the network
model = Sequential()
model.add(Dense(4, input_dim=4, activation='relu'))
model.add(Dense(6, activation='relu'))
model.add(Dense(3, activation='softmax'))
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X_train, Y_train, epochs=150, batch_size=10, verbose=True)
score = model.evaluate(X_test, Y_test)
print('loss: %.2f, acc: %.2f%%' % (score[0], score[1] * 100))

# Visualize the training history
print(history.history.keys())
plt.plot(history.history['acc'])
plt.title('model accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend(['train'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.title('model loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(['train'], loc='upper left')
plt.show()

from keras.models import model_from_json

# Save the model architecture to a JSON file and the weights to HDF5
model_json = model.to_json()
with open('model.json', 'w') as file:
    file.write(model_json)
model.save_weights('model.json.h5')

# Read the architecture back from the JSON file
with open('model.json', 'r') as file:
    model_json = file.read()

# Rebuild the model and reload its weights
new_model = model_from_json(model_json)
new_model.load_weights('model.json.h5')
new_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
score = new_model.evaluate(X_test, Y_test)
print('loss: %.2f, acc: %.2f%%' % (score[0], score[1] * 100))
Regression analysis with a Keras neural network
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_boston
from sklearn.preprocessing import StandardScaler
from keras.models import Sequential
from keras.layers import Dense

# Load the data
dataset = load_boston()
X = dataset.data
Y = dataset.target

# Standardize the features (note: transform returns a new array,
# so its result must be assigned back)
scaler = StandardScaler()
scaler.fit(X)
X = scaler.transform(X)

seed = 7
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, random_state=seed)

# Build the network
model = Sequential()
model.add(Dense(13, input_dim=13, activation='relu'))
model.add(Dense(1, activation='linear'))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X_train, Y_train, epochs=150, batch_size=10)
score = model.evaluate(X_test, Y_test)
print(score)
Handwritten digit recognition with an MLP
from keras.datasets import mnist
import matplotlib.pyplot as plt
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense

(X_train, y_train), (X_test, y_test) = mnist.load_data()

# Show a few sample images
plt.subplot(221)
plt.imshow(X_train[0], cmap=plt.get_cmap('gray'))
plt.subplot(222)
plt.imshow(X_train[1], cmap=plt.get_cmap('gray'))
plt.subplot(223)
plt.imshow(X_train[2], cmap=plt.get_cmap('gray'))
plt.subplot(224)
plt.imshow(X_train[3], cmap=plt.get_cmap('gray'))
plt.show()

# Flatten each 2-D image into a 1-D vector
num_pixels = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(-1, num_pixels).astype('float32')
X_test = X_test.reshape(-1, num_pixels).astype('float32')

# Scale pixel values to the range 0-1
X_train = X_train / 255
X_test = X_test / 255

# One-hot encode the labels
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]

# Build the model
model = Sequential()
model.add(Dense(784, input_dim=num_pixels, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=2, batch_size=200)
scores = model.evaluate(X_test, y_test)
print('acc: %.2f%%' % (scores[1] * 100))