
Writing a Neural Network by Hand (with a Step-by-Step Breakdown of the Backpropagation Algorithm Code)

Let's start with the code from Michal Daniel (link). The Network class has six member methods: SGD, update_mini_batch, and backprop compute, for every epoch, the residuals, the partial derivatives with respect to W and b, and the updates of W and b; feedforward and evaluate compute the forward-pass outputs and can be used to measure the error on the training and validation sets after each epoch; cost_derivative computes the residual of the network's last layer.

#### Libraries
# Standard library
import random
# Third-party libraries
import numpy as np

class Network(object):
    def __init__(self, sizes):
        self.num_layers = len(sizes)
        self.sizes = sizes

        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        self.weights = [np.random.randn(y, x)
                        for x, y in zip(sizes[:-1], sizes[1:])]
    def feedforward(self, a):
        """Return the output of the network if ``a`` is input."""
        for b, w in zip(self.biases, self.weights):
            a = sigmoid(np.dot(w, a)+b)
        return a

    def SGD(self, training_data, epochs, mini_batch_size, eta,
            test_data=None):
        if test_data: n_test = len(test_data)
        n = len(training_data)
        for j in range(epochs):
            random.shuffle(training_data)
            mini_batches = [
                training_data[k:k+mini_batch_size]
                for k in range(0, n, mini_batch_size)]
            for mini_batch in mini_batches:
                self.update_mini_batch(mini_batch, eta)
            if test_data:
                print("Epoch {0}: {1} / {2}".format(
                    j, self.evaluate(test_data), n_test))
            else:
                print("Epoch {0} complete".format(j))

    def update_mini_batch(self, mini_batch, eta):
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        for x, y in mini_batch:
            delta_nabla_b, delta_nabla_w = self.backprop(x, y)
            nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
            nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
        self.weights = [w-(eta/len(mini_batch))*nw
                        for w, nw in zip(self.weights, nabla_w)]
        self.biases = [b-(eta/len(mini_batch))*nb
                       for b, nb in zip(self.biases, nabla_b)]

    def backprop(self, x, y): 
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        # feedforward
        activation = x
        activations = [x] # list to store all the activations, layer by layer
        zs = [] # list to store all the z vectors, layer by layer
        for b, w in zip(self.biases, self.weights):  # feedforward pass, keeping the intermediate results of every layer
            z = np.dot(w, activation)+b
            zs.append(z)  # zs stores the weighted input z of every layer
            activation = sigmoid(z)
            activations.append(activation) 

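        # backward pass: compute the residual (delta) of the output layer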
        delta = self.cost_derivative(activations[-1], y) * \
            sigmoid_prime(zs[-1])
        nabla_b[-1] = delta  
        nabla_w[-1] = np.dot(delta, activations[-2].transpose())   

        for l in range(2, self.num_layers):
            z = zs[-l]
            sp = sigmoid_prime(z)
            delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
            nabla_b[-l] = delta
            nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())  # note: the letter l here, not the digit 1
        return (nabla_b, nabla_w)

    def evaluate(self, test_data):
        test_results = [(np.argmax(self.feedforward(x)), y)
                        for (x, y) in test_data]
        # print test_results
        return sum(int(x == y) for (x, y) in test_results)
    # derivative of the cost function
    def cost_derivative(self, output_activations, y):
        return (output_activations-y)

#### Miscellaneous functions
def sigmoid(z):
    """The sigmoid function."""
    return 1.0/(1.0+np.exp(-z))

def sigmoid_prime(z):
    """Derivative of the sigmoid function."""
    return sigmoid(z)*(1-sigmoid(z))
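
Before breaking the algorithm down step by step, here is a minimal usage sketch. The layer sizes, the synthetic data, and the hyperparameter values below are assumptions made up for illustration; the original post does not include a data-loading step.

# Minimal usage sketch (illustrative assumptions: layer sizes, synthetic data, hyperparameters).
net = Network([2, 3, 2])   # 2 inputs, one hidden layer of 3 units, 2 output classes

# training_data: list of (x, y) pairs, x of shape (2, 1), y one-hot of shape (2, 1)
training_data = []
for _ in range(200):
    x = np.random.randn(2, 1)
    label = int(x.sum() > 0)          # toy rule: class 1 if the features sum to a positive number
    y = np.zeros((2, 1))
    y[label] = 1.0
    training_data.append((x, y))

# test_data: list of (x, y) pairs where y is an integer label, as expected by evaluate()
test_data = []
for _ in range(50):
    x = np.random.randn(2, 1)
    test_data.append((x, int(x.sum() > 0)))

net.SGD(training_data, epochs=5, mini_batch_size=10, eta=3.0, test_data=test_data)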

Step-by-step breakdown:

First, the parameters to pass in are the number of layers and the number of neurons in each layer (the sizes argument).

Initialize the weights W and biases b from these parameters. Note that the initial values must be random, for example drawn from a normal distribution N(0, ε²). If everything is initialized to zero, every hidden unit ends up computing the same function of the input; the purpose of random initialization is to break this symmetry.
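
A quick sketch of the symmetry problem (the toy sizes and numbers below are illustrative assumptions): with an all-zero initialization, every hidden unit produces the same activation and, by the same argument, receives the same gradient, so the units can never learn different features.

# Uses np and sigmoid() from the code above. Toy sizes: 4 inputs, 3 hidden units.
x = np.random.randn(4, 1)

W1 = np.zeros((3, 4))                # all-zero initialization
b1 = np.zeros((3, 1))
print(sigmoid(np.dot(W1, x) + b1).ravel())   # every hidden unit outputs 0.5 -- fully symmetric

W1 = np.random.randn(3, 4)           # random initialization breaks the symmetry
print(sigmoid(np.dot(W1, x) + b1).ravel())   # each hidden unit now responds differently
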
Before each epoch, the input data X is reshuffled and then split into batches of size mini_batch_size; W and b are updated with each batch in turn. An epoch is complete once every batch has been used for training; you can then check what the current W and b predict and compare against the true labels, before moving on to the next epoch.

Breakdown of the backpropagation steps; each formula can be matched to the corresponding code:

1. Perform the feedforward pass: use the forward-propagation formula to compute the activations of layers L1, L2, ... up to L_{n_l}. This is essentially what the feedforward function does, except that here we keep the intermediate results of the hidden layers so that the residuals and partial derivatives can be computed later.

$z^{(l)} = W^{(l)} a^{(l-1)} + b^{(l)}, \qquad a^{(l)} = \sigma\left(z^{(l)}\right)$
def backprop(self, x, y):
    # ... (remaining code omitted)
    activation = x 
    activations = [x] # list to store all the activations, layer by layer
    zs = [] # list to store all the z vectors, layer by layer
    for b, w in zip(self.biases, self.weights):
        z = np.dot(w, activation)+b
        zs.append(z)  # store the weighted input z of every layer for later use
        activation = sigmoid(z)
        activations.append(activation)

zs stores the weighted inputs z of the neurons in each layer, and activations stores the outputs of each layer after the activation function is applied.
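
As a concrete sketch of what these two lists contain (the [2, 3, 2] layer sizes are an assumption for illustration), the same loop can be run outside the class and the shapes inspected:

net = Network([2, 3, 2])
x = np.random.randn(2, 1)
activation, activations, zs = x, [x], []
for b, w in zip(net.biases, net.weights):
    z = np.dot(w, activation) + b
    zs.append(z)
    activation = sigmoid(z)
    activations.append(activation)
print([z.shape for z in zs])            # [(3, 1), (2, 1)] -- weighted inputs of hidden and output layers
print([a.shape for a in activations])   # [(2, 1), (3, 1), (2, 1)] -- the input plus each layer's output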

2. For the output layer (layer $n_l$), the residual is computed from the difference between the activation and the true value:

$\delta^{(n_l)} = -(y - a^{(n_l)}) \odot f'(z^{(n_l)}) = (a^{(n_l)} - y) \odot f'(z^{(n_l)})$
def backprop(self, x, y):
    # ... (remaining code omitted)
    delta = self.cost_derivative(activations[-1], y) * \
                sigmoid_prime(zs[-1])
    # residual of the last layer
    # nabla_b[-1] = delta  
    # nabla_w[-1] = np.dot(delta, activations[-2].transpose())
def cost_derivative(self, output_activations, y):
    return (output_activations-y)
def sigmoid_prime(z):
    """Derivative of the sigmoid function."""
    return sigmoid(z)*(1-sigmoid(z))
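
As a tiny numeric illustration (the values are invented): for a single output unit with activation $a^{(n_l)} = 0.8$ and target $y = 1$, the derivative of the sigmoid is $\sigma'(z^{(n_l)}) = a^{(n_l)}(1 - a^{(n_l)}) = 0.16$, so the residual is $\delta^{(n_l)} = (0.8 - 1) \times 0.16 = -0.032$.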

3. For each of the layers $l = n_l - 1, n_l - 2, \ldots, 2$, compute the residuals. This is the hardest step to understand: the residual of layer $l$ is obtained by weighting the residual of layer $l+1$ with the weights $W$ of layer $l$. The formula is:

$\delta^{(l)} = \left( (W^{(l)})^{T} \delta^{(l+1)} \right) \odot f'(z^{(l)})$

def backprop(self, x, y):
    # ... (remaining code omitted)
    # in the code, the index -l refers to the l-th layer counted from the end
    for l in range(2, self.num_layers):
        z = zs[-l]
        sp = sigmoid_prime(z)
        delta = np.dot(self.weights[-l+1].transpose(), delta) * sp
        # nabla_b[-l] = delta
        # nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())
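
A quick shape check can make this recursion easier to follow (the [2, 3, 2] layer sizes below are an illustrative assumption): weights[-l+1] maps layer -l to layer -l+1, so its transpose pulls the residual of layer -l+1 back to the shape of layer -l, where it is multiplied elementwise by sigmoid_prime(zs[-l]).

net = Network([2, 3, 2])
delta = np.random.randn(2, 1)                       # stand-in for the output-layer residual
back = np.dot(net.weights[-1].transpose(), delta)   # residual pulled back to the hidden layer
print(net.weights[-1].shape, delta.shape, back.shape)   # (2, 3) (2, 1) (3, 1)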

4. Compute the partial derivatives of the cost with respect to W and b for each layer:

$\nabla_{W^{(l)}} J(W, b; x, y) = \delta^{(l+1)} \left( a^{(l)} \right)^{T}$

$\nabla_{b^{(l)}} J(W, b; x, y) = \delta^{(l+1)}$

def backprop(self, x, y):
    # ... (remaining code omitted)
    for l in range(2, self.num_layers):
        # z = zs[-l]    
        # sp = sigmoid_prime(z)    
        # delta = np.dot(self.weights[-l+1].transpose(), delta) * sp    
        nabla_b[-l] = delta    
        nabla_w[-l] = np.dot(delta, activations[-l-1].transpose())
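
The outer product np.dot(delta, activations[-l-1].transpose()) is the matrix form of this derivative written component by component. Each weight $W^{(l)}_{ij}$ connects unit $j$ of layer $l$ to unit $i$ of layer $l+1$, so

$\frac{\partial}{\partial W^{(l)}_{ij}} J(W, b; x, y) = a^{(l)}_{j} \, \delta^{(l+1)}_{i} \quad\Longrightarrow\quad \nabla_{W^{(l)}} J = \delta^{(l+1)} \left( a^{(l)} \right)^{T}$

which has the same shape as $W^{(l)}$. This is why nabla_w[-l] pairs delta with activations[-l-1], the activations of the layer in front of the one whose residual delta currently holds.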

5. For mini-batch gradient descent, accumulate the gradients over the samples $i = 1$ to $m$:

$\Delta W^{(l)} := \Delta W^{(l)} + \nabla_{W^{(l)}} J(W, b; x, y)$
$\Delta b^{(l)} := \Delta b^{(l)} + \nabla_{b^{(l)}} J(W, b; x, y)$
def update_mini_batch(self, mini_batch, eta):
    nabla_b = [np.zeros(b.shape) for b in self.biases]
    nabla_w = [np.zeros(w.shape) for w in self.weights]
    for x, y in mini_batch:
        delta_nabla_b, delta_nabla_w = self.backprop(x, y)
        nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
        nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
    # self.weights = [w-(eta/len(mini_batch))*nw
    #                 for w, nw in zip(self.weights, nabla_w)]
    # self.biases = [b-(eta/len(mini_batch))*nb
    #                for b, nb in zip(self.biases, nabla_b)]

6. Update the weights and biases:

$W^{(l)} = W^{(l)} - \alpha \left[ \frac{1}{m} \Delta W^{(l)} \right]$
$b^{(l)} = b^{(l)} - \alpha \left[ \frac{1}{m} \Delta b^{(l)} \right]$
def update_mini_batch(self, mini_batch, eta):
    # nabla_b = [np.zeros(b.shape) for b in self.biases]
    # nabla_w = [np.zeros(w.shape) for w in self.weights]
    # for x, y in mini_batch:
        # delta_nabla_b, delta_nabla_w = self.backprop(x, y)
        # nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
        # nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
    self.weights = [w-(eta/len(mini_batch))*nw
                   for w, nw in zip(self.weights, nabla_w)]
    self.biases = [b-(eta/len(mini_batch))*nb
                   for b, nb in zip(self.biases, nabla_b)]

In the code, the learning rate $\alpha$ is eta, the batch size $m$ is len(mini_batch), and the accumulated gradients $\Delta W^{(l)}$ and $\Delta b^{(l)}$ are nabla_w and nabla_b, so the two list comprehensions above implement exactly these update formulas. Repeat the gradient-descent iterations to keep reducing the value of the cost function J(W, b).

Possible improvements

Improved weight initialization:

Initialize the weights W with random values whose scale shrinks with the number of incoming connections — for example uniform samples from a small interval, or a Gaussian scaled by $1/\sqrt{n_{in}}$ as in the code below. For a detailed explanation see http://blog.csdn.net/xbinworld/article/details/50603552 and http://neuralnetworksanddeeplearning.com/chap3.html#weight_initialization

self.weights = [np.random.randn(y, x)/np.sqrt(x)
                for x, y in zip(self.sizes[:-1], self.sizes[1:])]
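
A small sketch of why the $1/\sqrt{n_{in}}$ scaling helps (the 1000-input figure and the standard-normal input are illustrative assumptions): without scaling, the weighted input z of a neuron with many inputs has a standard deviation of roughly $\sqrt{n_{in}}$, so the sigmoid saturates and its gradient becomes tiny; with scaling, z stays on the order of 1.

n_in = 1000
x = np.random.randn(n_in, 1)
w_plain  = np.random.randn(1, n_in)                  # unscaled initialization
w_scaled = np.random.randn(1, n_in) / np.sqrt(n_in)  # scaled initialization
z_plain  = np.dot(w_plain, x).item()    # typically tens in magnitude: the sigmoid saturates
z_scaled = np.dot(w_scaled, x).item()   # typically around 1 in magnitude: the sigmoid stays responsive
print(z_plain, z_scaled)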

Adding a regularization term

$W^{(l)} = W^{(l)} - \alpha \left[ \left( \frac{1}{m} \Delta W^{(l)} \right) + \lambda W^{(l)} \right]$
def update_mini_batch(self, mini_batch, eta, lmbda, n):
    """``lmbda`` is the regularization parameter, and
        ``n`` is the total size of the training data set.
    """
    # ... (remaining code omitted)
    self.weights = [(1-eta*(lmbda/n))*w-(eta/len(mini_batch))*nw
                        for w, nw in zip(self.weights, nabla_w)]
    # self.biases = [b-(eta/len(mini_batch))*nb
    #                    for b, nb in zip(self.biases, nabla_b)]
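
The (1-eta*(lmbda/n)) factor comes from adding an L2 penalty to the cost. A short derivation, assuming the convention used in the chapter linked in the weight-initialization note above, where the penalty is scaled by the training-set size n (a slightly different convention from the formula above):

$J_{reg} = J_{0} + \frac{\lambda}{2n} \sum_{w} w^{2} \quad\Longrightarrow\quad \frac{\partial J_{reg}}{\partial w} = \frac{\partial J_{0}}{\partial w} + \frac{\lambda}{n} w$

so one gradient step becomes

$w \leftarrow w - \eta \left( \frac{\partial J_{0}}{\partial w} + \frac{\lambda}{n} w \right) = \left( 1 - \frac{\eta \lambda}{n} \right) w - \frac{\eta}{m} \Delta w$

where $\frac{1}{m}\Delta w$ is the mini-batch estimate of $\frac{\partial J_{0}}{\partial w}$. The biases are left unregularized, which is why only self.weights gets the decay factor.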

Use a validation set to choose the best hyperparameters
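
A minimal sketch of what this could look like (the candidate learning rates, the layer sizes, and the validation_data list are assumptions; validation_data is expected in the same format as test_data, and training_data is reused from the usage sketch near the top):

best_eta, best_score = None, -1
for eta in [0.5, 1.0, 3.0, 5.0]:           # candidate learning rates (illustrative)
    net = Network([2, 3, 2])               # fresh network for every candidate
    net.SGD(training_data, epochs=5, mini_batch_size=10, eta=eta)
    score = net.evaluate(validation_data)  # number of correctly classified validation examples
    if score > best_score:
        best_eta, best_score = eta, score
print("best eta:", best_eta, "validation score:", best_score, "/", len(validation_data))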

Quadratic cost (squared-error loss function)
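
The cost used implicitly throughout the code above is the quadratic cost: cost_derivative returns exactly its gradient with respect to the output activations. A minimal sketch (the function name quadratic_cost is mine, not from the original code):

def quadratic_cost(output_activations, y):
    """C = 0.5 * ||a - y||^2 for a single training example."""
    return 0.5 * np.linalg.norm(output_activations - y) ** 2

# Its gradient with respect to the output activations is (a - y),
# which is exactly what Network.cost_derivative returns.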