Deep Learning: Linear Regression (PyTorch)


Linear Regression

  • Model
    $$y = wx + b$$

  • Loss function
    $$\ell(w_1, w_2, b) = \frac{1}{n} \sum_{i=1}^n \ell^{(i)}(w_1, w_2, b) = \frac{1}{n} \sum_{i=1}^n \frac{1}{2}\left(x_1^{(i)} w_1 + x_2^{(i)} w_2 + b - y^{(i)}\right)^2.$$

  • Optimization algorithm (mini-batch stochastic gradient descent; see the sketch after this list)
    $$\begin{aligned} w_1 &\leftarrow w_1 - \frac{\eta}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} \frac{\partial \ell^{(i)}(w_1, w_2, b)}{\partial w_1} = w_1 - \frac{\eta}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} x_1^{(i)} \left(x_1^{(i)} w_1 + x_2^{(i)} w_2 + b - y^{(i)}\right),\\ w_2 &\leftarrow w_2 - \frac{\eta}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} \frac{\partial \ell^{(i)}(w_1, w_2, b)}{\partial w_2} = w_2 - \frac{\eta}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} x_2^{(i)} \left(x_1^{(i)} w_1 + x_2^{(i)} w_2 + b - y^{(i)}\right),\\ b &\leftarrow b - \frac{\eta}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} \frac{\partial \ell^{(i)}(w_1, w_2, b)}{\partial b} = b - \frac{\eta}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} \left(x_1^{(i)} w_1 + x_2^{(i)} w_2 + b - y^{(i)}\right). \end{aligned}$$

  • Neural network diagram: a single-layer neural network (figure not reproduced here)
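
To make the update rules concrete, here is a minimal from-scratch sketch of a single mini-batch SGD step implementing the three equations above (the function name sgd_step and the explicit gradient expressions are illustrative, not part of the original post):

import torch

def sgd_step(w, b, X, y, eta):
    """One mini-batch SGD update for weights w (shape [2]) and bias b on batch (X, y)."""
    batch_size = X.shape[0]                 # |B|
    err = X @ w + b - y                     # x1*w1 + x2*w2 + b - y, per sample
    w -= eta / batch_size * (X.t() @ err)   # sums x_j^(i) * err^(i) over the batch
    b -= eta / batch_size * err.sum()
    return w, b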

PyTorch Implementation of Linear Regression

Generating the Dataset

import torch

num_inputs = 2
num_examples = 1000
true_w = [2, -3.4]
true_b = 4.2
# Features drawn from a standard normal distribution
features = torch.randn(num_examples, num_inputs)
# Labels follow the true linear model, plus Gaussian noise with std 0.01
labels = true_w[0] * features[:, 0] + true_w[1] * features[:, 1] + true_b
labels += torch.normal(mean=torch.zeros(labels.shape), std=0.01)

Reading the Data

from torch.utils import data as tdata

batch_size = 10
# Combine the features and labels of the training data
dataset = tdata.TensorDataset(features, labels)
# Read mini-batches in random order
data_iter = tdata.DataLoader(dataset, batch_size, shuffle=True)

for X, y in data_iter:
    print(X, y)
    break
tensor([[ 0.2115,  1.4861],
        [-0.2630,  0.8898],
        [ 0.8301, -2.6101],
        [ 1.5199, -0.5050],
        [-0.4478,  0.6990],
        [ 1.4203,  1.1574],
        [ 1.3185,  1.1949],
        [ 2.0129,  0.8379],
        [ 1.1585, -0.1882],
        [ 0.9050,  0.0398]]) tensor([-0.4229,  0.6551, 14.7292,  8.9412,  0.9225,  3.1058,  2.7778,  5.3856,
         7.1542,  5.8629])

Defining the Model

First import the nn module. "nn" is short for neural networks; as the name suggests, this module defines a large number of neural network layers. We first define a model variable net, which is a Sequential instance. In nn, a Sequential instance can be viewed as a container that chains layers together: when constructing the model, we add layers to this container in order, and given input data, each layer computes its output in turn and passes it on as the input to the next layer.

from torch import nn

net = nn.Sequential()
# The fully connected layer is a linear layer with 2 input features and 1 output
net.add_module('linear', nn.Linear(2, 1))
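
As a side note, the same single-layer network can equivalently be built by passing the layers to Sequential directly, either positionally or via an OrderedDict (which preserves the layer name, just like add_module):

from collections import OrderedDict

net_positional = nn.Sequential(nn.Linear(2, 1))  # layer accessible as net_positional[0]
net_named = nn.Sequential(OrderedDict([('linear', nn.Linear(2, 1))]))  # accessible as net_named.linear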

Initializing Model Parameters

Here we initialize the weights and bias of the linear regression model using the nn.init module. init.normal_(tensor, std=0.01) fills the tensor with values sampled from a normal distribution with mean 0 and standard deviation 0.01, and init.constant_ sets the bias to 0.

from torch.nn import init

def params_init(model):
    # apply() visits every submodule; only initialize the Linear layers
    if isinstance(model, nn.Linear):
        init.normal_(tensor=model.weight.data, std=0.01)  # weights ~ N(0, 0.01^2)
        init.constant_(tensor=model.bias.data, val=0)     # bias set to 0
        
net.apply(params_init)
Sequential(
  (linear): Linear(in_features=2, out_features=1, bias=True)
)
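
As an optional sanity check, one can print the parameters to confirm the initialization took effect:

print(net.linear.weight)  # small values drawn from N(0, 0.01^2)
print(net.linear.bias)    # zeros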

Defining the Loss Function and Optimization Algorithm

loss = nn.MSELoss()  # mean squared error loss

from torch import optim

optimizer = optim.SGD(net.parameters(), lr=0.03)  # lr is the learning rate
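
Note that nn.MSELoss (with its default reduction='mean') computes the plain mean of squared errors and does not include the 1/2 factor from the loss formula above; this only rescales the effective learning rate. A small check of the equivalence (the tensors here are made up for the example):

y_hat = torch.tensor([[1.0], [2.0]])
y_obs = torch.tensor([[0.5], [2.5]])
assert torch.isclose(loss(y_hat, y_obs), ((y_hat - y_obs) ** 2).mean())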

Training the Model

num_epochs = 3  # number of training epochs
for epoch in range(1, num_epochs + 1):
    for X, y in data_iter:
        net.zero_grad()  # clear gradients accumulated by the previous step
        l = loss(net(X), y.reshape(-1, 1))  # reshape y into a column to match the output shape
        l.backward()
        optimizer.step()

    with torch.no_grad():  # evaluate on the full dataset without tracking gradients
        l = loss(net(features), labels.reshape(num_examples, -1))
        print('epoch %d, loss: %f' % (epoch, l.item()))

epoch 1, loss: 0.000241
epoch 2, loss: 0.000099
epoch 3, loss: 0.000099

# Compare the learned parameters with the true ones
linear = net[0]
true_w, linear.weight.data
([2, -3.4], tensor([[ 2.0001, -3.4001]]))
true_b, linear.bias.data
(4.2, tensor([4.2001]))
net
Sequential(
  (linear): Linear(in_features=2, out_features=1, bias=True)
)