
Morvan PyTorch Study Notes 1


0. Introduction

PyTorch was developed by the team behind Torch7; as the name suggests, it differs from Torch in using Python as its development language. The slogan "Python first" likewise marks it as a Python-first deep learning framework: it provides powerful GPU acceleration and also supports dynamic neural networks, something many mainstream frameworks such as TensorFlow did not support at the time these notes were written.

PyTorch can be viewed either as numpy with GPU support or as a powerful deep-neural-network framework with automatic differentiation. Besides Facebook, it has been adopted by Twitter, CMU, Salesforce, and other organizations.
Advantages of PyTorch:
1. GPU support;
2. Dynamic neural networks (see the sketch after this list);
3. Python first;
4. An imperative programming experience;
5. Easy extensibility.
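
Point 2 deserves a concrete illustration. A minimal sketch (my own addition, using the modern requires_grad API rather than the Variable wrapper used later in these notes): the computation graph is rebuilt on every forward pass, so ordinary Python control flow can decide what the network computes.

import torch

x = torch.randn(4, requires_grad=True)
# ordinary Python control flow inside the computation;
# the graph is built on the fly and can differ per input
if x.sum() > 0:
    y = (x * 2).sum()
else:
    y = (x ** 2).sum()
y.backward()     # gradients flow through whichever branch actually ran
print(x.grad)    # all 2s, or 2*x, depending on the branch taken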

1. Downloading PyTorch

[Screenshots from the original post of the pytorch.org install selector are omitted.]
On the official download page, pick the configuration that matches your system (OS, package manager, Python version, CUDA version) and run the suggested install command.
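
Once installed, a quick sanity check from Python (a minimal sketch; whether CUDA is available depends on the build you chose):

import torch

print(torch.__version__)           # installed PyTorch version
print(torch.cuda.is_available())   # True only with a GPU build and working driver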

2. Common Functions

2.1 Differences between numpy and torch

import torch
import numpy as np

np_data = np.arange(6).reshape((2, 3))
torch_data = torch.from_numpy(np_data)   # numpy array -> torch tensor (shares memory)
tensor2array = torch_data.numpy()        # torch tensor -> numpy array
print(
    '\nnumpy array:', np_data,          # [[0 1 2], [3 4 5]]
    '\ntorch tensor:', torch_data,      #  0  1  2 \n 3  4  5    [torch.LongTensor of size 2x3]
    '\ntensor to array:', tensor2array, # [[0 1 2], [3 4 5]]
)
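
Beyond converting back and forth, most numpy operations have torch counterparts with similar (but not always identical) names. A small comparison sketch, my own addition in the spirit of the tutorial:

import torch
import numpy as np

data = [[1, 2], [3, 4]]
tensor = torch.FloatTensor(data)       # 32-bit float tensor

# element-wise functions look the same
print(np.abs(np.array(data) * -1))     # [[1 2], [3 4]]
print(torch.abs(tensor * -1))          # same values, as a tensor

# matrix multiplication: np.matmul vs torch.mm
print(np.matmul(data, data))           # [[ 7 10], [15 22]]
print(torch.mm(tensor, tensor))        # [[ 7. 10.], [15. 22.]]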

2.2 Tensor and Variable

A Variable can take part in backpropagation; a bare tensor cannot. (Note: since PyTorch 0.4, Variable has been merged into Tensor, and a tensor created with requires_grad=True plays the same role; a sketch of the modern equivalent follows this section's code.)

import torch
from torch.autograd import Variable # torch 中 Variable 模块

# first create the tensor (the "egg")
tensor = torch.FloatTensor([[1,2],[3,4]])
# put the egg into the basket; requires_grad controls whether this node
# takes part in backpropagation, i.e. whether gradients are computed for it
variable = Variable(tensor, requires_grad=True)

print(tensor)
"""
 1  2
 3  4
[torch.FloatTensor of size 2x2]
"""

print(variable)
"""
Variable containing:
 1  2
 3  4
[torch.FloatTensor of size 2x2]
"""

t_out = torch.mean(tensor*tensor)       # mean of x^2 over all elements
v_out = torch.mean(variable*variable)   # same computation, but tracked for autograd
v_out.backward()    # backpropagation from v_out
# v_out = (1/4) * sum(x_i^2), so d(v_out)/dx_i = x_i/2
print(variable.grad)
"""
 0.5000  1.0000
 1.5000  2.0000
"""

print(variable)               # the Variable itself
"""
Variable containing:
 1  2
 3  4
"""

print(variable.data)          # the underlying tensor
"""
 1  2
 3  4
"""

print(variable.data.numpy())  # converted to a numpy array
"""
array([[ 1.,  2.],
       [ 3.,  4.]], dtype=float32)
"""

2.3 Activation Functions


import torch
import torch.nn.functional as F
from torch.autograd import Variable
import matplotlib.pyplot as plt

x = torch.linspace(-5, 5, 200)  # x data (tensor), shape=(200,)
x = Variable(x)
x_np = x.data.numpy()   # numpy array for plotting

y_relu = F.relu(x).data.numpy()
y_sigmoid = torch.sigmoid(x).data.numpy()
y_tanh = torch.tanh(x).data.numpy()      # F.tanh is deprecated in favor of torch.tanh
y_softplus = F.softplus(x).data.numpy()
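
The notes import matplotlib above but stop before plotting. A minimal sketch to visualize the four activation curves, continuing from the arrays just computed:

plt.figure(figsize=(8, 6))
plt.subplot(221)
plt.plot(x_np, y_relu, c='red', label='relu')
plt.legend(loc='best')
plt.subplot(222)
plt.plot(x_np, y_sigmoid, c='red', label='sigmoid')
plt.legend(loc='best')
plt.subplot(223)
plt.plot(x_np, y_tanh, c='red', label='tanh')
plt.legend(loc='best')
plt.subplot(224)
plt.plot(x_np, y_softplus, c='red', label='softplus')
plt.legend(loc='best')
plt.show()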

2.4 A Regression Model Example

import torch
import torch.nn.functional as F
torch.manual_seed(1)    # reproducible

x = torch.unsqueeze(torch.linspace(-1, 1, 100), dim=1)  # x data (tensor), shape=(100, 1)
y = x.pow(2) + 0.2*torch.rand(x.size())                 # noisy y data (tensor), shape=(100, 1)

# define the network structure: 1 -> 10 -> 1 (one input, ten hidden units, one output)

class Net(torch.nn.Module):
    def __init__(self, n_feature, n_hidden, n_output):
        super(Net, self).__init__()
        self.hidden = torch.nn.Linear(n_feature, n_hidden)   # hidden layer
        self.predict = torch.nn.Linear(n_hidden, n_output)   # output layer

    def forward(self, x):
        x = F.relu(self.hidden(x))      # activation function for hidden layer
        x = self.predict(x)             # linear output
        return x
net = Net(n_feature=1, n_hidden=10, n_output=1)     # define the network


optimizer = torch.optim.SGD(net.parameters(), lr=0.2)
loss_func = torch.nn.MSELoss()  # this is for regression mean squared loss
for t in range(100):
    prediction = net(x)     # input x and predict based on x

    loss = loss_func(prediction, y)     # must be (1. nn output, 2. target)

    optimizer.zero_grad()   # clear gradients for next train
    loss.backward()         # backpropagation, compute gradients
    optimizer.step()        # apply gradients
    print(loss.item())      # .item() extracts the scalar loss value

[Figure from the original post: the fitted curve plotted against the noisy training data.]
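
The figure can be reproduced with a short matplotlib sketch (my addition; assumes the training loop above has just finished, so prediction holds the final outputs):

import matplotlib.pyplot as plt

plt.scatter(x.data.numpy(), y.data.numpy(), label='noisy data')
plt.plot(x.data.numpy(), prediction.data.numpy(), 'r-', lw=3, label='fitted curve')
plt.legend(loc='best')
plt.show()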

References:
1. https://www.bilibili.com/video/av15997678/
2. https://github.com/MorvanZhou/PyTorch-Tutorial/blob/master/tutorial-contents-notebooks/