
Building a Deep Neural Network (PyTorch Step-by-Step Notes)

PyTorch Notes
Building a deep neural network (a simple fully connected feedforward net; the code contains no convolutional layers)

import torch
import numpy as np
import torch.nn.functional as F
x = torch.Tensor([[1, 1], [1, 0], [0, 1], [0, 0]])  # training data
y = torch.Tensor([[1], [0], [0], [1]])              # labels
# print(y)

class network(torch.nn.Module):
    def __init__(self, in_num, hidden_num, out_num):  # build the layers
        super(network, self).__init__()
        self.input_layer = torch.nn.Linear(in_num, hidden_num)    # fully connected layer 1: input -> hidden
        self.sigmoid = torch.nn.Sigmoid()                         # activation layer
        self.output_layer = torch.nn.Linear(hidden_num, out_num)  # fully connected layer 2: hidden -> output
        # self.softmax = torch.nn.LogSoftmax()

    def forward(self, input_x):  # wire the layers together into the complete network
        # h_1 = self.sigmoid(self.input_layer(input_x))  # layer 1 output through the sigmoid activation
        h_1 = F.relu(self.input_layer(input_x))     # ReLU activation; sigmoid also works
        h_2 = self.sigmoid(self.output_layer(h_1))  # layer 2 output through the sigmoid activation
        return h_2
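As a quick sanity check (my addition, not in the original post), you can run an untrained instance on one sample to confirm the shapes and that the output lands in (0, 1); the variable names here are hypothetical:

check_net = network(2, 4, 1)
sample = torch.Tensor([[1, 0]])
print(check_net(sample))        # one untrained probability, e.g. tensor([[0.4...]])
print(check_net(sample).shape)  # torch.Size([1, 1])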

net = network(2, 4, 1)  # network with 2 inputs, 4 hidden neurons and 1 output neuron
print('--------------------------------------')
print('Current network:')
print(net)

loss_function = torch.nn.BCELoss()
print('--------------------------------------')
print('Loss function:')
print(loss_function)

optimizer = torch.optim.SGD(net.parameters(), lr=0.1, momentum=0.9)  # SGD; the momentum term speeds up convergence
print('--------------------------------------')
print('Optimizer:')
print(optimizer)

for i in range(1000):
    out = net(x)                  # forward pass over the training data
    # print(out)
    loss = loss_function(out, y)  # error between the output and the target
    # print("loss is %f" % loss.data.numpy())
    optimizer.zero_grad()         # clear the gradients, otherwise they accumulate across iterations
    loss.backward()               # backpropagate the error
    optimizer.step()              # update the parameters

print(out)
print(y)
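Because the output layer ends in a sigmoid, out holds probabilities in (0, 1). A minimal sketch (my addition, not in the original post) that thresholds them into hard 0/1 predictions and checks accuracy on the four samples:

# Hypothetical evaluation step, appended after the training loop above
with torch.no_grad():                  # no gradients needed for evaluation
    preds = (net(x) > 0.5).float()     # threshold the sigmoid outputs at 0.5
    accuracy = (preds == y).float().mean().item()
    print('predictions:', preds.squeeze().tolist())
    print('accuracy:', accuracy)       # expect 1.0 once training has converged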

Run output:

H:\ProgramData\Anaconda3\python.exe D:/PycharmProjects/untitled/123.py
--------------------------------------
Current network:
network(
  (input_layer): Linear(in_features=2, out_features=4, bias=True)
  (sigmoid): Sigmoid()
  (output_layer): Linear(in_features=4, out_features=1, bias=True)
)
--------------------------------------
Loss function:
BCELoss()
--------------------------------------
Optimizer:
SGD (
Parameter Group 0
    dampening: 0
    lr: 0.1
    momentum: 0.9
    nesterov: False
    weight_decay: 0
)
tensor([[9.9926e-01],
        [5.6683e-03],
        [5.1300e-04],
        [9.9903e-01]], grad_fn=<SigmoidBackward>)
tensor([[1.],
        [0.],
        [0.],
        [1.]])

Process finished with exit code 0

To print the weights and biases of each layer at this point:

for layer in net.modules():
    if isinstance(layer, torch.nn.Linear):
        print('Weights:')
        print(layer.weight)
        print('Biases:')
        print(layer.bias)
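An equivalent and arguably more idiomatic alternative (my suggestion, not from the original post) is named_parameters(), which labels every tensor with the layer attribute it belongs to:

# Alternative: iterate over the named parameters directly
for name, param in net.named_parameters():
    print(name, param.data)
# prints input_layer.weight, input_layer.bias, output_layer.weight, output_layer.bias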

Output:

Weights:
Parameter containing:
tensor([[ 3.1269, -3.1266],
        [-3.1090,  3.1092],
        [-0.3053,  0.1919],
        [-0.3873, -0.1486]], requires_grad=True)
Biases:
Parameter containing:
tensor([-8.6705e-04, -1.1113e-03, -1.3949e-05, -3.0352e-01],
       requires_grad=True)
Weights:
Parameter containing:
tensor([[-4.3422, -4.3581, -0.1584, -0.2715]], requires_grad=True)
Biases:
Parameter containing:
tensor([5.9323], requires_grad=True)
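To see how these numbers produce the outputs printed earlier, here is a sketch (my addition, not in the original post) that replays the forward pass by hand using the printed values; because the displayed weights are rounded, the results only approximately match the training script's output:

import torch

# Weights/biases copied from the printout above (rounded by the print format)
W1 = torch.tensor([[ 3.1269, -3.1266],
                   [-3.1090,  3.1092],
                   [-0.3053,  0.1919],
                   [-0.3873, -0.1486]])
b1 = torch.tensor([-8.6705e-04, -1.1113e-03, -1.3949e-05, -3.0352e-01])
W2 = torch.tensor([[-4.3422, -4.3581, -0.1584, -0.2715]])
b2 = torch.tensor([5.9323])

x = torch.Tensor([[1, 1], [1, 0], [0, 1], [0, 0]])
h = torch.relu(x @ W1.t() + b1)       # hidden layer: Linear followed by ReLU
out = torch.sigmoid(h @ W2.t() + b2)  # output layer: Linear followed by sigmoid
print(out)  # values near 1, 0, 0, 1, mirroring the trained network's output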

Note: if PyTorch fails to run, check the environment: Run/Debug Configurations -> Python -> cnn.py -> set the Python interpreter to Anaconda3\python.exe under your installation path, then confirm and retry.
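A quick way (my suggestion, not from the original post) to confirm that the configured interpreter actually sees PyTorch is to run these two lines in it:

import torch
print(torch.__version__)  # an ImportError here means this interpreter is not the Anaconda one with PyTorch installed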

Original post: https://blog.csdn.net/qq_42017767/article/details/108267668