Visualizing Loss, Accuracy, Gradients, and Weights with TensorBoard in PyTorch, Using the MNIST Handwritten Digits Dataset


This post walks through the source code for visualizing the loss, the accuracy, the gradients (grad), and the weights (data) with TensorBoard, using a CNN trained on the MNIST handwritten digits dataset in PyTorch as the example.

I. Source code

import torch
import torch.nn as nn
import torch.utils.data as Data
from torchvision.datasets import mnist
import torchvision.transforms as transforms
from torch.utils.tensorboard import SummaryWriter

############################# Download the data ############### 60,000 training images, 10,000 test images
train_dataset = mnist.MNIST(root='./mnist/', train=True, transform=transforms.ToTensor(), download=True)
test_dataset = mnist.MNIST(root='./mnist/', train=False, transform=transforms.ToTensor(), download=True)
train_loader = Data.DataLoader(dataset=train_dataset, batch_size=50, shuffle=True)
test_loader = Data.DataLoader(dataset=test_dataset, batch_size=50, shuffle=False)
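
# Optional sanity check: MNIST ships 60,000 training images and 10,000 test images.
print('train:', len(train_dataset), 'test:', len(test_dataset))   # 60000 10000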

###########################Define CNN module################################
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        # define the convolutional layers
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3,stride=1),    # b,16,26,26
            nn.BatchNorm2d(16),
            nn.ReLU())

        self.layer2= nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=3,stride=1),   # b.32,24,24
            nn.BatchNorm2d(32),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2,stride=2))        # b,32,12,12

        self.layer3=nn.Sequential(
            nn.Conv2d(32,64,kernel_size=3,stride=1),     # b,64,10,10
            nn.BatchNorm2d(64),
            nn.ReLU())

        self.layer4=nn.Sequential(
            nn.Conv2d(64,128,kernel_size=3,stride=1),    # b,128,8,8
            nn.BatchNorm2d(128),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2,stride=2))        # b,128,4,4

        self.fc=nn.Sequential(
            nn.Linear(128*4*4,1024),
            nn.ReLU(),
            nn.Linear(1024,128),
            nn.ReLU(),
            nn.Linear(128,10))

    def forward(self, x):
        x=self.layer1(x)
        x=self.layer2(x)
        x=self.layer3(x)
        x=self.layer4(x)
        x = x.view(x.size(0), -1)  # flatten to (batch, 128*4*4)
        x=self.fc(x)
        return x
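
# Shape check: the 28x28 input shrinks to 26 (conv3x3) -> 24 (conv3x3) -> 12 (pool)
# -> 10 (conv3x3) -> 8 (conv3x3) -> 4 (pool), which is why the first fully connected
# layer expects 128*4*4 inputs. An optional dummy forward pass confirms the output shape:
with torch.no_grad():
    print(CNN().eval()(torch.zeros(2, 1, 28, 28)).shape)   # torch.Size([2, 10])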

######################## Loss and optimizer ##################################
cnn = CNN()
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')  # fall back to CPU when no GPU is available
cnn = cnn.to(device)
criterion = nn.CrossEntropyLoss()                             # cross-entropy loss
optimizer = torch.optim.Adam(cnn.parameters(), lr=0.001)      # Adam optimizer

# build the SummaryWriter
writer = SummaryWriter(comment='test_your_comment', filename_suffix="_test_your_filename_suffix")
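
# SummaryWriter writes its event files under ./runs/<datetime>_<hostname><comment> by default;
# printing writer.log_dir shows the exact folder that tensorboard --logdir should point at later.
print(writer.log_dir)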

########################## Train #################################
for epoch in range(5):
    train_loss = 0
    train_acc = 0
    for step, (x, label) in enumerate(train_loader):
        x = x.to(device)                          # 50,1,28,28
        label = label.to(device)

        ### forward ###
        out = cnn(x)                              # 50,10
        loss = criterion(out, label)              # compute the loss

        ### backward ###
        optimizer.zero_grad()                     # zero the gradients
        loss.backward()                           # backpropagation
        optimizer.step()                          # update the parameters
        train_loss += loss.item()


        ### compute the accuracy ###
        _, pred = out.max(1)
        num_correct = (pred == label).sum().item()
        acc = num_correct / x.shape[0]
        train_acc += acc

    aver_loss = train_loss / len(train_loader)
    aver_acc = train_acc / len(train_loader)
    print('Epoch: {}, Train Loss: {:.6f}, Train Acc: {:.6f}'.format(epoch, aver_loss, aver_acc))

    ######### Record the data to the event file: one loss and one accuracy value per epoch #########
    # add_scalars groups several curves under one tag, so e.g. a "Valid" curve could be added later
    writer.add_scalars("Loss", {"Train": aver_loss}, epoch)
    writer.add_scalars("Accuracy", {"Train": aver_acc}, epoch)

    ############## At the end of each epoch, record the gradients and weights ##############
    for name, param in cnn.named_parameters():      # iterate over the model's named parameters
        writer.add_histogram(name + '_grad', param.grad, epoch)   # gradients of the parameter
        writer.add_histogram(name + '_data', param, epoch)        # values (weights) of the parameter

writer.close()    # flush the remaining events to disk and close the event file
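
Note that test_loader is built at the top but never used in the listing. For completeness, here is a minimal evaluation sketch in the same style, reusing the cnn, device, and test_loader names defined above:

cnn.eval()                                    # switch the BatchNorm layers to inference mode
num_correct = 0
with torch.no_grad():                         # no gradients are needed for evaluation
    for x, label in test_loader:
        x, label = x.to(device), label.to(device)
        out = cnn(x)
        num_correct += (out.max(1)[1] == label).sum().item()
print('Test Acc: {:.6f}'.format(num_correct / len(test_dataset)))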

II. How to open TensorBoard
1. Open the Terminal in PyCharm.
2. Enter the command tensorboard --logdir="path", pointing it at the runs folder. After the code above finishes running, a runs folder appears in the working directory.
3. Open the URL shown in the terminal (http://localhost:6006 by default).
4. The TensorBoard interface is displayed.
The SCALARS tab shows the per-epoch loss and accuracy that we logged in the code.


The HISTOGRAMS tab shows the weight distribution of each network layer that we recorded.


This view shows the per-epoch gradients and values of the weights that we recorded.

For an explanation of TensorBoard's parameters, see my earlier blog post.
