Morvan (莫烦) PyTorch Learning Notes 5
1 Autoencoders
An autoencoder is a type of neural network that, after training, attempts to copy its input to its output. It has an internal hidden layer h that produces a code representing the input. The network can be viewed as two parts: an encoder represented by the function h = f(x), and a decoder that produces a reconstruction r = g(h).
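To make the two functions concrete, here is a minimal sketch in PyTorch (the layer sizes, the names f and g, and the use of mean squared error are illustrative assumptions; the full MNIST version appears in section 2):

import torch
import torch.nn as nn

f = nn.Linear(8, 3)        # encoder: h = f(x), 8 input features -> 3-dimensional code
g = nn.Linear(3, 8)        # decoder: r = g(h), code -> reconstruction of the input

x = torch.randn(1, 8)      # one input sample
h = f(x)                   # the code
r = g(h)                   # the reconstruction
loss = nn.MSELoss()(r, x)  # reconstruction error that training would minimize
print(h.shape, r.shape, loss.item())

Training adjusts the weights of the encoder and decoder so that this reconstruction error becomes as small as possible over the dataset.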
First of all, an autoencoder is a neural network.
If the data we are given is correctly labeled, whether images, audio, or text, we are lucky: deep learning works very well on labeled datasets, because there is always some function that captures the relationship between the variables.
For example, if our input data is a set of numbers together with labels saying whether each number is even or odd, then the function relating the two columns is simple: if the input is divisible by 2, the number is even; otherwise it is odd.
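Written out as code, that labeling function is a single line (the function name below is just for illustration):

def parity_label(n):
    # the entire "relationship" between the input number and its label in this toy case
    return 'even' if n % 2 == 0 else 'odd'

print([parity_label(n) for n in [3, 8, 14, 21]])   # ['odd', 'even', 'even', 'odd']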
All data types (video, text, and so on) can be represented as numbers, so such a mapping function always exists; it is just somewhat more complex than the one we just discussed.
2 Code Implementation
The script below trains a fully connected autoencoder on MNIST: the encoder compresses each 28x28 image down to a 3-dimensional code, the decoder reconstructs the 784 pixels from that code, and at the end the 3-dimensional codes of 200 digits are shown in a 3D scatter plot.
import torch
import torch.nn as nn
from torch.autograd import Variable
import torch.utils.data as Data
import torchvision
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
import numpy as np
torch.manual_seed(1) # reproducible
# Hyper Parameters
EPOCH = 10
BATCH_SIZE = 64
LR = 0.005 # learning rate
DOWNLOAD_MNIST = False
N_TEST_IMG = 5
# Mnist digits dataset
train_data = torchvision.datasets.MNIST(
    root='./mnist/',
    train=True,                                    # this is training data
    transform=torchvision.transforms.ToTensor(),   # converts a PIL.Image or numpy.ndarray to a
                                                   # torch.FloatTensor of shape (C x H x W), normalized to the range [0.0, 1.0]
    download=DOWNLOAD_MNIST,                       # download it if you don't have it
)
# plot one example
print(train_data.train_data.size()) # (60000, 28, 28)
print(train_data.train_labels.size()) # (60000)
plt.imshow(train_data.train_data[2].numpy(), cmap='gray')
plt.title('%i' % train_data.train_labels[2])
plt.show()
# Data Loader for easy mini-batch return in training; the image batch shape will be (64, 1, 28, 28)
train_loader = Data.DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)
class AutoEncoder(nn.Module):
    def __init__(self):
        super(AutoEncoder, self).__init__()
        self.encoder = nn.Sequential(
            nn.Linear(28*28, 128),
            nn.Tanh(),
            nn.Linear(128, 64),
            nn.Tanh(),
            nn.Linear(64, 12),
            nn.Tanh(),
            nn.Linear(12, 3),    # compress to 3 features which can be visualized in plt
        )
        self.decoder = nn.Sequential(
            nn.Linear(3, 12),
            nn.Tanh(),
            nn.Linear(12, 64),
            nn.Tanh(),
            nn.Linear(64, 128),
            nn.Tanh(),
            nn.Linear(128, 28*28),
            nn.Sigmoid(),        # compress to a range (0, 1)
        )

    def forward(self, x):
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        return encoded, decoded
autoencoder = AutoEncoder()
print(autoencoder)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=LR)
loss_func = nn.MSELoss()
# original data (first row) for viewing
view_data = Variable(train_data.train_data[:N_TEST_IMG].view(-1, 28*28).type(torch.FloatTensor)/255.)
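# the prints below inspect how .view(-1, 28*28) flattens each 28x28 image to a 784-vector
# and how .type(torch.FloatTensor)/255. converts the uint8 pixels to floats in [0, 1]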
print(train_data.train_data[:N_TEST_IMG])
print(type(train_data.train_data[:N_TEST_IMG]))
print(train_data.train_data[:N_TEST_IMG].size())
print("----------------------------")
print(train_data.train_data[:N_TEST_IMG].view(-1, 28*28))
print(type(train_data.train_data[:N_TEST_IMG].view(-1, 28*28)))
print(train_data.train_data[:N_TEST_IMG].view(-1, 28*28).size())
print("----------------------------")
print(train_data.train_data[:N_TEST_IMG].view(-1, 28*28).type(torch.FloatTensor)/255.)
print(type(train_data.train_data[:N_TEST_IMG].view(-1, 28*28).type(torch.FloatTensor)/255.))
print((train_data.train_data[:N_TEST_IMG].view(-1, 28*28).type(torch.FloatTensor)/255.).size())
for epoch in range(EPOCH):
    for step, (x, y) in enumerate(train_loader):
        b_x = Variable(x.view(-1, 28 * 28))   # batch x, shape (batch, 28*28)
        b_y = Variable(x.view(-1, 28 * 28))   # batch y, shape (batch, 28*28); the input is its own target
        b_label = Variable(y)                 # batch label (not used for training, kept only for reference)

        encoded, decoded = autoencoder(b_x)

        loss = loss_func(decoded, b_y)        # mean square error
        optimizer.zero_grad()                 # clear gradients for this training step
        loss.backward()                       # backpropagation, compute gradients
        optimizer.step()                      # apply gradients

        if step % 500 == 0 and epoch in [0, 5, EPOCH - 1]:
            print('Epoch: ', epoch, '| train loss: ', loss.data)

            # plotting decoded image (second row)
            _, decoded_data = autoencoder(view_data)

            # initialize figure
            f, a = plt.subplots(2, N_TEST_IMG, figsize=(5, 2))
            for i in range(N_TEST_IMG):
                a[0][i].imshow(np.reshape(view_data.data.numpy()[i], (28, 28)), cmap='gray')
                a[0][i].set_xticks(()); a[0][i].set_yticks(())
            for i in range(N_TEST_IMG):
                a[1][i].clear()
                a[1][i].imshow(np.reshape(decoded_data.data.numpy()[i], (28, 28)), cmap='gray')
                a[1][i].set_xticks(()); a[1][i].set_yticks(())
            plt.show()
# visualize in 3D plot
view_data = Variable(train_data.train_data[:200].view(-1, 28*28).type(torch.FloatTensor)/255.)
encoded_data, _ = autoencoder(view_data)
fig = plt.figure(2); ax = Axes3D(fig)
X, Y, Z = encoded_data.data[:, 0].numpy(), encoded_data.data[:, 1].numpy(), encoded_data.data[:, 2].numpy()
values = train_data.train_labels[:200].numpy()
for x, y, z, s in zip(X, Y, Z, values):
    c = cm.rainbow(int(255*s/9)); ax.text(x, y, z, s, backgroundcolor=c)
ax.set_xlim(X.min(), X.max()); ax.set_ylim(Y.min(), Y.max()); ax.set_zlim(Z.min(), Z.max())
plt.show()
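Once training has finished, the encoder half can also be used on its own to compress a new image down to 3 numbers, and the decoder half to turn such a code back into an image. A minimal usage sketch, assuming autoencoder and train_data from the script above are still in scope:

# compress one image to its 3-dimensional code and reconstruct it again
img = Variable(train_data.train_data[0].view(-1, 28 * 28).type(torch.FloatTensor) / 255.)
code = autoencoder.encoder(img)     # shape (1, 3): the compressed representation
recon = autoencoder.decoder(code)   # shape (1, 784): the reconstructed pixels
print(code.data.numpy(), recon.size())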
References:
1. https://www.bilibili.com/video/av15997678/
2. https://github.com/MorvanZhou/PyTorch-Tutorial/blob/master/tutorial-contents-notebooks/