Calculating Convolution and Deconvolution (Transposed Convolution) Output Sizes in PyTorch
torch.nn.Conv2d
def __init__(self, in_channels, out_channels, kernel_size, stride=1,
             padding=0, dilation=1, groups=1,
             bias=True, padding_mode='zeros'):
Parameters
- in_channels (int) – Number of channels in the input image
- out_channels (int) – Number of channels produced by the convolution
- kernel_size (int or tuple) – Size of the convolving kernel
- stride (int or tuple, optional) – Stride of the convolution. Default: 1
- padding (int or tuple, optional) – Zero-padding added to both sides of the input. Default: 0
- padding_mode (string, optional) – ‘zeros’, ‘reflect’, ‘replicate’ or ‘circular’. Default: ‘zeros’
- dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
- groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
- bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
Example
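For Conv2d, the output height and width are:

H_out=floor((H_in+2×padding[0]−dilation[0]×(kernel_size[0]−1)−1)/stride[0]+1)
W_out=floor((W_in+2×padding[1]−dilation[1]×(kernel_size[1]−1)−1)/stride[1]+1)

A minimal sketch that checks this formula (the layer sizes below are illustrative, not from the original article):

import torch
import torch.nn as nn

# illustrative sizes: batch of 1, 3-channel 32x32 input
x = torch.randn(1, 3, 32, 32)
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=2, padding=1)
y = conv(x)
# H_out = floor((32 + 2*1 - 1*(3-1) - 1)/2 + 1) = floor(16.5) = 16
print(y.shape)  # torch.Size([1, 16, 16, 16])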
torch.nn.ConvTranspose2d
I read another write-up and tried the formula given there, but the sizes it produced did not match; it is still helpful for understanding, though.
def __init__(self, in_channels, out_channels, kernel_size, stride=1,
             padding=0, output_padding=0, groups=1, bias=True,
             dilation=1, padding_mode='zeros'):
Parameters
- in_channels (int) – Number of channels in the input image
- out_channels (int) – Number of channels produced by the convolution
- kernel_size (int or tuple) – Size of the convolving kernel
- stride (int or tuple, optional) – Stride of the convolution. Default: 1
- padding (int or tuple, optional) – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Default: 0
- output_padding (int or tuple, optional) – Additional size added to one side of each dimension in the output shape. Default: 0
- groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
- bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
- dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
Example
H_out=(H_in−1)×stride[0]−2×padding[0]+dilation[0]×(kernel_size[0]−1)+output_padding[0]+1
W_out=(W_in−1)×stride[1]−2×padding[1]+dilation[1]×(kernel_size[1]−1)+output_padding[1]+1
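A minimal sketch that checks the formula above (the layer sizes are illustrative, not from the original article); it upsamples the 16x16 feature map from the Conv2d sketch back to 32x32:

import torch
import torch.nn as nn

# illustrative sizes: batch of 1, 16-channel 16x16 input
x = torch.randn(1, 16, 16, 16)
deconv = nn.ConvTranspose2d(in_channels=16, out_channels=3, kernel_size=3, stride=2,
                            padding=1, output_padding=1)
y = deconv(x)
# H_out = (16 - 1)*2 - 2*1 + 1*(3 - 1) + 1 + 1 = 32
print(y.shape)  # torch.Size([1, 3, 32, 32])

With stride=2 this mirrors the Conv2d sketch above; output_padding=1 is what makes the result land exactly on 32 rather than 31, since the Conv2d formula with kernel_size=3, stride=2, padding=1 maps both a 31-pixel and a 32-pixel input to the same 16-pixel output.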
Original article: https://blog.csdn.net/Doraemon_Zzn/article/details/107930752