A Detailed Guide to Using Softmax and LogSoftmax in PyTorch
I. Function Explanation
1. The most common way to use the Softmax function is simply to specify the dim argument (a small sketch contrasting the two settings follows this list):
(1) dim=0: apply Softmax over the elements of each column, so that every column sums to 1.
(2) dim=1: apply Softmax over the elements of each row, so that every row sums to 1.
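A minimal sketch contrasting the two settings (the tensor values below are my own, purely for illustration):

import torch
import torch.nn as nn

x = torch.tensor([[1.0, 2.0, 3.0],
                  [4.0, 5.0, 6.0]])

# dim=0: normalize down each column, so every column sums to 1
col_probs = nn.Softmax(dim=0)(x)
print(col_probs.sum(dim=0))  # tensor([1., 1., 1.])

# dim=1: normalize across each row, so every row sums to 1
row_probs = nn.Softmax(dim=1)(x)
print(row_probs.sum(dim=1))  # tensor([1., 1.])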
class Softmax(Module):
    r"""Applies the Softmax function to an n-dimensional input Tensor
    rescaling them so that the elements of the n-dimensional output Tensor
    lie in the range [0,1] and sum to 1.

    Softmax is defined as:

    .. math::
        \text{Softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}

    Shape:
        - Input: :math:`(*)` where `*` means, any number of additional dimensions
        - Output: :math:`(*)`, same shape as the input

    Returns:
        a Tensor of the same dimension and shape as the input with
        values in the range [0, 1]

    Arguments:
        dim (int): A dimension along which Softmax will be computed (so every slice
            along dim will sum to 1).

    .. note::
        This module doesn't work directly with NLLLoss,
        which expects the Log to be computed between the Softmax and itself.
        Use `LogSoftmax` instead (it's faster and has better numerical properties).

    Examples::

        >>> m = nn.Softmax(dim=1)
        >>> input = torch.randn(2, 3)
        >>> output = m(input)
    """
    __constants__ = ['dim']

    def __init__(self, dim=None):
        super(Softmax, self).__init__()
        self.dim = dim

    def __setstate__(self, state):
        self.__dict__.update(state)
        if not hasattr(self, 'dim'):
            self.dim = None

    def forward(self, input):
        return F.softmax(input, self.dim, _stacklevel=5)

    def extra_repr(self):
        return 'dim={dim}'.format(dim=self.dim)
2. LogSoftmax simply takes the log of the Softmax result, i.e. log(Softmax(x)); a short note on why the fused version is preferred follows the class source below.
class LogSoftmax(Module):
    r"""Applies the :math:`\log(\text{Softmax}(x))` function to an n-dimensional
    input Tensor. The LogSoftmax formulation can be simplified as:

    .. math::
        \text{LogSoftmax}(x_{i}) = \log\left(\frac{\exp(x_i) }{ \sum_j \exp(x_j)} \right)

    Shape:
        - Input: :math:`(*)` where `*` means, any number of additional dimensions
        - Output: :math:`(*)`, same shape as the input

    Arguments:
        dim (int): A dimension along which LogSoftmax will be computed.

    Returns:
        a Tensor of the same dimension and shape as the input with
        values in the range [-inf, 0)

    Examples::

        >>> m = nn.LogSoftmax()
        >>> input = torch.randn(2, 3)
        >>> output = m(input)
    """
    __constants__ = ['dim']

    def __init__(self, dim=None):
        super(LogSoftmax, self).__init__()
        self.dim = dim

    def __setstate__(self, state):
        self.__dict__.update(state)
        if not hasattr(self, 'dim'):
            self.dim = None

    def forward(self, input):
        return F.log_softmax(input, self.dim, _stacklevel=5)
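As the note in the Softmax docstring above points out, LogSoftmax is faster and numerically better behaved than taking the log of a Softmax output. A minimal sketch of why (the logit values are my own, chosen to force underflow):

import torch
import torch.nn.functional as F

x = torch.tensor([[0.0, 200.0, 400.0]])  # widely spread logits

# softmax underflows the small entries to exact zeros, so the log becomes -inf
print(torch.log(F.softmax(x, dim=1)))   # tensor([[-inf, -inf, 0.]])

# log_softmax works in log space and stays finite
print(F.log_softmax(x, dim=1))          # tensor([[-400., -200., 0.]])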
II. Code Example
Input code:
import torch
import torch.nn as nn
import numpy as np

batch_size = 4
class_num = 6
inputs = torch.randn(batch_size, class_num)
for i in range(batch_size):
    for j in range(class_num):
        inputs[i][j] = (i + 1) * (j + 1)

print("inputs:", inputs)
This produces a tensor with batch_size 4 and 6 classes (it can be thought of as the output of the network's final layer):
tensor([[ 1., 2., 3., 4., 5., 6.],
[ 2., 4., 6., 8., 10., 12.],
[ 3., 6., 9., 12., 15., 18.],
[ 4., 8., 12., 16., 20., 24.]])
Next we apply Softmax to each row of this tensor:
softmax = nn.Softmax(dim=1)
probs = softmax(inputs)
print("probs:\n", probs)
We get:
tensor([[4.2698e-03, 1.1606e-02, 3.1550e-02, 8.5761e-02, 2.3312e-01, 6.3369e-01],
[3.9256e-05, 2.9006e-04, 2.1433e-03, 1.5837e-02, 1.1702e-01, 8.6467e-01],
[2.9067e-07, 5.8383e-06, 1.1727e-04, 2.3553e-03, 4.7308e-02, 9.5021e-01],
[2.0234e-09, 1.1047e-07, 6.0317e-06, 3.2932e-04, 1.7980e-02, 9.8168e-01]])
We also apply LogSoftmax to each row of the tensor:
logsoftmax = nn.LogSoftmax(dim=1)
log_probs = logsoftmax(inputs)
print("log_probs:\n", log_probs)
We get:
tensor([[-5.4562e+00, -4.4562e+00, -3.4562e+00, -2.4562e+00, -1.4562e+00, -4.5619e-01],
[-1.0145e+01, -8.1454e+00, -6.1454e+00, -4.1454e+00, -2.1454e+00, -1.4541e-01],
[-1.5051e+01, -1.2051e+01, -9.0511e+00, -6.0511e+00, -3.0511e+00, -5.1069e-02],
[-2.0018e+01, -1.6018e+01, -1.2018e+01, -8.0185e+00, -4.0185e+00, -1.8485e-02]])
Verify that the elements of each row sum to 1:
# probs_sum in dim=1
probs_sum = [0 for i in range(batch_size)]
for i in range(batch_size):
    for j in range(class_num):
        probs_sum[i] += probs[i][j]
    print(i, "row probs sum:", probs_sum[i])
The per-row sums are indeed 1:
0 row probs sum: tensor(1.)
1 row probs sum: tensor(1.0000)
2 row probs sum: tensor(1.)
3 row probs sum: tensor(1.)
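The loop above works, but the same check can be written with tensor operations instead of Python loops. A short sketch of my own, reusing probs and batch_size from the code above:

row_sums = probs.sum(dim=1)                              # one sum per row
print(row_sums)                                          # tensor([1.0000, 1.0000, 1.0000, 1.0000])
print(torch.allclose(row_sums, torch.ones(batch_size)))  # expected: True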
Verify that LogSoftmax is the log of the Softmax result:
# to numpy
np_probs = probs.data.numpy()
print("numpy probs:\n", np_probs)

# np.log()
log_np_probs = np.log(np_probs)
print("log numpy probs:\n", log_np_probs)
We get:
numpy probs:
[[4.26977826e-03 1.16064614e-02 3.15496325e-02 8.57607946e-02 2.33122006e-01 6.33691311e-01]
[3.92559559e-05 2.90064461e-04 2.14330270e-03 1.58369839e-02 1.17020354e-01 8.64669979e-01]
[2.90672347e-07 5.83831024e-06 1.17265590e-04 2.35534250e-03 4.73083146e-02 9.50212955e-01]
[2.02340233e-09 1.10474026e-07 6.03167746e-06 3.29318427e-04 1.79801770e-02 9.81684387e-01]]
log numpy probs:
[[-5.4561934e+00 -4.4561934e+00 -3.4561934e+00 -2.4561932e+00 -1.4561933e+00 -4.5619333e-01]
[-1.0145408e+01 -8.1454077e+00 -6.1454072e+00 -4.1454072e+00 -2.1454074e+00 -1.4540738e-01]
[-1.5051069e+01 -1.2051069e+01 -9.0510693e+00 -6.0510693e+00 -3.0510693e+00 -5.1069155e-02]
[-2.0018486e+01 -1.6018486e+01 -1.2018485e+01 -8.0184851e+00 -4.0184855e+00 -1.8485421e-02]]
Verification complete.
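The round trip through NumPy is not strictly necessary; the same check can stay in PyTorch. A short sketch of my own, reusing inputs, probs, and log_probs from above:

print(torch.allclose(log_probs, torch.log(probs)))                   # expected: True
print(torch.allclose(log_probs, torch.log_softmax(inputs, dim=1)))   # expected: True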
III. Complete Code
import torch
import torch.nn as nn
import numpy as np

batch_size = 4
class_num = 6
inputs = torch.randn(batch_size, class_num)
for i in range(batch_size):
    for j in range(class_num):
        inputs[i][j] = (i + 1) * (j + 1)

print("inputs:", inputs)

softmax = nn.Softmax(dim=1)
probs = softmax(inputs)
print("probs:\n", probs)

logsoftmax = nn.LogSoftmax(dim=1)
log_probs = logsoftmax(inputs)
print("log_probs:\n", log_probs)

# probs_sum in dim=1
probs_sum = [0 for i in range(batch_size)]
for i in range(batch_size):
    for j in range(class_num):
        probs_sum[i] += probs[i][j]
    print(i, "row probs sum:", probs_sum[i])

# to numpy
np_probs = probs.data.numpy()
print("numpy probs:\n", np_probs)

# np.log()
log_np_probs = np.log(np_probs)
print("log numpy probs:\n", log_np_probs)
Softmax and log-softmax expressed with PyTorch's functional interface:
import torch
import numpy as np

input = torch.autograd.Variable(torch.rand(1, 3))
print(input)
print('softmax={}'.format(torch.nn.functional.softmax(input, dim=1)))
print('logsoftmax={}'.format(np.log(torch.nn.functional.softmax(input, dim=1))))
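Finally, as the note in the Softmax docstring suggests, LogSoftmax is the form that pairs with NLLLoss, and the two together are equivalent to CrossEntropyLoss applied to raw logits. A minimal sketch of my own illustrating that relationship:

import torch
import torch.nn as nn

logits = torch.randn(4, 6)               # raw scores: 4 samples, 6 classes
targets = torch.tensor([0, 2, 5, 1])     # ground-truth class indices

# LogSoftmax followed by NLLLoss ...
log_probs = nn.LogSoftmax(dim=1)(logits)
loss1 = nn.NLLLoss()(log_probs, targets)

# ... matches CrossEntropyLoss applied directly to the logits
loss2 = nn.CrossEntropyLoss()(logits, targets)

print(loss1.item(), loss2.item())        # same value up to float tolerance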
The above is based on my personal experience. I hope it serves as a useful reference, and thank you for your continued support.