Common PyTorch loss functions -- CrossEntropyLoss (cross-entropy loss function)
1. How the function works:
cross_entropy_loss(x, target):
x has shape torch.Size([m, num_classes]); target is a 1-D tensor of shape torch.Size([m]), where each entry is a class index.
target must be of long type, x of float type.
Internally, CrossEntropyLoss is equivalent to the pipeline softmax -> log-softmax -> NLLLoss.
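A minimal sketch of this equivalence using torch.nn.functional (the tensor values here are chosen arbitrarily for illustration); the built-in loss and the explicit log-softmax + NLLLoss pipeline should produce the same number:

import torch
import torch.nn.functional as F

x = torch.randn(3, 5)                # float logits, shape (m, num_classes)
target = torch.tensor([1, 0, 4])     # long class indices, shape (m,)

# Built-in cross entropy
builtin = F.cross_entropy(x, target)

# The same pipeline done explicitly: log-softmax followed by NLLLoss
manual = F.nll_loss(F.log_softmax(x, dim=1), target)

print(torch.allclose(builtin, manual))   # True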
2. Code demonstration:
'''
Steps:
one-hot encode the targets
compute softmax
compute log-softmax
compute NLLLoss
'''
import torch

def cross_entropy_loss(x, target):
    rows = x.size(0)   # m: number of samples
    cols = x.size(1)   # number of classes
    # One-hot encode the targets
    one_hot = torch.zeros((rows, cols))
    for i in range(len(target)):
        one_hot[i][target[i]] = 1
    '''
    torch.sum(torch.exp(x), dim=1) produces a 1-D tensor of shape (m,);
    .reshape(-1, 1) turns it into shape (m, 1) so that it broadcasts
    against torch.exp(x) row by row.
    '''
    # Step 1: softmax, applied per row so each row sums to 1
    #   softmax = e^(x_i) / sum_j e^(x_j)
    softmax = torch.exp(x) / torch.sum(torch.exp(x), dim=1).reshape(-1, 1)
    # Step 2: log-softmax
    #   logsoftmax = log(softmax)
    logsoftmax = torch.log(softmax)
    # Step 3: NLLLoss
    #   nllloss = -(1/N) * sum(one_hot * logsoftmax), where N = target.shape[0]
    nllloss = -torch.sum(one_hot * logsoftmax) / target.shape[0]
    return nllloss
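As a sanity check, a small example (input values chosen arbitrarily) comparing the function above against PyTorch's built-in torch.nn.CrossEntropyLoss; the two results should agree:

x = torch.tensor([[1.0, 2.0, 0.5],
                  [0.1, 0.3, 2.2]])       # float, shape (2, 3)
target = torch.tensor([1, 2])             # long, shape (2,)

print(cross_entropy_loss(x, target))           # manual implementation above
print(torch.nn.CrossEntropyLoss()(x, target))  # built-in, should match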
Original article: https://blog.csdn.net/weixin_40025586/article/details/109777923