PyTorch Learning Notes (17): Loss Functions (II)


Loss Functions

nn.L1Loss

Function: computes the element-wise absolute difference between inputs and target
l_{n} = \left|x_{n} - y_{n}\right|

nn.MSELoss

Function: computes the element-wise squared difference between inputs and target
l_{n} = \left(x_{n} - y_{n}\right)^{2}
Main parameters
reduction: reduction mode, one of none/sum/mean (compared in the sketch below)
none - compute the loss element-wise
sum - sum all elements, returning a scalar
mean - average over all elements, returning a scalar
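
The same three reduction modes appear in most of the losses below. A minimal sketch comparing them on nn.MSELoss (the toy tensors are my own choice, mirroring the demo script at the end):

import torch
import torch.nn as nn

inputs = torch.ones((2, 2))          # predictions
target = torch.ones((2, 2)) * 3      # targets; per-element error is 2

for mode in ['none', 'sum', 'mean']:
    loss = nn.MSELoss(reduction=mode)(inputs, target)
    print(mode, loss)                # none -> 2x2 tensor of 4., sum -> 16., mean -> 4.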

nn.SmoothL1Loss

Function: a smoothed version of L1 loss
Main parameters
reduction: reduction mode, one of none/sum/mean
none - compute the loss element-wise
sum - sum all elements, returning a scalar
mean - average over all elements, returning a scalar
\operatorname{loss}(x, y) = \frac{1}{n} \sum_{i} z_{i}
z_{i} = \begin{cases} 0.5\left(x_{i}-y_{i}\right)^{2}, & \text{if } \left|x_{i}-y_{i}\right| < 1 \\ \left|x_{i}-y_{i}\right| - 0.5, & \text{otherwise} \end{cases}
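
A quick numeric check of the piecewise definition (the probe values are my own choice): at |x - y| = 0.5 the quadratic branch gives 0.5 * 0.5^2 = 0.125, while at |x - y| = 2 the linear branch gives 2 - 0.5 = 1.5.

import torch
import torch.nn as nn

x = torch.tensor([0.5, 2.0])
y = torch.zeros(2)
print(nn.SmoothL1Loss(reduction='none')(x, y))   # tensor([0.1250, 1.5000])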

nn.PoissonNLLLoss

Function: negative log-likelihood loss for a Poisson distribution
Main parameters
log_input: whether the input is already in log form; determines which formula is used
full: whether to compute the full loss, i.e. add the Stirling approximation term; default is False
eps: correction term to avoid log(input) evaluating to nan
log_input = True:
\operatorname{loss}(input, target) = \exp(input) - target * input
log_input = False:
\operatorname{loss}(input, target) = input - target * \log(input + eps)
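
A minimal sketch of the two modes (the toy tensors are my own choice); they agree when the raw-rate input is the exponential of the log-rate input:

import torch
import torch.nn as nn

log_rate = torch.randn(2, 2)                  # input as log of the predicted rate
target = torch.randint(0, 5, (2, 2)).float()

loss_log = nn.PoissonNLLLoss(log_input=True, reduction='none')(log_rate, target)
loss_raw = nn.PoissonNLLLoss(log_input=False, eps=1e-8, reduction='none')(log_rate.exp(), target)
print(torch.allclose(loss_log, loss_raw, atol=1e-4))   # True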

nn.KLDivLoss

Function: computes the KL divergence (relative entropy)
Note: the input must already be log-probabilities, e.g. computed via nn.LogSoftmax()
Main parameters
reduction: one of none/sum/mean/batchmean
batchmean - sum the loss and divide by the batch size
none - compute the loss element-wise
sum - sum all elements, returning a scalar
mean - average over all elements, returning a scalar
D_{KL}(P \| Q) = E_{x \sim P}\left[\log \frac{P(x)}{Q(x)}\right] = E_{x \sim P}[\log P(x) - \log Q(x)] = \sum_{i=1}^{N} P\left(x_{i}\right)\left(\log P\left(x_{i}\right) - \log Q\left(x_{i}\right)\right)
l_{n} = y_{n} \cdot \left(\log y_{n} - x_{n}\right)
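
A minimal sketch of the intended usage, with logits passed through F.log_softmax first (the logits and target distribution are my own choice); 'batchmean' matches the mathematical definition of KL divergence:

import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(4, 3)                    # raw network outputs
log_q = F.log_softmax(logits, dim=1)          # log-probabilities, as the loss requires
p = torch.tensor([[0.9, 0.05, 0.05]] * 4)     # target distribution P

kld = nn.KLDivLoss(reduction='batchmean')
print(kld(log_q, p))                          # scalar estimate of KL(P || Q)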

nn.MarginRankingLoss

Function: computes a ranking loss between two sets of inputs; used for ranking tasks
Special note: this method compares two groups of data and returns an n*n loss matrix
Main parameters
margin: boundary value, the required gap between x1 and x2
reduction: reduction mode, one of none/sum/mean
\operatorname{loss}(x, y) = \max(0, -y * (x1 - x2) + \text{margin})
When y = 1, x1 is expected to be larger than x2; no loss is produced when x1 > x2
When y = -1, x2 is expected to be larger than x1; no loss is produced when x2 > x1

nn.MultiLabelMarginLoss

Function: multi-label margin (hinge) loss
Main parameters
reduction: reduction mode, one of none/sum/mean
\operatorname{loss}(x, y) = \sum_{ij} \frac{\max(0, 1 - (x[y[j]] - x[i]))}{x.\operatorname{size}(0)}

nn.SoftMarginLoss

Function: logistic loss for two-class classification
Main parameters
reduction: reduction mode, one of none/sum/mean
\operatorname{loss}(x, y) = \sum_{i} \frac{\log(1 + \exp(-y[i] * x[i]))}{x.\operatorname{nelement}()}

nn.MultiLabelSoftMarginLoss

Function: the multi-label version of SoftMarginLoss; C is the number of classes
Main parameters
reduction: reduction mode, one of none/sum/mean
\operatorname{loss}(x, y) = -\frac{1}{C} * \sum_{i} y[i] * \log\left((1 + \exp(-x[i]))^{-1}\right) + (1 - y[i]) * \log\left(\frac{\exp(-x[i])}{1 + \exp(-x[i])}\right)
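
As a sanity check (my own cross-check, not from the original notes): per sample this loss is binary cross-entropy with logits averaged over the C classes, so it can be verified against nn.BCEWithLogitsLoss:

import torch
import torch.nn as nn

x = torch.tensor([[0.3, 0.7, 0.8]])
y = torch.tensor([[0., 1., 1.]])

ml = nn.MultiLabelSoftMarginLoss(reduction='none')(x, y)
bce = nn.BCEWithLogitsLoss(reduction='none')(x, y).mean(dim=1)
print(torch.allclose(ml, bce))   # True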

nn.MultiMarginLoss

Function: hinge loss for multi-class classification
Main parameters
p: exponent of the hinge term, 1 or 2
weight: per-class loss weights
margin: boundary value
reduction: reduction mode, one of none/sum/mean
\operatorname{loss}(x, y) = \frac{\sum_{i} \max(0, \operatorname{margin} - x[y] + x[i])^{p}}{x.\operatorname{size}(0)}

nn.TripletMarginLoss

Function: triplet loss, commonly used in face verification
Main parameters
p: order of the distance norm, default 2
margin: boundary value
reduction: reduction mode, one of none/sum/mean
L(a, p, n) = \max\left\{d\left(a_{i}, p_{i}\right) - d\left(a_{i}, n_{i}\right) + \text{margin}, 0\right\}
d\left(x_{i}, y_{i}\right) = \left\|\mathbf{x}_{i} - \mathbf{y}_{i}\right\|_{p}
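
A sketch with batch embeddings rather than the scalar toy in the demo script (the embedding size and construction are my own choice):

import torch
import torch.nn as nn

anchor = torch.randn(8, 128)                     # 8 anchor embeddings of dim 128
positive = anchor + 0.1 * torch.randn(8, 128)    # perturbed copies, close to the anchors
negative = torch.randn(8, 128)                   # unrelated embeddings

loss_f = nn.TripletMarginLoss(margin=1.0, p=2)
print(loss_f(anchor, positive, negative))        # scalar: mean loss over the batch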

nn.HingeEmbeddingLoss

Function: measures the similarity of two inputs; commonly used for nonlinear embedding and semi-supervised learning
Special note: the input x should be the absolute difference (a distance) between the two inputs
Main parameters
margin: boundary value (the Δ in the formula below)
reduction: reduction mode, one of none/sum/mean
l_{n} = \begin{cases} x_{n}, & \text{if } y_{n} = 1 \\ \max\left\{0, \Delta - x_{n}\right\}, & \text{if } y_{n} = -1 \end{cases}
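
A sketch of the "input is a distance" convention from the special note (the embeddings and pairing are my own construction; here the L2 pairwise distance stands in for the absolute difference):

import torch
import torch.nn as nn

emb1 = torch.randn(4, 16)
emb2 = torch.randn(4, 16)
y = torch.tensor([1., 1., -1., -1.])          # 1 = similar pair, -1 = dissimilar pair

dist = torch.pairwise_distance(emb1, emb2)    # the loss consumes distances, not raw embeddings
print(nn.HingeEmbeddingLoss(margin=1.0)(dist, y))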

nn.CosineEmbeddingLoss

Function: measures the similarity of two inputs using cosine similarity
Main parameters
margin: valid range [-1, 1]; values in [0, 0.5] are recommended
reduction: reduction mode, one of none/sum/mean
\operatorname{loss}(x, y) = \begin{cases} 1 - \cos\left(x_{1}, x_{2}\right), & \text{if } y = 1 \\ \max\left(0, \cos\left(x_{1}, x_{2}\right) - \operatorname{margin}\right), & \text{if } y = -1 \end{cases}
\cos(\theta) = \frac{A \cdot B}{\|A\| \|B\|} = \frac{\sum_{i=1}^{n} A_{i} \times B_{i}}{\sqrt{\sum_{i=1}^{n} A_{i}^{2}} \times \sqrt{\sum_{i=1}^{n} B_{i}^{2}}}
nn.CTCLoss

Function: computes the CTC (Connectionist Temporal Classification) loss, for classifying sequential data
Main parameters
blank: index of the blank label
zero_infinity: whether to set infinite losses and the associated gradients to zero
reduction: reduction mode, one of none/sum/mean



# -*- coding: utf-8 -*-
"""
nn.L1Loss
nn.MSELoss
nn.SmoothL1Loss
nn.PoissonNLLLoss
nn.KLDivLoss
nn.MarginRankingLoss
nn.MultiLabelMarginLoss
nn.SoftMarginLoss
nn.MultiLabelSoftMarginLoss
nn.MultiMarginLoss
nn.TripletMarginLoss
nn.HingeEmbeddingLoss
nn.CosineEmbeddingLoss
nn.CTCLoss
"""

import torch
import torch.nn as nn
import torch.nn.functional as F
import matplotlib.pyplot as plt
import numpy as np
def set_seed(seed=1):
    """Set random seeds for reproducibility (inlined replacement for the
    course's tools.common_tools helper, so the script runs standalone)."""
    torch.manual_seed(seed)
    np.random.seed(seed)

set_seed(1)  # set random seed

# ------------------------------------------------- 5 L1 loss ----------------------------------------------
flag = 0
# flag = 1
if flag:

    inputs = torch.ones((2, 2))
    target = torch.ones((2, 2)) * 3

    loss_f = nn.L1Loss(reduction='none')
    loss = loss_f(inputs, target)

    print("input:{}\ntarget:{}\nL1 loss:{}".format(inputs, target, loss))

# ------------------------------------------------- 6 MSE loss ----------------------------------------------

    loss_f_mse = nn.MSELoss(reduction='none')
    loss_mse = loss_f_mse(inputs, target)

    print("MSE loss:{}".format(loss_mse))

# ------------------------------------------------- 7 Smooth L1 loss ----------------------------------------------
flag = 0
# flag = 1
if flag:
    inputs = torch.linspace(-3, 3, steps=500)
    target = torch.zeros_like(inputs)

    loss_f = nn.SmoothL1Loss(reduction='none')

    loss_smooth = loss_f(inputs, target)

    loss_l1 = np.abs(inputs.numpy())

    plt.plot(inputs.numpy(), loss_smooth.numpy(), label='Smooth L1 Loss')
    plt.plot(inputs.numpy(), loss_l1, label='L1 loss')
    plt.xlabel('x_i - y_i')
    plt.ylabel('loss value')
    plt.legend()
    plt.grid()
    plt.show()


# ------------------------------------------------- 8 Poisson NLL Loss ----------------------------------------------
flag = 0
# flag = 1
if flag:

    inputs = torch.randn((2, 2))
    target = torch.randn((2, 2))

    loss_f = nn.PoissonNLLLoss(log_input=True, full=False, reduction='none')
    loss = loss_f(inputs, target)
    print("input:{}\ntarget:{}\nPoisson NLL loss:{}".format(inputs, target, loss))

# --------------------------------- compute by hand
flag = 0
# flag = 1
if flag:

    idx = 0

    loss_1 = torch.exp(inputs[idx, idx]) - target[idx, idx]*inputs[idx, idx]

    print("第一个元素loss:", loss_1)


# ------------------------------------------------- 9 KL Divergence Loss ----------------------------------------------
flag = 0
# flag = 1
if flag:

    inputs = torch.tensor([[0.5, 0.3, 0.2], [0.2, 0.3, 0.5]])
    inputs_log = torch.log(inputs)
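    # NOTE: nn.KLDivLoss expects log-probabilities as input; raw probabilities are
    # passed below on purpose so the result matches the formula l_n = y_n * (log y_n - x_n).
    # For a true KL divergence, pass inputs_log instead of inputs.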
    target = torch.tensor([[0.9, 0.05, 0.05], [0.1, 0.7, 0.2]], dtype=torch.float)

    loss_f_none = nn.KLDivLoss(reduction='none')
    loss_f_mean = nn.KLDivLoss(reduction='mean')
    loss_f_bs_mean = nn.KLDivLoss(reduction='batchmean')

    loss_none = loss_f_none(inputs, target)
    loss_mean = loss_f_mean(inputs, target)
    loss_bs_mean = loss_f_bs_mean(inputs, target)

    print("loss_none:\n{}\nloss_mean:\n{}\nloss_bs_mean:\n{}".format(loss_none, loss_mean, loss_bs_mean))

# --------------------------------- compute by hand
flag = 0
# flag = 1
if flag:

    idx = 0

    loss_1 = target[idx, idx] * (torch.log(target[idx, idx]) - inputs[idx, idx])

    print("第一个元素loss:", loss_1)


# ---------------------------------------------- 10 Margin Ranking Loss --------------------------------------------
flag = 0
# flag = 1
if flag:

    x1 = torch.tensor([[1], [2], [3]], dtype=torch.float)
    x2 = torch.tensor([[2], [2], [2]], dtype=torch.float)

    target = torch.tensor([1, 1, -1], dtype=torch.float)

    loss_f_none = nn.MarginRankingLoss(margin=0, reduction='none')

    loss = loss_f_none(x1, x2, target)

    print(loss)

# ---------------------------------------------- 11 Multi Label Margin Loss -----------------------------------------
flag = 0
# flag = 1
if flag:

    x = torch.tensor([[0.1, 0.2, 0.4, 0.8]])
    y = torch.tensor([[0, 3, -1, -1]], dtype=torch.long)

    loss_f = nn.MultiLabelMarginLoss(reduction='none')

    loss = loss_f(x, y)

    print(loss)

# --------------------------------- compute by hand
flag = 0
# flag = 1
if flag:

    x = x[0]
    item_1 = (1-(x[0] - x[1])) + (1 - (x[0] - x[2]))    # [0]
    item_2 = (1-(x[3] - x[1])) + (1 - (x[3] - x[2]))    # [3]

    loss_h = (item_1 + item_2) / x.shape[0]

    print(loss_h)

# ---------------------------------------------- 12 SoftMargin Loss -----------------------------------------
flag = 0
# flag = 1
if flag:

    inputs = torch.tensor([[0.3, 0.7], [0.5, 0.5]])
    target = torch.tensor([[-1, 1], [1, -1]], dtype=torch.float)

    loss_f = nn.SoftMarginLoss(reduction='none')

    loss = loss_f(inputs, target)

    print("SoftMargin: ", loss)

# --------------------------------- compute by hand
flag = 0
# flag = 1
if flag:

    idx = 0

    inputs_i = inputs[idx, idx]
    target_i = target[idx, idx]

    loss_h = np.log(1 + np.exp(-target_i * inputs_i))

    print(loss_h)


# ---------------------------------------------- 13 MultiLabel SoftMargin Loss -----------------------------------------
flag = 0
# flag = 1
if flag:

    inputs = torch.tensor([[0.3, 0.7, 0.8]])
    target = torch.tensor([[0, 1, 1]], dtype=torch.float)

    loss_f = nn.MultiLabelSoftMarginLoss(reduction='none')

    loss = loss_f(inputs, target)

    print("MultiLabel SoftMargin: ", loss)

# --------------------------------- compute by hand
flag = 0
# flag = 1
if flag:

    i_0 = torch.log(torch.exp(-inputs[0, 0]) / (1 + torch.exp(-inputs[0, 0])))

    i_1 = torch.log(1 / (1 + torch.exp(-inputs[0, 1])))
    i_2 = torch.log(1 / (1 + torch.exp(-inputs[0, 2])))

    loss_h = (i_0 + i_1 + i_2) / -3

    print(loss_h)

# ---------------------------------------------- 14 Multi Margin Loss -----------------------------------------
flag = 0
# flag = 1
if flag:

    x = torch.tensor([[0.1, 0.2, 0.7], [0.2, 0.5, 0.3]])
    y = torch.tensor([1, 2], dtype=torch.long)

    loss_f = nn.MultiMarginLoss(reduction='none')

    loss = loss_f(x, y)

    print("Multi Margin Loss: ", loss)

# --------------------------------- compute by hand
flag = 0
# flag = 1
if flag:

    x = x[0]
    margin = 1

    i_0 = margin - (x[1] - x[0])
    # i_1 = margin - (x[1] - x[1])
    i_2 = margin - (x[1] - x[2])

    loss_h = (i_0 + i_2) / x.shape[0]

    print(loss_h)

# ---------------------------------------------- 15 Triplet Margin Loss -----------------------------------------
flag = 0
# flag = 1
if flag:

    anchor = torch.tensor([[1.]])
    pos = torch.tensor([[2.]])
    neg = torch.tensor([[0.5]])

    loss_f = nn.TripletMarginLoss(margin=1.0, p=1)

    loss = loss_f(anchor, pos, neg)

    print("Triplet Margin Loss", loss)

# --------------------------------- compute by hand
flag = 0
# flag = 1
if flag:

    margin = 1
    a, p, n = anchor[0], pos[0], neg[0]

    d_ap = torch.abs(a-p)
    d_an = torch.abs(a-n)

    loss = torch.clamp(d_ap - d_an + margin, min=0)   # max{d(a,p) - d(a,n) + margin, 0}

    print(loss)

# ---------------------------------------------- 16 Hinge Embedding Loss -----------------------------------------
flag = 0
# flag = 1
if flag:

    inputs = torch.tensor([[1., 0.8, 0.5]])
    target = torch.tensor([[1, 1, -1]])

    loss_f = nn.HingeEmbeddingLoss(margin=1, reduction='none')

    loss = loss_f(inputs, target)

    print("Hinge Embedding Loss", loss)

# --------------------------------- compute by hand
flag = 0
# flag = 1
if flag:
    margin = 1.
    loss = max(0, margin - inputs.numpy()[0, 2])

    print(loss)


# ---------------------------------------------- 17 Cosine Embedding Loss -----------------------------------------
flag = 0
# flag = 1
if flag:

    x1 = torch.tensor([[0.3, 0.5, 0.7], [0.3, 0.5, 0.7]])
    x2 = torch.tensor([[0.1, 0.3, 0.5], [0.1, 0.3, 0.5]])

    target = torch.tensor([1, -1], dtype=torch.float)   # one label per pair, shape (N,)

    loss_f = nn.CosineEmbeddingLoss(margin=0., reduction='none')

    loss = loss_f(x1, x2, target)

    print("Cosine Embedding Loss", loss)

# --------------------------------- compute by hand
flag = 0
# flag = 1
if flag:
    margin = 0.

    def cosine(a, b):
        numerator = torch.dot(a, b)
        denominator = torch.norm(a, 2) * torch.norm(b, 2)
        return float(numerator/denominator)

    l_1 = 1 - (cosine(x1[0], x2[0]))

    l_2 = max(0, cosine(x1[1], x2[1]) - margin)   # y = -1 branch, for the second pair

    print(l_1, l_2)


# ---------------------------------------------- 18 CTC Loss -----------------------------------------
# flag = 0
flag = 1
if flag:
    T = 50      # Input sequence length
    C = 20      # Number of classes (including blank)
    N = 16      # Batch size
    S = 30      # Target sequence length of longest target in batch
    S_min = 10  # Minimum target length, for demonstration purposes

    # Initialize random batch of input vectors, for *size = (T,N,C)
    inputs = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()

    # Initialize random batch of targets (0 = blank, 1:C = classes)
    target = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long)

    input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
    target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long)

    ctc_loss = nn.CTCLoss()
    loss = ctc_loss(inputs, target, input_lengths, target_lengths)

    print("CTC loss: ", loss)