PyTorch Note -- Automatic Differentiation
There are two ways to compute gradients with autograd: call torch.autograd.grad(loss, [parameter, ...]), which returns the gradients directly as a tuple, or call loss.backward(), which accumulates the gradient of each leaf tensor into its .grad attribute, read afterwards via parameter.grad.
import torch

x = torch.ones(1)
w = torch.full([1], 2.0, requires_grad=True)  # use a float fill value: requires_grad needs a floating-point tensor
# print(w)  # tensor([2.], requires_grad=True)
mse = torch.nn.functional.mse_loss(x * w, torch.ones(1))
# print(mse)  # tensor(1., grad_fn=<MseLossBackward>)

# Way 1: compute the gradient of mse with respect to [w] and return it directly
print(torch.autograd.grad(mse, [w]))  # (tensor([2.]),)

# Way 2: equivalent to the call above, but the gradient lands in w.grad.
# The loss must be rebuilt, because the first call already freed the computation graph.
mse2 = torch.nn.functional.mse_loss(x * w, torch.ones(1))
mse2.backward()
print(w.grad)  # tensor([2.])
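
Two details are worth knowing when mixing these APIs. First, a backward pass frees the computation graph by default, so differentiating the same loss twice raises an error unless you pass retain_graph=True. Second, .backward() accumulates into .grad, so repeated calls add up until you clear the gradient. A minimal sketch of both behaviors (the variable names here are illustrative, not from the original note):

import torch

w = torch.full([1], 2.0, requires_grad=True)
loss = torch.nn.functional.mse_loss(torch.ones(1) * w, torch.ones(1))

# retain_graph=True keeps the graph alive so the same loss can be differentiated again
g = torch.autograd.grad(loss, [w], retain_graph=True)
print(g)  # (tensor([2.]),)

loss.backward()  # allowed only because the graph was retained above
print(w.grad)    # tensor([2.])

# .backward() accumulates: another forward/backward pass adds to w.grad
loss2 = torch.nn.functional.mse_loss(torch.ones(1) * w, torch.ones(1))
loss2.backward()
print(w.grad)    # tensor([4.]) -- 2. from before plus 2. from this pass

w.grad.zero_()   # clear the accumulated gradient before the next step
print(w.grad)    # tensor([0.])

This accumulation is why training loops call optimizer.zero_grad() (or zero the .grad tensors by hand, as above) before each backward pass.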