torch.nn (Part 2)
Contents
DataParallel layers (multi-GPU, distributed)
Recurrent layers
RNN
class torch.nn.RNN(*args, **kwargs)[source]
Applies a multi-layer Elman RNN with tanh or ReLU non-linearity to an input sequence.
For each element in the input sequence, each layer computes the following function:
$$h_t = \tanh(W_{ih} x_t + b_{ih} + W_{hh} h_{(t-1)} + b_{hh})$$
where $h_t$ is the hidden state at time t, $x_t$ is the input at time t, and $h_{(t-1)}$ is the hidden state of the layer at time t-1 or the initial hidden state at time 0. If nonlinearity is 'relu', then ReLU is used instead of tanh.
Parameters
Inputs: input, h_0
Outputs: output, h_n
Shape:
Variables
Note
All the weights and biases are initialized from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{1}{\text{hidden\_size}}$.
Note
If the following conditions are satisfied: 1) cudnn is enabled, 2) input data is on the GPU, 3) input data has dtype torch.float16, 4) a V100 GPU is used, 5) input data is not in PackedSequence format, the persistent algorithm can be selected to improve performance.
Examples:
>>> rnn = nn.RNN(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> output, hn = rnn(input, h0)
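As a sanity check (written for this note, not part of the original docs), the Elman recurrence above can be replayed by hand using the layer's `weight_ih_l0` / `weight_hh_l0` parameters; the result should match the module's output:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
rnn = nn.RNN(input_size=4, hidden_size=3, num_layers=1)
x = torch.randn(5, 1, 4)             # (seq_len, batch, input_size)
out, h_n = rnn(x)                    # initial hidden state defaults to zeros

# Replay h_t = tanh(W_ih x_t + b_ih + W_hh h_{t-1} + b_hh) step by step.
h_manual = torch.zeros(1, 3)
for t in range(5):
    h_manual = torch.tanh(x[t] @ rnn.weight_ih_l0.T + rnn.bias_ih_l0
                          + h_manual @ rnn.weight_hh_l0.T + rnn.bias_hh_l0)
print(torch.allclose(out[-1, 0], h_manual[0], atol=1e-5))  # True
```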
LSTM
class torch.nn.LSTM(*args, **kwargs)[source]
Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence.
For each element in the input sequence, each layer computes the following function:
$$\begin{array}{ll}
i_t = \sigma(W_{ii} x_t + b_{ii} + W_{hi} h_{(t-1)} + b_{hi}) \\
f_t = \sigma(W_{if} x_t + b_{if} + W_{hf} h_{(t-1)} + b_{hf}) \\
g_t = \tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{(t-1)} + b_{hg}) \\
o_t = \sigma(W_{io} x_t + b_{io} + W_{ho} h_{(t-1)} + b_{ho}) \\
c_t = f_t * c_{(t-1)} + i_t * g_t \\
h_t = o_t * \tanh(c_t)
\end{array}$$
where $h_t$ is the hidden state at time t, $c_t$ is the cell state at time t, $x_t$ is the input at time t, $h_{(t-1)}$ is the hidden state of the layer at time t-1 or the initial hidden state at time 0, and $i_t$, $f_t$, $g_t$, $o_t$ are the input, forget, cell, and output gates, respectively. $\sigma$ is the sigmoid function, and $*$ is the Hadamard product.
In a multilayer LSTM, the input $x^{(l)}_t$ of the $l$-th layer ($l \ge 2$) is the hidden state $h^{(l-1)}_t$ of the previous layer multiplied by dropout $\delta^{(l-1)}_t$, where each $\delta^{(l-1)}_t$ is a Bernoulli random variable which is 0 with probability dropout.
Parameters
Inputs: input, (h_0, c_0)
Outputs: output, (h_n, c_n)
Variables
Note
All the weights and biases are initialized from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{1}{\text{hidden\_size}}$.
Note
If the following conditions are satisfied: 1) cudnn is enabled, 2) input data is on the GPU, 3) input data has dtype torch.float16, 4) a V100 GPU is used, 5) input data is not in PackedSequence format, the persistent algorithm can be selected to improve performance.
Examples:
>>> rnn = nn.LSTM(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> c0 = torch.randn(2, 3, 20)
>>> output, (hn, cn) = rnn(input, (h0, c0))
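A further shape sketch (settings chosen for illustration, not from the original docs): stacking layers and enabling bidirectionality change the output dimensions as follows:

```python
import torch
import torch.nn as nn

# 2-layer bidirectional LSTM; states default to zeros when omitted.
lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2, bidirectional=True)
x = torch.randn(5, 3, 10)          # (seq_len, batch, input_size)
out, (h_n, c_n) = lstm(x)
print(out.shape)                   # torch.Size([5, 3, 40]): last dim is num_directions * hidden_size
print(h_n.shape)                   # torch.Size([4, 3, 20]): first dim is num_layers * num_directions
```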
GRU
class torch.nn.GRU(*args, **kwargs)[source]
Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence.
For each element in the input sequence, each layer computes the following function:
$$\begin{array}{ll}
r_t = \sigma(W_{ir} x_t + b_{ir} + W_{hr} h_{(t-1)} + b_{hr}) \\
z_t = \sigma(W_{iz} x_t + b_{iz} + W_{hz} h_{(t-1)} + b_{hz}) \\
n_t = \tanh(W_{in} x_t + b_{in} + r_t * (W_{hn} h_{(t-1)} + b_{hn})) \\
h_t = (1 - z_t) * n_t + z_t * h_{(t-1)}
\end{array}$$
where $h_t$ is the hidden state at time t, $x_t$ is the input at time t, $h_{(t-1)}$ is the hidden state of the layer at time t-1 or the initial hidden state at time 0, and $r_t$, $z_t$, $n_t$ are the reset, update, and new gates, respectively. $\sigma$ is the sigmoid function, and $*$ is the Hadamard product.
In a multilayer GRU, the input $x^{(l)}_t$ of the $l$-th layer ($l \ge 2$) is the hidden state $h^{(l-1)}_t$ of the previous layer multiplied by dropout $\delta^{(l-1)}_t$, where each $\delta^{(l-1)}_t$ is a Bernoulli random variable which is 0 with probability dropout.
Parameters
Inputs: input, h_0
Outputs: output, h_n
Shape:
Variables
Note
All the weights and biases are initialized from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{1}{\text{hidden\_size}}$.
Note
If the following conditions are satisfied: 1) cudnn is enabled, 2) input data is on the GPU, 3) input data has dtype torch.float16, 4) a V100 GPU is used, 5) input data is not in PackedSequence format, the persistent algorithm can be selected to improve performance.
Examples:
>>> rnn = nn.GRU(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> output, hn = rnn(input, h0)
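Variable-length batches can be fed to any of these recurrent layers as a PackedSequence. A minimal sketch (it assumes lengths sorted in decreasing order, which pack_padded_sequence requires by default):

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

gru = nn.GRU(10, 20, 2)
padded = torch.randn(5, 3, 10)       # (seq_len, batch, input_size), zero-padded
lengths = [5, 4, 2]                  # true lengths, sorted in decreasing order
packed = pack_padded_sequence(padded, lengths)
packed_out, h_n = gru(packed)        # output is also a PackedSequence
out, out_lens = pad_packed_sequence(packed_out)
print(out.shape)                     # torch.Size([5, 3, 20])
print(out_lens.tolist())             # [5, 4, 2]
```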
RNNCell
class torch.nn.RNNCell(input_size, hidden_size, bias=True, nonlinearity='tanh')[source]
An Elman RNN cell with tanh or ReLU non-linearity.
$$h' = \tanh(W_{ih} x + b_{ih} + W_{hh} h + b_{hh})$$
If nonlinearity is 'relu', then ReLU is used in place of tanh.
Parameters
Inputs: input, hidden
Outputs: h’
Shape:
Variables
Note
All the weights and biases are initialized from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{1}{\text{hidden\_size}}$.
Examples:
>>> rnn = nn.RNNCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
...     hx = rnn(input[i], hx)
...     output.append(hx)
LSTMCell
class torch.nn.LSTMCell(input_size, hidden_size, bias=True)[source]
A long short-term memory (LSTM) cell.
$$\begin{array}{ll}
i = \sigma(W_{ii} x + b_{ii} + W_{hi} h + b_{hi}) \\
f = \sigma(W_{if} x + b_{if} + W_{hf} h + b_{hf}) \\
g = \tanh(W_{ig} x + b_{ig} + W_{hg} h + b_{hg}) \\
o = \sigma(W_{io} x + b_{io} + W_{ho} h + b_{ho}) \\
c' = f * c + i * g \\
h' = o * \tanh(c')
\end{array}$$
where $\sigma$ is the sigmoid function, and $*$ is the Hadamard product.
Parameters
Inputs: input, (h_0, c_0)
Outputs: (h_1, c_1)
Variables
Note
All the weights and biases are initialized from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{1}{\text{hidden\_size}}$.
Examples:
>>> rnn = nn.LSTMCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> cx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
...     hx, cx = rnn(input[i], (hx, cx))
...     output.append(hx)
GRUCell
class torch.nn.GRUCell(input_size, hidden_size, bias=True)[source]
A gated recurrent unit (GRU) cell.
$$\begin{array}{ll}
r = \sigma(W_{ir} x + b_{ir} + W_{hr} h + b_{hr}) \\
z = \sigma(W_{iz} x + b_{iz} + W_{hz} h + b_{hz}) \\
n = \tanh(W_{in} x + b_{in} + r * (W_{hn} h + b_{hn})) \\
h' = (1 - z) * n + z * h
\end{array}$$
where $\sigma$ is the sigmoid function, and $*$ is the Hadamard product.
Parameters
Inputs: input, hidden
Outputs: h’
Shape:
Variables
Note
All the weights and biases are initialized from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{1}{\text{hidden\_size}}$.
Examples:
>>> rnn = nn.GRUCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
...     hx = rnn(input[i], hx)
...     output.append(hx)
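The gate equations above can be verified directly against the cell's stacked parameters. A check written for this note (the gate order in weight_ih is a documented convention: reset, update, new):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
cell = nn.GRUCell(4, 3)
x = torch.randn(1, 4)
h = torch.randn(1, 3)

# weight_ih stacks (W_ir | W_iz | W_in); weight_hh stacks (W_hr | W_hz | W_hn).
W_ir, W_iz, W_in = cell.weight_ih.chunk(3)
W_hr, W_hz, W_hn = cell.weight_hh.chunk(3)
b_ir, b_iz, b_in = cell.bias_ih.chunk(3)
b_hr, b_hz, b_hn = cell.bias_hh.chunk(3)

r = torch.sigmoid(x @ W_ir.T + b_ir + h @ W_hr.T + b_hr)
z = torch.sigmoid(x @ W_iz.T + b_iz + h @ W_hz.T + b_hz)
n = torch.tanh(x @ W_in.T + b_in + r * (h @ W_hn.T + b_hn))
h_new = (1 - z) * n + z * h
print(torch.allclose(cell(x, h), h_new, atol=1e-5))  # True
```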
Transformer layers
Transformer
class torch.nn.Transformer(d_model=512, nhead=8, num_encoder_layers=6, num_decoder_layers=6, dim_feedforward=2048, dropout=0.1, custom_encoder=None, custom_decoder=None)[source]
A transformer model. User is able to modify the attributes as needed. The architecture is based on the paper "Attention Is All You Need". Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010.
Parameters
Examples::
>>> transformer_model = nn.Transformer()
>>> transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12)
forward(src, tgt, src_mask=None, tgt_mask=None, memory_mask=None, src_key_padding_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None)[source]
Take in and process masked source/target sequences.
Parameters
Shape:
Note: [src/tgt/memory]_mask should be filled with float('-inf') for the masked positions and float(0.0) otherwise. These masks ensure that predictions for position i depend only on the unmasked positions j and are applied identically for each sequence in a batch. [src/tgt/memory]_key_padding_mask should be a ByteTensor where True values are positions that should be masked with float('-inf') and False values will be left unchanged. This mask ensures that no information will be taken from position i if it is masked, and has a separate mask for each sequence in a batch.
Note: Due to the multi-head attention architecture in the transformer model, the output sequence length of a transformer is the same as the input sequence (i.e. target) length of the decoder.
where S is the source sequence length, T is the target sequence length, N is the batch size, and E is the feature number.
Examples
>>> output = transformer_model(src, tgt, src_mask=src_mask, tgt_mask=tgt_mask)
generate_square_subsequent_mask(sz)[source]
Generate a square mask for the sequence. The masked positions are filled with float(‘-inf’). Unmasked positions are filled with float(0.0).
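For illustration (a quick check written for this note), the mask for a length-4 sequence has zeros on and below the diagonal and -inf above it, so position i may only attend to positions j <= i:

```python
import torch
import torch.nn as nn

# The method is exposed on a Transformer instance.
mask = nn.Transformer().generate_square_subsequent_mask(4)
print(mask.shape)         # torch.Size([4, 4])
print(mask[0, 1].item())  # -inf (future position: masked)
print(mask[1, 0].item())  # 0.0 (past position: visible)
```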
TransformerEncoder
class torch.nn.TransformerEncoder(encoder_layer, num_layers, norm=None)[source]
TransformerEncoder is a stack of N encoder layers
Parameters
Examples::
>>> encoder_layer = nn.TransformerEncoderLayer(d_model, nhead)
>>> transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers)
forward(src, mask=None, src_key_padding_mask=None)[source]
Pass the input through the encoder layers in turn.
Parameters
Shape:
see the docs in Transformer class.
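A concrete usage sketch (hypothetical sizes: d_model=512, nhead=8, 6 layers); the encoder preserves the (S, N, E) shape of its input:

```python
import torch
import torch.nn as nn

encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)
src = torch.randn(10, 32, 512)   # (S, N, E): source length 10, batch 32, d_model 512
out = transformer_encoder(src)
print(out.shape)                 # torch.Size([10, 32, 512]), shape is preserved
```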
TransformerDecoder
class torch.nn.
TransformerDecoder
(decoder_layer, num_layers, norm=None)[source]
TransformerDecoder is a stack of N decoder layers
Parameters
Examples::
>>> decoder_layer = nn.TransformerDecoderLayer(d_model, nhead)
>>> transformer_decoder = nn.TransformerDecoder(decoder_layer, num_layers)
forward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None)[source]
Pass the inputs (and mask) through the decoder layers in turn.
Parameters
Shape:
see the docs in Transformer class.
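A usage sketch with hypothetical sizes; note that the decoder's output length follows the target, not the memory:

```python
import torch
import torch.nn as nn

decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8)
transformer_decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)
memory = torch.randn(10, 32, 512)   # encoder output, (S, N, E)
tgt = torch.randn(20, 32, 512)      # target sequence, (T, N, E)
out = transformer_decoder(tgt, memory)
print(out.shape)                    # torch.Size([20, 32, 512]): matches the target length
```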
TransformerEncoderLayer
class torch.nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1)[source]
TransformerEncoderLayer is made up of self-attn and feedforward network. This standard encoder layer is based on the paper “Attention Is All You Need”. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. Users may modify or implement in a different way during application.
Parameters
Examples::
>>> encoder_layer = nn.TransformerEncoderLayer(d_model, nhead)
forward(src, src_mask=None, src_key_padding_mask=None)[source]
Pass the input through the encoder layer.
Parameters
Shape:
see the docs in Transformer class.
TransformerDecoderLayer
class torch.nn.TransformerDecoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1)[source]
TransformerDecoderLayer is made up of self-attn, multi-head-attn and feedforward network. This standard decoder layer is based on the paper “Attention Is All You Need”. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. Users may modify or implement in a different way during application.
Parameters
Examples::
>>> decoder_layer = nn.TransformerDecoderLayer(d_model, nhead)
forward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None)[source]
Pass the inputs (and mask) through the decoder layer.
Parameters
Shape:
see the docs in Transformer class.
Linear layers
Identity
class torch.nn.Identity(*args, **kwargs)[source]
A placeholder identity operator that is argument-insensitive.
Parameters
Examples:
>>> m = nn.Identity(54, unused_argument1=0.1, unused_argument2=False)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 20])
Linear
class torch.nn.Linear(in_features, out_features, bias=True)[source]
Applies a linear transformation to the incoming data: $y = xA^T + b$
Parameters
Shape:
Variables
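A typical usage sketch (sizes chosen for illustration): the last dimension goes from in_features to out_features, and any leading dimensions are preserved.

```python
import torch
import torch.nn as nn

m = nn.Linear(20, 30)        # in_features=20, out_features=30
input = torch.randn(128, 20)
output = m(input)
print(output.size())         # torch.Size([128, 30])
```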
RNN parameters, inputs, outputs, shapes, and variables:

- input_size – The number of expected features in the input x
- hidden_size – The number of features in the hidden state h
- num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two RNNs together to form a stacked RNN, with the second RNN taking in outputs of the first RNN and computing the final results. Default: 1
- nonlinearity – The non-linearity to use. Can be either 'tanh' or 'relu'. Default: 'tanh'
- bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
- batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False
- dropout – If non-zero, introduces a Dropout layer on the outputs of each RNN layer except the last layer, with dropout probability equal to dropout. Default: 0
- bidirectional – If True, becomes a bidirectional RNN. Default: False
- input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable length sequence. See torch.nn.utils.rnn.pack_padded_sequence() or torch.nn.utils.rnn.pack_sequence() for details.
- h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided. If the RNN is bidirectional, num_directions should be 2, else it should be 1.
- output of shape (seq_len, batch, num_directions * hidden_size): tensor containing the output features (h_t) from the last layer of the RNN, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence. For the unpacked case, the directions can be separated using output.view(seq_len, batch, num_directions, hidden_size), with forward and backward being direction 0 and 1 respectively. Similarly, the directions can be separated in the packed case.
- h_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t = seq_len. Like output, the layers can be separated using h_n.view(num_layers, num_directions, batch, hidden_size).
- Input1: $(L, N, H_{in})$ tensor containing input features, where $H_{in} = \text{input\_size}$ and L is the sequence length.
- Input2: $(S, N, H_{out})$ tensor containing the initial hidden state for each element in the batch, where $H_{out} = \text{hidden\_size}$ and $S = \text{num\_layers} * \text{num\_directions}$. Defaults to zero if not provided. If the RNN is bidirectional, num_directions should be 2, else it should be 1.
- Output1: $(L, N, H_{all})$ where $H_{all} = \text{num\_directions} * \text{hidden\_size}$
- Output2: $(S, N, H_{out})$ tensor containing the next hidden state for each element in the batch
- ~RNN.weight_ih_l[k] – the learnable input-hidden weights of the k-th layer, of shape (hidden_size, input_size) for k = 0. Otherwise, the shape is (hidden_size, num_directions * hidden_size)
- ~RNN.weight_hh_l[k] – the learnable hidden-hidden weights of the k-th layer, of shape (hidden_size, hidden_size)
- ~RNN.bias_ih_l[k] – the learnable input-hidden bias of the k-th layer, of shape (hidden_size)
- ~RNN.bias_hh_l[k] – the learnable hidden-hidden bias of the k-th layer, of shape (hidden_size)
- input_size – The number of expected features in the input x
- hidden_size – The number of features in the hidden state h
- num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two LSTMs together to form a stacked LSTM, with the second LSTM taking in outputs of the first LSTM and computing the final results. Default: 1
- bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
- batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False
- dropout – If non-zero, introduces a Dropout layer on the outputs of each LSTM layer except the last layer, with dropout probability equal to dropout. Default: 0
- bidirectional – If True, becomes a bidirectional LSTM. Default: False
- input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable length sequence. See torch.nn.utils.rnn.pack_padded_sequence() or torch.nn.utils.rnn.pack_sequence() for details.
- h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch. If the LSTM is bidirectional, num_directions should be 2, else it should be 1.
- c_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial cell state for each element in the batch. If (h_0, c_0) is not provided, both h_0 and c_0 default to zero.
- output of shape (seq_len, batch, num_directions * hidden_size): tensor containing the output features (h_t) from the last layer of the LSTM, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence. For the unpacked case, the directions can be separated using output.view(seq_len, batch, num_directions, hidden_size), with forward and backward being direction 0 and 1 respectively. Similarly, the directions can be separated in the packed case.
- h_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t = seq_len. Like output, the layers can be separated using h_n.view(num_layers, num_directions, batch, hidden_size) and similarly for c_n.
- c_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the cell state for t = seq_len.
- ~LSTM.weight_ih_l[k] – the learnable input-hidden weights of the k-th layer (W_ii|W_if|W_ig|W_io), of shape (4*hidden_size, input_size) for k = 0. Otherwise, the shape is (4*hidden_size, num_directions * hidden_size)
- ~LSTM.weight_hh_l[k] – the learnable hidden-hidden weights of the k-th layer (W_hi|W_hf|W_hg|W_ho), of shape (4*hidden_size, hidden_size)
- ~LSTM.bias_ih_l[k] – the learnable input-hidden bias of the k-th layer (b_ii|b_if|b_ig|b_io), of shape (4*hidden_size)
- ~LSTM.bias_hh_l[k] – the learnable hidden-hidden bias of the k-th layer (b_hi|b_hf|b_hg|b_ho), of shape (4*hidden_size)
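The LSTM usage example from the original page is not reproduced in this extract; the following minimal sketch mirrors the RNN example above, with shapes following the Inputs/Outputs documented here:

```python
import torch
import torch.nn as nn

rnn = nn.LSTM(10, 20, 2)        # input_size=10, hidden_size=20, num_layers=2
input = torch.randn(5, 3, 10)   # (seq_len, batch, input_size)
h0 = torch.randn(2, 3, 20)      # (num_layers * num_directions, batch, hidden_size)
c0 = torch.randn(2, 3, 20)      # same shape as h0
output, (hn, cn) = rnn(input, (h0, c0))
```

output has shape (seq_len, batch, num_directions * hidden_size); hn and cn keep the shape of h0 and c0.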
- input_size – The number of expected features in the input x
- hidden_size – The number of features in the hidden state h
- num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two GRUs together to form a stacked GRU, with the second GRU taking in outputs of the first GRU and computing the final results. Default: 1
- bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
- batch_first – If True, then the input and output tensors are provided as (batch, seq, feature). Default: False
- dropout – If non-zero, introduces a Dropout layer on the outputs of each GRU layer except the last layer, with dropout probability equal to dropout. Default: 0
- bidirectional – If True, becomes a bidirectional GRU. Default: False
- input of shape (seq_len, batch, input_size): tensor containing the features of the input sequence. The input can also be a packed variable length sequence. See torch.nn.utils.rnn.pack_padded_sequence() for details.
- h_0 of shape (num_layers * num_directions, batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided. If the RNN is bidirectional, num_directions should be 2, else it should be 1.
- output of shape (seq_len, batch, num_directions * hidden_size): tensor containing the output features h_t from the last layer of the GRU, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence. For the unpacked case, the directions can be separated using output.view(seq_len, batch, num_directions, hidden_size), with forward and backward being direction 0 and 1 respectively. Similarly, the directions can be separated in the packed case.
- h_n of shape (num_layers * num_directions, batch, hidden_size): tensor containing the hidden state for t = seq_len. Like output, the layers can be separated using h_n.view(num_layers, num_directions, batch, hidden_size).
- Input1: (L, N, H_in) tensor containing input features, where H_in = input_size and L is the sequence length.
- Input2: (S, N, H_out) tensor containing the initial hidden state for each element in the batch, where H_out = hidden_size and S = num_layers * num_directions. Defaults to zero if not provided. If the RNN is bidirectional, num_directions should be 2, else it should be 1.
- Output1: (L, N, H_all), where H_all = num_directions * hidden_size
- Output2: (S, N, H_out) tensor containing the next hidden state for each element in the batch
- ~GRU.weight_ih_l[k] – the learnable input-hidden weights of the k-th layer (W_ir|W_iz|W_in), of shape (3*hidden_size, input_size) for k = 0. Otherwise, the shape is (3*hidden_size, num_directions * hidden_size)
- ~GRU.weight_hh_l[k] – the learnable hidden-hidden weights of the k-th layer (W_hr|W_hz|W_hn), of shape (3*hidden_size, hidden_size)
- ~GRU.bias_ih_l[k] – the learnable input-hidden bias of the k-th layer (b_ir|b_iz|b_in), of shape (3*hidden_size)
- ~GRU.bias_hh_l[k] – the learnable hidden-hidden bias of the k-th layer (b_hr|b_hz|b_hn), of shape (3*hidden_size)
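A minimal GRU usage sketch in the same style as the RNN example above (shapes follow the Inputs/Outputs documented here):

```python
import torch
import torch.nn as nn

rnn = nn.GRU(10, 20, 2)         # input_size=10, hidden_size=20, num_layers=2
input = torch.randn(5, 3, 10)   # (seq_len, batch, input_size)
h0 = torch.randn(2, 3, 20)      # (num_layers * num_directions, batch, hidden_size)
output, hn = rnn(input, h0)
```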
- input_size – The number of expected features in the input x
- hidden_size – The number of features in the hidden state h
- bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
- nonlinearity – The non-linearity to use. Can be either 'tanh' or 'relu'. Default: 'tanh'
- input of shape (batch, input_size): tensor containing input features
- hidden of shape (batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided.
- h' of shape (batch, hidden_size): tensor containing the next hidden state for each element in the batch
- Input1: (N, H_in) tensor containing input features, where H_in = input_size
- Input2: (N, H_out) tensor containing the initial hidden state for each element in the batch, where H_out = hidden_size. Defaults to zero if not provided.
- Output: (N, H_out) tensor containing the next hidden state for each element in the batch
- ~RNNCell.weight_ih – the learnable input-hidden weights, of shape (hidden_size, input_size)
- ~RNNCell.weight_hh – the learnable hidden-hidden weights, of shape (hidden_size, hidden_size)
- ~RNNCell.bias_ih – the learnable input-hidden bias, of shape (hidden_size)
- ~RNNCell.bias_hh – the learnable hidden-hidden bias, of shape (hidden_size)
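Unlike nn.RNN, a cell processes one time step per call, so the loop over the sequence is written by hand. A minimal sketch:

```python
import torch
import torch.nn as nn

cell = nn.RNNCell(10, 20)        # input_size=10, hidden_size=20
input = torch.randn(6, 3, 10)    # (time_steps, batch, input_size)
hx = torch.randn(3, 20)          # (batch, hidden_size)
outputs = []
for t in range(input.size(0)):   # step the cell manually through time
    hx = cell(input[t], hx)
    outputs.append(hx)
```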
- input_size – The number of expected features in the input x
- hidden_size – The number of features in the hidden state h
- bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
- input of shape (batch, input_size): tensor containing input features
- h_0 of shape (batch, hidden_size): tensor containing the initial hidden state for each element in the batch.
- c_0 of shape (batch, hidden_size): tensor containing the initial cell state for each element in the batch. If (h_0, c_0) is not provided, both h_0 and c_0 default to zero.
- h_1 of shape (batch, hidden_size): tensor containing the next hidden state for each element in the batch
- c_1 of shape (batch, hidden_size): tensor containing the next cell state for each element in the batch
- ~LSTMCell.weight_ih – the learnable input-hidden weights, of shape (4*hidden_size, input_size)
- ~LSTMCell.weight_hh – the learnable hidden-hidden weights, of shape (4*hidden_size, hidden_size)
- ~LSTMCell.bias_ih – the learnable input-hidden bias, of shape (4*hidden_size)
- ~LSTMCell.bias_hh – the learnable hidden-hidden bias, of shape (4*hidden_size)
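As with RNNCell, the time loop is explicit; each call consumes one time step and returns the next (h, c) pair. A minimal sketch:

```python
import torch
import torch.nn as nn

cell = nn.LSTMCell(10, 20)       # input_size=10, hidden_size=20
input = torch.randn(6, 3, 10)    # (time_steps, batch, input_size)
hx = torch.randn(3, 20)          # (batch, hidden_size)
cx = torch.randn(3, 20)          # (batch, hidden_size)
for t in range(input.size(0)):   # advance hidden and cell state step by step
    hx, cx = cell(input[t], (hx, cx))
```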
- input_size – The number of expected features in the input x
- hidden_size – The number of features in the hidden state h
- bias – If False, then the layer does not use bias weights b_ih and b_hh. Default: True
- input of shape (batch, input_size): tensor containing input features
- hidden of shape (batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided.
- h' of shape (batch, hidden_size): tensor containing the next hidden state for each element in the batch
- Input1: (N, H_in) tensor containing input features, where H_in = input_size
- Input2: (N, H_out) tensor containing the initial hidden state for each element in the batch, where H_out = hidden_size. Defaults to zero if not provided.
- Output: (N, H_out) tensor containing the next hidden state for each element in the batch
- ~GRUCell.weight_ih – the learnable input-hidden weights, of shape (3*hidden_size, input_size)
- ~GRUCell.weight_hh – the learnable hidden-hidden weights, of shape (3*hidden_size, hidden_size)
- ~GRUCell.bias_ih – the learnable input-hidden bias, of shape (3*hidden_size)
- ~GRUCell.bias_hh – the learnable hidden-hidden bias, of shape (3*hidden_size)
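A minimal GRUCell sketch, following the same manual time loop as the other cells:

```python
import torch
import torch.nn as nn

cell = nn.GRUCell(10, 20)        # input_size=10, hidden_size=20
input = torch.randn(6, 3, 10)    # (time_steps, batch, input_size)
hx = torch.randn(3, 20)          # (batch, hidden_size)
for t in range(input.size(0)):   # one call per time step
    hx = cell(input[t], hx)
```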
- d_model – the number of expected features in the encoder/decoder inputs (default=512).
- nhead – the number of heads in the multiheadattention models (default=8).
- num_encoder_layers – the number of sub-encoder-layers in the encoder (default=6).
- num_decoder_layers – the number of sub-decoder-layers in the decoder (default=6).
- dim_feedforward – the dimension of the feedforward network model (default=2048).
- dropout – the dropout value (default=0.1).
- custom_encoder – custom encoder (default=None).
- custom_decoder – custom decoder (default=None).
- src – the sequence to the encoder (required).
- tgt – the sequence to the decoder (required).
- src_mask – the additive mask for the src sequence (optional).
- tgt_mask – the additive mask for the tgt sequence (optional).
- memory_mask – the additive mask for the encoder output (optional).
- src_key_padding_mask – the ByteTensor mask for src keys per batch (optional).
- tgt_key_padding_mask – the ByteTensor mask for tgt keys per batch (optional).
- memory_key_padding_mask – the ByteTensor mask for memory keys per batch (optional).
- src: (S, N, E).
- tgt: (T, N, E).
- src_mask: (S, S).
- tgt_mask: (T, T).
- memory_mask: (T, S).
- src_key_padding_mask: (N, S).
- tgt_key_padding_mask: (N, T).
- memory_key_padding_mask: (N, S).
- output: (T, N, E).
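A minimal usage sketch for nn.Transformer; the small dimensions (d_model=32, nhead=4, two layers per stack) are illustrative choices, not the defaults listed above:

```python
import torch
import torch.nn as nn

model = nn.Transformer(d_model=32, nhead=4, num_encoder_layers=2,
                       num_decoder_layers=2, dim_feedforward=64)
src = torch.rand(10, 8, 32)  # (S, N, E): source sequence
tgt = torch.rand(20, 8, 32)  # (T, N, E): target sequence
out = model(src, tgt)        # (T, N, E)
```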
- encoder_layer – an instance of the TransformerEncoderLayer() class (required).
- num_layers – the number of sub-encoder-layers in the encoder (required).
- norm – the layer normalization component (optional).
- src – the sequence to the encoder (required).
- mask – the mask for the src sequence (optional).
- src_key_padding_mask – the mask for the src keys per batch (optional).
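A minimal TransformerEncoder sketch; the dimensions (d_model=64, nhead=4, num_layers=2) are illustrative choices:

```python
import torch
import torch.nn as nn

encoder_layer = nn.TransformerEncoderLayer(d_model=64, nhead=4)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
src = torch.rand(10, 8, 64)  # (S, N, E)
out = encoder(src)           # same shape as src
```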
- decoder_layer – an instance of the TransformerDecoderLayer() class (required).
- num_layers – the number of sub-decoder-layers in the decoder (required).
- norm – the layer normalization component (optional).
- tgt – the sequence to the decoder (required).
- memory – the sequence from the last layer of the encoder (required).
- tgt_mask – the mask for the tgt sequence (optional).
- memory_mask – the mask for the memory sequence (optional).
- tgt_key_padding_mask – the mask for the tgt keys per batch (optional).
- memory_key_padding_mask – the mask for the memory keys per batch (optional).
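A minimal TransformerDecoder sketch; the dimensions are illustrative choices:

```python
import torch
import torch.nn as nn

decoder_layer = nn.TransformerDecoderLayer(d_model=64, nhead=4)
decoder = nn.TransformerDecoder(decoder_layer, num_layers=2)
memory = torch.rand(10, 8, 64)  # encoder output, (S, N, E)
tgt = torch.rand(20, 8, 64)     # (T, N, E)
out = decoder(tgt, memory)      # (T, N, E)
```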
- d_model – the number of expected features in the input (required).
- nhead – the number of heads in the multiheadattention models (required).
- dim_feedforward – the dimension of the feedforward network model (default=2048).
- dropout – the dropout value (default=0.1).
- src – the sequence to the encoder layer (required).
- src_mask – the mask for the src sequence (optional).
- src_key_padding_mask – the mask for the src keys per batch (optional).
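A single TransformerEncoderLayer can also be applied on its own; a minimal sketch with illustrative dimensions:

```python
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=64, nhead=4)
src = torch.rand(10, 8, 64)  # (S, N, E)
out = layer(src)             # same shape as src
```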
- d_model – the number of expected features in the input (required).
- nhead – the number of heads in the multiheadattention models (required).
- dim_feedforward – the dimension of the feedforward network model (default=2048).
- dropout – the dropout value (default=0.1).
- tgt – the sequence to the decoder layer (required).
- memory – the sequence from the last layer of the encoder (required).
- tgt_mask – the mask for the tgt sequence (optional).
- memory_mask – the mask for the memory sequence (optional).
- tgt_key_padding_mask – the mask for the tgt keys per batch (optional).
- memory_key_padding_mask – the mask for the memory keys per batch (optional).
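A minimal TransformerDecoderLayer sketch with illustrative dimensions; it takes both the target sequence and the encoder memory:

```python
import torch
import torch.nn as nn

layer = nn.TransformerDecoderLayer(d_model=64, nhead=4)
memory = torch.rand(10, 8, 64)  # encoder output, (S, N, E)
tgt = torch.rand(20, 8, 64)     # (T, N, E)
out = layer(tgt, memory)        # (T, N, E)
```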
- args – any argument (unused)
- kwargs – any keyword argument (unused)
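Since every constructor argument is ignored, Identity simply returns its input unchanged; a minimal sketch:

```python
import torch
import torch.nn as nn

m = nn.Identity(54, unused_kwarg=0.1)  # all constructor arguments are ignored
input = torch.randn(2, 3)
output = m(input)                      # returned unchanged
```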
- in_features – size of each input sample
- out_features – size of each output sample
- bias – If set to False, the layer will not learn an additive bias. Default: True
- Input: (N, *, H_in) where * means any number of additional dimensions and H_in = in_features
- Output: (N, *, H_out) where all but the last dimension are the same shape as the input and H_out = out_features.
- ~Linear.weight – the learnable weights of the module of shape (out_features, in_features). The values are initialized from U(−√k, √k), where k = 1/in_features
- ~Linear.bias – the learnable bias of the module of shape (out_features). If bias is True, the values are initialized from U(−√k, √k), where k = 1/in_features
Examples:
>>> m = nn.Linear(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
Bilinear
class torch.nn.Bilinear(in1_features, in2_features, out_features, bias=True)[source]
Applies a bilinear transformation to the incoming data: y = x_1 A x_2 + b
Parameters
Shape:
Variables
- in1_features – size of each first input sample
- in2_features – size of each second input sample
- out_features – size of each output sample
- bias – If set to False, the layer will not learn an additive bias. Default: True
- Input1: (N, *, H_in1) where H_in1 = in1_features and * means any number of additional dimensions. All but the last dimension of the inputs should be the same.
- Input2: (N, *, H_in2) where H_in2 = in2_features.
- Output: (N, *, H_out) where H_out = out_features and all but the last dimension are the same shape as the input.
- ~Bilinear.weight – the learnable weights of the module of shape (out_features, in1_features, in2_features). The values are initialized from U(−√k, √k), where k = 1/in1_features
- ~Bilinear.bias – the learnable bias of the module of shape (out_features). If bias is True, the values are initialized from U(−√k, √k), where k = 1/in1_features
Examples:
>>> m = nn.Bilinear(20, 30, 40)
>>> input1 = torch.randn(128, 20)
>>> input2 = torch.randn(128, 30)
>>> output = m(input1, input2)
>>> print(output.size())
torch.Size([128, 40])
Dropout layers
Dropout
class torch.nn.Dropout(p=0.5, inplace=False)[source]
During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call.
This has proven to be an effective technique for regularization and preventing the co-adaptation of neurons, as described in the paper Improving neural networks by preventing co-adaptation of feature detectors.
Furthermore, the outputs are scaled by a factor of 1/(1−p) during training. This means that during evaluation the module simply computes an identity function.
Parameters
Shape:
Examples:
>>> m = nn.Dropout(p=0.2)
>>> input = torch.randn(20, 16)
>>> output = m(input)
Dropout2d
class torch.nn.Dropout2d(p=0.5, inplace=False)[source]
Randomly zero out entire channels (a channel is a 2D feature map, e.g., the j-th channel of the i-th sample in the batched input is a 2D tensor input[i, j]). Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution.
Usually the input comes from nn.Conv2d modules.
As described in the paper Efficient Object Localization Using Convolutional Networks, if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then i.i.d. dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease.
In this case, nn.Dropout2d() will help promote independence between feature maps and should be used instead.
Parameters
Shape:
Examples:
>>> m = nn.Dropout2d(p=0.2)
>>> input = torch.randn(20, 16, 32, 32)
>>> output = m(input)
Dropout3d
class torch.nn.Dropout3d(p=0.5, inplace=False)[source]
Randomly zero out entire channels (a channel is a 3D feature map, e.g., the j-th channel of the i-th sample in the batched input is a 3D tensor input[i, j]). Each channel will be zeroed out independently on every forward call with probability p using samples from a Bernoulli distribution.
Usually the input comes from nn.Conv3d modules.
As described in the paper Efficient Object Localization Using Convolutional Networks, if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then i.i.d. dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease.
In this case, nn.Dropout3d() will help promote independence between feature maps and should be used instead.
Parameters
Shape:
Examples:
>>> m = nn.Dropout3d(p=0.2)
>>> input = torch.randn(20, 16, 4, 32, 32)
>>> output = m(input)
AlphaDropout
class torch.nn.AlphaDropout(p=0.5, inplace=False)[source]
Applies Alpha Dropout over the input.
Alpha Dropout is a type of Dropout that maintains the self-normalizing property. For an input with zero mean and unit standard deviation, the output of Alpha Dropout maintains the original mean and standard deviation of the input. Alpha Dropout goes hand-in-hand with the SELU activation function, which ensures that the outputs have zero mean and unit standard deviation.
During training, it randomly masks some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. The elements to be masked are randomized on every forward call, and scaled and shifted to maintain zero mean and unit standard deviation.
During evaluation the module simply computes an identity function.
More details can be found in the paper Self-Normalizing Neural Networks .
Parameters
Shape:
Examples:
>>> m = nn.AlphaDropout(p=0.2)
>>> input = torch.randn(20, 16)
>>> output = m(input)
Sparse layers
Embedding
class torch.nn.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False, _weight=None)[source]
A simple lookup table that stores embeddings of a fixed dictionary and size.
This module is often used to store word embeddings and retrieve them using indices. The input to the module is a list of indices, and the output is the corresponding word embeddings.
Parameters
Variables
~Embedding.weight (Tensor) – the learnable weights of the module of shape (num_embeddings, embedding_dim) initialized from N(0,1)\mathcal{N}(0, 1)N(0,1)
Shape:
Note
Keep in mind that only a limited number of optimizers support sparse gradients: currently it's optim.SGD (CUDA and CPU), optim.SparseAdam (CUDA and CPU) and optim.Adagrad (CPU)
Note
With padding_idx set, the embedding vector at padding_idx is initialized to all zeros. However, note that this vector can be modified afterwards, e.g., using a customized initialization method, and thus changing the vector used to pad the output. The gradient for this vector from Embedding is always zero.
Examples:
>>> # an Embedding module containing 10 tensors of size 3
>>> embedding = nn.Embedding(10, 3)
>>> # a batch of 2 samples of 4 indices each
>>> input = torch.LongTensor([[1,2,4,5],[4,3,2,9]])
>>> embedding(input)
tensor([[[-0.0251, -1.6902,  0.7172],
         [-0.6431,  0.0748,  0.6969],
         [ 1.4970,  1.3448, -0.9685],
         [-0.3677, -2.7265, -0.1685]],

        [[ 1.4970,  1.3448, -0.9685],
         [ 0.4362, -0.4004,  0.9400],
         [-0.6431,  0.0748,  0.6969],
         [ 0.9124, -2.3616,  1.1151]]])

>>> # example with padding_idx
>>> embedding = nn.Embedding(10, 3, padding_idx=0)
>>> input = torch.LongTensor([[0,2,0,5]])
>>> embedding(input)
tensor([[[ 0.0000,  0.0000,  0.0000],
         [ 0.1535, -2.0309,  0.9315],
         [ 0.0000,  0.0000,  0.0000],
         [-0.1655,  0.9897,  0.0635]]])
classmethod from_pretrained(embeddings, freeze=True, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False)[source]
Creates Embedding instance from given 2-dimensional FloatTensor.
Parameters
Examples:
>>> # FloatTensor containing pretrained weights
>>> weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]])
>>> embedding = nn.Embedding.from_pretrained(weight)
>>> # Get embeddings for index 1
>>> input = torch.LongTensor([1])
>>> embedding(input)
tensor([[ 4.0000,  5.1000,  6.3000]])
EmbeddingBag
class torch.nn.EmbeddingBag(num_embeddings, embedding_dim, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, mode='mean', sparse=False, _weight=None)[source]
Computes sums or means of 'bags' of embeddings, without instantiating the intermediate embeddings.
For bags of constant length and no per_sample_weights, this class
- with mode="sum" is equivalent to Embedding followed by torch.sum(dim=0),
- with mode="mean" is equivalent to Embedding followed by torch.mean(dim=0),
- with mode="max" is equivalent to Embedding followed by torch.max(dim=0).
However, EmbeddingBag is much more time and memory efficient than using a chain of these operations.
EmbeddingBag also supports per-sample weights as an argument to the forward pass. This scales the output of the Embedding before performing a weighted reduction as specified by mode. If per_sample_weights is passed, the only supported mode is "sum", which computes a weighted sum according to per_sample_weights.
Parameters
Variables
~EmbeddingBag.weight (Tensor) – the learnable weights of the module of shape (num_embeddings, embedding_dim) initialized from N(0, 1).
Inputs: input (LongTensor), offsets (LongTensor, optional), and per_sample_weights (Tensor, optional)
per_sample_weights (Tensor, optional): a tensor of float / double weights, or None to indicate all weights should be taken to be 1. If specified, per_sample_weights must have exactly the same shape as input and is treated as having the same offsets, if those are not None. Only supported for mode='sum'.
Output shape: (B, embedding_dim)
Examples:
>>> # an Embedding module containing 10 tensors of size 3
>>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum')
>>> # a batch of 2 samples of 4 indices each
>>> input = torch.LongTensor([1,2,4,5,4,3,2,9])
>>> offsets = torch.LongTensor([0,4])
>>> embedding_sum(input, offsets)
tensor([[-0.8861, -5.4350, -0.0523],
        [ 1.1306, -2.5798, -1.0044]])
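The per_sample_weights path described above can be sketched as follows (the weight values are illustrative; each input index gets one scaling weight before the 'sum' reduction):

```python
import torch
import torch.nn as nn

embedding_sum = nn.EmbeddingBag(10, 3, mode='sum')
input = torch.LongTensor([1, 2, 4, 5])        # two bags: [1, 2] and [4, 5]
offsets = torch.LongTensor([0, 2])            # start position of each bag
weights = torch.tensor([0.5, 1.0, 1.0, 2.0])  # one weight per input index
out = embedding_sum(input, offsets, per_sample_weights=weights)
```

out has shape (B, embedding_dim), here (2, 3).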
classmethod from_pretrained(embeddings, freeze=True, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, mode='mean', sparse=False)[source]
Creates EmbeddingBag instance from given 2-dimensional FloatTensor.
Parameters
Examples:
>>> # FloatTensor containing pretrained weights
>>> weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]])
>>> embeddingbag = nn.EmbeddingBag.from_pretrained(weight)
>>> # Get embeddings for index 1
>>> input = torch.LongTensor([[1, 0]])
>>> embeddingbag(input)
tensor([[ 2.5000,  3.7000,  4.6500]])
Distance functions
CosineSimilarity
class torch.nn.CosineSimilarity(dim=1, eps=1e-08)[source]
Returns cosine similarity between x_1 and x_2, computed along dim.
similarity = (x_1 · x_2) / max(‖x_1‖_2 · ‖x_2‖_2, ε)
Parameters
Shape:
Examples::
>>> input1 = torch.randn(100, 128)
>>> input2 = torch.randn(100, 128)
>>> cos = nn.CosineSimilarity(dim=1, eps=1e-6)
>>> output = cos(input1, input2)
PairwiseDistance
class torch.nn.PairwiseDistance(p=2.0, eps=1e-06, keepdim=False)[source]
Computes the batchwise pairwise distance between vectors v_1, v_2 using the p-norm:
‖x‖_p = (Σ_{i=1}^{n} |x_i|^p)^{1/p}
Parameters
Shape:
Examples::
>>> pdist = nn.PairwiseDistance(p=2)
>>> input1 = torch.randn(100, 128)
>>> input2 = torch.randn(100, 128)
>>> output = pdist(input1, input2)
Loss functions
L1Loss
class torch.nn.L1Loss(size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that measures the mean absolute error (MAE) between each element in the input x and target y.
The unreduced (i.e. with reduction set to 'none') loss can be described as:
ℓ(x, y) = L = {l_1, …, l_N}ᵀ,  l_n = |x_n − y_n|,
where N is the batch size. If reduction is not 'none' (default 'mean'), then:
ℓ(x, y) = mean(L) if reduction = 'mean';  sum(L) if reduction = 'sum'.
x and y are tensors of arbitrary shapes with a total of n elements each.
The sum operation still operates over all the elements, and divides by n. The division by n can be avoided if one sets reduction = 'sum'.
Parameters
Shape:
Examples:
>>> loss = nn.L1Loss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> output = loss(input, target)
>>> output.backward()
MSELoss
class torch.nn.MSELoss(size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that measures the mean squared error (squared L2 norm) between each element in the input x and target y.
The unreduced (i.e. with reduction set to 'none') loss can be described as:
ℓ(x, y) = L = {l_1, …, l_N}ᵀ,  l_n = (x_n − y_n)²,
where N is the batch size. If reduction is not 'none' (default 'mean'), then:
ℓ(x, y) = mean(L) if reduction = 'mean';  sum(L) if reduction = 'sum'.
x and y are tensors of arbitrary shapes with a total of n elements each.
The sum operation still operates over all the elements, and divides by n. The division by n can be avoided if one sets reduction = 'sum'.
Parameters
Shape:
Examples:
>>> loss = nn.MSELoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> output = loss(input, target)
>>> output.backward()
CrossEntropyLoss
class torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')[source]
This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class.
It is useful when training a classification problem with C classes. If provided, the optional argument weight should be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set.
The input is expected to contain raw, unnormalized scores for each class.
input has to be a Tensor of size either (minibatch, C) or (minibatch, C, d_1, d_2, ..., d_K) with K ≥ 1 for the K-dimensional case (described later).
This criterion expects a class index in the range [0, C−1] as the target for each value of a 1D tensor of size minibatch; if ignore_index is specified, this criterion also accepts this class index (this index may not necessarily be in the class range).
The loss can be described as:
loss(x, class) = −log( exp(x[class]) / Σ_j exp(x[j]) ) = −x[class] + log( Σ_j exp(x[j]) )
or in the case of the weight argument being specified:
loss(x, class) = weight[class] ( −x[class] + log( Σ_j exp(x[j]) ) )
The losses are averaged across observations for each minibatch.
Can also be used for higher dimension inputs, such as 2D images, by providing an input of size (minibatch, C, d_1, d_2, ..., d_K) with K ≥ 1, where K is the number of dimensions, and a target of appropriate shape (see below).
Parameters
Shape:
Examples:
>>> loss = nn.CrossEntropyLoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.empty(3, dtype=torch.long).random_(5)
>>> output = loss(input, target)
>>> output.backward()
CTCLoss
class torch.nn.CTCLoss(blank=0, reduction='mean', zero_infinity=False)[source]
The Connectionist Temporal Classification loss.
Calculates loss between a continuous (unsegmented) time series and a target sequence. CTCLoss sums over the probability of possible alignments of input to target, producing a loss value which is differentiable with respect to each input node. The alignment of input to target is assumed to be "many-to-one", which limits the length of the target sequence so that it must be ≤ the input length.
Parameters
Shape:
Example:
>>> T = 50      # Input sequence length
>>> C = 20      # Number of classes (including blank)
>>> N = 16      # Batch size
>>> S = 30      # Target sequence length of longest target in batch
>>> S_min = 10  # Minimum target length, for demonstration purposes
>>>
>>> # Initialize random batch of input vectors, for *size = (T,N,C)
>>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()
>>>
>>> # Initialize random batch of targets (0 = blank, 1:C = classes)
>>> target = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long)
>>>
>>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
>>> target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long)
>>> ctc_loss = nn.CTCLoss()
>>> loss = ctc_loss(input, target, input_lengths, target_lengths)
>>> loss.backward()
Reference:
A. Graves et al.: Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks: https://www.cs.toronto.edu/~graves/icml_2006.pdf
Note
In order to use CuDNN, the following must be satisfied: targets must be in concatenated format, all input_lengths must be T, blank=0, target_lengths \leq 256, and the integer arguments must be of dtype torch.int32.
The regular implementation uses the (more common in PyTorch) torch.long dtype.
Note
In some circumstances when using the CUDA backend with CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting
torch.backends.cudnn.deterministic = True
. Please see the notes on Reproducibility for background.

NLLLoss
class torch.nn.NLLLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')[source]
The negative log likelihood loss. It is useful to train a classification problem with C classes.
If provided, the optional argument weight should be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set.
The input given through a forward call is expected to contain log-probabilities of each class. input has to be a Tensor of size either (minibatch, C) or (minibatch, C, d_1, d_2, ..., d_K) with K \geq 1 for the K-dimensional case (described later).
Obtaining log-probabilities in a neural network is easily achieved by adding a LogSoftmax layer in the last layer of your network. You may use CrossEntropyLoss instead, if you prefer not to add an extra layer.
The target that this loss expects should be a class index in the range [0, C-1] where C = number of classes; if ignore_index is specified, this loss also accepts this class index (this index may not necessarily be in the class range).
The unreduced (i.e. with reduction set to 'none') loss can be described as:

\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_{y_n} x_{n,y_n}, \quad w_{c} = \text{weight}[c] \cdot \mathbb{1}\{c \not= \text{ignore\_index}\}
where N is the batch size. If reduction is not 'none' (default 'mean'), then

\ell(x, y) = \begin{cases} \sum_{n=1}^N \frac{1}{\sum_{n=1}^N w_{y_n}} l_n, & \text{if reduction} = \text{'mean';}\\ \sum_{n=1}^N l_n, & \text{if reduction} = \text{'sum'.} \end{cases}
Can also be used for higher dimension inputs, such as 2D images, by providing an input of size (minibatch, C, d_1, d_2, ..., d_K) with K \geq 1, where K is the number of dimensions, and a target of appropriate shape (see below). In the case of images, it computes NLL loss per-pixel.
Parameters
Shape:
Examples:
>>> m = nn.LogSoftmax(dim=1)
>>> loss = nn.NLLLoss()
>>> # input is of size N x C = 3 x 5
>>> input = torch.randn(3, 5, requires_grad=True)
>>> # each element in target has to have 0 <= value < C
>>> target = torch.tensor([1, 0, 4])
>>> output = loss(m(input), target)
>>> output.backward()
>>>
>>> # 2D loss example (used, for example, with image inputs)
>>> N, C = 5, 4
>>> loss = nn.NLLLoss()
>>> # input is of size N x C x height x width
>>> data = torch.randn(N, 16, 10, 10)
>>> conv = nn.Conv2d(16, C, (3, 3))
>>> m = nn.LogSoftmax(dim=1)
>>> # each element in target has to have 0 <= value < C
>>> target = torch.empty(N, 8, 8, dtype=torch.long).random_(0, C)
>>> output = loss(m(conv(data)), target)
>>> output.backward()
PoissonNLLLoss
class torch.nn.PoissonNLLLoss(log_input=True, full=False, size_average=None, eps=1e-08, reduce=None, reduction='mean')[source]
Negative log likelihood loss with Poisson distribution of target.
The loss can be described as:
\text{target} \sim \mathrm{Poisson}(\text{input})

\text{loss}(\text{input}, \text{target}) = \text{input} - \text{target} * \log(\text{input}) + \log(\text{target}!)
The last term can be omitted or approximated with the Stirling formula. The approximation is used for target values greater than 1; for targets less than or equal to 1, zeros are added to the loss.
Parameters
Examples:
>>> loss = nn.PoissonNLLLoss()
>>> log_input = torch.randn(5, 2, requires_grad=True)
>>> target = torch.randn(5, 2)
>>> output = loss(log_input, target)
>>> output.backward()
Shape:
KLDivLoss
class torch.nn.KLDivLoss(size_average=None, reduce=None, reduction='mean')[source]
The Kullback-Leibler divergence Loss.
KL divergence is a useful distance measure for continuous distributions and is often useful when performing direct regression over the space of (discretely sampled) continuous output distributions.
As with NLLLoss, the input given is expected to contain log-probabilities and is not restricted to a 2D Tensor. The targets are given as probabilities (i.e. without taking the logarithm).
This criterion expects a target Tensor of the same size as the input Tensor.
The unreduced (i.e. with reduction set to 'none') loss can be described as:

l(x, y) = L = \{ l_1,\dots,l_N \}, \quad l_n = y_n \cdot \left( \log y_n - x_n \right)
where the index N spans all dimensions of input and L has the same shape as input. If reduction is not 'none' (default 'mean'), then:

\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';} \\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}
In the default reduction mode 'mean', the losses are averaged for each minibatch over observations as well as over dimensions. 'batchmean' mode gives the correct KL divergence, where losses are averaged over the batch dimension only. 'mean' mode’s behavior will be changed to the same as 'batchmean' in the next major release.
Parameters
Note
size_average and reduce are in the process of being deprecated; in the meantime, specifying either of those two args will override reduction.
Note
reduction='mean' doesn’t return the true KL divergence value; please use reduction='batchmean', which aligns with the KL math definition. In the next major release, 'mean' will be changed to be the same as 'batchmean'.
Shape:
BCELoss
class torch.nn.BCELoss(weight=None, size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that measures the Binary Cross Entropy between the target and the output:
The unreduced (i.e. with reduction set to 'none') loss can be described as:

\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right]
where N is the batch size. If reduction is not 'none' (default 'mean'), then

\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}
This is used for measuring the error of a reconstruction in, for example, an auto-encoder. Note that the targets y should be numbers between 0 and 1.
Parameters
Shape:
Examples:
>>> m = nn.Sigmoid()
>>> loss = nn.BCELoss()
>>> input = torch.randn(3, requires_grad=True)
>>> target = torch.empty(3).random_(2)
>>> output = loss(m(input), target)
>>> output.backward()
BCEWithLogitsLoss
class torch.nn.BCEWithLogitsLoss(weight=None, size_average=None, reduce=None, reduction='mean', pos_weight=None)[source]
This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability.
The unreduced (i.e. with reduction set to 'none') loss can be described as:

\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log \sigma(x_n) + (1 - y_n) \cdot \log (1 - \sigma(x_n)) \right]
where N is the batch size. If reduction is not 'none' (default 'mean'), then

\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}
This is used for measuring the error of a reconstruction in, for example, an auto-encoder. Note that the targets t[i] should be numbers between 0 and 1.
It’s possible to trade off recall and precision by adding weights to positive examples. In the case of multi-label classification the loss can be described as:
\ell_c(x, y) = L_c = \{l_{1,c},\dots,l_{N,c}\}^\top, \quad l_{n,c} = - w_{n,c} \left[ p_c y_{n,c} \cdot \log \sigma(x_{n,c}) + (1 - y_{n,c}) \cdot \log (1 - \sigma(x_{n,c})) \right]
where c is the class number (c > 1 for multi-label binary classification, c = 1 for single-label binary classification), n is the number of the sample in the batch, and p_c is the weight of the positive answer for the class c.
p_c > 1 increases the recall, p_c < 1 increases the precision.
For example, if a dataset contains 100 positive and 300 negative examples of a single class, then pos_weight for the class should be equal to \frac{300}{100} = 3. The loss would then act as if the dataset contains 3 \times 100 = 300 positive examples.
Examples:
>>> target = torch.ones([10, 64], dtype=torch.float32)  # 64 classes, batch size = 10
>>> output = torch.full([10, 64], 0.999)  # A prediction (logit)
>>> pos_weight = torch.ones([64])  # All weights are equal to 1
>>> criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)
>>> criterion(output, target)  # -log(sigmoid(0.999))
tensor(0.3135)
Parameters
Shape:
- Input: (N, *) where * means any number of additional dimensions
- Target: (N, *), same shape as the input
- Output: scalar. If reduction is 'none', then (N, *), same shape as input.
Examples:
>>> loss = nn.BCEWithLogitsLoss()
>>> input = torch.randn(3, requires_grad=True)
>>> target = torch.empty(3).random_(2)
>>> output = loss(input, target)
>>> output.backward()
MarginRankingLoss
class torch.nn.MarginRankingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that measures the loss given inputs x1, x2, two 1D mini-batch Tensors, and a label 1D mini-batch tensor y (containing 1 or -1).
If y = 1 then it is assumed the first input should be ranked higher (have a larger value) than the second input, and vice-versa for y = -1.
The loss function for each sample in the mini-batch is:
\text{loss}(x, y) = \max(0, -y * (x1 - x2) + \text{margin})
Parameters
Shape:
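A minimal sketch with hand-picked values (arbitrary, chosen for illustration): with margin=0.5, only pairs that violate the margin contribute to the loss.

```python
import torch
import torch.nn as nn

loss_fn = nn.MarginRankingLoss(margin=0.5)
x1 = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
x2 = torch.tensor([2.0, 1.0, 3.0])
y = torch.tensor([1.0, 1.0, -1.0])  # y=1: x1 should rank higher; y=-1: x2 should
loss = loss_fn(x1, x2, y)
# Per sample: max(0, -y*(x1-x2)+0.5) = [1.5, 0.0, 0.5]; mean = 2/3
loss.backward()
```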
HingeEmbeddingLoss
class torch.nn.HingeEmbeddingLoss(margin=1.0, size_average=None, reduce=None, reduction='mean')[source]
Measures the loss given an input tensor x and a labels tensor y (containing 1 or -1). This is usually used for measuring whether two inputs are similar or dissimilar, e.g. using the L1 pairwise distance as x, and is typically used for learning nonlinear embeddings or semi-supervised learning.
The loss function for the n-th sample in the mini-batch is
l_n = \begin{cases} x_n, & \text{if}\; y_n = 1,\\ \max \{0, \Delta - x_n\}, & \text{if}\; y_n = -1, \end{cases}
and the total loss function is

\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}

where L = \{l_1,\dots,l_N\}^\top.
Parameters
Shape:
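A small sketch applying the per-sample cases above (values are arbitrary; x stands in for a distance, e.g. an L1 pairwise distance):

```python
import torch
import torch.nn as nn

loss_fn = nn.HingeEmbeddingLoss(margin=1.0)
x = torch.tensor([0.2, 1.5, 0.3])   # e.g. distances between pairs of embeddings
y = torch.tensor([1, -1, -1])       # 1 = similar pair, -1 = dissimilar pair
loss = loss_fn(x, y)
# Per sample: [0.2, max(0, 1-1.5)=0.0, max(0, 1-0.3)=0.7]; mean = 0.3
```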
MultiLabelMarginLoss
class torch.nn.MultiLabelMarginLoss(size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that optimizes a multi-class multi-classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (which is a 2D Tensor of target class indices). For each sample in the mini-batch:
\text{loss}(x, y) = \sum_{ij}\frac{\max(0, 1 - (x[y[j]] - x[i]))}{\text{x.size}(0)}

where x \in \{0, \cdots, \text{x.size}(0) - 1\}, y \in \{0, \cdots, \text{y.size}(0) - 1\}, 0 \leq y[j] \leq \text{x.size}(0)-1, and i \neq y[j] for all i and j.
y and x must have the same size.
The criterion only considers a contiguous block of non-negative targets that starts at the front.
This allows for different samples to have variable amounts of target classes.
Parameters
Shape:
Examples:
>>> loss = nn.MultiLabelMarginLoss()
>>> x = torch.FloatTensor([[0.1, 0.2, 0.4, 0.8]])
>>> # for target y, only consider labels 3 and 0, not after label -1
>>> y = torch.LongTensor([[3, 0, -1, 1]])
>>> loss(x, y)
>>> # 0.25 * ((1-(0.1-0.2)) + (1-(0.1-0.4)) + (1-(0.8-0.2)) + (1-(0.8-0.4)))
tensor(0.8500)
SmoothL1Loss
class torch.nn.SmoothL1Loss(size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that uses a squared term if the absolute element-wise error falls below 1 and an L1 term otherwise. It is less sensitive to outliers than the MSELoss and in some cases prevents exploding gradients (e.g. see the Fast R-CNN paper by Ross Girshick). Also known as the Huber loss:
\text{loss}(x, y) = \frac{1}{n} \sum_{i} z_{i}

where z_i is given by:

z_{i} = \begin{cases} 0.5 (x_i - y_i)^2, & \text{if } |x_i - y_i| < 1 \\ |x_i - y_i| - 0.5, & \text{otherwise} \end{cases}
x and y can be arbitrary shapes with a total of n elements each; the sum operation still operates over all the elements, and divides by n.
The division by n can be avoided if one sets reduction = 'sum'.
Parameters
Shape:
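A short sketch evaluating the two branches of z_i above (values are arbitrary): errors below 1 use the squared term, errors at or above 1 use the L1 term.

```python
import torch
import torch.nn as nn

loss_fn = nn.SmoothL1Loss()
input = torch.tensor([0.0, 0.5, 3.0])
target = torch.tensor([0.0, 1.0, 0.0])
loss = loss_fn(input, target)
# z = [0.5*0.0**2, 0.5*0.5**2, 3.0-0.5] = [0.0, 0.125, 2.5]; mean = 0.875
```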
SoftMarginLoss
class torch.nn.SoftMarginLoss(size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that optimizes a two-class classification logistic loss between input tensor x and target tensor y (containing 1 or -1).
\text{loss}(x, y) = \sum_i \frac{\log(1 + \exp(-y[i]*x[i]))}{\text{x.nelement}()}
Parameters
Shape:
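A quick sketch (arbitrary values) checking the criterion against the pointwise formula above:

```python
import torch
import torch.nn as nn

loss_fn = nn.SoftMarginLoss()
x = torch.tensor([0.0, 2.0])
y = torch.tensor([1.0, -1.0])
loss = loss_fn(x, y)

# Pointwise definition: mean of log(1 + exp(-y[i] * x[i])) over all elements.
manual = torch.log1p(torch.exp(-y * x)).mean()
```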
MultiLabelSoftMarginLoss
class torch.nn.MultiLabelSoftMarginLoss(weight=None, size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that optimizes a multi-label one-versus-all loss based on max-entropy, between input x and target y of size (N, C). For each sample in the minibatch:
loss(x, y) = - \frac{1}{C} * \sum_i y[i] * \log\left((1 + \exp(-x[i]))^{-1}\right) + (1-y[i]) * \log\left(\frac{\exp(-x[i])}{1 + \exp(-x[i])}\right)

where i \in \{0, \cdots, \text{x.nElement}() - 1\} and y[i] \in \{0, 1\}.
Parameters
Shape:
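A small sketch (logits and targets are arbitrary) checking the criterion against the per-class formula above, using the identity \log(1-\sigma(x)) = \log\sigma(-x):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

loss_fn = nn.MultiLabelSoftMarginLoss()
x = torch.tensor([[0.5, -1.0, 2.0]])   # logits for C = 3 labels, one sample
y = torch.tensor([[1.0, 0.0, 1.0]])    # multi-hot targets
loss = loss_fn(x, y)

# Per the formula: -(1/C) * sum_i [ y_i*log(sigmoid(x_i)) + (1-y_i)*log(1-sigmoid(x_i)) ]
manual = -(y * F.logsigmoid(x) + (1 - y) * F.logsigmoid(-x)).mean()
```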
CosineEmbeddingLoss
class torch.nn.CosineEmbeddingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that measures the loss given input tensors x_1, x_2 and a Tensor label y with values 1 or -1. This is used for measuring whether two inputs are similar or dissimilar, using the cosine distance, and is typically used for learning nonlinear embeddings or semi-supervised learning.
The loss function for each sample is:
\text{loss}(x, y) = \begin{cases} 1 - \cos(x_1, x_2), & \text{if } y = 1 \\ \max(0, \cos(x_1, x_2) - \text{margin}), & \text{if } y = -1 \end{cases}
Parameters
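A minimal sketch evaluating both cases of the formula above (vectors are arbitrary, chosen so the cosines are easy to compute by hand):

```python
import torch
import torch.nn as nn

loss_fn = nn.CosineEmbeddingLoss(margin=0.0)
x1 = torch.tensor([[1.0, 0.0], [1.0, 0.0]])
x2 = torch.tensor([[1.0, 0.0], [1.0, 1.0]])
y = torch.tensor([1, -1])
loss = loss_fn(x1, x2, y)

# Sample 1 (y=1):  1 - cos([1,0],[1,0]) = 1 - 1 = 0
# Sample 2 (y=-1): max(0, cos([1,0],[1,1]) - 0) = 1/sqrt(2)
manual = (0.0 + 1.0 / 2.0 ** 0.5) / 2.0
```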
MultiMarginLoss
class torch.nn.MultiMarginLoss(p=1, margin=1.0, weight=None, size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that optimizes a multi-class classification hinge loss (margin-based loss) between input x (a 2D mini-batch Tensor) and output y (which is a 1D tensor of target class indices, 0 \leq y \leq \text{x.size}(1)-1):
For each mini-batch sample, the loss in terms of the 1D input xxx and scalar output yyy is:
\text{loss}(x, y) = \frac{\sum_i \max(0, \text{margin} - x[y] + x[i])^p}{\text{x.size}(0)}

where x \in \{0, \cdots, \text{x.size}(0) - 1\} and i \neq y.
Optionally, you can give non-equal weighting on the classes by passing a 1D weight tensor into the constructor.
The loss function then becomes:

\text{loss}(x, y) = \frac{\sum_i \max(0, w[y] * (\text{margin} - x[y] + x[i]))^p}{\text{x.size}(0)}
Parameters
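A small worked sketch of the default (p=1, margin=1.0) case; the scores are arbitrary, picked so the per-class hinge terms are easy to verify by hand:

```python
import torch
import torch.nn as nn

loss_fn = nn.MultiMarginLoss()   # p=1, margin=1.0 by default
x = torch.tensor([[0.1, 0.2, 0.4, 0.8]])
y = torch.tensor([3])            # the correct class has score 0.8
loss = loss_fn(x, y)
# Sum over i != y of max(0, 1 - (x[y] - x[i])), divided by the number of classes:
# ((1 - 0.7) + (1 - 0.6) + (1 - 0.4)) / 4 = 1.3 / 4 = 0.325
```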
TripletMarginLoss
class torch.nn.TripletMarginLoss(margin=1.0, p=2.0, eps=1e-06, swap=False, size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that measures the triplet loss given input tensors x1, x2, x3 and a margin with a value greater than 0. This is used for measuring a relative similarity between samples. A triplet is composed of a, p and n (i.e., anchor, positive example and negative example respectively). The shapes of all input tensors should be (N, D).
The distance swap is described in detail in the paper Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al.
The loss function for each sample in the mini-batch is:
L(a, p, n) = \max \{d(a_i, p_i) - d(a_i, n_i) + \text{margin}, 0\}
where
d(x_i, y_i) = \left\lVert \mathbf{x}_i - \mathbf{y}_i \right\rVert_p
Parameters
Shape:
>>> triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)
>>> anchor = torch.randn(100, 128, requires_grad=True)
>>> positive = torch.randn(100, 128, requires_grad=True)
>>> negative = torch.randn(100, 128, requires_grad=True)
>>> output = triplet_loss(anchor, positive, negative)
>>> output.backward()
Vision layers
PixelShuffle
class torch.nn.PixelShuffle(upscale_factor)[source]
Rearranges elements in a tensor of shape (*, C \times r^2, H, W) to a tensor of shape (*, C, H \times r, W \times r).
This is useful for implementing efficient sub-pixel convolution with a stride of 1/r.
Look at the paper: Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network by Shi et al. (2016) for more details.
Parameters
upscale_factor (int) – factor to increase spatial resolution by
Shape:
Examples:
>>> pixel_shuffle = nn.PixelShuffle(3)
>>> input = torch.randn(1, 9, 4, 4)
>>> output = pixel_shuffle(input)
>>> print(output.size())
torch.Size([1, 1, 12, 12])
Upsample
class torch.nn.Upsample(size=None, scale_factor=None, mode='nearest', align_corners=None)[source]
Upsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D (volumetric) data.
The input data is assumed to be of the form minibatch x channels x [optional depth] x [optional height] x width. Hence, for spatial inputs, we expect a 4D Tensor and for volumetric inputs, we expect a 5D Tensor.
The algorithms available for upsampling are nearest neighbor and linear, bilinear, bicubic and trilinear for 3D, 4D and 5D input Tensor, respectively.
One can either give a scale_factor or the target output size to calculate the output size. (You cannot give both, as it is ambiguous.)
Parameters
Shape:
D_{out} = \left\lfloor D_{in} \times \text{scale\_factor} \right\rfloor
H_{out} = \left\lfloor H_{in} \times \text{scale\_factor} \right\rfloor
W_{out} = \left\lfloor W_{in} \times \text{scale\_factor} \right\rfloor
Warning
With align_corners = True, the linearly interpolating modes (linear, bilinear, bicubic, and trilinear) don’t proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is align_corners = False. See below for concrete examples on how this affects the outputs.
Note
If you want downsampling/general resizing, you should use interpolate().
Examples:
>>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)
>>> input
tensor([[[[ 1.,  2.],
          [ 3.,  4.]]]])

>>> m = nn.Upsample(scale_factor=2, mode='nearest')
>>> m(input)
tensor([[[[ 1.,  1.,  2.,  2.],
          [ 1.,  1.,  2.,  2.],
          [ 3.,  3.,  4.,  4.],
          [ 3.,  3.,  4.,  4.]]]])

>>> m = nn.Upsample(scale_factor=2, mode='bilinear')  # align_corners=False
>>> m(input)
tensor([[[[ 1.0000,  1.2500,  1.7500,  2.0000],
          [ 1.5000,  1.7500,  2.2500,  2.5000],
          [ 2.5000,  2.7500,  3.2500,  3.5000],
          [ 3.0000,  3.2500,  3.7500,  4.0000]]]])

>>> m = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
>>> m(input)
tensor([[[[ 1.0000,  1.3333,  1.6667,  2.0000],
          [ 1.6667,  2.0000,  2.3333,  2.6667],
          [ 2.3333,  2.6667,  3.0000,  3.3333],
          [ 3.0000,  3.3333,  3.6667,  4.0000]]]])

>>> # Try scaling the same data in a larger tensor
>>> input_3x3 = torch.zeros(3, 3).view(1, 1, 3, 3)
>>> input_3x3[:, :, :2, :2].copy_(input)
tensor([[[[ 1.,  2.],
          [ 3.,  4.]]]])
>>> input_3x3
tensor([[[[ 1.,  2.,  0.],
          [ 3.,  4.,  0.],
          [ 0.,  0.,  0.]]]])

>>> m = nn.Upsample(scale_factor=2, mode='bilinear')  # align_corners=False
>>> # Notice that values in top left corner are the same with the small input (except at boundary)
>>> m(input_3x3)
tensor([[[[ 1.0000,  1.2500,  1.7500,  1.5000,  0.5000,  0.0000],
          [ 1.5000,  1.7500,  2.2500,  1.8750,  0.6250,  0.0000],
          [ 2.5000,  2.7500,  3.2500,  2.6250,  0.8750,  0.0000],
          [ 2.2500,  2.4375,  2.8125,  2.2500,  0.7500,  0.0000],
          [ 0.7500,  0.8125,  0.9375,  0.7500,  0.2500,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000]]]])

>>> m = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
>>> # Notice that values in top left corner are now changed
>>> m(input_3x3)
tensor([[[[ 1.0000,  1.4000,  1.8000,  1.6000,  0.8000,  0.0000],
          [ 1.8000,  2.2000,  2.6000,  2.2400,  1.1200,  0.0000],
          [ 2.6000,  3.0000,  3.4000,  2.8800,  1.4400,  0.0000],
          [ 2.4000,  2.7200,  3.0400,  2.5600,  1.2800,  0.0000],
          [ 1.2000,  1.3600,  1.5200,  1.2800,  0.6400,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000]]]])
UpsamplingNearest2d
class torch.nn.UpsamplingNearest2d(size=None, scale_factor=None)[source]
Applies a 2D nearest neighbor upsampling to an input signal composed of several input channels.
To specify the scale, it takes either the size or the scale_factor as its constructor argument.
When size is given, it is the output size of the image (h, w).
Parameters
Warning
This class is deprecated in favor of interpolate().
Shape:
H_{out} = \left\lfloor H_{in} \times \text{scale\_factor} \right\rfloor
W_{out} = \left\lfloor W_{in} \times \text{scale\_factor} \right\rfloor
Examples:
>>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)
>>> input
tensor([[[[ 1.,  2.],
          [ 3.,  4.]]]])

>>> m = nn.UpsamplingNearest2d(scale_factor=2)
>>> m(input)
tensor([[[[ 1.,  1.,  2.,  2.],
          [ 1.,  1.,  2.,  2.],
          [ 3.,  3.,  4.,  4.],
          [ 3.,  3.,  4.,  4.]]]])
UpsamplingBilinear2d
class torch.nn.UpsamplingBilinear2d(size=None, scale_factor=None)[source]
Applies a 2D bilinear upsampling to an input signal composed of several input channels.
To specify the scale, it takes either the size or the scale_factor as its constructor argument.
When size is given, it is the output size of the image (h, w).
Parameters
Warning
This class is deprecated in favor of interpolate(). It is equivalent to nn.functional.interpolate(..., mode='bilinear', align_corners=True).
Shape:
H_{out} = \left\lfloor H_{in} \times \text{scale\_factor} \right\rfloor
W_{out} = \left\lfloor W_{in} \times \text{scale\_factor} \right\rfloor
Examples:
>>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)
>>> input
tensor([[[[ 1.,  2.],
          [ 3.,  4.]]]])

>>> m = nn.UpsamplingBilinear2d(scale_factor=2)
>>> m(input)
tensor([[[[ 1.0000,  1.3333,  1.6667,  2.0000],
          [ 1.6667,  2.0000,  2.3333,  2.6667],
          [ 2.3333,  2.6667,  3.0000,  3.3333],
          [ 3.0000,  3.3333,  3.6667,  4.0000]]]])
DataParallel layers (multi-GPU, distributed)
DataParallel
class torch.nn.DataParallel(module, device_ids=None, output_device=None, dim=0)[source]
Implements data parallelism at the module level.
This container parallelizes the application of the given module by splitting the input across the specified devices by chunking in the batch dimension (other objects will be copied once per device). In the forward pass, the module is replicated on each device, and each replica handles a portion of the input. During the backwards pass, gradients from each replica are summed into the original module.
The batch size should be larger than the number of GPUs used.
See also: Use nn.DataParallel instead of multiprocessing
Arbitrary positional and keyword inputs are allowed to be passed into DataParallel, but some types are specially handled. Tensors will be scattered on the dim specified (default 0). tuple, list and dict types will be shallow-copied. The other types will be shared among different threads and can be corrupted if written to in the model’s forward pass.
The parallelized module must have its parameters and buffers on device_ids[0] before running this DataParallel module.
Warning
In each forward,
module
is replicated on each device, so any updates to the running module inforward
will be lost. For example, ifmodule
has a counter attribute that is incremented in eachforward
, it will always stay at the initial value because the update is done on the replicas which are destroyed afterforward
. However,DataParallel
guarantees that the replica ondevice[0]
will have its parameters and buffers sharing storage with the base parallelizedmodule
. So in-place updates to the parameters or buffers ondevice[0]
will be recorded. E.g.,BatchNorm2d
andspectral_norm()
rely on this behavior to update the buffers.Warning
Forward and backward hooks defined on
module
and its submodules will be invokedlen(device_ids)
times, each with inputs located on a particular device. Particularly, the hooks are only guaranteed to be executed in correct order with respect to operations on corresponding devices. For example, it is not guaranteed that hooks set viaregister_forward_pre_hook()
be executed before alllen(device_ids)
forward()
calls, but that each such hook be executed before the correspondingforward()
call of that device.Warning
When
module
returns a scalar (i.e., 0-dimensional tensor) inforward()
, this wrapper will return a vector of length equal to number of devices used in data parallelism, containing the result from each device.Note
There is a subtlety in using the
pack sequence -> recurrent network -> unpack sequence
pattern in aModule
wrapped inDataParallel
. See My recurrent network doesn’t work with data parallelism section in FAQ for details.Parameters
Variables
~DataParallel.module (Module) – the module to be parallelized
Example:
>>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])
>>> output = net(input_var)  # input_var can be on any device, including CPU
DistributedDataParallel
class torch.nn.parallel.DistributedDataParallel(module, device_ids=None, output_device=None, dim=0, broadcast_buffers=True, process_group=None, bucket_cap_mb=25, find_unused_parameters=False, check_reduction=False)[source]
Implements distributed data parallelism that is based on the torch.distributed package at the module level.
This container parallelizes the application of the given module by splitting the input across the specified devices by chunking in the batch dimension. The module is replicated on each machine and each device, and each such replica handles a portion of the input. During the backwards pass, gradients from each node are averaged.
The batch size should be larger than the number of GPUs used locally.
See also: Basics and Use nn.DataParallel instead of multiprocessing. The same constraints on input as in torch.nn.DataParallel apply.
Creation of this class requires torch.distributed to be already initialized, by calling torch.distributed.init_process_group().
DistributedDataParallel can be used in the following two ways.
Single-Process Multi-GPU: in this case, a single process will be spawned on each host/node, and each process will operate on all the GPUs of the node where it’s running. To use DistributedDataParallel in this way, you can simply construct the model as follows:

>>> torch.distributed.init_process_group(backend="nccl")
>>> model = DistributedDataParallel(model)  # device_ids will include all GPU devices by default
Multi-Process Single-GPU: this is the highly recommended way to use DistributedDataParallel, with multiple processes, each of which operates on a single GPU. This is currently the fastest approach to do data parallel training using PyTorch and applies to both single-node (multi-GPU) and multi-node data parallel training. It is proven to be significantly faster than torch.nn.DataParallel for single-node multi-GPU data parallel training.
Here is how to use it: on each host with N GPUs, you should spawn up N processes, while ensuring that each process individually works on a single GPU from 0 to N-1. Therefore, it is your job to ensure that your training script operates on a single given GPU by calling:
>>> torch.cuda.set_device(i)
where i is from 0 to N-1. In each process, you should refer to the following to construct this module:
>>> torch.distributed.init_process_group(backend='nccl', world_size=4, init_method='...') >>> model = DistributedDataParallel(model, device_ids=[i], output_device=i)
In order to spawn up multiple processes per node, you can use either
torch.distributed.launch
ortorch.multiprocessing.spawn
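A minimal per-process sketch of the Multi-Process Single-GPU pattern above, driven by torch.multiprocessing.spawn. The rendezvous address/port and the helper names (main_worker, launch) are illustrative placeholders, and an NCCL-capable multi-GPU machine is assumed:

```python
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel


def main_worker(rank, world_size):
    # One process per GPU: process of rank i drives GPU i exclusively.
    dist.init_process_group(
        backend="nccl", init_method="tcp://127.0.0.1:23456",
        world_size=world_size, rank=rank,
    )
    torch.cuda.set_device(rank)
    model = nn.Linear(10, 10).cuda(rank)
    ddp_model = DistributedDataParallel(model, device_ids=[rank], output_device=rank)
    # ... training loop using ddp_model ...


def launch():
    # Spawn N processes for N local GPUs; call this from your script's
    # __main__ guard (or start workers via torch.distributed.launch instead).
    world_size = torch.cuda.device_count()
    mp.spawn(main_worker, args=(world_size,), nprocs=world_size)
```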
Note
nccl
backend is currently the fastest and highly recommended backend to be used with Multi-Process Single-GPU distributed training, and this applies to both single-node and multi-node distributed training.Note
This module also supports mixed-precision distributed training. This means that your model can have different types of parameters such as mixed types of fp16 and fp32, the gradient reduction on these mixed types of parameters will just work fine. Also note that
nccl
backend is currently the fastest and highly recommended backend for fp16/fp32 mixed-precision training.Note
If you use
torch.save
on one process to checkpoint the module, andtorch.load
on some other processes to recover it, make sure thatmap_location
is configured properly for every process. Withoutmap_location
,torch.load
would recover the module to devices where the module was saved from.Warning
This module works only with the
gloo
andnccl
backends.Warning
Constructor, forward method, and differentiation of the output (or a function of the output of this module) is a distributed synchronization point. Take that into account in case different processes might be executing different code.
Warning
This module assumes all parameters are registered in the model by the time it is created. No parameters should be added nor removed later. Same applies to buffers.
Warning
This module assumes that the parameters are registered in the model of each distributed process in the same order. The module itself will conduct gradient all-reduction following the reverse order of the registered parameters of the model. In other words, it is the user’s responsibility to ensure that each distributed process has the exact same model and thus the exact same parameter registration order.
Warning
This module assumes all buffers and gradients are dense.
Warning
This module doesn’t work with
torch.autograd.grad()
(i.e. it will only work if gradients are to be accumulated in.grad
attributes of parameters).Warning
If you plan on using this module with a
nccl
backend or agloo
backend (that uses Infiniband), together with a DataLoader that uses multiple workers, please change the multiprocessing start method toforkserver
(Python 3 only) orspawn
. Unfortunately Gloo (that uses Infiniband) and NCCL2 are not fork safe, and you will likely experience deadlocks if you don’t change this setting.Warning
Forward and backward hooks defined on
module
and its submodules won’t be invoked anymore, unless the hooks are initialized in theforward()
method.Warning
You should never try to change your model’s parameters after wrapping up your model with DistributedDataParallel. In other words, when wrapping up your model with DistributedDataParallel, the constructor of DistributedDataParallel will register the additional gradient reduction functions on all the parameters of the model itself at the time of construction. If you change the model’s parameters after the DistributedDataParallel construction, this is not supported and unexpected behaviors can happen, since some parameters’ gradient reduction functions might not get called.
Note
Parameters are never broadcast between processes. The module performs an all-reduce step on gradients and assumes that they will be modified by the optimizer in all processes in the same way. Buffers (e.g. BatchNorm stats) are broadcast from the module in process of rank 0, to all other replicas in the system in every iteration.
Parameters
Variables
~DistributedDataParallel.module (Module) – the module to be parallelized
Example:
>>> torch.distributed.init_process_group(backend='nccl', world_size=4, init_method='...') >>> net = torch.nn.DistributedDataParallel(model, pg)
no_sync
()[source]A context manager to disable gradient synchronizations across DDP processes. Within this context, gradients will be accumulated on module variables, which will later be synchronized in the first forward-backward pass after exiting the context.
Example:
>>> ddp = torch.nn.DistributedDataParallel(model, pg) >>> with ddp.no_sync(): ... for input in inputs: ... ddp(input).backward() # no synchronization, accumulate grads ... ddp(another_input).backward() # synchronize grads
Utilities
clip_grad_norm_
torch.nn.utils.
clip_grad_norm_
(parameters, max_norm, norm_type=2)[source]Clips gradient norm of an iterable of parameters.
The norm is computed over all gradients together, as if they were concatenated into a single vector. Gradients are modified in-place.
Parameters
Returns
Total norm of the parameters (viewed as a single vector).
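For illustration, a small sketch of the in-place clipping (the tiny linear model and the 0.5 threshold are arbitrary choices):

```python
import torch
import torch.nn as nn
from torch.nn.utils import clip_grad_norm_

model = nn.Linear(10, 1)
model(torch.randn(4, 10)).sum().backward()

# Rescale all gradients together so their joint 2-norm is at most 0.5.
# The return value is the total norm *before* clipping.
total_norm = clip_grad_norm_(model.parameters(), max_norm=0.5)

# Joint norm of the (now clipped) gradients, viewed as one vector.
clipped = torch.sqrt(sum(p.grad.norm() ** 2 for p in model.parameters()))
```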
clip_grad_value_
torch.nn.utils.
clip_grad_value_
(parameters, clip_value)[source]Clips gradient of an iterable of parameters at specified value.
Gradients are modified in-place.
Parameters
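Unlike clip_grad_norm_, this clamps each gradient element independently; a brief sketch (model and threshold arbitrary):

```python
import torch
import torch.nn as nn
from torch.nn.utils import clip_grad_value_

model = nn.Linear(10, 1)
model(torch.randn(4, 10)).sum().backward()

# Clamp every gradient element into [-0.1, 0.1], in place.
clip_grad_value_(model.parameters(), clip_value=0.1)
assert all(p.grad.abs().max() <= 0.1 for p in model.parameters())
```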
parameters_to_vector
torch.nn.utils.
parameters_to_vector
(parameters)[source]Convert parameters to one vector
Parameters
parameters (Iterable[Tensor]) – an iterator of Tensors that are the parameters of a model.
Returns
The parameters represented by a single vector
vector_to_parameters
torch.nn.utils.
vector_to_parameters
(vec, parameters)[source]Convert one vector to the parameters
Parameters
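A round-trip sketch showing how parameters_to_vector and vector_to_parameters compose (the Linear module is arbitrary):

```python
import torch
import torch.nn as nn
from torch.nn.utils import parameters_to_vector, vector_to_parameters

model = nn.Linear(3, 2)
vec = parameters_to_vector(model.parameters())
# 3*2 weight elements + 2 bias elements flattened into one 1-D tensor.
assert vec.shape == (8,)

# Writing a (possibly modified) vector back restores each parameter's shape.
vector_to_parameters(vec * 0, model.parameters())
assert model.weight.abs().sum() == 0
```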
weight_norm
torch.nn.utils.
weight_norm
(module, name='weight', dim=0)[source]Applies weight normalization to a parameter in the given module.
\mathbf{w} = g \dfrac{\mathbf{v}}{\|\mathbf{v}\|}
Weight normalization is a reparameterization that decouples the magnitude of a weight tensor from its direction. This replaces the parameter specified by
name
(e.g.'weight'
) with two parameters: one specifying the magnitude (e.g.'weight_g'
) and one specifying the direction (e.g.'weight_v'
). Weight normalization is implemented via a hook that recomputes the weight tensor from the magnitude and direction before everyforward()
call.By default, with
dim=0
, the norm is computed independently per output channel/plane. To compute a norm over the entire weight tensor, usedim=None
.See https://arxiv.org/abs/1602.07868
Parameters
Returns
The original module with the weight norm hook
Example:
>>> m = weight_norm(nn.Linear(20, 40), name='weight') >>> m Linear(in_features=20, out_features=40, bias=True) >>> m.weight_g.size() torch.Size([40, 1]) >>> m.weight_v.size() torch.Size([40, 20])
remove_weight_norm
torch.nn.utils.
remove_weight_norm
(module, name='weight')[source]Removes the weight normalization reparameterization from a module.
Parameters
Example
>>> m = weight_norm(nn.Linear(20, 40)) >>> remove_weight_norm(m)
spectral_norm
torch.nn.utils.
spectral_norm
(module, name='weight', n_power_iterations=1, eps=1e-12, dim=None)[source]Applies spectral normalization to a parameter in the given module.
\mathbf{W}_{SN} = \dfrac{\mathbf{W}}{\sigma(\mathbf{W})}, \quad \sigma(\mathbf{W}) = \max_{\mathbf{h}: \mathbf{h} \ne 0} \dfrac{\|\mathbf{W} \mathbf{h}\|_2}{\|\mathbf{h}\|_2}
Spectral normalization stabilizes the training of discriminators (critics) in Generative Adversarial Networks (GANs) by rescaling the weight tensor with spectral norm σ\sigmaσ of the weight matrix calculated using power iteration method. If the dimension of the weight tensor is greater than 2, it is reshaped to 2D in power iteration method to get spectral norm. This is implemented via a hook that calculates spectral norm and rescales weight before every
forward()
call.See Spectral Normalization for Generative Adversarial Networks .
Parameters
Returns
The original module with the spectral norm hook
Example:
>>> m = spectral_norm(nn.Linear(20, 40)) >>> m Linear(in_features=20, out_features=40, bias=True) >>> m.weight_u.size() torch.Size([40])
remove_spectral_norm
torch.nn.utils.
remove_spectral_norm
(module, name='weight')[source]Removes the spectral normalization reparameterization from a module.
Parameters
Example
>>> m = spectral_norm(nn.Linear(40, 10)) >>> remove_spectral_norm(m)
PackedSequence
torch.nn.utils.rnn.
PackedSequence
(data, batch_sizes=None, sorted_indices=None, unsorted_indices=None)[source]Holds the data and list of
batch_sizes
of a packed sequence.All RNN modules accept packed sequences as inputs.
Note
Instances of this class should never be created manually. They are meant to be instantiated by functions like
pack_padded_sequence()
.Batch sizes represent the number of elements at each sequence step in the batch, not the varying sequence lengths passed to
pack_padded_sequence()
. For instance, given dataabc
andx
thePackedSequence
would contain dataaxbc
withbatch_sizes=[2,1,1]
.Variables
Note
data
can be on arbitrary device and of arbitrary dtype.sorted_indices
andunsorted_indices
must betorch.int64
tensors on the same device asdata
.However,
batch_sizes
should always be a CPUtorch.int64
tensor.This invariant is maintained throughout
PackedSequence
class, and all functions that construct a PackedSequence in PyTorch (i.e., they only pass in tensors conforming to this constraint).pack_padded_sequence
torch.nn.utils.rnn.
pack_padded_sequence
(input, lengths, batch_first=False, enforce_sorted=True)[source]Packs a Tensor containing padded sequences of variable length.
input
can be of sizeT x B x *
where T is the length of the longest sequence (equal tolengths[0]
),B
is the batch size, and*
is any number of dimensions (including 0). Ifbatch_first
isTrue
,B x T x *
input
is expected.For unsorted sequences, use enforce_sorted = False. If
enforce_sorted
isTrue
, the sequences should be sorted by length in a decreasing order, i.e.input[:,0]
should be the longest sequence, andinput[:,B-1]
the shortest one. enforce_sorted = True is only necessary for ONNX export.Note
This function accepts any input that has at least two dimensions. You can apply it to pack the labels, and use the output of the RNN with them to compute the loss directly. A Tensor can be retrieved from a
PackedSequence
object by accessing its.data
attribute.Parameters
Returns
a
PackedSequence
objectpad_packed_sequence
torch.nn.utils.rnn.
pad_packed_sequence
(sequence, batch_first=False, padding_value=0.0, total_length=None)[source]Pads a packed batch of variable length sequences.
It is an inverse operation to
pack_padded_sequence()
.The returned Tensor’s data will be of size
T x B x *
, where T is the length of the longest sequence and B is the batch size. Ifbatch_first
is True, the data will be transposed intoB x T x *
format.Batch elements will be ordered decreasingly by their length.
Note
total_length
is useful to implement thepack sequence -> recurrent network -> unpack sequence
pattern in aModule
wrapped inDataParallel
. See this FAQ section for details.Parameters
Returns
Tuple of Tensor containing the padded sequence, and a Tensor containing the list of lengths of each sequence in the batch.
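A round-trip sketch of pack_padded_sequence followed by pad_packed_sequence (the toy integer sequences are arbitrary):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# Two sequences of lengths 3 and 2, padded to T=3, in B x T layout.
padded = torch.tensor([[1, 2, 3],
                       [4, 5, 0]])
packed = pack_padded_sequence(padded, lengths=[3, 2], batch_first=True)

# Inverse operation: recovers the padded tensor and per-sequence lengths.
unpacked, lengths = pad_packed_sequence(packed, batch_first=True)
```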
pad_sequence
torch.nn.utils.rnn.
pad_sequence
(sequences, batch_first=False, padding_value=0)[source]Pad a list of variable length Tensors with
padding_value
pad_sequence
stacks a list of Tensors along a new dimension, and pads them to equal length. For example, if the input is a list of sequences of size L x *
, the output is of size T x B x * if batch_first is False, and B x T x *
otherwise.B is batch size. It is equal to the number of elements in
sequences
. T is length of the longest sequence. L is length of the sequence. * is any number of trailing dimensions, including none.Example
>>> from torch.nn.utils.rnn import pad_sequence >>> a = torch.ones(25, 300) >>> b = torch.ones(22, 300) >>> c = torch.ones(15, 300) >>> pad_sequence([a, b, c]).size() torch.Size([25, 3, 300])
Note
This function returns a Tensor of size
T x B x *
orB x T x *
where T is the length of the longest sequence. This function assumes trailing dimensions and type of all the Tensors in sequences are same.Parameters
Returns
Tensor of size
T x B x *
ifbatch_first
isFalse
. Tensor of sizeB x T x *
otherwisepack_sequence
torch.nn.utils.rnn.
pack_sequence
(sequences, enforce_sorted=True)[source]Packs a list of variable length Tensors
sequences
should be a list of Tensors of sizeL x *
, where L is the length of a sequence and * is any number of trailing dimensions, including zero.For unsorted sequences, use enforce_sorted = False. If
enforce_sorted
isTrue
, the sequences should be sorted in the order of decreasing length.enforce_sorted = True
is only necessary for ONNX export.Example
>>> from torch.nn.utils.rnn import pack_sequence >>> a = torch.tensor([1,2,3]) >>> b = torch.tensor([4,5]) >>> c = torch.tensor([6]) >>> pack_sequence([a, b, c]) PackedSequence(data=tensor([ 1, 4, 6, 2, 5, 3]), batch_sizes=tensor([ 3, 2, 1]))
Parameters
Returns
a
PackedSequence
object
-
p – probability of an element to be zeroed. Default: 0.5
-
inplace – If set to
True
, will do this operation in-place. Default:False
-
Input: (∗)(*)(∗) . Input can be of any shape
-
Output: (∗)(*)(∗) . Output is of the same shape as input
-
p (float, optional) – probability of an element to be zero-ed.
-
inplace (bool, optional) – If set to
True
, will do this operation in-place -
Input: (N,C,H,W)(N, C, H, W)(N,C,H,W)
-
Output: (N,C,H,W)(N, C, H, W)(N,C,H,W) (same shape as input)
-
p (float, optional) – probability of an element to be zeroed.
-
inplace (bool, optional) – If set to
True
, will do this operation in-place -
Input: (N,C,D,H,W)(N, C, D, H, W)(N,C,D,H,W)
-
Output: (N,C,D,H,W)(N, C, D, H, W)(N,C,D,H,W) (same shape as input)
-
p (float) – probability of an element to be dropped. Default: 0.5
-
inplace (bool, optional) – If set to
True
, will do this operation in-place -
Input: (∗)(*)(∗) . Input can be of any shape
-
Output: (∗)(*)(∗) . Output is of the same shape as input
-
num_embeddings (int) – size of the dictionary of embeddings
-
embedding_dim (int) – the size of each embedding vector
-
padding_idx (int, optional) – If given, pads the output with the embedding vector at
padding_idx
(initialized to zeros) whenever it encounters the index. -
max_norm (float, optional) – If given, each embedding vector with norm larger than
max_norm
is renormalized to have normmax_norm
. -
norm_type (float, optional) – The p of the p-norm to compute for the
max_norm
option. Default2
. -
scale_grad_by_freq (boolean, optional) – If given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default
False
. -
sparse (bool, optional) – If
True
, gradient w.r.t.weight
matrix will be a sparse tensor. See Notes for more details regarding sparse gradients. -
Input: (∗)(*)(∗) , LongTensor of arbitrary shape containing the indices to extract
-
Output: (∗,H)(*, H)(∗,H) , where * is the input shape and H=embedding_dimH=\text{embedding\_dim}H=embedding_dim
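The parameter list above appears to describe nn.Embedding; a brief sketch of padding_idx and the (*, H) output shape:

```python
import torch
import torch.nn as nn

# 10-word vocabulary, 3-dim vectors; index 0 is reserved for padding.
emb = nn.Embedding(num_embeddings=10, embedding_dim=3, padding_idx=0)
out = emb(torch.tensor([[1, 2, 0]]))  # input shape (1, 3) -> output (1, 3, 3)

# The vector looked up at padding_idx is initialized to zeros.
```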
-
embeddings (Tensor) – FloatTensor containing weights for the Embedding. First dimension is being passed to Embedding as
num_embeddings
, second asembedding_dim
. -
freeze (boolean, optional) – If
True
, the tensor does not get updated in the learning process. Equivalent toembedding.weight.requires_grad = False
. Default:True
-
padding_idx (int, optional) – See module initialization documentation.
-
max_norm (float, optional) – See module initialization documentation.
-
norm_type (float, optional) – See module initialization documentation. Default
2
. -
scale_grad_by_freq (boolean, optional) – See module initialization documentation. Default
False
. -
sparse (bool, optional) – See module initialization documentation.
-
num_embeddings (int) – size of the dictionary of embeddings
-
embedding_dim (int) – the size of each embedding vector
-
max_norm (float, optional) – If given, each embedding vector with norm larger than
max_norm
is renormalized to have normmax_norm
. -
norm_type (float, optional) – The p of the p-norm to compute for the
max_norm
option. Default2
. -
scale_grad_by_freq (boolean, optional) – if given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default
False
. Note: this option is not supported whenmode="max"
. -
mode (string, optional) –
"sum"
,"mean"
or"max"
. Specifies the way to reduce the bag."sum"
computes the weighted sum, takingper_sample_weights
into consideration."mean"
computes the average of the values in the bag,"max"
computes the max value over each bag. Default:"mean"
-
sparse (bool, optional) – if
True
, gradient w.r.t.weight
matrix will be a sparse tensor. See Notes for more details regarding sparse gradients. Note: this option is not supported whenmode="max"
. -
If
input
is 2D of shape (B, N),it will be treated as
B
bags (sequences) each of fixed lengthN
, and this will returnB
values aggregated in a way depending on themode
.offsets
is ignored and required to beNone
in this case. -
If
input
is 1D of shape (N),it will be treated as a concatenation of multiple bags (sequences).
offsets
is required to be a 1D tensor containing the starting index positions of each bag ininput
. Therefore, foroffsets
of shape (B),input
will be viewed as havingB
bags. Empty bags (i.e., having 0-length) will have returned vectors filled by zeros. -
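A sketch of the 1D-input-with-offsets form described above (assuming this parameter list belongs to nn.EmbeddingBag, and using mode='sum' so the result is checkable against a plain indexed sum):

```python
import torch
import torch.nn as nn

bag = nn.EmbeddingBag(num_embeddings=10, embedding_dim=3, mode='sum')

# 1D input = concatenation of two bags: [1, 2, 4] and [5, 3].
inp = torch.tensor([1, 2, 4, 5, 3])
offsets = torch.tensor([0, 3])  # starting index of each bag in `inp`

out = bag(inp, offsets)  # one aggregated vector per bag: shape (2, 3)
```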
embeddings (Tensor) – FloatTensor containing weights for the EmbeddingBag. First dimension is being passed to EmbeddingBag as ‘num_embeddings’, second as ‘embedding_dim’.
-
freeze (boolean, optional) – If
True
, the tensor does not get updated in the learning process. Equivalent toembeddingbag.weight.requires_grad = False
. Default:True
-
max_norm (float, optional) – See module initialization documentation. Default:
None
-
norm_type (float, optional) – See module initialization documentation. Default
2
. -
scale_grad_by_freq (boolean, optional) – See module initialization documentation. Default
False
. -
mode (string, optional) – See module initialization documentation. Default:
"mean"
-
sparse (bool, optional) – See module initialization documentation. Default:
False
. -
dim (int, optional) – Dimension where cosine similarity is computed. Default: 1
-
eps (float, optional) – Small value to avoid division by zero. Default: 1e-8
-
Input1: (∗1,D,∗2)(\ast_1, D, \ast_2)(∗1,D,∗2) where D is at position dim
-
Input2: (∗1,D,∗2)(\ast_1, D, \ast_2)(∗1,D,∗2) , same shape as the Input1
-
Output: (∗1,∗2)(\ast_1, \ast_2)(∗1,∗2)
-
p (real) – the norm degree. Default: 2
-
eps (float, optional) – Small value to avoid division by zero. Default: 1e-6
-
keepdim (bool, optional) – Determines whether or not to keep the vector dimension. Default: False
-
Input1: (N,D)(N, D)(N,D) where D = vector dimension
-
Input2: (N,D)(N, D)(N,D) , same shape as the Input1
-
Output: (N)(N)(N) . If
keepdim
isTrue
, then (N,1)(N, 1)(N,1) . -
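Assuming the two parameter lists above belong to nn.CosineSimilarity and nn.PairwiseDistance, a brief shape sketch:

```python
import torch
import torch.nn as nn

x1 = torch.randn(5, 8)
x2 = torch.randn(5, 8)

# Cosine similarity along dim=1: one value in [-1, 1] per row.
sim = nn.CosineSimilarity(dim=1, eps=1e-8)(x1, x2)  # shape (5,)

# p-norm distance per row; keepdim retains the vector dimension.
d = nn.PairwiseDistance(p=2, keepdim=True)(x1, x2)  # shape (5, 1)
```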
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored when reduce isFalse
. Default:True
-
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
-
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
-
Input: (N,∗)(N, *)(N,∗) where ∗*∗ means, any number of additional dimensions
-
Target: (N,∗)(N, *)(N,∗) , same shape as the input
-
Output: scalar. If
reduction
is'none'
, then (N,∗)(N, *)(N,∗) , same shape as the input -
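The reduction semantics above are shared across the loss modules; a sketch using nn.L1Loss purely for illustration:

```python
import torch
import torch.nn as nn

inp = torch.tensor([1.0, 2.0, 5.0])
tgt = torch.tensor([1.0, 4.0, 1.0])

none_loss = nn.L1Loss(reduction='none')(inp, tgt)  # per-element: [0., 2., 4.]
mean_loss = nn.L1Loss(reduction='mean')(inp, tgt)  # sum / numel = 2.0
sum_loss = nn.L1Loss(reduction='sum')(inp, tgt)    # 6.0
```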
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored when reduce isFalse
. Default:True
-
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
-
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
-
Input: (N,∗)(N, *)(N,∗) where ∗*∗ means, any number of additional dimensions
-
Target: (N,∗)(N, *)(N,∗) , same shape as the input
-
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C
-
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored when reduce isFalse
. Default:True
-
ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When
size_average
isTrue
, the loss is averaged over non-ignored targets. -
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
-
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
-
Input: (N,C)(N, C)(N,C) where C = number of classes, or (N,C,d1,d2,...,dK)(N, C, d_1, d_2, ..., d_K)(N,C,d1,d2,...,dK) with K≥1K \geq 1K≥1 in the case of K-dimensional loss.
-
Target: (N)(N)(N) where each value is 0≤targets[i]≤C−10 \leq \text{targets}[i] \leq C-10≤targets[i]≤C−1 , or (N,d1,d2,...,dK)(N, d_1, d_2, ..., d_K)(N,d1,d2,...,dK) with K≥1K \geq 1K≥1 in the case of K-dimensional loss.
-
Output: scalar. If
reduction
is'none'
, then the same size as the target: (N)(N)(N) , or (N,d1,d2,...,dK)(N, d_1, d_2, ..., d_K)(N,d1,d2,...,dK) with K≥1K \geq 1K≥1 in the case of K-dimensional loss. -
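The shapes and the ignore_index behaviour above match a classification loss such as nn.CrossEntropyLoss (assumed here for the sketch); ignored targets drop out of the 'mean' reduction entirely:

```python
import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss(ignore_index=-1)
logits = torch.randn(4, 5)            # (N, C): 4 samples, 5 classes
target = torch.tensor([0, 3, -1, 2])  # the -1 entry contributes nothing

loss = loss_fn(logits, target)  # scalar: mean over the 3 non-ignored targets
```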
blank (int, optional) – blank label. Default 000 .
-
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the output losses will be divided by the target lengths and then the mean over the batch is taken. Default:'mean'
-
zero_infinity (bool, optional) – Whether to zero infinite losses and the associated gradients. Default:
False
Infinite losses mainly occur when the inputs are too short to be aligned to the targets. -
Log_probs: Tensor of size (T,N,C)(T, N, C)(T,N,C) , where T=input lengthT = \text{input length}T=input length , N=batch sizeN = \text{batch size}N=batch size , and C=number of classes (including blank)C = \text{number of classes (including blank)}C=number of classes (including blank) . The logarithmized probabilities of the outputs (e.g. obtained with
torch.nn.functional.log_softmax()
). -
Targets: Tensor of size (N, S) or (sum(target_lengths)), where N = batch size and S = max target length (when the shape is (N, S)). It represents the target sequences. Each element in the target sequence is a class index, and the target index cannot be blank (default=0). In the (N, S) form, targets are padded to the length of the longest sequence and stacked. In the (sum(target_lengths)) form, the targets are assumed to be un-padded and concatenated within 1 dimension.
-
Input_lengths: Tuple or tensor of size (N), where N = batch size. It represents the lengths of the inputs (must each be ≤ T). Lengths are specified for each sequence to achieve masking under the assumption that sequences are padded to equal lengths.
-
Target_lengths: Tuple or tensor of size (N), where N = batch size. It represents the lengths of the targets. Lengths are specified for each sequence to achieve masking under the assumption that sequences are padded to equal lengths. If the target shape is (N, S), target_lengths are effectively the stop index s_n for each target sequence, such that
target_n = targets[n,0:s_n]
for each target in a batch. Lengths must each be ≤ S. If the targets are given as a 1d tensor that is the concatenation of individual targets, the target_lengths must add up to the total length of the tensor.
Output: scalar. If
reduction
is'none'
, then (N)(N)(N) , where N=batch sizeN = \text{batch size}N=batch size . -
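Putting the shape conventions above together, a toy nn.CTCLoss call with arbitrary sizes (the input length T is chosen comfortably larger than the target lengths so every alignment is feasible):

```python
import torch
import torch.nn as nn

T, N, C, S = 12, 2, 5, 4  # input length, batch, classes incl. blank, max target len

log_probs = torch.randn(T, N, C).log_softmax(2)   # (T, N, C) log-probabilities
targets = torch.randint(1, C, (N, S))             # class indices; blank (0) excluded
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.tensor([4, 3])             # each <= S

loss = nn.CTCLoss(blank=0)(log_probs, targets, input_lengths, target_lengths)
```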
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is treated as if having all ones.
-
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored when reduce isFalse
. Default:True
-
ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When
size_average
isTrue
, the loss is averaged over non-ignored targets. -
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
-
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
-
Input: (N,C)(N, C)(N,C) where C = number of classes, or (N,C,d1,d2,...,dK)(N, C, d_1, d_2, ..., d_K)(N,C,d1,d2,...,dK) with K≥1K \geq 1K≥1 in the case of K-dimensional loss.
-
Target: (N)(N)(N) where each value is 0≤targets[i]≤C−10 \leq \text{targets}[i] \leq C-10≤targets[i]≤C−1 , or (N,d1,d2,...,dK)(N, d_1, d_2, ..., d_K)(N,d1,d2,...,dK) with K≥1K \geq 1K≥1 in the case of K-dimensional loss.
-
Output: scalar. If
reduction
is'none'
, then the same size as the target: (N)(N)(N) , or (N,d1,d2,...,dK)(N, d_1, d_2, ..., d_K)(N,d1,d2,...,dK) with K≥1K \geq 1K≥1 in the case of K-dimensional loss. -
log_input (bool, optional) – if
True
the loss is computed as exp(input)−target∗input\exp(\text{input}) - \text{target}*\text{input}exp(input)−target∗input , ifFalse
the loss is input−target∗log(input+eps)\text{input} - \text{target}*\log(\text{input}+\text{eps})input−target∗log(input+eps) . -
full (bool, optional) –
whether to compute the full loss, i.e. to add the Stirling approximation term
target∗log(target)−target+0.5∗log(2πtarget).\text{target}*\log(\text{target}) - \text{target} + 0.5 * \log(2\pi\text{target}). target∗log(target)−target+0.5∗log(2πtarget).
-
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored when reduce isFalse
. Default:True
-
eps (float, optional) – Small value to avoid evaluation of log(0)\log(0)log(0) when
log_input = False
. Default: 1e-8 -
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
-
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
-
Input: (N,∗)(N, *)(N,∗) where ∗*∗ means, any number of additional dimensions
-
Target: (N,∗)(N, *)(N,∗) , same shape as the input
-
Output: scalar by default. If
reduction
is'none'
, then (N,∗)(N, *)(N,∗) , the same shape as the input -
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored when reduce isFalse
. Default:True
-
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
-
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'batchmean'
|'sum'
|'mean'
.'none'
: no reduction will be applied.'batchmean'
: the sum of the output will be divided by batchsize.'sum'
: the output will be summed.'mean'
: the output will be divided by the number of elements in the output. Default:'mean'
-
Input: (N,∗)(N, *)(N,∗) where ∗*∗ means, any number of additional dimensions
-
Target: (N,∗)(N, *)(N,∗) , same shape as the input
-
Output: scalar by default. If
reduction
is'none'
, then (N,∗)(N, *)(N,∗) , the same shape as the input -
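The 'batchmean' option above is specific to nn.KLDivLoss (assumed here), which expects log-probabilities as input and probabilities as target; 'batchmean' is the summed loss divided by the batch size:

```python
import torch
import torch.nn as nn

inp = torch.log_softmax(torch.randn(3, 5), dim=1)  # log-probabilities
tgt = torch.softmax(torch.randn(3, 5), dim=1)      # probabilities

kl_batchmean = nn.KLDivLoss(reduction='batchmean')(inp, tgt)  # sum / batch size
kl_sum = nn.KLDivLoss(reduction='sum')(inp, tgt)
```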
weight (Tensor, optional) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size nbatch.
- size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
- reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
- reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
- Input: (N, *) where * means any number of additional dimensions
- Target: (N, *), same shape as the input
- Output: scalar. If reduction is 'none', then (N, *), same shape as the input.
-
weight (Tensor, optional) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size nbatch.
- size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
- reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
- reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
- pos_weight (Tensor, optional) – a weight of positive examples. Must be a vector with length equal to the number of classes.
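As a minimal sketch of the pos_weight parameter (which belongs to nn.BCEWithLogitsLoss), note that an all-ones vector is a no-op, while larger entries upweight the positive term of the corresponding class; the tensor shapes below are chosen purely for illustration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
logits = torch.randn(2, 3)              # N = 2 samples, C = 3 classes
target = torch.empty(2, 3).random_(2)   # binary 0/1 targets

# pos_weight must have one entry per class; all-ones leaves the loss unchanged.
plain = nn.BCEWithLogitsLoss()
weighted = nn.BCEWithLogitsLoss(pos_weight=torch.ones(3))
assert torch.allclose(plain(logits, target), weighted(logits, target))
```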
-
margin (float, optional) – Has a default value of 0.
- size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
- reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
- reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
- Input: (N, D) where N is the batch size and D is the size of a sample.
- Target: (N)
- Output: scalar. If reduction is 'none', then (N).
-
margin (float, optional) – Has a default value of 1.
- size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
- reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
- reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
- Input: (*) where * means any number of dimensions. The sum operation operates over all the elements.
- Target: (*), same shape as the input
- Output: scalar. If reduction is 'none', then same shape as the input.
-
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
- reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
- reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
- Input: (C) or (N, C) where N is the batch size and C is the number of classes.
- Target: (C) or (N, C), label targets padded by -1 ensuring same shape as the input.
- Output: scalar. If reduction is 'none', then (N).
-
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
- reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
- reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
- Input: (N, *) where * means any number of additional dimensions
- Target: (N, *), same shape as the input
- Output: scalar. If reduction is 'none', then (N, *), same shape as the input
-
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
- reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
- reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
- Input: (*) where * means any number of additional dimensions
- Target: (*), same shape as the input
- Output: scalar. If reduction is 'none', then same shape as the input
-
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is treated as if having all ones.
- size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
- reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
- reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
- Input: (N, C) where N is the batch size and C is the number of classes.
- Target: (N, C), label targets padded by -1 ensuring same shape as the input.
- Output: scalar. If reduction is 'none', then (N).
-
margin (float, optional) – Should be a number from -1 to 1; 0 to 0.5 is suggested. If margin is missing, the default value is 0.
- size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
- reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
- reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
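A short sketch, assuming the margin described above is that of nn.CosineEmbeddingLoss (the loss whose margin ranges over [-1, 1]); it also shows how reduction='none' relates to the default 'mean':

```python
import torch
import torch.nn as nn

x1 = torch.randn(8, 16)
x2 = torch.randn(8, 16)
y = torch.randint(0, 2, (8,)) * 2 - 1   # labels in {-1, +1}

# reduction='none' keeps one loss value per input pair
per_sample = nn.CosineEmbeddingLoss(margin=0.5, reduction='none')(x1, x2, y)
assert per_sample.shape == (8,)

# the default 'mean' reduction is just the average of the per-sample losses
mean_loss = nn.CosineEmbeddingLoss(margin=0.5)(x1, x2, y)
assert torch.allclose(mean_loss, per_sample.mean())
```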
-
p (int, optional) – Has a default value of 1. 1 and 2 are the only supported values.
- margin (float, optional) – Has a default value of 1.
- weight (Tensor, optional) – a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is treated as if having all ones.
- size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
- reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
- reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
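These p/margin/weight parameters match nn.MultiMarginLoss; as a hand-checked sketch (scores chosen only for illustration), with p=1 and margin=1 the loss for one sample is the sum over wrong classes i of max(0, margin - x[y] + x[i]), divided by the number of classes C:

```python
import torch
import torch.nn as nn

loss = nn.MultiMarginLoss(p=1, margin=1.0)
x = torch.tensor([[0.1, 0.2, 0.4, 0.8]])   # C = 4 class scores
y = torch.tensor([3])                      # true class index
out = loss(x, y)
# hinge terms: (1-0.8+0.1) + (1-0.8+0.2) + (1-0.8+0.4) = 1.3; divided by C=4
assert torch.allclose(out, torch.tensor(0.325))
```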
-
margin (float, optional) – Default: 1.
- p (int, optional) – The norm degree for pairwise distance. Default: 2.
- swap (bool, optional) – The distance swap is described in detail in the paper Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al. Default: False.
- size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
- reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
- reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
- Input: (N, D) where D is the vector dimension.
- Output: scalar. If reduction is 'none', then (N).
-
Input: (N, L, H_in, W_in) where L = C × upscale_factor^2
- Output: (N, C, H_out, W_out) where H_out = H_in × upscale_factor and W_out = W_in × upscale_factor
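This shape relation (for nn.PixelShuffle) can be checked directly; the sizes below are arbitrary:

```python
import torch
import torch.nn as nn

# An input with L = C * r^2 channels is rearranged into C channels,
# with each spatial side scaled up by r.
ps = nn.PixelShuffle(upscale_factor=3)
inp = torch.randn(1, 9, 4, 4)   # L = 1 * 3^2 = 9, so C = 1
out = ps(inp)
assert out.shape == (1, 1, 12, 12)
```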
-
size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int], optional) – output spatial sizes
- scale_factor (float or Tuple[float] or Tuple[float, float] or Tuple[float, float, float], optional) – multiplier for spatial size. Has to match input size if it is a tuple.
- mode (str, optional) – the upsampling algorithm: one of 'nearest', 'linear', 'bilinear', 'bicubic' and 'trilinear'. Default: 'nearest'
- align_corners (bool, optional) – if True, the corner pixels of the input and output tensors are aligned, thus preserving the values at those pixels. This only has effect when mode is 'linear', 'bilinear', or 'trilinear'. Default: False
- Input: (N, C, W_in), (N, C, H_in, W_in) or (N, C, D_in, H_in, W_in)
- Output: (N, C, W_out), (N, C, H_out, W_out) or (N, C, D_out, H_out, W_out), where
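A minimal sketch of nn.Upsample with a 4D input; it also illustrates the align_corners behavior described above (corner values are preserved exactly):

```python
import torch
import torch.nn as nn

inp = torch.arange(1., 5.).view(1, 1, 2, 2)  # (N, C, H_in, W_in)
up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
out = up(inp)
assert out.shape == (1, 1, 4, 4)             # each spatial side doubled

# align_corners=True keeps the corner pixel values unchanged
assert out[0, 0, 0, 0] == inp[0, 0, 0, 0]
assert out[0, 0, -1, -1] == inp[0, 0, -1, -1]
```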
-
size (int or Tuple[int, int], optional) – output spatial sizes
- scale_factor (float or Tuple[float, float], optional) – multiplier for spatial size.
- Input: (N, C, H_in, W_in)
- Output: (N, C, H_out, W_out) where
-
size (int or Tuple[int, int], optional) – output spatial sizes
- scale_factor (float or Tuple[float, float], optional) – multiplier for spatial size.
- Input: (N, C, H_in, W_in)
- Output: (N, C, H_out, W_out) where
-
module (Module) – module to be parallelized
- device_ids (list of int or torch.device) – CUDA devices (default: all devices)
- output_device (int or torch.device) – device location of output (default: device_ids[0])
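As a sketch, wrapping a module in nn.DataParallel is transparent to the calling code; inputs are scattered along the batch dimension and outputs gathered on device_ids[0]. The guard below keeps the example runnable on a machine with fewer than two GPUs:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 5)
if torch.cuda.device_count() > 1:
    # replicate the module across all visible GPUs
    model = nn.DataParallel(model)

out = model(torch.randn(4, 10))  # batch of 4 is split across replicas
assert out.shape == (4, 5)
```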
-
Single-Process Multi-GPU
- Multi-Process Single-GPU
- module (Module) – module to be parallelized
- device_ids (list of int or torch.device) – CUDA devices. This should only be provided when the input module resides on a single CUDA device. For single-device modules, the i-th module replica is placed on device_ids[i]. For multi-device modules and CPU modules, device_ids must be None or an empty list, and input data for the forward pass must be placed on the correct device. (default: all devices for single-device modules)
- output_device (int or torch.device) – device location of output for single-device CUDA modules. For multi-device modules and CPU modules, it must be None, and the module itself dictates the output location. (default: device_ids[0] for single-device modules)
- broadcast_buffers (bool) – flag that enables syncing (broadcasting) buffers of the module at the beginning of the forward function. (default: True)
- process_group – the process group to be used for distributed data all-reduction. If None, the default process group, created by torch.distributed.init_process_group, will be used. (default: None)
- bucket_cap_mb – DistributedDataParallel will bucket parameters into multiple buckets so that gradient reduction of each bucket can potentially overlap with backward computation. bucket_cap_mb controls the bucket size in megabytes (MB). (default: 25)
- find_unused_parameters (bool) – Traverse the autograd graph of all tensors contained in the return value of the wrapped module's forward function. Parameters that don't receive gradients as part of this graph are preemptively marked as ready to be reduced. Note that all forward outputs derived from module parameters must participate in calculating loss and, later, the gradient computation. If they don't, this wrapper will hang waiting for autograd to produce gradients for those parameters. Any outputs derived from module parameters that are otherwise unused can be detached from the autograd graph using torch.Tensor.detach. (default: False)
- check_reduction – when set to True, it enables DistributedDataParallel to automatically check, at the beginning of every iteration's forward function, whether the previous iteration's backward reductions were successfully issued. You normally don't need this option enabled unless you are observing weird behavior, such as different ranks getting different gradients, which should not happen if DistributedDataParallel is used correctly. (default: False)
-
parameters (Iterable[Tensor] or Tensor) – an iterable of Tensors or a single Tensor that will have gradients normalized
- norm_type (float or int) – type of the used p-norm. Can be 'inf' for infinity norm.
-
parameters (Iterable[Tensor] or Tensor) – an iterable of Tensors or a single Tensor that will have gradients normalized
- clip_value (float or int) – maximum allowed value of the gradients. The gradients are clipped in the range [-clip_value, clip_value]
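A minimal sketch of torch.nn.utils.clip_grad_value_; the layer sizes are arbitrary:

```python
import torch
import torch.nn as nn
from torch.nn.utils import clip_grad_value_

lin = nn.Linear(4, 2)
loss = lin(torch.randn(8, 4)).pow(2).sum()
loss.backward()

# clamp every gradient element into [-0.1, 0.1], in place
clip_grad_value_(lin.parameters(), clip_value=0.1)
assert all(p.grad.abs().max() <= 0.1 for p in lin.parameters())
```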
-
vec (Tensor) – a single vector representing the parameters of a model.
- parameters (Iterable[Tensor]) – an iterable of Tensors that are the parameters of a model.
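The two utilities are inverses of each other, as this small sketch shows:

```python
import torch
import torch.nn as nn
from torch.nn.utils import parameters_to_vector, vector_to_parameters

lin = nn.Linear(3, 2)
vec = parameters_to_vector(lin.parameters())
assert vec.numel() == 3 * 2 + 2   # flattened weight plus bias

# inverse operation: copy the flat vector back into the parameters, in place
vector_to_parameters(vec, lin.parameters())
```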
-
module (Module) – containing module
- name (str, optional) – name of weight parameter
- dim (int, optional) – dimension over which to compute the norm
- module (Module) – containing module
- name (str, optional) – name of weight parameter
- module (nn.Module) – containing module
- name (str, optional) – name of weight parameter
- n_power_iterations (int, optional) – number of power iterations to calculate spectral norm
- eps (float, optional) – epsilon for numerical stability in calculating norms
- dim (int, optional) – dimension corresponding to number of outputs; the default is 0, except for modules that are instances of ConvTranspose{1,2,3}d, when it is 1
- module (Module) – containing module
- name (str, optional) – name of weight parameter
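A sketch of the weight_norm reparameterization and its removal (the derived parameter names `weight_g` and `weight_v` follow the `name` argument):

```python
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm, remove_weight_norm

m = weight_norm(nn.Linear(20, 40), name='weight')
# the weight is split into a magnitude (weight_g) and a direction (weight_v)
assert hasattr(m, 'weight_g') and hasattr(m, 'weight_v')

remove_weight_norm(m)   # fold the reparameterization back into a plain weight
assert not hasattr(m, 'weight_g')
```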
-
~PackedSequence.data (Tensor) – Tensor containing the packed sequence
- ~PackedSequence.batch_sizes (Tensor) – Tensor of integers holding information about the batch size at each sequence step
- ~PackedSequence.sorted_indices (Tensor, optional) – Tensor of integers holding how this PackedSequence is constructed from sequences.
- ~PackedSequence.unsorted_indices (Tensor, optional) – Tensor of integers holding how to recover the original sequences with correct order.
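The data and batch_sizes fields are easiest to see on a tiny example (built here with pack_sequence):

```python
import torch
from torch.nn.utils.rnn import pack_sequence

seqs = [torch.tensor([1, 2, 3]), torch.tensor([4, 5])]
packed = pack_sequence(seqs)   # already sorted by decreasing length

# data interleaves time steps across the batch; batch_sizes counts how many
# sequences are still active at each step
assert packed.data.tolist() == [1, 4, 2, 5, 3]
assert packed.batch_sizes.tolist() == [2, 2, 1]
```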
-
input (Tensor) – padded batch of variable length sequences.
- lengths (Tensor) – list of sequence lengths of each batch element.
- batch_first (bool, optional) – if True, the input is expected in B x T x * format.
- enforce_sorted (bool, optional) – if True, the input is expected to contain sequences sorted by length in decreasing order. If False, this condition is not checked. Default: True.
-
sequence (PackedSequence) – batch to pad
- batch_first (bool, optional) – if True, the output will be in B x T x * format.
- padding_value (float, optional) – values for padded elements.
- total_length (int, optional) – if not None, the output will be padded to have length total_length. This method will throw ValueError if total_length is less than the max sequence length in sequence.
-
sequences (list[Tensor]) – list of variable length sequences.
- batch_first (bool, optional) – output will be in B x T x * if True, or in T x B x * otherwise
- padding_value (float, optional) – value for padded elements. Default: 0.
-
sequences (list[Tensor]) – A list of sequences of decreasing length.
- enforce_sorted (bool, optional) – if True, checks that the input contains sequences sorted by length in decreasing order. If False, this condition is not checked. Default: True.
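Putting the packing utilities together, pad_packed_sequence inverts pack_padded_sequence, so a pad → pack → unpad round trip recovers the original padded batch:

```python
import torch
from torch.nn.utils.rnn import (pack_padded_sequence, pad_packed_sequence,
                                pad_sequence)

seqs = [torch.tensor([1., 2., 3.]), torch.tensor([4., 5.])]
padded = pad_sequence(seqs, batch_first=True)   # shape (B=2, T=3), zero-padded
packed = pack_padded_sequence(padded, torch.tensor([3, 2]), batch_first=True)

unpacked, lengths = pad_packed_sequence(packed, batch_first=True)
assert torch.equal(unpacked, padded)
assert lengths.tolist() == [3, 2]
```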