Neural Network¶
The module borch.nn provides implementations of neural network modules that are used for deep probabilistic programming. It provides an interface almost identical to the torch.nn modules, and in many cases it is possible to simply switch
>>> import torch.nn as nn
to
>>> import borch.nn as nn
and a network defined in torch is now probabilistic, without any other changes in the model specification; one also needs to change the loss function to borch.infer.vi_loss.
Examples
>>> import torch
>>> import torch.nn.functional as F
>>> from borch import nn, distributions as dist
>>> class Net(nn.Module):
...
... def __init__(self):
... super(Net, self).__init__()
... self.conv1 = nn.Conv2d(1, 6, 5)
... self.conv2 = nn.Conv2d(6, 16, 5)
... self.fc1 = nn.Linear(16 * 5 * 5, 120)
... self.fc2 = nn.Linear(120, 84)
... self.fc3 = nn.Linear(84, 10)
...
... def forward(self, x):
... x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
... x = F.max_pool2d(F.relu(self.conv2(x)), 2)
... x = x.view(-1, self.num_flat_features(x))
... x = F.relu(self.fc1(x))
... x = F.relu(self.fc2(x))
... x = self.fc3(x)
... self.pred = dist.Categorical(logits=x)
... return self.pred
...
... def num_flat_features(self, x):
... size = x.size()[1:]
... num_features = 1
... for s in size:
... num_features *= s
... return num_features
>>> net = Net()
Notes
borch.nn only supports mini-batches. The entire ``borch.nn`` package only supports inputs that are a mini-batch of samples, not a single sample.
For example, nn.Conv2d will take a 4D Tensor of nSamples x nChannels x Height x Width.
-
class borch.nn.AdaptiveAvgPool1d(output_size: Union[int, None, Tuple[Optional[int], ...]])¶
This is a ppl class. Please see help(borch.nn) for more information. If one gives distributions as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.
Applies a 1D adaptive average pooling over an input signal composed of several input planes.
The output size is \(L_{out}\), for any input size. The number of output features is equal to the number of input planes.
- Args:
output_size: the target output size \(L_{out}\).
- Shape:
Input: \((N, C, L_{in})\) or \((C, L_{in})\).
Output: \((N, C, L_{out})\) or \((C, L_{out})\), where \(L_{out}=\text{output\_size}\).
- Examples:
>>> # target output size of 5
>>> m = nn.AdaptiveAvgPool1d(5)
>>> input = torch.randn(1, 64, 8)
>>> output = m(input)
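For intuition, the pooling windows in the adaptive variant are derived from the requested output size rather than from an explicit kernel and stride. A minimal plain-Python sketch of the 1D case (the start/end rule below is an assumption matching the usual floor/ceil windowing scheme; the real computation is done by torch):

```python
import math

def adaptive_avg_pool1d(xs, output_size):
    # Window i covers input indices [floor(i*L/out), ceil((i+1)*L/out)),
    # so the windows tile the whole input for any input length.
    n = len(xs)
    out = []
    for i in range(output_size):
        start = (i * n) // output_size
        end = math.ceil((i + 1) * n / output_size)
        window = xs[start:end]
        out.append(sum(window) / len(window))
    return out

print(adaptive_avg_pool1d([1.0, 2.0, 3.0, 4.0], 2))  # [1.5, 3.5]
```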
-
class borch.nn.AdaptiveAvgPool2d(output_size: Union[int, None, Tuple[Optional[int], ...]])¶
This is a ppl class. Please see help(borch.nn) for more information. If one gives distributions as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.
Applies a 2D adaptive average pooling over an input signal composed of several input planes.
The output is of size H x W, for any input size. The number of output features is equal to the number of input planes.
- Args:
- output_size: the target output size of the image of the form H x W. Can be a tuple (H, W) or a single H for a square image H x H. H and W can be either an int, or None, which means the size will be the same as that of the input.
- Shape:
Input: \((N, C, H_{in}, W_{in})\) or \((C, H_{in}, W_{in})\).
Output: \((N, C, S_{0}, S_{1})\) or \((C, S_{0}, S_{1})\), where \(S=\text{output\_size}\).
- Examples:
>>> # target output size of 5x7
>>> m = nn.AdaptiveAvgPool2d((5, 7))
>>> input = torch.randn(1, 64, 8, 9)
>>> output = m(input)
>>> # target output size of 7x7 (square)
>>> m = nn.AdaptiveAvgPool2d(7)
>>> input = torch.randn(1, 64, 10, 9)
>>> output = m(input)
>>> # target output size of 10x7
>>> m = nn.AdaptiveAvgPool2d((None, 7))
>>> input = torch.randn(1, 64, 10, 9)
>>> output = m(input)
-
class borch.nn.AdaptiveAvgPool3d(output_size: Union[int, None, Tuple[Optional[int], ...]])¶
This is a ppl class. Please see help(borch.nn) for more information. If one gives distributions as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.
Applies a 3D adaptive average pooling over an input signal composed of several input planes.
The output is of size D x H x W, for any input size. The number of output features is equal to the number of input planes.
- Args:
- output_size: the target output size of the form D x H x W. Can be a tuple (D, H, W) or a single number D for a cube D x D x D. D, H and W can be either an int, or None, which means the size will be the same as that of the input.
- Shape:
Input: \((N, C, D_{in}, H_{in}, W_{in})\) or \((C, D_{in}, H_{in}, W_{in})\).
Output: \((N, C, S_{0}, S_{1}, S_{2})\) or \((C, S_{0}, S_{1}, S_{2})\), where \(S=\text{output\_size}\).
- Examples:
>>> # target output size of 5x7x9
>>> m = nn.AdaptiveAvgPool3d((5, 7, 9))
>>> input = torch.randn(1, 64, 8, 9, 10)
>>> output = m(input)
>>> # target output size of 7x7x7 (cube)
>>> m = nn.AdaptiveAvgPool3d(7)
>>> input = torch.randn(1, 64, 10, 9, 8)
>>> output = m(input)
>>> # target output size of 7x9x8
>>> m = nn.AdaptiveAvgPool3d((7, None, None))
>>> input = torch.randn(1, 64, 10, 9, 8)
>>> output = m(input)
-
class borch.nn.AdaptiveLogSoftmaxWithLoss(in_features: int, n_classes: int, cutoffs: Sequence[int], div_value: float = 4.0, head_bias: bool = False, device=None, dtype=None)¶
This is a ppl class. Please see help(borch.nn) for more information. If one gives distributions as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.
Efficient softmax approximation as described in Efficient softmax approximation for GPUs by Grave et al.
Adaptive softmax is an approximate strategy for training models with large output spaces. It is most effective when the label distribution is highly imbalanced, for example in natural language modelling, where the word frequency distribution approximately follows Zipf's law.
Adaptive softmax partitions the labels into several clusters, according to their frequency. These clusters may each contain a different number of targets. Additionally, clusters containing less frequent labels assign lower-dimensional embeddings to those labels, which speeds up the computation. For each minibatch, only the clusters for which at least one target is present are evaluated.
The idea is that the clusters which are accessed frequently (like the first one, containing most frequent labels), should also be cheap to compute – that is, contain a small number of assigned labels.
We highly recommend taking a look at the original paper for more details.
cutoffs should be an ordered Sequence of integers sorted in increasing order. It controls the number of clusters and the partitioning of targets into clusters. For example, setting cutoffs = [10, 100, 1000] means that the first 10 targets will be assigned to the 'head' of the adaptive softmax, targets 11, 12, …, 100 will be assigned to the first cluster, and targets 101, 102, …, 1000 will be assigned to the second cluster, while targets 1001, 1002, …, n_classes - 1 will be assigned to the last, third cluster.
div_value is used to compute the size of each additional cluster, which is given as \(\left\lfloor\frac{\texttt{in\_features}}{\texttt{div\_value}^{idx}}\right\rfloor\), where \(idx\) is the cluster index (with clusters for less frequent words having larger indices, and indices starting from \(1\)).
head_bias, if set to True, adds a bias term to the 'head' of the adaptive softmax. Set to False in the official implementation.
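The partitioning rule and the cluster-size formula above can be sketched in a few lines of plain Python (cluster_of and cluster_size are hypothetical helpers for illustration, not part of borch or torch; cluster 0 denotes the head and targets are 0-indexed):

```python
import math

def cluster_of(target, cutoffs):
    # With cutoffs = [10, 100, 1000]: targets 0-9 go to the head (cluster 0),
    # 10-99 to cluster 1, 100-999 to cluster 2, the remainder to cluster 3.
    for i, cutoff in enumerate(cutoffs):
        if target < cutoff:
            return i
    return len(cutoffs)

def cluster_size(in_features, div_value, idx):
    # Projection size of cluster idx: floor(in_features / div_value**idx)
    return math.floor(in_features / div_value ** idx)

print(cluster_of(5, [10, 100, 1000]))    # 0 (head)
print(cluster_of(250, [10, 100, 1000]))  # 2
print(cluster_size(512, 4.0, 1))         # 128
```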
Warning
Labels passed as inputs to this module should be sorted according to their frequency. This means that the most frequent label should be represented by the index 0, and the least frequent label should be represented by the index n_classes - 1.
Note
This module returns a NamedTuple with output and loss fields. See further documentation for details.
Note
To compute log-probabilities for all classes, the log_prob method can be used.
- Args:
in_features (int): Number of features in the input tensor
n_classes (int): Number of classes in the dataset
cutoffs (Sequence): Cutoffs used to assign targets to their buckets
div_value (float, optional): value used as an exponent to compute sizes of the clusters. Default: 4.0
head_bias (bool, optional): If True, adds a bias term to the 'head' of the adaptive softmax. Default: False
- Returns:
NamedTuple with output and loss fields:
output is a Tensor of size N containing computed target log probabilities for each example
loss is a Scalar representing the computed negative log likelihood loss
- Shape:
input: \((N, \texttt{in\_features})\)
target: \((N)\) where each value satisfies \(0 <= \texttt{target[i]} <= \texttt{n\_classes}\)
output1: \((N)\)
output2: Scalar
-
class borch.nn.AdaptiveMaxPool1d(output_size: Union[int, None, Tuple[Optional[int], ...]], return_indices: bool = False)¶
This is a ppl class. Please see help(borch.nn) for more information. If one gives distributions as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.
Applies a 1D adaptive max pooling over an input signal composed of several input planes.
The output size is \(L_{out}\), for any input size. The number of output features is equal to the number of input planes.
- Args:
output_size: the target output size \(L_{out}\).
return_indices: if True, will return the indices along with the outputs. Useful to pass to nn.MaxUnpool1d. Default: False
- Shape:
Input: \((N, C, L_{in})\) or \((C, L_{in})\).
Output: \((N, C, L_{out})\) or \((C, L_{out})\), where \(L_{out}=\text{output\_size}\).
- Examples:
>>> # target output size of 5
>>> m = nn.AdaptiveMaxPool1d(5)
>>> input = torch.randn(1, 64, 8)
>>> output = m(input)
-
class borch.nn.AdaptiveMaxPool2d(output_size: Union[int, None, Tuple[Optional[int], ...]], return_indices: bool = False)¶
This is a ppl class. Please see help(borch.nn) for more information. If one gives distributions as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.
Applies a 2D adaptive max pooling over an input signal composed of several input planes.
The output is of size \(H_{out} \times W_{out}\), for any input size. The number of output features is equal to the number of input planes.
- Args:
- output_size: the target output size of the image of the form \(H_{out} \times W_{out}\). Can be a tuple \((H_{out}, W_{out})\) or a single \(H_{out}\) for a square image \(H_{out} \times H_{out}\). \(H_{out}\) and \(W_{out}\) can be either an int, or None, which means the size will be the same as that of the input.
- return_indices: if True, will return the indices along with the outputs. Useful to pass to nn.MaxUnpool2d. Default: False
- Shape:
Input: \((N, C, H_{in}, W_{in})\) or \((C, H_{in}, W_{in})\).
Output: \((N, C, H_{out}, W_{out})\) or \((C, H_{out}, W_{out})\), where \((H_{out}, W_{out})=\text{output\_size}\).
- Examples:
>>> # target output size of 5x7
>>> m = nn.AdaptiveMaxPool2d((5, 7))
>>> input = torch.randn(1, 64, 8, 9)
>>> output = m(input)
>>> # target output size of 7x7 (square)
>>> m = nn.AdaptiveMaxPool2d(7)
>>> input = torch.randn(1, 64, 10, 9)
>>> output = m(input)
>>> # target output size of 10x7
>>> m = nn.AdaptiveMaxPool2d((None, 7))
>>> input = torch.randn(1, 64, 10, 9)
>>> output = m(input)
-
class borch.nn.AdaptiveMaxPool3d(output_size: Union[int, None, Tuple[Optional[int], ...]], return_indices: bool = False)¶
This is a ppl class. Please see help(borch.nn) for more information. If one gives distributions as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.
Applies a 3D adaptive max pooling over an input signal composed of several input planes.
The output is of size \(D_{out} \times H_{out} \times W_{out}\), for any input size. The number of output features is equal to the number of input planes.
- Args:
- output_size: the target output size of the form \(D_{out} \times H_{out} \times W_{out}\). Can be a tuple \((D_{out}, H_{out}, W_{out})\) or a single \(D_{out}\) for a cube \(D_{out} \times D_{out} \times D_{out}\). \(D_{out}\), \(H_{out}\) and \(W_{out}\) can be either an int, or None, which means the size will be the same as that of the input.
- return_indices: if True, will return the indices along with the outputs. Useful to pass to nn.MaxUnpool3d. Default: False
- Shape:
Input: \((N, C, D_{in}, H_{in}, W_{in})\) or \((C, D_{in}, H_{in}, W_{in})\).
Output: \((N, C, D_{out}, H_{out}, W_{out})\) or \((C, D_{out}, H_{out}, W_{out})\), where \((D_{out}, H_{out}, W_{out})=\text{output\_size}\).
- Examples:
>>> # target output size of 5x7x9
>>> m = nn.AdaptiveMaxPool3d((5, 7, 9))
>>> input = torch.randn(1, 64, 8, 9, 10)
>>> output = m(input)
>>> # target output size of 7x7x7 (cube)
>>> m = nn.AdaptiveMaxPool3d(7)
>>> input = torch.randn(1, 64, 10, 9, 8)
>>> output = m(input)
>>> # target output size of 7x9x8
>>> m = nn.AdaptiveMaxPool3d((7, None, None))
>>> input = torch.randn(1, 64, 10, 9, 8)
>>> output = m(input)
-
class borch.nn.AlphaDropout(p: float = 0.5, inplace: bool = False)¶
Applies Alpha Dropout over the input.
Alpha Dropout is a type of Dropout that maintains the self-normalizing property. For an input with zero mean and unit standard deviation, the output of Alpha Dropout maintains the original mean and standard deviation of the input. Alpha Dropout goes hand-in-hand with SELU activation function, which ensures that the outputs have zero mean and unit standard deviation.
During training, it randomly masks some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. The elements to be masked are randomized on every forward call, and are scaled and shifted to maintain zero mean and unit standard deviation.
During evaluation the module simply computes an identity function.
More details can be found in the paper Self-Normalizing Neural Networks .
- Parameters
p (float) – probability of an element to be dropped. Default: 0.5
inplace (bool, optional) – If set to True, will do this operation in-place
- Shape:
Input: \((*)\). Input can be of any shape
Output: \((*)\). Output is of the same shape as input
Examples:
>>> m = nn.AlphaDropout(p=0.2)
>>> input = torch.randn(20, 16)
>>> output = m(input)
-
forward(input: torch.Tensor) → torch.Tensor¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
-
class borch.nn.AvgPool1d(kernel_size: Union[int, Tuple[int]], stride: Union[int, Tuple[int]] = None, padding: Union[int, Tuple[int]] = 0, ceil_mode: bool = False, count_include_pad: bool = True)¶
This is a ppl class. Please see help(borch.nn) for more information. If one gives distributions as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.
Applies a 1D average pooling over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C, L)\), output \((N, C, L_{out})\) and kernel_size \(k\) can be precisely described as:
\[\text{out}(N_i, C_j, l) = \frac{1}{k} \sum_{m=0}^{k-1} \text{input}(N_i, C_j, \text{stride} \times l + m)\]
If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points.
- Note:
When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored.
The parameters kernel_size, stride, padding can each be an int or a one-element tuple.
- Args:
kernel_size: the size of the window
stride: the stride of the window. Default value is kernel_size
padding: implicit zero padding to be added on both sides
ceil_mode: when True, will use ceil instead of floor to compute the output shape
count_include_pad: when True, will include the zero-padding in the averaging calculation
- Shape:
Input: \((N, C, L_{in})\) or \((C, L_{in})\).
Output: \((N, C, L_{out})\) or \((C, L_{out})\), where
\[L_{out} = \left\lfloor \frac{L_{in} + 2 \times \text{padding} - \text{kernel\_size}}{\text{stride}} + 1\right\rfloor\]
Examples:
>>> # pool with window of size=3, stride=2
>>> m = nn.AvgPool1d(3, stride=2)
>>> m(torch.tensor([[[1., 2, 3, 4, 5, 6, 7]]]))
tensor([[[ 2., 4., 6.]]])
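The output-length formula above is easy to check in plain Python (avg_pool1d_out_len is a hypothetical helper for illustration, not torch API); with the example's L_in = 7, kernel_size = 3 and stride = 2 it reproduces the 3-element output:

```python
import math

def avg_pool1d_out_len(l_in, kernel_size, stride, padding=0):
    # L_out = floor((L_in + 2*padding - kernel_size) / stride + 1)
    return math.floor((l_in + 2 * padding - kernel_size) / stride + 1)

print(avg_pool1d_out_len(7, kernel_size=3, stride=2))  # 3
```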
-
class borch.nn.AvgPool2d(kernel_size: Union[int, Tuple[int, int]], stride: Union[int, Tuple[int, int], None] = None, padding: Union[int, Tuple[int, int]] = 0, ceil_mode: bool = False, count_include_pad: bool = True, divisor_override: Optional[int] = None)¶
This is a ppl class. Please see help(borch.nn) for more information. If one gives distributions as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.
Applies a 2D average pooling over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C, H, W)\), output \((N, C, H_{out}, W_{out})\) and kernel_size \((kH, kW)\) can be precisely described as:
\[out(N_i, C_j, h, w) = \frac{1}{kH * kW} \sum_{m=0}^{kH-1} \sum_{n=0}^{kW-1} input(N_i, C_j, stride[0] \times h + m, stride[1] \times w + n)\]
If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points.
- Note:
When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored.
The parameters kernel_size, stride, padding can either be:
a single int – in which case the same value is used for the height and width dimension
a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
- Args:
kernel_size: the size of the window
stride: the stride of the window. Default value is kernel_size
padding: implicit zero padding to be added on both sides
ceil_mode: when True, will use ceil instead of floor to compute the output shape
count_include_pad: when True, will include the zero-padding in the averaging calculation
divisor_override: if specified, it will be used as divisor, otherwise size of the pooling region will be used.
- Shape:
Input: \((N, C, H_{in}, W_{in})\) or \((C, H_{in}, W_{in})\).
Output: \((N, C, H_{out}, W_{out})\) or \((C, H_{out}, W_{out})\), where
\[H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[0] - \text{kernel\_size}[0]}{\text{stride}[0]} + 1\right\rfloor\]
\[W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[1] - \text{kernel\_size}[1]}{\text{stride}[1]} + 1\right\rfloor\]
Examples:
>>> # pool of square window of size=3, stride=2
>>> m = nn.AvgPool2d(3, stride=2)
>>> # pool of non-square window
>>> m = nn.AvgPool2d((3, 2), stride=(2, 1))
>>> input = torch.randn(20, 16, 50, 32)
>>> output = m(input)
-
class borch.nn.AvgPool3d(kernel_size: Union[int, Tuple[int, int, int]], stride: Union[int, Tuple[int, int, int], None] = None, padding: Union[int, Tuple[int, int, int]] = 0, ceil_mode: bool = False, count_include_pad: bool = True, divisor_override: Optional[int] = None)¶
This is a ppl class. Please see help(borch.nn) for more information. If one gives distributions as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.
Applies a 3D average pooling over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C, D, H, W)\), output \((N, C, D_{out}, H_{out}, W_{out})\) and kernel_size \((kD, kH, kW)\) can be precisely described as:
\[\begin{split}\begin{aligned} \text{out}(N_i, C_j, d, h, w) ={} & \sum_{k=0}^{kD-1} \sum_{m=0}^{kH-1} \sum_{n=0}^{kW-1} \\ & \frac{\text{input}(N_i, C_j, \text{stride}[0] \times d + k, \text{stride}[1] \times h + m, \text{stride}[2] \times w + n)} {kD \times kH \times kW} \end{aligned}\end{split}\]
If padding is non-zero, then the input is implicitly zero-padded on all three sides for padding number of points.
- Note:
When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored.
The parameters kernel_size, stride can either be:
a single int – in which case the same value is used for the depth, height and width dimension
a tuple of three ints – in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension
- Args:
kernel_size: the size of the window
stride: the stride of the window. Default value is kernel_size
padding: implicit zero padding to be added on all three sides
ceil_mode: when True, will use ceil instead of floor to compute the output shape
count_include_pad: when True, will include the zero-padding in the averaging calculation
divisor_override: if specified, it will be used as divisor, otherwise kernel_size will be used
- Shape:
Input: \((N, C, D_{in}, H_{in}, W_{in})\) or \((C, D_{in}, H_{in}, W_{in})\).
Output: \((N, C, D_{out}, H_{out}, W_{out})\) or \((C, D_{out}, H_{out}, W_{out})\), where
\[D_{out} = \left\lfloor\frac{D_{in} + 2 \times \text{padding}[0] - \text{kernel\_size}[0]}{\text{stride}[0]} + 1\right\rfloor\]
\[H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[1] - \text{kernel\_size}[1]}{\text{stride}[1]} + 1\right\rfloor\]
\[W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[2] - \text{kernel\_size}[2]}{\text{stride}[2]} + 1\right\rfloor\]
Examples:
>>> # pool of square window of size=3, stride=2
>>> m = nn.AvgPool3d(3, stride=2)
>>> # pool of non-square window
>>> m = nn.AvgPool3d((3, 2, 2), stride=(2, 1, 2))
>>> input = torch.randn(20, 16, 50, 44, 31)
>>> output = m(input)
-
class borch.nn.BCELoss(weight: Optional[torch.Tensor] = None, size_average=None, reduce=None, reduction: str = 'mean')¶
Creates a criterion that measures the Binary Cross Entropy between the target and the input probabilities:
The unreduced (i.e. with reduction set to 'none') loss can be described as:
\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log (1 - x_n) \right],\]
where \(N\) is the batch size. If reduction is not 'none' (default 'mean'), then
\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]
This is used for measuring the error of a reconstruction, for example in an auto-encoder. Note that the targets \(y\) should be numbers between 0 and 1.
Notice that if \(x_n\) is either 0 or 1, one of the log terms would be mathematically undefined in the above loss equation. PyTorch chooses to set \(\log (0) = -\infty\), since \(\lim_{x\to 0} \log (x) = -\infty\). However, an infinite term in the loss equation is not desirable for several reasons.
For one, if either \(y_n = 0\) or \((1 - y_n) = 0\), then we would be multiplying 0 with infinity. Secondly, if we have an infinite loss value, then we would also have an infinite term in our gradient, since \(\lim_{x\to 0} \frac{d}{dx} \log (x) = \infty\). This would make BCELoss’s backward method nonlinear with respect to \(x_n\), and using it for things like linear regression would not be straight-forward.
Our solution is that BCELoss clamps its log function outputs to be greater than or equal to -100. This way, we can always have a finite loss value and a linear backward method.
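The effect of the clamp can be sketched in plain Python (clamped_bce is a hypothetical helper mirroring the formula above, not the actual implementation):

```python
import math

def clamped_bce(x, y, clamp=-100.0):
    # Each log term is clamped to be >= -100, so the loss stays finite
    # even when the input probability x hits exactly 0 or 1.
    log_x = max(math.log(x), clamp) if x > 0 else clamp
    log_1mx = max(math.log(1 - x), clamp) if x < 1 else clamp
    return -(y * log_x + (1 - y) * log_1mx)

print(clamped_bce(0.0, 1.0))  # 100.0 rather than inf
```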
- Parameters
weight (Tensor, optional) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size nbatch.
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Target: \((*)\), same shape as the input.
Output: scalar. If reduction is 'none', then \((*)\), same shape as input.
Examples:
>>> m = nn.Sigmoid()
>>> loss = nn.BCELoss()
>>> input = torch.randn(3, requires_grad=True)
>>> target = torch.empty(3).random_(2)
>>> output = loss(m(input), target)
>>> output.backward()
-
forward(input: torch.Tensor, target: torch.Tensor) → torch.Tensor¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
-
class borch.nn.BCEWithLogitsLoss(weight: Optional[torch.Tensor] = None, size_average=None, reduce=None, reduction: str = 'mean', pos_weight: Optional[torch.Tensor] = None)¶
This loss combines a Sigmoid layer and the BCELoss in one single class. This version is more numerically stable than using a plain Sigmoid followed by a BCELoss as, by combining the operations into one layer, we take advantage of the log-sum-exp trick for numerical stability.
The unreduced (i.e. with reduction set to 'none') loss can be described as:
\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_n \left[ y_n \cdot \log \sigma(x_n) + (1 - y_n) \cdot \log (1 - \sigma(x_n)) \right],\]
where \(N\) is the batch size. If reduction is not 'none' (default 'mean'), then
\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]
This is used for measuring the error of a reconstruction, for example in an auto-encoder. Note that the targets t[i] should be numbers between 0 and 1.
It’s possible to trade off recall and precision by adding weights to positive examples. In the case of multi-label classification the loss can be described as:
\[\ell_c(x, y) = L_c = \{l_{1,c},\dots,l_{N,c}\}^\top, \quad l_{n,c} = - w_{n,c} \left[ p_c y_{n,c} \cdot \log \sigma(x_{n,c}) + (1 - y_{n,c}) \cdot \log (1 - \sigma(x_{n,c})) \right],\]where \(c\) is the class number (\(c > 1\) for multi-label binary classification, \(c = 1\) for single-label binary classification), \(n\) is the number of the sample in the batch and \(p_c\) is the weight of the positive answer for the class \(c\).
\(p_c > 1\) increases the recall, \(p_c < 1\) increases the precision.
For example, if a dataset contains 100 positive and 300 negative examples of a single class, then pos_weight for the class should be equal to \(\frac{300}{100}=3\). The loss would act as if the dataset contains \(3\times 100=300\) positive examples.
Examples:
>>> target = torch.ones([10, 64], dtype=torch.float32)  # 64 classes, batch size = 10
>>> output = torch.full([10, 64], 1.5)  # A prediction (logit)
>>> pos_weight = torch.ones([64])  # All weights are equal to 1
>>> criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)
>>> criterion(output, target)  # -log(sigmoid(1.5))
tensor(0.2014)
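For a single scalar logit, the log-sum-exp trick mentioned above amounts to rewriting \(-[y \log \sigma(x) + (1 - y)\log(1 - \sigma(x))]\) as \(\max(x, 0) - x y + \log(1 + e^{-|x|})\), which never exponentiates a large positive number. A plain-Python sketch (bce_with_logits is a hypothetical helper, not the torch kernel):

```python
import math

def bce_with_logits(x, y):
    # Numerically stable rewrite: max(x, 0) - x*y + log1p(exp(-|x|)).
    # exp() is only ever called on a non-positive argument.
    return max(x, 0.0) - x * y + math.log1p(math.exp(-abs(x)))

print(round(bce_with_logits(1.5, 1.0), 4))  # 0.2014
```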
- Parameters
weight (Tensor, optional) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size nbatch.
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
pos_weight (Tensor, optional) – a weight of positive examples. Must be a vector with length equal to the number of classes.
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Target: \((*)\), same shape as the input.
Output: scalar. If reduction is 'none', then \((*)\), same shape as input.
Examples:
>>> loss = nn.BCEWithLogitsLoss()
>>> input = torch.randn(3, requires_grad=True)
>>> target = torch.empty(3).random_(2)
>>> output = loss(input, target)
>>> output.backward()
-
forward(input: torch.Tensor, target: torch.Tensor) → torch.Tensor¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
-
class borch.nn.BatchNorm1d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)¶
Applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.
\[y = \frac{x - \mathrm{E}[x]}{\sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]
The mean and standard-deviation are calculated per-dimension over the mini-batches and \(\gamma\) and \(\beta\) are learnable parameter vectors of size C (where C is the input size). By default, the elements of \(\gamma\) are set to 1 and the elements of \(\beta\) are set to 0. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).
Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default momentum of 0.1.
If track_running_stats is set to False, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.
Note
This momentum argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \(\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t\), where \(\hat{x}\) is the estimated statistic and \(x_t\) is the new observed value.
Because the Batch Normalization is done over the C dimension, computing statistics on (N, L) slices, it's common terminology to call this Temporal Batch Normalization.
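The update rule in the Note is an exponential moving average, which is easy to sketch in plain Python (update_running_stat is a hypothetical helper for illustration, not borch API):

```python
def update_running_stat(running, batch_stat, momentum=0.1):
    # x_hat_new = (1 - momentum) * x_hat + momentum * x_t
    return (1 - momentum) * running + momentum * batch_stat

# Feeding the same batch statistic repeatedly moves the running
# estimate toward it at a rate set by momentum.
running_mean = 0.0
for batch_mean in [1.0, 1.0, 1.0]:
    running_mean = update_running_stat(running_mean, batch_mean)
print(round(running_mean, 3))  # 0.271
```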
- Parameters
num_features – \(C\) from an expected input of size \((N, C, L)\) or \(L\) from input of size \((N, L)\)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1
affine – a boolean value that when set to True, this module has learnable affine parameters. Default: True
track_running_stats – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics and initializes statistics buffers running_mean and running_var as None. When these buffers are None, this module always uses batch statistics in both training and eval modes. Default: True
- Shape:
Input: \((N, C)\) or \((N, C, L)\)
Output: \((N, C)\) or \((N, C, L)\) (same shape as input)
Examples:
>>> # With Learnable Parameters
>>> m = nn.BatchNorm1d(100)
>>> # Without Learnable Parameters
>>> m = nn.BatchNorm1d(100, affine=False)
>>> input = torch.randn(20, 100)
>>> output = m(input)
- class borch.nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)¶
Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with an additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.
\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]
The mean and standard-deviation are calculated per-dimension over the mini-batches, and \(\gamma\) and \(\beta\) are learnable parameter vectors of size C (where C is the input size). By default, the elements of \(\gamma\) are set to 1 and the elements of \(\beta\) are set to 0. The standard deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).
Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default momentum of 0.1. If track_running_stats is set to False, this layer does not keep running estimates, and batch statistics are instead used during evaluation as well.
Note
This momentum argument is different from the one used in optimizer classes and from the conventional notion of momentum. Mathematically, the update rule for running statistics here is \(\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t\), where \(\hat{x}\) is the estimated statistic and \(x_t\) is the new observed value.
Because the Batch Normalization is done over the C dimension, computing statistics on (N, H, W) slices, it is common terminology to call this Spatial Batch Normalization.
- Parameters
num_features – \(C\) from an expected input of size \((N, C, H, W)\)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1
affine – a boolean value that when set to True, this module has learnable affine parameters. Default: True
track_running_stats – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics and initializes statistics buffers running_mean and running_var as None. When these buffers are None, this module always uses batch statistics in both training and eval modes. Default: True
- Shape:
Input: \((N, C, H, W)\)
Output: \((N, C, H, W)\) (same shape as input)
Examples:
>>> # With Learnable Parameters
>>> m = nn.BatchNorm2d(100)
>>> # Without Learnable Parameters
>>> m = nn.BatchNorm2d(100, affine=False)
>>> input = torch.randn(20, 100, 35, 45)
>>> output = m(input)
- class borch.nn.BatchNorm3d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)¶
Applies Batch Normalization over a 5D input (a mini-batch of 3D inputs with an additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.
\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]
The mean and standard-deviation are calculated per-dimension over the mini-batches, and \(\gamma\) and \(\beta\) are learnable parameter vectors of size C (where C is the input size). By default, the elements of \(\gamma\) are set to 1 and the elements of \(\beta\) are set to 0. The standard deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).
Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default momentum of 0.1. If track_running_stats is set to False, this layer does not keep running estimates, and batch statistics are instead used during evaluation as well.
Note
This momentum argument is different from the one used in optimizer classes and from the conventional notion of momentum. Mathematically, the update rule for running statistics here is \(\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t\), where \(\hat{x}\) is the estimated statistic and \(x_t\) is the new observed value.
Because the Batch Normalization is done over the C dimension, computing statistics on (N, D, H, W) slices, it is common terminology to call this Volumetric Batch Normalization or Spatio-temporal Batch Normalization.
- Parameters
num_features – \(C\) from an expected input of size \((N, C, D, H, W)\)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1
affine – a boolean value that when set to True, this module has learnable affine parameters. Default: True
track_running_stats – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics and initializes statistics buffers running_mean and running_var as None. When these buffers are None, this module always uses batch statistics in both training and eval modes. Default: True
- Shape:
Input: \((N, C, D, H, W)\)
Output: \((N, C, D, H, W)\) (same shape as input)
Examples:
>>> # With Learnable Parameters
>>> m = nn.BatchNorm3d(100)
>>> # Without Learnable Parameters
>>> m = nn.BatchNorm3d(100, affine=False)
>>> input = torch.randn(20, 100, 35, 45, 10)
>>> output = m(input)
- class borch.nn.Bilinear(in1_features: int, in2_features: int, out_features: int, bias: bool = True, device=None, dtype=None)¶
This is a ppl class. Please see help(borch.nn) for more information. If one gives distributions as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.
- Applies a bilinear transformation to the incoming data:
\(y = x_1^T A x_2 + b\)
- Args:
in1_features: size of each first input sample
in2_features: size of each second input sample
out_features: size of each output sample
bias: If set to False, the layer will not learn an additive bias. Default: True
- Shape:
Input1: \((N, *, H_{in1})\) where \(H_{in1}=\text{in1\_features}\) and \(*\) means any number of additional dimensions. All but the last dimension of the inputs should be the same.
Input2: \((N, *, H_{in2})\) where \(H_{in2}=\text{in2\_features}\).
Output: \((N, *, H_{out})\) where \(H_{out}=\text{out\_features}\) and all but the last dimension are the same shape as the input.
- Attributes:
- weight: the learnable weights of the module of shape \((\text{out\_features}, \text{in1\_features}, \text{in2\_features})\). The values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where \(k = \frac{1}{\text{in1\_features}}\)
- bias: the learnable bias of the module of shape \((\text{out\_features})\). If bias is True, the values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where \(k = \frac{1}{\text{in1\_features}}\)
Examples:
>>> m = nn.Bilinear(20, 30, 40)
>>> input1 = torch.randn(128, 20)
>>> input2 = torch.randn(128, 30)
>>> output = m(input1, input2)
>>> print(output.size())
torch.Size([128, 40])
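As a sanity check on the formula \(y = x_1^T A x_2 + b\), here is a minimal pure-Python sketch of a single bilinear output unit (our own illustrative code and names, not the borch implementation, which vectorizes this over all output features):

```python
def bilinear_unit(x1, x2, A, b):
    """Compute y = x1^T A x2 + b for one output unit.
    x1: length-m list, x2: length-n list, A: m x n matrix (list of lists)."""
    return sum(x1[i] * A[i][j] * x2[j]
               for i in range(len(x1))
               for j in range(len(x2))) + b

# Tiny example with an identity-like weight matrix and bias 0.5:
y = bilinear_unit([1.0, 2.0], [3.0, 4.0], [[1.0, 0.0], [0.0, 1.0]], 0.5)
# y = 1*3 + 2*4 + 0.5 = 11.5
```

In the module itself, A is one slice of the \((\text{out\_features}, \text{in1\_features}, \text{in2\_features})\) weight tensor.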
- class borch.nn.CELU(alpha: float = 1.0, inplace: bool = False)¶
Applies the element-wise function:
\[\text{CELU}(x) = \max(0,x) + \min(0, \alpha * (\exp(x/\alpha) - 1))\]
More details can be found in the paper Continuously Differentiable Exponential Linear Units.
- Parameters
alpha – the \(\alpha\) value for the CELU formulation. Default: 1.0
inplace – can optionally do the operation in-place. Default: False
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.CELU()
>>> input = torch.randn(2)
>>> output = m(input)
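The CELU formula above is simple enough to evaluate directly. A scalar pure-Python sketch (our own helper, not the tensor implementation):

```python
import math

def celu(x, alpha=1.0):
    """CELU(x) = max(0, x) + min(0, alpha * (exp(x / alpha) - 1))."""
    return max(0.0, x) + min(0.0, alpha * (math.exp(x / alpha) - 1.0))

# Positive inputs pass through unchanged; large negative inputs
# saturate at -alpha, which is what makes the function bounded below.
print(celu(2.0))    # 2.0
print(celu(-1e9))   # -1.0 for alpha = 1.0
```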
- extra_repr() → str¶
Set the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- forward(input: torch.Tensor) → torch.Tensor¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class borch.nn.CTCLoss(blank: int = 0, reduction: str = 'mean', zero_infinity: bool = False)¶
The Connectionist Temporal Classification loss.
Calculates loss between a continuous (unsegmented) time series and a target sequence. CTCLoss sums over the probability of possible alignments of input to target, producing a loss value which is differentiable with respect to each input node. The alignment of input to target is assumed to be “many-to-one”, which limits the length of the target sequence such that it must be \(\leq\) the input length.
- Parameters
blank (int, optional) – blank label. Default \(0\).
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the output losses will be divided by the target lengths and then the mean over the batch is taken. Default: 'mean'
zero_infinity (bool, optional) – Whether to zero infinite losses and the associated gradients. Default: False
Infinite losses mainly occur when the inputs are too short to be aligned to the targets.
- Shape:
Log_probs: Tensor of size \((T, N, C)\), where \(T = \text{input length}\), \(N = \text{batch size}\), and \(C = \text{number of classes (including blank)}\). The logarithmized probabilities of the outputs (e.g. obtained with torch.nn.functional.log_softmax()).
Targets: Tensor of size \((N, S)\) or \((\operatorname{sum}(\text{target\_lengths}))\), where \(N = \text{batch size}\) and \(S = \text{max target length, if shape is } (N, S)\). It represents the target sequences. Each element in the target sequence is a class index, and the target index cannot be blank (default=0). In the \((N, S)\) form, targets are padded to the length of the longest sequence and stacked. In the \((\operatorname{sum}(\text{target\_lengths}))\) form, the targets are assumed to be un-padded and concatenated within 1 dimension.
Input_lengths: Tuple or tensor of size \((N)\), where \(N = \text{batch size}\). It represents the lengths of the inputs (must each be \(\leq T\)). The lengths are specified for each sequence to achieve masking under the assumption that sequences are padded to equal lengths.
Target_lengths: Tuple or tensor of size \((N)\), where \(N = \text{batch size}\). It represents the lengths of the targets. Lengths are specified for each sequence to achieve masking under the assumption that sequences are padded to equal lengths. If the target shape is \((N, S)\), target_lengths are effectively the stop index \(s_n\) for each target sequence, such that target_n = targets[n,0:s_n] for each target in a batch. Lengths must each be \(\leq S\). If the targets are given as a 1d tensor that is the concatenation of individual targets, the target_lengths must add up to the total length of the tensor.
Output: scalar. If reduction is 'none', then \((N)\), where \(N = \text{batch size}\).
Examples:
>>> # Targets are to be padded
>>> T = 50      # Input sequence length
>>> C = 20      # Number of classes (including blank)
>>> N = 16      # Batch size
>>> S = 30      # Target sequence length of longest target in batch (padding length)
>>> S_min = 10  # Minimum target length, for demonstration purposes
>>>
>>> # Initialize random batch of input vectors, for *size = (T,N,C)
>>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()
>>>
>>> # Initialize random batch of targets (0 = blank, 1:C = classes)
>>> target = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long)
>>>
>>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
>>> target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long)
>>> ctc_loss = nn.CTCLoss()
>>> loss = ctc_loss(input, target, input_lengths, target_lengths)
>>> loss.backward()
>>>
>>> # Targets are to be un-padded
>>> T = 50      # Input sequence length
>>> C = 20      # Number of classes (including blank)
>>> N = 16      # Batch size
>>>
>>> # Initialize random batch of input vectors, for *size = (T,N,C)
>>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()
>>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
>>>
>>> # Initialize random batch of targets (0 = blank, 1:C = classes)
>>> target_lengths = torch.randint(low=1, high=T, size=(N,), dtype=torch.long)
>>> target = torch.randint(low=1, high=C, size=(sum(target_lengths),), dtype=torch.long)
>>> ctc_loss = nn.CTCLoss()
>>> loss = ctc_loss(input, target, input_lengths, target_lengths)
>>> loss.backward()
- Reference:
A. Graves et al.: Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks: https://www.cs.toronto.edu/~graves/icml_2006.pdf
Note
In order to use CuDNN, the following must be satisfied:
targets must be in concatenated format, all input_lengths must be T, \(blank=0\), target_lengths \(\leq 256\), and the integer arguments must be of dtype torch.int32.
The regular implementation uses the (more common in PyTorch) torch.long dtype.
Note
In some circumstances when using the CUDA backend with CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. Please see the notes on /notes/randomness for background.
- forward(log_probs: torch.Tensor, targets: torch.Tensor, input_lengths: torch.Tensor, target_lengths: torch.Tensor) → torch.Tensor¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class borch.nn.ChannelShuffle(groups: int)¶
Divide the channels in a tensor of shape \((*, C, H, W)\) into g groups and rearrange them as \((*, \frac{C}{g}, g, H, W)\), while keeping the original tensor shape.
- Parameters
groups (int) – number of groups to divide channels in.
Examples:
>>> channel_shuffle = nn.ChannelShuffle(2)
>>> input = torch.randn(1, 4, 2, 2)
>>> print(input)
[[[[1, 2],
   [3, 4]],
  [[5, 6],
   [7, 8]],
  [[9, 10],
   [11, 12]],
  [[13, 14],
   [15, 16]],
]]
>>> output = channel_shuffle(input)
>>> print(output)
[[[[1, 2],
   [3, 4]],
  [[9, 10],
   [11, 12]],
  [[5, 6],
   [7, 8]],
  [[13, 14],
   [15, 16]],
]]
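The group-then-transpose rearrangement behind channel shuffle can be sketched in plain Python on a flat list of channels (our own illustration; the module operates on tensors instead):

```python
def channel_shuffle(channels, groups):
    """Split a flat list of channels into `groups` contiguous groups,
    then transpose the (groups, per_group) grid and flatten it back,
    so channel (g, i) in the grouped view ends up at position (i, g)."""
    per_group = len(channels) // groups
    grouped = [channels[g * per_group:(g + 1) * per_group] for g in range(groups)]
    return [grouped[g][i] for i in range(per_group) for g in range(groups)]

# Four channels, two groups: [c0, c1, c2, c3] -> [c0, c2, c1, c3],
# matching the channel ordering in the example above.
print(channel_shuffle(["c0", "c1", "c2", "c3"], 2))
```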
- extra_repr() → str¶
Set the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- forward(input: torch.Tensor) → torch.Tensor¶
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class borch.nn.ConstantPad1d(padding: Union[int, Tuple[int, int]], value: float)¶
Pads the input tensor boundaries with a constant value.
For N-dimensional padding, use torch.nn.functional.pad().
- Parameters
padding (int, tuple) – the size of the padding. If is int, uses the same padding in both boundaries. If a 2-tuple, uses (\(\text{padding\_left}\), \(\text{padding\_right}\))
- Shape:
Input: \((C, W_{in})\) or \((N, C, W_{in})\).
Output: \((C, W_{out})\) or \((N, C, W_{out})\), where
\(W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}\)
Examples:
>>> m = nn.ConstantPad1d(2, 3.5)
>>> input = torch.randn(1, 2, 4)
>>> input
tensor([[[-1.0491, -0.7152, -0.0749,  0.8530],
         [-1.3287,  1.8966,  0.1466, -0.2771]]])
>>> m(input)
tensor([[[ 3.5000,  3.5000, -1.0491, -0.7152, -0.0749,  0.8530,  3.5000,  3.5000],
         [ 3.5000,  3.5000, -1.3287,  1.8966,  0.1466, -0.2771,  3.5000,  3.5000]]])
>>> m = nn.ConstantPad1d(2, 3.5)
>>> input = torch.randn(1, 2, 3)
>>> input
tensor([[[ 1.6616,  1.4523, -1.1255],
         [-3.6372,  0.1182, -1.8652]]])
>>> m(input)
tensor([[[ 3.5000,  3.5000,  1.6616,  1.4523, -1.1255,  3.5000,  3.5000],
         [ 3.5000,  3.5000, -3.6372,  0.1182, -1.8652,  3.5000,  3.5000]]])
>>> # using different paddings for different sides
>>> m = nn.ConstantPad1d((3, 1), 3.5)
>>> m(input)
tensor([[[ 3.5000,  3.5000,  3.5000,  1.6616,  1.4523, -1.1255,  3.5000],
         [ 3.5000,  3.5000,  3.5000, -3.6372,  0.1182, -1.8652,  3.5000]]])
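The padding semantics, including the \(W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}\) shape rule, can be sketched for a single 1-D sequence in plain Python (our own helper mirroring the nn.ConstantPad1d argument convention, not the tensor implementation):

```python
def constant_pad_1d(seq, padding, value):
    """Pad a 1-D sequence with a constant. `padding` is an int (same on
    both sides) or a (left, right) pair, as in nn.ConstantPad1d."""
    if isinstance(padding, int):
        left = right = padding
    else:
        left, right = padding
    return [value] * left + list(seq) + [value] * right

print(constant_pad_1d([1.0, 2.0], 2, 3.5))       # [3.5, 3.5, 1.0, 2.0, 3.5, 3.5]
print(constant_pad_1d([1.0, 2.0], (3, 1), 3.5))  # [3.5, 3.5, 3.5, 1.0, 2.0, 3.5]
```

In both cases the output length is the input length plus left plus right padding, as the shape rule states.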
- class borch.nn.ConstantPad2d(padding: Union[int, Tuple[int, int, int, int]], value: float)¶
Pads the input tensor boundaries with a constant value.
For N-dimensional padding, use torch.nn.functional.pad().
- Parameters
padding (int, tuple) – the size of the padding. If is int, uses the same padding in all boundaries. If a 4-tuple, uses (\(\text{padding\_left}\), \(\text{padding\_right}\), \(\text{padding\_top}\), \(\text{padding\_bottom}\))
- Shape:
Input: \((N, C, H_{in}, W_{in})\) or \((C, H_{in}, W_{in})\).
Output: \((N, C, H_{out}, W_{out})\) or \((C, H_{out}, W_{out})\), where
\(H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}\)
\(W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}\)
Examples:
>>> m = nn.ConstantPad2d(2, 3.5)
>>> input = torch.randn(1, 2, 2)
>>> input
tensor([[[ 1.6585,  0.4320],
         [-0.8701, -0.4649]]])
>>> m(input)
tensor([[[ 3.5000,  3.5000,  3.5000,  3.5000,  3.5000,  3.5000],
         [ 3.5000,  3.5000,  3.5000,  3.5000,  3.5000,  3.5000],
         [ 3.5000,  3.5000,  1.6585,  0.4320,  3.5000,  3.5000],
         [ 3.5000,  3.5000, -0.8701, -0.4649,  3.5000,  3.5000],
         [ 3.5000,  3.5000,  3.5000,  3.5000,  3.5000,  3.5000],
         [ 3.5000,  3.5000,  3.5000,  3.5000,  3.5000,  3.5000]]])
>>> # using different paddings for different sides
>>> m = nn.ConstantPad2d((3, 0, 2, 1), 3.5)
>>> m(input)
tensor([[[ 3.5000,  3.5000,  3.5000,  3.5000,  3.5000],
         [ 3.5000,  3.5000,  3.5000,  3.5000,  3.5000],
         [ 3.5000,  3.5000,  3.5000,  1.6585,  0.4320],
         [ 3.5000,  3.5000,  3.5000, -0.8701, -0.4649],
         [ 3.5000,  3.5000,  3.5000,  3.5000,  3.5000]]])
- class borch.nn.ConstantPad3d(padding: Union[int, Tuple[int, int, int, int, int, int]], value: float)¶
Pads the input tensor boundaries with a constant value.
For N-dimensional padding, use torch.nn.functional.pad().
- Parameters
padding (int, tuple) – the size of the padding. If is int, uses the same padding in all boundaries. If a 6-tuple, uses (\(\text{padding\_left}\), \(\text{padding\_right}\), \(\text{padding\_top}\), \(\text{padding\_bottom}\), \(\text{padding\_front}\), \(\text{padding\_back}\))
- Shape:
Input: \((N, C, D_{in}, H_{in}, W_{in})\) or \((C, D_{in}, H_{in}, W_{in})\).
Output: \((N, C, D_{out}, H_{out}, W_{out})\) or \((C, D_{out}, H_{out}, W_{out})\), where
\(D_{out} = D_{in} + \text{padding\_front} + \text{padding\_back}\)
\(H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}\)
\(W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}\)
Examples:
>>> m = nn.ConstantPad3d(3, 3.5)
>>> input = torch.randn(16, 3, 10, 20, 30)
>>> output = m(input)
>>> # using different paddings for different sides
>>> m = nn.ConstantPad3d((3, 3, 6, 6, 0, 1), 3.5)
>>> output = m(input)
- class borch.nn.Container(**kwargs: Any)¶
- class borch.nn.Conv1d(in_channels: int, out_channels: int, kernel_size: Union[int, Tuple[int]], stride: Union[int, Tuple[int]] = 1, padding: Union[str, int, Tuple[int]] = 0, dilation: Union[int, Tuple[int]] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros', device=None, dtype=None)¶
This is a ppl class. Please see help(borch.nn) for more information. If one gives distributions as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.
Applies a 1D convolution over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C_{\text{in}}, L)\) and output \((N, C_{\text{out}}, L_{\text{out}})\) can be precisely described as:
\[\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{in} - 1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)\]
where \(\star\) is the valid cross-correlation operator, \(N\) is a batch size, \(C\) denotes a number of channels, and \(L\) is a length of signal sequence.
This module supports TensorFloat32.
stride controls the stride for the cross-correlation, a single number or a one-element tuple.
padding controls the amount of padding applied to the input. It can be either a string {'valid', 'same'} or a tuple of ints giving the amount of implicit padding applied on both sides.
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,
At groups=1, all inputs are convolved to all outputs.
At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated.
At groups=in_channels, each input channel is convolved with its own set of filters (of size \(\frac{\text{out\_channels}}{\text{in\_channels}}\)).
- Note:
When groups == in_channels and out_channels == K * in_channels, where K is a positive integer, this operation is also known as a “depthwise convolution”.
In other words, for an input of size \((N, C_{in}, L_{in})\), a depthwise convolution with a depthwise multiplier K can be performed with the arguments \((C_\text{in}=C_\text{in}, C_\text{out}=C_\text{in} \times \text{K}, ..., \text{groups}=C_\text{in})\).
- Note:
In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See /notes/randomness for more information.
- Note:
padding='valid' is the same as no padding. padding='same' pads the input so the output has the same shape as the input. However, this mode doesn't support any stride values other than 1.
- Args:
in_channels (int): Number of channels in the input image
out_channels (int): Number of channels produced by the convolution
kernel_size (int or tuple): Size of the convolving kernel
stride (int or tuple, optional): Stride of the convolution. Default: 1
padding (int, tuple or str, optional): Padding added to both sides of the input. Default: 0
padding_mode (string, optional): 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
dilation (int or tuple, optional): Spacing between kernel elements. Default: 1
groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional): If True, adds a learnable bias to the output. Default: True
- Shape:
Input: \((N, C_{in}, L_{in})\)
Output: \((N, C_{out}, L_{out})\) where
\[L_{out} = \left\lfloor\frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernel\_size} - 1) - 1}{\text{stride}} + 1\right\rfloor\]
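The output-length formula above is easy to check with a small helper (illustrative code of ours, not part of the borch API):

```python
import math

def conv1d_out_length(l_in, kernel_size, stride=1, padding=0, dilation=1):
    """L_out = floor((L_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1)"""
    return math.floor((l_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1)

# For nn.Conv1d(16, 33, 3, stride=2) on a length-50 input:
print(conv1d_out_length(50, kernel_size=3, stride=2))  # 24
# Without stride, padding or dilation this reduces to L_in - kernel_size + 1:
print(conv1d_out_length(50, kernel_size=3))  # 48
```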
- Attributes:
- weight (Tensor): the learnable weights of the module of shape \((\text{out\_channels}, \frac{\text{in\_channels}}{\text{groups}}, \text{kernel\_size})\). The values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{in} * \text{kernel\_size}}\)
- bias (Tensor): the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{in} * \text{kernel\_size}}\)
Examples:
>>> m = nn.Conv1d(16, 33, 3, stride=2)
>>> input = torch.randn(20, 16, 50)
>>> output = m(input)
- class borch.nn.Conv2d(in_channels: int, out_channels: int, kernel_size: Union[int, Tuple[int, int]], stride: Union[int, Tuple[int, int]] = 1, padding: Union[str, int, Tuple[int, int]] = 0, dilation: Union[int, Tuple[int, int]] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros', device=None, dtype=None)¶
This is a ppl class. Please see help(borch.nn) for more information. If one gives distributions as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.
Applies a 2D convolution over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C_{\text{in}}, H, W)\) and output \((N, C_{\text{out}}, H_{\text{out}}, W_{\text{out}})\) can be precisely described as:
\[\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{\text{in}} - 1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)\]
where \(\star\) is the valid 2D cross-correlation operator, \(N\) is a batch size, \(C\) denotes a number of channels, \(H\) is a height of input planes in pixels, and \(W\) is width in pixels.
This module supports TensorFloat32.
stride controls the stride for the cross-correlation, a single number or a tuple.
padding controls the amount of padding applied to the input. It can be either a string {'valid', 'same'} or a tuple of ints giving the amount of implicit padding applied on both sides.
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,
At groups=1, all inputs are convolved to all outputs.
At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated.
At groups=in_channels, each input channel is convolved with its own set of filters (of size \(\frac{\text{out\_channels}}{\text{in\_channels}}\)).
The parameters kernel_size, stride, padding, dilation can either be:
a single int – in which case the same value is used for the height and width dimension
a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
- Note:
When groups == in_channels and out_channels == K * in_channels, where K is a positive integer, this operation is also known as a “depthwise convolution”.
In other words, for an input of size \((N, C_{in}, L_{in})\), a depthwise convolution with a depthwise multiplier K can be performed with the arguments \((C_\text{in}=C_\text{in}, C_\text{out}=C_\text{in} \times \text{K}, ..., \text{groups}=C_\text{in})\).
- Note:
In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See /notes/randomness for more information.
- Note:
padding='valid' is the same as no padding. padding='same' pads the input so the output has the same shape as the input. However, this mode doesn't support any stride values other than 1.
- Args:
in_channels (int): Number of channels in the input image
out_channels (int): Number of channels produced by the convolution
kernel_size (int or tuple): Size of the convolving kernel
stride (int or tuple, optional): Stride of the convolution. Default: 1
padding (int, tuple or str, optional): Padding added to all four sides of the input. Default: 0
padding_mode (string, optional): 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
dilation (int or tuple, optional): Spacing between kernel elements. Default: 1
groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional): If True, adds a learnable bias to the output. Default: True
- Shape:
Input: \((N, C_{in}, H_{in}, W_{in})\)
Output: \((N, C_{out}, H_{out}, W_{out})\) where
\[H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[0] - \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor\]\[W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[1] - \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor\]
- Attributes:
- weight (Tensor): the learnable weights of the module of shape \((\text{out\_channels}, \frac{\text{in\_channels}}{\text{groups}}, \text{kernel\_size[0]}, \text{kernel\_size[1]})\). The values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{in} * \prod_{i=0}^{1}\text{kernel\_size}[i]}\)
- bias (Tensor): the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{in} * \prod_{i=0}^{1}\text{kernel\_size}[i]}\)
Examples:
>>> # With square kernels and equal stride
>>> m = nn.Conv2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> # non-square kernels and unequal stride and with padding and dilation
>>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
>>> input = torch.randn(20, 16, 50, 100)
>>> output = m(input)
- class borch.nn.Conv3d(in_channels: int, out_channels: int, kernel_size: Union[int, Tuple[int, int, int]], stride: Union[int, Tuple[int, int, int]] = 1, padding: Union[str, int, Tuple[int, int, int]] = 0, dilation: Union[int, Tuple[int, int, int]] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros', device=None, dtype=None)¶
This is a ppl class. Please see help(borch.nn) for more information. If one gives distributions as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.
Applies a 3D convolution over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size \((N, C_{in}, D, H, W)\) and output \((N, C_{out}, D_{out}, H_{out}, W_{out})\) can be precisely described as:
\[out(N_i, C_{out_j}) = bias(C_{out_j}) + \sum_{k = 0}^{C_{in} - 1} weight(C_{out_j}, k) \star input(N_i, k)\]
where \(\star\) is the valid 3D cross-correlation operator.
This module supports TensorFloat32.
- stride controls the stride for the cross-correlation.
- padding controls the amount of padding applied to the input. It can be either a string {'valid', 'same'} or a tuple of ints giving the amount of implicit padding applied on both sides.
- dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
- groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,
  - At groups=1, all inputs are convolved to all outputs.
  - At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated.
  - At groups=in_channels, each input channel is convolved with its own set of filters (of size \(\frac{\text{out\_channels}}{\text{in\_channels}}\)).
The parameters kernel_size, stride, padding, dilation can either be:
- a single int – in which case the same value is used for the depth, height and width dimension
- a tuple of three ints – in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension
- Note:
When groups == in_channels and out_channels == K * in_channels, where K is a positive integer, this operation is also known as a “depthwise convolution”.
In other words, for an input of size \((N, C_{in}, L_{in})\), a depthwise convolution with a depthwise multiplier K can be performed with the arguments \((C_\text{in}=C_\text{in}, C_\text{out}=C_\text{in} \times \text{K}, ..., \text{groups}=C_\text{in})\).
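The depthwise case described above can be sketched concretely. This is a minimal illustration using torch.nn (which borch.nn mirrors); the channel counts and tensor sizes here are arbitrary example values, not taken from the original text:

```python
import torch
import torch.nn as nn

# Depthwise 3D convolution: groups == in_channels and
# out_channels == K * in_channels. Here in_channels = 16 and the
# depthwise multiplier K = 3 (arbitrary example values).
m = nn.Conv3d(in_channels=16, out_channels=48, kernel_size=3, groups=16)

x = torch.randn(2, 16, 8, 20, 20)  # (N, C_in, D, H, W)
y = m(x)                           # each input channel gets its own 3 filters
print(y.shape)                     # torch.Size([2, 48, 6, 18, 18])
```

Each of the 16 input channels is convolved independently with its own set of 3 filters, so no cross-channel mixing occurs.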
- Note:
  In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See /notes/randomness for more information.
- Note:
  padding='valid' is the same as no padding. padding='same' pads the input so the output has the same shape as the input. However, this mode doesn't support any stride values other than 1.
- Args:
  in_channels (int): Number of channels in the input image
  out_channels (int): Number of channels produced by the convolution
  kernel_size (int or tuple): Size of the convolving kernel
  stride (int or tuple, optional): Stride of the convolution. Default: 1
  padding (int, tuple or str, optional): Padding added to all six sides of the input. Default: 0
  padding_mode (string, optional): 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
  dilation (int or tuple, optional): Spacing between kernel elements. Default: 1
  groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1
  bias (bool, optional): If True, adds a learnable bias to the output. Default: True
- Shape:
Input: \((N, C_{in}, D_{in}, H_{in}, W_{in})\)
Output: \((N, C_{out}, D_{out}, H_{out}, W_{out})\) where
\[D_{out} = \left\lfloor\frac{D_{in} + 2 \times \text{padding}[0] - \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor\]
\[H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[1] - \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor\]
\[W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[2] - \text{dilation}[2] \times (\text{kernel\_size}[2] - 1) - 1}{\text{stride}[2]} + 1\right\rfloor\]
- Attributes:
- weight (Tensor): the learnable weights of the module of shape
\((\text{out\_channels}, \frac{\text{in\_channels}}{\text{groups}},\) \(\text{kernel\_size[0]}, \text{kernel\_size[1]}, \text{kernel\_size[2]})\). The values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{in} * \prod_{i=0}^{2}\text{kernel\_size}[i]}\)
- bias (Tensor): the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{in} * \prod_{i=0}^{2}\text{kernel\_size}[i]}\)
Examples:
>>> # With square kernels and equal stride
>>> m = nn.Conv3d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.Conv3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(4, 2, 0))
>>> input = torch.randn(20, 16, 10, 50, 100)
>>> output = m(input)
- class borch.nn.ConvTranspose1d(in_channels: int, out_channels: int, kernel_size: Union[int, Tuple[int]], stride: Union[int, Tuple[int]] = 1, padding: Union[int, Tuple[int]] = 0, output_padding: Union[int, Tuple[int]] = 0, groups: int = 1, bias: bool = True, dilation: Union[int, Tuple[int]] = 1, padding_mode: str = 'zeros', device=None, dtype=None)¶
  This is a ppl class. Please see help(borch.nn) for more information. If one gives distributions as kwargs whose names match the parameters of the Module, they will be used as priors for those parameters.
  Applies a 1D transposed convolution operator over an input image composed of several input planes.
This module can be seen as the gradient of Conv1d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation as it does not compute a true inverse of convolution). For more information, see the visualizations here and the Deconvolutional Networks paper.
This module supports TensorFloat32.
- stride controls the stride for the cross-correlation.
- padding controls the amount of implicit zero padding on both sides for dilation * (kernel_size - 1) - padding number of points. See note below for details.
- output_padding controls the additional size added to one side of the output shape. See note below for details.
- dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but the link here has a nice visualization of what dilation does.
- groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,
  - At groups=1, all inputs are convolved to all outputs.
  - At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated.
  - At groups=in_channels, each input channel is convolved with its own set of filters (of size \(\frac{\text{out\_channels}}{\text{in\_channels}}\)).
- Note:
  The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sides of the input. This is set so that when a Conv1d and a ConvTranspose1d are initialized with the same parameters, they are inverses of each other in regard to the input and output shapes. However, when stride > 1, Conv1d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find the output shape, but does not actually add zero-padding to the output.
- Note:
  In some circumstances when using the CUDA backend with CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. Please see the notes on /notes/randomness for background.
- Args:
  in_channels (int): Number of channels in the input image
  out_channels (int): Number of channels produced by the convolution
  kernel_size (int or tuple): Size of the convolving kernel
  stride (int or tuple, optional): Stride of the convolution. Default: 1
  padding (int or tuple, optional): dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of the input. Default: 0
  output_padding (int or tuple, optional): Additional size added to one side of the output shape. Default: 0
  groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1
  bias (bool, optional): If True, adds a learnable bias to the output. Default: True
  dilation (int or tuple, optional): Spacing between kernel elements. Default: 1
- Shape:
Input: \((N, C_{in}, L_{in})\)
Output: \((N, C_{out}, L_{out})\) where
\[L_{out} = (L_{in} - 1) \times \text{stride} - 2 \times \text{padding} + \text{dilation} \times (\text{kernel\_size} - 1) + \text{output\_padding} + 1\]
- Attributes:
- weight (Tensor): the learnable weights of the module of shape \((\text{in\_channels}, \frac{\text{out\_channels}}{\text{groups}}, \text{kernel\_size})\). The values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{out} * \text{kernel\_size}}\)
- bias (Tensor): the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{out} * \text{kernel\_size}}\)
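This docstring carries no Examples block. The shape relationship in the note above can be sketched as follows, using torch.nn (which borch.nn mirrors); the channel counts and lengths are arbitrary example values:

```python
import torch
import torch.nn as nn

# A Conv1d and a ConvTranspose1d initialized with the same parameters
# are inverses of each other in regard to input and output shapes.
down = nn.Conv1d(16, 16, kernel_size=3, stride=2, padding=1)
up = nn.ConvTranspose1d(16, 16, kernel_size=3, stride=2, padding=1)

x = torch.randn(4, 16, 12)       # (N, C_in, L_in)
h = down(x)                      # stride 2 halves the length: (4, 16, 6)

# Because stride > 1 maps several input lengths to the same output length,
# output_size resolves the ambiguity when going back up.
y = up(h, output_size=x.size())
print(y.shape)                   # torch.Size([4, 16, 12])
```

Without output_size, the transposed convolution would pick one of the valid output lengths according to the formula in the Shape section.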
- class borch.nn.ConvTranspose2d(in_channels: int, out_channels: int, kernel_size: Union[int, Tuple[int, int]], stride: Union[int, Tuple[int, int]] = 1, padding: Union[int, Tuple[int, int]] = 0, output_padding: Union[int, Tuple[int, int]] = 0, groups: int = 1, bias: bool = True, dilation: int = 1, padding_mode: str = 'zeros', device=None, dtype=None)¶
  This is a ppl class. Please see help(borch.nn) for more information. If one gives distributions as kwargs whose names match the parameters of the Module, they will be used as priors for those parameters.
  Applies a 2D transposed convolution operator over an input image composed of several input planes.
This module can be seen as the gradient of Conv2d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation as it does not compute a true inverse of convolution). For more information, see the visualizations here and the Deconvolutional Networks paper.
This module supports TensorFloat32.
- stride controls the stride for the cross-correlation.
- padding controls the amount of implicit zero padding on both sides for dilation * (kernel_size - 1) - padding number of points. See note below for details.
- output_padding controls the additional size added to one side of the output shape. See note below for details.
- dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but the link here has a nice visualization of what dilation does.
- groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,
  - At groups=1, all inputs are convolved to all outputs.
  - At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated.
  - At groups=in_channels, each input channel is convolved with its own set of filters (of size \(\frac{\text{out\_channels}}{\text{in\_channels}}\)).
The parameters kernel_size, stride, padding, output_padding can either be:
- a single int – in which case the same value is used for the height and width dimensions
- a tuple of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
- Note:
  The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sides of the input. This is set so that when a Conv2d and a ConvTranspose2d are initialized with the same parameters, they are inverses of each other in regard to the input and output shapes. However, when stride > 1, Conv2d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find the output shape, but does not actually add zero-padding to the output.
- Note:
  In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See /notes/randomness for more information.
- Args:
  in_channels (int): Number of channels in the input image
  out_channels (int): Number of channels produced by the convolution
  kernel_size (int or tuple): Size of the convolving kernel
  stride (int or tuple, optional): Stride of the convolution. Default: 1
  padding (int or tuple, optional): dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Default: 0
  output_padding (int or tuple, optional): Additional size added to one side of each dimension in the output shape. Default: 0
  groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1
  bias (bool, optional): If True, adds a learnable bias to the output. Default: True
  dilation (int or tuple, optional): Spacing between kernel elements. Default: 1
- Shape:
Input: \((N, C_{in}, H_{in}, W_{in})\)
Output: \((N, C_{out}, H_{out}, W_{out})\) where
\[H_{out} = (H_{in} - 1) \times \text{stride}[0] - 2 \times \text{padding}[0] + \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) + \text{output\_padding}[0] + 1\]
\[W_{out} = (W_{in} - 1) \times \text{stride}[1] - 2 \times \text{padding}[1] + \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) + \text{output\_padding}[1] + 1\]
- Attributes:
- weight (Tensor): the learnable weights of the module of shape
\((\text{in\_channels}, \frac{\text{out\_channels}}{\text{groups}},\) \(\text{kernel\_size[0]}, \text{kernel\_size[1]})\). The values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{out} * \prod_{i=0}^{1}\text{kernel\_size}[i]}\)
- bias (Tensor): the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{out} * \prod_{i=0}^{1}\text{kernel\_size}[i]}\)
Examples:
>>> # With square kernels and equal stride
>>> m = nn.ConvTranspose2d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.ConvTranspose2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
>>> input = torch.randn(20, 16, 50, 100)
>>> output = m(input)
>>> # exact output size can also be specified as an argument
>>> input = torch.randn(1, 16, 12, 12)
>>> downsample = nn.Conv2d(16, 16, 3, stride=2, padding=1)
>>> upsample = nn.ConvTranspose2d(16, 16, 3, stride=2, padding=1)
>>> h = downsample(input)
>>> h.size()
torch.Size([1, 16, 6, 6])
>>> output = upsample(h, output_size=input.size())
>>> output.size()
torch.Size([1, 16, 12, 12])
- class borch.nn.ConvTranspose3d(in_channels: int, out_channels: int, kernel_size: Union[int, Tuple[int, int, int]], stride: Union[int, Tuple[int, int, int]] = 1, padding: Union[int, Tuple[int, int, int]] = 0, output_padding: Union[int, Tuple[int, int, int]] = 0, groups: int = 1, bias: bool = True, dilation: Union[int, Tuple[int, int, int]] = 1, padding_mode: str = 'zeros', device=None, dtype=None)¶
  This is a ppl class. Please see help(borch.nn) for more information. If one gives distributions as kwargs whose names match the parameters of the Module, they will be used as priors for those parameters.
  Applies a 3D transposed convolution operator over an input image composed of several input planes. The transposed convolution operator multiplies each input value element-wise by a learnable kernel, and sums over the outputs from all input feature planes.
This module can be seen as the gradient of Conv3d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation as it does not compute a true inverse of convolution). For more information, see the visualizations here and the Deconvolutional Networks paper.
This module supports TensorFloat32.
- stride controls the stride for the cross-correlation.
- padding controls the amount of implicit zero padding on both sides for dilation * (kernel_size - 1) - padding number of points. See note below for details.
- output_padding controls the additional size added to one side of the output shape. See note below for details.
- dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but the link here has a nice visualization of what dilation does.
- groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example,
  - At groups=1, all inputs are convolved to all outputs.
  - At groups=2, the operation becomes equivalent to having two conv layers side by side, each seeing half the input channels and producing half the output channels, and both subsequently concatenated.
  - At groups=in_channels, each input channel is convolved with its own set of filters (of size \(\frac{\text{out\_channels}}{\text{in\_channels}}\)).
The parameters kernel_size, stride, padding, output_padding can either be:
- a single int – in which case the same value is used for the depth, height and width dimensions
- a tuple of three ints – in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension
- Note:
  The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sides of the input. This is set so that when a Conv3d and a ConvTranspose3d are initialized with the same parameters, they are inverses of each other in regard to the input and output shapes. However, when stride > 1, Conv3d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find the output shape, but does not actually add zero-padding to the output.
- Note:
  In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See /notes/randomness for more information.
- Args:
  in_channels (int): Number of channels in the input image
  out_channels (int): Number of channels produced by the convolution
  kernel_size (int or tuple): Size of the convolving kernel
  stride (int or tuple, optional): Stride of the convolution. Default: 1
  padding (int or tuple, optional): dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Default: 0
  output_padding (int or tuple, optional): Additional size added to one side of each dimension in the output shape. Default: 0
  groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1
  bias (bool, optional): If True, adds a learnable bias to the output. Default: True
  dilation (int or tuple, optional): Spacing between kernel elements. Default: 1
- Shape:
Input: \((N, C_{in}, D_{in}, H_{in}, W_{in})\)
Output: \((N, C_{out}, D_{out}, H_{out}, W_{out})\) where
\[D_{out} = (D_{in} - 1) \times \text{stride}[0] - 2 \times \text{padding}[0] + \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) + \text{output\_padding}[0] + 1\]
\[H_{out} = (H_{in} - 1) \times \text{stride}[1] - 2 \times \text{padding}[1] + \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) + \text{output\_padding}[1] + 1\]
\[W_{out} = (W_{in} - 1) \times \text{stride}[2] - 2 \times \text{padding}[2] + \text{dilation}[2] \times (\text{kernel\_size}[2] - 1) + \text{output\_padding}[2] + 1\]
- Attributes:
- weight (Tensor): the learnable weights of the module of shape
\((\text{in\_channels}, \frac{\text{out\_channels}}{\text{groups}},\) \(\text{kernel\_size[0]}, \text{kernel\_size[1]}, \text{kernel\_size[2]})\). The values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{out} * \prod_{i=0}^{2}\text{kernel\_size}[i]}\)
- bias (Tensor): the learnable bias of the module of shape (out_channels). If bias is True, then the values of these weights are sampled from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{groups}{C_\text{out} * \prod_{i=0}^{2}\text{kernel\_size}[i]}\)
Examples:
>>> # With square kernels and equal stride
>>> m = nn.ConvTranspose3d(16, 33, 3, stride=2)
>>> # non-square kernels and unequal stride and with padding
>>> m = nn.ConvTranspose3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(0, 4, 2))
>>> input = torch.randn(20, 16, 10, 50, 100)
>>> output = m(input)
- class borch.nn.CosineEmbeddingLoss(margin: float = 0.0, size_average=None, reduce=None, reduction: str = 'mean')¶
  Creates a criterion that measures the loss given input tensors \(x_1\), \(x_2\) and a Tensor label \(y\) with values 1 or -1. This is used for measuring whether two inputs are similar or dissimilar, using the cosine distance, and is typically used for learning nonlinear embeddings or semi-supervised learning.
The loss function for each sample is:
\[\begin{split}\text{loss}(x, y) = \begin{cases} 1 - \cos(x_1, x_2), & \text{if } y = 1 \\ \max(0, \cos(x_1, x_2) - \text{margin}), & \text{if } y = -1 \end{cases}\end{split}\]
- Parameters
  margin (float, optional) – Should be a number from \(-1\) to \(1\), \(0\) to \(0.5\) is suggested. If margin is missing, the default value is \(0\).
  size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
  reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
  reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
- Shape:
Input1: \((N, D)\) or \((D)\), where N is the batch size and D is the embedding dimension.
Input2: \((N, D)\) or \((D)\), same shape as Input1.
Target: \((N)\) or \(()\).
Output: If reduction is 'none', then \((N)\), otherwise scalar.
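This docstring carries no usage example. A minimal hedged sketch of the criterion, using torch.nn (which borch.nn mirrors); the margin, batch size and embedding dimension are arbitrary example values:

```python
import torch
import torch.nn as nn

# margin=0.5 chosen arbitrarily; it only affects dissimilar (y = -1) pairs.
loss = nn.CosineEmbeddingLoss(margin=0.5)

x1 = torch.randn(3, 128, requires_grad=True)  # (N, D)
x2 = torch.randn(3, 128)                      # (N, D)
y = torch.tensor([1, -1, 1])                  # 1: similar pair, -1: dissimilar pair

output = loss(x1, x2, y)  # scalar, since reduction defaults to 'mean'
output.backward()
```

Pairs labelled 1 are penalized by 1 - cos(x1, x2); pairs labelled -1 are penalized only when their cosine similarity exceeds the margin.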
- forward(input1: torch.Tensor, input2: torch.Tensor, target: torch.Tensor) → torch.Tensor¶
  Defines the computation performed at every call.
  Should be overridden by all subclasses.
  Note
  Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class borch.nn.CosineSimilarity(dim: int = 1, eps: float = 1e-08)¶
  This is a ppl class. Please see help(borch.nn) for more information. If one gives distributions as kwargs whose names match the parameters of the Module, they will be used as priors for those parameters.
  Returns cosine similarity between \(x_1\) and \(x_2\), computed along dim.
\[\text{similarity} = \dfrac{x_1 \cdot x_2}{\max(\Vert x_1 \Vert _2 \cdot \Vert x_2 \Vert _2, \epsilon)}.\]
- Args:
  dim (int, optional): Dimension where cosine similarity is computed. Default: 1
  eps (float, optional): Small value to avoid division by zero. Default: 1e-8
- Shape:
Input1: \((\ast_1, D, \ast_2)\) where D is at position dim
Input2: \((\ast_1, D, \ast_2)\), same number of dimensions as x1, matching x1 size at dimension dim, and broadcastable with x1 at other dimensions.
Output: \((\ast_1, \ast_2)\)
- Examples::
>>> input1 = torch.randn(100, 128)
>>> input2 = torch.randn(100, 128)
>>> cos = nn.CosineSimilarity(dim=1, eps=1e-6)
>>> output = cos(input1, input2)
- class borch.nn.CrossEntropyLoss(weight: Optional[torch.Tensor] = None, size_average=None, ignore_index: int = -100, reduce=None, reduction: str = 'mean', label_smoothing: float = 0.0)¶
  This criterion computes the cross entropy loss between input and target.
  It is useful when training a classification problem with C classes. If provided, the optional argument weight should be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set.
  The input is expected to contain raw, unnormalized scores for each class. input has to be a Tensor of size either \((minibatch, C)\) or \((minibatch, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\) for the K-dimensional case. The latter is useful for higher dimension inputs, such as computing cross entropy loss per-pixel for 2D images.
The target that this criterion expects should contain either:
- Class indices in the range \([0, C-1]\) where \(C\) is the number of classes; if ignore_index is specified, this loss also accepts this class index (this index may not necessarily be in the class range). The unreduced (i.e. with reduction set to 'none') loss for this case can be described as:
  \[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_{y_n} \log \frac{\exp(x_{n,y_n})}{\sum_{c=1}^C \exp(x_{n,c})} \cdot \mathbb{1}\{y_n \not= \text{ignore\_index}\}\]
  where \(x\) is the input, \(y\) is the target, \(w\) is the weight, \(C\) is the number of classes, and \(N\) spans the minibatch dimension as well as \(d_1, ..., d_k\) for the K-dimensional case. If reduction is not 'none' (default 'mean'), then
  \[\begin{split}\ell(x, y) = \begin{cases} \sum_{n=1}^N \frac{1}{\sum_{n=1}^N w_{y_n} \cdot \mathbb{1}\{y_n \not= \text{ignore\_index}\}} l_n, & \text{if reduction} = \text{`mean';}\\ \sum_{n=1}^N l_n, & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]
  Note that this case is equivalent to the combination of LogSoftmax and NLLLoss.
- Probabilities for each class; useful when labels beyond a single class per minibatch item are required, such as for blended labels, label smoothing, etc. The unreduced (i.e. with reduction set to 'none') loss for this case can be described as:
  \[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - \sum_{c=1}^C w_c \log \frac{\exp(x_{n,c})}{\sum_{i=1}^C \exp(x_{n,i})} y_{n,c}\]
  where \(x\) is the input, \(y\) is the target, \(w\) is the weight, \(C\) is the number of classes, and \(N\) spans the minibatch dimension as well as \(d_1, ..., d_k\) for the K-dimensional case. If reduction is not 'none' (default 'mean'), then
  \[\begin{split}\ell(x, y) = \begin{cases} \frac{\sum_{n=1}^N l_n}{N}, & \text{if reduction} = \text{`mean';}\\ \sum_{n=1}^N l_n, & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]
Note
The performance of this criterion is generally better when target contains class indices, as this allows for optimized computation. Consider providing target as class probabilities only when a single class label per minibatch item is too restrictive.
- Parameters
  weight (Tensor, optional) – a manual rescaling weight given to each class. If given, has to be a Tensor of size C
  size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
  ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets. Note that ignore_index is only applicable when the target contains class indices.
  reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
  reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the weighted mean of the output is taken, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
label_smoothing (float, optional) – A float in [0.0, 1.0]. Specifies the amount of smoothing when computing the loss, where 0.0 means no smoothing. The targets become a mixture of the original ground truth and a uniform distribution as described in Rethinking the Inception Architecture for Computer Vision. Default: \(0.0\).
- Shape:
Input: \((N, C)\) where C = number of classes, or \((N, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss.
Target: If containing class indices, shape \((N)\) where each value is \(0 \leq \text{targets}[i] \leq C-1\), or \((N, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss. If containing class probabilities, same shape as the input.
Output: If reduction is 'none', shape \((N)\) or \((N, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss. Otherwise, scalar.
Examples:
>>> # Example of target with class indices
>>> loss = nn.CrossEntropyLoss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.empty(3, dtype=torch.long).random_(5)
>>> output = loss(input, target)
>>> output.backward()
>>>
>>> # Example of target with class probabilities
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5).softmax(dim=1)
>>> output = loss(input, target)
>>> output.backward()
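The label_smoothing argument described above is not exercised by the examples in the docstring. A brief hedged sketch, using torch.nn (which borch.nn mirrors); the smoothing value and tensor sizes are arbitrary:

```python
import torch
import torch.nn as nn

# label_smoothing=0.1 mixes each one-hot target with a uniform
# distribution over the 5 classes (arbitrary example values).
loss = nn.CrossEntropyLoss(label_smoothing=0.1)

input = torch.randn(3, 5, requires_grad=True)  # raw, unnormalized scores
target = torch.tensor([1, 0, 4])               # class indices

output = loss(input, target)  # scalar, since reduction defaults to 'mean'
output.backward()
```

With smoothing the effective target for each sample assigns 1 - 0.1 + 0.1/5 probability to the ground-truth class and 0.1/5 to each other class.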
- forward(input: torch.Tensor, target: torch.Tensor) → torch.Tensor¶
  Defines the computation performed at every call.
  Should be overridden by all subclasses.
  Note
  Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- class borch.nn.CrossMapLRN2d(size: int, alpha: float = 0.0001, beta: float = 0.75, k: float = 1)¶
- class borch.nn.DataParallel(module, device_ids=None, output_device=None, dim=0)¶
  This is a ppl class. Please see help(borch.nn) for more information. If one gives distributions as kwargs whose names match the parameters of the Module, they will be used as priors for those parameters.
  Implements data parallelism at the module level.
This container parallelizes the application of the given module by splitting the input across the specified devices by chunking in the batch dimension (other objects will be copied once per device). In the forward pass, the module is replicated on each device, and each replica handles a portion of the input. During the backwards pass, gradients from each replica are summed into the original module.
The batch size should be larger than the number of GPUs used.
Warning
It is recommended to use DistributedDataParallel, instead of this class, to do multi-GPU training, even if there is only a single node. See: cuda-nn-ddp-instead and ddp.
Arbitrary positional and keyword inputs are allowed to be passed into DataParallel but some types are specially handled. Tensors will be scattered on the dim specified (default 0). tuple, list and dict types will be shallow copied. The other types will be shared among different threads and can be corrupted if written to in the model's forward pass.
The parallelized
module
must have its parameters and buffers ondevice_ids[0]
before running thisDataParallel
module.Warning
In each forward,
module
is replicated on each device, so any updates to the running module inforward
will be lost. For example, ifmodule
has a counter attribute that is incremented in eachforward
, it will always stay at the initial value because the update is done on the replicas which are destroyed afterforward
. However,DataParallel
guarantees that the replica ondevice[0]
will have its parameters and buffers sharing storage with the base parallelizedmodule
. So in-place updates to the parameters or buffers ondevice[0]
will be recorded. E.g.,BatchNorm2d
andspectral_norm()
rely on this behavior to update the buffers.Warning
Forward and backward hooks defined on
module
and its submodules will be invokedlen(device_ids)
times, each with inputs located on a particular device. Particularly, the hooks are only guaranteed to be executed in correct order with respect to operations on corresponding devices. For example, it is not guaranteed that hooks set viaregister_forward_pre_hook()
be executed before alllen(device_ids)
forward()
calls, but that each such hook be executed before the correspondingforward()
call of that device.Warning
When
module
returns a scalar (i.e., 0-dimensional tensor) inforward()
, this wrapper will return a vector of length equal to the number of devices used in data parallelism, containing the result from each device.

Note
There is a subtlety in using the
pack sequence -> recurrent network -> unpack sequence
pattern in aModule
wrapped inDataParallel
. See pack-rnn-unpack-with-data-parallelism section in FAQ for details.- Args:
module (Module): module to be parallelized
device_ids (list of int or torch.device): CUDA devices (default: all devices)
output_device (int or torch.device): device location of output (default: device_ids[0])
- Attributes:
module (Module): the module to be parallelized
Example:
>>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])
>>> output = net(input_var)  # input_var can be on any device, including CPU
-
class
borch.nn.
Dropout
(p: float = 0.5, inplace: bool = False)¶ During training, randomly zeroes some of the elements of the input tensor with probability
p
using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call.This has proven to be an effective technique for regularization and preventing the co-adaptation of neurons as described in the paper Improving neural networks by preventing co-adaptation of feature detectors .
Furthermore, the outputs are scaled by a factor of \(\frac{1}{1-p}\) during training. This means that during evaluation the module simply computes an identity function.
- Parameters
p – probability of an element to be zeroed. Default: 0.5
inplace – If set to
True
, will do this operation in-place. Default:False
- Shape:
Input: \((*)\). Input can be of any shape
Output: \((*)\). Output is of the same shape as input
Examples:
>>> m = nn.Dropout(p=0.2)
>>> input = torch.randn(20, 16)
>>> output = m(input)
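The \(\frac{1}{1-p}\) train-time scaling and the eval-time identity behaviour can be checked directly. A minimal sketch using plain torch (variable names are illustrative):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
m = nn.Dropout(p=0.5)
x = torch.ones(1000)

# Training mode: surviving elements are scaled by 1 / (1 - p) = 2.0,
# the rest are zeroed.
m.train()
y = m(x)

# Evaluation mode: the module is the identity.
m.eval()
z = m(x)
```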
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Dropout2d
(p: float = 0.5, inplace: bool = False)¶ Randomly zero out entire channels (a channel is a 2D feature map, e.g., the \(j\)-th channel of the \(i\)-th sample in the batched input is a 2D tensor \(\text{input}[i, j]\)). Each channel will be zeroed out independently on every forward call with probability
p
using samples from a Bernoulli distribution.Usually the input comes from
nn.Conv2d
modules.As described in the paper Efficient Object Localization Using Convolutional Networks , if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then i.i.d. dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease.
In this case,
nn.Dropout2d()
will help promote independence between feature maps and should be used instead.- Parameters
p (float, optional) – probability of an element to be zeroed.
inplace (bool, optional) – If set to
True
, will do this operation in-place
- Shape:
Input: \((N, C, H, W)\) or \((C, H, W)\).
Output: \((N, C, H, W)\) or \((C, H, W)\) (same shape as input).
Examples:
>>> m = nn.Dropout2d(p=0.2)
>>> input = torch.randn(20, 16, 32, 32)
>>> output = m(input)
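Because whole channels are dropped, every \((H, W)\) map in the output is either all zeros or a uniformly rescaled copy of the input map. A small sketch (plain torch; names are illustrative):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
m = nn.Dropout2d(p=0.5)
m.train()
x = torch.ones(8, 16, 4, 4)  # N x C x H x W

y = m(x)
# Each channel map is dropped as a unit: all zeros, or all 1/(1-p) = 2.0.
per_channel = y.view(8 * 16, -1)
```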
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Dropout3d
(p: float = 0.5, inplace: bool = False)¶ Randomly zero out entire channels (a channel is a 3D feature map, e.g., the \(j\)-th channel of the \(i\)-th sample in the batched input is a 3D tensor \(\text{input}[i, j]\)). Each channel will be zeroed out independently on every forward call with probability
p
using samples from a Bernoulli distribution.Usually the input comes from
nn.Conv3d
modules.As described in the paper Efficient Object Localization Using Convolutional Networks , if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then i.i.d. dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease.
In this case,
nn.Dropout3d()
will help promote independence between feature maps and should be used instead.- Parameters
p (float, optional) – probability of an element to be zeroed.
inplace (bool, optional) – If set to
True
, will do this operation in-place
- Shape:
Input: \((N, C, D, H, W)\) or \((C, D, H, W)\).
Output: \((N, C, D, H, W)\) or \((C, D, H, W)\) (same shape as input).
Examples:
>>> m = nn.Dropout3d(p=0.2)
>>> input = torch.randn(20, 16, 4, 32, 32)
>>> output = m(input)
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
ELU
(alpha: float = 1.0, inplace: bool = False)¶ Applies the element-wise function:
\[\begin{split}\text{ELU}(x) = \begin{cases} x, & \text{ if } x > 0\\ \alpha * (\exp(x) - 1), & \text{ if } x \leq 0 \end{cases}\end{split}\]- Parameters
alpha – the \(\alpha\) value for the ELU formulation. Default: 1.0
inplace – can optionally do the operation in-place. Default:
False
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.ELU()
>>> input = torch.randn(2)
>>> output = m(input)
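The piecewise definition above can be verified numerically; a small sketch with hand-picked inputs:

```python
import math
import torch
import torch.nn as nn

m = nn.ELU(alpha=1.0)
x = torch.tensor([-2.0, 0.0, 1.5])

# For x > 0 the output is x itself; for x <= 0 it is alpha * (exp(x) - 1).
expected = torch.tensor([math.exp(-2.0) - 1.0, 0.0, 1.5])
out = m(x)
```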
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Embedding
(num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None, max_norm: Optional[float] = None, norm_type: float = 2.0, scale_grad_by_freq: bool = False, sparse: bool = False, _weight: Optional[torch.Tensor] = None, device=None, dtype=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distributions as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.

A simple lookup table that stores embeddings of a fixed dictionary and size.
This module is often used to store word embeddings and retrieve them using indices. The input to the module is a list of indices, and the output is the corresponding word embeddings.
- Args:
num_embeddings (int): size of the dictionary of embeddings
embedding_dim (int): the size of each embedding vector
padding_idx (int, optional): If specified, the entries at
padding_idx
do not contribute to the gradient;therefore, the embedding vector at
padding_idx
is not updated during training, i.e. it remains as a fixed “pad”. For a newly constructed Embedding, the embedding vector atpadding_idx
will default to all zeros, but can be updated to another value to be used as the padding vector.- max_norm (float, optional): If given, each embedding vector with norm larger than
max_norm
is renormalized to have norm
max_norm
.
norm_type (float, optional): The p of the p-norm to compute for the
max_norm
option. Default2
. scale_grad_by_freq (boolean, optional): If given, this will scale gradients by the inverse of frequency ofthe words in the mini-batch. Default
False
.- sparse (bool, optional): If
True
, gradient w.r.t.weight
matrix will be a sparse tensor. See Notes for more details regarding sparse gradients.
- Attributes:
- weight (Tensor): the learnable weights of the module of shape (num_embeddings, embedding_dim)
initialized from \(\mathcal{N}(0, 1)\)
- Shape:
Input: \((*)\), IntTensor or LongTensor of arbitrary shape containing the indices to extract
Output: \((*, H)\), where * is the input shape and \(H=\text{embedding\_dim}\)
Note
Keep in mind that only a limited number of optimizers support sparse gradients: currently it’s
optim.SGD
(CUDA and CPU),optim.SparseAdam
(CUDA and CPU) andoptim.Adagrad
(CPU)Note
When
max_norm
is notNone
,Embedding
’s forward method will modify theweight
tensor in-place. Since tensors needed for gradient computations cannot be modified in-place, performing a differentiable operation onEmbedding.weight
before callingEmbedding
’s forward method requires cloningEmbedding.weight
whenmax_norm
is notNone
. For example:

n, d, m = 3, 5, 7
embedding = nn.Embedding(n, d, max_norm=1.0)
W = torch.randn((m, d), requires_grad=True)
idx = torch.tensor([1, 2])
a = embedding.weight.clone() @ W.t()  # weight must be cloned for this to be differentiable
b = embedding(idx) @ W.t()  # modifies weight in-place
out = (a.unsqueeze(0) + b.unsqueeze(1))
loss = out.sigmoid().prod()
loss.backward()
Examples:
>>> # an Embedding module containing 10 tensors of size 3
>>> embedding = nn.Embedding(10, 3)
>>> # a batch of 2 samples of 4 indices each
>>> input = torch.LongTensor([[1, 2, 4, 5], [4, 3, 2, 9]])
>>> embedding(input)
tensor([[[-0.0251, -1.6902,  0.7172],
         [-0.6431,  0.0748,  0.6969],
         [ 1.4970,  1.3448, -0.9685],
         [-0.3677, -2.7265, -0.1685]],

        [[ 1.4970,  1.3448, -0.9685],
         [ 0.4362, -0.4004,  0.9400],
         [-0.6431,  0.0748,  0.6969],
         [ 0.9124, -2.3616,  1.1151]]])
>>> # example with padding_idx
>>> embedding = nn.Embedding(10, 3, padding_idx=0)
>>> input = torch.LongTensor([[0, 2, 0, 5]])
>>> embedding(input)
tensor([[[ 0.0000,  0.0000,  0.0000],
         [ 0.1535, -2.0309,  0.9315],
         [ 0.0000,  0.0000,  0.0000],
         [-0.1655,  0.9897,  0.0635]]])
>>> # example of changing `pad` vector
>>> padding_idx = 0
>>> embedding = nn.Embedding(3, 3, padding_idx=padding_idx)
>>> embedding.weight
Parameter containing:
tensor([[ 0.0000,  0.0000,  0.0000],
        [-0.7895, -0.7089, -0.0364],
        [ 0.6778,  0.5803,  0.2678]], requires_grad=True)
>>> with torch.no_grad():
...     embedding.weight[padding_idx] = torch.ones(3)
>>> embedding.weight
Parameter containing:
tensor([[ 1.0000,  1.0000,  1.0000],
        [-0.7895, -0.7089, -0.0364],
        [ 0.6778,  0.5803,  0.2678]], requires_grad=True)
-
class
borch.nn.
EmbeddingBag
(num_embeddings: int, embedding_dim: int, max_norm: Optional[float] = None, norm_type: float = 2.0, scale_grad_by_freq: bool = False, mode: str = 'mean', sparse: bool = False, _weight: Optional[torch.Tensor] = None, include_last_offset: bool = False, padding_idx: Optional[int] = None, device=None, dtype=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distributions as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.

Computes sums or means of ‘bags’ of embeddings, without instantiating the
intermediate embeddings.
For bags of constant length, no
per_sample_weights
, no indices equal topadding_idx
, and with 2D inputs, this classwith
mode="sum"
is equivalent toEmbedding
followed bytorch.sum(dim=1)
,with
mode="mean"
is equivalent toEmbedding
followed bytorch.mean(dim=1)
,with
mode="max"
is equivalent toEmbedding
followed bytorch.max(dim=1)
.
However,
EmbeddingBag
is much more time and memory efficient than using a chain of these operations.EmbeddingBag also supports per-sample weights as an argument to the forward pass. This scales the output of the Embedding before performing a weighted reduction as specified by
mode
. Ifper_sample_weights
is passed, the only supportedmode
is"sum"
, which computes a weighted sum according toper_sample_weights
.- Args:
num_embeddings (int): size of the dictionary of embeddings
embedding_dim (int): the size of each embedding vector
max_norm (float, optional): If given, each embedding vector with norm larger than
max_norm
is renormalized to have norm
max_norm
.norm_type (float, optional): The p of the p-norm to compute for the
max_norm
option. Default2
. scale_grad_by_freq (boolean, optional): if given, this will scale gradients by the inverse of frequency ofthe words in the mini-batch. Default
False
. Note: this option is not supported whenmode="max"
.- mode (string, optional):
"sum"
,"mean"
or"max"
. Specifies the way to reduce the bag. "sum"
computes the weighted sum, takingper_sample_weights
into consideration."mean"
computes the average of the values in the bag,"max"
computes the max value over each bag. Default:"mean"
- sparse (bool, optional): if
True
, gradient w.r.t.weight
matrix will be a sparse tensor. See Notes for more details regarding sparse gradients. Note: this option is not supported when
mode="max"
.- include_last_offset (bool, optional): if
True
,offsets
has one additional element, where the last element is equivalent to the size of indices. This matches the CSR format.
- padding_idx (int, optional): If specified, the entries at
padding_idx
do not contribute to the gradient; therefore, the embedding vector at
padding_idx
is not updated during training, i.e. it remains as a fixed “pad”. For a newly constructed EmbeddingBag, the embedding vector atpadding_idx
will default to all zeros, but can be updated to another value to be used as the padding vector. Note that the embedding vector atpadding_idx
is excluded from the reduction.
- Attributes:
- weight (Tensor): the learnable weights of the module of shape (num_embeddings, embedding_dim)
initialized from \(\mathcal{N}(0, 1)\).
Examples:
>>> # an EmbeddingBag module containing 10 tensors of size 3
>>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum')
>>> # a batch of 2 samples of 4 indices each
>>> input = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9], dtype=torch.long)
>>> offsets = torch.tensor([0, 4], dtype=torch.long)
>>> embedding_sum(input, offsets)
tensor([[-0.8861, -5.4350, -0.0523],
        [ 1.1306, -2.5798, -1.0044]])
>>> # Example with padding_idx
>>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum', padding_idx=2)
>>> input = torch.tensor([2, 2, 2, 2, 4, 3, 2, 9], dtype=torch.long)
>>> offsets = torch.tensor([0, 4], dtype=torch.long)
>>> embedding_sum(input, offsets)
tensor([[ 0.0000,  0.0000,  0.0000],
        [-0.7082,  3.2145, -2.6251]])
>>> # An EmbeddingBag can be loaded from an Embedding like so
>>> embedding = nn.Embedding(10, 3, padding_idx=2)
>>> embedding_sum = nn.EmbeddingBag.from_pretrained(
...     embedding.weight, padding_idx=embedding.padding_idx, mode='sum')
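The equivalence stated above — for constant-length 2D input, mode="sum" matches Embedding followed by torch.sum(dim=1) — can be checked directly. A sketch using plain torch (names are illustrative):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
emb = nn.Embedding(10, 3)
# Share the same weights between the two modules being compared.
bag = nn.EmbeddingBag.from_pretrained(emb.weight, mode='sum')

# 2D input of equal-length bags: each row is one bag.
idx = torch.tensor([[1, 2, 4, 5], [4, 3, 2, 9]])
a = bag(idx)                 # EmbeddingBag with mode='sum'
b = emb(idx).sum(dim=1)      # Embedding followed by torch.sum(dim=1)
```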
-
class
borch.nn.
FeatureAlphaDropout
(p: float = 0.5, inplace: bool = False)¶ Randomly masks out entire channels of the input tensor (a channel is a feature map, e.g. the \(j\)-th channel of the \(i\)-th sample in the batched input is a tensor \(\text{input}[i, j]\)). Instead of setting activations to zero, as in regular Dropout, the activations are set to the negative saturation value of the SELU activation function. More details can be found in the paper Self-Normalizing Neural Networks.
Each element will be masked independently for each sample on every forward call with probability
p
using samples from a Bernoulli distribution. The elements to be masked are randomized on every forward call, and scaled and shifted to maintain zero mean and unit variance.Usually the input comes from
nn.AlphaDropout
modules.As described in the paper Efficient Object Localization Using Convolutional Networks , if adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then i.i.d. dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease.
In this case,
nn.AlphaDropout()
will help promote independence between feature maps and should be used instead.- Parameters
p (float, optional) – probability of an element to be zeroed. Default: 0.5
inplace (bool, optional) – If set to
True
, will do this operation in-place
- Shape:
Input: \((N, C, D, H, W)\) or \((C, D, H, W)\).
Output: \((N, C, D, H, W)\) or \((C, D, H, W)\) (same shape as input).
Examples:
>>> m = nn.FeatureAlphaDropout(p=0.2)
>>> input = torch.randn(20, 16, 4, 32, 32)
>>> output = m(input)
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Flatten
(start_dim: int = 1, end_dim: int = -1)¶ Flattens a contiguous range of dims into a tensor. For use with
Sequential
.- Shape:
Input: \((*, S_{\text{start}},..., S_{i}, ..., S_{\text{end}}, *)\), where \(S_{i}\) is the size at dimension \(i\) and \(*\) means any number of dimensions including none.
Output: \((*, \prod_{i=\text{start}}^{\text{end}} S_{i}, *)\).
- Parameters
start_dim – first dim to flatten (default = 1).
end_dim – last dim to flatten (default = -1).
- Examples::
>>> input = torch.randn(32, 1, 5, 5)
>>> m = nn.Sequential(
...     nn.Conv2d(1, 32, 5, 1, 1),
...     nn.Flatten()
... )
>>> output = m(input)
>>> output.size()
torch.Size([32, 288])
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Fold
(output_size: Union[int, Tuple[int, ...]], kernel_size: Union[int, Tuple[int, ...]], dilation: Union[int, Tuple[int, ...]] = 1, padding: Union[int, Tuple[int, ...]] = 0, stride: Union[int, Tuple[int, ...]] = 1)¶ Combines an array of sliding local blocks into a large containing tensor.
Consider a batched
input
tensor containing sliding local blocks, e.g., patches of images, of shape \((N, C \times \prod(\text{kernel\_size}), L)\), where \(N\) is batch dimension, \(C \times \prod(\text{kernel\_size})\) is the number of values within a block (a block has \(\prod(\text{kernel\_size})\) spatial locations each containing a \(C\)-channeled vector), and \(L\) is the total number of blocks. (This is exactly the same specification as the output shape ofUnfold
.) This operation combines these local blocks into the largeoutput
tensor of shape \((N, C, \text{output\_size}[0], \text{output\_size}[1], \dots)\) by summing the overlapping values. Similar toUnfold
, the arguments must satisfy\[L = \prod_d \left\lfloor\frac{\text{output\_size}[d] + 2 \times \text{padding}[d] - \text{dilation}[d] \times (\text{kernel\_size}[d] - 1) - 1}{\text{stride}[d]} + 1\right\rfloor,\]where \(d\) is over all spatial dimensions.
output_size
describes the spatial shape of the large containing tensor of the sliding local blocks. It is useful to resolve the ambiguity when multiple input shapes map to same number of sliding blocks, e.g., withstride > 0
.
The
padding
,stride
anddilation
arguments specify how the sliding blocks are retrieved.stride
controls the stride for the sliding blocks.padding
controls the amount of implicit zero-paddings on both sides forpadding
number of points for each dimension before reshaping.dilation
controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of whatdilation
does.
- Parameters
output_size (int or tuple) – the shape of the spatial dimensions of the output (i.e.,
output.sizes()[2:]
)kernel_size (int or tuple) – the size of the sliding blocks
stride (int or tuple) – the stride of the sliding blocks in the input spatial dimensions. Default: 1
padding (int or tuple, optional) – implicit zero padding to be added on both sides of input. Default: 0
dilation (int or tuple, optional) – a parameter that controls the stride of elements within the neighborhood. Default: 1
If
output_size
,kernel_size
,dilation
,padding
orstride
is an int or a tuple of length 1 then their values will be replicated across all spatial dimensions.For the case of two output spatial dimensions this operation is sometimes called
col2im
.
Note
Fold
calculates each combined value in the resulting large tensor by summing all values from all containing blocks.Unfold
extracts the values in the local blocks by copying from the large tensor. So, if the blocks overlap, they are not inverses of each other.In general, folding and unfolding operations are related as follows. Consider
Fold
andUnfold
instances created with the same parameters:>>> fold_params = dict(kernel_size=..., dilation=..., padding=..., stride=...) >>> fold = nn.Fold(output_size=..., **fold_params) >>> unfold = nn.Unfold(**fold_params)
Then for any (supported)
input
tensor the following equality holds:fold(unfold(input)) == divisor * input
where
divisor
is a tensor that depends only on the shape and dtype of theinput
>>> input_ones = torch.ones(input.shape, dtype=input.dtype)
>>> divisor = fold(unfold(input_ones))
When the
divisor
tensor contains no zero elements, thenfold
andunfold
operations are inverses of each other (up to constant divisor).Warning
Currently, only 4-D output tensors (batched image-like tensors) are supported.
- Shape:
Input: \((N, C \times \prod(\text{kernel\_size}), L)\)
Output: \((N, C, \text{output\_size}[0], \text{output\_size}[1], \dots)\) as described above
Examples:
>>> fold = nn.Fold(output_size=(4, 5), kernel_size=(2, 2))
>>> input = torch.randn(1, 3 * 2 * 2, 12)
>>> output = fold(input)
>>> output.size()
torch.Size([1, 3, 4, 5])
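The fold(unfold(input)) == divisor * input relationship from the note above can be exercised end to end. A sketch with concrete parameters (kernel and sizes chosen for illustration):

```python
import torch
import torch.nn as nn

fold_params = dict(kernel_size=(2, 2), stride=1)
fold = nn.Fold(output_size=(4, 5), **fold_params)
unfold = nn.Unfold(**fold_params)

x = torch.randn(1, 3, 4, 5)
# divisor counts how many sliding blocks cover each output position.
divisor = fold(unfold(torch.ones_like(x)))
# Overlapping values are summed, so folding the unfolded input
# multiplies each element by its coverage count.
roundtrip = fold(unfold(x))
```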
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
FractionalMaxPool2d
(kernel_size: Union[int, Tuple[int, int]], output_size: Union[int, Tuple[int, int], None] = None, output_ratio: Union[float, Tuple[float, float], None] = None, return_indices: bool = False, _random_samples=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distributions as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.

Applies a 2D fractional max pooling over an input signal composed of several input planes.
Fractional MaxPooling is described in detail in the paper Fractional MaxPooling by Ben Graham
The max-pooling operation is applied in \(kH \times kW\) regions by a stochastic step size determined by the target output size. The number of output features is equal to the number of input planes.
- Args:
- kernel_size: the size of the window to take a max over.
Can be a single number k (for a square kernel of k x k) or a tuple (kh, kw)
- output_size: the target output size of the image of the form oH x oW.
Can be a tuple (oH, oW) or a single number oH for a square image oH x oH
- output_ratio: If one wants to have an output size as a ratio of the input size, this option can be given.
This has to be a number or tuple in the range (0, 1)
- return_indices: if
True
, will return the indices along with the outputs. Useful to pass to
nn.MaxUnpool2d()
. Default:False
- Shape:
Input: \((N, C, H_{in}, W_{in})\) or \((C, H_{in}, W_{in})\).
Output: \((N, C, H_{out}, W_{out})\) or \((C, H_{out}, W_{out})\), where \((H_{out}, W_{out})=\text{output\_size}\) or \((H_{out}, W_{out})=\text{output\_ratio} \times (H_{in}, W_{in})\).
- Examples:
>>> # pool of square window of size=3, and target output size 13x12
>>> m = nn.FractionalMaxPool2d(3, output_size=(13, 12))
>>> # pool of square window and target output size being half of input image size
>>> m = nn.FractionalMaxPool2d(3, output_ratio=(0.5, 0.5))
>>> input = torch.randn(20, 16, 50, 32)
>>> output = m(input)
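With output_ratio, the output spatial size is the ratio times the input size, which a shape check confirms. A sketch (sizes chosen for illustration):

```python
import torch
import torch.nn as nn

m = nn.FractionalMaxPool2d(3, output_ratio=(0.5, 0.5))
x = torch.randn(20, 16, 50, 32)
y = m(x)
# Output spatial dims are 0.5 * (50, 32) = (25, 16);
# batch and channel dims are unchanged.
```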
-
class
borch.nn.
FractionalMaxPool3d
(kernel_size: Union[int, Tuple[int, int, int]], output_size: Union[int, Tuple[int, int, int], None] = None, output_ratio: Union[float, Tuple[float, float, float], None] = None, return_indices: bool = False, _random_samples=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distributions as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.

Applies a 3D fractional max pooling over an input signal composed of several input planes.
Fractional MaxPooling is described in detail in the paper Fractional MaxPooling by Ben Graham
The max-pooling operation is applied in \(kT \times kH \times kW\) regions by a stochastic step size determined by the target output size. The number of output features is equal to the number of input planes.
- Args:
- kernel_size: the size of the window to take a max over.
Can be a single number k (for a cubic kernel of k x k x k) or a tuple (kt, kh, kw)
- output_size: the target output size of the image of the form oT x oH x oW.
Can be a tuple (oT, oH, oW) or a single number oH for a cubic image oH x oH x oH
- output_ratio: If one wants to have an output size as a ratio of the input size, this option can be given.
This has to be a number or tuple in the range (0, 1)
- return_indices: if
True
, will return the indices along with the outputs. Useful to pass to
nn.MaxUnpool3d()
. Default:False
- Examples:
>>> # pool of cubic window of size=3, and target output size 13x12x11
>>> m = nn.FractionalMaxPool3d(3, output_size=(13, 12, 11))
>>> # pool of cubic window and target output size being half of input size
>>> m = nn.FractionalMaxPool3d(3, output_ratio=(0.5, 0.5, 0.5))
>>> input = torch.randn(20, 16, 50, 32, 16)
>>> output = m(input)
-
class
borch.nn.
GELU
¶ Applies the Gaussian Error Linear Units function:
\[\text{GELU}(x) = x * \Phi(x)\]where \(\Phi(x)\) is the Cumulative Distribution Function for Gaussian Distribution.
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.GELU()
>>> input = torch.randn(2)
>>> output = m(input)
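The definition \(\text{GELU}(x) = x * \Phi(x)\) can be checked by writing the standard normal CDF through the error function. A sketch (plain torch; names are illustrative):

```python
import math
import torch
import torch.nn as nn

m = nn.GELU()
x = torch.randn(5)

# Phi(x), the standard normal CDF, expressed via erf.
phi = 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))
expected = x * phi
out = m(x)
```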
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
GLU
(dim: int = -1)¶ Applies the gated linear unit function \({GLU}(a, b)= a \otimes \sigma(b)\) where \(a\) is the first half of the input matrices and \(b\) is the second half.
- Parameters
dim (int) – the dimension on which to split the input. Default: -1
- Shape:
Input: \((\ast_1, N, \ast_2)\) where \(\ast\) means any number of additional dimensions
Output: \((\ast_1, M, \ast_2)\) where \(M=N/2\)
Examples:
>>> m = nn.GLU()
>>> input = torch.randn(4, 2)
>>> output = m(input)
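The split-and-gate definition \({GLU}(a, b) = a \otimes \sigma(b)\) can be reproduced by chunking the input manually. A sketch (plain torch; sizes are illustrative):

```python
import torch
import torch.nn as nn

m = nn.GLU(dim=-1)
x = torch.randn(4, 6)

# First half gates second half: a * sigmoid(b).
a, b = x.chunk(2, dim=-1)
expected = a * torch.sigmoid(b)
out = m(x)
```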
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
GRU
(*args, **kwargs)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distributions as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.

Applies a multi-layer gated recurrent unit (GRU) RNN to an input sequence.
For each element in the input sequence, each layer computes the following function:
\[\begin{split}\begin{array}{ll} r_t = \sigma(W_{ir} x_t + b_{ir} + W_{hr} h_{(t-1)} + b_{hr}) \\ z_t = \sigma(W_{iz} x_t + b_{iz} + W_{hz} h_{(t-1)} + b_{hz}) \\ n_t = \tanh(W_{in} x_t + b_{in} + r_t * (W_{hn} h_{(t-1)}+ b_{hn})) \\ h_t = (1 - z_t) * n_t + z_t * h_{(t-1)} \end{array}\end{split}\]where \(h_t\) is the hidden state at time t, \(x_t\) is the input at time t, \(h_{(t-1)}\) is the hidden state of the layer at time t-1 or the initial hidden state at time 0, and \(r_t\), \(z_t\), \(n_t\) are the reset, update, and new gates, respectively. \(\sigma\) is the sigmoid function, and \(*\) is the Hadamard product.
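The gate equations above can be checked numerically against a single step of nn.GRUCell (a one-layer, one-step GRU). A sketch, assuming the standard torch weight layout where the input-hidden and hidden-hidden weights are stacked as (W_ir|W_iz|W_in) and (W_hr|W_hz|W_hn):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
cell = nn.GRUCell(3, 4)
x = torch.randn(1, 3)   # input at time t
h = torch.randn(1, 4)   # hidden state at time t-1

# Compute all three input and hidden projections at once, then split.
gi = x @ cell.weight_ih.t() + cell.bias_ih
gh = h @ cell.weight_hh.t() + cell.bias_hh
i_r, i_z, i_n = gi.chunk(3, dim=1)
h_r, h_z, h_n = gh.chunk(3, dim=1)

r = torch.sigmoid(i_r + h_r)    # reset gate
z = torch.sigmoid(i_z + h_z)    # update gate
n = torch.tanh(i_n + r * h_n)   # new gate
h_next = (1 - z) * n + z * h    # h_t from the equations above
```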
In a multilayer GRU, the input \(x^{(l)}_t\) of the \(l\) -th layer (\(l >= 2\)) is the hidden state \(h^{(l-1)}_t\) of the previous layer multiplied by dropout \(\delta^{(l-1)}_t\) where each \(\delta^{(l-1)}_t\) is a Bernoulli random variable which is \(0\) with probability
dropout
.- Args:
input_size: The number of expected features in the input x
hidden_size: The number of features in the hidden state h
num_layers: Number of recurrent layers. E.g., setting
num_layers=2
would mean stacking two GRUs together to form a stacked GRU, with the second GRU taking in outputs of the first GRU and computing the final results. Default: 1
- bias: If
False
, then the layer does not use bias weights b_ih and b_hh. Default:
True
- batch_first: If
True
, then the input and output tensors are provided as (batch, seq, feature) instead of (seq, batch, feature). Note that this does not apply to hidden or cell states. See the Inputs/Outputs sections below for details. Default:
False
- dropout: If non-zero, introduces a Dropout layer on the outputs of each
GRU layer except the last layer, with dropout probability equal to
dropout
. Default: 0
bidirectional: If True, becomes a bidirectional GRU. Default: False
- Inputs: input, h_0
input: tensor of shape \((L, N, H_{in})\) when batch_first=False or \((N, L, H_{in})\) when batch_first=True containing the features of the input sequence. The input can also be a packed variable length sequence. See torch.nn.utils.rnn.pack_padded_sequence() or torch.nn.utils.rnn.pack_sequence() for details.
h_0: tensor of shape \((D * \text{num\_layers}, N, H_{out})\) containing the initial hidden state for each element in the batch. Defaults to zeros if not provided.
where:
\[\begin{split}\begin{aligned} N ={} & \text{batch size} \\ L ={} & \text{sequence length} \\ D ={} & 2 \text{ if bidirectional=True otherwise } 1 \\ H_{in} ={} & \text{input\_size} \\ H_{out} ={} & \text{hidden\_size} \end{aligned}\end{split}\]- Outputs: output, h_n
output: tensor of shape \((L, N, D * H_{out})\) when batch_first=False or \((N, L, D * H_{out})\) when batch_first=True containing the output features (h_t) from the last layer of the GRU, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence.
h_n: tensor of shape \((D * \text{num\_layers}, N, H_{out})\) containing the final hidden state for each element in the batch.
- Attributes:
- weight_ih_l[k]the learnable input-hidden weights of the \(\text{k}^{th}\) layer
(W_ir|W_iz|W_in), of shape (3*hidden_size, input_size) for k = 0. Otherwise, the shape is (3*hidden_size, num_directions * hidden_size)
- weight_hh_l[k]the learnable hidden-hidden weights of the \(\text{k}^{th}\) layer
(W_hr|W_hz|W_hn), of shape (3*hidden_size, hidden_size)
- bias_ih_l[k]the learnable input-hidden bias of the \(\text{k}^{th}\) layer
(b_ir|b_iz|b_in), of shape (3*hidden_size)
- bias_hh_l[k]the learnable hidden-hidden bias of the \(\text{k}^{th}\) layer
(b_hr|b_hz|b_hn), of shape (3*hidden_size)
Note
All the weights and biases are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\text{hidden\_size}}\)
Note
For bidirectional GRUs, forward and backward are directions 0 and 1 respectively. Example of splitting the output layers when batch_first=False: output.view(seq_len, batch, num_directions, hidden_size).
Examples:
>>> rnn = nn.GRU(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> output, hn = rnn(input, h0)
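As a plain-Python illustration of the gate equations above (a hand-written sketch with scalar weights, not borch's or torch's implementation; the parameter dictionary `p` is a hypothetical container for the W and b terms):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h_prev, p):
    """One GRU time step for scalar input and hidden state,
    following the r_t, z_t, n_t, h_t equations above."""
    r = sigmoid(p["W_ir"] * x + p["b_ir"] + p["W_hr"] * h_prev + p["b_hr"])
    z = sigmoid(p["W_iz"] * x + p["b_iz"] + p["W_hz"] * h_prev + p["b_hz"])
    n = math.tanh(p["W_in"] * x + p["b_in"] + r * (p["W_hn"] * h_prev + p["b_hn"]))
    # h_t is a convex combination of the new gate and the previous state
    return (1 - z) * n + z * h_prev
```

With all weights and biases zero, both gates equal 0.5 and the new gate is 0, so the next hidden state is half the previous one; that makes an easy sanity check on the update equation.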
-
class
borch.nn.
GRUCell
(input_size: int, hidden_size: int, bias: bool = True, device=None, dtype=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters. A gated recurrent unit (GRU) cell.
\[\begin{split}\begin{array}{ll} r = \sigma(W_{ir} x + b_{ir} + W_{hr} h + b_{hr}) \\ z = \sigma(W_{iz} x + b_{iz} + W_{hz} h + b_{hz}) \\ n = \tanh(W_{in} x + b_{in} + r * (W_{hn} h + b_{hn})) \\ h' = (1 - z) * n + z * h \end{array}\end{split}\]where \(\sigma\) is the sigmoid function, and \(*\) is the Hadamard product.
- Args:
input_size: The number of expected features in the input x
hidden_size: The number of features in the hidden state h
bias: If False, then the layer does not use bias weights b_ih and b_hh. Default: True
- Inputs: input, hidden
input of shape (batch, input_size): tensor containing input features
hidden of shape (batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided.
- Outputs: h’
h’ of shape (batch, hidden_size): tensor containing the next hidden state for each element in the batch
- Shape:
Input1: \((N, H_{in})\) tensor containing input features where \(H_{in}\) = input_size
Input2: \((N, H_{out})\) tensor containing the initial hidden state for each element in the batch where \(H_{out}\) = hidden_size Defaults to zero if not provided.
Output: \((N, H_{out})\) tensor containing the next hidden state for each element in the batch
- Attributes:
- weight_ih: the learnable input-hidden weights, of shape
(3*hidden_size, input_size)
- weight_hh: the learnable hidden-hidden weights, of shape
(3*hidden_size, hidden_size)
bias_ih: the learnable input-hidden bias, of shape (3*hidden_size) bias_hh: the learnable hidden-hidden bias, of shape (3*hidden_size)
Note
All the weights and biases are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\text{hidden\_size}}\)
Examples:
>>> rnn = nn.GRUCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
...     hx = rnn(input[i], hx)
...     output.append(hx)
-
class
borch.nn.
GaussianNLLLoss
(*, full: bool = False, eps: float = 1e-06, reduction: str = 'mean')¶ Gaussian negative log likelihood loss.
The targets are treated as samples from Gaussian distributions with expectations and variances predicted by the neural network. For a target tensor modelled as having Gaussian distribution with a tensor of expectations input and a tensor of positive variances var the loss is:
\[\text{loss} = \frac{1}{2}\left(\log\left(\text{max}\left(\text{var}, \ \text{eps}\right)\right) + \frac{\left(\text{input} - \text{target}\right)^2} {\text{max}\left(\text{var}, \ \text{eps}\right)}\right) + \text{const.}\]
where eps is used for stability. By default, the constant term of the loss function is omitted unless full is True. If var is not the same size as input (due to a homoscedastic assumption), it must either have a final dimension of 1 or have one fewer dimension (with all other sizes being the same) for correct broadcasting.
- Parameters
full (bool, optional) – include the constant term in the loss calculation. Default: False.
eps (float, optional) – value used to clamp var (see note below), for stability. Default: 1e-6.
reduction (string, optional) – specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the output is the average of all batch member losses, 'sum': the output is the sum of all batch member losses. Default: 'mean'.
- Shape:
Input: \((N, *)\) where \(*\) means any number of additional dimensions
Target: \((N, *)\), same shape as the input, or same shape as the input but with one dimension equal to 1 (to allow for broadcasting)
Var: \((N, *)\), same shape as the input, or same shape as the input but with one dimension equal to 1, or same shape as the input but with one fewer dimension (to allow for broadcasting)
Output: scalar if reduction is 'mean' (default) or 'sum'. If reduction is 'none', then \((N, *)\), same shape as the input
- Examples::
>>> loss = nn.GaussianNLLLoss() >>> input = torch.randn(5, 2, requires_grad=True) >>> target = torch.randn(5, 2) >>> var = torch.ones(5, 2, requires_grad=True) #heteroscedastic >>> output = loss(input, target, var) >>> output.backward()
>>> loss = nn.GaussianNLLLoss() >>> input = torch.randn(5, 2, requires_grad=True) >>> target = torch.randn(5, 2) >>> var = torch.ones(5, 1, requires_grad=True) #homoscedastic >>> output = loss(input, target, var) >>> output.backward()
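The per-element formula above can also be sketched directly in plain Python (illustrative only; the real module operates on tensors and reduces over the batch):

```python
import math

def gaussian_nll(input, target, var, full=False, eps=1e-6):
    """Per-element Gaussian NLL following the formula above.
    var is clamped from below by eps for stability."""
    v = max(var, eps)
    loss = 0.5 * (math.log(v) + (input - target) ** 2 / v)
    if full:
        loss += 0.5 * math.log(2 * math.pi)  # the otherwise-omitted constant term
    return loss
```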
Note
The clamping of var is ignored with respect to autograd, and so the gradients are unaffected by it.
- Reference:
Nix, D. A. and Weigend, A. S., “Estimating the mean and variance of the target probability distribution”, Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN’94), Orlando, FL, USA, 1994, pp. 55-60 vol.1, doi: 10.1109/ICNN.1994.374138.
-
forward
(input: torch.Tensor, target: torch.Tensor, var: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
GroupNorm
(num_groups: int, num_channels: int, eps: float = 1e-05, affine: bool = True, device=None, dtype=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters. Applies Group Normalization over a mini-batch of inputs as described in
the paper Group Normalization
\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]The input channels are separated into
num_groups
groups, each containingnum_channels / num_groups
channels. The mean and standard-deviation are calculated separately over each group. \(\gamma\) and \(\beta\) are learnable per-channel affine transform parameter vectors of size num_channels if affine is True. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False). This layer uses statistics computed from input data in both training and evaluation modes.
- Args:
num_groups (int): number of groups to separate the channels into
num_channels (int): number of channels expected in input
eps: a value added to the denominator for numerical stability. Default: 1e-5
affine: a boolean value that when set to True, this module has learnable per-channel affine parameters initialized to ones (for weights) and zeros (for biases). Default: True.
- Shape:
Input: \((N, C, *)\) where \(C=\text{num\_channels}\)
Output: \((N, C, *)\) (same shape as input)
Examples:
>>> input = torch.randn(20, 6, 10, 10)
>>> # Separate 6 channels into 3 groups
>>> m = nn.GroupNorm(3, 6)
>>> # Separate 6 channels into 6 groups (equivalent with InstanceNorm)
>>> m = nn.GroupNorm(6, 6)
>>> # Put all 6 channels into a single group (equivalent with LayerNorm)
>>> m = nn.GroupNorm(1, 6)
>>> # Activating the module
>>> output = m(input)
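The normalization itself can be sketched in plain Python for one sample (an illustrative sketch of the formula above, without the affine \(\gamma\), \(\beta\) step):

```python
import math

def group_norm(channels, num_groups, eps=1e-5):
    """channels: list of per-channel value lists for a single sample.
    Mean and variance are computed jointly over each group of channels,
    using the biased variance estimator, as described above."""
    per_group = len(channels) // num_groups
    out = []
    for g in range(num_groups):
        group = channels[g * per_group:(g + 1) * per_group]
        vals = [v for c in group for v in c]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)  # biased estimator
        scale = 1.0 / math.sqrt(var + eps)
        out.extend([[(v - mean) * scale for v in c] for c in group])
    return out
```

After normalization, each group has (approximately) zero mean and unit variance.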
-
class
borch.nn.
Hardshrink
(lambd: float = 0.5)¶ Applies the hard shrinkage function element-wise:
\[\begin{split}\text{HardShrink}(x) = \begin{cases} x, & \text{ if } x > \lambda \\ x, & \text{ if } x < -\lambda \\ 0, & \text{ otherwise } \end{cases}\end{split}\]- Parameters
lambd – the \(\lambda\) value for the Hardshrink formulation. Default: 0.5
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.Hardshrink() >>> input = torch.randn(2) >>> output = m(input)
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Hardsigmoid
(inplace: bool = False)¶ Applies the element-wise function:
\[\begin{split}\text{Hardsigmoid}(x) = \begin{cases} 0 & \text{if~} x \le -3, \\ 1 & \text{if~} x \ge +3, \\ x / 6 + 1 / 2 & \text{otherwise} \end{cases}\end{split}\]- Parameters
inplace – can optionally do the operation in-place. Default:
False
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.Hardsigmoid() >>> input = torch.randn(2) >>> output = m(input)
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Hardswish
(inplace: bool = False)¶ Applies the hardswish function, element-wise, as described in the paper:
\[\begin{split}\text{Hardswish}(x) = \begin{cases} 0 & \text{if~} x \le -3, \\ x & \text{if~} x \ge +3, \\ x \cdot (x + 3) /6 & \text{otherwise} \end{cases}\end{split}\]- Parameters
inplace – can optionally do the operation in-place. Default:
False
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.Hardswish() >>> input = torch.randn(2) >>> output = m(input)
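The piecewise definition above maps directly to plain Python (a sketch of the formula, not the vectorized implementation):

```python
def hardswish(x):
    """Piecewise Hardswish as defined above."""
    if x <= -3:
        return 0.0
    if x >= 3:
        return float(x)
    return x * (x + 3) / 6
```

Note that the middle branch agrees with the outer branches at the break points \(x = \pm 3\), so the function is continuous.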
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Hardtanh
(min_val: float = -1.0, max_val: float = 1.0, inplace: bool = False, min_value: Optional[float] = None, max_value: Optional[float] = None)¶ Applies the HardTanh function element-wise
HardTanh is defined as:
\[\begin{split}\text{HardTanh}(x) = \begin{cases} 1 & \text{ if } x > 1 \\ -1 & \text{ if } x < -1 \\ x & \text{ otherwise } \\ \end{cases}\end{split}\]The range of the linear region \([-1, 1]\) can be adjusted using
min_val
andmax_val
.- Parameters
min_val – minimum value of the linear region range. Default: -1
max_val – maximum value of the linear region range. Default: 1
inplace – can optionally do the operation in-place. Default:
False
Keyword arguments min_value and max_value have been deprecated in favor of min_val and max_val.
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.Hardtanh(-2, 2) >>> input = torch.randn(2) >>> output = m(input)
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
HingeEmbeddingLoss
(margin: float = 1.0, size_average=None, reduce=None, reduction: str = 'mean')¶ Measures the loss given an input tensor \(x\) and a labels tensor \(y\) (containing 1 or -1). This is usually used for measuring whether two inputs are similar or dissimilar, e.g. using the L1 pairwise distance as \(x\), and is typically used for learning nonlinear embeddings or semi-supervised learning.
The loss function for \(n\)-th sample in the mini-batch is
\[\begin{split}l_n = \begin{cases} x_n, & \text{if}\; y_n = 1,\\ \max \{0, \Delta - x_n\}, & \text{if}\; y_n = -1, \end{cases}\end{split}\]and the total loss function is
\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]where \(L = \{l_1,\dots,l_N\}^\top\).
- Parameters
margin (float, optional) – Has a default value of 1.
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
- Shape:
Input: \((*)\) where \(*\) means, any number of dimensions. The sum operation operates over all the elements.
Target: \((*)\), same shape as the input
Output: scalar. If
reduction
is'none'
, then same shape as the input
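The per-sample rule and the reduction above can be sketched in plain Python (illustrative, operating on flat lists rather than tensors):

```python
def hinge_embedding_loss(x, y, margin=1.0, reduction="mean"):
    """l_n = x_n if y_n == 1 else max(0, margin - x_n), then reduce."""
    losses = [xn if yn == 1 else max(0.0, margin - xn) for xn, yn in zip(x, y)]
    if reduction == "none":
        return losses
    total = sum(losses)
    return total / len(losses) if reduction == "mean" else total
```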
-
forward
(input: torch.Tensor, target: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
HuberLoss
(reduction: str = 'mean', delta: float = 1.0)¶ Creates a criterion that uses a squared term if the absolute element-wise error falls below delta and a delta-scaled L1 term otherwise. This loss combines advantages of both
L1Loss
andMSELoss
; the delta-scaled L1 region makes the loss less sensitive to outliers thanMSELoss
, while the L2 region provides smoothness overL1Loss
near 0. See Huber loss for more information.For a batch of size \(N\), the unreduced loss can be described as:
\[\ell(x, y) = L = \{l_1, ..., l_N\}^T\]with
\[\begin{split}l_n = \begin{cases} 0.5 (x_n - y_n)^2, & \text{if } |x_n - y_n| < delta \\ delta * (|x_n - y_n| - 0.5 * delta), & \text{otherwise } \end{cases}\end{split}\]If reduction is not none, then:
\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]Note
When delta is set to 1, this loss is equivalent to
SmoothL1Loss
. In general, this loss differs fromSmoothL1Loss
by a factor of delta (AKA beta in Smooth L1). SeeSmoothL1Loss
for additional discussion on the differences in behavior between the two losses.- Parameters
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Default:'mean'
delta (float, optional) – Specifies the threshold at which to change between delta-scaled L1 and L2 loss. The value must be positive. Default: 1.0
- Shape:
Input: \((*)\) where \(*\) means any number of dimensions.
Target: \((*)\), same shape as the input.
Output: scalar. If
reduction
is'none'
, then \((*)\), same shape as the input.
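The two branches and the reduction can be sketched in plain Python (illustrative; the real criterion is tensor-valued):

```python
def huber_loss(x, y, delta=1.0, reduction="mean"):
    """0.5 * d**2 when the absolute error d is below delta,
    delta * (d - 0.5 * delta) otherwise, per element."""
    losses = []
    for xn, yn in zip(x, y):
        d = abs(xn - yn)
        losses.append(0.5 * d * d if d < delta else delta * (d - 0.5 * delta))
    if reduction == "none":
        return losses
    total = sum(losses)
    return total / len(losses) if reduction == "mean" else total
```

With delta=1 this coincides with the SmoothL1Loss behavior mentioned in the note above.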
-
forward
(input: torch.Tensor, target: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Identity
(*args, **kwargs)¶ A placeholder identity operator that is argument-insensitive.
- Parameters
args – any argument (unused)
kwargs – any keyword argument (unused)
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.Identity(54, unused_argument1=0.1, unused_argument2=False) >>> input = torch.randn(128, 20) >>> output = m(input) >>> print(output.size()) torch.Size([128, 20])
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
InstanceNorm1d
(num_features: int, eps: float = 1e-05, momentum: float = 0.1, affine: bool = False, track_running_stats: bool = False, device=None, dtype=None)¶ Applies Instance Normalization over a 3D input (a mini-batch of 1D inputs with optional additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization.
\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]The mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch. \(\gamma\) and \(\beta\) are learnable parameter vectors of size C (where C is the input size) if
affine
isTrue
. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).By default, this layer uses instance statistics computed from input data in both training and evaluation modes.
If
track_running_stats
is set toTrue
, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a defaultmomentum
of 0.1.Note
This
momentum
argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \(\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t\), where \(\hat{x}\) is the estimated statistic and \(x_t\) is the new observed value.Note
InstanceNorm1d
andLayerNorm
are very similar, but have some subtle differences.InstanceNorm1d
is applied on each channel of channeled data like multidimensional time series, butLayerNorm
is usually applied on an entire sample, often in NLP tasks. Additionally, LayerNorm applies an elementwise affine transform, while InstanceNorm1d usually does not apply an affine transform.
- Parameters
num_features – \(C\) from an expected input of size \((N, C, L)\) or \(L\) from input of size \((N, L)\)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Default: 0.1
affine – a boolean value that when set to
True
, this module has learnable affine parameters, initialized the same way as done for batch normalization. Default:False
.track_running_stats – a boolean value that when set to
True
, this module tracks the running mean and variance, and when set toFalse
, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default:False
- Shape:
Input: \((N, C, L)\)
Output: \((N, C, L)\) (same shape as input)
Examples:
>>> # Without Learnable Parameters >>> m = nn.InstanceNorm1d(100) >>> # With Learnable Parameters >>> m = nn.InstanceNorm1d(100, affine=True) >>> input = torch.randn(20, 100, 40) >>> output = m(input)
-
class
borch.nn.
InstanceNorm2d
(num_features: int, eps: float = 1e-05, momentum: float = 0.1, affine: bool = False, track_running_stats: bool = False, device=None, dtype=None)¶ Applies Instance Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization.
\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]The mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch. \(\gamma\) and \(\beta\) are learnable parameter vectors of size C (where C is the input size) if
affine
isTrue
. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).By default, this layer uses instance statistics computed from input data in both training and evaluation modes.
If
track_running_stats
is set toTrue
, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a defaultmomentum
of 0.1.Note
This
momentum
argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \(\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t\), where \(\hat{x}\) is the estimated statistic and \(x_t\) is the new observed value.Note
InstanceNorm2d
andLayerNorm
are very similar, but have some subtle differences.InstanceNorm2d
is applied on each channel of channeled data like RGB images, butLayerNorm
is usually applied on an entire sample, often in NLP tasks. Additionally, LayerNorm applies an elementwise affine transform, while InstanceNorm2d usually does not apply an affine transform.
- Parameters
num_features – \(C\) from an expected input of size \((N, C, H, W)\)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Default: 0.1
affine – a boolean value that when set to
True
, this module has learnable affine parameters, initialized the same way as done for batch normalization. Default:False
.track_running_stats – a boolean value that when set to
True
, this module tracks the running mean and variance, and when set toFalse
, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default:False
- Shape:
Input: \((N, C, H, W)\)
Output: \((N, C, H, W)\) (same shape as input)
Examples:
>>> # Without Learnable Parameters >>> m = nn.InstanceNorm2d(100) >>> # With Learnable Parameters >>> m = nn.InstanceNorm2d(100, affine=True) >>> input = torch.randn(20, 100, 35, 45) >>> output = m(input)
-
class
borch.nn.
InstanceNorm3d
(num_features: int, eps: float = 1e-05, momentum: float = 0.1, affine: bool = False, track_running_stats: bool = False, device=None, dtype=None)¶ Applies Instance Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension) as described in the paper Instance Normalization: The Missing Ingredient for Fast Stylization.
\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]The mean and standard-deviation are calculated per-dimension separately for each object in a mini-batch. \(\gamma\) and \(\beta\) are learnable parameter vectors of size C (where C is the input size) if
affine
isTrue
. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).By default, this layer uses instance statistics computed from input data in both training and evaluation modes.
If
track_running_stats
is set toTrue
, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a defaultmomentum
of 0.1.Note
This
momentum
argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \(\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t\), where \(\hat{x}\) is the estimated statistic and \(x_t\) is the new observed value.Note
InstanceNorm3d
andLayerNorm
are very similar, but have some subtle differences.InstanceNorm3d
is applied on each channel of channeled data like 3D models with RGB color, butLayerNorm
is usually applied on an entire sample, often in NLP tasks. Additionally, LayerNorm applies an elementwise affine transform, while InstanceNorm3d usually does not apply an affine transform.
- Parameters
num_features – \(C\) from an expected input of size \((N, C, D, H, W)\)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Default: 0.1
affine – a boolean value that when set to
True
, this module has learnable affine parameters, initialized the same way as done for batch normalization. Default:False
.track_running_stats – a boolean value that when set to
True
, this module tracks the running mean and variance, and when set toFalse
, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default:False
- Shape:
Input: \((N, C, D, H, W)\)
Output: \((N, C, D, H, W)\) (same shape as input)
Examples:
>>> # Without Learnable Parameters >>> m = nn.InstanceNorm3d(100) >>> # With Learnable Parameters >>> m = nn.InstanceNorm3d(100, affine=True) >>> input = torch.randn(20, 100, 35, 45, 10) >>> output = m(input)
-
class
borch.nn.
KLDivLoss
(size_average=None, reduce=None, reduction: str = 'mean', log_target: bool = False)¶ The Kullback-Leibler divergence loss measure
Kullback-Leibler divergence is a useful distance measure for continuous distributions and is often useful when performing direct regression over the space of (discretely sampled) continuous output distributions.
As with
NLLLoss
, the input given is expected to contain log-probabilities and is not restricted to a 2D Tensor. The targets are interpreted as probabilities by default, but could be considered as log-probabilities withlog_target
set toTrue
.This criterion expects a target Tensor of the same size as the input Tensor.
The unreduced (i.e. with
reduction
set to'none'
) loss can be described as:\[l(x,y) = L = \{ l_1,\dots,l_N \}, \quad l_n = y_n \cdot \left( \log y_n - x_n \right)\]where the index \(N\) spans all dimensions of
input
and \(L\) has the same shape asinput
. Ifreduction
is not'none'
(default'mean'
), then:\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';} \\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]In default
reduction
mode'mean'
, the losses are averaged for each minibatch over observations as well as over dimensions.'batchmean'
mode gives the correct KL divergence where losses are averaged over batch dimension only.'mean'
mode’s behavior will be changed to the same as'batchmean'
in the next major release.- Parameters
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'batchmean'
|'sum'
|'mean'
.'none'
: no reduction will be applied.'batchmean'
: the sum of the output will be divided by batchsize.'sum'
: the output will be summed.'mean'
: the output will be divided by the number of elements in the output. Default:'mean'
log_target (bool, optional) – Specifies whether target is passed in the log space. Default:
False
Note
size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
.Note
reduction
='mean'
doesn’t return the true kl divergence value, please usereduction
='batchmean'
which aligns with KL math definition. In the next major release,'mean'
will be changed to be the same as'batchmean'
.- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Target: \((*)\), same shape as the input.
Output: scalar by default. If :attr:
reduction
is'none'
, then \((*)\), same shape as the input.
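The pointwise term \(y_n (\log y_n - x_n)\) can be sketched in plain Python (illustrative; it assumes strictly positive targets when log_target=False and treats the flat list as a single batch element):

```python
import math

def kl_div(input, target, reduction="sum", log_target=False):
    """input holds log-probabilities; target holds probabilities,
    or log-probabilities when log_target=True."""
    if log_target:
        pointwise = [math.exp(t) * (t - x) for x, t in zip(input, target)]
    else:
        pointwise = [t * (math.log(t) - x) for x, t in zip(input, target)]
    if reduction == "none":
        return pointwise
    total = sum(pointwise)
    return total / len(pointwise) if reduction == "mean" else total
```

When input and target describe the same distribution, the divergence is zero, which is a quick sanity check.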
-
forward
(input: torch.Tensor, target: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
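The 'batchmean' behavior described above is easy to verify numerically. The sketch below assumes the parameter list above belongs to a KLDivLoss-style criterion and uses the torch.nn implementation, whose interface borch.nn mirrors:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
# With log_target=False (the default), input holds log-probabilities
# and target holds probabilities.
log_q = F.log_softmax(torch.randn(3, 5), dim=1)
p = F.softmax(torch.randn(3, 5), dim=1)

kl_batchmean = nn.KLDivLoss(reduction="batchmean")(log_q, p)
kl_sum = nn.KLDivLoss(reduction="sum")(log_q, p)

# 'batchmean' divides the pointwise sum by the batch size only,
# which matches the mathematical definition of KL divergence.
assert torch.allclose(kl_batchmean, kl_sum / 3)
```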
-
class
borch.nn.
L1Loss
(size_average=None, reduce=None, reduction: str = 'mean')¶ Creates a criterion that measures the mean absolute error (MAE) between each element in the input \(x\) and target \(y\).
The unreduced (i.e. with reduction set to 'none') loss can be described as:
\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = \left| x_n - y_n \right|,\]
where \(N\) is the batch size. If reduction is not 'none' (default 'mean'), then:
), then:\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]\(x\) and \(y\) are tensors of arbitrary shapes with a total of \(n\) elements each.
The sum operation still operates over all the elements, and divides by \(n\).
The division by \(n\) can be avoided if one sets reduction = 'sum'.
Supports real-valued and complex-valued inputs.
- Parameters
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Target: \((*)\), same shape as the input.
Output: scalar. If reduction is 'none', then \((*)\), same shape as the input.
Examples:
>>> loss = nn.L1Loss()
>>> input = torch.randn(3, 5, requires_grad=True)
>>> target = torch.randn(3, 5)
>>> output = loss(input, target)
>>> output.backward()
-
forward
(input: torch.Tensor, target: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
LPPool1d
(norm_type: float, kernel_size: Union[int, Tuple[int, ...]], stride: Union[int, Tuple[int, ...], None] = None, ceil_mode: bool = False)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.- Applies a 1D power-average pooling over an input signal composed of several input
planes.
On each window, the function computed is:
\[f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}\]At p = \(\infty\), one gets Max Pooling
At p = 1, one gets Sum Pooling (which is proportional to Average Pooling)
Note
If the sum to the power of p is zero, the gradient of this function is not defined. This implementation will set the gradient to zero in this case.
- Args:
kernel_size: a single int, the size of the window
stride: a single int, the stride of the window. Default value is kernel_size
ceil_mode: when True, will use ceil instead of floor to compute the output shape
- Shape:
Input: \((N, C, L_{in})\) or \((C, L_{in})\).
Output: \((N, C, L_{out})\) or \((C, L_{out})\), where
\[L_{out} = \left\lfloor\frac{L_{in} - \text{kernel\_size}}{\text{stride}} + 1\right\rfloor\]
Examples:
>>> # power-2 pool of window of length 3, with stride 2
>>> m = nn.LPPool1d(2, 3, stride=2)
>>> input = torch.randn(20, 16, 50)
>>> output = m(input)
-
class
borch.nn.
LPPool2d
(norm_type: float, kernel_size: Union[int, Tuple[int, ...]], stride: Union[int, Tuple[int, ...], None] = None, ceil_mode: bool = False)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.- Applies a 2D power-average pooling over an input signal composed of several input
planes.
On each window, the function computed is:
\[f(X) = \sqrt[p]{\sum_{x \in X} x^{p}}\]At p = \(\infty\), one gets Max Pooling
At p = 1, one gets Sum Pooling (which is proportional to average pooling)
The parameters
kernel_size
,stride
can either be:a single
int
– in which case the same value is used for the height and width dimensiona
tuple
of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
Note
If the sum to the power of p is zero, the gradient of this function is not defined. This implementation will set the gradient to zero in this case.
- Args:
kernel_size: the size of the window
stride: the stride of the window. Default value is kernel_size
ceil_mode: when True, will use ceil instead of floor to compute the output shape
- Shape:
Input: \((N, C, H_{in}, W_{in})\)
Output: \((N, C, H_{out}, W_{out})\), where
\[H_{out} = \left\lfloor\frac{H_{in} - \text{kernel\_size}[0]}{\text{stride}[0]} + 1\right\rfloor\]\[W_{out} = \left\lfloor\frac{W_{in} - \text{kernel\_size}[1]}{\text{stride}[1]} + 1\right\rfloor\]
Examples:
>>> # power-2 pool of square window of size=3, stride=2
>>> m = nn.LPPool2d(2, 3, stride=2)
>>> # pool of non-square window of power 1.2
>>> m = nn.LPPool2d(1.2, (3, 2), stride=(2, 1))
>>> input = torch.randn(20, 16, 50, 32)
>>> output = m(input)
-
class
borch.nn.
LSTM
(*args, **kwargs)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.- Applies a multi-layer long short-term memory (LSTM) RNN to an input
sequence.
For each element in the input sequence, each layer computes the following function:
\[\begin{split}\begin{array}{ll} \\ i_t = \sigma(W_{ii} x_t + b_{ii} + W_{hi} h_{t-1} + b_{hi}) \\ f_t = \sigma(W_{if} x_t + b_{if} + W_{hf} h_{t-1} + b_{hf}) \\ g_t = \tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{t-1} + b_{hg}) \\ o_t = \sigma(W_{io} x_t + b_{io} + W_{ho} h_{t-1} + b_{ho}) \\ c_t = f_t \odot c_{t-1} + i_t \odot g_t \\ h_t = o_t \odot \tanh(c_t) \\ \end{array}\end{split}\]where \(h_t\) is the hidden state at time t, \(c_t\) is the cell state at time t, \(x_t\) is the input at time t, \(h_{t-1}\) is the hidden state of the layer at time t-1 or the initial hidden state at time 0, and \(i_t\), \(f_t\), \(g_t\), \(o_t\) are the input, forget, cell, and output gates, respectively. \(\sigma\) is the sigmoid function, and \(\odot\) is the Hadamard product.
In a multilayer LSTM, the input \(x^{(l)}_t\) of the \(l\) -th layer (\(l >= 2\)) is the hidden state \(h^{(l-1)}_t\) of the previous layer multiplied by dropout \(\delta^{(l-1)}_t\) where each \(\delta^{(l-1)}_t\) is a Bernoulli random variable which is \(0\) with probability
dropout
.If
proj_size > 0
is specified, LSTM with projections will be used. This changes the LSTM cell in the following way. First, the dimension of \(h_t\) will be changed fromhidden_size
toproj_size
(dimensions of \(W_{hi}\) will be changed accordingly). Second, the output hidden state of each layer will be multiplied by a learnable projection matrix: \(h_t = W_{hr}h_t\). Note that as a consequence of this, the output of LSTM network will be of different shape as well. See Inputs/Outputs sections below for exact dimensions of all variables. You can find more details in https://arxiv.org/abs/1402.1128.- Args:
input_size: The number of expected features in the input x
hidden_size: The number of features in the hidden state h
num_layers: Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two LSTMs together to form a stacked LSTM, with the second LSTM taking in outputs of the first LSTM and computing the final results. Default: 1
bias: If False, then the layer does not use bias weights b_ih and b_hh. Default: True
batch_first: If True, then the input and output tensors are provided as (batch, seq, feature) instead of (seq, batch, feature). Note that this does not apply to hidden or cell states. See the Inputs/Outputs sections below for details. Default: False
dropout: If non-zero, introduces a Dropout layer on the outputs of each LSTM layer except the last layer, with dropout probability equal to dropout. Default: 0
bidirectional: If True, becomes a bidirectional LSTM. Default: False
proj_size: If > 0, will use LSTM with projections of corresponding size. Default: 0
- Inputs: input, (h_0, c_0)
input: tensor of shape \((L, N, H_{in})\) when batch_first=False or \((N, L, H_{in})\) when batch_first=True, containing the features of the input sequence. The input can also be a packed variable-length sequence. See torch.nn.utils.rnn.pack_padded_sequence() or torch.nn.utils.rnn.pack_sequence() for details.
h_0: tensor of shape \((D * \text{num\_layers}, N, H_{out})\) containing the initial hidden state for each element in the batch. Defaults to zeros if (h_0, c_0) is not provided.
c_0: tensor of shape \((D * \text{num\_layers}, N, H_{cell})\) containing the initial cell state for each element in the batch. Defaults to zeros if (h_0, c_0) is not provided.
where:
\[\begin{split}\begin{aligned} N ={} & \text{batch size} \\ L ={} & \text{sequence length} \\ D ={} & 2 \text{ if bidirectional=True otherwise } 1 \\ H_{in} ={} & \text{input\_size} \\ H_{cell} ={} & \text{hidden\_size} \\ H_{out} ={} & \text{proj\_size if } \text{proj\_size}>0 \text{ otherwise hidden\_size} \\ \end{aligned}\end{split}\]- Outputs: output, (h_n, c_n)
output: tensor of shape \((L, N, D * H_{out})\) when batch_first=False or \((N, L, D * H_{out})\) when batch_first=True, containing the output features (h_t) from the last layer of the LSTM, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence.
h_n: tensor of shape \((D * \text{num\_layers}, N, H_{out})\) containing the final hidden state for each element in the batch.
c_n: tensor of shape \((D * \text{num\_layers}, N, H_{cell})\) containing the final cell state for each element in the batch.
- Attributes:
weight_ih_l[k]: the learnable input-hidden weights of the \(\text{k}^{th}\) layer (W_ii|W_if|W_ig|W_io), of shape (4*hidden_size, input_size) for k = 0. Otherwise, the shape is (4*hidden_size, num_directions * hidden_size). If proj_size > 0 was specified, the shape will be (4*hidden_size, num_directions * proj_size) for k > 0
weight_hh_l[k]: the learnable hidden-hidden weights of the \(\text{k}^{th}\) layer (W_hi|W_hf|W_hg|W_ho), of shape (4*hidden_size, hidden_size). If proj_size > 0 was specified, the shape will be (4*hidden_size, proj_size).
bias_ih_l[k]: the learnable input-hidden bias of the \(\text{k}^{th}\) layer (b_ii|b_if|b_ig|b_io), of shape (4*hidden_size)
bias_hh_l[k]: the learnable hidden-hidden bias of the \(\text{k}^{th}\) layer (b_hi|b_hf|b_hg|b_ho), of shape (4*hidden_size)
weight_hr_l[k]: the learnable projection weights of the \(\text{k}^{th}\) layer, of shape (proj_size, hidden_size). Only present when proj_size > 0 was specified.
weight_ih_l[k]_reverse: Analogous to weight_ih_l[k] for the reverse direction. Only present when bidirectional=True.
weight_hh_l[k]_reverse: Analogous to weight_hh_l[k] for the reverse direction. Only present when bidirectional=True.
bias_ih_l[k]_reverse: Analogous to bias_ih_l[k] for the reverse direction. Only present when bidirectional=True.
bias_hh_l[k]_reverse: Analogous to bias_hh_l[k] for the reverse direction. Only present when bidirectional=True.
weight_hr_l[k]_reverse: Analogous to weight_hr_l[k] for the reverse direction. Only present when bidirectional=True and proj_size > 0 was specified.
Note
All the weights and biases are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\text{hidden\_size}}\)
Note
For bidirectional LSTMs, forward and backward are directions 0 and 1 respectively. Example of splitting the output layers when batch_first=False: output.view(seq_len, batch, num_directions, hidden_size).
Examples:
>>> rnn = nn.LSTM(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> c0 = torch.randn(2, 3, 20)
>>> output, (hn, cn) = rnn(input, (h0, c0))
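The proj_size shape rules described above can also be checked directly. This sketch uses the torch.nn implementation; borch.nn is documented to mirror the same interface:

```python
import torch
import torch.nn as nn

# 2-layer LSTM with projections: H_out becomes proj_size,
# while the cell state keeps hidden_size.
rnn = nn.LSTM(input_size=10, hidden_size=20, num_layers=2, proj_size=5)
x = torch.randn(7, 3, 10)          # (L, N, H_in)
out, (h_n, c_n) = rnn(x)
assert out.shape == (7, 3, 5)      # (L, N, D * H_out), D = 1
assert h_n.shape == (2, 3, 5)      # (num_layers, N, H_out)
assert c_n.shape == (2, 3, 20)     # (num_layers, N, H_cell)
```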
-
class
borch.nn.
LSTMCell
(input_size: int, hidden_size: int, bias: bool = True, device=None, dtype=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.A long short-term memory (LSTM) cell.
\[\begin{split}\begin{array}{ll} i = \sigma(W_{ii} x + b_{ii} + W_{hi} h + b_{hi}) \\ f = \sigma(W_{if} x + b_{if} + W_{hf} h + b_{hf}) \\ g = \tanh(W_{ig} x + b_{ig} + W_{hg} h + b_{hg}) \\ o = \sigma(W_{io} x + b_{io} + W_{ho} h + b_{ho}) \\ c' = f * c + i * g \\ h' = o * \tanh(c') \\ \end{array}\end{split}\]where \(\sigma\) is the sigmoid function, and \(*\) is the Hadamard product.
- Args:
input_size: The number of expected features in the input x
hidden_size: The number of features in the hidden state h
bias: If False, then the layer does not use bias weights b_ih and b_hh. Default: True
- Inputs: input, (h_0, c_0)
input of shape (batch, input_size): tensor containing input features
h_0 of shape (batch, hidden_size): tensor containing the initial hidden state for each element in the batch.
c_0 of shape (batch, hidden_size): tensor containing the initial cell state for each element in the batch.
If (h_0, c_0) is not provided, both h_0 and c_0 default to zero.
- Outputs: (h_1, c_1)
h_1 of shape (batch, hidden_size): tensor containing the next hidden state for each element in the batch
c_1 of shape (batch, hidden_size): tensor containing the next cell state for each element in the batch
- Attributes:
weight_ih: the learnable input-hidden weights, of shape (4*hidden_size, input_size)
weight_hh: the learnable hidden-hidden weights, of shape (4*hidden_size, hidden_size)
bias_ih: the learnable input-hidden bias, of shape (4*hidden_size)
bias_hh: the learnable hidden-hidden bias, of shape (4*hidden_size)
Note
All the weights and biases are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\text{hidden\_size}}\)
Examples:
>>> rnn = nn.LSTMCell(10, 20)  # (input_size, hidden_size)
>>> input = torch.randn(2, 3, 10)  # (time_steps, batch, input_size)
>>> hx = torch.randn(3, 20)  # (batch, hidden_size)
>>> cx = torch.randn(3, 20)
>>> output = []
>>> for i in range(input.size()[0]):
...     hx, cx = rnn(input[i], (hx, cx))
...     output.append(hx)
>>> output = torch.stack(output, dim=0)
-
class
borch.nn.
LayerNorm
(normalized_shape: Union[int, List[int], torch.Size], eps: float = 1e-05, elementwise_affine: bool = True, device=None, dtype=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.- Applies Layer Normalization over a mini-batch of inputs as described in
the paper Layer Normalization
\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]The mean and standard-deviation are calculated over the last D dimensions, where D is the dimension of normalized_shape. For example, if normalized_shape is (3, 5) (a 2-dimensional shape), the mean and standard-deviation are computed over the last 2 dimensions of the input (i.e. input.mean((-2, -1))). \(\gamma\) and \(\beta\) are learnable affine transform parameters of normalized_shape if elementwise_affine is True. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).
Note
Unlike Batch Normalization and Instance Normalization, which apply a scalar scale and bias for each entire channel/plane with the affine option, Layer Normalization applies per-element scale and bias with elementwise_affine.
This layer uses statistics computed from input data in both training and evaluation modes.
- Args:
- normalized_shape (int or list or torch.Size): input shape from an expected input
of size
\[[* \times \text{normalized\_shape}[0] \times \text{normalized\_shape}[1] \times \ldots \times \text{normalized\_shape}[-1]]\]If a single integer is used, it is treated as a singleton list, and this module will normalize over the last dimension which is expected to be of that specific size.
eps: a value added to the denominator for numerical stability. Default: 1e-5
elementwise_affine: a boolean value that when set to True, this module has learnable per-element affine parameters initialized to ones (for weights) and zeros (for biases). Default: True.
- Attributes:
weight: the learnable weights of the module of shape \(\text{normalized\_shape}\) when elementwise_affine is set to True. The values are initialized to 1.
bias: the learnable bias of the module of shape \(\text{normalized\_shape}\) when elementwise_affine is set to True. The values are initialized to 0.
- Shape:
Input: \((N, *)\)
Output: \((N, *)\) (same shape as input)
Examples:
>>> # NLP Example
>>> batch, sentence_length, embedding_dim = 20, 5, 10
>>> embedding = torch.randn(batch, sentence_length, embedding_dim)
>>> layer_norm = nn.LayerNorm(embedding_dim)
>>> # Activate module
>>> layer_norm(embedding)
>>>
>>> # Image Example
>>> N, C, H, W = 20, 5, 10, 10
>>> input = torch.randn(N, C, H, W)
>>> # Normalize over the last three dimensions (i.e. the channel and spatial dimensions)
>>> layer_norm = nn.LayerNorm([C, H, W])
>>> output = layer_norm(input)
-
class
borch.nn.
LazyBatchNorm1d
(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)¶ A
torch.nn.BatchNorm1d
module with lazy initialization of thenum_features
argument of theBatchNorm1d
that is inferred from theinput.size(1)
. The attributes that will be lazily initialized are weight, bias, running_mean and running_var.Check the
torch.nn.modules.lazy.LazyModuleMixin
for further documentation on lazy modules and their limitations.- Parameters
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1
affine – a boolean value that when set to True, this module has learnable affine parameters. Default: True
track_running_stats – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics and initializes statistics buffers running_mean and running_var as None. When these buffers are None, this module always uses batch statistics in both training and eval modes. Default: True
-
cls_to_become
¶ alias of
BatchNorm1d
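A minimal sketch of the lazy initialization and the cls_to_become swap described above, using the torch.nn implementation (borch.nn provides the same interface):

```python
import torch
import torch.nn as nn

bn = nn.LazyBatchNorm1d()              # num_features is not given here
x = torch.randn(8, 16, 50)             # (N, C, L); C = 16 will be inferred
y = bn(x)                              # first call materializes weight/bias/running stats
assert isinstance(bn, nn.BatchNorm1d)  # cls_to_become: now a plain BatchNorm1d
assert bn.weight.shape == (16,)
assert y.shape == x.shape
```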
-
class
borch.nn.
LazyBatchNorm2d
(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)¶ A
torch.nn.BatchNorm2d
module with lazy initialization of thenum_features
argument of theBatchNorm2d
that is inferred from theinput.size(1)
. The attributes that will be lazily initialized are weight, bias, running_mean and running_var.Check the
torch.nn.modules.lazy.LazyModuleMixin
for further documentation on lazy modules and their limitations.- Parameters
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1
affine – a boolean value that when set to True, this module has learnable affine parameters. Default: True
track_running_stats – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics and initializes statistics buffers running_mean and running_var as None. When these buffers are None, this module always uses batch statistics in both training and eval modes. Default: True
-
cls_to_become
¶ alias of
BatchNorm2d
-
class
borch.nn.
LazyBatchNorm3d
(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)¶ A
torch.nn.BatchNorm3d
module with lazy initialization of thenum_features
argument of theBatchNorm3d
that is inferred from theinput.size(1)
. The attributes that will be lazily initialized are weight, bias, running_mean and running_var.Check the
torch.nn.modules.lazy.LazyModuleMixin
for further documentation on lazy modules and their limitations.- Parameters
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Can be set to None for cumulative moving average (i.e. simple average). Default: 0.1
affine – a boolean value that when set to True, this module has learnable affine parameters. Default: True
track_running_stats – a boolean value that when set to True, this module tracks the running mean and variance, and when set to False, this module does not track such statistics and initializes statistics buffers running_mean and running_var as None. When these buffers are None, this module always uses batch statistics in both training and eval modes. Default: True
-
cls_to_become
¶ alias of
BatchNorm3d
-
class
borch.nn.
LazyConv1d
(out_channels: int, kernel_size: Union[int, Tuple[int]], stride: Union[int, Tuple[int]] = 1, padding: Union[int, Tuple[int]] = 0, dilation: Union[int, Tuple[int]] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros', device=None, dtype=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.- A
torch.nn.Conv1d
module with lazy initialization of the
in_channels
argument of theConv1d
that is inferred from theinput.size(1)
. The attributes that will be lazily initialized are weight and bias.Check the
torch.nn.modules.lazy.LazyModuleMixin
for further documentation on lazy modules and their limitations.- Args:
out_channels (int): Number of channels produced by the convolution
kernel_size (int or tuple): Size of the convolving kernel
stride (int or tuple, optional): Stride of the convolution. Default: 1
padding (int or tuple, optional): Zero-padding added to both sides of the input. Default: 0
padding_mode (string, optional): 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
dilation (int or tuple, optional): Spacing between kernel elements. Default: 1
groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional): If True, adds a learnable bias to the output. Default: True
See also
torch.nn.Conv1d
andtorch.nn.modules.lazy.LazyModuleMixin
-
class
borch.nn.
LazyConv2d
(out_channels: int, kernel_size: Union[int, Tuple[int, int]], stride: Union[int, Tuple[int, int]] = 1, padding: Union[int, Tuple[int, int]] = 0, dilation: Union[int, Tuple[int, int]] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros', device=None, dtype=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.- A
torch.nn.Conv2d
module with lazy initialization of the
in_channels
argument of theConv2d
that is inferred from theinput.size(1)
. The attributes that will be lazily initialized are weight and bias.Check the
torch.nn.modules.lazy.LazyModuleMixin
for further documentation on lazy modules and their limitations.- Args:
out_channels (int): Number of channels produced by the convolution
kernel_size (int or tuple): Size of the convolving kernel
stride (int or tuple, optional): Stride of the convolution. Default: 1
padding (int or tuple, optional): Zero-padding added to both sides of the input. Default: 0
padding_mode (string, optional): 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
dilation (int or tuple, optional): Spacing between kernel elements. Default: 1
groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional): If True, adds a learnable bias to the output. Default: True
See also
torch.nn.Conv2d
andtorch.nn.modules.lazy.LazyModuleMixin
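The in_channels inference described above can be sketched as follows, using the torch.nn implementation (borch.nn mirrors this interface):

```python
import torch
import torch.nn as nn

conv = nn.LazyConv2d(out_channels=8, kernel_size=3)  # in_channels omitted
x = torch.randn(4, 3, 32, 32)             # (N, C, H, W); in_channels = 3 is inferred
y = conv(x)
assert conv.weight.shape == (8, 3, 3, 3)  # (out_channels, in_channels, kH, kW)
assert y.shape == (4, 8, 30, 30)          # 32 - 3 + 1 = 30 with stride 1, no padding
```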
-
class
borch.nn.
LazyConv3d
(out_channels: int, kernel_size: Union[int, Tuple[int, int, int]], stride: Union[int, Tuple[int, int, int]] = 1, padding: Union[int, Tuple[int, int, int]] = 0, dilation: Union[int, Tuple[int, int, int]] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros', device=None, dtype=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.- A
torch.nn.Conv3d
module with lazy initialization of the
in_channels
argument of theConv3d
that is inferred from theinput.size(1)
. The attributes that will be lazily initialized are weight and bias.Check the
torch.nn.modules.lazy.LazyModuleMixin
for further documentation on lazy modules and their limitations.- Args:
out_channels (int): Number of channels produced by the convolution
kernel_size (int or tuple): Size of the convolving kernel
stride (int or tuple, optional): Stride of the convolution. Default: 1
padding (int or tuple, optional): Zero-padding added to both sides of the input. Default: 0
padding_mode (string, optional): 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
dilation (int or tuple, optional): Spacing between kernel elements. Default: 1
groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional): If True, adds a learnable bias to the output. Default: True
See also
torch.nn.Conv3d
andtorch.nn.modules.lazy.LazyModuleMixin
-
class
borch.nn.
LazyConvTranspose1d
(out_channels: int, kernel_size: Union[int, Tuple[int]], stride: Union[int, Tuple[int]] = 1, padding: Union[int, Tuple[int]] = 0, output_padding: Union[int, Tuple[int]] = 0, groups: int = 1, bias: bool = True, dilation: Union[int, Tuple[int]] = 1, padding_mode: str = 'zeros', device=None, dtype=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.- A
torch.nn.ConvTranspose1d
module with lazy initialization of the
in_channels
argument of theConvTranspose1d
that is inferred from theinput.size(1)
. The attributes that will be lazily initialized are weight and bias.Check the
torch.nn.modules.lazy.LazyModuleMixin
for further documentation on lazy modules and their limitations.- Args:
out_channels (int): Number of channels produced by the convolution
kernel_size (int or tuple): Size of the convolving kernel
stride (int or tuple, optional): Stride of the convolution. Default: 1
padding (int or tuple, optional): dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of the input. Default: 0
output_padding (int or tuple, optional): Additional size added to one side of the output shape. Default: 0
groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional): If True, adds a learnable bias to the output. Default: True
dilation (int or tuple, optional): Spacing between kernel elements. Default: 1
See also
torch.nn.ConvTranspose1d
andtorch.nn.modules.lazy.LazyModuleMixin
-
class
borch.nn.
LazyConvTranspose2d
(out_channels: int, kernel_size: Union[int, Tuple[int, int]], stride: Union[int, Tuple[int, int]] = 1, padding: Union[int, Tuple[int, int]] = 0, output_padding: Union[int, Tuple[int, int]] = 0, groups: int = 1, bias: bool = True, dilation: int = 1, padding_mode: str = 'zeros', device=None, dtype=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.- A
torch.nn.ConvTranspose2d
module with lazy initialization of the
in_channels
argument of theConvTranspose2d
that is inferred from theinput.size(1)
. The attributes that will be lazily initialized are weight and bias.Check the
torch.nn.modules.lazy.LazyModuleMixin
for further documentation on lazy modules and their limitations.- Args:
out_channels (int): Number of channels produced by the convolution
kernel_size (int or tuple): Size of the convolving kernel
stride (int or tuple, optional): Stride of the convolution. Default: 1
padding (int or tuple, optional): dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Default: 0
output_padding (int or tuple, optional): Additional size added to one side of each dimension in the output shape. Default: 0
groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional): If True, adds a learnable bias to the output. Default: True
dilation (int or tuple, optional): Spacing between kernel elements. Default: 1
See also
torch.nn.ConvTranspose2d
andtorch.nn.modules.lazy.LazyModuleMixin
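The lazy inference combines with the usual transposed-convolution output-size formula; a short check using the torch.nn implementation (borch.nn mirrors this interface):

```python
import torch
import torch.nn as nn

deconv = nn.LazyConvTranspose2d(out_channels=8, kernel_size=3, stride=2)
x = torch.randn(2, 4, 16, 16)     # in_channels = 4 is inferred on first call
y = deconv(x)
# H_out = (H_in - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + output_padding + 1
assert y.shape == (2, 8, 33, 33)  # (16 - 1) * 2 + 2 + 1 = 33
```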
-
class
borch.nn.
LazyConvTranspose3d
(out_channels: int, kernel_size: Union[int, Tuple[int, int, int]], stride: Union[int, Tuple[int, int, int]] = 1, padding: Union[int, Tuple[int, int, int]] = 0, output_padding: Union[int, Tuple[int, int, int]] = 0, groups: int = 1, bias: bool = True, dilation: Union[int, Tuple[int, int, int]] = 1, padding_mode: str = 'zeros', device=None, dtype=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.- A
torch.nn.ConvTranspose3d
module with lazy initialization of the
in_channels
argument of theConvTranspose3d
that is inferred from theinput.size(1)
. The attributes that will be lazily initialized are weight and bias.Check the
torch.nn.modules.lazy.LazyModuleMixin
for further documentation on lazy modules and their limitations.- Args:
out_channels (int): Number of channels produced by the convolution
kernel_size (int or tuple): Size of the convolving kernel
stride (int or tuple, optional): Stride of the convolution. Default: 1
padding (int or tuple, optional): dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Default: 0
output_padding (int or tuple, optional): Additional size added to one side of each dimension in the output shape. Default: 0
groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional): If True, adds a learnable bias to the output. Default: True
dilation (int or tuple, optional): Spacing between kernel elements. Default: 1
See also
torch.nn.ConvTranspose3d
andtorch.nn.modules.lazy.LazyModuleMixin
-
class
borch.nn.
LazyInstanceNorm1d
(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)¶ A
torch.nn.InstanceNorm1d
module with lazy initialization of thenum_features
argument of theInstanceNorm1d
that is inferred from theinput.size(1)
. The attributes that will be lazily initialized are weight, bias, running_mean and running_var.Check the
torch.nn.modules.lazy.LazyModuleMixin
for further documentation on lazy modules and their limitations.- Parameters
num_features – \(C\) from an expected input of size \((N, C, L)\) or \(L\) from input of size \((N, L)\)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Default: 0.1
affine – a boolean value that when set to
True
, this module has learnable affine parameters, initialized the same way as done for batch normalization. Default: True
.track_running_stats – a boolean value that when set to
True
, this module tracks the running mean and variance, and when set to False
, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default: True
-
cls_to_become
¶ alias of
InstanceNorm1d
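As with the other lazy modules, `num_features` is inferred on the first forward call. A minimal sketch, shown with `torch.nn` (which `borch.nn` wraps as a near drop-in replacement); the same pattern applies to the 2D and 3D variants below:

```python
import torch
import torch.nn as nn  # borch.nn is intended as a drop-in replacement

norm = nn.LazyInstanceNorm1d()  # num_features is omitted and inferred lazily
x = torch.randn(8, 3, 20)       # (N, C, L) mini-batch with C = 3 channels
y = norm(x)                     # first forward call: num_features inferred as 3

print(norm.num_features)  # 3
print(y.shape)            # torch.Size([8, 3, 20])
```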
-
class
borch.nn.
LazyInstanceNorm2d
(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)¶ A
torch.nn.InstanceNorm2d
module with lazy initialization of thenum_features
argument of theInstanceNorm2d
that is inferred from theinput.size(1)
. The attributes that will be lazily initialized are weight, bias, running_mean and running_var.Check the
torch.nn.modules.lazy.LazyModuleMixin
for further documentation on lazy modules and their limitations.- Parameters
num_features – \(C\) from an expected input of size \((N, C, H, W)\)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Default: 0.1
affine – a boolean value that when set to
True
, this module has learnable affine parameters, initialized the same way as done for batch normalization. Default: True
.track_running_stats – a boolean value that when set to
True
, this module tracks the running mean and variance, and when set to False
, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default: True
-
cls_to_become
¶ alias of
InstanceNorm2d
-
class
borch.nn.
LazyInstanceNorm3d
(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)¶ A
torch.nn.InstanceNorm3d
module with lazy initialization of thenum_features
argument of theInstanceNorm3d
that is inferred from theinput.size(1)
. The attributes that will be lazily initialized are weight, bias, running_mean and running_var.Check the
torch.nn.modules.lazy.LazyModuleMixin
for further documentation on lazy modules and their limitations.- Parameters
num_features – \(C\) from an expected input of size \((N, C, D, H, W)\)
eps – a value added to the denominator for numerical stability. Default: 1e-5
momentum – the value used for the running_mean and running_var computation. Default: 0.1
affine – a boolean value that when set to
True
, this module has learnable affine parameters, initialized the same way as done for batch normalization. Default: True
.track_running_stats – a boolean value that when set to
True
, this module tracks the running mean and variance, and when set to False
, this module does not track such statistics and always uses batch statistics in both training and eval modes. Default: True
-
cls_to_become
¶ alias of
InstanceNorm3d
-
class
borch.nn.
LazyLinear
(out_features: int, bias: bool = True, device=None, dtype=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.A
torch.nn.Linear
module where in_features is inferred.In this module, the weight and bias are of
torch.nn.UninitializedParameter
class. They will be initialized after the first call toforward
is done and the module will become a regulartorch.nn.Linear
module. Thein_features
argument of theLinear
is inferred from theinput.shape[-1]
.Check the
torch.nn.modules.lazy.LazyModuleMixin
for further documentation on lazy modules and their limitations.- Args:
out_features: size of each output sample bias: If set to
False
, the layer will not learn an additive bias.Default:
True
- Attributes:
- weight: the learnable weights of the module of shape
\((\text{out\_features}, \text{in\_features})\). The values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where \(k = \frac{1}{\text{in\_features}}\)
- bias: the learnable bias of the module of shape \((\text{out\_features})\).
If
bias
isTrue
, the values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\text{in\_features}}\)
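The entry above has no usage snippet; here is a minimal sketch of the inference of `in_features`, shown with `torch.nn` (which `borch.nn` wraps as a near drop-in replacement):

```python
import torch
import torch.nn as nn  # borch.nn is intended as a drop-in replacement

layer = nn.LazyLinear(out_features=30)  # in_features deliberately omitted
x = torch.randn(128, 20)
y = layer(x)  # first forward call: in_features inferred from x.shape[-1] == 20

print(layer.weight.shape)  # torch.Size([30, 20])
print(y.shape)             # torch.Size([128, 30])
```

After this call the module has become an ordinary `Linear(20, 30)` layer.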
-
class
borch.nn.
LeakyReLU
(negative_slope: float = 0.01, inplace: bool = False)¶ Applies the element-wise function:
\[\text{LeakyReLU}(x) = \max(0, x) + \text{negative\_slope} * \min(0, x)\]or
\[\begin{split}\text{LeakyRELU}(x) = \begin{cases} x, & \text{ if } x \geq 0 \\ \text{negative\_slope} \times x, & \text{ otherwise } \end{cases}\end{split}\]- Parameters
negative_slope – Controls the angle of the negative slope. Default: 1e-2
inplace – can optionally do the operation in-place. Default:
False
- Shape:
Input: \((*)\) where \(*\) means any number of additional dimensions
Output: \((*)\), same shape as the input
Examples:
>>> m = nn.LeakyReLU(0.1) >>> input = torch.randn(2) >>> output = m(input)
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Linear
(in_features: int, out_features: int, bias: bool = True, device=None, dtype=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.Applies a linear transformation to the incoming data: \(y = xA^T + b\)
This module supports TensorFloat32.
- Args:
in_features: size of each input sample out_features: size of each output sample bias: If set to
False
, the layer will not learn an additive bias.Default:
True
- Shape:
Input: \((*, H_{in})\) where \(*\) means any number of dimensions including none and \(H_{in} = \text{in\_features}\).
Output: \((*, H_{out})\) where all but the last dimension are the same shape as the input and \(H_{out} = \text{out\_features}\).
- Attributes:
- weight: the learnable weights of the module of shape
\((\text{out\_features}, \text{in\_features})\). The values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where \(k = \frac{1}{\text{in\_features}}\)
- bias: the learnable bias of the module of shape \((\text{out\_features})\).
If
bias
isTrue
, the values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\text{in\_features}}\)
Examples:
>>> m = nn.Linear(20, 30) >>> input = torch.randn(128, 20) >>> output = m(input) >>> print(output.size()) torch.Size([128, 30])
-
class
borch.nn.
LocalResponseNorm
(size: int, alpha: float = 0.0001, beta: float = 0.75, k: float = 1.0)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.- Applies local response normalization over an input signal composed
of several input planes, where channels occupy the second dimension. Applies normalization across channels.
\[b_{c} = a_{c}\left(k + \frac{\alpha}{n} \sum_{c'=\max(0, c-n/2)}^{\min(N-1,c+n/2)}a_{c'}^2\right)^{-\beta}\]- Args:
size: amount of neighbouring channels used for normalization alpha: multiplicative factor. Default: 0.0001 beta: exponent. Default: 0.75 k: additive factor. Default: 1
- Shape:
Input: \((N, C, *)\)
Output: \((N, C, *)\) (same shape as input)
Examples:
>>> lrn = nn.LocalResponseNorm(2) >>> signal_2d = torch.randn(32, 5, 24, 24) >>> signal_4d = torch.randn(16, 5, 7, 7, 7, 7) >>> output_2d = lrn(signal_2d) >>> output_4d = lrn(signal_4d)
-
class
borch.nn.
LogSigmoid
¶ Applies the element-wise function:
\[\text{LogSigmoid}(x) = \log\left(\frac{ 1 }{ 1 + \exp(-x)}\right)\]- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.LogSigmoid() >>> input = torch.randn(2) >>> output = m(input)
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
LogSoftmax
(dim: Optional[int] = None)¶ Applies the \(\log(\text{Softmax}(x))\) function to an n-dimensional input Tensor. The LogSoftmax formulation can be simplified as:
\[\text{LogSoftmax}(x_{i}) = \log\left(\frac{\exp(x_i) }{ \sum_j \exp(x_j)} \right)\]- Shape:
Input: \((*)\) where \(*\) means any number of additional dimensions
Output: \((*)\), same shape as the input
- Parameters
dim (int) – A dimension along which LogSoftmax will be computed.
- Returns
a Tensor of the same dimension and shape as the input with values in the range [-inf, 0)
Examples:
>>> m = nn.LogSoftmax() >>> input = torch.randn(2, 3) >>> output = m(input)
-
extra_repr
()¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
MSELoss
(size_average=None, reduce=None, reduction: str = 'mean')¶ Creates a criterion that measures the mean squared error (squared L2 norm) between each element in the input \(x\) and target \(y\).
The unreduced (i.e. with
reduction
set to'none'
) loss can be described as:\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = \left( x_n - y_n \right)^2,\]where \(N\) is the batch size. If
reduction
is not'none'
(default'mean'
), then:\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]\(x\) and \(y\) are tensors of arbitrary shapes with a total of \(n\) elements each.
The mean operation still operates over all the elements, and divides by \(n\).
The division by \(n\) can be avoided if one sets
reduction = 'sum'
.- Parameters
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Target: \((*)\), same shape as the input.
Examples:
>>> loss = nn.MSELoss() >>> input = torch.randn(3, 5, requires_grad=True) >>> target = torch.randn(3, 5) >>> output = loss(input, target) >>> output.backward()
-
forward
(input: torch.Tensor, target: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
MarginRankingLoss
(margin: float = 0.0, size_average=None, reduce=None, reduction: str = 'mean')¶ Creates a criterion that measures the loss given inputs \(x1\), \(x2\), two 1D mini-batch Tensors, and a label 1D mini-batch tensor \(y\) (containing 1 or -1).
If \(y = 1\) then it is assumed that the first input should be ranked higher (have a larger value) than the second input, and vice-versa for \(y = -1\).
The loss function for each pair of samples in the mini-batch is:
\[\text{loss}(x1, x2, y) = \max(0, -y * (x1 - x2) + \text{margin})\]- Parameters
margin (float, optional) – Has a default value of \(0\).
size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
- Shape:
Input1: \((N)\) where N is the batch size.
Input2: \((N)\), same shape as the Input1.
Target: \((N)\), same shape as the inputs.
Output: scalar. If
reduction
is'none'
, then \((N)\).
Examples:
>>> loss = nn.MarginRankingLoss() >>> input1 = torch.randn(3, requires_grad=True) >>> input2 = torch.randn(3, requires_grad=True) >>> target = torch.randn(3).sign() >>> output = loss(input1, input2, target) >>> output.backward()
-
forward
(input1: torch.Tensor, input2: torch.Tensor, target: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
MaxPool1d
(kernel_size: Union[int, Tuple[int, ...]], stride: Union[int, Tuple[int, ...], None] = None, padding: Union[int, Tuple[int, ...]] = 0, dilation: Union[int, Tuple[int, ...]] = 1, return_indices: bool = False, ceil_mode: bool = False)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.- Applies a 1D max pooling over an input signal composed of several input
planes.
In the simplest case, the output value of the layer with input size \((N, C, L)\) and output \((N, C, L_{out})\) can be precisely described as:
\[out(N_i, C_j, k) = \max_{m=0, \ldots, \text{kernel\_size} - 1} input(N_i, C_j, stride \times k + m)\]If
padding
is non-zero, then the input is implicitly padded with negative infinity on both sides forpadding
number of points.dilation
is the stride between the elements within the sliding window. This link has a nice visualization of the pooling parameters.- Note:
When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored.
- Args:
kernel_size: The size of the sliding window, must be > 0. stride: The stride of the sliding window, must be > 0. Default value is
kernel_size
. padding: Implicit negative infinity padding to be added on both sides, must be >= 0 and <= kernel_size / 2. dilation: The stride between elements within a sliding window, must be > 0. return_indices: IfTrue
, will return the argmax along with the max values.Useful for
torch.nn.MaxUnpool1d
later- ceil_mode: If
True
, will use ceil instead of floor to compute the output shape. This ensures that every element in the input tensor is covered by a sliding window.
- ceil_mode: If
- Shape:
Input: \((N, C, L_{in})\) or \((C, L_{in})\).
Output: \((N, C, L_{out})\) or \((C, L_{out})\), where
\[L_{out} = \left\lfloor \frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernel\_size} - 1) - 1}{\text{stride}} + 1\right\rfloor\]
Examples:
>>> # pool of size=3, stride=2 >>> m = nn.MaxPool1d(3, stride=2) >>> input = torch.randn(20, 16, 50) >>> output = m(input)
-
class
borch.nn.
MaxPool2d
(kernel_size: Union[int, Tuple[int, ...]], stride: Union[int, Tuple[int, ...], None] = None, padding: Union[int, Tuple[int, ...]] = 0, dilation: Union[int, Tuple[int, ...]] = 1, return_indices: bool = False, ceil_mode: bool = False)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.- Applies a 2D max pooling over an input signal composed of several input
planes.
In the simplest case, the output value of the layer with input size \((N, C, H, W)\), output \((N, C, H_{out}, W_{out})\) and
kernel_size
\((kH, kW)\) can be precisely described as:\[\begin{split}\begin{aligned} out(N_i, C_j, h, w) ={} & \max_{m=0, \ldots, kH-1} \max_{n=0, \ldots, kW-1} \\ & \text{input}(N_i, C_j, \text{stride[0]} \times h + m, \text{stride[1]} \times w + n) \end{aligned}\end{split}\]If
padding
is non-zero, then the input is implicitly padded with negative infinity on both sides forpadding
number of points.dilation
controls the spacing between the kernel points. It is harder to describe, but this link has a nice visualization of whatdilation
does.- Note:
When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored.
The parameters
kernel_size
,stride
,padding
,dilation
can either be:a single
int
– in which case the same value is used for the height and width dimensiona
tuple
of two ints – in which case, the first int is used for the height dimension, and the second int for the width dimension
- Args:
kernel_size: the size of the window to take a max over stride: the stride of the window. Default value is
kernel_size
padding: implicit zero padding to be added on both sides dilation: a parameter that controls the stride of elements in the window return_indices: ifTrue
, will return the max indices along with the outputs.Useful for
torch.nn.MaxUnpool2d
laterceil_mode: when True, will use ceil instead of floor to compute the output shape
- Shape:
Input: \((N, C, H_{in}, W_{in})\) or \((C, H_{in}, W_{in})\)
Output: \((N, C, H_{out}, W_{out})\) or \((C, H_{out}, W_{out})\), where
\[H_{out} = \left\lfloor\frac{H_{in} + 2 * \text{padding[0]} - \text{dilation[0]} \times (\text{kernel\_size[0]} - 1) - 1}{\text{stride[0]}} + 1\right\rfloor\]\[W_{out} = \left\lfloor\frac{W_{in} + 2 * \text{padding[1]} - \text{dilation[1]} \times (\text{kernel\_size[1]} - 1) - 1}{\text{stride[1]}} + 1\right\rfloor\]
Examples:
>>> # pool of square window of size=3, stride=2 >>> m = nn.MaxPool2d(3, stride=2) >>> # pool of non-square window >>> m = nn.MaxPool2d((3, 2), stride=(2, 1)) >>> input = torch.randn(20, 16, 50, 32) >>> output = m(input)
-
class
borch.nn.
MaxPool3d
(kernel_size: Union[int, Tuple[int, ...]], stride: Union[int, Tuple[int, ...], None] = None, padding: Union[int, Tuple[int, ...]] = 0, dilation: Union[int, Tuple[int, ...]] = 1, return_indices: bool = False, ceil_mode: bool = False)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.- Applies a 3D max pooling over an input signal composed of several input
planes.
In the simplest case, the output value of the layer with input size \((N, C, D, H, W)\), output \((N, C, D_{out}, H_{out}, W_{out})\) and
kernel_size
\((kD, kH, kW)\) can be precisely described as:\[\begin{split}\begin{aligned} \text{out}(N_i, C_j, d, h, w) ={} & \max_{k=0, \ldots, kD-1} \max_{m=0, \ldots, kH-1} \max_{n=0, \ldots, kW-1} \\ & \text{input}(N_i, C_j, \text{stride[0]} \times d + k, \text{stride[1]} \times h + m, \text{stride[2]} \times w + n) \end{aligned}\end{split}\]If
padding
is non-zero, then the input is implicitly padded with negative infinity on both sides forpadding
number of points.dilation
controls the spacing between the kernel points. It is harder to describe, but this link has a nice visualization of whatdilation
does.- Note:
When ceil_mode=True, sliding windows are allowed to go off-bounds if they start within the left padding or the input. Sliding windows that would start in the right padded region are ignored.
The parameters
kernel_size
,stride
,padding
,dilation
can either be:a single
int
– in which case the same value is used for the depth, height and width dimensiona
tuple
of three ints – in which case, the first int is used for the depth dimension, the second int for the height dimension and the third int for the width dimension
- Args:
kernel_size: the size of the window to take a max over stride: the stride of the window. Default value is
kernel_size
padding: implicit zero padding to be added on all three sides dilation: a parameter that controls the stride of elements in the window return_indices: ifTrue
, will return the max indices along with the outputs.Useful for
torch.nn.MaxUnpool3d
laterceil_mode: when True, will use ceil instead of floor to compute the output shape
- Shape:
Input: \((N, C, D_{in}, H_{in}, W_{in})\) or \((C, D_{in}, H_{in}, W_{in})\).
Output: \((N, C, D_{out}, H_{out}, W_{out})\) or \((C, D_{out}, H_{out}, W_{out})\), where
\[D_{out} = \left\lfloor\frac{D_{in} + 2 \times \text{padding}[0] - \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor\]\[H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[1] - \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor\]\[W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[2] - \text{dilation}[2] \times (\text{kernel\_size}[2] - 1) - 1}{\text{stride}[2]} + 1\right\rfloor\]
Examples:
>>> # pool of square window of size=3, stride=2 >>> m = nn.MaxPool3d(3, stride=2) >>> # pool of non-square window >>> m = nn.MaxPool3d((3, 2, 2), stride=(2, 1, 2)) >>> input = torch.randn(20, 16, 50, 44, 31) >>> output = m(input)
-
class
borch.nn.
MaxUnpool1d
(kernel_size: Union[int, Tuple[int]], stride: Union[int, Tuple[int], None] = None, padding: Union[int, Tuple[int]] = 0)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.Computes a partial inverse of
MaxPool1d
.MaxPool1d
is not fully invertible, since the non-maximal values are lost.MaxUnpool1d
takes in as input the output ofMaxPool1d
including the indices of the maximal values and computes a partial inverse in which all non-maximal values are set to zero.Note
MaxPool1d
can map several input sizes to the same output sizes. Hence, the inversion process can get ambiguous. To accommodate this, you can provide the needed output size as an additional argumentoutput_size
in the forward call. See the Inputs and Example below.- Args:
kernel_size (int or tuple): Size of the max pooling window. stride (int or tuple): Stride of the max pooling window.
It is set to
kernel_size
by default.padding (int or tuple): Padding that was added to the input
- Inputs:
input: the input Tensor to invert
indices: the indices given out by
MaxPool1d
output_size (optional): the targeted output size
- Shape:
Input: \((N, C, H_{in})\) or \((C, H_{in})\).
Output: \((N, C, H_{out})\) or \((C, H_{out})\), where
\[H_{out} = (H_{in} - 1) \times \text{stride}[0] - 2 \times \text{padding}[0] + \text{kernel\_size}[0]\]or as given by
output_size
in the call operator
Example:
>>> pool = nn.MaxPool1d(2, stride=2, return_indices=True) >>> unpool = nn.MaxUnpool1d(2, stride=2) >>> input = torch.tensor([[[1., 2, 3, 4, 5, 6, 7, 8]]]) >>> output, indices = pool(input) >>> unpool(output, indices) tensor([[[ 0., 2., 0., 4., 0., 6., 0., 8.]]]) >>> # Example showcasing the use of output_size >>> input = torch.tensor([[[1., 2, 3, 4, 5, 6, 7, 8, 9]]]) >>> output, indices = pool(input) >>> unpool(output, indices, output_size=input.size()) tensor([[[ 0., 2., 0., 4., 0., 6., 0., 8., 0.]]]) >>> unpool(output, indices) tensor([[[ 0., 2., 0., 4., 0., 6., 0., 8.]]])
-
class
borch.nn.
MaxUnpool2d
(kernel_size: Union[int, Tuple[int, int]], stride: Union[int, Tuple[int, int], None] = None, padding: Union[int, Tuple[int, int]] = 0)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.Computes a partial inverse of
MaxPool2d
.MaxPool2d
is not fully invertible, since the non-maximal values are lost.MaxUnpool2d
takes in as input the output ofMaxPool2d
including the indices of the maximal values and computes a partial inverse in which all non-maximal values are set to zero.Note
MaxPool2d
can map several input sizes to the same output sizes. Hence, the inversion process can get ambiguous. To accommodate this, you can provide the needed output size as an additional argumentoutput_size
in the forward call. See the Inputs and Example below.- Args:
kernel_size (int or tuple): Size of the max pooling window. stride (int or tuple): Stride of the max pooling window.
It is set to
kernel_size
by default.padding (int or tuple): Padding that was added to the input
- Inputs:
input: the input Tensor to invert
indices: the indices given out by
MaxPool2d
output_size (optional): the targeted output size
- Shape:
Input: \((N, C, H_{in}, W_{in})\) or \((C, H_{in}, W_{in})\).
Output: \((N, C, H_{out}, W_{out})\) or \((C, H_{out}, W_{out})\), where
\[H_{out} = (H_{in} - 1) \times \text{stride[0]} - 2 \times \text{padding[0]} + \text{kernel\_size[0]}\]\[W_{out} = (W_{in} - 1) \times \text{stride[1]} - 2 \times \text{padding[1]} + \text{kernel\_size[1]}\]or as given by
output_size
in the call operator
Example:
>>> pool = nn.MaxPool2d(2, stride=2, return_indices=True) >>> unpool = nn.MaxUnpool2d(2, stride=2) >>> input = torch.tensor([[[[ 1., 2, 3, 4], [ 5, 6, 7, 8], [ 9, 10, 11, 12], [13, 14, 15, 16]]]]) >>> output, indices = pool(input) >>> unpool(output, indices) tensor([[[[ 0., 0., 0., 0.], [ 0., 6., 0., 8.], [ 0., 0., 0., 0.], [ 0., 14., 0., 16.]]]]) >>> # specify a different output size than input size >>> unpool(output, indices, output_size=torch.Size([1, 1, 5, 5])) tensor([[[[ 0., 0., 0., 0., 0.], [ 6., 0., 8., 0., 0.], [ 0., 0., 0., 14., 0.], [ 16., 0., 0., 0., 0.], [ 0., 0., 0., 0., 0.]]]])
-
class
borch.nn.
MaxUnpool3d
(kernel_size: Union[int, Tuple[int, int, int]], stride: Union[int, Tuple[int, int, int], None] = None, padding: Union[int, Tuple[int, int, int]] = 0)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.Computes a partial inverse of
MaxPool3d
.MaxPool3d
is not fully invertible, since the non-maximal values are lost.MaxUnpool3d
takes in as input the output ofMaxPool3d
including the indices of the maximal values and computes a partial inverse in which all non-maximal values are set to zero.Note
MaxPool3d
can map several input sizes to the same output sizes. Hence, the inversion process can get ambiguous. To accommodate this, you can provide the needed output size as an additional argumentoutput_size
in the forward call. See the Inputs section below.- Args:
kernel_size (int or tuple): Size of the max pooling window. stride (int or tuple): Stride of the max pooling window.
It is set to
kernel_size
by default.padding (int or tuple): Padding that was added to the input
- Inputs:
input: the input Tensor to invert
indices: the indices given out by
MaxPool3d
output_size (optional): the targeted output size
- Shape:
Input: \((N, C, D_{in}, H_{in}, W_{in})\) or \((C, D_{in}, H_{in}, W_{in})\).
Output: \((N, C, D_{out}, H_{out}, W_{out})\) or \((C, D_{out}, H_{out}, W_{out})\), where
\[D_{out} = (D_{in} - 1) \times \text{stride[0]} - 2 \times \text{padding[0]} + \text{kernel\_size[0]}\]\[H_{out} = (H_{in} - 1) \times \text{stride[1]} - 2 \times \text{padding[1]} + \text{kernel\_size[1]}\]\[W_{out} = (W_{in} - 1) \times \text{stride[2]} - 2 \times \text{padding[2]} + \text{kernel\_size[2]}\]or as given by
output_size
in the call operator
Example:
>>> # pool of square window of size=3, stride=2 >>> pool = nn.MaxPool3d(3, stride=2, return_indices=True) >>> unpool = nn.MaxUnpool3d(3, stride=2) >>> output, indices = pool(torch.randn(20, 16, 51, 33, 15)) >>> unpooled_output = unpool(output, indices) >>> unpooled_output.size() torch.Size([20, 16, 51, 33, 15])
-
class
borch.nn.
Mish
(inplace: bool = False)¶ Applies the Mish function, element-wise. Mish: A Self Regularized Non-Monotonic Neural Activation Function.
\[\text{Mish}(x) = x * \text{Tanh}(\text{Softplus}(x))\]- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.Mish() >>> input = torch.randn(2) >>> output = m(input)
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Module
(posterior=None)¶ Acts as a
torch.nn.Module
but handlesborch.RandomVariable
s correctly.It can be used in just the same way as in
torch
>>> import torch >>> import borch >>> class MLP(Module): ... def __init__(self, in_size, out_size): ... super().__init__() ... self.fc1 = borch.nn.Linear(in_size, in_size*2) ... self.relu = borch.nn.ReLU() ... self.fc2 = borch.nn.Linear(in_size*2, out_size) ... ... def forward(self, x): ... x = self.fc1(x) ... x = self.relu(x) ... x = self.fc2(x) ... return x >>> mlp = MLP(2, 2) >>> out = mlp(torch.randn(3, 2))
It can be mixed with torch modules as one sees fit.
>>> class MLP2(Module): ... def __init__(self, in_size, out_size): ... super().__init__() ... self.fc1 = torch.nn.Linear(in_size, in_size*2) ... self.relu = torch.nn.ReLU() ... self.fc2 = borch.nn.Linear(in_size*2, out_size) ... ... def forward(self, x): ... x = self.fc1(x) ... x = self.relu(x) ... x = self.fc2(x) ... return x >>> out = MLP2(2, 2)(torch.randn(3, 2))
The more interesting case is when one starts to involve
borch.RandomVariable
s

>>> from borch import distributions as dist
>>> from borch.posterior import Normal
>>> class MyModule(Module):
...     def __init__(self, w_size):
...         super().__init__(posterior=Normal())
...         self.weight = dist.Normal(torch.ones(w_size), torch.ones(w_size))
...
...     def forward(self, x):
...         return x.matmul(self.weight)
>>> my_module = MyModule(w_size=(4,))

- Parameters
posterior – A
borch.Posterior
subclass that handles how the inference is performed.
-
get
(name)¶ Standard getattr with no custom overloading
-
property
internal_modules
¶ Get the internal modules borch uses, like prior, posterior, observed
-
observe
(*args, **kwargs)¶ Set/revert any random variables on the current posterior to be observed/latent.
The behaviour of an observed variable means that any
RandomVariable
objects assigned will be observed at the stated value (if the name matches a previously observed variable).
Note
Calling
observe()
will overwrite all
observe()
calls made to ANY random variable attached to the module, even if it has a different name. One can still call
observe
on
RandomVariable
s in the forward after the
observe
call is made on the module.
- Parameters
args – If
None
, all observed behaviour will be forgotten.
kwargs – Any named arguments will be set to observed given that the value is a tensor, or the observed behaviour will be forgotten if set to
None
.
Examples
>>> import torch
>>> from torch import Tensor
>>> from borch.distributions import Normal
>>> from borch.posterior import Automatic
>>>
>>> model = Module()
>>> rv = Normal(Tensor([1.]), Tensor([1.]))
>>> model.observe(rv_one=Tensor([100.]))
>>> model.rv_one = rv  # rv_one has been observed
>>> model.rv_one
tensor([100.])
>>> model.observe(None)  # stop observing rv_one, the value is no
>>> # longer at 100.
>>> sample(model)
>>> torch.equal(model.rv_one, Tensor([100.]))
False
-
class
borch.nn.
ModuleDict
(modules: Optional[Mapping[str, torch.nn.modules.module.Module]] = None)¶ Holds submodules in a dictionary.
ModuleDict
can be indexed like a regular Python dictionary, but modules it contains are properly registered, and will be visible by all
Module
methods.
ModuleDict
is an ordered dictionary that respects the order of insertion, and, in
update()
, the order of the merged
OrderedDict
,
dict
(starting from Python 3.6) or another
ModuleDict
(the argument to
update()
).
Note that
update()
with other unordered mapping types (e.g., Python’s plain
dict
before Python version 3.6) does not preserve the order of the merged mapping.
- Parameters
modules (iterable, optional) – a mapping (dictionary) of (string: module) or an iterable of key-value pairs of type (string, module)
Example:
class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.choices = nn.ModuleDict({
                'conv': nn.Conv2d(10, 10, 3),
                'pool': nn.MaxPool2d(3)
        })
        self.activations = nn.ModuleDict([
                ['lrelu', nn.LeakyReLU()],
                ['prelu', nn.PReLU()]
        ])

    def forward(self, x, choice, act):
        x = self.choices[choice](x)
        x = self.activations[act](x)
        return x
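A minimal runnable sketch of the same idea, using plain `torch.nn` (borch.nn mirrors this interface); the module name and tensor sizes below are illustrative, not from the docs:

```python
import torch
import torch.nn as nn

class Net(nn.Module):  # illustrative module
    def __init__(self):
        super().__init__()
        self.choices = nn.ModuleDict({
            'conv': nn.Conv2d(10, 10, 3),
            'pool': nn.MaxPool2d(3),
        })

    def forward(self, x, choice):
        return self.choices[choice](x)

net = Net()
# values in the ModuleDict are registered submodules, so their
# parameters appear under the 'choices.' prefix
print(sorted(name for name, _ in net.named_parameters()))
# ['choices.conv.bias', 'choices.conv.weight']
out = net(torch.randn(1, 10, 8, 8), 'conv')
print(out.shape)  # torch.Size([1, 10, 6, 6])
```

Had the submodules been stored in a plain `dict`, their parameters would be invisible to `named_parameters()` and to the optimizer.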
-
clear
() → None¶ Remove all items from the ModuleDict.
-
items
() → Iterable[Tuple[str, torch.nn.modules.module.Module]]¶ Return an iterable of the ModuleDict key/value pairs.
-
keys
() → Iterable[str]¶ Return an iterable of the ModuleDict keys.
-
pop
(key: str) → torch.nn.modules.module.Module¶ Remove key from the ModuleDict and return its module.
- Parameters
key (string) – key to pop from the ModuleDict
-
update
(modules: Mapping[str, torch.nn.modules.module.Module]) → None¶ Update the
ModuleDict
with the key-value pairs from a mapping or an iterable, overwriting existing keys.
Note
If
modules
is an
OrderedDict
, a
ModuleDict
, or an iterable of key-value pairs, the order of new elements in it is preserved.
- Parameters
modules (iterable) – a mapping (dictionary) from string to
Module
, or an iterable of key-value pairs of type (string,Module
)
-
values
() → Iterable[torch.nn.modules.module.Module]¶ Return an iterable of the ModuleDict values.
-
class
borch.nn.
ModuleList
(modules: Optional[Iterable[torch.nn.modules.module.Module]] = None)¶ Holds submodules in a list.
ModuleList
can be indexed like a regular Python list, but modules it contains are properly registered, and will be visible by all
Module
methods.
- Parameters
modules (iterable, optional) – an iterable of modules to add
Example:
class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.linears = nn.ModuleList([nn.Linear(10, 10) for i in range(10)])

    def forward(self, x):
        # ModuleList can act as an iterable, or be indexed using ints
        for i, l in enumerate(self.linears):
            x = self.linears[i // 2](x) + l(x)
        return x
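The example above can be exercised directly; a self-contained sketch with plain `torch.nn` (borch.nn mirrors this interface), with an illustrative batch size:

```python
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.linears = nn.ModuleList([nn.Linear(10, 10) for _ in range(10)])

    def forward(self, x):
        # ModuleList can be iterated over, or indexed with ints
        for i, l in enumerate(self.linears):
            x = self.linears[i // 2](x) + l(x)
        return x

m = MyModule()
# all ten Linear layers are registered: one weight and one bias each
assert len(list(m.parameters())) == 20
out = m(torch.randn(4, 10))
print(out.shape)  # torch.Size([4, 10])
```

Storing the layers in a plain Python list instead would leave their parameters unregistered.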
-
append
(module: torch.nn.modules.module.Module) → torch.nn.modules.container.ModuleList¶ Appends a given module to the end of the list.
- Parameters
module (nn.Module) – module to append
-
extend
(modules: Iterable[torch.nn.modules.module.Module]) → torch.nn.modules.container.ModuleList¶ Appends modules from a Python iterable to the end of the list.
- Parameters
modules (iterable) – iterable of modules to append
-
class
borch.nn.
MultiLabelMarginLoss
(size_average=None, reduce=None, reduction: str = 'mean')¶ Creates a criterion that optimizes a multi-class multi-classification hinge loss (margin-based loss) between input \(x\) (a 2D mini-batch Tensor) and output \(y\) (which is a 2D Tensor of target class indices). For each sample in the mini-batch:
\[\text{loss}(x, y) = \sum_{ij}\frac{\max(0, 1 - (x[y[j]] - x[i]))}{\text{x.size}(0)}\]where \(x \in \left\{0, \; \cdots , \; \text{x.size}(0) - 1\right\}\), \(y \in \left\{0, \; \cdots , \; \text{y.size}(0) - 1\right\}\), \(0 \leq y[j] \leq \text{x.size}(0)-1\), and \(i \neq y[j]\) for all \(i\) and \(j\).
\(y\) and \(x\) must have the same size.
The criterion only considers a contiguous block of non-negative targets that starts at the front.
This allows for different samples to have variable amounts of target classes.
- Parameters
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True

reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True

reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
- Shape:
Input: \((C)\) or \((N, C)\) where N is the batch size and C is the number of classes.
Target: \((C)\) or \((N, C)\), label targets padded by -1 ensuring same shape as the input.
Output: scalar. If
reduction
is'none'
, then \((N)\).
Examples:
>>> loss = nn.MultiLabelMarginLoss()
>>> x = torch.FloatTensor([[0.1, 0.2, 0.4, 0.8]])
>>> # for target y, only consider labels 3 and 0, not after label -1
>>> y = torch.LongTensor([[3, 0, -1, 1]])
>>> loss(x, y)
>>> # 0.25 * ((1-(0.1-0.2)) + (1-(0.1-0.4)) + (1-(0.8-0.2)) + (1-(0.8-0.4)))
tensor(0.8500)
-
forward
(input: torch.Tensor, target: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
MultiLabelSoftMarginLoss
(weight: Optional[torch.Tensor] = None, size_average=None, reduce=None, reduction: str = 'mean')¶ Creates a criterion that optimizes a multi-label one-versus-all loss based on max-entropy, between input \(x\) and target \(y\) of size \((N, C)\). For each sample in the minibatch:
\[loss(x, y) = - \frac{1}{C} * \sum_i y[i] * \log((1 + \exp(-x[i]))^{-1}) + (1-y[i]) * \log\left(\frac{\exp(-x[i])}{(1 + \exp(-x[i]))}\right)\]where \(i \in \left\{0, \; \cdots , \; \text{x.nElement}() - 1\right\}\), \(y[i] \in \left\{0, \; 1\right\}\).
- Parameters
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is treated as if having all ones.
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True

reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True

reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
- Shape:
Input: \((N, C)\) where N is the batch size and C is the number of classes.
Target: \((N, C)\), label targets padded by -1 ensuring same shape as the input.
Output: scalar. If
reduction
is'none'
, then \((N)\).
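The class docstring gives no usage example, so here is a small illustrative sketch; the logits and the multi-hot target (one row per sample, entries in \(\{0, 1\}\) as the formula above requires) are made-up values:

```python
import torch
import torch.nn as nn

loss = nn.MultiLabelSoftMarginLoss()
# one sample, three classes; the multi-hot target marks
# classes 0 and 2 as present, class 1 as absent
x = torch.tensor([[2.0, -1.0, 0.5]])   # raw scores (logits)
y = torch.tensor([[1.0, 0.0, 1.0]])    # multi-hot targets
out = loss(x, y)
# with the default reduction='mean', the result is a positive scalar
print(out.dim(), out.item() > 0)  # 0 True
```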
-
forward
(input: torch.Tensor, target: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
MultiMarginLoss
(p: int = 1, margin: float = 1.0, weight: Optional[torch.Tensor] = None, size_average=None, reduce=None, reduction: str = 'mean')¶ Creates a criterion that optimizes a multi-class classification hinge loss (margin-based loss) between input \(x\) (a 2D mini-batch Tensor) and output \(y\) (which is a 1D tensor of target class indices, \(0 \leq y \leq \text{x.size}(1)-1\)):
For each mini-batch sample, the loss in terms of the 1D input \(x\) and scalar output \(y\) is:
\[\text{loss}(x, y) = \frac{\sum_i \max(0, \text{margin} - x[y] + x[i])^p}{\text{x.size}(0)}\]where \(x \in \left\{0, \; \cdots , \; \text{x.size}(0) - 1\right\}\) and \(i \neq y\).
Optionally, you can give non-equal weighting on the classes by passing a 1D
weight
tensor into the constructor.
The loss function then becomes:
\[\text{loss}(x, y) = \frac{\sum_i \max(0, w[y] * (\text{margin} - x[y] + x[i]))^p}{\text{x.size}(0)}\]- Parameters
p (int, optional) – Has a default value of \(1\). \(1\) and \(2\) are the only supported values.
margin (float, optional) – Has a default value of \(1\).
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is treated as if having all ones.
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True

reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True

reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
- Shape:
Input: \((N, C)\) or \((C)\), where \(N\) is the batch size and \(C\) is the number of classes.
Target: \((N)\) or \(()\), where each value is \(0 \leq \text{targets}[i] \leq C-1\).
Output: scalar. If
reduction
is'none'
, then same shape as the target.
Examples:
>>> loss = nn.MultiMarginLoss()
>>> x = torch.tensor([[0.1, 0.2, 0.4, 0.8]])
>>> y = torch.tensor([3])
>>> loss(x, y)
>>> # 0.25 * ((1-(0.8-0.1)) + (1-(0.8-0.2)) + (1-(0.8-0.4)))
tensor(0.3250)
-
forward
(input: torch.Tensor, target: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
MultiheadAttention
(embed_dim, num_heads, dropout=0.0, bias=True, add_bias_kv=False, add_zero_attn=False, kdim=None, vdim=None, batch_first=False, device=None, dtype=None)¶ Allows the model to jointly attend to information from different representation subspaces. See Attention Is All You Need.
\[\text{MultiHead}(Q, K, V) = \text{Concat}(head_1,\dots,head_h)W^O\]where \(head_i = \text{Attention}(QW_i^Q, KW_i^K, VW_i^V)\).
- Parameters
embed_dim – Total dimension of the model.
num_heads – Number of parallel attention heads. Note that
embed_dim
will be split acrossnum_heads
(i.e. each head will have dimensionembed_dim // num_heads
).dropout – Dropout probability on
attn_output_weights
. Default:0.0
(no dropout).bias – If specified, adds bias to input / output projection layers. Default:
True
.add_bias_kv – If specified, adds bias to the key and value sequences at dim=0. Default:
False
.add_zero_attn – If specified, adds a new batch of zeros to the key and value sequences at dim=1. Default:
False
.kdim – Total number of features for keys. Default:
None
(useskdim=embed_dim
).vdim – Total number of features for values. Default:
None
(usesvdim=embed_dim
).batch_first – If
True
, then the input and output tensors are provided as (batch, seq, feature). Default:False
(seq, batch, feature).
Examples:
>>> multihead_attn = nn.MultiheadAttention(embed_dim, num_heads)
>>> attn_output, attn_output_weights = multihead_attn(query, key, value)
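The snippet above leaves `embed_dim`, `num_heads`, `query`, `key`, and `value` undefined; a self-contained sketch with illustrative sizes, using the default (seq, batch, feature) layout described under `forward`:

```python
import torch
import torch.nn as nn

embed_dim, num_heads = 16, 4   # embed_dim must be divisible by num_heads
L, S, N = 5, 7, 2              # target length, source length, batch size

multihead_attn = nn.MultiheadAttention(embed_dim, num_heads)
query = torch.randn(L, N, embed_dim)   # (seq, batch, feature): batch_first=False
key = torch.randn(S, N, embed_dim)
value = torch.randn(S, N, embed_dim)

attn_output, attn_output_weights = multihead_attn(query, key, value)
print(attn_output.shape)          # torch.Size([5, 2, 16])  -> (L, N, E)
print(attn_output_weights.shape)  # torch.Size([2, 5, 7])   -> (N, L, S)
```

The output shapes match the `forward` documentation below: the attention output keeps the query's layout, while the weights are batch-first regardless of `batch_first`.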
-
forward
(query: torch.Tensor, key: torch.Tensor, value: torch.Tensor, key_padding_mask: Optional[torch.Tensor] = None, need_weights: bool = True, attn_mask: Optional[torch.Tensor] = None) → Tuple[torch.Tensor, Optional[torch.Tensor]]¶ - Parameters
query – Query embeddings of shape \((L, N, E_q)\) when
batch_first=False
or \((N, L, E_q)\) whenbatch_first=True
, where \(L\) is the target sequence length, \(N\) is the batch size, and \(E_q\) is the query embedding dimensionembed_dim
. Queries are compared against key-value pairs to produce the output. See “Attention Is All You Need” for more details.key – Key embeddings of shape \((S, N, E_k)\) when
batch_first=False
or \((N, S, E_k)\) whenbatch_first=True
, where \(S\) is the source sequence length, \(N\) is the batch size, and \(E_k\) is the key embedding dimensionkdim
. See “Attention Is All You Need” for more details.value – Value embeddings of shape \((S, N, E_v)\) when
batch_first=False
or \((N, S, E_v)\) whenbatch_first=True
, where \(S\) is the source sequence length, \(N\) is the batch size, and \(E_v\) is the value embedding dimensionvdim
. See “Attention Is All You Need” for more details.key_padding_mask – If specified, a mask of shape \((N, S)\) indicating which elements within
key
to ignore for the purpose of attention (i.e. treat as “padding”). Binary and byte masks are supported. For a binary mask, aTrue
value indicates that the correspondingkey
value will be ignored for the purpose of attention. For a byte mask, a non-zero value indicates that the correspondingkey
value will be ignored.need_weights – If specified, returns
attn_output_weights
in addition toattn_outputs
. Default:True
.attn_mask – If specified, a 2D or 3D mask preventing attention to certain positions. Must be of shape \((L, S)\) or \((N\cdot\text{num\_heads}, L, S)\), where \(N\) is the batch size, \(L\) is the target sequence length, and \(S\) is the source sequence length. A 2D mask will be broadcasted across the batch while a 3D mask allows for a different mask for each entry in the batch. Binary, byte, and float masks are supported. For a binary mask, a
True
value indicates that the corresponding position is not allowed to attend. For a byte mask, a non-zero value indicates that the corresponding position is not allowed to attend. For a float mask, the mask values will be added to the attention weight.
- Outputs:
attn_output - Attention outputs of shape \((L, N, E)\) when
batch_first=False
or \((N, L, E)\) whenbatch_first=True
, where \(L\) is the target sequence length, \(N\) is the batch size, and \(E\) is the embedding dimensionembed_dim
.attn_output_weights - Attention output weights of shape \((N, L, S)\), where \(N\) is the batch size, \(L\) is the target sequence length, and \(S\) is the source sequence length. Only returned when
need_weights=True
.
-
class
borch.nn.
NLLLoss
(weight: Optional[torch.Tensor] = None, size_average=None, ignore_index: int = -100, reduce=None, reduction: str = 'mean')¶ The negative log likelihood loss. It is useful to train a classification problem with C classes.
If provided, the optional argument
weight
should be a 1D Tensor assigning weight to each of the classes. This is particularly useful when you have an unbalanced training set.
The input given through a forward call is expected to contain log-probabilities of each class. input has to be a Tensor of size either \((minibatch, C)\) or \((minibatch, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\) for the K-dimensional case. The latter is useful for higher dimension inputs, such as computing NLL loss per-pixel for 2D images.
Obtaining log-probabilities in a neural network is easily achieved by adding a LogSoftmax layer in the last layer of your network. You may use CrossEntropyLoss instead, if you prefer not to add an extra layer.
The target that this loss expects should be a class index in the range \([0, C-1]\) where C = number of classes; if ignore_index is specified, this loss also accepts this class index (this index may not necessarily be in the class range).
The unreduced (i.e. with
reduction
set to'none'
) loss can be described as:\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_{y_n} x_{n,y_n}, \quad w_{c} = \text{weight}[c] \cdot \mathbb{1}\{c \not= \text{ignore\_index}\},\]where \(x\) is the input, \(y\) is the target, \(w\) is the weight, and \(N\) is the batch size. If
reduction
is not'none'
(default'mean'
), then\[\begin{split}\ell(x, y) = \begin{cases} \sum_{n=1}^N \frac{1}{\sum_{n=1}^N w_{y_n}} l_n, & \text{if reduction} = \text{`mean';}\\ \sum_{n=1}^N l_n, & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]- Parameters
weight (Tensor, optional) – a manual rescaling weight given to each class. If given, it has to be a Tensor of size C. Otherwise, it is treated as if having all ones.
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True

ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over non-ignored targets.

reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True

reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the weighted mean of the output is taken, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
- Shape:
Input: \((N, C)\) or \((C)\), where C = number of classes, or \((N, C, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss.
Target: \((N)\) or \(()\), where each value is \(0 \leq \text{targets}[i] \leq C-1\), or \((N, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss.
Output: If
reduction
is'none'
, shape \((N)\) or \((N, d_1, d_2, ..., d_K)\) with \(K \geq 1\) in the case of K-dimensional loss. Otherwise, scalar.
Examples:
>>> m = nn.LogSoftmax(dim=1)
>>> loss = nn.NLLLoss()
>>> # input is of size N x C = 3 x 5
>>> input = torch.randn(3, 5, requires_grad=True)
>>> # each element in target has to have 0 <= value < C
>>> target = torch.tensor([1, 0, 4])
>>> output = loss(m(input), target)
>>> output.backward()
>>>
>>> # 2D loss example (used, for example, with image inputs)
>>> N, C = 5, 4
>>> loss = nn.NLLLoss()
>>> # input is of size N x C x height x width
>>> data = torch.randn(N, 16, 10, 10)
>>> conv = nn.Conv2d(16, C, (3, 3))
>>> m = nn.LogSoftmax(dim=1)
>>> # each element in target has to have 0 <= value < C
>>> target = torch.empty(N, 8, 8, dtype=torch.long).random_(0, C)
>>> output = loss(m(conv(data)), target)
>>> output.backward()
-
forward
(input: torch.Tensor, target: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
NLLLoss2d
(weight: Optional[torch.Tensor] = None, size_average=None, ignore_index: int = -100, reduce=None, reduction: str = 'mean')¶ Deprecated alias of NLLLoss, which handles the 2D (and general K-dimensional) case directly.
-
class
borch.nn.
PReLU
(num_parameters: int = 1, init: float = 0.25, device=None, dtype=None)¶ Applies the element-wise function:
\[\text{PReLU}(x) = \max(0,x) + a * \min(0,x)\]or
\[\begin{split}\text{PReLU}(x) = \begin{cases} x, & \text{ if } x \geq 0 \\ ax, & \text{ otherwise } \end{cases}\end{split}\]Here \(a\) is a learnable parameter. When called without arguments, nn.PReLU() uses a single parameter \(a\) across all input channels. If called with nn.PReLU(nChannels), a separate \(a\) is used for each input channel.
Note
weight decay should not be used when learning \(a\) for good performance.
Note
Channel dim is the 2nd dim of input. When input has dims < 2, then there is no channel dim and the number of channels = 1.
- Parameters
num_parameters (int) – number of \(a\) to learn. Although it takes an int as input, only two values are legitimate: 1, or the number of channels of the input. Default: 1
init (float) – the initial value of \(a\). Default: 0.25
- Shape:
Input: \((*)\), where \(*\) means any number of additional dimensions.
Output: \((*)\), same shape as the input.
-
weight
¶ the learnable weights of shape (
num_parameters
).- Type
Tensor
Examples:
>>> m = nn.PReLU()
>>> input = torch.randn(2)
>>> output = m(input)
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
PairwiseDistance
(p: float = 2.0, eps: float = 1e-06, keepdim: bool = False)¶ Computes the pairwise distance between vectors \(v_1\), \(v_2\) using the p-norm:
\[\Vert x \Vert _p = \left( \sum_{i=1}^n \vert x_i \vert ^ p \right) ^ {1/p}.\]- Parameters
p (real) – the norm degree. Default: 2
eps (float, optional) – Small value to avoid division by zero. Default: 1e-6
keepdim (bool, optional) – Determines whether or not to keep the vector dimension. Default: False
- Shape:
Input1: \((N, D)\) or \((D)\) where N = batch dimension and D = vector dimension
Input2: \((N, D)\) or \((D)\), same shape as the Input1
- Output: \((N)\) or \(()\) based on input dimension.
If
keepdim
isTrue
, then \((N, 1)\) or \((1)\) based on input dimension.
- Examples::
>>> pdist = nn.PairwiseDistance(p=2)
>>> input1 = torch.randn(100, 128)
>>> input2 = torch.randn(100, 128)
>>> output = pdist(input1, input2)
-
forward
(x1: torch.Tensor, x2: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
ParameterDict
(parameters: Optional[Mapping[str, Parameter]] = None)¶ Holds parameters in a dictionary.
ParameterDict can be indexed like a regular Python dictionary, but parameters it contains are properly registered, and will be visible by all Module methods.
ParameterDict
is an ordered dictionary that respects the order of insertion, and, in
update()
, the order of the merged
OrderedDict
or another
ParameterDict
(the argument to
update()
).
Note that
update()
with other unordered mapping types (e.g., Python’s plain
dict
) does not preserve the order of the merged mapping.
- Parameters
parameters (iterable, optional) – a mapping (dictionary) of (string :
Parameter
) or an iterable of key-value pairs of type (string,Parameter
)
Example:
class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.params = nn.ParameterDict({
                'left': nn.Parameter(torch.randn(5, 10)),
                'right': nn.Parameter(torch.randn(5, 10))
        })

    def forward(self, x, choice):
        x = self.params[choice].mm(x)
        return x
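A runnable version of the example, with an illustrative input size, showing that the dictionary's parameters are registered under the `params.` prefix:

```python
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.params = nn.ParameterDict({
            'left': nn.Parameter(torch.randn(5, 10)),
            'right': nn.Parameter(torch.randn(5, 10)),
        })

    def forward(self, x, choice):
        return self.params[choice].mm(x)

m = MyModule()
# both parameters are registered through the ParameterDict
assert set(dict(m.named_parameters())) == {'params.left', 'params.right'}
out = m(torch.randn(10, 3), 'left')   # (5, 10) @ (10, 3)
print(out.shape)  # torch.Size([5, 3])
```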
-
clear
() → None¶ Remove all items from the ParameterDict.
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
items
() → Iterable[Tuple[str, Parameter]]¶ Return an iterable of the ParameterDict key/value pairs.
-
keys
() → Iterable[str]¶ Return an iterable of the ParameterDict keys.
-
pop
(key: str) → Parameter¶ Remove key from the ParameterDict and return its parameter.
- Parameters
key (string) – key to pop from the ParameterDict
-
update
(parameters: Mapping[str, Parameter]) → None¶ Update the
ParameterDict
with the key-value pairs from a mapping or an iterable, overwriting existing keys.
Note
If
parameters
is an
OrderedDict
, a
ParameterDict
, or an iterable of key-value pairs, the order of new elements in it is preserved.
- Parameters
parameters (iterable) – a mapping (dictionary) from string to
Parameter
, or an iterable of key-value pairs of type (string,Parameter
)
-
values
() → Iterable[Parameter]¶ Return an iterable of the ParameterDict values.
-
class
borch.nn.
ParameterList
(parameters: Optional[Iterable[Parameter]] = None)¶ Holds parameters in a list.
ParameterList
can be indexed like a regular Python list, but parameters it contains are properly registered, and will be visible by all
Module
methods.
- Parameters
parameters (iterable, optional) – an iterable of
Parameter
to add
Example:
class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        self.params = nn.ParameterList([nn.Parameter(torch.randn(10, 10)) for i in range(10)])

    def forward(self, x):
        # ParameterList can act as an iterable, or be indexed using ints
        for i, p in enumerate(self.params):
            x = self.params[i // 2].mm(x) + p.mm(x)
        return x
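The example, made self-contained with an illustrative input, confirming that all ten parameters are registered:

```python
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.params = nn.ParameterList(
            [nn.Parameter(torch.randn(10, 10)) for _ in range(10)]
        )

    def forward(self, x):
        # ParameterList can be iterated over, or indexed with ints
        for i, p in enumerate(self.params):
            x = self.params[i // 2].mm(x) + p.mm(x)
        return x

m = MyModule()
assert len(list(m.parameters())) == 10   # all ten (10, 10) parameters registered
out = m(torch.randn(10, 4))
print(out.shape)  # torch.Size([10, 4])
```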
-
append
(parameter: Parameter) → ParameterList¶ Appends a given parameter at the end of the list.
- Parameters
parameter (nn.Parameter) – parameter to append
-
extend
(parameters: Iterable[Parameter]) → ParameterList¶ Appends parameters from a Python iterable to the end of the list.
- Parameters
parameters (iterable) – iterable of parameters to append
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
class
borch.nn.
PixelShuffle
(upscale_factor: int)¶ Rearranges elements in a tensor of shape \((*, C \times r^2, H, W)\) to a tensor of shape \((*, C, H \times r, W \times r)\), where r is an upscale factor.
This is useful for implementing efficient sub-pixel convolution with a stride of \(1/r\).
See the paper: Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network by Shi et al. (2016) for more details.
- Parameters
upscale_factor (int) – factor to increase spatial resolution by
- Shape:
Input: \((*, C_{in}, H_{in}, W_{in})\), where * is zero or more batch dimensions
Output: \((*, C_{out}, H_{out}, W_{out})\), where
\[C_{out} = C_{in} \div \text{upscale\_factor}^2\]\[H_{out} = H_{in} \times \text{upscale\_factor}\]\[W_{out} = W_{in} \times \text{upscale\_factor}\]Examples:
>>> pixel_shuffle = nn.PixelShuffle(3)
>>> input = torch.randn(1, 9, 4, 4)
>>> output = pixel_shuffle(input)
>>> print(output.size())
torch.Size([1, 1, 12, 12])
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
PixelUnshuffle
(downscale_factor: int)¶ Reverses the
PixelShuffle
operation by rearranging elements in a tensor of shape \((*, C, H \times r, W \times r)\) to a tensor of shape \((*, C \times r^2, H, W)\), where r is a downscale factor.See the paper: Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network by Shi et al. (2016) for more details.
- Parameters
downscale_factor (int) – factor to decrease spatial resolution by
- Shape:
Input: \((*, C_{in}, H_{in}, W_{in})\), where * is zero or more batch dimensions
Output: \((*, C_{out}, H_{out}, W_{out})\), where
\[C_{out} = C_{in} \times \text{downscale\_factor}^2\]\[H_{out} = H_{in} \div \text{downscale\_factor}\]\[W_{out} = W_{in} \div \text{downscale\_factor}\]Examples:
>>> pixel_unshuffle = nn.PixelUnshuffle(3)
>>> input = torch.randn(1, 1, 12, 12)
>>> output = pixel_unshuffle(input)
>>> print(output.size())
torch.Size([1, 9, 4, 4])
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
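Since PixelUnshuffle reverses PixelShuffle exactly, the round trip can be checked directly. A minimal sketch using torch.nn, whose interface borch.nn mirrors (the same calls apply with borch.nn):

```python
import torch
import torch.nn as nn  # borch.nn mirrors this interface

shuffle = nn.PixelShuffle(3)
unshuffle = nn.PixelUnshuffle(3)

x = torch.randn(1, 9, 4, 4)
up = shuffle(x)              # rearranged to (1, 1, 12, 12)
roundtrip = unshuffle(up)    # back to (1, 9, 4, 4)
is_inverse = torch.allclose(roundtrip, x)
```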
-
class
borch.nn.
PoissonNLLLoss
(log_input: bool = True, full: bool = False, size_average=None, eps: float = 1e-08, reduce=None, reduction: str = 'mean')¶ Negative log likelihood loss with Poisson distribution of target.
The loss can be described as:
\[ \begin{align}\begin{aligned}\text{target} \sim \mathrm{Poisson}(\text{input})\\\text{loss}(\text{input}, \text{target}) = \text{input} - \text{target} * \log(\text{input}) + \log(\text{target!})\end{aligned}\end{align} \]
The last term can be omitted or approximated with Stirling's formula. The approximation is used for target values greater than 1; for targets less than or equal to 1, zeros are added to the loss.
- Parameters
log_input (bool, optional) – if True the loss is computed as \(\exp(\text{input}) - \text{target}*\text{input}\), if False the loss is \(\text{input} - \text{target}*\log(\text{input}+\text{eps})\).
full (bool, optional) – whether to compute the full loss, i.e. to add the Stirling approximation term \(\text{target}*\log(\text{target}) - \text{target} + 0.5 * \log(2\pi\text{target})\).
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
eps (float, optional) – Small value to avoid evaluation of \(\log(0)\) when log_input = False. Default: 1e-8
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
Examples:
>>> loss = nn.PoissonNLLLoss()
>>> log_input = torch.randn(5, 2, requires_grad=True)
>>> target = torch.randn(5, 2)
>>> output = loss(log_input, target)
>>> output.backward()
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Target: \((*)\), same shape as the input.
Output: scalar by default. If reduction is 'none', then \((*)\), the same shape as the input.
-
forward
(log_input: torch.Tensor, target: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
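With the default settings (log_input=True, full=False, reduction='mean'), the loss above reduces to the mean of \(\exp(\text{input}) - \text{target}*\text{input}\). A sketch verifying this against a manual computation, using torch.nn, whose interface borch.nn mirrors:

```python
import torch
import torch.nn as nn  # borch.nn mirrors this interface

loss_fn = nn.PoissonNLLLoss(log_input=True, full=False)  # the defaults
log_rate = torch.randn(4, 3)                  # input interpreted as log(rate)
target = torch.poisson(torch.rand(4, 3) * 5)  # non-negative count targets

# loss = exp(input) - target * input, averaged over all elements
manual = (torch.exp(log_rate) - target * log_rate).mean()
auto = loss_fn(log_rate, target)
match = torch.allclose(auto, manual)
```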
-
class
borch.nn.
RNN
(*args, **kwargs)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.- Applies a multi-layer Elman RNN with \(\tanh\) or \(\text{ReLU}\) non-linearity to an
input sequence.
For each element in the input sequence, each layer computes the following function:
\[h_t = \tanh(W_{ih} x_t + b_{ih} + W_{hh} h_{(t-1)} + b_{hh})\]
where \(h_t\) is the hidden state at time t, \(x_t\) is the input at time t, and \(h_{(t-1)}\) is the hidden state of the previous layer at time t-1 or the initial hidden state at time 0. If nonlinearity is 'relu', then \(\text{ReLU}\) is used instead of \(\tanh\).
- Args:
input_size: The number of expected features in the input x
hidden_size: The number of features in the hidden state h
num_layers: Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two RNNs together to form a stacked RNN, with the second RNN taking in outputs of the first RNN and computing the final results. Default: 1
nonlinearity: The non-linearity to use. Can be either 'tanh' or 'relu'. Default: 'tanh'
bias: If False, then the layer does not use bias weights b_ih and b_hh. Default: True
batch_first: If True, then the input and output tensors are provided as (batch, seq, feature) instead of (seq, batch, feature). Note that this does not apply to hidden or cell states. See the Inputs/Outputs sections below for details. Default: False
dropout: If non-zero, introduces a Dropout layer on the outputs of each RNN layer except the last layer, with dropout probability equal to dropout. Default: 0
bidirectional: If True, becomes a bidirectional RNN. Default: False
- Inputs: input, h_0
input: tensor of shape \((L, N, H_{in})\) when batch_first=False or \((N, L, H_{in})\) when batch_first=True containing the features of the input sequence. The input can also be a packed variable length sequence. See torch.nn.utils.rnn.pack_padded_sequence() or torch.nn.utils.rnn.pack_sequence() for details.
h_0: tensor of shape \((D * \text{num\_layers}, N, H_{out})\) containing the initial hidden state for each element in the batch. Defaults to zeros if not provided.
where:
\[\begin{split}\begin{aligned} N ={} & \text{batch size} \\ L ={} & \text{sequence length} \\ D ={} & 2 \text{ if bidirectional=True otherwise } 1 \\ H_{in} ={} & \text{input\_size} \\ H_{out} ={} & \text{hidden\_size} \end{aligned}\end{split}\]
- Outputs: output, h_n
output: tensor of shape \((L, N, D * H_{out})\) when batch_first=False or \((N, L, D * H_{out})\) when batch_first=True containing the output features (h_t) from the last layer of the RNN, for each t. If a torch.nn.utils.rnn.PackedSequence has been given as the input, the output will also be a packed sequence.
h_n: tensor of shape \((D * \text{num\_layers}, N, H_{out})\) containing the final hidden state for each element in the batch.
- Attributes:
- weight_ih_l[k]: the learnable input-hidden weights of the k-th layer,
of shape (hidden_size, input_size) for k = 0. Otherwise, the shape is (hidden_size, num_directions * hidden_size)
- weight_hh_l[k]: the learnable hidden-hidden weights of the k-th layer,
of shape (hidden_size, hidden_size)
- bias_ih_l[k]: the learnable input-hidden bias of the k-th layer,
of shape (hidden_size)
- bias_hh_l[k]: the learnable hidden-hidden bias of the k-th layer,
of shape (hidden_size)
Note
All the weights and biases are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\text{hidden\_size}}\)
Note
For bidirectional RNNs, forward and backward are directions 0 and 1 respectively. Example of splitting the output layers when batch_first=False: output.view(seq_len, batch, num_directions, hidden_size).
Examples:
>>> rnn = nn.RNN(10, 20, 2)
>>> input = torch.randn(5, 3, 10)
>>> h0 = torch.randn(2, 3, 20)
>>> output, hn = rnn(input, h0)
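The shape rules above are easy to misread, especially that h_n keeps the (num_layers, batch, hidden_size) layout even with batch_first=True. A sketch using torch.nn, whose interface borch.nn mirrors:

```python
import torch
import torch.nn as nn  # borch.nn mirrors this interface

rnn = nn.RNN(input_size=10, hidden_size=20, num_layers=2, batch_first=True)
x = torch.randn(3, 5, 10)   # (batch=3, seq=5, features=10)
output, h_n = rnn(x)        # h_0 defaults to zeros when not given

# output: (batch, seq, hidden_size); h_n: (num_layers, batch, hidden_size)
```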
-
class
borch.nn.
RNNBase
(mode: str, input_size: int, hidden_size: int, num_layers: int = 1, bias: bool = True, batch_first: bool = False, dropout: float = 0.0, bidirectional: bool = False, proj_size: int = 0, device=None, dtype=None)¶
-
class
borch.nn.
RNNCell
(input_size: int, hidden_size: int, bias: bool = True, nonlinearity: str = 'tanh', device=None, dtype=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.An Elman RNN cell with tanh or ReLU non-linearity.
\[h' = \tanh(W_{ih} x + b_{ih} + W_{hh} h + b_{hh})\]
If nonlinearity is 'relu', then ReLU is used in place of tanh.
- Args:
input_size: The number of expected features in the input x
hidden_size: The number of features in the hidden state h
bias: If False, then the layer does not use bias weights b_ih and b_hh. Default: True
nonlinearity: The non-linearity to use. Can be either 'tanh' or 'relu'. Default: 'tanh'
- Inputs: input, hidden
input of shape (batch, input_size): tensor containing input features
hidden of shape (batch, hidden_size): tensor containing the initial hidden state for each element in the batch. Defaults to zero if not provided.
- Outputs: h’
h’ of shape (batch, hidden_size): tensor containing the next hidden state for each element in the batch
- Shape:
Input1: \((N, H_{in})\) tensor containing input features where \(H_{in}\) = input_size
Input2: \((N, H_{out})\) tensor containing the initial hidden state for each element in the batch where \(H_{out}\) = hidden_size. Defaults to zero if not provided.
Output: \((N, H_{out})\) tensor containing the next hidden state for each element in the batch
- Attributes:
- weight_ih: the learnable input-hidden weights, of shape
(hidden_size, input_size)
- weight_hh: the learnable hidden-hidden weights, of shape
(hidden_size, hidden_size)
bias_ih: the learnable input-hidden bias, of shape (hidden_size) bias_hh: the learnable hidden-hidden bias, of shape (hidden_size)
Note
All the weights and biases are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\text{hidden\_size}}\)
Examples:
>>> rnn = nn.RNNCell(10, 20)
>>> input = torch.randn(6, 3, 10)
>>> hx = torch.randn(3, 20)
>>> output = []
>>> for i in range(6):
...     hx = rnn(input[i], hx)
...     output.append(hx)
-
class
borch.nn.
RNNCellBase
(input_size: int, hidden_size: int, bias: bool, num_chunks: int, device=None, dtype=None)¶
-
class
borch.nn.
RReLU
(lower: float = 0.125, upper: float = 0.3333333333333333, inplace: bool = False)¶ Applies the randomized leaky rectified linear unit function, element-wise, as described in the paper:
Empirical Evaluation of Rectified Activations in Convolutional Network.
The function is defined as:
\[\begin{split}\text{RReLU}(x) = \begin{cases} x & \text{if } x \geq 0 \\ ax & \text{ otherwise } \end{cases}\end{split}\]where \(a\) is randomly sampled from uniform distribution \(\mathcal{U}(\text{lower}, \text{upper})\).
- Parameters
lower – lower bound of the uniform distribution. Default: \(\frac{1}{8}\)
upper – upper bound of the uniform distribution. Default: \(\frac{1}{3}\)
inplace – can optionally do the operation in-place. Default:
False
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.RReLU(0.1, 0.3)
>>> input = torch.randn(2)
>>> output = m(input)
-
extra_repr
()¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
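One detail worth knowing: the negative slope \(a\) is only random in training mode; in evaluation mode PyTorch fixes it to the midpoint (lower + upper) / 2. A sketch using torch.nn, whose interface borch.nn mirrors:

```python
import torch
import torch.nn as nn  # borch.nn mirrors this interface

m = nn.RReLU(lower=0.1, upper=0.3)
x = -torch.ones(1000)   # all-negative input exercises the random slope

m.eval()                # deterministic: slope = (lower + upper) / 2 = 0.2
fixed = m(x)

m.train()               # slope drawn per element from U(lower, upper)
sampled = m(x)
```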
-
class
borch.nn.
ReLU
(inplace: bool = False)¶ Applies the rectified linear unit function element-wise:
\(\text{ReLU}(x) = (x)^+ = \max(0, x)\)
- Parameters
inplace – can optionally do the operation in-place. Default:
False
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.ReLU()
>>> input = torch.randn(2)
>>> output = m(input)

An implementation of CReLU - https://arxiv.org/abs/1603.05201

>>> m = nn.ReLU()
>>> input = torch.randn(2).unsqueeze(0)
>>> output = torch.cat((m(input), m(-input)))
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
ReLU6
(inplace: bool = False)¶ Applies the element-wise function:
\[\text{ReLU6}(x) = \min(\max(0,x), 6)\]- Parameters
inplace – can optionally do the operation in-place. Default:
False
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.ReLU6()
>>> input = torch.randn(2)
>>> output = m(input)
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
class
borch.nn.
ReflectionPad1d
(padding: Union[int, Tuple[int, int]])¶ Pads the input tensor using the reflection of the input boundary.
For N-dimensional padding, use
torch.nn.functional.pad()
.- Parameters
padding (int, tuple) – the size of the padding. If is int, uses the same padding in all boundaries. If a 2-tuple, uses (\(\text{padding\_left}\), \(\text{padding\_right}\))
- Shape:
Input: \((C, W_{in})\) or \((N, C, W_{in})\).
Output: \((C, W_{out})\) or \((N, C, W_{out})\), where
\(W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}\)
Examples:
>>> m = nn.ReflectionPad1d(2)
>>> input = torch.arange(8, dtype=torch.float).reshape(1, 2, 4)
>>> input
tensor([[[0., 1., 2., 3.],
         [4., 5., 6., 7.]]])
>>> m(input)
tensor([[[2., 1., 0., 1., 2., 3., 2., 1.],
         [6., 5., 4., 5., 6., 7., 6., 5.]]])
>>> # using different paddings for different sides
>>> m = nn.ReflectionPad1d((3, 1))
>>> m(input)
tensor([[[3., 2., 1., 0., 1., 2., 3., 2.],
         [7., 6., 5., 4., 5., 6., 7., 6.]]])
-
class
borch.nn.
ReflectionPad2d
(padding: Union[int, Tuple[int, int, int, int]])¶ Pads the input tensor using the reflection of the input boundary.
For N-dimensional padding, use
torch.nn.functional.pad()
.- Parameters
padding (int, tuple) – the size of the padding. If is int, uses the same padding in all boundaries. If a 4-tuple, uses (\(\text{padding\_left}\), \(\text{padding\_right}\), \(\text{padding\_top}\), \(\text{padding\_bottom}\))
- Shape:
Input: \((N, C, H_{in}, W_{in})\) or \((C, H_{in}, W_{in})\).
Output: \((N, C, H_{out}, W_{out})\) or \((C, H_{out}, W_{out})\) where
\(H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}\)
\(W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}\)
Examples:
>>> m = nn.ReflectionPad2d(2)
>>> input = torch.arange(9, dtype=torch.float).reshape(1, 1, 3, 3)
>>> input
tensor([[[[0., 1., 2.],
          [3., 4., 5.],
          [6., 7., 8.]]]])
>>> m(input)
tensor([[[[8., 7., 6., 7., 8., 7., 6.],
          [5., 4., 3., 4., 5., 4., 3.],
          [2., 1., 0., 1., 2., 1., 0.],
          [5., 4., 3., 4., 5., 4., 3.],
          [8., 7., 6., 7., 8., 7., 6.],
          [5., 4., 3., 4., 5., 4., 3.],
          [2., 1., 0., 1., 2., 1., 0.]]]])
>>> # using different paddings for different sides
>>> m = nn.ReflectionPad2d((1, 1, 2, 0))
>>> m(input)
tensor([[[[7., 6., 7., 8., 7.],
          [4., 3., 4., 5., 4.],
          [1., 0., 1., 2., 1.],
          [4., 3., 4., 5., 4.],
          [7., 6., 7., 8., 7.]]]])
-
class
borch.nn.
ReflectionPad3d
(padding: Union[int, Tuple[int, int, int, int, int, int]])¶ Pads the input tensor using the reflection of the input boundary.
For N-dimensional padding, use
torch.nn.functional.pad()
.- Parameters
padding (int, tuple) – the size of the padding. If is int, uses the same padding in all boundaries. If a 6-tuple, uses (\(\text{padding\_left}\), \(\text{padding\_right}\), \(\text{padding\_top}\), \(\text{padding\_bottom}\), \(\text{padding\_front}\), \(\text{padding\_back}\))
- Shape:
Input: \((N, C, D_{in}, H_{in}, W_{in})\) or \((C, D_{in}, H_{in}, W_{in})\).
Output: \((N, C, D_{out}, H_{out}, W_{out})\) or \((C, D_{out}, H_{out}, W_{out})\), where
\(D_{out} = D_{in} + \text{padding\_front} + \text{padding\_back}\)
\(H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}\)
\(W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}\)
Examples:
>>> m = nn.ReflectionPad3d(1)
>>> input = torch.arange(8, dtype=torch.float).reshape(1, 1, 2, 2, 2)
>>> m(input)
tensor([[[[[7., 6., 7., 6.],
           [5., 4., 5., 4.],
           [7., 6., 7., 6.],
           [5., 4., 5., 4.]],
          [[3., 2., 3., 2.],
           [1., 0., 1., 0.],
           [3., 2., 3., 2.],
           [1., 0., 1., 0.]],
          [[7., 6., 7., 6.],
           [5., 4., 5., 4.],
           [7., 6., 7., 6.],
           [5., 4., 5., 4.]],
          [[3., 2., 3., 2.],
           [1., 0., 1., 0.],
           [3., 2., 3., 2.],
           [1., 0., 1., 0.]]]]])
-
class
borch.nn.
ReplicationPad1d
(padding: Union[int, Tuple[int, int]])¶ Pads the input tensor using replication of the input boundary.
For N-dimensional padding, use
torch.nn.functional.pad()
.- Parameters
padding (int, tuple) – the size of the padding. If is int, uses the same padding in all boundaries. If a 2-tuple, uses (\(\text{padding\_left}\), \(\text{padding\_right}\))
- Shape:
Input: \((C, W_{in})\) or \((N, C, W_{in})\).
Output: \((C, W_{out})\) or \((N, C, W_{out})\), where
\(W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}\)
Examples:
>>> m = nn.ReplicationPad1d(2)
>>> input = torch.arange(8, dtype=torch.float).reshape(1, 2, 4)
>>> input
tensor([[[0., 1., 2., 3.],
         [4., 5., 6., 7.]]])
>>> m(input)
tensor([[[0., 0., 0., 1., 2., 3., 3., 3.],
         [4., 4., 4., 5., 6., 7., 7., 7.]]])
>>> # using different paddings for different sides
>>> m = nn.ReplicationPad1d((3, 1))
>>> m(input)
tensor([[[0., 0., 0., 0., 1., 2., 3., 3.],
         [4., 4., 4., 4., 5., 6., 7., 7.]]])
-
class
borch.nn.
ReplicationPad2d
(padding: Union[int, Tuple[int, int, int, int]])¶ Pads the input tensor using replication of the input boundary.
For N-dimensional padding, use
torch.nn.functional.pad()
.- Parameters
padding (int, tuple) – the size of the padding. If is int, uses the same padding in all boundaries. If a 4-tuple, uses (\(\text{padding\_left}\), \(\text{padding\_right}\), \(\text{padding\_top}\), \(\text{padding\_bottom}\))
- Shape:
Input: \((N, C, H_{in}, W_{in})\) or \((C, H_{in}, W_{in})\).
Output: \((N, C, H_{out}, W_{out})\) or \((C, H_{out}, W_{out})\), where
\(H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}\)
\(W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}\)
Examples:
>>> m = nn.ReplicationPad2d(2)
>>> input = torch.arange(9, dtype=torch.float).reshape(1, 1, 3, 3)
>>> input
tensor([[[[0., 1., 2.],
          [3., 4., 5.],
          [6., 7., 8.]]]])
>>> m(input)
tensor([[[[0., 0., 0., 1., 2., 2., 2.],
          [0., 0., 0., 1., 2., 2., 2.],
          [0., 0., 0., 1., 2., 2., 2.],
          [3., 3., 3., 4., 5., 5., 5.],
          [6., 6., 6., 7., 8., 8., 8.],
          [6., 6., 6., 7., 8., 8., 8.],
          [6., 6., 6., 7., 8., 8., 8.]]]])
>>> # using different paddings for different sides
>>> m = nn.ReplicationPad2d((1, 1, 2, 0))
>>> m(input)
tensor([[[[0., 0., 1., 2., 2.],
          [0., 0., 1., 2., 2.],
          [0., 0., 1., 2., 2.],
          [3., 3., 4., 5., 5.],
          [6., 6., 7., 8., 8.]]]])
-
class
borch.nn.
ReplicationPad3d
(padding: Union[int, Tuple[int, int, int, int, int, int]])¶ Pads the input tensor using replication of the input boundary.
For N-dimensional padding, use
torch.nn.functional.pad()
.- Parameters
padding (int, tuple) – the size of the padding. If is int, uses the same padding in all boundaries. If a 6-tuple, uses (\(\text{padding\_left}\), \(\text{padding\_right}\), \(\text{padding\_top}\), \(\text{padding\_bottom}\), \(\text{padding\_front}\), \(\text{padding\_back}\))
- Shape:
Input: \((N, C, D_{in}, H_{in}, W_{in})\) or \((C, D_{in}, H_{in}, W_{in})\).
Output: \((N, C, D_{out}, H_{out}, W_{out})\) or \((C, D_{out}, H_{out}, W_{out})\), where
\(D_{out} = D_{in} + \text{padding\_front} + \text{padding\_back}\)
\(H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}\)
\(W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}\)
Examples:
>>> m = nn.ReplicationPad3d(3)
>>> input = torch.randn(16, 3, 8, 320, 480)
>>> output = m(input)
>>> # using different paddings for different sides
>>> m = nn.ReplicationPad3d((3, 3, 6, 6, 1, 1))
>>> output = m(input)
-
class
borch.nn.
SELU
(inplace: bool = False)¶ Applies the SELU function element-wise:
\[\text{SELU}(x) = \text{scale} * (\max(0,x) + \min(0, \alpha * (\exp(x) - 1)))\]with \(\alpha = 1.6732632423543772848170429916717\) and \(\text{scale} = 1.0507009873554804934193349852946\).
Warning
When using kaiming_normal or kaiming_normal_ for initialisation, nonlinearity='linear' should be used instead of nonlinearity='selu' in order to get Self-Normalizing Neural Networks. See torch.nn.init.calculate_gain() for more information.
More details can be found in the paper Self-Normalizing Neural Networks.
- Parameters
inplace (bool, optional) – can optionally do the operation in-place. Default:
False
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.SELU()
>>> input = torch.randn(2)
>>> output = m(input)
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
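The closed form above can be checked term by term against the module: the positive branch is \(\text{scale} \cdot x\) and the negative branch is \(\text{scale} \cdot \alpha (\exp(x) - 1)\). A sketch using torch.nn, whose interface borch.nn mirrors:

```python
import torch
import torch.nn as nn  # borch.nn mirrors this interface

alpha = 1.6732632423543772848170429916717
scale = 1.0507009873554804934193349852946

x = torch.randn(1000)
# scale * (max(0, x) + min(0, alpha * (exp(x) - 1)))
manual = scale * (torch.relu(x) + torch.clamp(alpha * (torch.exp(x) - 1), max=0.0))
same = torch.allclose(nn.SELU()(x), manual, atol=1e-6)
```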
-
class
borch.nn.
Sequential
(*args)¶ A sequential container. Modules will be added to it in the order they are passed in the constructor. Alternatively, an OrderedDict of modules can be passed in. The forward() method of Sequential accepts any input and forwards it to the first module it contains. It then “chains” outputs to inputs sequentially for each subsequent module, finally returning the output of the last module.
The value a Sequential provides over manually calling a sequence of modules is that it allows treating the whole container as a single module, such that performing a transformation on the Sequential applies to each of the modules it stores (which are each a registered submodule of the Sequential).
What's the difference between a Sequential and a torch.nn.ModuleList? A ModuleList is exactly what it sounds like: a list for storing Modules! On the other hand, the layers in a Sequential are connected in a cascading way.
Example:
# Using Sequential to create a small model. When `model` is run,
# input will first be passed to `Conv2d(1,20,5)`. The output of
# `Conv2d(1,20,5)` will be used as the input to the first
# `ReLU`; the output of the first `ReLU` will become the input
# for `Conv2d(20,64,5)`. Finally, the output of
# `Conv2d(20,64,5)` will be used as input to the second `ReLU`
model = nn.Sequential(
    nn.Conv2d(1,20,5),
    nn.ReLU(),
    nn.Conv2d(20,64,5),
    nn.ReLU()
)

# Using Sequential with OrderedDict. This is functionally the
# same as the above code
model = nn.Sequential(OrderedDict([
    ('conv1', nn.Conv2d(1,20,5)),
    ('relu1', nn.ReLU()),
    ('conv2', nn.Conv2d(20,64,5)),
    ('relu2', nn.ReLU())
]))
-
forward
(input)¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
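Because a Sequential is itself a module, calling it chains every stored layer, and the stored layers remain reachable by integer index. A small runnable sketch using torch.nn, whose interface borch.nn mirrors:

```python
import torch
import torch.nn as nn  # borch.nn mirrors this interface

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
x = torch.randn(3, 4)
out = model(x)        # x -> Linear(4, 8) -> ReLU -> Linear(8, 2)

first = model[0]      # stored submodules support integer indexing
```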
-
-
class
borch.nn.
SiLU
(inplace: bool = False)¶ Applies the Sigmoid Linear Unit (SiLU) function, element-wise. The SiLU function is also known as the swish function.
\[\text{silu}(x) = x * \sigma(x), \text{where } \sigma(x) \text{ is the logistic sigmoid.}\]Note
See Gaussian Error Linear Units (GELUs) where the SiLU (Sigmoid Linear Unit) was originally coined, and see Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning and Swish: a Self-Gated Activation Function where the SiLU was experimented with later.
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.SiLU()
>>> input = torch.randn(2)
>>> output = m(input)
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
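The defining identity \(\text{silu}(x) = x \cdot \sigma(x)\) can be verified directly. A sketch using torch.nn, whose interface borch.nn mirrors:

```python
import torch
import torch.nn as nn  # borch.nn mirrors this interface

x = torch.randn(100)
# SiLU is x times the logistic sigmoid of x
same = torch.allclose(nn.SiLU()(x), x * torch.sigmoid(x))
```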
-
class
borch.nn.
Sigmoid
¶ Applies the element-wise function:
\[\text{Sigmoid}(x) = \sigma(x) = \frac{1}{1 + \exp(-x)}\]- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.Sigmoid()
>>> input = torch.randn(2)
>>> output = m(input)
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
SmoothL1Loss
(size_average=None, reduce=None, reduction: str = 'mean', beta: float = 1.0)¶ Creates a criterion that uses a squared term if the absolute element-wise error falls below beta and an L1 term otherwise. It is less sensitive to outliers than
torch.nn.MSELoss
and in some cases prevents exploding gradients (e.g. see the paper Fast R-CNN by Ross Girshick).
For a batch of size \(N\), the unreduced loss can be described as:
\[\ell(x, y) = L = \{l_1, ..., l_N\}^T\]with
\[\begin{split}l_n = \begin{cases} 0.5 (x_n - y_n)^2 / beta, & \text{if } |x_n - y_n| < beta \\ |x_n - y_n| - 0.5 * beta, & \text{otherwise } \end{cases}\end{split}\]If reduction is not none, then:
\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]Note
Smooth L1 loss can be seen as exactly L1Loss, but with the \(|x - y| < beta\) portion replaced with a quadratic function such that its slope is 1 at \(|x - y| = beta\). The quadratic segment smooths the L1 loss near \(|x - y| = 0\).
Note
Smooth L1 loss is closely related to HuberLoss, being equivalent to \(huber(x, y) / beta\) (note that Smooth L1's beta hyper-parameter is also known as delta for Huber). This leads to the following differences:
As beta -> 0, Smooth L1 loss converges to L1Loss, while HuberLoss converges to a constant 0 loss.
As beta -> \(+\infty\), Smooth L1 loss converges to a constant 0 loss, while HuberLoss converges to MSELoss.
For Smooth L1 loss, as beta varies, the L1 segment of the loss has a constant slope of 1. For HuberLoss, the slope of the L1 segment is beta.
- Parameters
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
beta (float, optional) – Specifies the threshold at which to change between L1 and L2 loss. The value must be non-negative. Default: 1.0
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Target: \((*)\), same shape as the input.
Output: scalar. If reduction is 'none', then \((*)\), same shape as the input.
-
forward
(input: torch.Tensor, target: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
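The relationship to HuberLoss stated in the note above (Smooth L1 with a given beta equals Huber with delta = beta, divided by beta) can be checked numerically. A sketch using torch.nn, whose interface borch.nn mirrors:

```python
import torch
import torch.nn as nn  # borch.nn mirrors this interface

beta = 2.0
x = torch.randn(8)
y = torch.randn(8)

smooth = nn.SmoothL1Loss(beta=beta)(x, y)
huber = nn.HuberLoss(delta=beta)(x, y)
# smooth_l1(x, y; beta) == huber(x, y; delta=beta) / beta
related = torch.allclose(smooth, huber / beta)
```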
-
class
borch.nn.
SoftMarginLoss
(size_average=None, reduce=None, reduction: str = 'mean')¶ Creates a criterion that optimizes a two-class classification logistic loss between input tensor \(x\) and target tensor \(y\) (containing 1 or -1).
\[\text{loss}(x, y) = \sum_i \frac{\log(1 + \exp(-y[i]*x[i]))}{\text{x.nelement}()}\]- Parameters
size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the sum of the output will be divided by the number of elements in the output, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Target: \((*)\), same shape as the input.
Output: scalar. If reduction is 'none', then \((*)\), same shape as the input.
-
forward
(input: torch.Tensor, target: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
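With the default 'mean' reduction, the loss formula above is simply the mean of \(\log(1 + \exp(-y_i x_i))\) over all elements, which can be reproduced by hand. A sketch using torch.nn, whose interface borch.nn mirrors:

```python
import torch
import torch.nn as nn  # borch.nn mirrors this interface

x = torch.randn(4, 3)
y = torch.randint(0, 2, (4, 3)).float() * 2 - 1  # targets in {-1, +1}

# mean of log(1 + exp(-y * x)) over all elements
manual = torch.log1p(torch.exp(-y * x)).mean()
same = torch.allclose(nn.SoftMarginLoss()(x, y), manual)
```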
-
class
borch.nn.
Softmax
(dim: Optional[int] = None)¶ Applies the Softmax function to an n-dimensional input Tensor rescaling them so that the elements of the n-dimensional output Tensor lie in the range [0,1] and sum to 1.
Softmax is defined as:
\[\text{Softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}\]
When the input Tensor is a sparse tensor then the unspecified values are treated as -inf.
- Shape:
Input: \((*)\) where * means, any number of additional dimensions
Output: \((*)\), same shape as the input
- Returns
a Tensor of the same dimension and shape as the input with values in the range [0, 1]
- Parameters
dim (int) – A dimension along which Softmax will be computed (so every slice along dim will sum to 1).
Note
This module doesn’t work directly with NLLLoss, which expects the Log to be computed between the Softmax and itself. Use LogSoftmax instead (it’s faster and has better numerical properties).
Examples:
>>> m = nn.Softmax(dim=1) >>> input = torch.randn(2, 3) >>> output = m(input)
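The Softmax formula above can also be rendered in plain Python, independent of the module. This is a hedged sketch: `softmax` here is a hypothetical stdlib-only helper, not the borch/torch API, and the max-subtraction is the standard numerical-stability trick (it cancels in the ratio):

```python
import math

def softmax(xs):
    # Softmax(x_i) = exp(x_i) / sum_j exp(x_j); subtracting the max
    # first avoids overflow and does not change the result.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
assert all(0.0 <= p <= 1.0 for p in probs)   # values lie in [0, 1]
assert abs(sum(probs) - 1.0) < 1e-12         # the slice sums to 1
```

The two assertions check exactly the two properties stated above: each output element lies in [0, 1] and the slice sums to 1.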
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Softmax2d
¶ Applies SoftMax over features to each spatial location.
When given an image of
Channels x Height x Width
, it will apply Softmax to each location \((Channels, h_i, w_j)\)- Shape:
Input: \((N, C, H, W)\) or \((C, H, W)\).
Output: \((N, C, H, W)\) or \((C, H, W)\) (same shape as input)
- Returns
a Tensor of the same dimension and shape as the input with values in the range [0, 1]
Examples:
>>> m = nn.Softmax2d() >>> # softmax is applied over the 2nd dimension (channels) >>> input = torch.randn(2, 3, 12, 13) >>> output = m(input)
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Softmin
(dim: Optional[int] = None)¶ Applies the Softmin function to an n-dimensional input Tensor rescaling them so that the elements of the n-dimensional output Tensor lie in the range [0, 1] and sum to 1.
Softmin is defined as:
\[\text{Softmin}(x_{i}) = \frac{\exp(-x_i)}{\sum_j \exp(-x_j)}\]- Shape:
Input: \((*)\), where \(*\) means any number of additional dimensions
Output: \((*)\), same shape as the input
- Parameters
dim (int) – A dimension along which Softmin will be computed (so every slice along dim will sum to 1).
- Returns
a Tensor of the same dimension and shape as the input, with values in the range [0, 1]
Examples:
>>> m = nn.Softmin() >>> input = torch.randn(2, 3) >>> output = m(input)
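The Softmin formula above is just Softmax applied to the negated input. A plain-Python sketch (hypothetical helpers, not the module API) makes that identity concrete:

```python
import math

def softmax(xs):
    # numerically stable softmax over a 1-D list
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def softmin(xs):
    # Softmin(x_i) = exp(-x_i) / sum_j exp(-x_j) = Softmax(-x)_i
    return softmax([-x for x in xs])

out = softmin([1.0, 2.0, 3.0])
assert abs(sum(out) - 1.0) < 1e-12
assert out[0] > out[1] > out[2]  # smaller inputs receive larger mass
```

Note the ordering assertion: where Softmax emphasizes the largest input, Softmin emphasizes the smallest.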
-
extra_repr
()¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Softplus
(beta: int = 1, threshold: int = 20)¶ Applies the element-wise function:
\[\text{Softplus}(x) = \frac{1}{\beta} * \log(1 + \exp(\beta * x))\]SoftPlus is a smooth approximation to the ReLU function and can be used to constrain the output of a machine to always be positive.
For numerical stability the implementation reverts to the linear function when \(input \times \beta > threshold\).
- Parameters
beta – the \(\beta\) value for the Softplus formulation. Default: 1
threshold – values above this revert to a linear function. Default: 20
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.Softplus() >>> input = torch.randn(2) >>> output = m(input)
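The formula and the threshold revert described above can be sketched in plain Python (a hypothetical scalar helper for illustration, not the module itself):

```python
import math

def softplus(x, beta=1.0, threshold=20.0):
    # Softplus(x) = (1/beta) * log(1 + exp(beta * x)); revert to the
    # identity when beta * x > threshold, for numerical stability.
    if beta * x > threshold:
        return x
    return math.log1p(math.exp(beta * x)) / beta

assert abs(softplus(0.0) - math.log(2.0)) < 1e-12
assert softplus(100.0) == 100.0   # linear regime above the threshold
assert softplus(-100.0) >= 0.0    # output is always positive
```

Without the threshold branch, `math.exp(beta * x)` would overflow for large inputs even though the exact result is essentially `x`.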
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Softshrink
(lambd: float = 0.5)¶ Applies the soft shrinkage function elementwise:
\[\begin{split}\text{SoftShrinkage}(x) = \begin{cases} x - \lambda, & \text{ if } x > \lambda \\ x + \lambda, & \text{ if } x < -\lambda \\ 0, & \text{ otherwise } \end{cases}\end{split}\]- Parameters
lambd – the \(\lambda\) value for the Softshrink formulation (must be no less than zero). Default: 0.5
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.Softshrink() >>> input = torch.randn(2) >>> output = m(input)
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Softsign
¶ Applies the element-wise function:
\[\text{SoftSign}(x) = \frac{x}{ 1 + |x|}\]- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.Softsign() >>> input = torch.randn(2) >>> output = m(input)
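The element-wise formula above can be checked with a one-line plain-Python sketch (hypothetical helper, not the module API). Unlike Tanh, SoftSign approaches its \(\pm 1\) asymptotes polynomially rather than exponentially:

```python
def softsign(x):
    # SoftSign(x) = x / (1 + |x|): odd, smooth, bounded in (-1, 1)
    return x / (1.0 + abs(x))

assert softsign(0.0) == 0.0
assert softsign(1.0) == 0.5
assert softsign(-3.0) == -0.75
```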
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
SyncBatchNorm
(num_features: int, eps: float = 1e-05, momentum: float = 0.1, affine: bool = True, track_running_stats: bool = True, process_group: Optional[Any] = None, device=None, dtype=None)¶ Applies Batch Normalization over a N-Dimensional input (a mini-batch of [N-2]D inputs with additional channel dimension) as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift .
\[y = \frac{x - \mathrm{E}[x]}{ \sqrt{\mathrm{Var}[x] + \epsilon}} * \gamma + \beta\]The mean and standard-deviation are calculated per-dimension over all mini-batches of the same process groups. \(\gamma\) and \(\beta\) are learnable parameter vectors of size C (where C is the input size). By default, the elements of \(\gamma\) are sampled from \(\mathcal{U}(0, 1)\) and the elements of \(\beta\) are set to 0. The standard-deviation is calculated via the biased estimator, equivalent to torch.var(input, unbiased=False).
Also by default, during training this layer keeps running estimates of its computed mean and variance, which are then used for normalization during evaluation. The running estimates are kept with a default
momentum
of 0.1.If
track_running_stats
is set toFalse
, this layer then does not keep running estimates, and batch statistics are instead used during evaluation time as well.Note
This
momentum
argument is different from one used in optimizer classes and the conventional notion of momentum. Mathematically, the update rule for running statistics here is \(\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t\), where \(\hat{x}\) is the estimated statistic and \(x_t\) is the new observed value.Because the Batch Normalization is done for each channel in the
C
dimension, computing statistics on(N, +)
slices, it’s common terminology to call this Volumetric Batch Normalization or Spatio-temporal Batch Normalization.Currently
SyncBatchNorm
only supportsDistributedDataParallel
(DDP) with single GPU per process. Usetorch.nn.SyncBatchNorm.convert_sync_batchnorm()
to convertBatchNorm*D
layer toSyncBatchNorm
before wrapping Network with DDP.- Parameters
num_features – \(C\) from an expected input of size \((N, C, +)\)
eps – a value added to the denominator for numerical stability. Default:
1e-5
momentum – the value used for the running_mean and running_var computation. Can be set to
None
for cumulative moving average (i.e. simple average). Default: 0.1affine – a boolean value that when set to
True
, this module has learnable affine parameters. Default:True
track_running_stats – a boolean value that when set to
True
, this module tracks the running mean and variance, and when set toFalse
, this module does not track such statistics, and initializes statistics buffersrunning_mean
andrunning_var
asNone
. When these buffers areNone
, this module always uses batch statistics in both training and eval modes. Default:True
process_group – synchronization of stats happens within each process group individually. Default behavior is synchronization across the whole world
- Shape:
Input: \((N, C, +)\)
Output: \((N, C, +)\) (same shape as input)
Note
Synchronization of batchnorm statistics occurs only while training, i.e. synchronization is disabled when
model.eval()
is set or ifself.training
is otherwiseFalse
.Examples:
>>> # With Learnable Parameters >>> m = nn.SyncBatchNorm(100) >>> # creating process group (optional) >>> # ranks is a list of int identifying rank ids. >>> ranks = list(range(8)) >>> r1, r2 = ranks[:4], ranks[4:] >>> # Note: every rank calls into new_group for every >>> # process group created, even if that rank is not >>> # part of the group. >>> process_groups = [torch.distributed.new_group(pids) for pids in [r1, r2]] >>> process_group = process_groups[0 if dist.get_rank() <= 3 else 1] >>> # Without Learnable Parameters >>> m = nn.BatchNorm3d(100, affine=False, process_group=process_group) >>> input = torch.randn(20, 100, 35, 45, 10) >>> output = m(input) >>> # network is nn.BatchNorm layer >>> sync_bn_network = nn.SyncBatchNorm.convert_sync_batchnorm(network, process_group) >>> # only single gpu per process is currently supported >>> ddp_sync_bn_network = torch.nn.parallel.DistributedDataParallel( >>> sync_bn_network, >>> device_ids=[args.local_rank], >>> output_device=args.local_rank)
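The momentum note above gives the running-statistics update rule \(\hat{x}_\text{new} = (1 - \text{momentum}) \times \hat{x} + \text{momentum} \times x_t\). A plain-Python sketch (the helper name is hypothetical; it is not part of the borch/torch API) shows how the estimate drifts toward the observed statistic:

```python
def update_running_stat(running, observed, momentum=0.1):
    # x_hat_new = (1 - momentum) * x_hat + momentum * x_t
    return (1.0 - momentum) * running + momentum * observed

# starting from 0 and repeatedly observing 1.0, the running estimate
# approaches 1.0 geometrically: after n steps it equals 1 - 0.9**n
r = 0.0
for _ in range(3):
    r = update_running_stat(r, 1.0)
assert abs(r - (1 - 0.9 ** 3)) < 1e-12  # 0.271
```

This is the opposite convention from optimizer momentum, where `momentum` weights the *old* accumulator rather than the new observation.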
-
classmethod
convert_sync_batchnorm
(module, process_group=None)¶ Helper function to convert all
BatchNorm*D
layers in the model totorch.nn.SyncBatchNorm
layers.- Parameters
module (nn.Module) – module containing one or more
BatchNorm*D
layersprocess_group (optional) – process group to scope synchronization, default is the whole world
- Returns
The original
module
with the convertedtorch.nn.SyncBatchNorm
layers. If the originalmodule
is aBatchNorm*D
layer, a newtorch.nn.SyncBatchNorm
layer object will be returned instead.
Example:
>>> # Network with nn.BatchNorm layer >>> module = torch.nn.Sequential( >>> torch.nn.Linear(20, 100), >>> torch.nn.BatchNorm1d(100), >>> ).cuda() >>> # creating process group (optional) >>> # ranks is a list of int identifying rank ids. >>> ranks = list(range(8)) >>> r1, r2 = ranks[:4], ranks[4:] >>> # Note: every rank calls into new_group for every >>> # process group created, even if that rank is not >>> # part of the group. >>> process_groups = [torch.distributed.new_group(pids) for pids in [r1, r2]] >>> process_group = process_groups[0 if dist.get_rank() <= 3 else 1] >>> sync_bn_module = torch.nn.SyncBatchNorm.convert_sync_batchnorm(module, process_group)
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Tanh
¶ Applies the element-wise function:
\[\text{Tanh}(x) = \tanh(x) = \frac{\exp(x) - \exp(-x)} {\exp(x) + \exp(-x)}\]- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.Tanh() >>> input = torch.randn(2) >>> output = m(input)
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Tanhshrink
¶ Applies the element-wise function:
\[\text{Tanhshrink}(x) = x - \tanh(x)\]- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.Tanhshrink() >>> input = torch.randn(2) >>> output = m(input)
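The formula above is simply the residual of Tanh. A plain-Python sketch (hypothetical helper, for illustration only):

```python
import math

def tanhshrink(x):
    # Tanhshrink(x) = x - tanh(x): near zero for small |x| (where
    # tanh(x) ~ x), approaching x -/+ 1 for large |x|
    return x - math.tanh(x)

assert tanhshrink(0.0) == 0.0
assert abs(tanhshrink(2.0) - (2.0 - math.tanh(2.0))) < 1e-12
```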
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Threshold
(threshold: float, value: float, inplace: bool = False)¶ Thresholds each element of the input Tensor.
Threshold is defined as:
\[\begin{split}y = \begin{cases} x, &\text{ if } x > \text{threshold} \\ \text{value}, &\text{ otherwise } \end{cases}\end{split}\]- Parameters
threshold – The value to threshold at
value – The value to replace with
inplace – can optionally do the operation in-place. Default:
False
- Shape:
Input: \((*)\), where \(*\) means any number of dimensions.
Output: \((*)\), same shape as the input.
Examples:
>>> m = nn.Threshold(0.1, 20) >>> input = torch.randn(2) >>> output = m(input)
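The piecewise definition above reduces to a single conditional per element, sketched here in plain Python (a hypothetical scalar helper, not the module API):

```python
def threshold(x, thresh, value):
    # y = x if x > thresh, else the replacement value
    return x if x > thresh else value

# mirrors the nn.Threshold(0.1, 20) example above
assert threshold(0.5, 0.1, 20.0) == 0.5    # kept: 0.5 > 0.1
assert threshold(0.05, 0.1, 20.0) == 20.0  # replaced: 0.05 <= 0.1
```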
-
extra_repr
()¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Transformer
(d_model: int = 512, nhead: int = 8, num_encoder_layers: int = 6, num_decoder_layers: int = 6, dim_feedforward: int = 2048, dropout: float = 0.1, activation: Union[str, Callable[[torch.Tensor], torch.Tensor]] = <function relu>, custom_encoder: Optional[Any] = None, custom_decoder: Optional[Any] = None, layer_norm_eps: float = 1e-05, batch_first: bool = False, norm_first: bool = False, device=None, dtype=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.- A transformer model. User is able to modify the attributes as needed. The architecture
is based on the paper “Attention Is All You Need”. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. Users can build the BERT(https://arxiv.org/abs/1810.04805) model with corresponding parameters.
- Args:
d_model: the number of expected features in the encoder/decoder inputs (default=512). nhead: the number of heads in the multiheadattention models (default=8). num_encoder_layers: the number of sub-encoder-layers in the encoder (default=6). num_decoder_layers: the number of sub-decoder-layers in the decoder (default=6). dim_feedforward: the dimension of the feedforward network model (default=2048). dropout: the dropout value (default=0.1). activation: the activation function of encoder/decoder intermediate layer, can be a string
(“relu” or “gelu”) or a unary callable. Default: relu
custom_encoder: custom encoder (default=None). custom_decoder: custom decoder (default=None). layer_norm_eps: the eps value in layer normalization components (default=1e-5). batch_first: If
True
, then the input and output tensors are providedas (batch, seq, feature). Default:
False
(seq, batch, feature).- norm_first: if
True
, encoder and decoder layers will perform LayerNorms before other attention and feedforward operations, otherwise after. Default:
False
(after).
- Examples::
>>> transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12) >>> src = torch.rand((10, 32, 512)) >>> tgt = torch.rand((20, 32, 512)) >>> out = transformer_model(src, tgt)
Note: A full example to apply nn.Transformer module for the word language model is available in https://github.com/pytorch/examples/tree/master/word_language_model
-
class
borch.nn.
TransformerDecoder
(decoder_layer, num_layers, norm=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.TransformerDecoder is a stack of N decoder layers
- Args:
decoder_layer: an instance of the TransformerDecoderLayer() class (required). num_layers: the number of sub-decoder-layers in the decoder (required). norm: the layer normalization component (optional).
- Examples::
>>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8) >>> transformer_decoder = nn.TransformerDecoder(decoder_layer, num_layers=6) >>> memory = torch.rand(10, 32, 512) >>> tgt = torch.rand(20, 32, 512) >>> out = transformer_decoder(tgt, memory)
-
class
borch.nn.
TransformerDecoderLayer
(d_model, nhead, dim_feedforward=2048, dropout=0.1, activation=<function relu>, layer_norm_eps=1e-05, batch_first=False, norm_first=False, device=None, dtype=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.- TransformerDecoderLayer is made up of self-attn, multi-head-attn and feedforward network.
This standard decoder layer is based on the paper “Attention Is All You Need”. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. Users may modify or implement in a different way during application.
- Args:
d_model: the number of expected features in the input (required). nhead: the number of heads in the multiheadattention models (required). dim_feedforward: the dimension of the feedforward network model (default=2048). dropout: the dropout value (default=0.1). activation: the activation function of the intermediate layer, can be a string
(“relu” or “gelu”) or a unary callable. Default: relu
layer_norm_eps: the eps value in layer normalization components (default=1e-5). batch_first: If
True
, then the input and output tensors are providedas (batch, seq, feature). Default:
False
.- norm_first: if
True
, layer norm is done prior to self attention, multihead attention and feedforward operations, respectively. Otherwise it’s done after. Default:
False
(after).
- Examples::
>>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8) >>> memory = torch.rand(10, 32, 512) >>> tgt = torch.rand(20, 32, 512) >>> out = decoder_layer(tgt, memory)
- Alternatively, when
batch_first
isTrue
: >>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8, batch_first=True) >>> memory = torch.rand(32, 10, 512) >>> tgt = torch.rand(32, 20, 512) >>> out = decoder_layer(tgt, memory)
-
class
borch.nn.
TransformerEncoder
(encoder_layer, num_layers, norm=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.TransformerEncoder is a stack of N encoder layers
- Args:
encoder_layer: an instance of the TransformerEncoderLayer() class (required). num_layers: the number of sub-encoder-layers in the encoder (required). norm: the layer normalization component (optional).
- Examples::
>>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8) >>> transformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=6) >>> src = torch.rand(10, 32, 512) >>> out = transformer_encoder(src)
-
class
borch.nn.
TransformerEncoderLayer
(d_model, nhead, dim_feedforward=2048, dropout=0.1, activation=<function relu>, layer_norm_eps=1e-05, batch_first=False, norm_first=False, device=None, dtype=None)¶ This is a ppl class. Please see
help(borch.nn)
for more information. If one gives distribution as kwargs, where names match the parameters of the Module, they will be used as priors for those parameters.- TransformerEncoderLayer is made up of self-attn and feedforward network.
This standard encoder layer is based on the paper “Attention Is All You Need”. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. Users may modify or implement in a different way during application.
- Args:
d_model: the number of expected features in the input (required). nhead: the number of heads in the multiheadattention models (required). dim_feedforward: the dimension of the feedforward network model (default=2048). dropout: the dropout value (default=0.1). activation: the activation function of the intermediate layer, can be a string
(“relu” or “gelu”) or a unary callable. Default: relu
layer_norm_eps: the eps value in layer normalization components (default=1e-5). batch_first: If
True
, then the input and output tensors are providedas (batch, seq, feature). Default:
False
.- norm_first: if
True
, layer norm is done prior to attention and feedforward operations, respectively. Otherwise it’s done after. Default:
False
(after).
- Examples::
>>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8) >>> src = torch.rand(10, 32, 512) >>> out = encoder_layer(src)
- Alternatively, when
batch_first
isTrue
: >>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True) >>> src = torch.rand(32, 10, 512) >>> out = encoder_layer(src)
-
class
borch.nn.
TripletMarginLoss
(margin: float = 1.0, p: float = 2.0, eps: float = 1e-06, swap: bool = False, size_average=None, reduce=None, reduction: str = 'mean')¶ Creates a criterion that measures the triplet loss given input tensors \(x_1\), \(x_2\), \(x_3\) and a margin with a value greater than \(0\). This is used for measuring a relative similarity between samples. A triplet is composed of \(a\), \(p\) and \(n\) (i.e., anchor, positive example and negative example respectively). The shapes of all input tensors should be \((N, D)\).
The distance swap is described in detail in the paper Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al.
The loss function for each sample in the mini-batch is:
\[L(a, p, n) = \max \{d(a_i, p_i) - d(a_i, n_i) + {\rm margin}, 0\}\]where
\[d(x_i, y_i) = \left\lVert {\bf x}_i - {\bf y}_i \right\rVert_p\]See also
TripletMarginWithDistanceLoss
, which computes the triplet margin loss for input tensors using a custom distance function.- Parameters
margin (float, optional) – Default: \(1\).
p (int, optional) – The norm degree for pairwise distance. Default: \(2\).
swap (bool, optional) – The distance swap is described in detail in the paper Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al. Default:
False
.size_average (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged over each loss element in the batch. Note that for some losses, there are multiple elements per sample. If the fieldsize_average
is set toFalse
, the losses are instead summed for each minibatch. Ignored whenreduce
isFalse
. Default:True
reduce (bool, optional) – Deprecated (see
reduction
). By default, the losses are averaged or summed over observations for each minibatch depending onsize_average
. Whenreduce
isFalse
, returns a loss per batch element instead and ignoressize_average
. Default:True
reduction (string, optional) – Specifies the reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Note:size_average
andreduce
are in the process of being deprecated, and in the meantime, specifying either of those two args will overridereduction
. Default:'mean'
- Shape:
Input: \((N, D)\) or \((D)\) where \(D\) is the vector dimension.
- Output: A Tensor of shape \((N)\) if
reduction
is'none'
and input shape is \((N, D)\); a scalar otherwise.
Examples:
>>> triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2) >>> anchor = torch.randn(100, 128, requires_grad=True) >>> positive = torch.randn(100, 128, requires_grad=True) >>> negative = torch.randn(100, 128, requires_grad=True) >>> output = triplet_loss(anchor, positive, negative) >>> output.backward()
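The two formulas above, \(L(a, p, n) = \max\{d(a, p) - d(a, n) + \mathrm{margin}, 0\}\) with \(d\) the \(p\)-norm distance, can be worked through in plain Python for a single triplet (hypothetical helpers for illustration, not the module API):

```python
def lp_distance(x, y, p=2.0):
    # d(x, y) = ||x - y||_p
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

def triplet_margin_loss(anchor, positive, negative, margin=1.0, p=2.0):
    # L(a, p, n) = max(d(a, p) - d(a, n) + margin, 0)
    return max(lp_distance(anchor, positive, p)
               - lp_distance(anchor, negative, p) + margin, 0.0)

a, pos, neg = [0.0, 0.0], [0.0, 1.0], [3.0, 4.0]
# d(a, pos) = 1, d(a, neg) = 5 -> loss = max(1 - 5 + 1, 0) = 0
assert triplet_margin_loss(a, pos, neg) == 0.0
# a negative exactly as close as the positive leaves the full margin
assert triplet_margin_loss(a, pos, [1.0, 0.0]) == 1.0
```

The loss is zero once the negative is at least `margin` farther from the anchor than the positive, which is the hinge behavior the \(\max\{\cdot, 0\}\) encodes.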
-
forward
(anchor: torch.Tensor, positive: torch.Tensor, negative: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
TripletMarginWithDistanceLoss
(*, distance_function: Optional[Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] = None, margin: float = 1.0, swap: bool = False, reduction: str = 'mean')¶ Creates a criterion that measures the triplet loss given input tensors \(a\), \(p\), and \(n\) (representing anchor, positive, and negative examples, respectively), and a nonnegative, real-valued function (“distance function”) used to compute the relationship between the anchor and positive example (“positive distance”) and the anchor and negative example (“negative distance”).
The unreduced loss (i.e., with
reduction
set to'none'
) can be described as:\[\ell(a, p, n) = L = \{l_1,\dots,l_N\}^\top, \quad l_i = \max \{d(a_i, p_i) - d(a_i, n_i) + {\rm margin}, 0\}\]where \(N\) is the batch size; \(d\) is a nonnegative, real-valued function quantifying the closeness of two tensors, referred to as the
distance_function
; and \(margin\) is a nonnegative margin representing the minimum difference between the positive and negative distances that is required for the loss to be 0. The input tensors have \(N\) elements each and can be of any shape that the distance function can handle.If
reduction
is not'none'
(default'mean'
), then:\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{`mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{`sum'.} \end{cases}\end{split}\]See also
TripletMarginLoss
, which computes the triplet loss for input tensors using the \(l_p\) distance as the distance function.- Parameters
distance_function (callable, optional) – A nonnegative, real-valued function that quantifies the closeness of two tensors. If not specified, nn.PairwiseDistance will be used. Default:
None
margin (float, optional) – A nonnegative margin representing the minimum difference between the positive and negative distances required for the loss to be 0. Larger margins penalize cases where the negative examples are not distant enough from the anchors, relative to the positives. Default: \(1\).
swap (bool, optional) – Whether to use the distance swap described in the paper Learning shallow convolutional feature descriptors with triplet losses by V. Balntas, E. Riba et al. If True, and if the positive example is closer to the negative example than the anchor is, swaps the positive example and the anchor in the loss computation. Default:
False
.reduction (string, optional) – Specifies the (optional) reduction to apply to the output:
'none'
|'mean'
|'sum'
.'none'
: no reduction will be applied,'mean'
: the sum of the output will be divided by the number of elements in the output,'sum'
: the output will be summed. Default:'mean'
- Shape:
Input: \((N, *)\) where \(*\) represents any number of additional dimensions as supported by the distance function.
Output: A Tensor of shape \((N)\) if
reduction
is'none'
, or a scalar otherwise.
Examples:
>>> # Initialize embeddings >>> embedding = nn.Embedding(1000, 128) >>> anchor_ids = torch.randint(0, 1000, (1,)) >>> positive_ids = torch.randint(0, 1000, (1,)) >>> negative_ids = torch.randint(0, 1000, (1,)) >>> anchor = embedding(anchor_ids) >>> positive = embedding(positive_ids) >>> negative = embedding(negative_ids) >>> >>> # Built-in Distance Function >>> triplet_loss = \ >>> nn.TripletMarginWithDistanceLoss(distance_function=nn.PairwiseDistance()) >>> output = triplet_loss(anchor, positive, negative) >>> output.backward() >>> >>> # Custom Distance Function >>> def l_infinity(x1, x2): >>> return torch.max(torch.abs(x1 - x2), dim=1).values >>> >>> triplet_loss = \ >>> nn.TripletMarginWithDistanceLoss(distance_function=l_infinity, margin=1.5) >>> output = triplet_loss(anchor, positive, negative) >>> output.backward() >>> >>> # Custom Distance Function (Lambda) >>> triplet_loss = \ >>> nn.TripletMarginWithDistanceLoss( >>> distance_function=lambda x, y: 1.0 - F.cosine_similarity(x, y)) >>> output = triplet_loss(anchor, positive, negative) >>> output.backward()
- Reference:
V. Balntas, et al.: Learning shallow convolutional feature descriptors with triplet losses: http://www.bmva.org/bmvc/2016/papers/paper119/index.html
-
forward
(anchor: torch.Tensor, positive: torch.Tensor, negative: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Unflatten
(dim: Union[int, str], unflattened_size: Union[torch.Size, List[int], Tuple[int, ...], Tuple[Tuple[str, int]]])¶ Unflattens a tensor dim expanding it to a desired shape. For use with
Sequential
.dim
specifies the dimension of the input tensor to be unflattened, and it can be either int or str when Tensor or NamedTensor is used, respectively.unflattened_size
is the new shape of the unflattened dimension of the tensor and it can be a tuple of ints or a list of ints or torch.Size for Tensor input; a NamedShape (tuple of (name, size) tuples) for NamedTensor input.
- Shape:
Input: \((*, S_{\text{dim}}, *)\), where \(S_{\text{dim}}\) is the size at dimension
dim
and \(*\) means any number of dimensions including none.Output: \((*, U_1, ..., U_n, *)\), where \(U\) =
unflattened_size
and \(\prod_{i=1}^n U_i = S_{\text{dim}}\).
- Parameters
dim (Union[int, str]) – Dimension to be unflattened
unflattened_size (Union[torch.Size, Tuple, List, NamedShape]) – New shape of the unflattened dimension
Examples
>>> input = torch.randn(2, 50)
>>> # With tuple of ints
>>> m = nn.Sequential(
...     nn.Linear(50, 50),
...     nn.Unflatten(1, (2, 5, 5))
... )
>>> output = m(input)
>>> output.size()
torch.Size([2, 2, 5, 5])
>>> # With torch.Size
>>> m = nn.Sequential(
...     nn.Linear(50, 50),
...     nn.Unflatten(1, torch.Size([2, 5, 5]))
... )
>>> output = m(input)
>>> output.size()
torch.Size([2, 2, 5, 5])
>>> # With namedshape (tuple of tuples)
>>> input = torch.randn(2, 50, names=('N', 'features'))
>>> unflatten = nn.Unflatten('features', (('C', 2), ('H', 5), ('W', 5)))
>>> output = unflatten(input)
>>> output.size()
torch.Size([2, 2, 5, 5])
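As a quick sketch (the shapes below are illustrative choices, not part of the API docs), the constraint above that the product of unflattened_size must equal the size at dim can be checked directly:

```python
import torch
import torch.nn as nn

# Unflatten dim 1 of a (4, 24) tensor; the product of the new
# shape must equal the original size at that dim (24 = 2 * 3 * 4).
x = torch.randn(4, 24)
m = nn.Unflatten(1, (2, 3, 4))
y = m(x)  # shape (4, 2, 3, 4); flattening it back recovers x
```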
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of calling this method directly, since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Unfold
(kernel_size: Union[int, Tuple[int, ...]], dilation: Union[int, Tuple[int, ...]] = 1, padding: Union[int, Tuple[int, ...]] = 0, stride: Union[int, Tuple[int, ...]] = 1)¶ Extracts sliding local blocks from a batched input tensor.
Consider a batched input tensor of shape \((N, C, *)\), where \(N\) is the batch dimension, \(C\) is the channel dimension, and \(*\) represents arbitrary spatial dimensions. This operation flattens each sliding kernel_size-sized block within the spatial dimensions of input into a column (i.e., last dimension) of a 3-D output tensor of shape \((N, C \times \prod(\text{kernel\_size}), L)\), where \(C \times \prod(\text{kernel\_size})\) is the total number of values within each block (a block has \(\prod(\text{kernel\_size})\) spatial locations, each containing a \(C\)-channeled vector), and \(L\) is the total number of such blocks:
\[L = \prod_d \left\lfloor\frac{\text{spatial\_size}[d] + 2 \times \text{padding}[d] - \text{dilation}[d] \times (\text{kernel\_size}[d] - 1) - 1}{\text{stride}[d]} + 1\right\rfloor,\]
where \(\text{spatial\_size}\) is formed by the spatial dimensions of input (\(*\) above), and \(d\) ranges over all spatial dimensions. Therefore, indexing output at the last dimension (column dimension) gives all values within a certain block.
The padding, stride and dilation arguments specify how the sliding blocks are retrieved:
stride controls the stride for the sliding blocks.
padding controls the amount of implicit zero-padding added on both sides, for padding number of points in each dimension, before reshaping.
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
- Parameters
kernel_size (int or tuple) – the size of the sliding blocks
stride (int or tuple, optional) – the stride of the sliding blocks in the input spatial dimensions. Default: 1
padding (int or tuple, optional) – implicit zero padding to be added on both sides of input. Default: 0
dilation (int or tuple, optional) – a parameter that controls the stride of elements within the neighborhood. Default: 1
If kernel_size, dilation, padding or stride is an int or a tuple of length 1, their values will be replicated across all spatial dimensions. For the case of two input spatial dimensions, this operation is sometimes called
im2col
.
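As a hedged sketch of the block-count formula above, one can compute \(L\) per spatial dimension and compare it to an actual Unfold output; the input shape, kernel parameters, and the n_blocks helper below are illustrative choices, not part of the API:

```python
import torch
import torch.nn as nn

# Illustrative shapes and parameters for checking the L formula.
N, C, H, W = 1, 3, 10, 12
kernel, stride, padding, dilation = (4, 5), (1, 1), (0, 0), (1, 1)

def n_blocks(size, k, s, p, d):
    # floor((spatial_size + 2*padding - dilation*(kernel - 1) - 1) / stride + 1)
    return (size + 2 * p - d * (k - 1) - 1) // s + 1

L = n_blocks(H, kernel[0], stride[0], padding[0], dilation[0]) \
    * n_blocks(W, kernel[1], stride[1], padding[1], dilation[1])

unfold = nn.Unfold(kernel_size=kernel, stride=stride,
                   padding=padding, dilation=dilation)
out = unfold(torch.randn(N, C, H, W))
# Expected output shape: (N, C * prod(kernel_size), L)
```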
Note
Fold
calculates each combined value in the resulting large tensor by summing all values from all containing blocks. Unfold extracts the values in the local blocks by copying from the large tensor. So, if the blocks overlap, they are not inverses of each other.
In general, folding and unfolding operations are related as follows. Consider
Fold
and Unfold instances created with the same parameters:
>>> fold_params = dict(kernel_size=..., dilation=..., padding=..., stride=...)
>>> fold = nn.Fold(output_size=..., **fold_params)
>>> unfold = nn.Unfold(**fold_params)
Then for any (supported)
input
tensor the following equality holds:
fold(unfold(input)) == divisor * input
where
divisor
is a tensor that depends only on the shape and dtype of the input:
>>> input_ones = torch.ones(input.shape, dtype=input.dtype)
>>> divisor = fold(unfold(input_ones))
When the
divisor
tensor contains no zero elements, then the fold and unfold operations are inverses of each other (up to a constant divisor).
Warning
Currently, only 4-D input tensors (batched image-like tensors) are supported.
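The divisor relationship above can be sketched concretely; the kernel size and input shape below are illustrative choices, not a prescribed usage:

```python
import torch
import torch.nn as nn

# 2x2 blocks with stride 1 over a 3x3 input overlap, so fold(unfold(x))
# sums every copy of each element rather than recovering x directly.
fold_params = dict(kernel_size=(2, 2), stride=1)
fold = nn.Fold(output_size=(3, 3), **fold_params)
unfold = nn.Unfold(**fold_params)

x = torch.randn(1, 1, 3, 3)
# divisor counts how many blocks cover each spatial position
divisor = fold(unfold(torch.ones_like(x)))
y = fold(unfold(x))  # equals divisor * x, element-wise
```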
- Shape:
Input: \((N, C, *)\)
Output: \((N, C \times \prod(\text{kernel\_size}), L)\) as described above
Examples:
>>> unfold = nn.Unfold(kernel_size=(2, 3))
>>> input = torch.randn(2, 5, 3, 4)
>>> output = unfold(input)
>>> # each patch contains 30 values (2x3=6 vectors, each of 5 channels)
>>> # 4 blocks (2x3 kernels) in total in the 3x4 input
>>> output.size()
torch.Size([2, 30, 4])
>>> # Convolution is equivalent with Unfold + Matrix Multiplication + Fold (or view to output shape)
>>> inp = torch.randn(1, 3, 10, 12)
>>> w = torch.randn(2, 3, 4, 5)
>>> inp_unf = torch.nn.functional.unfold(inp, (4, 5))
>>> out_unf = inp_unf.transpose(1, 2).matmul(w.view(w.size(0), -1).t()).transpose(1, 2)
>>> out = torch.nn.functional.fold(out_unf, (7, 8), (1, 1))
>>> # or equivalently (and avoiding a copy),
>>> # out = out_unf.view(1, 2, 7, 8)
>>> (torch.nn.functional.conv2d(inp, w) - out).abs().max()
tensor(1.9073e-06)
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of calling this method directly, since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
Upsample
(size: Union[int, Tuple[int, ...], None] = None, scale_factor: Union[float, Tuple[float, ...], None] = None, mode: str = 'nearest', align_corners: Optional[bool] = None)¶ Upsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D (volumetric) data.
The input data is assumed to be of the form minibatch x channels x [optional depth] x [optional height] x width. Hence, for spatial inputs, we expect a 4D Tensor and for volumetric inputs, we expect a 5D Tensor.
The algorithms available for upsampling are nearest neighbor and linear, bilinear, bicubic and trilinear for 3D, 4D and 5D input Tensor, respectively.
One can either give a
scale_factor
or the target output size to calculate the output size. (You cannot give both, as it is ambiguous.)
- Parameters
size (int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int], optional) – output spatial sizes
scale_factor (float or Tuple[float] or Tuple[float, float] or Tuple[float, float, float], optional) – multiplier for spatial size. Has to match input size if it is a tuple.
mode (str, optional) – the upsampling algorithm: one of 'nearest', 'linear', 'bilinear', 'bicubic' and 'trilinear'. Default: 'nearest'
align_corners (bool, optional) – if True, the corner pixels of the input and output tensors are aligned, thus preserving the values at those pixels. This only has effect when mode is 'linear', 'bilinear', or 'trilinear'. Default: False
- Shape:
Input: \((N, C, W_{in})\), \((N, C, H_{in}, W_{in})\) or \((N, C, D_{in}, H_{in}, W_{in})\)
Output: \((N, C, W_{out})\), \((N, C, H_{out}, W_{out})\) or \((N, C, D_{out}, H_{out}, W_{out})\), where
\[D_{out} = \left\lfloor D_{in} \times \text{scale\_factor} \right\rfloor\]
\[H_{out} = \left\lfloor H_{in} \times \text{scale\_factor} \right\rfloor\]
\[W_{out} = \left\lfloor W_{in} \times \text{scale\_factor} \right\rfloor\]
Warning
With align_corners = True, the linearly interpolating modes (linear, bilinear, bicubic, and trilinear) don’t proportionally align the output and input pixels, and thus the output values can depend on the input size. This was the default behavior for these modes up to version 0.3.1. Since then, the default behavior is align_corners = False. See below for concrete examples of how this affects the outputs.
Note
If you want downsampling/general resizing, you should use
interpolate()
.
Examples:
>>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)
>>> input
tensor([[[[ 1.,  2.],
          [ 3.,  4.]]]])
>>> m = nn.Upsample(scale_factor=2, mode='nearest')
>>> m(input)
tensor([[[[ 1.,  1.,  2.,  2.],
          [ 1.,  1.,  2.,  2.],
          [ 3.,  3.,  4.,  4.],
          [ 3.,  3.,  4.,  4.]]]])
>>> m = nn.Upsample(scale_factor=2, mode='bilinear')  # align_corners=False
>>> m(input)
tensor([[[[ 1.0000,  1.2500,  1.7500,  2.0000],
          [ 1.5000,  1.7500,  2.2500,  2.5000],
          [ 2.5000,  2.7500,  3.2500,  3.5000],
          [ 3.0000,  3.2500,  3.7500,  4.0000]]]])
>>> m = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
>>> m(input)
tensor([[[[ 1.0000,  1.3333,  1.6667,  2.0000],
          [ 1.6667,  2.0000,  2.3333,  2.6667],
          [ 2.3333,  2.6667,  3.0000,  3.3333],
          [ 3.0000,  3.3333,  3.6667,  4.0000]]]])
>>> # Try scaling the same data in a larger tensor
>>> input_3x3 = torch.zeros(3, 3).view(1, 1, 3, 3)
>>> input_3x3[:, :, :2, :2].copy_(input)
tensor([[[[ 1.,  2.],
          [ 3.,  4.]]]])
>>> input_3x3
tensor([[[[ 1.,  2.,  0.],
          [ 3.,  4.,  0.],
          [ 0.,  0.,  0.]]]])
>>> m = nn.Upsample(scale_factor=2, mode='bilinear')  # align_corners=False
>>> # Notice that values in top left corner are the same with the small input (except at boundary)
>>> m(input_3x3)
tensor([[[[ 1.0000,  1.2500,  1.7500,  1.5000,  0.5000,  0.0000],
          [ 1.5000,  1.7500,  2.2500,  1.8750,  0.6250,  0.0000],
          [ 2.5000,  2.7500,  3.2500,  2.6250,  0.8750,  0.0000],
          [ 2.2500,  2.4375,  2.8125,  2.2500,  0.7500,  0.0000],
          [ 0.7500,  0.8125,  0.9375,  0.7500,  0.2500,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000]]]])
>>> m = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)
>>> # Notice that values in top left corner are now changed
>>> m(input_3x3)
tensor([[[[ 1.0000,  1.4000,  1.8000,  1.6000,  0.8000,  0.0000],
          [ 1.8000,  2.2000,  2.6000,  2.2400,  1.1200,  0.0000],
          [ 2.6000,  3.0000,  3.4000,  2.8800,  1.4400,  0.0000],
          [ 2.4000,  2.7200,  3.0400,  2.5600,  1.2800,  0.0000],
          [ 1.2000,  1.3600,  1.5200,  1.2800,  0.6400,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000]]]])
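A minimal sketch verifying the floor-based shape formulas above; the input shape and scale factor below are illustrative choices:

```python
import math
import torch
import torch.nn as nn

# Each output spatial size is floor(in_size * scale_factor).
x = torch.randn(1, 3, 5, 7)
up = nn.Upsample(scale_factor=1.5, mode='nearest')
y = up(x)
expected = (1, 3, math.floor(5 * 1.5), math.floor(7 * 1.5))  # (1, 3, 7, 10)
```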
-
extra_repr
() → str¶ Set the extra representation of the module
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
-
forward
(input: torch.Tensor) → torch.Tensor¶ Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of calling this method directly, since the former takes care of running the registered hooks while the latter silently ignores them.
-
class
borch.nn.
UpsamplingBilinear2d
(size: Union[int, Tuple[int, int], None] = None, scale_factor: Union[float, Tuple[float, float], None] = None)¶ Applies a 2D bilinear upsampling to an input signal composed of several input channels.
To specify the scale, it takes either the size or the scale_factor as its constructor argument.
When size is given, it is the output size of the image (h, w).
- Parameters
size (int or Tuple[int, int], optional) – output spatial sizes
scale_factor (float or Tuple[float, float], optional) – multiplier for spatial size.
Warning
This class is deprecated in favor of
interpolate()
. It is equivalent to nn.functional.interpolate(..., mode='bilinear', align_corners=True).
- Shape:
Input: \((N, C, H_{in}, W_{in})\)
Output: \((N, C, H_{out}, W_{out})\) where
\[H_{out} = \left\lfloor H_{in} \times \text{scale\_factor} \right\rfloor\]
\[W_{out} = \left\lfloor W_{in} \times \text{scale\_factor} \right\rfloor\]
Examples:
>>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)
>>> input
tensor([[[[ 1.,  2.],
          [ 3.,  4.]]]])
>>> m = nn.UpsamplingBilinear2d(scale_factor=2)
>>> m(input)
tensor([[[[ 1.0000,  1.3333,  1.6667,  2.0000],
          [ 1.6667,  2.0000,  2.3333,  2.6667],
          [ 2.3333,  2.6667,  3.0000,  3.3333],
          [ 3.0000,  3.3333,  3.6667,  4.0000]]]])
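As a hedged check of the deprecation note above, the module's output should match nn.functional.interpolate with mode='bilinear' and align_corners=True (the input values are the same illustrative 2x2 tensor as in the example):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)
# Deprecated module vs. its documented functional equivalent.
a = nn.UpsamplingBilinear2d(scale_factor=2)(x)
b = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True)
```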
-
class
borch.nn.
UpsamplingNearest2d
(size: Union[int, Tuple[int, int], None] = None, scale_factor: Union[float, Tuple[float, float], None] = None)¶ Applies a 2D nearest neighbor upsampling to an input signal composed of several input channels.
To specify the scale, it takes either the size or the scale_factor as its constructor argument.
When size is given, it is the output size of the image (h, w).
- Parameters
size (int or Tuple[int, int], optional) – output spatial sizes
scale_factor (float or Tuple[float, float], optional) – multiplier for spatial size.
Warning
This class is deprecated in favor of
interpolate()
.
- Shape:
Input: \((N, C, H_{in}, W_{in})\)
Output: \((N, C, H_{out}, W_{out})\) where
\[H_{out} = \left\lfloor H_{in} \times \text{scale\_factor} \right\rfloor\]
\[W_{out} = \left\lfloor W_{in} \times \text{scale\_factor} \right\rfloor\]
Examples:
>>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)
>>> input
tensor([[[[ 1.,  2.],
          [ 3.,  4.]]]])
>>> m = nn.UpsamplingNearest2d(scale_factor=2)
>>> m(input)
tensor([[[[ 1.,  1.,  2.,  2.],
          [ 1.,  1.,  2.,  2.],
          [ 3.,  3.,  4.,  4.],
          [ 3.,  3.,  4.,  4.]]]])
-
class
borch.nn.
ZeroPad2d
(padding: Union[int, Tuple[int, int, int, int]])¶ Pads the input tensor boundaries with zero.
For N-dimensional padding, use
torch.nn.functional.pad()
.
- Parameters
padding (int, tuple) – the size of the padding. If an int, uses the same padding on all boundaries. If a 4-tuple, uses (\(\text{padding\_left}\), \(\text{padding\_right}\), \(\text{padding\_top}\), \(\text{padding\_bottom}\))
- Shape:
Input: \((N, C, H_{in}, W_{in})\) or \((C, H_{in}, W_{in})\).
Output: \((N, C, H_{out}, W_{out})\) or \((C, H_{out}, W_{out})\), where
\(H_{out} = H_{in} + \text{padding\_top} + \text{padding\_bottom}\)
\(W_{out} = W_{in} + \text{padding\_left} + \text{padding\_right}\)
Examples:
>>> m = nn.ZeroPad2d(2)
>>> input = torch.randn(1, 1, 3, 3)
>>> input
tensor([[[[-0.1678, -0.4418,  1.9466],
          [ 0.9604, -0.4219, -0.5241],
          [-0.9162, -0.5436, -0.6446]]]])
>>> m(input)
tensor([[[[ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000, -0.1678, -0.4418,  1.9466,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.9604, -0.4219, -0.5241,  0.0000,  0.0000],
          [ 0.0000,  0.0000, -0.9162, -0.5436, -0.6446,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000,  0.0000]]]])
>>> # using different paddings for different sides
>>> m = nn.ZeroPad2d((1, 1, 2, 0))
>>> m(input)
tensor([[[[ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
          [ 0.0000,  0.0000,  0.0000,  0.0000,  0.0000],
          [ 0.0000, -0.1678, -0.4418,  1.9466,  0.0000],
          [ 0.0000,  0.9604, -0.4219, -0.5241,  0.0000],
          [ 0.0000, -0.9162, -0.5436, -0.6446,  0.0000]]]])
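The shape formulas above can be sketched with an asymmetric padding; the padding and input sizes below are illustrative choices:

```python
import torch
import torch.nn as nn

# 4-tuple padding order is (left, right, top, bottom):
# H grows by top + bottom, W grows by left + right.
pad = nn.ZeroPad2d((1, 2, 3, 4))
x = torch.randn(1, 1, 5, 6)
y = pad(x)  # shape (1, 1, 5 + 3 + 4, 6 + 1 + 2) = (1, 1, 12, 9)
```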
-
borch.nn.
borchify_module
(module: torch.nn.modules.module.Module, rv_factory: Optional[callable] = None, posterior: borch.posterior.posterior.Posterior = None) → borch.module.Module¶ Take a
Module
instance and return a corresponding Borch equivalent.
- Parameters
module – The
Module
object to be ‘borchified’.
rv_factory –
A callable which, when passed a
Parameter
, returns a RandomVariable. If None, the default of borch.nn.borchify will be used.
posterior – The posterior which the borchified module should use. The default is Normal (see borch.posterior).
- Returns
A new module of type
borch.Module
.
Examples
>>> import torch
>>> linear = torch.nn.Linear(3, 3)  # create a linear module
>>> blinear = borchify_module(linear)
>>> type(blinear)
<class 'borch.nn.torch_proxies.Linear'>
-
borch.nn.
borchify_namespace
(module, get_rv_factory=<function default_rv_factory>, doc_prefix='', ignore=None, parent=<class 'torch.nn.modules.module.Module'>, borchify_submodules=True, get_extra_baseclasses=None)¶ Create a new module that contains Bayesian versions of the `torch.nn.Module`s.
Note that if the modules exist in torch.nn, they will be replaced with the equivalent modules from borch.nn instead of creating new classes.
- Parameters
module – a python module that contains `torch.nn.Module`s you want a Bayesian version of.
get_rv_factory – function that takes a string as an argument and returns a function that creates random variables. See borch.rv_factories.
doc_prefix (str) – Extra documentation to append to the new class.
ignore (List(str)) – List with string names of classes that should be skipped.
parent (torch.nn.Module) – a parent class you want to require the subclasses to inherit from.
borchify_submodules (bool) – whether submodules created during the initialization method should also be borchified. Defaults to True.
- Returns
A new python module where the torch.nn.Modules now also inherit from borch.Module.
Examples
>>> import torch
>>> bnn = borchify_namespace(torch.nn)
>>> blinear = bnn.Linear(1, 2)
>>> type(blinear)
<class 'borch.nn.torch_proxies.Linear'>
-
borch.nn.
borchify_network
(module: torch.nn.modules.module.Module, rv_factory: Optional[callable] = None, posterior_creator: callable = None, cache: dict = None) → borch.module.Module¶ Borchify a whole network. This applies
borchify_module
recursively on all modules within a network.
- Parameters
module – The network to be borchified.
rv_factory –
A callable which, when passed a
Parameter
, returns a RandomVariable. If None, the default of borch.nn.borchify will be used.
posterior_creator – A callable which creates a posterior. This will be used to create a new posterior for each module in the network.
cache – a mapping from id(torch module) -> ppl module, used to prevent double borchification and recursion (NB recursive ``Module``s are actually not supported by PyTorch).
Todo
Specify which modules should be borchified.
Specify what random variable factories should be used where.
Notes
Double use and recursion are respected. For example, if a nested module appears in multiple locations in the original network, then the borchified version also uses the same borchified module in those locations.
- Returns
A new borchified network.
Examples
>>> import torch
>>>
>>> class Net(Module):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.linear = torch.nn.Linear(3, 3)
...         # add a nested module
...         self.linear.add_module("nested", torch.nn.Linear(3, 3))
...         self.sigmoid = torch.nn.Sigmoid()
...         self.linear2 = torch.nn.Linear(3, 3)
...
>>> net = Net()
>>> bnet = borchify_network(net)
>>> type(bnet)
<class 'borch.nn.borchify.PICKLE_CACHE.Net'>
>>> type(bnet.linear)
<class 'borch.nn.torch_proxies.Linear'>