
Linear

class torch.nn.Linear(in_features, out_features, bias=True, device=None, dtype=None)

Applies an affine linear transformation to the incoming data: y = xA^T + b.

This module supports TensorFloat32.

On certain ROCm devices, when using float16 inputs, this module will use different precision for backward.
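
As a quick illustration (not part of the reference text), the transformation can be reproduced by hand from the layer's weight and bias; m, x, and manual are made-up names for this sketch:

>>> import torch
>>> from torch import nn
>>> m = nn.Linear(4, 2)
>>> x = torch.randn(3, 4)
>>> manual = x @ m.weight.T + m.bias   # y = x A^T + b
>>> torch.allclose(m(x), manual)
True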

Parameters
  • in_features (int) – size of each input sample

  • out_features (int) – size of each output sample

  • bias (bool) – If set to False, the layer will not learn an additive bias. Default: True
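
For example (a minimal sketch reusing the imports above), passing bias=False leaves the bias parameter unset:

>>> m = nn.Linear(20, 30, bias=False)
>>> print(m.bias)
None
>>> nn.Linear(20, 30).bias.shape   # bias=True by default
torch.Size([30])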

Shape:
  • Input: (*, H_in) where * means any number of dimensions including none and H_in = in_features.

  • Output: (*, H_out) where all but the last dimension are the same shape as the input and H_out = out_features.
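
In other words, only the last input dimension must equal in_features; any leading dimensions, including none, are carried through unchanged (a small sketch reusing the imports above):

>>> m = nn.Linear(20, 30)
>>> m(torch.randn(20)).shape            # no batch dimensions
torch.Size([30])
>>> m(torch.randn(128, 20)).shape       # one batch dimension
torch.Size([128, 30])
>>> m(torch.randn(4, 128, 20)).shape    # two leading dimensions
torch.Size([4, 128, 30])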

Variables
  • weight (torch.Tensor) – the learnable weights of the module of shape (out_features, in_features). The values are initialized from U(-√k, √k), where k = 1/in_features

  • bias – the learnable bias of the module of shape (out_features). If bias is True, the values are initialized from U(-√k, √k) where k = 1/in_features
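
The bound √k equals 1/√in_features; here is a minimal check (not from the reference text, reusing the imports above) that freshly initialized values respect it:

>>> import math
>>> m = nn.Linear(20, 30)
>>> bound = math.sqrt(1 / m.in_features)   # sqrt(k) with k = 1 / in_features
>>> bool(m.weight.abs().max() <= bound)
True
>>> bool(m.bias.abs().max() <= bound)
True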

Examples:

>>> m = nn.Linear(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
extra_repr()

Return the extra representation of the module.

Return type
  str
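
This string is what appears inside the module's printed representation; for example (the exact contents may differ across versions):

>>> m = nn.Linear(20, 30)
>>> m.extra_repr()
'in_features=20, out_features=30, bias=True'
>>> print(m)
Linear(in_features=20, out_features=30, bias=True)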

forward(input)

Runs the forward pass.

Return type
  Tensor
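
In normal use the module is called directly, which dispatches to forward and also runs any registered hooks; a small sketch reusing the imports above:

>>> m = nn.Linear(20, 30)
>>> x = torch.randn(128, 20)
>>> y = m(x)                  # preferred over m.forward(x): also runs hooks
>>> y.shape
torch.Size([128, 30])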

reset_parameters()

Resets the parameters using the same initialization as in __init__.
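
A minimal sketch (not from the reference text): calling reset_parameters re-randomizes an existing layer in place using the same U(-√k, √k) scheme:

>>> m = nn.Linear(20, 30)
>>> old_weight = m.weight.clone()
>>> m.reset_parameters()
>>> torch.equal(old_weight, m.weight)
False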
