Bases: MessagePassing
The Chebyshev spectral graph convolutional operator from the “Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering” paper.

\[\mathbf{X}^{\prime} = \sum_{k=1}^{K} \mathbf{Z}^{(k)} \cdot \mathbf{\Theta}^{(k)},\]

where \(\mathbf{Z}^{(k)}\) is computed recursively by

\[\mathbf{Z}^{(1)} = \mathbf{X},\]
\[\mathbf{Z}^{(2)} = \mathbf{\hat{L}} \cdot \mathbf{X},\]
\[\mathbf{Z}^{(k)} = 2 \cdot \mathbf{\hat{L}} \cdot \mathbf{Z}^{(k-1)} - \mathbf{Z}^{(k-2)},\]

and \(\mathbf{\hat{L}}\) denotes the scaled and normalized Laplacian \(\frac{2\mathbf{L}}{\lambda_{\max}} - \mathbf{I}\).
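To make the recursion concrete, here is a minimal dense-tensor sketch of the computation (not the layer's actual sparse message-passing implementation); the tensors X, L_hat, and Theta below are hypothetical placeholders:

```python
import torch

# Hypothetical sizes: N nodes, F_in input features, F_out output features, filter size K.
N, F_in, F_out, K = 5, 8, 4, 3
X = torch.randn(N, F_in)                               # node features
L_hat = torch.randn(N, N)                              # stands in for 2L/lambda_max - I
Theta = [torch.randn(F_in, F_out) for _ in range(K)]   # one weight matrix per Chebyshev order

Z = [X]                        # Z^(1) = X
if K > 1:
    Z.append(L_hat @ X)        # Z^(2) = L_hat · X
for k in range(2, K):
    Z.append(2 * L_hat @ Z[k - 1] - Z[k - 2])   # Z^(k) = 2 · L_hat · Z^(k-1) - Z^(k-2)

X_out = sum(Z_k @ Theta_k for Z_k, Theta_k in zip(Z, Theta))   # X' = Σ_k Z^(k) Θ^(k)
print(X_out.shape)  # torch.Size([5, 4])
```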
in_channels (int) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method.
out_channels (int) – Size of each output sample.
K (int) – Chebyshev filter size \(K\).
normalization (str, optional) – The normalization scheme for the graph Laplacian (default: "sym"):
1. None: No normalization \(\mathbf{L} = \mathbf{D} - \mathbf{A}\)
2. "sym": Symmetric normalization \(\mathbf{L} = \mathbf{I} - \mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2}\)
3. "rw": Random-walk normalization \(\mathbf{L} = \mathbf{I} - \mathbf{D}^{-1} \mathbf{A}\)
lambda_max should be a torch.Tensor of size [num_graphs] in a mini-batch scenario and a scalar/zero-dimensional tensor when operating on single graphs. You can pre-compute lambda_max via the torch_geometric.transforms.LaplacianLambdaMax transform (see the sketch after this parameter list).
bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)
**kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
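The following sketch shows one way to construct the layer and pre-compute lambda_max via the LaplacianLambdaMax transform; the four-node graph and feature sizes are hypothetical:

```python
import torch
from torch_geometric.data import Data
from torch_geometric.transforms import LaplacianLambdaMax
from torch_geometric.nn import ChebConv

# Hypothetical undirected path graph with 4 nodes and 16 input features.
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
x = torch.randn(4, 16)
data = Data(x=x, edge_index=edge_index)

# Pre-compute the largest Laplacian eigenvalue, using the same normalization as the layer.
data = LaplacianLambdaMax(normalization='sym', is_undirected=True)(data)

conv = ChebConv(in_channels=16, out_channels=32, K=3, normalization='sym')
out = conv(data.x, data.edge_index, lambda_max=data.lambda_max)
print(out.shape)  # torch.Size([4, 32])
```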
input: node features \((|\mathcal{V}|, F_{in})\), edge indices \((2, |\mathcal{E}|)\), edge weights \((|\mathcal{E}|)\) (optional), batch vector \((|\mathcal{V}|)\) (optional), maximum lambda value \((|\mathcal{G}|)\) (optional)
output: node features \((|\mathcal{V}|, F_{out})\)
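As a shape check, here is a sketch of a forward call in a mini-batch scenario; the two toy graphs, edge weights, and per-graph lambda_max values are hypothetical:

```python
import torch
from torch_geometric.nn import ChebConv

# Hypothetical mini-batch of two graphs with 3 and 2 nodes, 16 input features each.
x = torch.randn(5, 16)                           # node features (|V|, F_in)
edge_index = torch.tensor([[0, 1, 1, 2, 3, 4],
                           [1, 0, 2, 1, 4, 3]])  # edge indices (2, |E|)
edge_weight = torch.ones(edge_index.size(1))     # edge weights (|E|,), optional
batch = torch.tensor([0, 0, 0, 1, 1])            # batch vector (|V|,), optional
lambda_max = torch.tensor([2.0, 2.0])            # maximum lambda per graph (|G|,), optional

conv = ChebConv(16, 32, K=2)
out = conv(x, edge_index, edge_weight, batch=batch, lambda_max=lambda_max)
print(out.shape)  # torch.Size([5, 32]) -> node features (|V|, F_out)
```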