deepmd.pt.model.network.network

Module Contents

Classes

Dropout

Dropout layer with a fixed drop probability p.

Identity

Identity mapping that returns its input unchanged.

DropPath

Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).

ResidualLinear

Linear transformation with an optional residual connection to the input.

TypeFilter

Per-type embedding network that decodes the descriptor matrix for each atom.

SimpleLinear

Linear transformation X*W + b with optional timestep and activation.

Linear

Applies a linear transformation to the incoming data: \(y = xA^T + b\).

Transition

Feed-forward transition block with dropout.

Embedding

A simple lookup table that stores embeddings of a fixed dictionary and size.

NonLinearHead

Non-linear projection head (linear, activation, linear).

NonLinear

Small non-linear network with an optional hidden layer.

MaskLMHead

Head for masked language modeling.

ResidualDeep

Deep residual fitting network mapping per-atom embeddings to per-atom outputs.

TypeEmbedNet

Type embedding network mapping atom types to embedding vectors.

TypeEmbedNetConsistent

Type embedding network that is consistent with other backends.

GaussianKernel

Gaussian basis-kernel expansion of pairwise distances.

GaussianEmbedding

Gaussian-based embedding of atomic and edge features.

NeighborWiseAttention

Stack of neighbor-wise attention layers over the neighbor embedding matrix G.

NeighborWiseAttentionLayer

A single neighbor-wise attention layer.

GatedSelfAttetion

Gated self-attention over neighbor representations.

LocalSelfMultiheadAttention

Local multi-head self-attention over neighbor lists.

NodeTaskHead

Attention head mapping atomic and pair representations to per-node vector outputs.

EnergyHead

Projection head mapping per-atom features to an energy output.

OuterProduct

Outer-product update of pair representations from atomic features.

Attention

Multi-head attention with optional gating and dropout.

AtomAttention

Multi-head attention over atoms with a pair-representation bias.

TriangleMultiplication

Triangular multiplicative update of the pair representation.

EvoformerEncoderLayer

A single Evoformer encoder layer (attention plus feed-forward).

Evoformer2bEncoder

Evoformer encoder of atomic and pair representations.

Evoformer3bEncoderLayer

A single layer of the Evoformer3bEncoder.

Evoformer3bEncoder

Evoformer encoder of atomic and pair representations with triangle updates.

Functions

Tensor(*shape)

softmax_dropout(input_x, dropout_prob[, is_training, ...])

checkpoint_sequential(functions, input_x[, enabled])

gaussian(x, mean, std)

deepmd.pt.model.network.network.Tensor(*shape)[source]
class deepmd.pt.model.network.network.Dropout(p)[source]

Bases: torch.nn.Module

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Boolean represents whether this module is in training or evaluation mode.

forward(x, inplace: bool = False)[source]
class deepmd.pt.model.network.network.Identity[source]

Bases: torch.nn.Module

forward(x)[source]
class deepmd.pt.model.network.network.DropPath(prob=None)[source]

Bases: torch.nn.Module

Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).

forward(x)[source]
extra_repr() str[source]

Set the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
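
For reference, the standard stochastic-depth computation that DropPath implements can be sketched as follows; this is an illustrative re-implementation of the well-known pattern, not necessarily this class's exact code:

import torch

def drop_path_sketch(x: torch.Tensor, prob: float, training: bool) -> torch.Tensor:
    # Zero out whole samples with probability `prob` and rescale the
    # survivors so the expected value of the output matches the input.
    if prob == 0.0 or not training:
        return x
    keep_prob = 1.0 - prob
    # one Bernoulli draw per sample, broadcast over all remaining dims
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)
    mask = x.new_empty(shape).bernoulli_(keep_prob)
    return x * mask / keep_prob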

deepmd.pt.model.network.network.softmax_dropout(input_x, dropout_prob, is_training=True, mask=None, bias=None, inplace=True)[source]
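
Reading from the signature, softmax_dropout plausibly fuses an optional additive bias, a masked softmax over the last dimension, and dropout; a hedged sketch of that pattern (the mask convention here is an assumption):

import torch
import torch.nn.functional as F

def softmax_dropout_sketch(input_x, dropout_prob, is_training=True, mask=None, bias=None):
    x = input_x.contiguous()
    if bias is not None:
        x = x + bias  # e.g. an attention bias
    if mask is not None:
        # assumes True marks positions to keep
        x = x.masked_fill(~mask.to(torch.bool), float("-inf"))
    x = F.softmax(x, dim=-1)
    return F.dropout(x, p=dropout_prob, training=is_training)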
deepmd.pt.model.network.network.checkpoint_sequential(functions, input_x, enabled=True)[source]
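
checkpoint_sequential suggests activation checkpointing applied over a sequence of functions; a minimal sketch using torch.utils.checkpoint under that assumption:

import torch
import torch.utils.checkpoint

def checkpoint_sequential_sketch(functions, input_x, enabled=True):
    # Apply `functions` in order; when enabled and gradients are required,
    # recompute intermediate activations in backward instead of storing them.
    if not enabled or not torch.is_grad_enabled():
        for fn in functions:
            input_x = fn(input_x)
        return input_x
    for fn in functions:
        input_x = torch.utils.checkpoint.checkpoint(fn, input_x)
    return input_x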
class deepmd.pt.model.network.network.ResidualLinear(num_in, num_out, bavg=0.0, stddev=1.0, resnet_dt=False)[source]

Bases: torch.nn.Module

resnet: Final[int][source]
forward(inputs)[source]

Return X*W + b, with the input X added as a residual when the layer dimensions allow it.

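The conditional residual follows the usual DeePMD resnet convention; a hypothetical sketch, assuming the input is added back when shapes match and that the hidden value is scaled by a learned timestep when resnet_dt is enabled:

import torch

def residual_linear_sketch(x, W, b, dt=None):
    # hypothetical helper: hidden = x @ W + b, plus a conditional residual
    hidden = x @ W + b
    if hidden.shape == x.shape:
        return x + (hidden * dt if dt is not None else hidden)
    return hidden
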
class deepmd.pt.model.network.network.TypeFilter(offset, length, neuron, return_G=False, tebd_dim=0, use_tebd=False, tebd_mode='concat')[source]

Bases: torch.nn.Module

use_tebd: Final[bool][source]
tebd_mode: Final[str][source]
forward(inputs, atype_tebd: torch.Tensor | None = None, nlist_tebd: torch.Tensor | None = None)[source]

Calculate decoded embedding for each atom.

Parameters:
  • inputs – Descriptor matrix with shape [nframes*natoms[0], len_descriptor].

Returns:
  • torch.Tensor: Embedding contributed by this type filter, with shape [nframes*natoms[0], 4, self.neuron[-1]].
class deepmd.pt.model.network.network.SimpleLinear(num_in, num_out, bavg=0.0, stddev=1.0, use_timestep=False, activate=None, bias: bool = True)[source]

Bases: torch.nn.Module

use_timestep: Final[bool][source]
forward(inputs)[source]

Return X*W+b.

class deepmd.pt.model.network.network.Linear(d_in: int, d_out: int, bias: bool = True, init: str = 'default')[source]

Bases: torch.nn.Linear

Applies a linear transformation to the incoming data: \(y = xA^T + b\).

This module supports TensorFloat32.

On certain ROCm devices, when using float16 inputs this module will use different precision for backward.

Parameters:
  • in_features – size of each input sample

  • out_features – size of each output sample

  • bias – If set to False, the layer will not learn an additive bias. Default: True

Shape:
  • Input: \((*, H_{in})\) where \(*\) means any number of dimensions including none and \(H_{in} = \text{in\_features}\).

  • Output: \((*, H_{out})\) where all but the last dimension are the same shape as the input and \(H_{out} = \text{out\_features}\).

weight[source]

the learnable weights of the module of shape \((\text{out\_features}, \text{in\_features})\). The values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where \(k = \frac{1}{\text{in\_features}}\)

bias

the learnable bias of the module of shape \((\text{out\_features})\). If bias is True, the values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\) where \(k = \frac{1}{\text{in\_features}}\)

Examples:

>>> m = nn.Linear(20, 30)
>>> input = torch.randn(128, 20)
>>> output = m(input)
>>> print(output.size())
torch.Size([128, 30])
_trunc_normal_init(scale=1.0)[source]
_glorot_uniform_init()[source]
_zero_init(use_bias=True)[source]
_normal_init()[source]
class deepmd.pt.model.network.network.Transition(d_in, n, dropout=0.0)[source]

Bases: torch.nn.Module

_transition(x)[source]
forward(x: torch.Tensor) torch.Tensor[source]
class deepmd.pt.model.network.network.Embedding(num_embeddings: int, embedding_dim: int, padding_idx: int | None = None, dtype=torch.float64)[source]

Bases: torch.nn.Embedding

A simple lookup table that stores embeddings of a fixed dictionary and size.

This module is often used to store word embeddings and retrieve them using indices. The input to the module is a list of indices, and the output is the corresponding word embeddings.

Parameters:
  • num_embeddings (int) – size of the dictionary of embeddings

  • embedding_dim (int) – the size of each embedding vector

  • padding_idx (int, optional) – If specified, the entries at padding_idx do not contribute to the gradient; therefore, the embedding vector at padding_idx is not updated during training, i.e. it remains as a fixed “pad”. For a newly constructed Embedding, the embedding vector at padding_idx will default to all zeros, but can be updated to another value to be used as the padding vector.

  • max_norm (float, optional) – If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm.

  • norm_type (float, optional) – The p of the p-norm to compute for the max_norm option. Default 2.

  • scale_grad_by_freq (bool, optional) – If given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default False.

  • sparse (bool, optional) – If True, gradient w.r.t. weight matrix will be a sparse tensor. See Notes for more details regarding sparse gradients.

weight[source]

the learnable weights of the module of shape (num_embeddings, embedding_dim) initialized from \(\mathcal{N}(0, 1)\)

Type:

Tensor

Shape:
  • Input: \((*)\), IntTensor or LongTensor of arbitrary shape containing the indices to extract

  • Output: \((*, H)\), where * is the input shape and \(H=\text{embedding\_dim}\)

Note

Keep in mind that only a limited number of optimizers support sparse gradients: currently it’s optim.SGD (CUDA and CPU), optim.SparseAdam (CUDA and CPU) and optim.Adagrad (CPU)

Note

When max_norm is not None, Embedding’s forward method will modify the weight tensor in-place. Since tensors needed for gradient computations cannot be modified in-place, performing a differentiable operation on Embedding.weight before calling Embedding’s forward method requires cloning Embedding.weight when max_norm is not None. For example:

n, d, m = 3, 5, 7
embedding = nn.Embedding(n, d, max_norm=1.0)
W = torch.randn((m, d), requires_grad=True)
idx = torch.tensor([1, 2])
a = embedding.weight.clone() @ W.t()  # weight must be cloned for this to be differentiable
b = embedding(idx) @ W.t()  # modifies weight in-place
out = (a.unsqueeze(0) + b.unsqueeze(1))
loss = out.sigmoid().prod()
loss.backward()

Examples:

>>> # an Embedding module containing 10 tensors of size 3
>>> embedding = nn.Embedding(10, 3)
>>> # a batch of 2 samples of 4 indices each
>>> input = torch.LongTensor([[1, 2, 4, 5], [4, 3, 2, 9]])
>>> # xdoctest: +IGNORE_WANT("non-deterministic")
>>> embedding(input)
tensor([[[-0.0251, -1.6902,  0.7172],
         [-0.6431,  0.0748,  0.6969],
         [ 1.4970,  1.3448, -0.9685],
         [-0.3677, -2.7265, -0.1685]],

        [[ 1.4970,  1.3448, -0.9685],
         [ 0.4362, -0.4004,  0.9400],
         [-0.6431,  0.0748,  0.6969],
         [ 0.9124, -2.3616,  1.1151]]])

>>> # example with padding_idx
>>> embedding = nn.Embedding(10, 3, padding_idx=0)
>>> input = torch.LongTensor([[0, 2, 0, 5]])
>>> embedding(input)
tensor([[[ 0.0000,  0.0000,  0.0000],
         [ 0.1535, -2.0309,  0.9315],
         [ 0.0000,  0.0000,  0.0000],
         [-0.1655,  0.9897,  0.0635]]])

>>> # example of changing `pad` vector
>>> padding_idx = 0
>>> embedding = nn.Embedding(3, 3, padding_idx=padding_idx)
>>> embedding.weight
Parameter containing:
tensor([[ 0.0000,  0.0000,  0.0000],
        [-0.7895, -0.7089, -0.0364],
        [ 0.6778,  0.5803,  0.2678]], requires_grad=True)
>>> with torch.no_grad():
...     embedding.weight[padding_idx] = torch.ones(3)
>>> embedding.weight
Parameter containing:
tensor([[ 1.0000,  1.0000,  1.0000],
        [-0.7895, -0.7089, -0.0364],
        [ 0.6778,  0.5803,  0.2678]], requires_grad=True)
_normal_init(std=0.02)[source]
class deepmd.pt.model.network.network.NonLinearHead(input_dim, out_dim, activation_fn, hidden=None)[source]

Bases: torch.nn.Module

forward(x)[source]
class deepmd.pt.model.network.network.NonLinear(input, output_size, hidden=None)[source]

Bases: torch.nn.Module

forward(x)[source]
zero_init()[source]
class deepmd.pt.model.network.network.MaskLMHead(embed_dim, output_dim, activation_fn, weight=None)[source]

Bases: torch.nn.Module

Head for masked language modeling.

forward(features, masked_tokens: torch.Tensor | None = None, **kwargs)[source]
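
MaskLMHead mirrors the usual masked-LM projection head (dense layer, activation, layer norm, then a projection that may share weights with the input embedding, as in fairseq's RoBERTa). A sketch under that assumption, with hypothetical layer names:

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskLMHeadSketch(nn.Module):
    def __init__(self, embed_dim, output_dim, weight=None):
        super().__init__()
        self.dense = nn.Linear(embed_dim, embed_dim)
        self.layer_norm = nn.LayerNorm(embed_dim)
        if weight is None:
            weight = nn.Linear(embed_dim, output_dim, bias=False).weight
        self.weight = weight  # optionally tied to the input embedding matrix
        self.bias = nn.Parameter(torch.zeros(output_dim))

    def forward(self, features, masked_tokens=None):
        if masked_tokens is not None:
            # project only the masked positions to save compute
            features = features[masked_tokens, :]
        x = self.layer_norm(F.gelu(self.dense(features)))
        return F.linear(x, self.weight) + self.bias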
class deepmd.pt.model.network.network.ResidualDeep(type_id, embedding_width, neuron, bias_atom_e, out_dim=1, resnet_dt=False)[source]

Bases: torch.nn.Module

forward(inputs)[source]

Calculate decoded embedding for each atom.

Parameters:
  • inputs – Embedding net output per atom, with shape [nframes*nloc, self.embedding_width].

Returns:
  • torch.Tensor: Output layer with shape [nframes*nloc, self.neuron[-1]].
class deepmd.pt.model.network.network.TypeEmbedNet(type_nums, embed_dim, bavg=0.0, stddev=1.0, precision='default')[source]

Bases: torch.nn.Module

forward(atype)[source]
Parameters:

atype – Type of each input, [nframes, nloc] or [nframes, nloc, nnei].

Returns:
type_embedding: Type embedding of each atom, with the embedding dimension appended to the shape of atype.
share_params(base_class, shared_level, resume=False)[source]

Share the parameters of self with base_class at the given shared_level during multitask training. If not starting from a checkpoint (resume is False), some separate parameters (e.g. mean and stddev) will be recalculated across the different classes.

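A hedged usage sketch based on the documented signatures (sizes are illustrative; the expected output shape is an assumption):

import torch
from deepmd.pt.model.network.network import TypeEmbedNet

type_embed = TypeEmbedNet(type_nums=2, embed_dim=8)
atype = torch.tensor([[0, 1, 1, 0]])  # [nframes, nloc]
emb = type_embed(atype)               # expected shape [nframes, nloc, 8]
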
class deepmd.pt.model.network.network.TypeEmbedNetConsistent(*, ntypes: int, neuron: List[int], resnet_dt: bool = False, activation_function: str = 'tanh', precision: str = 'default', trainable: bool = True, seed: int | None = None, padding: bool = False)[source]

Bases: torch.nn.Module

Type embedding network that is consistent with other backends.

Parameters:
ntypes : int

Number of atom types.

neuron : list[int]

Number of neurons in each hidden layer of the embedding net.

resnet_dt : bool

Whether to use a timestep in the resnet construction: y = x + dt * phi(Wx + b).

activation_function : str

The activation function in the embedding net. Supported options are “relu”, “tanh”, “none”, “linear”, “softplus”, “sigmoid”, “relu6”, “gelu”, “gelu_tf”.

precision : str

The precision of the embedding net parameters. Supported options are “float32”, “default”, “float16”, “float64”.

trainable : bool

Whether the weights of the embedding net are trainable.

seed : int, optional

Random seed for initializing the network parameters.

padding : bool

Whether to concatenate zero padding to the output, as the default embedding of the empty type.

forward(device: torch.device)[source]

Calculate the type embedding network.

Returns:
type_embedding : torch.Tensor

The computed type embedding.

classmethod deserialize(data: dict)[source]

Deserialize the model.

Parameters:
data : dict

The serialized data

Returns:
TypeEmbedNetConsistent

The deserialized model

serialize() dict[source]

Serialize the model.

Returns:
dict

The serialized data
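
A short usage sketch based on the documented constructor and forward signature (the output shape is an assumption following the padding description):

import torch
from deepmd.pt.model.network.network import TypeEmbedNetConsistent

te = TypeEmbedNetConsistent(ntypes=3, neuron=[8, 16], padding=True)
emb = te(torch.device("cpu"))
# one embedding row per type; padding=True appends a zero row,
# so emb.shape is expected to be [ntypes + 1, neuron[-1]]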

deepmd.pt.model.network.network.gaussian(x, mean, std: float)[source]
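
gaussian presumably evaluates a normalized Gaussian density, the building block for GaussianKernel below; a sketch under that assumption:

import math

import torch

def gaussian_sketch(x: torch.Tensor, mean: torch.Tensor, std: float) -> torch.Tensor:
    # value of the Gaussian pdf N(mean, std^2) at x
    norm = math.sqrt(2 * math.pi) * std
    return torch.exp(-0.5 * (((x - mean) / std) ** 2)) / norm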
class deepmd.pt.model.network.network.GaussianKernel(K=128, num_pair=512, std_width=1.0, start=0.0, stop=9.0)[source]

Bases: torch.nn.Module

forward(x, atom_pair)[source]
class deepmd.pt.model.network.network.GaussianEmbedding(rcut, kernel_num, num_pair, embed_dim, pair_embed_dim, sel, ntypes, atomic_sum_gbf)[source]

Bases: torch.nn.Module

forward(coord_selected, atom_feature, edge_type_2dim, edge_feature)[source]

Calculate decoded embedding for each atom.

Parameters:
  • coord_selected – Clustered atom coordinates with shape [nframes*nloc, natoms, 3].

  • atom_feature – Previously calculated atomic features with shape [nframes*nloc, natoms, embed_dim].

  • edge_type_2dim – Edge index for gbf calculation with shape [nframes*nloc, natoms, natoms, 2].

  • edge_feature – Previously calculated edge features with shape [nframes*nloc, natoms, natoms, pair_dim].

Returns:
  • atom_feature: Updated atomic features with shape [nframes*nloc, natoms, embed_dim].

  • attn_bias: Updated edge features, used as attention bias, with shape [nframes*nloc, natoms, natoms, pair_dim].

  • delta_pos: Delta positions for force/vector prediction with shape [nframes*nloc, natoms, natoms, 3].
class deepmd.pt.model.network.network.NeighborWiseAttention(layer_num, nnei, embed_dim, hidden_dim, dotr=False, do_mask=False, post_ln=True, ffn=False, ffn_embed_dim=1024, activation='tanh', scaling_factor=1.0, head_num=1, normalize=True, temperature=None, smooth=True)[source]

Bases: torch.nn.Module

forward(input_G, nei_mask, input_r: torch.Tensor | None = None, sw: torch.Tensor | None = None)[source]
Parameters:
  • input_G – Input G, [nframes * nloc, nnei, embed_dim].

  • nei_mask – neighbor mask, [nframes * nloc, nnei].

• input_r – normalized radial, [nframes, nloc, nnei, 3].

Returns:
out: Output G, [nframes * nloc, nnei, embed_dim]
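
A hedged usage sketch built from the documented shapes (hyperparameters are illustrative; smooth is disabled here so that the optional sw switch tensor can be omitted, which is an assumption about the implementation):

import torch
from deepmd.pt.model.network.network import NeighborWiseAttention

attn = NeighborWiseAttention(layer_num=2, nnei=20, embed_dim=64, hidden_dim=128, smooth=False)
G = torch.randn(30, 20, 64)                      # [nframes * nloc, nnei, embed_dim]
nei_mask = torch.ones(30, 20, dtype=torch.bool)  # all neighbors real
out = attn(G, nei_mask)                          # same shape as G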
class deepmd.pt.model.network.network.NeighborWiseAttentionLayer(nnei, embed_dim, hidden_dim, dotr=False, do_mask=False, post_ln=True, ffn=False, ffn_embed_dim=1024, activation='tanh', scaling_factor=1.0, head_num=1, normalize=True, temperature=None, smooth=True)[source]

Bases: torch.nn.Module

ffn: Final[bool][source]
forward(x, nei_mask, input_r: torch.Tensor | None = None, sw: torch.Tensor | None = None)[source]
class deepmd.pt.model.network.network.GatedSelfAttetion(nnei, embed_dim, hidden_dim, dotr=False, do_mask=False, scaling_factor=1.0, head_num=1, normalize=True, temperature=None, bias=True, smooth=True)[source]

Bases: torch.nn.Module

forward(query, nei_mask, input_r: torch.Tensor | None = None, sw: torch.Tensor | None = None, attnw_shift: float = 20.0)[source]
Parameters:
  • query – Input G, [nframes * nloc, nnei, embed_dim].

  • nei_mask – Neighbor mask, [nframes * nloc, nnei].

  • input_r – Normalized radial, [nframes, nloc, nnei, 3].

Returns:
out: Updated neighbor representation, [nframes * nloc, nnei, embed_dim].
class deepmd.pt.model.network.network.LocalSelfMultiheadAttention(feature_dim, attn_head, scaling_factor=1.0)[source]

Bases: torch.nn.Module

forward(query, attn_bias: torch.Tensor | None = None, nlist_mask: torch.Tensor | None = None, nlist: torch.Tensor | None = None, return_attn=True)[source]
class deepmd.pt.model.network.network.NodeTaskHead(embed_dim: int, pair_dim: int, num_head: int)[source]

Bases: torch.nn.Module

zero_init()[source]
forward(query: Tensor, pair: Tensor, delta_pos: Tensor, attn_mask: Tensor = None) Tensor[source]
class deepmd.pt.model.network.network.EnergyHead(input_dim, output_dim)[source]

Bases: torch.nn.Module

forward(x)[source]
class deepmd.pt.model.network.network.OuterProduct(d_atom, d_pair, d_hid=32)[source]

Bases: torch.nn.Module

_opm(a, b)[source]
forward(m: torch.Tensor, nlist: torch.Tensor, op_mask: float, op_norm: float) torch.Tensor[source]
class deepmd.pt.model.network.network.Attention(q_dim: int, k_dim: int, v_dim: int, head_dim: int, num_heads: int, gating: bool = False, dropout: float = 0.0)[source]

Bases: torch.nn.Module

forward(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor, bias: torch.Tensor, mask: torch.Tensor = None) torch.Tensor[source]
class deepmd.pt.model.network.network.AtomAttention(q_dim: int, k_dim: int, v_dim: int, pair_dim: int, head_dim: int, num_heads: int, gating: bool = False, dropout: float = 0.0)[source]

Bases: torch.nn.Module

forward(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor, nlist: torch.Tensor, pair: torch.Tensor, mask: torch.Tensor = None) torch.Tensor[source]
class deepmd.pt.model.network.network.TriangleMultiplication(d_pair, d_hid)[source]

Bases: torch.nn.Module

forward(z: torch.Tensor, mask: torch.Tensor | None = None) torch.Tensor[source]
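
TriangleMultiplication is named after the AlphaFold-style triangular multiplicative update of a pair representation; a generic sketch of that pattern (the gating and normalization details are illustrative, not this class's exact code):

import torch
import torch.nn as nn

class TriangleMultiplicationSketch(nn.Module):
    # "outgoing" triangular multiplicative update on z: [*, n, n, d_pair]
    def __init__(self, d_pair: int, d_hid: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_pair)
        self.proj_a = nn.Linear(d_pair, d_hid)
        self.proj_b = nn.Linear(d_pair, d_hid)
        self.gate = nn.Linear(d_pair, d_pair)
        self.out = nn.Linear(d_hid, d_pair)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        z = self.norm(z)
        a = self.proj_a(z)  # [*, n, n, d_hid]
        b = self.proj_b(z)
        # combine edge (i, k) with edge (j, k) over the shared node k
        x = torch.einsum("...ikd,...jkd->...ijd", a, b)
        return torch.sigmoid(self.gate(z)) * self.out(x)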
class deepmd.pt.model.network.network.EvoformerEncoderLayer(feature_dim: int = 768, ffn_dim: int = 2048, attn_head: int = 8, activation_fn: str = 'gelu', post_ln: bool = False)[source]

Bases: torch.nn.Module

forward(x, attn_bias: torch.Tensor | None = None, nlist_mask: torch.Tensor | None = None, nlist: torch.Tensor | None = None, return_attn=True)[source]
class deepmd.pt.model.network.network.Evoformer2bEncoder(nnei: int, layer_num: int = 6, attn_head: int = 8, atomic_dim: int = 1024, pair_dim: int = 100, feature_dim: int = 1024, ffn_dim: int = 2048, post_ln: bool = False, final_layer_norm: bool = True, final_head_layer_norm: bool = False, emb_layer_norm: bool = False, atomic_residual: bool = False, evo_residual: bool = False, residual_factor: float = 1.0, activation_function: str = 'gelu')[source]

Bases: torch.nn.Module

forward(atomic_rep, pair_rep, nlist, nlist_type, nlist_mask)[source]

Encode the atomic and pair representations.

Parameters:
  • atomic_rep – Atomic representation with shape [nframes, nloc, atomic_dim].

  • pair_rep – Pair representation with shape [nframes, nloc, nnei, pair_dim].

  • nlist – Neighbor list with shape [nframes, nloc, nnei].

  • nlist_type – Neighbor types with shape [nframes, nloc, nnei].

  • nlist_mask – Neighbor mask with shape [nframes, nloc, nnei]; False marks blank entries.

Returns:
  • atomic_rep: Atomic representation after encoder with shape [nframes, nloc, feature_dim].
  • transformed_atomic_rep: Transformed atomic representation after encoder with shape [nframes, nloc, atomic_dim].
  • pair_rep: Pair representation after encoder with shape [nframes, nloc, nnei, attn_head].
  • delta_pair_rep: Delta pair representation after encoder with shape [nframes, nloc, nnei, attn_head].
  • norm_x: Normalization loss of atomic_rep.
  • norm_delta_pair_rep: Normalization loss of delta_pair_rep.
class deepmd.pt.model.network.network.Evoformer3bEncoderLayer(nnei, embedding_dim: int = 768, pair_dim: int = 64, pair_hidden_dim: int = 32, ffn_embedding_dim: int = 3072, num_attention_heads: int = 8, dropout: float = 0.1, droppath_prob: float = 0.0, pair_dropout: float = 0.25, attention_dropout: float = 0.1, activation_dropout: float = 0.1, pre_ln: bool = True, tri_update: bool = True)[source]

Bases: torch.nn.Module

update_pair(x, pair, nlist, op_mask, op_norm)[source]
shared_dropout(x, shared_dim, dropout)[source]
forward(x: torch.Tensor, pair: torch.Tensor, nlist: torch.Tensor = None, attn_mask: torch.Tensor | None = None, pair_mask: torch.Tensor | None = None, op_mask: float = 1.0, op_norm: float = 1.0)[source]

Encode the atomic and pair representations.

Parameters:
  • x – Atomic representation with shape [ncluster, natoms, embed_dim].

  • pair – Pair representation with shape [ncluster, natoms, natoms, pair_dim].

  • attn_mask – Attention mask with shape [ncluster, head, natoms, natoms].

  • pair_mask – Neighbor mask with shape [ncluster, natoms, natoms].

class deepmd.pt.model.network.network.Evoformer3bEncoder(nnei, layer_num=6, attn_head=8, atomic_dim=768, pair_dim=64, pair_hidden_dim=32, ffn_embedding_dim=3072, dropout: float = 0.1, droppath_prob: float = 0.0, pair_dropout: float = 0.25, attention_dropout: float = 0.1, activation_dropout: float = 0.1, pre_ln: bool = True, tri_update: bool = True, **kwargs)[source]

Bases: torch.nn.Module

forward(x, pair, attn_mask=None, pair_mask=None, atom_mask=None)[source]

Encode the atomic and pair representations.

Parameters:
  • x – Atomic representation with shape [ncluster, natoms, atomic_dim].

  • pair – Pair representation with shape [ncluster, natoms, natoms, pair_dim].

  • attn_mask – Attention mask (with -inf for softmax) with shape [ncluster, head, natoms, natoms].

  • pair_mask – Pair mask (with 1 for real atom pair and 0 for padding) with shape [ncluster, natoms, natoms].

  • atom_mask – Atom mask (with 1 for real atom and 0 for padding) with shape [ncluster, natoms].

Returns:
x: Atomic representation with shape [ncluster, natoms, atomic_dim].
pair: Pair representation with shape [ncluster, natoms, natoms, pair_dim].