deepmd.pt.model.descriptor.se_atten

Module Contents

Classes

DescrptBlockSeAtten

The building block of the descriptor.

NeighborGatedAttention

Base class for all neural network modules.

NeighborGatedAttentionLayer

Base class for all neural network modules.

GatedAttentionLayer

Base class for all neural network modules.

class deepmd.pt.model.descriptor.se_atten.DescrptBlockSeAtten(rcut: float, rcut_smth: float, sel: List[int] | int, ntypes: int, neuron: list = [25, 50, 100], axis_neuron: int = 16, tebd_dim: int = 8, tebd_input_mode: str = 'concat', set_davg_zero: bool = True, attn: int = 128, attn_layer: int = 2, attn_dotr: bool = True, attn_mask: bool = False, activation_function='tanh', precision: str = 'float64', resnet_dt: bool = False, scaling_factor=1.0, normalize=True, temperature=None, smooth: bool = True, type_one_side: bool = False, exclude_types: List[Tuple[int, int]] = [], env_protection: float = 0.0, trainable_ln: bool = True, ln_eps: float | None = 1e-05, type: str | None = None, old_impl: bool = False)[source]

Bases: deepmd.pt.model.descriptor.descriptor.DescriptorBlock

The building block of the descriptor. Given the input descriptor, together with the atomic coordinates, atomic types, and neighbor list, calculate the new descriptor.
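For orientation, a minimal construction sketch; the hyperparameter values are illustrative placeholders, not recommendations, and precision is pinned to float32 only so the later sketches can use torch's default dtype:

from deepmd.pt.model.descriptor.se_atten import DescrptBlockSeAtten

block = DescrptBlockSeAtten(
    rcut=6.0,              # cut-off radius
    rcut_smth=0.5,         # where the smooth switch function starts
    sel=120,               # number of selected neighbors within rcut
    ntypes=2,              # number of element types
    neuron=[25, 50, 100],  # embedding network widths
    axis_neuron=16,
    attn=128,              # attention representation dimension
    attn_layer=2,          # number of attention layers
    precision="float32",   # match torch's default dtype in the sketches below
)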

property dim_out[source]

Returns the output dimension of this descriptor.

property dim_in[source]

Returns the atomic input dimension of this descriptor.

property dim_emb[source]

Returns the output dimension of embedding.

get_rcut() float[source]

Returns the cut-off radius.

get_nsel() int[source]

Returns the number of selected atoms within the cut-off radius.

get_sel() List[int][source]

Returns the number of selected atoms for each type.

get_ntypes() int[source]

Returns the number of element types.

get_dim_in() int[source]

Returns the input dimension.

get_dim_out() int[source]

Returns the output dimension.

get_dim_emb() int[source]

Returns the output dimension of embedding.

__setitem__(key, value)[source]
__getitem__(key)[source]
mixed_types() bool[source]

If true, the descriptor 1. assumes the total number of atoms is aligned across frames; 2. requires a neighbor list that does not distinguish different atomic types.

If false, the descriptor 1. assumes the total number of atoms of each atom type is aligned across frames; 2. requires a neighbor list that distinguishes different atomic types.

compute_input_stats(merged: Callable[[], List[dict]] | List[dict], path: deepmd.utils.path.DPPath | None = None)[source]

Compute the input statistics (e.g. mean and stddev) for the descriptors from packed data.

Parameters:
merged : Union[Callable[[], List[dict]], List[dict]]
  • List[dict]: A list of data samples from various data systems.

    Each element, merged[i], is a data dictionary mapping string keys to torch.Tensor values originating from the i-th data system.

  • Callable[[], List[dict]]: A lazy function that returns data samples in the above format only when needed.

    Since the sampling process can be slow and memory-intensive, the lazy function helps by only sampling once.

path : Optional[DPPath]

The path to the stat file.
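A hedged sketch of the lazy-callable form of merged, reusing block from the construction sketch above; the dict keys and shapes are illustrative assumptions, not the authoritative data schema:

import torch

def lazy_merged():
    # Runs only when compute_input_stats actually needs the samples,
    # so the potentially slow sampling happens at most once.
    return [
        {
            "coord": torch.rand(10, 4, 3),                  # nframes x nloc x 3
            "atype": torch.zeros(10, 4, dtype=torch.long),  # nframes x nloc
        }
    ]

block.compute_input_stats(lazy_merged)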

get_stats() Dict[str, deepmd.utils.env_mat_stat.StatItem][source]

Get the statistics of the descriptor.

reinit_exclude(exclude_types: List[Tuple[int, int]] = [])[source]
forward(nlist: torch.Tensor, extended_coord: torch.Tensor, extended_atype: torch.Tensor, extended_atype_embd: torch.Tensor | None = None, mapping: torch.Tensor | None = None)[source]

Compute the descriptor.

Parameters:
nlist

The neighbor list. shape: nf x nloc x nnei

extended_coord

The extended coordinates of atoms. shape: nf x (nall x 3)

extended_atype

The extended atom types. shape: nf x nall

extended_atype_embd

The extended type embedding of atoms. shape: nf x nall x nt

mapping

The index mapping, not required by this descriptor.

Returns:
result

The descriptor. shape: nf x nloc x (ng x axis_neuron)

g2

The rotationally invariant pair-particle representation. shape: nf x nloc x nnei x ng

h2

The rotationally equivariant pair-particle representation. shape: nf x nloc x nnei x 3

gr

The rotationally equivariant and permutationally invariant single particle representation. shape: nf x nloc x ng x 3

sw

The smooth switch function. shape: nf x nloc x nnei
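A hedged shape sketch of a forward call, reusing block from the construction sketch above; this illustrates shapes only and is not a physically meaningful input. The type embedding shape assumes the default tebd_dim of 8:

import torch

nf, nloc, nall = 1, 4, 12
nnei = 120  # must match the block's sel
# All-zero indices keep the sketch simple; real neighbor lists pad
# missing neighbors (with -1 in this backend, to our understanding).
nlist = torch.zeros(nf, nloc, nnei, dtype=torch.long)
extended_coord = torch.rand(nf, nall * 3)
extended_atype = torch.zeros(nf, nall, dtype=torch.long)
extended_atype_embd = torch.rand(nf, nall, 8)  # assumed nf x nall x tebd_dim

result, g2, h2, gr, sw = block(
    nlist, extended_coord, extended_atype, extended_atype_embd
)
# result: nf x nloc x (ng x axis_neuron)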

class deepmd.pt.model.descriptor.se_atten.NeighborGatedAttention(layer_num: int, nnei: int, embed_dim: int, hidden_dim: int, dotr: bool = False, do_mask: bool = False, scaling_factor: float = 1.0, normalize: bool = True, temperature: float | None = None, trainable_ln: bool = True, ln_eps: float = 1e-05, smooth: bool = True, precision: str = DEFAULT_PRECISION)[source]

Bases: torch.nn.Module

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Boolean representing whether this module is in training or evaluation mode.

forward(input_G, nei_mask, input_r: torch.Tensor | None = None, sw: torch.Tensor | None = None)[source]

Compute the multi-layer gated self-attention.

Parameters:
input_G

inputs with shape: (nf x nloc) x nnei x embed_dim.

nei_mask

neighbor mask, with padding positions set to 0. shape: (nf x nloc) x nnei.

input_r

normalized radial vectors. shape: (nf x nloc) x nnei x 3.

sw

The smooth switch function. shape: nf x nloc x nnei
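A hedged standalone sketch of the attention stack; sizes are illustrative, smooth is disabled so the optional sw tensor can be omitted, and precision is pinned to torch's default dtype:

import torch

from deepmd.pt.model.descriptor.se_atten import NeighborGatedAttention

net = NeighborGatedAttention(
    layer_num=2, nnei=8, embed_dim=64, hidden_dim=128,
    smooth=False, precision="float32",
)
G = torch.rand(6, 8, 64)                       # (nf x nloc) x nnei x embed_dim
nei_mask = torch.ones(6, 8, dtype=torch.bool)  # True for real neighbors, False for padding
out = net(G, nei_mask)                         # expected to keep the shape of G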

__getitem__(key)[source]
__setitem__(key, value)[source]
serialize() dict[source]

Serialize the networks to a dict.

Returns:
dict

The serialized networks.

classmethod deserialize(data: dict) NeighborGatedAttention[source]

Deserialize the networks from a dict.

Parameters:
data : dict

The dict to deserialize from.
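serialize() and deserialize() together give a plain-dict round trip; a short sketch, reusing net from the sketch above:

state = net.serialize()  # a plain dict describing the network
restored = NeighborGatedAttention.deserialize(state)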

class deepmd.pt.model.descriptor.se_atten.NeighborGatedAttentionLayer(nnei: int, embed_dim: int, hidden_dim: int, dotr: bool = False, do_mask: bool = False, scaling_factor: float = 1.0, normalize: bool = True, temperature: float | None = None, smooth: bool = True, trainable_ln: bool = True, ln_eps: float = 1e-05, precision: str = DEFAULT_PRECISION)[source]

Bases: torch.nn.Module

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Boolean representing whether this module is in training or evaluation mode.

forward(x, nei_mask, input_r: torch.Tensor | None = None, sw: torch.Tensor | None = None)[source]
serialize() dict[source]

Serialize the networks to a dict.

Returns:
dict

The serialized networks.

classmethod deserialize(data: dict) NeighborGatedAttentionLayer[source]

Deserialize the networks from a dict.

Parameters:
data : dict

The dict to deserialize from.

class deepmd.pt.model.descriptor.se_atten.GatedAttentionLayer(nnei: int, embed_dim: int, hidden_dim: int, dotr: bool = False, do_mask: bool = False, scaling_factor: float = 1.0, normalize: bool = True, temperature: float | None = None, bias: bool = True, smooth: bool = True, precision: str = DEFAULT_PRECISION)[source]

Bases: torch.nn.Module

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Boolean representing whether this module is in training or evaluation mode.

forward(query, nei_mask, input_r: torch.Tensor | None = None, sw: torch.Tensor | None = None, attnw_shift: float = 20.0)[source]

Compute the gated self-attention.

Parameters:
query

inputs with shape: (nf x nloc) x nnei x embed_dim.

nei_mask

neighbor mask, with padding positions set to 0. shape: (nf x nloc) x nnei.

input_r

normalized radial vectors. shape: (nf x nloc) x nnei x 3.

sw

The smooth switch function. shape: (nf x nloc) x nnei

attnw_shift : float

The shift applied to the attention weights so that masking the padding before the softmax preserves smoothness; see the sketch below.
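A hedged sketch of the shift-scale-unshift smoothing this parameter refers to; names and shapes are illustrative, not a verbatim excerpt of the implementation:

import torch

nb, nnei, attnw_shift = 4, 8, 20.0
scores = torch.randn(nb, nnei, nnei)  # raw attention scores
sw = torch.rand(nb, nnei)             # smooth switch function values in [0, 1]

# Instead of a hard masked_fill(-inf), which is not smooth at the cut-off,
# shift the scores up, scale by the switch on both axes, and shift back down.
# Entries whose switch value is 0 land at -attnw_shift, giving a near-zero
# softmax weight while keeping the expression differentiable everywhere.
smoothed = (scores + attnw_shift) * sw[:, :, None] * sw[:, None, :] - attnw_shift
weights = torch.softmax(smoothed, dim=-1)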

serialize() dict[source]

Serialize the networks to a dict.

Returns:
dict

The serialized networks.

classmethod deserialize(data: dict) GatedAttentionLayer[source]

Deserialize the networks from a dict.

Parameters:
data : dict

The dict to deserialize from.