deepmd.pt.model.descriptor.se_atten#
Classes#
- DescrptBlockSeAtten: The building block of descriptor.
- NeighborGatedAttention
- NeighborGatedAttentionLayer
- GatedAttentionLayer
Functions#
- tabulate_fusion_se_atten
Module Contents#
- deepmd.pt.model.descriptor.se_atten.tabulate_fusion_se_atten(argument0: torch.Tensor, argument1: torch.Tensor, argument2: torch.Tensor, argument3: torch.Tensor, argument4: torch.Tensor, argument5: int, argument6: bool) → list[torch.Tensor] [source]#
- class deepmd.pt.model.descriptor.se_atten.DescrptBlockSeAtten(rcut: float, rcut_smth: float, sel: list[int] | int, ntypes: int, neuron: list = [25, 50, 100], axis_neuron: int = 16, tebd_dim: int = 8, tebd_input_mode: str = 'concat', set_davg_zero: bool = True, attn: int = 128, attn_layer: int = 2, attn_dotr: bool = True, attn_mask: bool = False, activation_function='tanh', precision: str = 'float64', resnet_dt: bool = False, scaling_factor=1.0, normalize=True, temperature=None, smooth: bool = True, type_one_side: bool = False, exclude_types: list[tuple[int, int]] = [], env_protection: float = 0.0, trainable_ln: bool = True, ln_eps: float | None = 1e-05, seed: int | list[int] | None = None, type: str | None = None)[source]#
Bases:
deepmd.pt.model.descriptor.descriptor.DescriptorBlock
The building block of descriptor. Given the atomic coordinates, atomic types, and neighbor list, calculate the new descriptor.
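A minimal construction sketch using only parameters shown in the signature above; the numeric values (cutoffs, neighbor count, number of types) are illustrative placeholders, not recommended settings.

```python
from deepmd.pt.model.descriptor.se_atten import DescrptBlockSeAtten

# Illustrative values only; rcut/rcut_smth/sel/ntypes must match your system.
block = DescrptBlockSeAtten(
    rcut=6.0,              # neighbor cutoff radius
    rcut_smth=0.5,         # radius where the switch function starts to smooth
    sel=120,               # number of selected neighbors
    ntypes=2,              # number of atom types
    neuron=[25, 50, 100],  # embedding network sizes
    axis_neuron=16,
    attn=128,              # attention hidden dimension
    attn_layer=2,          # number of attention layers
)
```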
- get_rcut_smth() → float [source]#
Returns the radius where the neighbor information starts to smoothly decay to 0.
- get_dim_rot_mat_1() → int [source]#
Returns the first dimension of the rotation matrix. The rotation is of shape dim_1 x 3.
- mixed_types() → bool [source]#
If true, the descriptor (1) assumes the total number of atoms is aligned across frames and (2) requires a neighbor list that does not distinguish different atomic types.
If false, the descriptor (1) assumes the total number of atoms of each atom type is aligned across frames and (2) requires a neighbor list that distinguishes different atomic types.
- compute_input_stats(merged: Callable[[], list[dict]] | list[dict], path: deepmd.utils.path.DPPath | None = None) → None [source]#
Compute the input statistics (e.g. mean and stddev) for the descriptors from packed data.
- Parameters:
- merged : Union[Callable[[], list[dict]], list[dict]]
  - list[dict]: A list of data samples from various data systems. Each element, merged[i], is a data dictionary containing torch.Tensor entries originating from the i-th data system.
  - Callable[[], list[dict]]: A lazy function that returns data samples in the above format only when needed. Since the sampling process can be slow and memory-intensive, the lazy function helps by only sampling once.
- path : Optional[DPPath]
The path to the stat file.
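A hedged sketch of the two accepted forms of merged. The dictionary keys and tensor shapes inside the sampler are placeholders, not the exact keys expected by the deepmd data pipeline, and `block` refers to the DescrptBlockSeAtten instance constructed in the sketch above.

```python
import torch

def sampled_func() -> list[dict]:
    # Hypothetical lazy sampler: one dict of torch.Tensor entries per data system.
    # The keys below are placeholders; real samples come from the data pipeline.
    return [
        {"coord": torch.zeros(1, 3, 3), "atype": torch.zeros(1, 3, dtype=torch.long)},
    ]

# Lazy form: sampling happens only once, when the statistics are computed.
block.compute_input_stats(sampled_func, path=None)
# Eager form would pass the list directly: block.compute_input_stats(sampled_func())
```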
- get_stats() → dict[str, deepmd.utils.env_mat_stat.StatItem] [source]#
Get the statistics of the descriptor.
- forward(nlist: torch.Tensor, extended_coord: torch.Tensor, extended_atype: torch.Tensor, extended_atype_embd: torch.Tensor | None = None, mapping: torch.Tensor | None = None, type_embedding: torch.Tensor | None = None)[source]#
Compute the descriptor.
- Parameters:
- nlist
The neighbor list. shape: nf x nloc x nnei
- extended_coord
The extended coordinates of atoms. shape: nf x (nall x 3)
- extended_atype
The extended atom types. shape: nf x nall
- extended_atype_embd
The extended type embedding of atoms. shape: nf x nall x tebd_dim
- mapping
The index mapping, not required by this descriptor.
- type_embedding
Full type embeddings. shape: (ntypes+1) x nt. Required when stripped type embeddings are used.
- Returns:
result
The descriptor. shape: nf x nloc x (ng x axis_neuron)
g2
The rotationally invariant pair-particle representation. shape: nf x nloc x nnei x ng
h2
The rotationally equivariant pair-particle representation. shape: nf x nloc x nnei x 3
gr
The rotationally equivariant and permutationally invariant single particle representation. shape: nf x nloc x ng x 3
sw
The smooth switch function. shape: nf x nloc x nnei
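A shape-oriented sketch of a forward call, following the shapes documented above and reusing the `block` constructed earlier. The tensors are random placeholders rather than a physical configuration; in practice nlist, extended_coord, and extended_atype come from a neighbor-list build, and padded neighbor slots are assumed here to be marked with -1.

```python
import torch

nf, nloc, nall, nnei = 1, 3, 6, 120   # illustrative sizes; nnei must match sum(sel)
tebd_dim = 8                          # must match the block's tebd_dim

nlist = torch.full((nf, nloc, nnei), -1, dtype=torch.long)  # -1 marks padded neighbors
nlist[0, 0, :2] = torch.tensor([1, 2])                      # atom 0 sees atoms 1 and 2
nlist[0, 1, :2] = torch.tensor([0, 2])
nlist[0, 2, :2] = torch.tensor([0, 1])

extended_coord = torch.rand(nf, nall * 3, dtype=torch.float64) * 6.0
extended_atype = torch.zeros(nf, nall, dtype=torch.long)
extended_atype_embd = torch.rand(nf, nall, tebd_dim, dtype=torch.float64)

result, g2, h2, gr, sw = block(
    nlist,
    extended_coord,
    extended_atype,
    extended_atype_embd=extended_atype_embd,
)
# result: nf x nloc x (ng x axis_neuron), sw: nf x nloc x nnei (see Returns above)
```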
- class deepmd.pt.model.descriptor.se_atten.NeighborGatedAttention(layer_num: int, nnei: int, embed_dim: int, hidden_dim: int, dotr: bool = False, do_mask: bool = False, scaling_factor: float = 1.0, normalize: bool = True, temperature: float | None = None, trainable_ln: bool = True, ln_eps: float = 1e-05, smooth: bool = True, precision: str = DEFAULT_PRECISION, seed: int | list[int] | None = None)[source]#
Bases:
torch.nn.Module
- forward(input_G, nei_mask, input_r: torch.Tensor | None = None, sw: torch.Tensor | None = None)[source]#
Compute the multi-layer gated self-attention.
- Parameters:
- input_G
inputs with shape: (nf x nloc) x nnei x embed_dim.
- nei_mask
neighbor mask, with paddings being 0. shape: (nf x nloc) x nnei.
- input_r
normalized radial. shape: (nf x nloc) x nnei x 3.
- sw
The smooth switch function. shape: nf x nloc x nnei
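A hedged sketch of building the attention stack and calling it with the documented shapes; the sizes and inputs are illustrative placeholders, and the default precision is assumed to be float64.

```python
import torch
from deepmd.pt.model.descriptor.se_atten import NeighborGatedAttention

nf, nloc, nnei, embed_dim = 1, 4, 20, 32   # illustrative sizes
attn = NeighborGatedAttention(layer_num=2, nnei=nnei, embed_dim=embed_dim, hidden_dim=64)

input_G = torch.rand(nf * nloc, nnei, embed_dim, dtype=torch.float64)
nei_mask = torch.ones(nf * nloc, nnei, dtype=torch.bool)   # paddings would be 0
input_r = torch.rand(nf * nloc, nnei, 3, dtype=torch.float64)
input_r = input_r / input_r.norm(dim=-1, keepdim=True)     # normalized radial
sw = torch.ones(nf * nloc, nnei, dtype=torch.float64)      # smooth switch values

out = attn(input_G, nei_mask, input_r=input_r, sw=sw)
```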
- classmethod deserialize(data: dict) → NeighborGatedAttention [source]#
Deserialize the networks from a dict.
- Parameters:
- data : dict
The dict to deserialize from.
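A minimal round-trip sketch. It assumes the class also provides a matching serialize() method (not documented in this section) that produces the dict consumed by deserialize().

```python
from deepmd.pt.model.descriptor.se_atten import NeighborGatedAttention

attn = NeighborGatedAttention(layer_num=2, nnei=20, embed_dim=32, hidden_dim=64)
# Assumption: serialize() returns the plain dict that deserialize() rebuilds from.
restored = NeighborGatedAttention.deserialize(attn.serialize())
```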
- class deepmd.pt.model.descriptor.se_atten.NeighborGatedAttentionLayer(nnei: int, embed_dim: int, hidden_dim: int, dotr: bool = False, do_mask: bool = False, scaling_factor: float = 1.0, normalize: bool = True, temperature: float | None = None, smooth: bool = True, trainable_ln: bool = True, ln_eps: float = 1e-05, precision: str = DEFAULT_PRECISION, seed: int | list[int] | None = None)[source]#
Bases:
torch.nn.Module
- classmethod deserialize(data: dict) → NeighborGatedAttentionLayer [source]#
Deserialize the networks from a dict.
- Parameters:
- data : dict
The dict to deserialize from.
- class deepmd.pt.model.descriptor.se_atten.GatedAttentionLayer(nnei: int, embed_dim: int, hidden_dim: int, num_heads: int = 1, dotr: bool = False, do_mask: bool = False, scaling_factor: float = 1.0, normalize: bool = True, temperature: float | None = None, bias: bool = True, smooth: bool = True, precision: str = DEFAULT_PRECISION, seed: int | list[int] | None = None)[source]#
Bases:
torch.nn.Module
- forward(query, nei_mask, input_r: torch.Tensor | None = None, sw: torch.Tensor | None = None, attnw_shift: float = 20.0)[source]#
Compute the multi-head gated self-attention.
- Parameters:
- query
inputs with shape: (nf x nloc) x nnei x embed_dim.
- nei_mask
neighbor mask, with paddings being 0. shape: (nf x nloc) x nnei.
- input_r
normalized radial. shape: (nf x nloc) x nnei x 3.
- sw
The smooth switch function. shape: (nf x nloc) x nnei
- attnw_shift : float
The attention weight shift to preserve smoothness when doing padding before softmax.
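A single-layer counterpart of the sketch above, showing the multi-head layer and the attnw_shift argument; the sizes and inputs are again illustrative placeholders.

```python
import torch
from deepmd.pt.model.descriptor.se_atten import GatedAttentionLayer

nf, nloc, nnei, embed_dim = 1, 4, 20, 32
layer = GatedAttentionLayer(nnei=nnei, embed_dim=embed_dim, hidden_dim=64, num_heads=2)

query = torch.rand(nf * nloc, nnei, embed_dim, dtype=torch.float64)
nei_mask = torch.ones(nf * nloc, nnei, dtype=torch.bool)   # paddings would be 0
sw = torch.ones(nf * nloc, nnei, dtype=torch.float64)      # smooth switch values

# attnw_shift keeps the padded (masked) entries smooth before the softmax.
out = layer(query, nei_mask, sw=sw, attnw_shift=20.0)
```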
- classmethod deserialize(data: dict) → GatedAttentionLayer [source]#
Deserialize the networks from a dict.
- Parameters:
- data : dict
The dict to deserialize from.