deepmd.pt.model.task

Submodules

Package Contents

Classes

FittingNetAttenLcc

Base class for all neural network modules.

DenoiseNet

Base class for all neural network modules.

DipoleFittingNet

Construct a dipole fitting net.

EnergyFittingNet

Construct a fitting net for energy.

EnergyFittingNetDirect

Base class for all neural network modules.

Fitting

Base class for all neural network modules.

PolarFittingNet

Construct a polar fitting net.

TypePredictNet

Base class for all neural network modules.

Attributes

BaseFitting

class deepmd.pt.model.task.FittingNetAttenLcc(embedding_width, bias_atom_e, pair_embed_dim, attention_heads, **kwargs)[source]

Bases: deepmd.pt.model.task.fitting.Fitting

Inherits the standard torch.nn.Module behavior through Fitting; see the Fitting entry below for the full module documentation.

forward(output, pair, delta_pos, atype, nframes, nloc)[source]
deepmd.pt.model.task.BaseFitting[source]
class deepmd.pt.model.task.DenoiseNet(feature_dim, ntypes, attn_head=8, prefactor=[0.5, 0.5], activation_function='gelu', **kwargs)[source]

Bases: deepmd.pt.model.task.fitting.Fitting

Inherits the standard torch.nn.Module behavior through Fitting; see the Fitting entry below for the full module documentation.

output_def()[source]

Returns the output def of the fitting net.

forward(pair_weights, diff, nlist_mask, features, sw, masked_tokens: torch.Tensor | None = None)[source]

Calculate the updated coordinates.

Parameters:
pair_weights

Input pair weights with shape [nframes, nloc, nnei, head].

diff

Input pair relative coordinates with shape [nframes, nloc, nnei, 3].

nlist_mask

Input nlist mask with shape [nframes, nloc, nnei].

Returns:
  • denoised_coord: Denoised updated coord with shape [nframes, nloc, 3].
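
A minimal call sketch for the method above. The shapes of pair_weights, diff, and nlist_mask follow the docstring; the shapes assumed for features and sw, and the constructor values, are illustrative assumptions not stated on this page:

import torch

from deepmd.pt.model.task import DenoiseNet

nframes, nloc, nnei, head, feature_dim = 2, 16, 20, 8, 64

net = DenoiseNet(feature_dim=feature_dim, ntypes=4, attn_head=head)

pair_weights = torch.rand(nframes, nloc, nnei, head)            # documented shape
diff = torch.rand(nframes, nloc, nnei, 3)                       # documented shape
nlist_mask = torch.ones(nframes, nloc, nnei, dtype=torch.bool)  # documented shape
features = torch.rand(nframes, nloc, feature_dim)               # assumed shape
sw = torch.rand(nframes, nloc, nnei)                            # assumed shape

# Per the docstring, the denoised coordinates in the output have
# shape [nframes, nloc, 3].
out = net(pair_weights, diff, nlist_mask, features, sw)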
class deepmd.pt.model.task.DipoleFittingNet(ntypes: int, dim_descrpt: int, embedding_width: int, neuron: List[int] = [128, 128, 128], resnet_dt: bool = True, numb_fparam: int = 0, numb_aparam: int = 0, activation_function: str = 'tanh', precision: str = DEFAULT_PRECISION, mixed_types: bool = True, rcond: float | None = None, seed: int | None = None, exclude_types: List[int] = [], r_differentiable: bool = True, c_differentiable: bool = True, **kwargs)[source]

Bases: deepmd.pt.model.task.fitting.GeneralFitting

Construct a dipole fitting net.

Parameters:
ntypes : int

Element count.

dim_descrpt : int

Embedding width per atom.

embedding_width : int

The dimension of the rotation matrix, m1.

neuron : List[int]

Number of neurons in each hidden layer of the fitting net.

resnet_dt : bool

Whether to use a time step in the ResNet construction.

numb_fparam : int

Number of frame parameters.

numb_aparam : int

Number of atomic parameters.

activation_function : str

Activation function.

precision : str

Numerical precision.

mixed_types : bool

If true, use a uniform fitting net for all atom types; otherwise use a different fitting net for each atom type.

rcond : float, optional

The condition number for the regression of atomic energy.

seed : int, optional

Random seed.

r_differentiable : bool

Whether the variable is differentiated with respect to the coordinates of atoms. Only reducible variables are differentiable.

c_differentiable : bool

Whether the variable is differentiated with respect to the cell tensor (PBC case). Only reducible variables are differentiable.

exclude_types : List[int]

Atomic contributions of the excluded atom types are set to zero.
_net_out_dim()[source]

Set the FittingNet output dim.

serialize() dict[source]

Serialize the fitting to dict.

classmethod deserialize(data: dict) deepmd.pt.model.task.fitting.GeneralFitting[source]

Deserialize the fitting.

Parameters:
data : dict

The serialized data.

Returns:
GeneralFitting

The deserialized fitting.
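
A round-trip sketch using the two methods above; fitting is assumed to be an existing DipoleFittingNet instance:

# Serialize to a plain dict and rebuild an equivalent fitting net.
data = fitting.serialize()
restored = DipoleFittingNet.deserialize(data)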

output_def() deepmd.dpmodel.FittingOutputDef[source]

Returns the output def of the fitting net.

compute_output_stats(merged: Callable[[], List[dict]] | List[dict], stat_file_path: deepmd.utils.path.DPPath | None = None)[source]

Compute the output statistics (e.g. energy bias) for the fitting net from packed data.

Parameters:
merged : Union[Callable[[], List[dict]], List[dict]]

  • List[dict]: A list of data samples from various data systems. Each element, merged[i], is a data dictionary containing keys: torch.Tensor originating from the i-th data system.

  • Callable[[], List[dict]]: A lazy function that returns data samples in the above format only when needed. Since the sampling process can be slow and memory-intensive, the lazy function helps by only sampling once.

stat_file_path : Optional[DPPath]

The path to the stat file.
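
A sketch of the two accepted forms of merged; sampled and sample_systems are hypothetical placeholders for pre-sampled data and a sampling function:

# Eager form: a pre-sampled list of data dicts, one per data system.
fitting.compute_output_stats(sampled)

# Lazy form: a zero-argument callable, invoked only when the
# statistics are actually needed, so sampling happens at most once.
fitting.compute_output_stats(lambda: sample_systems())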

forward(descriptor: torch.Tensor, atype: torch.Tensor, gr: torch.Tensor | None = None, g2: torch.Tensor | None = None, h2: torch.Tensor | None = None, fparam: torch.Tensor | None = None, aparam: torch.Tensor | None = None)[source]
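
A hedged construction-and-call sketch; the tensor layouts for descriptor and gr are assumptions chosen to match dim_descrpt and embedding_width, not shapes documented on this page:

import torch

from deepmd.pt.model.task import DipoleFittingNet

nframes, nloc, dim_descrpt, m1 = 1, 8, 64, 32

fitting = DipoleFittingNet(ntypes=2, dim_descrpt=dim_descrpt, embedding_width=m1)

descriptor = torch.rand(nframes, nloc, dim_descrpt)   # assumed per-atom descriptor layout
atype = torch.zeros(nframes, nloc, dtype=torch.long)  # atom types
gr = torch.rand(nframes, nloc, m1, 3)                 # assumed rotation-matrix layout

out = fitting(descriptor, atype, gr=gr)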
class deepmd.pt.model.task.EnergyFittingNet(ntypes: int, dim_descrpt: int, neuron: List[int] = [128, 128, 128], bias_atom_e: torch.Tensor | None = None, resnet_dt: bool = True, numb_fparam: int = 0, numb_aparam: int = 0, activation_function: str = 'tanh', precision: str = DEFAULT_PRECISION, mixed_types: bool = True, **kwargs)[source]

Bases: deepmd.pt.model.task.invar_fitting.InvarFitting

Construct a fitting net for energy.

Parameters:
var_name : str

The atomic property to fit: 'energy', 'dipole', or 'polar'.

ntypes : int

Element count.

dim_descrpt : int

Embedding width per atom.

dim_out : int

The output dimension of the fitting net.

neuron : List[int]

Number of neurons in each hidden layer of the fitting net.

bias_atom_e : torch.Tensor, optional

Average energy per atom for each element.

resnet_dt : bool

Whether to use a time step in the ResNet construction.

numb_fparam : int

Number of frame parameters.

numb_aparam : int

Number of atomic parameters.

activation_function : str

Activation function.

precision : str

Numerical precision.

mixed_types : bool

If true, use a uniform fitting net for all atom types; otherwise use a different fitting net for each atom type.

rcond : float, optional

The condition number for the regression of atomic energy.

seed : int, optional

Random seed.

exclude_types : List[int]

Atomic contributions of the excluded atom types are set to zero.

atom_ener : List[Optional[torch.Tensor]], optional

Specifies the atomic energy contribution in vacuum. The value is a list specifying the bias; each element can be None or an np.array of the output shape. For example, [None, [2.]] means type 0 is not set and type 1 is set to [2.]. The set_davg_zero key in the descriptor should be set.

exclude_types: List[int]
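
A minimal construction sketch based on the signature above; the ntypes and dim_descrpt values are arbitrary illustration choices:

from deepmd.pt.model.task import EnergyFittingNet

# Per-atom energy fitting for a 2-element system whose descriptor
# yields 64 features per atom (illustrative values).
fitting = EnergyFittingNet(
    ntypes=2,
    dim_descrpt=64,
    neuron=[128, 128, 128],  # default hidden-layer widths
    mixed_types=True,        # one net shared across atom types
)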
classmethod deserialize(data: dict) deepmd.pt.model.task.fitting.GeneralFitting[source]

Deserialize the fitting.

Parameters:
data : dict

The serialized data.

Returns:
GeneralFitting

The deserialized fitting.

serialize() dict[source]

Serialize the fitting to dict.

class deepmd.pt.model.task.EnergyFittingNetDirect(ntypes, dim_descrpt, neuron, bias_atom_e=None, out_dim=1, resnet_dt=True, use_tebd=True, return_energy=False, **kwargs)[source]

Bases: deepmd.pt.model.task.fitting.Fitting

Inherits the standard torch.nn.Module behavior through Fitting; see the Fitting entry below for the full module documentation.

output_def()[source]

Returns the output def of the fitting net.

abstract serialize() dict[source]

Serialize the object to a dict.

abstract deserialize() EnergyFittingNetDirect[source]

Deserialize the fitting.

Parameters:
data : dict

The serialized data.

Returns:
EnergyFittingNetDirect

The deserialized fitting.

forward(inputs: torch.Tensor, atype: torch.Tensor, gr: torch.Tensor | None = None, g2: torch.Tensor | None = None, h2: torch.Tensor | None = None, fparam: torch.Tensor | None = None, aparam: torch.Tensor | None = None) Tuple[torch.Tensor, None][source]

Based on the embedding-net output, calculate the total energy.

Parameters:
inputs : torch.Tensor

Embedding matrix with shape [nframes, natoms[0], self.dim_descrpt].

Returns:
  • torch.Tensor: Total energy with shape [nframes, natoms[0]].
class deepmd.pt.model.task.Fitting(*args, **kwargs)[source]

Bases: torch.nn.Module, deepmd.pt.model.task.base_fitting.BaseFitting

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Boolean indicating whether this module is in training or evaluation mode.
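
Because Fitting is an nn.Module, the training flag is toggled with the standard PyTorch mode switches (net here is any Fitting instance):

net.train()  # sets net.training = True  (training mode)
net.eval()   # sets net.training = False (evaluation mode)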

share_params(base_class, shared_level, resume=False)[source]

Share the parameters of self with base_class at the given shared_level during multitask training. If not starting from a checkpoint (resume is False), some separate parameters (e.g. mean and stddev) will be re-calculated across the different classes.
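
A hedged sketch of how this hook might be invoked when wiring up a multitask model; branch_fitting and base_fitting are hypothetical Fitting instances, and shared_level=0 is an assumed level value:

# Share this branch's fitting parameters with the base task. With
# resume=False, separate statistics (e.g. mean and stddev) are
# re-calculated across the classes rather than restored.
branch_fitting.share_params(base_fitting, shared_level=0, resume=False)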

class deepmd.pt.model.task.PolarFittingNet(ntypes: int, dim_descrpt: int, embedding_width: int, neuron: List[int] = [128, 128, 128], resnet_dt: bool = True, numb_fparam: int = 0, numb_aparam: int = 0, activation_function: str = 'tanh', precision: str = DEFAULT_PRECISION, mixed_types: bool = True, rcond: float | None = None, seed: int | None = None, exclude_types: List[int] = [], fit_diag: bool = True, scale: List[float] | float | None = None, shift_diag: bool = True, **kwargs)[source]

Bases: deepmd.pt.model.task.fitting.GeneralFitting

Construct a polar fitting net.

Parameters:
ntypes : int

Element count.

dim_descrpt : int

Embedding width per atom.

embedding_width : int

The dimension of the rotation matrix, m1.

neuron : List[int]

Number of neurons in each hidden layer of the fitting net.

resnet_dt : bool

Whether to use a time step in the ResNet construction.

numb_fparam : int

Number of frame parameters.

numb_aparam : int

Number of atomic parameters.

activation_function : str

Activation function.

precision : str

Numerical precision.

mixed_types : bool

If true, use a uniform fitting net for all atom types; otherwise use a different fitting net for each atom type.

rcond : float, optional

The condition number for the regression of atomic energy.

seed : int, optional

Random seed.

fit_diag : bool

Fit the diagonal part of the rotationally invariant polarizability matrix, which will be converted to the full polarizability matrix by contraction with the rotation matrix.

scale : List[float]

The output of the fitting net (polarizability matrix) for a type-i atom is scaled by scale[i].

shift_diag : bool

Whether to shift the diagonal part of the polarizability matrix. The shift is applied after scaling.

exclude_types: List[int]
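
A minimal construction sketch based on the signature above; all values are illustrative:

from deepmd.pt.model.task import PolarFittingNet

# Polarizability fitting for a 2-element system (illustrative values).
fitting = PolarFittingNet(
    ntypes=2,
    dim_descrpt=64,
    embedding_width=32,  # m1, the rotation-matrix dimension
    fit_diag=True,       # fit the rotationally invariant diagonal
    scale=[1.0, 1.5],    # per-type output scaling: scale[i] for type i
)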
_net_out_dim()[source]

Set the FittingNet output dim.

__setitem__(key, value)[source]
__getitem__(key)[source]
serialize() dict[source]

Serialize the fitting to dict.

classmethod deserialize(data: dict) deepmd.pt.model.task.fitting.GeneralFitting[source]

Deserialize the fitting.

Parameters:
data : dict

The serialized data.

Returns:
GeneralFitting

The deserialized fitting.

output_def() deepmd.dpmodel.FittingOutputDef[source]

Returns the output def of the fitting net.

forward(descriptor: torch.Tensor, atype: torch.Tensor, gr: torch.Tensor | None = None, g2: torch.Tensor | None = None, h2: torch.Tensor | None = None, fparam: torch.Tensor | None = None, aparam: torch.Tensor | None = None)[source]
class deepmd.pt.model.task.TypePredictNet(feature_dim, ntypes, activation_function='gelu', **kwargs)[source]

Bases: deepmd.pt.model.task.Fitting

Inherits the standard torch.nn.Module behavior through Fitting; see the Fitting entry above for the full module documentation.

forward(features, masked_tokens: torch.Tensor | None = None)[source]

Calculate the predicted logits.

Parameters:
features

Input features with shape [nframes, nloc, feature_dim].

masked_tokens

Input masked tokens with shape [nframes, nloc].

Returns:
  • logits: Predicted logits with shape [nframes, nloc, ntypes].
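
A minimal call sketch for the method above; the boolean dtype assumed for masked_tokens is not stated on this page, while the other shapes follow the docstring:

import torch

from deepmd.pt.model.task import TypePredictNet

nframes, nloc, feature_dim, ntypes = 2, 16, 64, 4

net = TypePredictNet(feature_dim=feature_dim, ntypes=ntypes)

features = torch.rand(nframes, nloc, feature_dim)             # documented shape
masked_tokens = torch.zeros(nframes, nloc, dtype=torch.bool)  # assumed dtype

logits = net(features, masked_tokens)  # [nframes, nloc, ntypes]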