deepmd.pt.model.task.fitting

Module Contents

Classes

Fitting

Base class for all neural network modules.

GeneralFitting

Construct a general fitting net.

Attributes

dtype

device

log

deepmd.pt.model.task.fitting.dtype[source]
deepmd.pt.model.task.fitting.device[source]
deepmd.pt.model.task.fitting.log[source]
class deepmd.pt.model.task.fitting.Fitting(*args, **kwargs)[source]

Bases: torch.nn.Module, deepmd.pt.model.task.base_fitting.BaseFitting

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        # Submodules assigned as attributes are registered automatically.
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Boolean represents whether this module is in training or evaluation mode.

share_params(base_class, shared_level, resume=False)[source]

Share the parameters of self with base_class at the given shared_level during multitask training. If not starting from a checkpoint (resume is False), some separate parameters (e.g. mean and stddev) will be re-calculated across the different classes.
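A hedged usage sketch for multitask sharing, assuming MyEnergyFitting is a hypothetical concrete subclass and that shared_level=0 requests full sharing (both names and the sharing semantics are assumptions, not confirmed by this page):

branch_main = MyEnergyFitting(var_name="energy", ntypes=2, dim_descrpt=64)
branch_aux = MyEnergyFitting(var_name="energy", ntypes=2, dim_descrpt=64)
# Share all parameters of the auxiliary branch with the main branch
# when building a multitask model (assumed meaning of shared_level=0).
branch_aux.share_params(branch_main, shared_level=0, resume=False)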

class deepmd.pt.model.task.fitting.GeneralFitting(var_name: str, ntypes: int, dim_descrpt: int, neuron: List[int] = [128, 128, 128], bias_atom_e: torch.Tensor | None = None, resnet_dt: bool = True, numb_fparam: int = 0, numb_aparam: int = 0, activation_function: str = 'tanh', precision: str = DEFAULT_PRECISION, mixed_types: bool = True, rcond: float | None = None, seed: int | None = None, exclude_types: List[int] = [], trainable: bool | List[bool] = True, remove_vaccum_contribution: List[bool] | None = None, **kwargs)[source]

Bases: Fitting

Construct a general fitting net.

Parameters:
var_name : str

The atomic property to fit, 'energy', 'dipole', or 'polar'.

ntypes : int

Element count.

dim_descrpt : int

Embedding width per atom.

dim_out : int

The output dimension of the fitting net.

neuron : List[int]

Number of neurons in each hidden layer of the fitting net.

bias_atom_e : torch.Tensor, optional

Average energy per atom for each element.

resnet_dt : bool

Whether to use a time step in the ResNet construction.

numb_fparam : int

Number of frame parameters.

numb_aparam : int

Number of atomic parameters.

activation_function : str

Activation function.

precision : str

Numerical precision.

mixed_types : bool

If true, use a uniform fitting net for all atom types; otherwise use a different fitting net for each atom type.

rcond : float, optional

The condition number for the regression of atomic energy.

seed : int, optional

Random seed.

exclude_types : List[int]

Atomic contributions of the excluded atom types are set to zero.

trainable : Union[List[bool], bool]

Whether the parameters in the fitting net are trainable. Currently this only supports setting all parameters in the fitting net to the same state. If a List[bool] is given, trainable is True only if all of its elements are True.

remove_vaccum_contribution : List[bool], optional

Remove the vacuum contribution before the bias is added. The list is indexed by atom type. For mixed_types provide [True]; otherwise it should be a list of length ntypes signaling whether to remove the vacuum contribution for each atom type. A construction sketch follows this parameter list.
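A minimal construction sketch, assuming a hypothetical concrete subclass (GeneralFitting leaves _net_out_dim abstract, and further abstract members inherited from BaseFitting may also need implementations); the sizes are illustrative:

from deepmd.pt.model.task.fitting import GeneralFitting

class MyEnergyFitting(GeneralFitting):
    def _net_out_dim(self):
        # One scalar output per atom, as for an energy fit (assumption).
        return 1

fitting = MyEnergyFitting(
    var_name="energy",
    ntypes=2,                # two chemical elements
    dim_descrpt=64,          # per-atom descriptor width
    neuron=[128, 128, 128],  # three hidden layers
    mixed_types=True,        # one net shared across atom types
)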

exclude_types: List[int][source]
reinit_exclude(exclude_types: List[int] = [])[source]
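A hedged one-liner, assuming fitting is the instance from the construction sketch above; this rebuilds the exclusion mask so that atoms of type 1 contribute zero:

fitting.reinit_exclude(exclude_types=[1])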
serialize() dict[source]

Serialize the fitting to dict.

classmethod deserialize(data: dict) GeneralFitting[source]

Deserialize the fitting.

Parameters:
data : dict

The serialized data.

Returns:
GeneralFitting

The deserialized fitting.
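A hedged round-trip sketch; MyEnergyFitting and fitting refer to the hypothetical construction sketch above:

data = fitting.serialize()                    # plain dict of settings and arrays
restored = MyEnergyFitting.deserialize(data)  # rebuilds an equivalent fitting net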

get_dim_fparam() int[source]

Get the number (dimension) of frame parameters of this atomic model.

get_dim_aparam() int[source]

Get the number (dimension) of atomic parameters of this atomic model.

get_sel_type() List[int][source]

Get the selected atom types of this model.

Only atoms with the selected atom types contribute to the result of the model. If an empty list is returned, all atom types are selected.
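A hedged query sketch using the three getters above, with fitting taken from the earlier construction sketch:

nfp = fitting.get_dim_fparam()  # 0 here, since numb_fparam was left at default
nap = fitting.get_dim_aparam()  # 0 here, since numb_aparam was left at default
sel = fitting.get_sel_type()    # an empty list means all types are selected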

__setitem__(key, value)[source]
__getitem__(key)[source]
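A hedged indexing sketch; "bias_atom_e" is assumed to be one key accepted by __getitem__ and __setitem__ (the fparam/aparam statistics buffers may be others, but neither is confirmed by this page):

bias = fitting["bias_atom_e"]  # read a registered tensor by name (assumed key)
fitting["bias_atom_e"] = bias  # write it back through __setitem__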
abstract _net_out_dim()[source]

Return the output dimension of the FittingNet.

_extend_f_avg_std(xx: torch.Tensor, nb: int) torch.Tensor[source]
_extend_a_avg_std(xx: torch.Tensor, nb: int, nloc: int) torch.Tensor[source]
_forward_common(descriptor: torch.Tensor, atype: torch.Tensor, gr: torch.Tensor | None = None, g2: torch.Tensor | None = None, h2: torch.Tensor | None = None, fparam: torch.Tensor | None = None, aparam: torch.Tensor | None = None)[source]
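A hedged call sketch for the shared forward path; the shapes are illustrative assumptions (1 frame, 4 local atoms, descriptor width 64, matching dim_descrpt above), and the result is assumed to be a dict keyed by var_name:

import torch

descriptor = torch.zeros(1, 4, 64)           # (nframes, nloc, dim_descrpt)
atype = torch.zeros(1, 4, dtype=torch.long)  # (nframes, nloc) type indices
out = fitting._forward_common(descriptor, atype)
energy = out["energy"]                       # per-atom fitted property (assumed key)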