deepmd.pt.model.task.fitting
Module Contents
Classes
Fitting: Base class for all neural network modules.
GeneralFitting: Construct a general fitting net.
Attributes
- class deepmd.pt.model.task.fitting.Fitting(*args, **kwargs)[source]
Bases:
torch.nn.Module
,deepmd.pt.model.task.base_fitting.BaseFitting
Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes:
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))
Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.
Note
As per the example above, an __init__() call to the parent class must be made before assignment on the child.
- Variables:
training (bool) – Whether this module is in training or evaluation mode.
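The training flag is flipped by train() and eval(); a minimal sketch using a stock torch module (illustrative, not specific to this class):

```python
import torch.nn as nn

# Modules start in training mode
layer = nn.Linear(4, 2)
assert layer.training

# eval() switches the flag off (recursively, for submodules too)
layer.eval()
assert not layer.training

# train() switches it back on
layer.train()
assert layer.training
```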
Share the parameters of self to the base_class with shared_level during multitask training. If not starting from a checkpoint (resume is False), some separated parameters (e.g. mean and stddev) will be re-calculated across different classes.
- class deepmd.pt.model.task.fitting.GeneralFitting(var_name: str, ntypes: int, dim_descrpt: int, neuron: List[int] = [128, 128, 128], bias_atom_e: torch.Tensor | None = None, resnet_dt: bool = True, numb_fparam: int = 0, numb_aparam: int = 0, activation_function: str = 'tanh', precision: str = DEFAULT_PRECISION, mixed_types: bool = True, rcond: float | None = None, seed: int | None = None, exclude_types: List[int] = [], trainable: bool | List[bool] = True, remove_vaccum_contribution: List[bool] | None = None, **kwargs)[source]
Bases:
Fitting
Construct a general fitting net.
- Parameters:
- var_name
str
The atomic property to fit, e.g. ‘energy’, ‘dipole’, or ‘polar’.
- ntypes
int
Element count.
- dim_descrpt
int
Embedding width per atom.
- dim_out
int
The output dimension of the fitting net.
- neuron
List
[int
] Number of neurons in each hidden layers of the fitting net.
- bias_atom_e
torch.Tensor
,optional
Average energy per atom for each element.
- resnet_dt
bool
Using time-step in the ResNet construction.
- numb_fparam
int
Number of frame parameters.
- numb_aparam
int
Number of atomic parameters.
- activation_function
str
Activation function.
- precision
str
Numerical precision.
- mixed_types
bool
If true, use a uniform fitting net for all atom types, otherwise use different fitting nets for different atom types.
- rcond
float
,optional
The condition number for the regression of atomic energy.
- seed
int
,optional
Random seed.
- exclude_types
List[int]
Atomic contributions of the excluded atom types are set zero.
- trainable
Union[List[bool], bool]
Whether the parameters in the fitting net are trainable. Currently this only supports setting all parameters of the fitting net to the same state. When given as a List[bool], trainable is True only if every element is True.
- remove_vaccum_contribution
List[bool], optional
Remove the vacuum contribution before the bias is added. One entry is assigned to each atom type. For mixed_types, provide [True]; otherwise it should be a list of length ntypes indicating whether to remove the vacuum contribution for each atom type.
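The List[bool] form of trainable collapses to a single flag. A minimal sketch of that rule (the helper name is hypothetical, not part of the API):

```python
from typing import List, Union

def resolve_trainable(trainable: Union[bool, List[bool]]) -> bool:
    # Hypothetical helper: a list is trainable only if every entry is True
    if isinstance(trainable, list):
        return all(trainable)
    return trainable

assert resolve_trainable(True) is True
assert resolve_trainable([True, True]) is True
assert resolve_trainable([True, False]) is False
```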
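The rcond parameter plays the same role as in a least-squares solve. A sketch, assuming bias_atom_e is obtained by regressing frame energies on per-type atom counts (illustrative only, not the library's exact routine):

```python
import numpy as np

rng = np.random.default_rng(0)

# 6 frames, 2 element types: counts[i, t] = number of atoms of type t in frame i
counts = rng.integers(1, 8, size=(6, 2)).astype(float)
true_bias = np.array([-3.2, -1.5])   # per-type average energy (assumed values)
energies = counts @ true_bias        # total energy per frame

# rcond controls how aggressively small singular values are truncated
bias_atom_e, *_ = np.linalg.lstsq(counts, energies, rcond=1e-3)
assert np.allclose(bias_atom_e, true_bias)
```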
- classmethod deserialize(data: dict) GeneralFitting [source]
Deserialize the fitting.
- Parameters:
- data
dict
The serialized data
- Returns:
GeneralFitting
The deserialized fitting
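The deserialize classmethod pattern typically reconstructs the object from a plain dict of constructor arguments. A generic sketch (the class and fields are hypothetical stand-ins, not deepmd's actual schema):

```python
from typing import List

class TinyFitting:
    """Hypothetical stand-in for a fitting net's serializable state."""

    def __init__(self, var_name: str, neuron: List[int]):
        self.var_name = var_name
        self.neuron = neuron

    def serialize(self) -> dict:
        return {"var_name": self.var_name, "neuron": list(self.neuron)}

    @classmethod
    def deserialize(cls, data: dict) -> "TinyFitting":
        # Round-trip: constructor arguments travel as a plain dict
        return cls(**data)

restored = TinyFitting.deserialize({"var_name": "energy", "neuron": [128, 128]})
assert restored.serialize() == {"var_name": "energy", "neuron": [128, 128]}
```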
- get_dim_aparam() int [source]
Get the number (dimension) of atomic parameters of this atomic model.