deepmd.pt.model.task.ener
Module Contents
Classes
- EnergyFittingNet: Construct a fitting net for energy.
- EnergyFittingNetDirect: Base class for all neural network modules.
Attributes
- class deepmd.pt.model.task.ener.EnergyFittingNet(ntypes: int, dim_descrpt: int, neuron: List[int] = [128, 128, 128], bias_atom_e: torch.Tensor | None = None, resnet_dt: bool = True, numb_fparam: int = 0, numb_aparam: int = 0, activation_function: str = 'tanh', precision: str = DEFAULT_PRECISION, mixed_types: bool = True, **kwargs)[source]
Bases:
deepmd.pt.model.task.invar_fitting.InvarFitting
Construct a fitting net for energy.
- Parameters:
  - var_name : str
    The atomic property to fit: 'energy', 'dipole', or 'polar'.
  - ntypes : int
    Element count.
  - dim_descrpt : int
    Embedding width per atom.
  - dim_out : int
    The output dimension of the fitting net.
  - neuron : List[int]
    Number of neurons in each hidden layer of the fitting net.
  - bias_atom_e : torch.Tensor, optional
    Average energy per atom for each element.
  - resnet_dt : bool
    Whether to use a time step in the ResNet construction.
  - numb_fparam : int
    Number of frame parameters.
  - numb_aparam : int
    Number of atomic parameters.
  - activation_function : str
    Activation function.
  - precision : str
    Numerical precision.
  - mixed_types : bool
    If true, use a uniform fitting net for all atom types; otherwise use a different fitting net for each atom type.
  - rcond : float, optional
    The condition number for the regression of atomic energy.
  - seed : int, optional
    Random seed.
  - exclude_types : List[int]
    Atomic contributions of the excluded atom types are set to zero.
  - atom_ener : List[Optional[torch.Tensor]], optional
    Specifies the atomic energy contribution in vacuum. The value is a list specifying the bias; each element can be None or an array of the output shape. For example, [None, [2.]] means type 0 is not set and type 1 is set to [2.]. The set_davg_zero key in the descriptor should be set.
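The atom_ener convention above (None means "not set", an array means "fixed vacuum energy") can be illustrated with a small sketch. This is a hypothetical helper in plain Python, not the deepmd implementation; the function name `resolve_atom_ener` and the `default_bias` argument are assumptions for illustration:

```python
# Hypothetical sketch (not the deepmd implementation): resolving the
# atom_ener convention into one bias per atom type, assuming a scalar
# energy output (dim_out = 1).
def resolve_atom_ener(atom_ener, default_bias):
    """Return one bias per atom type: the given value if set, else the default."""
    biases = []
    for itype, ener in enumerate(atom_ener):
        if ener is None:
            biases.append(default_bias[itype])  # type not set: keep fitted bias
        else:
            biases.append(list(ener))           # type set: fixed vacuum energy
    return biases

# [None, [2.]] means: type 0 is not set, type 1 is fixed to [2.]
resolved = resolve_atom_ener([None, [2.0]], default_bias=[[0.5], [0.7]])
```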
- classmethod deserialize(data: dict) → deepmd.pt.model.task.fitting.GeneralFitting [source]
Deserialize the fitting.
- Parameters:
  - data : dict
    The serialized data.
- Returns:
  - GeneralFitting
    The deserialized fitting.
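The serialize/deserialize round trip that this classmethod participates in can be sketched with a toy class. This is an illustration of the pattern only, assuming constructor arguments are packed into a plain dict; `ToyFitting` and its fields are invented names, not the deepmd API:

```python
# Toy illustration (not the deepmd implementation) of the
# serialize/deserialize round-trip pattern used by the fitting classes.
class ToyFitting:
    def __init__(self, ntypes, neuron):
        self.ntypes = ntypes
        self.neuron = neuron

    def serialize(self):
        # Pack the constructor arguments into a plain dict.
        return {"ntypes": self.ntypes, "neuron": self.neuron}

    @classmethod
    def deserialize(cls, data):
        # Rebuild an equivalent instance from the serialized dict.
        return cls(ntypes=data["ntypes"], neuron=data["neuron"])

original = ToyFitting(ntypes=2, neuron=[128, 128, 128])
restored = ToyFitting.deserialize(original.serialize())
```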
- class deepmd.pt.model.task.ener.EnergyFittingNetDirect(ntypes, dim_descrpt, neuron, bias_atom_e=None, out_dim=1, resnet_dt=True, use_tebd=True, return_energy=False, **kwargs)[source]
Bases:
deepmd.pt.model.task.fitting.Fitting
Base class for all neural network modules.
Your models should also subclass this class.
Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:
import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))
Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.
Note: As per the example above, an __init__() call to the parent class must be made before assignment on the child.
- Variables:
  - training (bool): Boolean representing whether this module is in training or evaluation mode.
- abstract deserialize() → EnergyFittingNetDirect [source]
Deserialize the fitting.
- Parameters:
  - data : dict
    The serialized data.
- Returns:
  - EnergyFittingNetDirect
    The deserialized fitting.
- forward(inputs: torch.Tensor, atype: torch.Tensor, gr: torch.Tensor | None = None, g2: torch.Tensor | None = None, h2: torch.Tensor | None = None, fparam: torch.Tensor | None = None, aparam: torch.Tensor | None = None) → Tuple[torch.Tensor, None] [source]
Based on the embedding net output, calculate the total energy.
Args:
  - inputs: Embedding matrix. Its shape is [nframes, natoms[0], self.dim_descrpt].
  - natoms: Atom count and element count. Its shape is [2 + self.ntypes].
- Returns:
  - torch.Tensor: Total energy with shape [nframes, natoms[0]].
  - None: Per the return annotation, the second element of the tuple is None.
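The shape contract of forward can be sketched in plain Python. This is an illustrative sketch of the bookkeeping only (not the deepmd implementation): per-atom energies of shape [nframes, natoms] are reduced over the atom axis to a per-frame total, matching the documented output shape.

```python
# Illustrative sketch (plain Python, not the deepmd implementation) of the
# shape contract: per-atom energies [nframes, natoms] are summed over atoms
# to give the total energy per frame, i.e. shape [nframes].
nframes, natoms = 2, 3

# Hypothetical per-atom energies for each frame.
atom_energies = [
    [0.1, 0.2, 0.3],  # frame 0
    [0.4, 0.5, 0.6],  # frame 1
]

# Reduce over the atom axis: one total energy per frame.
total_energy = [sum(frame) for frame in atom_energies]
```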