deepmd.pt.model.model

Models take the coordinates, cell and atom types as input and predict some property. They are automatically generated from atomic models by the deepmd.dpmodel.make_model method.

The make_model method does the reduction, auto-differentiation and communication of the atomic properties according to the output variable definition deepmd.dpmodel.OutputVariableDef.

All models should inherit from deepmd.pt.model.model.model.BaseModel; models generated by make_model already do so.
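
A minimal sketch of the user-facing flow, assuming an energy model. The configuration keys below are illustrative (they follow the deepmd-kit input format), the shape conventions are assumptions, and the output key names depend on the output variable definition.

import torch

from deepmd.pt.model.model import get_model

# Illustrative "model" section of a training input; keys and values are assumptions.
model_params = {
    "type_map": ["O", "H"],
    "descriptor": {
        "type": "se_e2_a",
        "rcut": 6.0,
        "rcut_smth": 0.5,
        "sel": [46, 92],
        "neuron": [25, 50, 100],
        "axis_neuron": 16,
    },
    "fitting_net": {"neuron": [240, 240, 240]},
}
model = get_model(model_params)

# The PyTorch backend defaults to double precision for coordinates and cells.
coord = torch.rand(1, 3, 3, dtype=torch.float64)   # (nframes, natoms, 3)
atype = torch.tensor([[0, 1, 1]])                   # (nframes, natoms), indices into type_map
box = 10.0 * torch.eye(3, dtype=torch.float64).reshape(1, 9)  # cell; shape convention is an assumption
out = model(coord, atype, box=box)
# out is a dict of tensors; for an energy model it typically contains keys
# such as "energy", "force" and "virial".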

Submodules

Package Contents

Classes

DPModelCommon

A base class to implement common methods for all the Models.

DPZBLModel

Base class for all neural network modules.

EnergyModel

A base class to implement common methods for all the Models.

FrozenModel

Load a model from a frozen model file, which cannot be trained.

BaseModel

Base class for all neural network modules.

SpinEnergyModel

A spin model for energy.

SpinModel

A spin model wrapper, with spin input preprocess and output split.

Functions

make_hessian_model(T_Model)

Make a model that can compute Hessian.

make_model(T_AtomicModel)

Make a model as a derived class of an atomic model.

get_model(model_params)

class deepmd.pt.model.model.DPModelCommon[source]

A base class to implement common methods for all the Models.

classmethod update_sel(global_jdata: dict, local_jdata: dict)[source]

Update the selection and perform neighbor statistics.

Parameters:
global_jdata : dict

The global data, containing the training section.

local_jdata : dict

The local data referring to the current class.

get_fitting_net()[source]

Get the fitting network.

get_descriptor()[source]

Get the descriptor.

class deepmd.pt.model.model.DPZBLModel(*args, **kwargs)[source]

Bases: DPZBLModel_

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Boolean representing whether this module is in training or evaluation mode.

model_type = 'ener'
forward(coord, atype, box: torch.Tensor | None = None, fparam: torch.Tensor | None = None, aparam: torch.Tensor | None = None, do_atomic_virial: bool = False) Dict[str, torch.Tensor][source]
forward_lower(extended_coord, extended_atype, nlist, mapping: torch.Tensor | None = None, fparam: torch.Tensor | None = None, aparam: torch.Tensor | None = None, do_atomic_virial: bool = False)[source]
classmethod update_sel(global_jdata: dict, local_jdata: dict)[source]

Update the selection and perform neighbor statistics.

Parameters:
global_jdata : dict

The global data, containing the training section.

local_jdata : dict

The local data referring to the current class.

class deepmd.pt.model.model.EnergyModel(*args, **kwargs)[source]

Bases: deepmd.pt.model.model.dp_model.DPModelCommon, DPEnergyModel_

A base class to implement common methods for all the Models.

model_type = 'ener'
forward(coord, atype, box: torch.Tensor | None = None, fparam: torch.Tensor | None = None, aparam: torch.Tensor | None = None, do_atomic_virial: bool = False) Dict[str, torch.Tensor][source]
forward_lower(extended_coord, extended_atype, nlist, mapping: torch.Tensor | None = None, fparam: torch.Tensor | None = None, aparam: torch.Tensor | None = None, do_atomic_virial: bool = False, comm_dict: Dict[str, torch.Tensor] | None = None)[source]
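
A hedged sketch of the lower-level interface, assuming model is an EnergyModel instance (for example built by get_model as in the sketch at the top of this page). The shapes and output key names are assumptions based on the conventions used throughout this module.

import torch

nframes, nloc, nall = 1, 2, 4
extended_coord = torch.rand(nframes, nall, 3, dtype=torch.float64)   # local + ghost atoms
extended_atype = torch.zeros(nframes, nall, dtype=torch.long)
# Neighbor list of shape (nframes, nloc, nsel); -1 marks empty neighbor slots.
nlist = torch.full((nframes, nloc, model.get_nsel()), -1, dtype=torch.long)
nlist[:, 0, 0] = 1   # illustrative: local atom 0 sees atom 1 as a neighbor
out = model.forward_lower(extended_coord, extended_atype, nlist)
# For an energy model the result typically contains the per-frame "energy"
# together with quantities on the extended region, e.g. "extended_force".
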
class deepmd.pt.model.model.FrozenModel(model_file: str, **kwargs)[source]

Bases: deepmd.pt.model.model.model.BaseModel

Load a model from a frozen model file, which cannot be trained.

Parameters:
model_file : str

The path to the frozen model.
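
A minimal sketch of wrapping a previously frozen PyTorch-backend model for inference; the file name below is illustrative.

from deepmd.pt.model.model import FrozenModel

# "frozen_model.pth" is a hypothetical path to a model produced by dp freeze.
model = FrozenModel(model_file="frozen_model.pth")
print(model.get_type_map(), model.get_rcut())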

fitting_output_def() deepmd.dpmodel.output_def.FittingOutputDef[source]

Get the output definition of developer-implemented atomic models.

get_rcut() float[source]

Get the cut-off radius.

get_type_map() List[str][source]

Get the type map.

get_sel() List[int][source]

Returns the number of selected atoms for each type.

get_dim_fparam() int[source]

Get the number (dimension) of frame parameters of this atomic model.

get_dim_aparam() int[source]

Get the number (dimension) of atomic parameters of this atomic model.

get_sel_type() List[int][source]

Get the selected atom types of this model.

Only atoms with selected atom types have atomic contribution to the result of the model. If returning an empty list, all atom types are selected.

is_aparam_nall() bool[source]

Check whether the shape of atomic parameters is (nframes, nall, ndim).

If False, the shape is (nframes, nloc, ndim).

mixed_types() bool[source]

If true, the model
1. assumes the total number of atoms is aligned across frames;
2. uses a neighbor list that does not distinguish different atomic types.

If false, the model
1. assumes the total number of atoms of each atom type is aligned across frames;
2. uses a neighbor list that distinguishes different atomic types.

forward(coord, atype, box: torch.Tensor | None = None, fparam: torch.Tensor | None = None, aparam: torch.Tensor | None = None, do_atomic_virial: bool = False) Dict[str, torch.Tensor][source]
get_model_def_script() str[source]

Get the model definition script.

serialize() dict[source]

Serialize the model.

Returns:
dict

The serialized data

classmethod deserialize(data: dict)[source]

Deserialize the model.

Parameters:
data : dict

The serialized data.

Returns:
BaseModel

The deserialized model

get_nnei() int[source]

Returns the total number of selected neighboring atoms in the cut-off radius.

get_nsel() int[source]

Returns the total number of selected neighboring atoms in the cut-off radius.

classmethod update_sel(global_jdata: dict, local_jdata: dict)[source]

Update the selection and perform neighbor statistics.

Parameters:
global_jdata : dict

The global data, containing the training section.

local_jdata : dict

The local data referring to the current class.

model_output_type() str[source]

Get the output type for the model.

deepmd.pt.model.model.make_hessian_model(T_Model)[source]

Make a model that can compute Hessian.

LIMITATION: this model is not JIT-compilable due to the restrictions of TorchScript.

LIMITATION: only the hessian of forward_common is available.

Parameters:
T_Model

The model class. It should provide the forward_common and atomic_output_def methods.

Returns:
The model class that can compute the Hessian.
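
A hedged sketch of deriving a Hessian-capable model class; applying it to EnergyModel here is only illustrative.

from deepmd.pt.model.model import EnergyModel, make_hessian_model

# The derived class is constructed with the same arguments as the wrapped model.
HessianEnergyModel = make_hessian_model(EnergyModel)
# Once the Hessian is requested for an output variable (the generated class is
# expected to expose a method such as requires_hessian for this), forward_common
# additionally returns the second derivatives with respect to the coordinates.
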
deepmd.pt.model.model.make_model(T_AtomicModel: Type[deepmd.pt.model.atomic_model.base_atomic_model.BaseAtomicModel])[source]

Make a model as a derived class of an atomic model.

The model provides two interfaces.

1. forward_common_lower, which takes extended coordinates, atom types and a neighbor list, and outputs the atomic properties and their derivatives (if required) on the extended region.

2. forward_common, which takes coordinates, atom types and the cell, and predicts the atomic and reduced properties, and their derivatives (if required), on the local region.

Parameters:
T_AtomicModel

The atomic model.

Returns:
CM

The model.
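
A hedged sketch of how the concrete models in this package are produced from atomic models; the atomic-model import path below is an assumption for illustration.

from deepmd.pt.model.atomic_model import DPAtomicModel  # assumed import path
from deepmd.pt.model.model import make_model

DPModel_ = make_model(DPAtomicModel)
# DPModel_ is a derived class of the atomic model that also satisfies the
# BaseModel interface; it provides forward_common (local region, neighbor list
# built internally) and forward_common_lower (extended region, pre-built nlist).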

class deepmd.pt.model.model.BaseModel(*args, **kwargs)[source]

Bases: torch.nn.Module, make_base_model()

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing to nest them in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Boolean representing whether this module is in training or evaluation mode.

abstract compute_or_load_stat(sampled_func, stat_file_path: deepmd.utils.path.DPPath | None = None)[source]

Compute or load the statistics parameters of the model, such as mean and standard deviation of descriptors or the energy bias of the fitting net. When sampled is provided, all the statistics parameters will be calculated (or re-calculated for update) and saved in the stat_file_path(s). When sampled is not provided, it will check the existence of the stat_file_path(s) and load the calculated statistics parameters.

Parameters:
sampled_func

The sampled data frames from different data systems.

stat_file_path

The path to the statistics files.

get_model_def_script() str[source]

Get the model definition script.

get_ntypes()[source]

Returns the number of element types.

class deepmd.pt.model.model.SpinEnergyModel(backbone_model, spin: deepmd.utils.spin.Spin)[source]

Bases: SpinModel

A spin model for energy.

model_type = 'ener'
forward(coord, atype, spin, box: torch.Tensor | None = None, fparam: torch.Tensor | None = None, aparam: torch.Tensor | None = None, do_atomic_virial: bool = False) Dict[str, torch.Tensor][source]
forward_lower(extended_coord, extended_atype, extended_spin, nlist, mapping: torch.Tensor | None = None, fparam: torch.Tensor | None = None, aparam: torch.Tensor | None = None, do_atomic_virial: bool = False)[source]
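
A hedged sketch of calling a spin-energy model, assuming model is a SpinEnergyModel instance (for example built by get_model from a configuration with a spin section). Shapes are assumptions, and the exact output keys depend on the output definition.

import torch

coord = torch.rand(1, 4, 3, dtype=torch.float64)   # (nframes, nloc, 3)
atype = torch.zeros(1, 4, dtype=torch.long)        # (nframes, nloc)
spin = torch.rand(1, 4, 3, dtype=torch.float64)    # per-atom spin, same shape as coord
out = model(coord, atype, spin)
# Besides the usual energy outputs, the result contains magnetic counterparts
# such as a magnetic force on the real atoms; key names are not fixed here.
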
class deepmd.pt.model.model.SpinModel(backbone_model, spin: deepmd.utils.spin.Spin)[source]

Bases: torch.nn.Module

A spin model wrapper, with spin input preprocess and output split.

process_spin_input(coord, atype, spin)[source]

Generate virtual coordinates and types, and concatenate them into the input.

process_spin_input_lower(extended_coord, extended_atype, extended_spin, nlist, mapping: torch.Tensor | None = None)[source]

Add extended_spin into extended_coord to generate virtual atoms, and extend nlist and mapping. Note that the final extended_coord_updated with shape [nframes, nall + nall, 3] has the following order:

- [:, :nloc]: original nloc real atoms.
- [:, nloc: nloc + nloc]: virtual atoms corresponding to nloc real atoms.
- [:, nloc + nloc: nloc + nall]: ghost real atoms.
- [:, nloc + nall: nall + nall]: virtual atoms corresponding to ghost real atoms.

process_spin_output(atype, out_tensor, add_mag: bool = True, virtual_scale: bool = True)[source]

Split the output into contributions from real and virtual atoms, and scale the latter.

add_mag: whether to add the magnetic tensor onto the real tensor.
Default: True. E.g. True for forces and False for atomic virials on real atoms.

virtual_scale: whether to scale the magnetic tensor with the virtual scale factor.
Default: True. E.g. True for forces and False for atomic virials on virtual atoms.

process_spin_output_lower(extended_atype, extended_out_tensor, nloc: int, add_mag: bool = True, virtual_scale: bool = True)[source]

Split the extended output into contributions from real and virtual atoms with a switch, and scale the latter.

add_mag: whether to add the magnetic tensor onto the real tensor.
Default: True. E.g. True for forces and False for atomic virials on real atoms.

virtual_scale: whether to scale the magnetic tensor with the virtual scale factor.
Default: True. E.g. True for forces and False for atomic virials on virtual atoms.

static extend_nlist(extended_atype, nlist)[source]
static concat_switch_virtual(extended_tensor, extended_tensor_virtual, nloc: int)[source]

Concat real and virtual extended tensors, and switch all the local ones to the first nloc * 2 atoms:

- [:, :nloc]: original nloc real atoms.
- [:, nloc: nloc + nloc]: virtual atoms corresponding to nloc real atoms.
- [:, nloc + nloc: nloc + nall]: ghost real atoms.
- [:, nloc + nall: nall + nall]: virtual atoms corresponding to ghost real atoms.
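
An illustrative sketch of the ordering described above; the input shapes (nframes, nall, d) for both tensors and the output shape (nframes, 2 * nall, d) are assumptions.

import torch

from deepmd.pt.model.model import SpinModel

nframes, nloc, nall = 1, 2, 3
real = torch.arange(nframes * nall, dtype=torch.float64).reshape(nframes, nall, 1)
virtual = 100.0 + real
merged = SpinModel.concat_switch_virtual(real, virtual, nloc)
# merged[:, :nloc]                 -> real local atoms
# merged[:, nloc:2 * nloc]         -> virtual local atoms
# merged[:, 2 * nloc:nloc + nall]  -> real ghost atoms
# merged[:, nloc + nall:]          -> virtual ghost atoms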

static expand_aparam(aparam, nloc: int)[source]

Expand the atom parameters for virtual atoms if necessary.

get_type_map() List[str][source]

Get the type map.

get_rcut()[source]

Get the cut-off radius.

get_dim_fparam()[source]

Get the number (dimension) of frame parameters of this atomic model.

get_dim_aparam()[source]

Get the number (dimension) of atomic parameters of this atomic model.

get_sel_type() List[int][source]

Get the selected atom types of this model. Only atoms with selected atom types have atomic contribution to the result of the model. If returning an empty list, all atom types are selected.

is_aparam_nall() bool[source]

Check whether the shape of atomic parameters is (nframes, nall, ndim). If False, the shape is (nframes, nloc, ndim).

model_output_type() List[str][source]

Get the output type for the model.

get_model_def_script() str[source]

Get the model definition script.

get_nnei() int[source]

Returns the total number of selected neighboring atoms in the cut-off radius.

get_nsel() int[source]

Returns the total number of selected neighboring atoms in the cut-off radius.

has_spin() bool[source]

Returns whether it has spin input and output.

__getattr__(name)[source]

Get attribute from the wrapped model.

compute_or_load_stat(sampled_func, stat_file_path: deepmd.utils.path.DPPath | None = None)[source]

Compute or load the statistics parameters of the model, such as mean and standard deviation of descriptors or the energy bias of the fitting net. When sampled is provided, all the statistics parameters will be calculated (or re-calculated for update) and saved in the stat_file_path(s). When sampled is not provided, it will check the existence of the stat_file_path(s) and load the calculated statistics parameters.

Parameters:
sampled_func

The lazy sampled function to get data frames from different data systems.

stat_file_path

The dictionary of paths to the statistics files.

forward_common(coord, atype, spin, box: torch.Tensor | None = None, fparam: torch.Tensor | None = None, aparam: torch.Tensor | None = None, do_atomic_virial: bool = False) Dict[str, torch.Tensor][source]
forward_common_lower(extended_coord, extended_atype, extended_spin, nlist, mapping: torch.Tensor | None = None, fparam: torch.Tensor | None = None, aparam: torch.Tensor | None = None, do_atomic_virial: bool = False)[source]
serialize() dict[source]
classmethod deserialize(data) SpinModel[source]
deepmd.pt.model.model.get_model(model_params)[source]