deepmd.pt.model.task.fitting#

Attributes#

dtype

device

log

Classes#

Fitting

Base fitting provides the interfaces of the fitting net.

GeneralFitting

Construct a general fitting net.

Module Contents#

deepmd.pt.model.task.fitting.dtype[source]#
deepmd.pt.model.task.fitting.device[source]#
deepmd.pt.model.task.fitting.log[source]#
class deepmd.pt.model.task.fitting.Fitting[source]#

Bases: torch.nn.Module, deepmd.pt.model.task.base_fitting.BaseFitting

Base fitting provides the interfaces of the fitting net.

share_params(base_class, shared_level, resume=False) → None[source]#

Share the parameters of self with base_class at the given shared_level during multitask training. If not starting from a checkpoint (resume is False), some separated parameters (e.g. mean and stddev) will be re-calculated across the different classes.
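
A minimal sketch of how this interface might be wired up for multitask training; link_fitting_params and its argument names are hypothetical helpers, not part of deepmd, and shared_level=0 is assumed to mean full parameter sharing.

from deepmd.pt.model.task.fitting import Fitting

def link_fitting_params(primary: Fitting, secondary: Fitting, level: int = 0) -> None:
    # Hypothetical helper: make `secondary` share its fitting-net parameters
    # with `primary`. With resume=False, separated statistics such as mean
    # and stddev are assumed to be re-calculated across the two tasks.
    secondary.share_params(primary, shared_level=level, resume=False)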

class deepmd.pt.model.task.fitting.GeneralFitting(var_name: str, ntypes: int, dim_descrpt: int, neuron: list[int] = [128, 128, 128], bias_atom_e: torch.Tensor | None = None, resnet_dt: bool = True, numb_fparam: int = 0, numb_aparam: int = 0, activation_function: str = 'tanh', precision: str = DEFAULT_PRECISION, mixed_types: bool = True, rcond: float | None = None, seed: int | list[int] | None = None, exclude_types: list[int] = [], trainable: bool | list[bool] = True, remove_vaccum_contribution: list[bool] | None = None, type_map: list[str] | None = None, use_aparam_as_mask: bool = False, **kwargs)[source]#

Bases: Fitting

Construct a general fitting net.

Parameters:
var_name : str

The atomic property to fit: ‘energy’, ‘dipole’, or ‘polar’.

ntypes : int

Element count.

dim_descrpt : int

Embedding width per atom.

dim_out : int

The output dimension of the fitting net.

neuron : list[int]

Number of neurons in each hidden layer of the fitting net.

bias_atom_e : torch.Tensor, optional

Average energy per atom for each element.

resnet_dt : bool

Whether to use a time-step in the ResNet construction.

numb_fparam : int

Number of frame parameters.

numb_aparam : int

Number of atomic parameters.

activation_function : str

Activation function.

precision : str

Numerical precision.

mixed_types : bool

If true, use a uniform fitting net for all atom types; otherwise use a different fitting net for each atom type.

rcond : float, optional

The condition number for the regression of atomic energy.

seed : int or list[int], optional

Random seed.

exclude_types : list[int]

Atomic contributions of the excluded atom types are set to zero.

trainable : bool or list[bool]

Whether the parameters in the fitting net are trainable. Currently only setting all the parameters of the fitting net to the same state is supported. When given as list[bool], the net is trainable only if every element is True.

remove_vaccum_contribution : list[bool], optional

Remove the vacuum contribution before the bias is added. The list is assigned per type: for mixed_types provide [True]; otherwise provide a list of length ntypes indicating, for each atom type, whether to remove the vacuum contribution.

type_map : list[str], optional

A list of strings giving the name of each atom type.

use_aparam_as_mask : bool

If True, the aparam will not be used as input to the fitting net for embedding.
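
As a hedged illustration of how these parameters fit together, the snippet below collects typical keyword arguments for a concrete GeneralFitting subclass; all values are illustrative assumptions, and GeneralFitting itself cannot be instantiated directly because _net_out_dim is abstract.

import torch

# Illustrative keyword arguments for a concrete GeneralFitting subclass
# (e.g. an energy fitting net). Values are assumptions for a two-element
# H/O system, not recommended defaults.
fitting_kwargs = dict(
    var_name="energy",              # atomic property to fit
    ntypes=2,                       # number of elements (H and O)
    dim_descrpt=64,                 # must match the descriptor's per-atom output width
    neuron=[128, 128, 128],         # three hidden layers
    bias_atom_e=torch.zeros(2, 1),  # per-element bias (optional)
    resnet_dt=True,
    numb_fparam=0,                  # no frame parameters
    numb_aparam=0,                  # no atomic parameters
    activation_function="tanh",
    precision="float64",
    mixed_types=True,               # one net shared by all atom types
    exclude_types=[],               # no type is excluded
    trainable=True,
    type_map=["H", "O"],
)
# A concrete subclass would then be constructed as SomeFitting(**fitting_kwargs),
# where SomeFitting is a placeholder name.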

var_name[source]#
ntypes[source]#
dim_descrpt[source]#
neuron[source]#
mixed_types[source]#
resnet_dt[source]#
numb_fparam[source]#
numb_aparam[source]#
activation_function[source]#
precision[source]#
prec[source]#
rcond[source]#
seed[source]#
type_map[source]#
use_aparam_as_mask[source]#
trainable[source]#
remove_vaccum_contribution[source]#
filter_layers[source]#
reinit_exclude(exclude_types: list[int] = []) → None[source]#
change_type_map(type_map: list[str], model_with_new_type_stat=None) → None[source]#

Change the type-related parameters to new ones, according to type_map and the original type map of the model. If there are new types in type_map, statistics for these new types will be updated according to model_with_new_type_stat.
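
A hedged sketch of remapping the type map of an existing fitting net; reorder_types is a hypothetical helper, and the concrete type names are assumptions.

from deepmd.pt.model.task.fitting import GeneralFitting

def reorder_types(fitting: GeneralFitting) -> None:
    # Hypothetical helper: remap a fitting net whose original type_map was,
    # say, ["O", "H"] to the new ordering ["H", "O"]. Introducing genuinely
    # new types would additionally require model_with_new_type_stat so that
    # their statistics can be filled in.
    fitting.change_type_map(type_map=["H", "O"])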

serialize() → dict[source]#

Serialize the fitting to dict.

classmethod deserialize(data: dict) → GeneralFitting[source]#

Deserialize the fitting.

Parameters:
data : dict

The serialized data.

Returns:
GeneralFitting

The deserialized fitting.
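
A hedged sketch of a serialize/deserialize round trip; roundtrip is a hypothetical helper, and calling deserialize on the instance's own concrete class is an assumption about how a checkpoint-style copy would be rebuilt.

from deepmd.pt.model.task.fitting import GeneralFitting

def roundtrip(fitting: GeneralFitting) -> GeneralFitting:
    # Hypothetical helper: dump the fitting net to a plain dict and rebuild
    # an equivalent object. The classmethod is invoked on the concrete
    # subclass of the given instance rather than on GeneralFitting itself.
    data = fitting.serialize()
    return type(fitting).deserialize(data)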

get_dim_fparam() → int[source]#

Get the number (dimension) of frame parameters of this atomic model.

get_dim_aparam() → int[source]#

Get the number (dimension) of atomic parameters of this atomic model.

exclude_types: list[int][source]#
get_sel_type() → list[int][source]#

Get the selected atom types of this model.

Only atoms with selected atom types have an atomic contribution to the result of the model. If an empty list is returned, all atom types are selected.
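
A hedged, self-contained illustration of the assumed relationship between exclude_types and the selected types: every type that is not excluded is expected to be selected.

# Assumed relationship between exclude_types and get_sel_type() for a
# three-type model with type 1 excluded.
ntypes = 3
exclude_types = [1]
expected_sel_type = [t for t in range(ntypes) if t not in exclude_types]
assert expected_sel_type == [0, 2]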

get_type_map() → list[str][source]#

Get the name of each atom type.

__setitem__(key, value) → None[source]#
__getitem__(key)[source]#
abstract _net_out_dim()[source]#

Return the output dimension of the FittingNet.

_extend_f_avg_std(xx: torch.Tensor, nb: int) → torch.Tensor[source]#
_extend_a_avg_std(xx: torch.Tensor, nb: int, nloc: int) → torch.Tensor[source]#
_forward_common(descriptor: torch.Tensor, atype: torch.Tensor, gr: torch.Tensor | None = None, g2: torch.Tensor | None = None, h2: torch.Tensor | None = None, fparam: torch.Tensor | None = None, aparam: torch.Tensor | None = None)[source]#