deepmd.dpmodel.fitting

Submodules

Package Contents

Classes

DipoleFitting

Fitting the rotationally equivariant dipole of the system.

DOSFittingNet

Fitting the energy (or a rotationally invariant property of dim_out) of the system. The force and the virial can also be trained.

EnergyFittingNet

Fitting the energy (or a rotationally invariant property of dim_out) of the system. The force and the virial can also be trained.

InvarFitting

Fitting the energy (or a rotationally invariant property of dim_out) of the system. The force and the virial can also be trained.

PolarFitting

Fitting rotationally equivariant polarizability of the system.

Functions

make_base_fitting(t_tensor[, fwd_method_name])

Make the base class for the fitting.

class deepmd.dpmodel.fitting.DipoleFitting(ntypes: int, dim_descrpt: int, embedding_width: int, neuron: List[int] = [120, 120, 120], resnet_dt: bool = True, numb_fparam: int = 0, numb_aparam: int = 0, rcond: float | None = None, tot_ener_zero: bool = False, trainable: List[bool] | None = None, activation_function: str = 'tanh', precision: str = DEFAULT_PRECISION, layer_name: List[str | None] | None = None, use_aparam_as_mask: bool = False, spin: Any = None, mixed_types: bool = False, exclude_types: List[int] = [], r_differentiable: bool = True, c_differentiable: bool = True, old_impl=False, seed: int | None = None)[source]

Bases: deepmd.dpmodel.fitting.general_fitting.GeneralFitting

Fitting the rotationally equivariant dipole of the system.

Parameters:
ntypes

The number of atom types.

dim_descrpt

The dimension of the input descriptor.

embedding_width: int

The dimension of rotation matrix, m1.

neuron

Number of neurons \(N\) in each hidden layer of the fitting net

resnet_dt

Time-step dt in the resnet construction: \(y = x + dt * \phi (Wx + b)\)

numb_fparam

Number of frame parameters.

numb_aparam

Number of atomic parameters.

rcond

The condition number for the regression of atomic energy.

tot_ener_zero

Force the total energy to zero. Useful for the charge fitting.

trainable

Whether the weights of the fitting net are trainable. Suppose that we have \(N_l\) hidden layers in the fitting net; this list is of length \(N_l + 1\), specifying whether the hidden layers and the output layer are trainable.

activation_function

The activation function \(\boldsymbol{\phi}\) in the embedding net. Supported options are “relu”, “tanh”, “none”, “linear”, “softplus”, “sigmoid”, “relu6”, “gelu”, “gelu_tf”.

precision

The precision of the embedding net parameters. Supported options are “float32”, “default”, “float16”, “float64”.

layer_name: List[Optional[str]], optional

The name of each layer. If two layers, either in the same fitting or in different fittings, have the same name, they will share the same neural network parameters.

use_aparam_as_mask: bool, optional

If True, the atomic parameters will be used as a mask that determines whether an atom is real or virtual. The aparam will then not be used as atomic parameters for embedding.

mixed_types

If true, use a uniform fitting net for all atom types, otherwise use different fitting nets for different atom types.

exclude_types

Atomic contributions of the excluded atom types are set zero.

r_differentiable

If the variable is differentiated with respect to the coordinates of atoms. Only reducible variables are differentiable.

c_differentiable

If the variable is differentiated with respect to the cell tensor (PBC case). Only reducible variables are differentiable.

_net_out_dim()[source]

Set the FittingNet output dim.

serialize() dict[source]

Serialize the fitting to dict.

classmethod deserialize(data: dict) deepmd.dpmodel.fitting.general_fitting.GeneralFitting[source]

Deserialize the fitting.

Parameters:
data: dict

The serialized data

Returns:
BF

The deserialized fitting
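The serialize/deserialize pair is expected to round-trip a fitting through a plain dict. As a toy illustration of that contract (this is a hypothetical stand-in, not deepmd's actual implementation or field names), the pattern looks like:

```python
import numpy as np

class ToyFitting:
    """Toy stand-in illustrating the serialize/deserialize round-trip."""

    def __init__(self, ntypes: int, dim_descrpt: int, bias: np.ndarray):
        self.ntypes = ntypes
        self.dim_descrpt = dim_descrpt
        self.bias = bias

    def serialize(self) -> dict:
        # Everything needed to rebuild the object goes into a plain dict.
        return {
            "ntypes": self.ntypes,
            "dim_descrpt": self.dim_descrpt,
            "bias": self.bias.tolist(),
        }

    @classmethod
    def deserialize(cls, data: dict) -> "ToyFitting":
        return cls(data["ntypes"], data["dim_descrpt"], np.asarray(data["bias"]))

orig = ToyFitting(2, 8, np.zeros(2))
copy = ToyFitting.deserialize(orig.serialize())
assert copy.serialize() == orig.serialize()
```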

output_def()[source]

Returns the output def of the fitting net.

call(descriptor: numpy.ndarray, atype: numpy.ndarray, gr: numpy.ndarray | None = None, g2: numpy.ndarray | None = None, h2: numpy.ndarray | None = None, fparam: numpy.ndarray | None = None, aparam: numpy.ndarray | None = None) Dict[str, numpy.ndarray][source]

Calculate the fitting.

Parameters:
descriptor

The input descriptor. shape: nf x nloc x nd

atype

The atom type. shape: nf x nloc

gr

The rotationally equivariant and permutationally invariant single-particle representation. shape: nf x nloc x ng x 3

g2

The rotationally invariant pair-particle representation. shape: nf x nloc x nnei x ng

h2

The rotationally equivariant pair-particle representation. shape: nf x nloc x nnei x 3

fparam

The frame parameter. shape: nf x nfp, with nfp being numb_fparam

aparam

The atomic parameter. shape: nf x nloc x nap, with nap being numb_aparam
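Conceptually, the equivariant dipole is obtained by contracting a per-atom vector of invariant coefficients (the fitting-net output, of width embedding_width) with the equivariant representation gr. A minimal NumPy sketch of that contraction, with made-up shapes and a random stand-in for the fitting net, shows why the result rotates with the system:

```python
import numpy as np

rng = np.random.default_rng(0)
nf, nloc, m1 = 1, 4, 6

# Invariant per-atom coefficients (random stand-in for the fitting-net output).
coeff = rng.normal(size=(nf, nloc, m1))
# Equivariant single-particle representation, shape nf x nloc x m1 x 3.
gr = rng.normal(size=(nf, nloc, m1, 3))

def dipole(coeff, gr):
    # Contract the m1 axis: an invariant-weighted sum of equivariant rows.
    return np.einsum("fam,famd->fad", coeff, gr)

d = dipole(coeff, gr)
assert d.shape == (nf, nloc, 3)

# Equivariance check: rotating gr rotates the dipole by the same matrix.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
d_rot = dipole(coeff, gr @ R.T)  # rotate the last axis of gr
assert np.allclose(d_rot, d @ R.T)
```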

class deepmd.dpmodel.fitting.DOSFittingNet(ntypes: int, dim_descrpt: int, numb_dos: int = 300, neuron: List[int] = [120, 120, 120], resnet_dt: bool = True, numb_fparam: int = 0, numb_aparam: int = 0, bias_dos: numpy.ndarray | None = None, rcond: float | None = None, trainable: bool | List[bool] = True, activation_function: str = 'tanh', precision: str = DEFAULT_PRECISION, mixed_types: bool = False, exclude_types: List[int] = [], seed: int | None = None)[source]

Bases: deepmd.dpmodel.fitting.invar_fitting.InvarFitting

Fitting the energy (or a rotationally invariant property of dim_out) of the system. The force and the virial can also be trained.

Let's take the energy fitting task as an example. The potential energy \(E\) is a fitting network function of the descriptor \(\mathcal{D}\):

\[E(\mathcal{D}) = \mathcal{L}^{(n)} \circ \mathcal{L}^{(n-1)} \circ \cdots \circ \mathcal{L}^{(1)} \circ \mathcal{L}^{(0)}\]

The first \(n\) hidden layers \(\mathcal{L}^{(0)}, \cdots, \mathcal{L}^{(n-1)}\) are given by

\[\mathbf{y}=\mathcal{L}(\mathbf{x};\mathbf{w},\mathbf{b})= \boldsymbol{\phi}(\mathbf{x}^T\mathbf{w}+\mathbf{b})\]

where \(\mathbf{x} \in \mathbb{R}^{N_1}\) is the input vector and \(\mathbf{y} \in \mathbb{R}^{N_2}\) is the output vector. \(\mathbf{w} \in \mathbb{R}^{N_1 \times N_2}\) and \(\mathbf{b} \in \mathbb{R}^{N_2}\) are weights and biases, respectively, both of which are trainable if trainable[i] is True. \(\boldsymbol{\phi}\) is the activation function.

The output layer \(\mathcal{L}^{(n)}\) is given by

\[\mathbf{y}=\mathcal{L}^{(n)}(\mathbf{x};\mathbf{w},\mathbf{b})= \mathbf{x}^T\mathbf{w}+\mathbf{b}\]

where \(\mathbf{x} \in \mathbb{R}^{N_{n-1}}\) is the input vector and \(\mathbf{y} \in \mathbb{R}\) is the output scalar. \(\mathbf{w} \in \mathbb{R}^{N_{n-1}}\) and \(\mathbf{b} \in \mathbb{R}\) are weights and bias, respectively, both of which are trainable if trainable[n] is True.
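The layer equations above translate directly into NumPy. The sketch below uses random weights, tanh activation, and illustrative sizes only; it stacks the hidden layers \(\boldsymbol{\phi}(\mathbf{x}^T\mathbf{w}+\mathbf{b})\) followed by the linear output layer:

```python
import numpy as np

rng = np.random.default_rng(0)
neuron = [120, 120, 120]   # hidden-layer widths, as in the default
dim_descrpt = 64           # illustrative descriptor dimension

# Hidden-layer weights w_i in R^{N_i x N_{i+1}} and biases b_i in R^{N_{i+1}}.
dims = [dim_descrpt] + neuron
layers = [(rng.normal(size=(n1, n2)), rng.normal(size=n2))
          for n1, n2 in zip(dims[:-1], dims[1:])]
# Output layer: w in R^{N_{n-1}}, b in R (scalar output).
w_out, b_out = rng.normal(size=neuron[-1]), rng.normal()

def fit(x):
    for w, b in layers:
        x = np.tanh(x @ w + b)    # hidden layer: y = phi(x^T w + b)
    return x @ w_out + b_out      # linear output layer

# One per-atom descriptor in, one scalar (e.g. atomic energy) out.
energy = fit(rng.normal(size=dim_descrpt))
assert np.ndim(energy) == 0
```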

Parameters:
var_name

The name of the output variable.

ntypes

The number of atom types.

dim_descrpt

The dimension of the input descriptor.

dim_out

The dimension of the output fit property.

neuron

Number of neurons \(N\) in each hidden layer of the fitting net

resnet_dt

Time-step dt in the resnet construction: \(y = x + dt * \phi (Wx + b)\)

numb_fparam

Number of frame parameters.

numb_aparam

Number of atomic parameters.

rcond

The condition number for the regression of atomic energy.

bias_atom

Bias for each element.

tot_ener_zero

Force the total energy to zero. Useful for the charge fitting.

trainable

Whether the weights of the fitting net are trainable. Suppose that we have \(N_l\) hidden layers in the fitting net; this list is of length \(N_l + 1\), specifying whether the hidden layers and the output layer are trainable.

atom_ener

Specifying the atomic energy contribution in vacuum. The set_davg_zero key in the descriptor should be set.

activation_function

The activation function \(\boldsymbol{\phi}\) in the embedding net. Supported options are “relu”, “tanh”, “none”, “linear”, “softplus”, “sigmoid”, “relu6”, “gelu”, “gelu_tf”.

precision

The precision of the embedding net parameters. Supported options are “float32”, “default”, “float16”, “float64”.

layer_name: List[Optional[str]], optional

The name of each layer. If two layers, either in the same fitting or in different fittings, have the same name, they will share the same neural network parameters.

use_aparam_as_mask: bool, optional

If True, the atomic parameters will be used as a mask that determines whether an atom is real or virtual. The aparam will then not be used as atomic parameters for embedding.

mixed_types

If false, different atom types use different fitting nets; otherwise all atom types share the same fitting net.

exclude_types: List[int]

Atomic contributions of the excluded atom types are set zero.

classmethod deserialize(data: dict) deepmd.dpmodel.fitting.general_fitting.GeneralFitting[source]

Deserialize the fitting.

Parameters:
data: dict

The serialized data

Returns:
BF

The deserialized fitting

serialize() dict[source]

Serialize the fitting to dict.

class deepmd.dpmodel.fitting.EnergyFittingNet(ntypes: int, dim_descrpt: int, neuron: List[int] = [120, 120, 120], resnet_dt: bool = True, numb_fparam: int = 0, numb_aparam: int = 0, rcond: float | None = None, tot_ener_zero: bool = False, trainable: List[bool] | None = None, atom_ener: List[float] | None = None, activation_function: str = 'tanh', precision: str = DEFAULT_PRECISION, layer_name: List[str | None] | None = None, use_aparam_as_mask: bool = False, spin: Any = None, mixed_types: bool = False, exclude_types: List[int] = [], seed: int | None = None)[source]

Bases: deepmd.dpmodel.fitting.invar_fitting.InvarFitting

Fitting the energy (or a rotationally invariant property of dim_out) of the system. The force and the virial can also be trained.

Let's take the energy fitting task as an example. The potential energy \(E\) is a fitting network function of the descriptor \(\mathcal{D}\):

\[E(\mathcal{D}) = \mathcal{L}^{(n)} \circ \mathcal{L}^{(n-1)} \circ \cdots \circ \mathcal{L}^{(1)} \circ \mathcal{L}^{(0)}\]

The first \(n\) hidden layers \(\mathcal{L}^{(0)}, \cdots, \mathcal{L}^{(n-1)}\) are given by

\[\mathbf{y}=\mathcal{L}(\mathbf{x};\mathbf{w},\mathbf{b})= \boldsymbol{\phi}(\mathbf{x}^T\mathbf{w}+\mathbf{b})\]

where \(\mathbf{x} \in \mathbb{R}^{N_1}\) is the input vector and \(\mathbf{y} \in \mathbb{R}^{N_2}\) is the output vector. \(\mathbf{w} \in \mathbb{R}^{N_1 \times N_2}\) and \(\mathbf{b} \in \mathbb{R}^{N_2}\) are weights and biases, respectively, both of which are trainable if trainable[i] is True. \(\boldsymbol{\phi}\) is the activation function.

The output layer \(\mathcal{L}^{(n)}\) is given by

\[\mathbf{y}=\mathcal{L}^{(n)}(\mathbf{x};\mathbf{w},\mathbf{b})= \mathbf{x}^T\mathbf{w}+\mathbf{b}\]

where \(\mathbf{x} \in \mathbb{R}^{N_{n-1}}\) is the input vector and \(\mathbf{y} \in \mathbb{R}\) is the output scalar. \(\mathbf{w} \in \mathbb{R}^{N_{n-1}}\) and \(\mathbf{b} \in \mathbb{R}\) are weights and bias, respectively, both of which are trainable if trainable[n] is True.

Parameters:
var_name

The name of the output variable.

ntypes

The number of atom types.

dim_descrpt

The dimension of the input descriptor.

dim_out

The dimension of the output fit property.

neuron

Number of neurons \(N\) in each hidden layer of the fitting net

resnet_dt

Time-step dt in the resnet construction: \(y = x + dt * \phi (Wx + b)\)

numb_fparam

Number of frame parameters.

numb_aparam

Number of atomic parameters.

rcond

The condition number for the regression of atomic energy.

bias_atom

Bias for each element.

tot_ener_zero

Force the total energy to zero. Useful for the charge fitting.

trainable

Whether the weights of the fitting net are trainable. Suppose that we have \(N_l\) hidden layers in the fitting net; this list is of length \(N_l + 1\), specifying whether the hidden layers and the output layer are trainable.

atom_ener

Specifying the atomic energy contribution in vacuum. The set_davg_zero key in the descriptor should be set.

activation_function

The activation function \(\boldsymbol{\phi}\) in the embedding net. Supported options are “relu”, “tanh”, “none”, “linear”, “softplus”, “sigmoid”, “relu6”, “gelu”, “gelu_tf”.

precision

The precision of the embedding net parameters. Supported options are “float32”, “default”, “float16”, “float64”.

layer_name: List[Optional[str]], optional

The name of each layer. If two layers, either in the same fitting or in different fittings, have the same name, they will share the same neural network parameters.

use_aparam_as_mask: bool, optional

If True, the atomic parameters will be used as a mask that determines whether an atom is real or virtual. The aparam will then not be used as atomic parameters for embedding.

mixed_types

If false, different atom types use different fitting nets; otherwise all atom types share the same fitting net.

exclude_types: List[int]

Atomic contributions of the excluded atom types are set zero.

classmethod deserialize(data: dict) deepmd.dpmodel.fitting.general_fitting.GeneralFitting[source]

Deserialize the fitting.

Parameters:
data: dict

The serialized data

Returns:
BF

The deserialized fitting

serialize() dict[source]

Serialize the fitting to dict.

class deepmd.dpmodel.fitting.InvarFitting(var_name: str, ntypes: int, dim_descrpt: int, dim_out: int, neuron: List[int] = [120, 120, 120], resnet_dt: bool = True, numb_fparam: int = 0, numb_aparam: int = 0, bias_atom: numpy.ndarray | None = None, rcond: float | None = None, tot_ener_zero: bool = False, trainable: List[bool] | None = None, atom_ener: List[float] | None = None, activation_function: str = 'tanh', precision: str = DEFAULT_PRECISION, layer_name: List[str | None] | None = None, use_aparam_as_mask: bool = False, spin: Any = None, mixed_types: bool = True, exclude_types: List[int] = [])[source]

Bases: deepmd.dpmodel.fitting.general_fitting.GeneralFitting

Fitting the energy (or a rotationally invariant property of dim_out) of the system. The force and the virial can also be trained.

Let's take the energy fitting task as an example. The potential energy \(E\) is a fitting network function of the descriptor \(\mathcal{D}\):

\[E(\mathcal{D}) = \mathcal{L}^{(n)} \circ \mathcal{L}^{(n-1)} \circ \cdots \circ \mathcal{L}^{(1)} \circ \mathcal{L}^{(0)}\]

The first \(n\) hidden layers \(\mathcal{L}^{(0)}, \cdots, \mathcal{L}^{(n-1)}\) are given by

\[\mathbf{y}=\mathcal{L}(\mathbf{x};\mathbf{w},\mathbf{b})= \boldsymbol{\phi}(\mathbf{x}^T\mathbf{w}+\mathbf{b})\]

where \(\mathbf{x} \in \mathbb{R}^{N_1}\) is the input vector and \(\mathbf{y} \in \mathbb{R}^{N_2}\) is the output vector. \(\mathbf{w} \in \mathbb{R}^{N_1 \times N_2}\) and \(\mathbf{b} \in \mathbb{R}^{N_2}\) are weights and biases, respectively, both of which are trainable if trainable[i] is True. \(\boldsymbol{\phi}\) is the activation function.

The output layer \(\mathcal{L}^{(n)}\) is given by

\[\mathbf{y}=\mathcal{L}^{(n)}(\mathbf{x};\mathbf{w},\mathbf{b})= \mathbf{x}^T\mathbf{w}+\mathbf{b}\]

where \(\mathbf{x} \in \mathbb{R}^{N_{n-1}}\) is the input vector and \(\mathbf{y} \in \mathbb{R}\) is the output scalar. \(\mathbf{w} \in \mathbb{R}^{N_{n-1}}\) and \(\mathbf{b} \in \mathbb{R}\) are weights and bias, respectively, both of which are trainable if trainable[n] is True.

Parameters:
var_name

The name of the output variable.

ntypes

The number of atom types.

dim_descrpt

The dimension of the input descriptor.

dim_out

The dimension of the output fit property.

neuron

Number of neurons \(N\) in each hidden layer of the fitting net

resnet_dt

Time-step dt in the resnet construction: \(y = x + dt * \phi (Wx + b)\)

numb_fparam

Number of frame parameters.

numb_aparam

Number of atomic parameters.

rcond

The condition number for the regression of atomic energy.

bias_atom

Bias for each element.

tot_ener_zero

Force the total energy to zero. Useful for the charge fitting.

trainable

Whether the weights of the fitting net are trainable. Suppose that we have \(N_l\) hidden layers in the fitting net; this list is of length \(N_l + 1\), specifying whether the hidden layers and the output layer are trainable.

atom_ener

Specifying the atomic energy contribution in vacuum. The set_davg_zero key in the descriptor should be set.

activation_function

The activation function \(\boldsymbol{\phi}\) in the embedding net. Supported options are “relu”, “tanh”, “none”, “linear”, “softplus”, “sigmoid”, “relu6”, “gelu”, “gelu_tf”.

precision

The precision of the embedding net parameters. Supported options are “float32”, “default”, “float16”, “float64”.

layer_name: List[Optional[str]], optional

The name of each layer. If two layers, either in the same fitting or in different fittings, have the same name, they will share the same neural network parameters.

use_aparam_as_mask: bool, optional

If True, the atomic parameters will be used as a mask that determines whether an atom is real or virtual. The aparam will then not be used as atomic parameters for embedding.

mixed_types

If false, different atom types use different fitting nets; otherwise all atom types share the same fitting net.

exclude_types: List[int]

Atomic contributions of the excluded atom types are set zero.

serialize() dict[source]

Serialize the fitting to dict.

classmethod deserialize(data: dict) deepmd.dpmodel.fitting.general_fitting.GeneralFitting[source]

Deserialize the fitting.

Parameters:
data: dict

The serialized data

Returns:
BF

The deserialized fitting

_net_out_dim()[source]

Set the FittingNet output dim.

abstract compute_output_stats(merged)[source]

Update the output bias for fitting net.

output_def()[source]

Returns the output def of the fitting net.

call(descriptor: numpy.ndarray, atype: numpy.ndarray, gr: numpy.ndarray | None = None, g2: numpy.ndarray | None = None, h2: numpy.ndarray | None = None, fparam: numpy.ndarray | None = None, aparam: numpy.ndarray | None = None) Dict[str, numpy.ndarray][source]

Calculate the fitting.

Parameters:
descriptor

The input descriptor. shape: nf x nloc x nd

atype

The atom type. shape: nf x nloc

gr

The rotationally equivariant and permutationally invariant single-particle representation. shape: nf x nloc x ng x 3

g2

The rotationally invariant pair-particle representation. shape: nf x nloc x nnei x ng

h2

The rotationally equivariant pair-particle representation. shape: nf x nloc x nnei x 3

fparam

The frame parameter. shape: nf x nfp, with nfp being numb_fparam

aparam

The atomic parameter. shape: nf x nloc x nap, with nap being numb_aparam

deepmd.dpmodel.fitting.make_base_fitting(t_tensor, fwd_method_name: str = 'forward')[source]

Make the base class for the fitting.

Parameters:
t_tensor

The type of the tensor, used in the type hint.

fwd_method_name

Name of the forward method. For dpmodels, it should be “call”. For torch models, it should be “forward”.
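The factory pattern behind make_base_fitting can be sketched as follows. This is a simplified, hypothetical illustration (not the deepmd implementation): it builds an abstract base class whose abstract forward method carries the backend-specific name, so a NumPy backend can require call while a torch backend requires forward:

```python
from abc import ABCMeta, abstractmethod

def make_base(t_tensor, fwd_method_name: str = "forward"):
    """Simplified sketch: build an abstract base class whose abstract
    forward method is named ``fwd_method_name``."""

    @abstractmethod
    def fwd(self, descriptor, atype):
        """Abstract forward pass; real code would use t_tensor in hints."""

    # Create the class through ABCMeta so the method name is configurable.
    return ABCMeta("BF", (), {fwd_method_name: fwd, "t_tensor": t_tensor})

# NumPy-backend style: the forward method must be named "call".
Base = make_base(object, fwd_method_name="call")

class MyFitting(Base):
    def call(self, descriptor, atype):
        return {"energy": sum(descriptor)}

out = MyFitting().call([1.0, 2.0], [0, 0])
assert out == {"energy": 3.0}
```

Subclasses that do not implement the named method remain abstract and cannot be instantiated.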

class deepmd.dpmodel.fitting.PolarFitting(ntypes: int, dim_descrpt: int, embedding_width: int, neuron: List[int] = [120, 120, 120], resnet_dt: bool = True, numb_fparam: int = 0, numb_aparam: int = 0, rcond: float | None = None, tot_ener_zero: bool = False, trainable: List[bool] | None = None, activation_function: str = 'tanh', precision: str = DEFAULT_PRECISION, layer_name: List[str | None] | None = None, use_aparam_as_mask: bool = False, spin: Any = None, mixed_types: bool = False, exclude_types: List[int] = [], old_impl: bool = False, fit_diag: bool = True, scale: List[float] | None = None, shift_diag: bool = True, seed: int | None = None)[source]

Bases: deepmd.dpmodel.fitting.general_fitting.GeneralFitting

Fitting rotationally equivariant polarizability of the system.

Parameters:
ntypes

The number of atom types.

dim_descrpt

The dimension of the input descriptor.

embedding_width: int

The dimension of rotation matrix, m1.

neuron

Number of neurons \(N\) in each hidden layer of the fitting net

resnet_dt

Time-step dt in the resnet construction: \(y = x + dt * \phi (Wx + b)\)

numb_fparam

Number of frame parameters.

numb_aparam

Number of atomic parameters.

rcond

The condition number for the regression of atomic energy.

tot_ener_zero

Force the total energy to zero. Useful for the charge fitting.

trainable

Whether the weights of the fitting net are trainable. Suppose that we have \(N_l\) hidden layers in the fitting net; this list is of length \(N_l + 1\), specifying whether the hidden layers and the output layer are trainable.

activation_function

The activation function \(\boldsymbol{\phi}\) in the embedding net. Supported options are “relu”, “tanh”, “none”, “linear”, “softplus”, “sigmoid”, “relu6”, “gelu”, “gelu_tf”.

precision

The precision of the embedding net parameters. Supported options are “float32”, “default”, “float16”, “float64”.

layer_name: List[Optional[str]], optional

The name of each layer. If two layers, either in the same fitting or in different fittings, have the same name, they will share the same neural network parameters.

use_aparam_as_mask: bool, optional

If True, the atomic parameters will be used as a mask that determines whether an atom is real or virtual. The aparam will then not be used as atomic parameters for embedding.

mixed_types

If true, use a uniform fitting net for all atom types, otherwise use different fitting nets for different atom types.

fit_diag: bool

Fit the diagonal part of the rotationally invariant polarizability matrix, which will be converted to the full polarizability matrix by contracting with the rotation matrix.

scale: List[float]

The output of the fitting net (polarizability matrix) for atoms of type i will be scaled by scale[i].

shift_diag: bool

Whether to shift the diagonal part of the polarizability matrix. The shift operation is carried out after scale.

_net_out_dim()[source]

Set the FittingNet output dim.

__setitem__(key, value)[source]
__getitem__(key)[source]
serialize() dict[source]

Serialize the fitting to dict.

classmethod deserialize(data: dict) deepmd.dpmodel.fitting.general_fitting.GeneralFitting[source]

Deserialize the fitting.

Parameters:
data: dict

The serialized data

Returns:
BF

The deserialized fitting

output_def()[source]

Returns the output def of the fitting net.

call(descriptor: numpy.ndarray, atype: numpy.ndarray, gr: numpy.ndarray | None = None, g2: numpy.ndarray | None = None, h2: numpy.ndarray | None = None, fparam: numpy.ndarray | None = None, aparam: numpy.ndarray | None = None) Dict[str, numpy.ndarray][source]

Calculate the fitting.

Parameters:
descriptor

The input descriptor. shape: nf x nloc x nd

atype

The atom type. shape: nf x nloc

gr

The rotationally equivariant and permutationally invariant single-particle representation. shape: nf x nloc x ng x 3

g2

The rotationally invariant pair-particle representation. shape: nf x nloc x nnei x ng

h2

The rotationally equivariant pair-particle representation. shape: nf x nloc x nnei x 3

fparam

The frame parameter. shape: nf x nfp, with nfp being numb_fparam

aparam

The atomic parameter. shape: nf x nloc x nap, with nap being numb_aparam
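Conceptually, with fit_diag=True the fitting net predicts invariant diagonal entries per atom, and the 3x3 polarizability is recovered by the contraction gr^T diag(d) gr. A NumPy sketch with made-up shapes and a random stand-in for the fitting-net output shows that the result is symmetric and transforms as a rank-2 tensor under rotation:

```python
import numpy as np

rng = np.random.default_rng(0)
nf, nloc, m1 = 1, 3, 5
diag = rng.normal(size=(nf, nloc, m1))   # invariant diagonal entries (stand-in)
gr = rng.normal(size=(nf, nloc, m1, 3))  # equivariant representation

def polarizability(diag, gr):
    # pol_ij = sum_m d_m * gr_mi * gr_mj, i.e. gr^T diag(d) gr per atom.
    return np.einsum("fnm,fnmi,fnmj->fnij", diag, gr, gr)

pol = polarizability(diag, gr)
assert pol.shape == (nf, nloc, 3, 3)
# Symmetric by construction.
assert np.allclose(pol, pol.transpose(0, 1, 3, 2))

# Equivariance check: rotating gr gives R pol R^T.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
pol_rot = polarizability(diag, gr @ R.T)
assert np.allclose(pol_rot, np.einsum("ij,fnjk,lk->fnil", R, pol, R))
```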