deepmd.fit package

Submodules

deepmd.fit.dipole module

class deepmd.fit.dipole.DipoleFittingSeA(descrpt: tensorflow.python.framework.ops.Tensor, neuron: List[int] = [120, 120, 120], resnet_dt: bool = True, sel_type: Optional[List[int]] = None, seed: Optional[int] = None, activation_function: str = 'tanh', precision: str = 'default', uniform_seed: bool = False)[source]

Bases: deepmd.fit.fitting.Fitting

Fit the atomic dipole with descriptor se_a

Parameters
descrpt : tf.Tensor

The descriptor

neuron : List[int]

Number of neurons in each hidden layer of the fitting net

resnet_dt : bool

Time-step dt in the resnet construction: y = x + dt * phi (Wx + b)

sel_type : List[int]

The atom types selected to have an atomic dipole prediction. If None, all atoms are selected.

seed : int

Random seed for initializing the network parameters.

activation_function : str

The activation function in the embedding net. Supported options are “relu”, “relu6”, “softplus”, “sigmoid”, “tanh”, “gelu”.

precision : str

The precision of the embedding net parameters. Supported options are “default”, “float16”, “float32”, “float64”.

uniform_seed : bool

Only for backward compatibility; retrieves the old behavior of using the random seed.
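
A minimal construction sketch (parameter values are illustrative and the se_a descriptor is assumed to be configured elsewhere; only the keyword names shown are relied upon):

    from deepmd.descriptor import DescrptSeA
    from deepmd.fit import DipoleFittingSeA

    # hypothetical two-type system; cutoffs and selection sizes are illustrative
    descrpt = DescrptSeA(rcut=6.0, rcut_smth=0.5, sel=[46, 92])
    fitting = DipoleFittingSeA(
        descrpt,                  # the se_a descriptor built above
        neuron=[120, 120, 120],   # three hidden layers in the fitting net
        sel_type=[0],             # predict atomic dipoles only for atoms of type 0
        seed=1,
    )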

Attributes
precision

Precision of fitting network.

Methods

build(input_d, rot_mat, natoms[, reuse, suffix])

Build the computational graph for fitting net

enable_mixed_precision([mixed_prec])

Receive the mixed precision setting.

get_out_size()

Get the output size.

get_sel_type()

Get selected atom types

init_variables(graph, graph_def[, suffix])

Init the fitting net variables with the given dict

build(input_d: tensorflow.python.framework.ops.Tensor, rot_mat: tensorflow.python.framework.ops.Tensor, natoms: tensorflow.python.framework.ops.Tensor, reuse: bool = None, suffix: str = '') tensorflow.python.framework.ops.Tensor[source]

Build the computational graph for fitting net

Parameters
input_d

The input descriptor

rot_mat

The rotation matrix from the descriptor.

natoms

The number of atoms. This tensor has the length of Ntypes + 2. natoms[0]: number of local atoms. natoms[1]: total number of atoms held by this processor. natoms[i] (2 <= i < Ntypes + 2): number of atoms of type i - 2.

reuse

The weights in the networks should be reused when getting the variables.

suffix

Name suffix to identify this descriptor

Returns
dipole

The atomic dipole.
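
As a worked illustration of the natoms convention (all values are hypothetical), consider a two-type system with 64 atoms of the first type and 128 of the second, all local to this processor:

    import numpy as np

    # natoms has length Ntypes + 2; here Ntypes = 2
    natoms = np.array([192, 192, 64, 128], dtype=np.int32)
    # natoms[0] = 192 -> number of local atoms
    # natoms[1] = 192 -> total number of atoms held by this processor
    # natoms[2] = 64  -> number of atoms of the first type
    # natoms[3] = 128 -> number of atoms of the second type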

enable_mixed_precision(mixed_prec: Optional[dict] = None) None[source]

Receive the mixed precision setting.

Parameters
mixed_prec

The mixed precision setting used in the embedding net

get_out_size() int[source]

Get the output size. Should be 3

get_sel_type() int[source]

Get selected atom types

init_variables(graph: tensorflow.python.framework.ops.Graph, graph_def: tensorflow.core.framework.graph_pb2.GraphDef, suffix: str = '') None[source]

Init the fitting net variables with the given dict

Parameters
graph : tf.Graph

The input frozen model graph

graph_def : tf.GraphDef

The input frozen model graph_def

suffix : str

suffix to name scope

deepmd.fit.ener module

class deepmd.fit.ener.EnerFitting(descrpt: tensorflow.python.framework.ops.Tensor, neuron: List[int] = [120, 120, 120], resnet_dt: bool = True, numb_fparam: int = 0, numb_aparam: int = 0, rcond: float = 0.001, tot_ener_zero: bool = False, trainable: Optional[List[bool]] = None, seed: Optional[int] = None, atom_ener: List[float] = [], activation_function: str = 'tanh', precision: str = 'default', uniform_seed: bool = False)[source]

Bases: deepmd.fit.fitting.Fitting

Fitting the energy of the system. The force and the virial can also be trained.

The potential energy \(E\) is a fitting network function of the descriptor \(\mathcal{D}\):

\[E(\mathcal{D}) = \mathcal{L}^{(n)} \circ \mathcal{L}^{(n-1)} \circ \cdots \circ \mathcal{L}^{(1)} \circ \mathcal{L}^{(0)}\]

The first \(n\) hidden layers \(\mathcal{L}^{(0)}, \cdots, \mathcal{L}^{(n-1)}\) are given by

\[\mathbf{y}=\mathcal{L}(\mathbf{x};\mathbf{w},\mathbf{b})= \boldsymbol{\phi}(\mathbf{x}^T\mathbf{w}+\mathbf{b})\]

where \(\mathbf{x} \in \mathbb{R}^{N_1}\) is the input vector and \(\mathbf{y} \in \mathbb{R}^{N_2}\) is the output vector. \(\mathbf{w} \in \mathbb{R}^{N_1 \times N_2}\) and \(\mathbf{b} \in \mathbb{R}^{N_2}\) are weights and biases, respectively, both of which are trainable if trainable[i] is True. \(\boldsymbol{\phi}\) is the activation function.

The output layer \(\mathcal{L}^{(n)}\) is given by

\[\mathbf{y}=\mathcal{L}^{(n)}(\mathbf{x};\mathbf{w},\mathbf{b})= \mathbf{x}^T\mathbf{w}+\mathbf{b}\]

where \(\mathbf{x} \in \mathbb{R}^{N_{n-1}}\) is the input vector and \(\mathbf{y} \in \mathbb{R}\) is the output scalar. \(\mathbf{w} \in \mathbb{R}^{N_{n-1}}\) and \(\mathbf{b} \in \mathbb{R}\) are weights and bias, respectively, both of which are trainable if trainable[n] is True.
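
A compact numpy sketch of a single hidden layer, including the resnet_dt variant described under Parameters below (the shapes, the activation, and applying the skip connection only when the input and output widths match are assumptions for illustration, not the library implementation):

    import numpy as np

    def fitting_layer(x, w, b, dt=None, phi=np.tanh):
        # plain hidden layer: y = phi(x @ w + b)
        y = phi(x @ w + b)
        # resnet_dt variant: y = x + dt * phi(x @ w + b),
        # with dt acting as a (trainable) time step; only meaningful
        # when the input and output widths match
        if dt is not None:
            y = x + dt * y
        return y

    # tiny illustration with matching widths so the skip connection applies
    x = np.random.randn(4)        # input vector, width N1 = 4
    w = np.random.randn(4, 4)     # weights, N1 x N2
    b = np.zeros(4)               # biases
    y = fitting_layer(x, w, b, dt=0.1)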

Parameters
descrpt

The descriptor \(\mathcal{D}\)

neuron

Number of neurons \(N\) in each hidden layer of the fitting net

resnet_dt

Time-step dt in the resnet construction: \(y = x + dt * \phi (Wx + b)\)

numb_fparam

Number of frame parameters

numb_aparam

Number of atomic parameters

rcond

The condition number for the regression of atomic energy.

tot_ener_zero

Force the total energy to zero. Useful for the charge fitting.

trainable

If the weights of fitting net are trainable. Suppose that we have \(N_l\) hidden layers in the fitting net, this list is of length \(N_l + 1\), specifying if the hidden layers and the output layer are trainable.

seed

Random seed for initializing the network parameters.

atom_ener

Specifying the atomic energy contribution in vacuum. The set_davg_zero key in the descriptor should be set.

activation_function

The activation function \(\boldsymbol{\phi}\) in the embedding net. Supported options are “relu”, “relu6”, “softplus”, “sigmoid”, “tanh”, “gelu”.

precision

The precision of the embedding net parameters. Supported options are “default”, “float16”, “float32”, “float64”.

uniform_seed

Only for backward compatibility; retrieves the old behavior of using the random seed.
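
A minimal construction sketch in the same spirit (values are illustrative; the se_a descriptor is assumed to be configured elsewhere):

    from deepmd.descriptor import DescrptSeA
    from deepmd.fit import EnerFitting

    descrpt = DescrptSeA(rcut=6.0, rcut_smth=0.5, sel=[46, 92])  # hypothetical two-type system
    fitting = EnerFitting(
        descrpt,
        neuron=[240, 240, 240],   # hidden layer sizes of the fitting net
        resnet_dt=True,
        numb_fparam=0,            # no frame parameters
        numb_aparam=0,            # no atomic parameters
        seed=1,
    )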

Attributes
precision

Precision of fitting network.

Methods

build(inputs, natoms[, input_dict, reuse, ...])

Build the computational graph for fitting net

compute_input_stats(all_stat[, protection])

Compute the input statistics

compute_output_stats(all_stat)

Compute the output statistics

enable_compression(model_file[, suffix])

Set the fitting net attributes from the frozen model_file when fparam or aparam is not zero

enable_mixed_precision([mixed_prec])

Receive the mixed precision setting.

get_numb_aparam()

Get the number of atomic parameters

get_numb_fparam()

Get the number of frame parameters

init_variables(graph, graph_def[, suffix])

Init the fitting net variables with the given dict

build(inputs: tensorflow.python.framework.ops.Tensor, natoms: tensorflow.python.framework.ops.Tensor, input_dict: dict = None, reuse: bool = None, suffix: str = '') tensorflow.python.framework.ops.Tensor[source]

Build the computational graph for fitting net

Parameters
inputs

The input descriptor

input_dict

Additional dict for inputs. If numb_fparam > 0, it should have input_dict['fparam']; if numb_aparam > 0, it should have input_dict['aparam'].

natoms

The number of atoms. This tensor has the length of Ntypes + 2. natoms[0]: number of local atoms. natoms[1]: total number of atoms held by this processor. natoms[i] (2 <= i < Ntypes + 2): number of atoms of type i - 2.

reuse

The weights in the networks should be reused when getting the variables.

suffix

Name suffix to identify this descriptor

Returns
ener

The system energy

compute_input_stats(all_stat: dict, protection: float = 0.01) None[source]

Compute the input statistics

Parameters
all_stat

If numb_fparam > 0, it must have all_stat['fparam']; if numb_aparam > 0, it must have all_stat['aparam']. Can be prepared by model.make_stat_input.

protection

Divide-by-zero protection

compute_output_stats(all_stat: dict) None[source]

Compute the output statistics

Parameters
all_stat

Must have the following components: all_stat['energy'] of shape n_sys x n_batch x n_frame. Can be prepared by model.make_stat_input.
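
A rough sketch of the layout compute_output_stats expects (the shapes and the use of plain numpy arrays are assumptions for illustration; in practice all_stat is produced by model.make_stat_input):

    import numpy as np

    n_sys, n_batch, n_frame = 2, 4, 32   # hypothetical sizes
    all_stat = {
        # reference energies of the statistics frames, grouped per system and batch
        "energy": np.zeros((n_sys, n_batch, n_frame)),
    }
    # fitting.compute_output_stats(all_stat)  # fitting: an EnerFitting instance built as above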

enable_compression(model_file: str, suffix: str = '') None[source]

Set the fitting net attributes from the frozen model_file when fparam or aparam is not zero

Parameters
model_file : str

The input frozen model file

suffix : str, optional

The suffix of the scope

enable_mixed_precision(mixed_prec: Optional[dict] = None) None[source]

Receive the mixed precision setting.

Parameters
mixed_prec

The mixed precision setting used in the embedding net

get_numb_aparam() int[source]

Get the number of atomic parameters

get_numb_fparam() int[source]

Get the number of frame parameters

init_variables(graph: tensorflow.python.framework.ops.Graph, graph_def: tensorflow.core.framework.graph_pb2.GraphDef, suffix: str = '') None[source]

Init the fitting net variables with the given dict

Parameters
graph : tf.Graph

The input frozen model graph

graph_def : tf.GraphDef

The input frozen model graph_def

suffix : str

suffix to name scope

deepmd.fit.fitting module

class deepmd.fit.fitting.Fitting[source]

Bases: object

Attributes
precision

Precision of fitting network.

Methods

init_variables(graph, graph_def[, suffix])

Init the fitting net variables with the given dict

init_variables(graph: tensorflow.python.framework.ops.Graph, graph_def: tensorflow.core.framework.graph_pb2.GraphDef, suffix: str = '') None[source]

Init the fitting net variables with the given dict

Parameters
graph : tf.Graph

The input frozen model graph

graph_def : tf.GraphDef

The input frozen model graph_def

suffix : str

suffix to name scope

Notes

This method is called by others when the fitting supports initialization from the given variables.
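
A usage sketch, assuming a model has already been frozen to disk and that fitting is one of the fitting objects above; the graph is loaded with plain TensorFlow calls and the file name is hypothetical:

    import tensorflow.compat.v1 as tf

    # parse the frozen protobuf into a GraphDef and import it into a fresh Graph
    graph_def = tf.GraphDef()
    with tf.gfile.GFile("frozen_model.pb", "rb") as f:
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="")

    # re-initialize the fitting-net variables from the frozen model
    fitting.init_variables(graph, graph_def, suffix="")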

property precision: tensorflow.python.framework.dtypes.DType

Precision of fitting network.

deepmd.fit.polar module

class deepmd.fit.polar.GlobalPolarFittingSeA(descrpt: tensorflow.python.framework.ops.Tensor, neuron: List[int] = [120, 120, 120], resnet_dt: bool = True, sel_type: Optional[List[int]] = None, fit_diag: bool = True, scale: Optional[List[float]] = None, diag_shift: Optional[List[float]] = None, seed: Optional[int] = None, activation_function: str = 'tanh', precision: str = 'default')[source]

Bases: object

Fit the system polarizability with descriptor se_a

Parameters
descrpt : tf.Tensor

The descriptor

neuron : List[int]

Number of neurons in each hidden layer of the fitting net

resnet_dt : bool

Time-step dt in the resnet construction: y = x + dt * phi (Wx + b)

sel_type : List[int]

The atom types selected to have an atomic polarizability prediction

fit_diag : bool

Fit the diagonal part of the rotational invariant polarizability matrix, which will be converted to a normal polarizability matrix by contracting with the rotation matrix.

scale : List[float]

The output of the fitting net (polarizability matrix) for type i atom will be scaled by scale[i]

diag_shift : List[float]

The diagonal part of the polarizability matrix of type i will be shifted by diag_shift[i]. The shift operation is carried out after scale.

seed : int

Random seed for initializing the network parameters.

activation_function : str

The activation function in the embedding net. Supported options are “relu”, “relu6”, “softplus”, “sigmoid”, “tanh”, “gelu”.

precision : str

The precision of the embedding net parameters. Supported options are “default”, “float16”, “float32”, “float64”.
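
A minimal construction sketch (values are illustrative; the se_a descriptor is assumed to be configured elsewhere):

    from deepmd.descriptor import DescrptSeA
    from deepmd.fit import GlobalPolarFittingSeA

    descrpt = DescrptSeA(rcut=6.0, rcut_smth=0.5, sel=[46, 92])  # hypothetical two-type system
    fitting = GlobalPolarFittingSeA(
        descrpt,
        sel_type=[0],       # only type-0 atoms contribute to the fitted polarizability
        fit_diag=True,      # fit the diagonal of the rotationally invariant matrix
        scale=[1.0, 1.0],   # per-type scaling of the fitting-net output
    )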

Methods

build(input_d, rot_mat, natoms[, reuse, suffix])

Build the computational graph for fitting net

enable_mixed_precision([mixed_prec])

Receive the mixed precision setting.

get_out_size()

Get the output size.

get_sel_type()

Get selected atom types

init_variables(graph, graph_def[, suffix])

Init the fitting net variables with the given dict

build(input_d, rot_mat, natoms, reuse=None, suffix='') tensorflow.python.framework.ops.Tensor[source]

Build the computational graph for fitting net

Parameters
input_d

The input descriptor

rot_mat

The rotation matrix from the descriptor.

natoms

The number of atoms. This tensor has the length of Ntypes + 2. natoms[0]: number of local atoms. natoms[1]: total number of atoms held by this processor. natoms[i] (2 <= i < Ntypes + 2): number of atoms of type i - 2.

reuse

The weights in the networks should be reused when getting the variables.

suffix

Name suffix to identify this descriptor

Returns
polar

The system polarizability

enable_mixed_precision(mixed_prec: Optional[dict] = None) None[source]

Receive the mixed precision setting.

Parameters
mixed_prec

The mixed precision setting used in the embedding net

get_out_size() int[source]

Get the output size. Should be 9

get_sel_type() int[source]

Get selected atom types

init_variables(graph: tensorflow.python.framework.ops.Graph, graph_def: tensorflow.core.framework.graph_pb2.GraphDef, suffix: str = '') None[source]

Init the fitting net variables with the given dict

Parameters
graph : tf.Graph

The input frozen model graph

graph_def : tf.GraphDef

The input frozen model graph_def

suffix : str

suffix to name scope

class deepmd.fit.polar.PolarFittingLocFrame(jdata, descrpt)[source]

Bases: object

Fitting polarizability with local frame descriptor.

Deprecated since version 2.0.0: This class is not supported any more.

Methods

build

get_out_size

get_sel_type

build(input_d, rot_mat, natoms, reuse=None, suffix='')[source]
get_out_size()[source]
get_sel_type()[source]
class deepmd.fit.polar.PolarFittingSeA(descrpt: tensorflow.python.framework.ops.Tensor, neuron: List[int] = [120, 120, 120], resnet_dt: bool = True, sel_type: Optional[List[int]] = None, fit_diag: bool = True, scale: Optional[List[float]] = None, shift_diag: bool = True, seed: Optional[int] = None, activation_function: str = 'tanh', precision: str = 'default', uniform_seed: bool = False)[source]

Bases: deepmd.fit.fitting.Fitting

Fit the atomic polarizability with descriptor se_a

Parameters
descrpt : tf.Tensor

The descriptor

neuron : List[int]

Number of neurons in each hidden layer of the fitting net

resnet_dt : bool

Time-step dt in the resnet construction: y = x + dt * phi (Wx + b)

sel_type : List[int]

The atom types selected to have an atomic polarizability prediction. If None, all atoms are selected.

fit_diag : bool

Fit the diagonal part of the rotational invariant polarizability matrix, which will be converted to a normal polarizability matrix by contracting with the rotation matrix.

scale : List[float]

The output of the fitting net (polarizability matrix) for type i atom will be scaled by scale[i]

shift_diag : bool

Whether to shift the diagonal part of the polarizability matrix. The shift operation is carried out after scale.

seed : int

Random seed for initializing the network parameters.

activation_function : str

The activation function in the embedding net. Supported options are “relu”, “relu6”, “softplus”, “sigmoid”, “tanh”, “gelu”.

precision : str

The precision of the embedding net parameters. Supported options are “default”, “float16”, “float32”, “float64”.

uniform_seed : bool

Only for backward compatibility; retrieves the old behavior of using the random seed.
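
A minimal construction sketch (values are illustrative; the se_a descriptor is assumed to be configured elsewhere):

    from deepmd.descriptor import DescrptSeA
    from deepmd.fit import PolarFittingSeA

    descrpt = DescrptSeA(rcut=6.0, rcut_smth=0.5, sel=[46, 92])  # hypothetical two-type system
    fitting = PolarFittingSeA(
        descrpt,
        neuron=[120, 120, 120],
        sel_type=[0],      # predict atomic polarizabilities only for atoms of type 0
        fit_diag=True,
        seed=1,
    )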

Attributes
precision

Precision of fitting network.

Methods

build(input_d, rot_mat, natoms[, reuse, suffix])

Build the computational graph for fitting net

compute_input_stats(all_stat[, protection])

Compute the input statistics

enable_mixed_precision([mixed_prec])

Receive the mixed precision setting.

get_out_size()

Get the output size.

get_sel_type()

Get selected atom types

init_variables(graph, graph_def[, suffix])

Init the fitting net variables with the given dict

build(input_d: tensorflow.python.framework.ops.Tensor, rot_mat: tensorflow.python.framework.ops.Tensor, natoms: tensorflow.python.framework.ops.Tensor, reuse: bool = None, suffix: str = '')[source]

Build the computational graph for fitting net

Parameters
input_d

The input descriptor

rot_mat

The rotation matrix from the descriptor.

natoms

The number of atoms. This tensor has the length of Ntypes + 2. natoms[0]: number of local atoms. natoms[1]: total number of atoms held by this processor. natoms[i] (2 <= i < Ntypes + 2): number of atoms of type i - 2.

reuse

The weights in the networks should be reused when getting the variables.

suffix

Name suffix to identify this descriptor

Returns
atomic_polar

The atomic polarizability

compute_input_stats(all_stat, protection=0.01)[source]

Compute the input statistics

Parameters
all_stat

Dictionary of inputs. Can be prepared by model.make_stat_input.

protection

Divide-by-zero protection

enable_mixed_precision(mixed_prec: Optional[dict] = None) None[source]

Receive the mixed precision setting.

Parameters
mixed_prec

The mixed precision setting used in the embedding net

get_out_size() int[source]

Get the output size. Should be 9

get_sel_type() List[int][source]

Get selected atom types

init_variables(graph: tensorflow.python.framework.ops.Graph, graph_def: tensorflow.core.framework.graph_pb2.GraphDef, suffix: str = '') None[source]

Init the fitting net variables with the given dict

Parameters
graph : tf.Graph

The input frozen model graph

graph_def : tf.GraphDef

The input frozen model graph_def

suffix : str

suffix to name scope

deepmd.fit.wfc module

class deepmd.fit.wfc.WFCFitting(jdata, descrpt)[source]

Bases: object

Fitting Wannier function centers (WFCs) with local frame descriptor.

Deprecated since version 2.0.0: This class is not supported any more.

Methods

build

get_out_size

get_sel_type

get_wfc_numb

build(input_d, rot_mat, natoms, reuse=None, suffix='')[source]
get_out_size()[source]
get_sel_type()[source]
get_wfc_numb()[source]