deepmd.tf.loss.ener#

Classes#

EnerStdLoss

Standard loss function for DP models.

EnerSpinLoss

The abstract class for the loss function.

EnerDipoleLoss

The abstract class for the loss function.
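In practice these classes are rarely constructed by hand: the trainer selects one through the loss section of the training input, whose keys mirror the constructor arguments documented below. A hedged sketch of that section, written as a Python dict ("type": "ener" is assumed to map to EnerStdLoss; the exact key set depends on which terms are enabled):

    # Sketch of a training-input "loss" section (shown as a Python dict).
    # "type": "ener" is assumed to select EnerStdLoss; the other keys mirror
    # the constructor arguments documented in this module.
    loss_section = {
        "type": "ener",
        "start_pref_e": 0.02,
        "limit_pref_e": 1.0,
        "start_pref_f": 1000,
        "limit_pref_f": 1.0,
        "start_pref_v": 0.0,
        "limit_pref_v": 0.0,
    }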

Module Contents#

class deepmd.tf.loss.ener.EnerStdLoss(starter_learning_rate: float, start_pref_e: float = 0.02, limit_pref_e: float = 1.0, start_pref_f: float = 1000, limit_pref_f: float = 1.0, start_pref_v: float = 0.0, limit_pref_v: float = 0.0, start_pref_ae: float = 0.0, limit_pref_ae: float = 0.0, start_pref_pf: float = 0.0, limit_pref_pf: float = 0.0, relative_f: float | None = None, enable_atom_ener_coeff: bool = False, start_pref_gf: float = 0.0, limit_pref_gf: float = 0.0, numb_generalized_coord: int = 0, **kwargs)[source]#

Bases: deepmd.tf.loss.loss.Loss

Standard loss function for DP models.

Parameters:
starter_learning_rate : float

The learning rate at the start of the training.

start_pref_e : float

The prefactor of energy loss at the start of the training.

limit_pref_e : float

The prefactor of energy loss at the end of the training.

start_pref_f : float

The prefactor of force loss at the start of the training.

limit_pref_f : float

The prefactor of force loss at the end of the training.

start_pref_v : float

The prefactor of virial loss at the start of the training.

limit_pref_v : float

The prefactor of virial loss at the end of the training.

start_pref_ae : float

The prefactor of atomic energy loss at the start of the training.

limit_pref_ae : float

The prefactor of atomic energy loss at the end of the training.

start_pref_pf : float

The prefactor of atomic prefactor force loss at the start of the training.

limit_pref_pf : float

The prefactor of atomic prefactor force loss at the end of the training.

relative_f : float

If provided, relative force error will be used in the loss: the force difference is normalized by the magnitude of the label force, shifted by relative_f.

enable_atom_ener_coeff : bool

If True, the total energy is computed as sum_i c_i E_i.

start_pref_gf : float

The prefactor of generalized force loss at the start of the training.

limit_pref_gf : float

The prefactor of generalized force loss at the end of the training.

numb_generalized_coord : int

The dimension of generalized coordinates.

**kwargs

Other keyword arguments.
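Each start/limit pair is blended as the learning rate decays, so a term can dominate early in training and fade later. A minimal construction sketch (hypothetical values; the interpolation formula pref = limit + (start - limit) * lr / starter_lr follows the usual DP convention and is stated here as an assumption):

    from deepmd.tf.loss.ener import EnerStdLoss

    # Illustrative prefactors: the force term dominates early, the energy term later.
    loss = EnerStdLoss(
        starter_learning_rate=1.0e-3,
        start_pref_e=0.02,
        limit_pref_e=1.0,
        start_pref_f=1000.0,
        limit_pref_f=1.0,
        start_pref_v=0.0,   # virial term disabled (both prefactors zero)
        limit_pref_v=0.0,
    )

    # Assumed interpolation of a prefactor as the learning rate lr decays
    # from starter_learning_rate towards zero:
    def pref(start, limit, lr, starter_lr=1.0e-3):
        return limit + (start - limit) * lr / starter_lr

    print(pref(0.02, 1.0, 1.0e-3))  # 0.02 at the first step
    print(pref(0.02, 1.0, 1.0e-5))  # ~0.99 once the learning rate has decayed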

starter_learning_rate[source]#
start_pref_e[source]#
limit_pref_e[source]#
start_pref_f[source]#
limit_pref_f[source]#
start_pref_v[source]#
limit_pref_v[source]#
start_pref_ae[source]#
limit_pref_ae[source]#
start_pref_pf[source]#
limit_pref_pf[source]#
relative_f[source]#
enable_atom_ener_coeff[source]#
start_pref_gf[source]#
limit_pref_gf[source]#
numb_generalized_coord[source]#
has_e[source]#
has_f[source]#
has_v[source]#
has_ae[source]#
has_pf[source]#
has_gf[source]#
build(learning_rate, natoms, model_dict, label_dict, suffix)[source]#

Build the loss function graph.

Parameters:
learning_rate : tf.Tensor

learning rate

natoms : tf.Tensor

number of atoms

model_dict : dict[str, tf.Tensor]

A dictionary that maps model keys to tensors

label_dict : dict[str, tf.Tensor]

A dictionary that maps label keys to tensors

suffix : str

suffix

Returns:
tf.Tensor

the total squared loss

dict[str, tf.Tensor]

A dictionary that maps loss keys to more loss tensors
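The first return value is the prefactor-weighted sum of the individual mean-squared-error terms. A schematic NumPy sketch of that weighting (not the actual graph code; the per-atom normalizations follow the usual DP convention and are assumptions here):

    import numpy as np

    def weighted_ener_loss(de, df, dv, natoms, pref_e, pref_f, pref_v):
        """Schematic energy/force/virial loss as a prefactor-weighted sum."""
        l_e = np.mean((de / natoms) ** 2)   # energy error, normalized per atom
        l_f = np.mean(df ** 2)              # mean squared force-component error
        l_v = np.mean((dv / natoms) ** 2)   # virial error, normalized per atom
        return pref_e * l_e + pref_f * l_f + pref_v * l_v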

eval(sess, feed_dict, natoms)[source]#

Evaluate the loss function.

Parameters:
sess : tf.Session

TensorFlow session

feed_dict : dict[tf.placeholder, tf.Tensor]

A dictionary that maps graph elements to values

natoms : tf.Tensor

number of atoms

Returns:
dict

A dictionary that maps keys to values. It should contain the key natoms.
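A sketch of the call pattern at validation time (the session, feed dictionary and natoms vector come from the trainer; result keys other than natoms are assumptions that depend on the enabled loss terms):

    def report_validation(loss, sess, feed_dict, natoms):
        # Evaluate the loss terms on a validation batch and print them.
        results = loss.eval(sess, feed_dict, natoms)
        for key, value in results.items():
            if key != "natoms":
                print(f"{key}: {value}")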

property label_requirement: list[deepmd.utils.data.DataRequirementItem][source]#

Return data label requirements needed for this loss calculation.
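The trainer uses this list to decide which labels (energy, force, virial, ...) to load from the data systems. A hedged inspection sketch (the attribute names key, ndof, atomic and must are taken from DataRequirementItem; treat anything beyond that as an assumption):

    from deepmd.tf.loss.ener import EnerStdLoss

    loss = EnerStdLoss(starter_learning_rate=1.0e-3)  # illustrative settings
    for req in loss.label_requirement:
        print(req.key, "ndof:", req.ndof, "atomic:", req.atomic, "required:", req.must)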

class deepmd.tf.loss.ener.EnerSpinLoss(starter_learning_rate: float, start_pref_e: float = 0.02, limit_pref_e: float = 1.0, start_pref_fr: float = 1000, limit_pref_fr: float = 1.0, start_pref_fm: float = 10000, limit_pref_fm: float = 10.0, start_pref_v: float = 0.0, limit_pref_v: float = 0.0, start_pref_ae: float = 0.0, limit_pref_ae: float = 0.0, start_pref_pf: float = 0.0, limit_pref_pf: float = 0.0, relative_f: float | None = None, enable_atom_ener_coeff: bool = False, use_spin: list | None = None)[source]#

Bases: deepmd.tf.loss.loss.Loss

The abstract class for the loss function.
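The docstring above is inherited from the base class; the signature shows the spin-specific knobs, presumably separate prefactors for the real-atom force term (fr) and the magnetic force term (fm), plus a per-type use_spin mask. A hypothetical construction sketch:

    from deepmd.tf.loss.ener import EnerSpinLoss

    # Hypothetical two-type system where only the first type carries spin;
    # the per-type meaning of use_spin is an assumption.
    spin_loss = EnerSpinLoss(
        starter_learning_rate=1.0e-3,
        start_pref_e=0.02,
        limit_pref_e=1.0,
        start_pref_fr=1000.0,    # real-atom force prefactor (assumed meaning)
        limit_pref_fr=1.0,
        start_pref_fm=10000.0,   # magnetic force prefactor (assumed meaning)
        limit_pref_fm=10.0,
        use_spin=[True, False],
    )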

starter_learning_rate[source]#
start_pref_e[source]#
limit_pref_e[source]#
start_pref_fr[source]#
limit_pref_fr[source]#
start_pref_fm[source]#
limit_pref_fm[source]#
start_pref_v[source]#
limit_pref_v[source]#
start_pref_ae[source]#
limit_pref_ae[source]#
start_pref_pf[source]#
limit_pref_pf[source]#
relative_f[source]#
enable_atom_ener_coeff[source]#
use_spin[source]#
has_e[source]#
has_fr[source]#
has_fm[source]#
has_v[source]#
has_ae[source]#
build(learning_rate, natoms, model_dict, label_dict, suffix)[source]#

Build the loss function graph.

Parameters:
learning_rate : tf.Tensor

learning rate

natoms : tf.Tensor

number of atoms

model_dict : dict[str, tf.Tensor]

A dictionary that maps model keys to tensors

label_dict : dict[str, tf.Tensor]

A dictionary that maps label keys to tensors

suffix : str

suffix

Returns:
tf.Tensor

the total squared loss

dict[str, tf.Tensor]

A dictionary that maps loss keys to more loss tensors

eval(sess, feed_dict, natoms)[source]#

Evaluate the loss function.

Parameters:
sess : tf.Session

TensorFlow session

feed_dict : dict[tf.placeholder, tf.Tensor]

A dictionary that maps graph elements to values

natoms : tf.Tensor

number of atoms

Returns:
dict

A dictionary that maps keys to values. It should contain the key natoms.

print_header()[source]#
print_on_training(tb_writer, cur_batch, sess, natoms, feed_dict_test, feed_dict_batch)[source]#
property label_requirement: list[deepmd.utils.data.DataRequirementItem][source]#

Return data label requirements needed for this loss calculation.

class deepmd.tf.loss.ener.EnerDipoleLoss(starter_learning_rate: float, start_pref_e: float = 0.1, limit_pref_e: float = 1.0, start_pref_ed: float = 1.0, limit_pref_ed: float = 1.0)[source]#

Bases: deepmd.tf.loss.loss.Loss

The abstract class for the loss function.
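The docstring is again inherited; the signature carries only energy and energy-dipole prefactors. A minimal hypothetical construction (values mirror the defaults above):

    from deepmd.tf.loss.ener import EnerDipoleLoss

    ed_loss = EnerDipoleLoss(
        starter_learning_rate=1.0e-3,
        start_pref_e=0.1,
        limit_pref_e=1.0,
        start_pref_ed=1.0,   # "ed" presumably weights the energy-dipole term
        limit_pref_ed=1.0,
    )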

starter_learning_rate[source]#
start_pref_e[source]#
limit_pref_e[source]#
start_pref_ed[source]#
limit_pref_ed[source]#
build(learning_rate, natoms, model_dict, label_dict, suffix)[source]#

Build the loss function graph.

Parameters:
learning_rate : tf.Tensor

learning rate

natoms : tf.Tensor

number of atoms

model_dict : dict[str, tf.Tensor]

A dictionary that maps model keys to tensors

label_dict : dict[str, tf.Tensor]

A dictionary that maps label keys to tensors

suffix : str

suffix

Returns:
tf.Tensor

the total squared loss

dict[str, tf.Tensor]

A dictionary that maps loss keys to more loss tensors

eval(sess, feed_dict, natoms)[source]#

Evaluate the loss function.

Parameters:
sess : tf.Session

TensorFlow session

feed_dict : dict[tf.placeholder, tf.Tensor]

A dictionary that maps graph elements to values

natoms : tf.Tensor

number of atoms

Returns:
dict

A dictionary that maps keys to values. It should contain the key natoms.

property label_requirement: list[deepmd.utils.data.DataRequirementItem][source]#

Return data label requirements needed for this loss calculation.