deepmd.tf.loss.ener#
Classes#
- `EnerStdLoss`: Standard loss function for DP models.
- `EnerSpinLoss`: The abstract class for the loss function.
- `EnerDipoleLoss`: The abstract class for the loss function.
Module Contents#
- class deepmd.tf.loss.ener.EnerStdLoss(starter_learning_rate: float, start_pref_e: float = 0.02, limit_pref_e: float = 1.0, start_pref_f: float = 1000, limit_pref_f: float = 1.0, start_pref_v: float = 0.0, limit_pref_v: float = 0.0, start_pref_ae: float = 0.0, limit_pref_ae: float = 0.0, start_pref_pf: float = 0.0, limit_pref_pf: float = 0.0, relative_f: float | None = None, enable_atom_ener_coeff: bool = False, start_pref_gf: float = 0.0, limit_pref_gf: float = 0.0, numb_generalized_coord: int = 0, **kwargs)[source]#
Bases:
deepmd.tf.loss.loss.Loss
Standard loss function for DP models.
- Parameters:
- starter_learning_rate
float
The learning rate at the start of the training.
- start_pref_e
float
The prefactor of energy loss at the start of the training.
- limit_pref_e
float
The prefactor of energy loss at the end of the training.
- start_pref_f
float
The prefactor of force loss at the start of the training.
- limit_pref_f
float
The prefactor of force loss at the end of the training.
- start_pref_v
float
The prefactor of virial loss at the start of the training.
- limit_pref_v
float
The prefactor of virial loss at the end of the training.
- start_pref_ae
float
The prefactor of atomic energy loss at the start of the training.
- limit_pref_ae
float
The prefactor of atomic energy loss at the end of the training.
- start_pref_pf
float
The prefactor of atomic prefactor force loss at the start of the training.
- limit_pref_pf
float
The prefactor of atomic prefactor force loss at the end of the training.
- relative_f
float
If provided, the relative force error will be used in the loss. The force difference will be normalized by the magnitude of the force in the label, with a shift given by relative_f.
- enable_atom_ener_coeff
bool
If true, the energy will be computed as sum_i c_i E_i.
- start_pref_gf
float
The prefactor of generalized force loss at the start of the training.
- limit_pref_gf
float
The prefactor of generalized force loss at the end of the training.
- numb_generalized_coord
int
The dimension of generalized coordinates.
- **kwargs
Other keyword arguments.
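Each start/limit prefactor pair above is interpolated during training as the learning rate decays: the effective prefactor equals its start value at the starter learning rate and approaches its limit value as the learning rate goes to zero. A minimal sketch of such a schedule (the linear-in-learning-rate interpolation is an assumption here; consult the class source for the exact rule):

```python
def scheduled_prefactor(start_pref: float, limit_pref: float,
                        lr: float, starter_lr: float) -> float:
    """Interpolate a loss prefactor between its start and limit values.

    Assumed rule: the prefactor follows the learning-rate decay linearly,
    equaling start_pref when lr == starter_lr and approaching limit_pref
    as lr -> 0.
    """
    return limit_pref + (start_pref - limit_pref) * lr / starter_lr


# With the default force prefactors (start_pref_f=1000, limit_pref_f=1.0):
pref_f_begin = scheduled_prefactor(1000.0, 1.0, 1e-3, 1e-3)  # start of training
pref_f_end = scheduled_prefactor(1000.0, 1.0, 0.0, 1e-3)     # limit of training
```

Under this sketch the force term dominates early training and is weighted down as training converges, while the energy term (start_pref_e=0.02, limit_pref_e=1.0) moves in the opposite direction.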
- build(learning_rate, natoms, model_dict, label_dict, suffix)[source]#
Build the loss function graph.
- Parameters:
- Returns:
- property label_requirement: list[deepmd.utils.data.DataRequirementItem][source]#
Return data label requirements needed for this loss calculation.
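When relative_f is set, the force residual is normalized by the label force magnitude plus a shift, as described in the parameter list above. A hedged sketch of that normalization (the per-atom norm and the exact placement of the shift are assumptions based on the parameter description, not a copy of the implementation):

```python
import numpy as np

def relative_force_residual(f_pred: np.ndarray, f_label: np.ndarray,
                            relative_f: float) -> np.ndarray:
    """Normalize the force difference by the label force magnitude.

    Assumed form: (f_pred_i - f_label_i) / (|f_label_i| + relative_f)
    for each atom i, so atoms with large label forces contribute a
    relative rather than absolute error.
    """
    diff = f_pred - f_label                                  # raw error, (natoms, 3)
    norm = np.linalg.norm(f_label, axis=-1, keepdims=True)   # per-atom magnitude
    return diff / (norm + relative_f)                        # shifted relative error


# A zero prediction against a unit label force with shift 0.1
# yields a residual of -1 / 1.1 in the x component:
res = relative_force_residual(np.zeros((1, 3)),
                              np.array([[1.0, 0.0, 0.0]]), 0.1)
```

The shift keeps the residual finite for atoms whose label force is near zero, where a bare relative error would blow up.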
- class deepmd.tf.loss.ener.EnerSpinLoss(starter_learning_rate: float, start_pref_e: float = 0.02, limit_pref_e: float = 1.0, start_pref_fr: float = 1000, limit_pref_fr: float = 1.0, start_pref_fm: float = 10000, limit_pref_fm: float = 10.0, start_pref_v: float = 0.0, limit_pref_v: float = 0.0, start_pref_ae: float = 0.0, limit_pref_ae: float = 0.0, start_pref_pf: float = 0.0, limit_pref_pf: float = 0.0, relative_f: float | None = None, enable_atom_ener_coeff: bool = False, use_spin: list | None = None)[source]#
Bases:
deepmd.tf.loss.loss.Loss
The abstract class for the loss function.
- build(learning_rate, natoms, model_dict, label_dict, suffix)[source]#
Build the loss function graph.
- Parameters:
- Returns:
- property label_requirement: list[deepmd.utils.data.DataRequirementItem][source]#
Return data label requirements needed for this loss calculation.
- class deepmd.tf.loss.ener.EnerDipoleLoss(starter_learning_rate: float, start_pref_e: float = 0.1, limit_pref_e: float = 1.0, start_pref_ed: float = 1.0, limit_pref_ed: float = 1.0)[source]#
Bases:
deepmd.tf.loss.loss.Loss
The abstract class for the loss function.
- build(learning_rate, natoms, model_dict, label_dict, suffix)[source]#
Build the loss function graph.
- Parameters:
- Returns:
- property label_requirement: list[deepmd.utils.data.DataRequirementItem][source]#
Return data label requirements needed for this loss calculation.