deepmd.tf.loss
Package Contents
Classes
| Class | Description |
| --- | --- |
| DOSLoss | Loss function for DeepDOS models. |
| EnerDipoleLoss | Loss function for models trained on energy and dipole data. |
| EnerSpinLoss | Loss function for energy models of spin systems. |
| EnerStdLoss | Standard loss function for DP models. |
| TensorLoss | Loss function for tensorial properties. |
- class deepmd.tf.loss.DOSLoss(starter_learning_rate: float, numb_dos: int = 500, start_pref_dos: float = 1.0, limit_pref_dos: float = 1.0, start_pref_cdf: float = 1000, limit_pref_cdf: float = 1.0, start_pref_ados: float = 0.0, limit_pref_ados: float = 0.0, start_pref_acdf: float = 0.0, limit_pref_acdf: float = 0.0, protect_value: float = 1e-08, log_fit: bool = False, **kwargs)[source]
Bases:
deepmd.tf.loss.loss.Loss
Loss function for DeepDOS models.
- build(learning_rate, natoms, model_dict, label_dict, suffix)[source]
Build the loss function graph.
- Parameters:
  - learning_rate: the learning rate tensor.
  - natoms: tensor holding the number of atoms and the number of atoms of each type.
  - model_dict: dict mapping output names to the model's prediction tensors.
  - label_dict: dict mapping label names to the training-data tensors.
  - suffix: suffix appended to the variable scope.
- Returns:
  - The total loss tensor and a dict of its named component losses.
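The paired `start_pref_*`/`limit_pref_*` arguments above define a prefactor schedule tied to the learning-rate decay. A minimal sketch of one plausible interpolation rule (an assumption for illustration, not the verbatim deepmd-kit code):

```python
# Sketch: interpolate a loss prefactor between its start and limit values
# as the learning rate decays. The linear-in-lr rule below is an assumed
# illustration of the start_pref_* / limit_pref_* mechanism.

def interpolated_pref(start_pref: float, limit_pref: float,
                      current_lr: float, starter_lr: float) -> float:
    """Scale the prefactor with the current/starter learning-rate ratio."""
    return limit_pref + (start_pref - limit_pref) * current_lr / starter_lr

# At step 0 (current_lr == starter_lr) the prefactor equals start_pref;
# as the learning rate decays toward zero it approaches limit_pref.
print(interpolated_pref(1000.0, 1.0, 0.001, 0.001))  # 1000.0
print(interpolated_pref(1000.0, 1.0, 0.0, 0.001))    # 1.0
```

With this rule, terms such as the CDF loss (start prefactor 1000, limit 1.0) dominate early training and relax as the optimizer converges.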
- class deepmd.tf.loss.EnerDipoleLoss(starter_learning_rate: float, start_pref_e: float = 0.1, limit_pref_e: float = 1.0, start_pref_ed: float = 1.0, limit_pref_ed: float = 1.0)[source]
Bases:
deepmd.tf.loss.loss.Loss
Loss function for models trained on energy and dipole data.
- build(learning_rate, natoms, model_dict, label_dict, suffix)[source]
Build the loss function graph.
- Parameters:
  - learning_rate: the learning rate tensor.
  - natoms: tensor holding the number of atoms and the number of atoms of each type.
  - model_dict: dict mapping output names to the model's prediction tensors.
  - label_dict: dict mapping label names to the training-data tensors.
  - suffix: suffix appended to the variable scope.
- Returns:
  - The total loss tensor and a dict of its named component losses.
- class deepmd.tf.loss.EnerSpinLoss(starter_learning_rate: float, start_pref_e: float = 0.02, limit_pref_e: float = 1.0, start_pref_fr: float = 1000, limit_pref_fr: float = 1.0, start_pref_fm: float = 10000, limit_pref_fm: float = 10.0, start_pref_v: float = 0.0, limit_pref_v: float = 0.0, start_pref_ae: float = 0.0, limit_pref_ae: float = 0.0, start_pref_pf: float = 0.0, limit_pref_pf: float = 0.0, relative_f: float | None = None, enable_atom_ener_coeff: bool = False, use_spin: list | None = None)[source]
Bases:
deepmd.tf.loss.loss.Loss
Loss function for energy models of spin systems.
- build(learning_rate, natoms, model_dict, label_dict, suffix)[source]
Build the loss function graph.
- Parameters:
  - learning_rate: the learning rate tensor.
  - natoms: tensor holding the number of atoms and the number of atoms of each type.
  - model_dict: dict mapping output names to the model's prediction tensors.
  - label_dict: dict mapping label names to the training-data tensors.
  - suffix: suffix appended to the variable scope.
- Returns:
  - The total loss tensor and a dict of its named component losses.
- class deepmd.tf.loss.EnerStdLoss(starter_learning_rate: float, start_pref_e: float = 0.02, limit_pref_e: float = 1.0, start_pref_f: float = 1000, limit_pref_f: float = 1.0, start_pref_v: float = 0.0, limit_pref_v: float = 0.0, start_pref_ae: float = 0.0, limit_pref_ae: float = 0.0, start_pref_pf: float = 0.0, limit_pref_pf: float = 0.0, relative_f: float | None = None, enable_atom_ener_coeff: bool = False, start_pref_gf: float = 0.0, limit_pref_gf: float = 0.0, numb_generalized_coord: int = 0, **kwargs)[source]
Bases:
deepmd.tf.loss.loss.Loss
Standard loss function for DP models.
- Parameters:
- starter_learning_rate
float
The learning rate at the start of the training.
- start_pref_e
float
The prefactor of energy loss at the start of the training.
- limit_pref_e
float
The prefactor of energy loss at the end of the training.
- start_pref_f
float
The prefactor of force loss at the start of the training.
- limit_pref_f
float
The prefactor of force loss at the end of the training.
- start_pref_v
float
The prefactor of virial loss at the start of the training.
- limit_pref_v
float
The prefactor of virial loss at the end of the training.
- start_pref_ae
float
The prefactor of atomic energy loss at the start of the training.
- limit_pref_ae
float
The prefactor of atomic energy loss at the end of the training.
- start_pref_pf
float
The prefactor of atomic prefactor force loss at the start of the training.
- limit_pref_pf
float
The prefactor of atomic prefactor force loss at the end of the training.
- relative_f
float
If provided, the relative force error is used in the loss: the difference between the predicted and label forces is normalized by the magnitude of the label force, shifted by relative_f.
- enable_atom_ener_coeff
bool
If True, the energy is computed as sum_i c_i E_i, where c_i is a per-atom coefficient.
- start_pref_gf
float
The prefactor of generalized force loss at the start of the training.
- limit_pref_gf
float
The prefactor of generalized force loss at the end of the training.
- numb_generalized_coord
int
The dimension of generalized coordinates.
- **kwargs
Other keyword arguments.
- build(learning_rate, natoms, model_dict, label_dict, suffix)[source]
Build the loss function graph.
- Parameters:
  - learning_rate: the learning rate tensor.
  - natoms: tensor holding the number of atoms and the number of atoms of each type.
  - model_dict: dict mapping output names to the model's prediction tensors.
  - label_dict: dict mapping label names to the training-data tensors.
  - suffix: suffix appended to the variable scope.
- Returns:
  - The total loss tensor and a dict of its named component losses.
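Putting the prefactors together, the standard loss is a weighted sum of normalized squared errors over the available labels. The sketch below shows the energy and force terms in plain Python; the exact normalizations (per-atom energy, per-component force) follow the usual DeepMD convention but are assumptions here, not the implementation:

```python
# Sketch of the weighted energy/force sum behind EnerStdLoss.
# pref_e and pref_f play the role of the (interpolated) energy and
# force prefactors; tensors are replaced by plain Python values.

def ener_std_loss(pref_e: float, pref_f: float,
                  e_pred: float, e_label: float,
                  f_pred: list, f_label: list,
                  natoms: int) -> float:
    # Energy term: squared error of the energy per atom.
    l_e = ((e_pred - e_label) / natoms) ** 2
    # Force term: mean squared error over all 3*natoms force components.
    l_f = sum((p - q) ** 2 for p, q in zip(f_pred, f_label)) / (3 * natoms)
    return pref_e * l_e + pref_f * l_f

loss = ener_std_loss(0.02, 1000.0,
                     e_pred=-10.0, e_label=-10.2,
                     f_pred=[0.1, 0.0, 0.0], f_label=[0.0, 0.0, 0.0],
                     natoms=1)
```

The default prefactors (start_pref_e=0.02, start_pref_f=1000) make force errors dominate early training, consistent with forces carrying far more data points per frame than the single energy label.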
- class deepmd.tf.loss.TensorLoss(jdata, **kwarg)[source]
Bases:
deepmd.tf.loss.loss.Loss
Loss function for tensorial properties.
- build(learning_rate, natoms, model_dict, label_dict, suffix)[source]
Build the loss function graph.
- Parameters:
  - learning_rate: the learning rate tensor.
  - natoms: tensor holding the number of atoms and the number of atoms of each type.
  - model_dict: dict mapping output names to the model's prediction tensors.
  - label_dict: dict mapping label names to the training-data tensors.
  - suffix: suffix appended to the variable scope.
- Returns:
  - The total loss tensor and a dict of its named component losses.
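All five classes share the `build` interface documented above. The self-contained sketch below mimics that interface with plain Python values in place of TensorFlow tensors; the `ToyEnergyLoss` subclass is hypothetical, for illustration only:

```python
from abc import ABC, abstractmethod

# Minimal stand-in for the Loss base-class interface. Argument names
# mirror the documented build() signature; types are simplified so the
# sketch runs without TensorFlow.

class Loss(ABC):
    @abstractmethod
    def build(self, learning_rate, natoms, model_dict, label_dict, suffix):
        """Return (loss, more_loss): the total loss and named components."""

class ToyEnergyLoss(Loss):
    def build(self, learning_rate, natoms, model_dict, label_dict, suffix):
        # Squared error of the per-atom energy, reported under a
        # suffix-qualified key as concrete losses do for their components.
        diff = model_dict["energy"] - label_dict["energy"]
        l2 = (diff / natoms) ** 2
        return l2, {"l2_ener" + suffix: l2}

loss, more = ToyEnergyLoss().build(
    learning_rate=1e-3, natoms=2,
    model_dict={"energy": -4.0}, label_dict={"energy": -4.2},
    suffix="_test")
```

Returning both the scalar loss and a dict of components lets the trainer optimize the former while logging the latter per term.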