deepmd.loss package
- class deepmd.loss.DOSLoss(starter_learning_rate: float, numb_dos: int = 500, start_pref_dos: float = 1.0, limit_pref_dos: float = 1.0, start_pref_cdf: float = 1000, limit_pref_cdf: float = 1.0, start_pref_ados: float = 0.0, limit_pref_ados: float = 0.0, start_pref_acdf: float = 0.0, limit_pref_acdf: float = 0.0, protect_value: float = 1e-08, log_fit: bool = False, **kwargs)[source]
Bases: Loss

Loss function for DeepDOS models.

Methods

build(learning_rate, natoms, model_dict, ...): Build the loss function graph.
display_if_exist(loss, find_property): Display NaN if labeled property is not found.
eval(sess, feed_dict, natoms): Eval the loss function.
- class deepmd.loss.EnerDipoleLoss(starter_learning_rate: float, start_pref_e: float = 0.1, limit_pref_e: float = 1.0, start_pref_ed: float = 1.0, limit_pref_ed: float = 1.0)[source]
Bases: Loss

Methods

build(learning_rate, natoms, model_dict, ...): Build the loss function graph.
display_if_exist(loss, find_property): Display NaN if labeled property is not found.
eval(sess, feed_dict, natoms): Eval the loss function.
- class deepmd.loss.EnerSpinLoss(starter_learning_rate: float, start_pref_e: float = 0.02, limit_pref_e: float = 1.0, start_pref_fr: float = 1000, limit_pref_fr: float = 1.0, start_pref_fm: float = 10000, limit_pref_fm: float = 10.0, start_pref_v: float = 0.0, limit_pref_v: float = 0.0, start_pref_ae: float = 0.0, limit_pref_ae: float = 0.0, start_pref_pf: float = 0.0, limit_pref_pf: float = 0.0, relative_f: Optional[float] = None, enable_atom_ener_coeff: bool = False, use_spin: Optional[list] = None)[source]
Bases: Loss

Methods

build(learning_rate, natoms, model_dict, ...): Build the loss function graph.
display_if_exist(loss, find_property): Display NaN if labeled property is not found.
eval(sess, feed_dict, natoms): Eval the loss function.
print_header
print_on_training
- build(learning_rate, natoms, model_dict, label_dict, suffix)[source]
Build the loss function graph.
- Parameters
  learning_rate, natoms, model_dict, label_dict, suffix (see Loss.build)
- Returns
  Tuple[Tensor, Dict[str, Tensor]]: the total loss tensor and a dict of component losses for display.
- class deepmd.loss.EnerStdLoss(starter_learning_rate: float, start_pref_e: float = 0.02, limit_pref_e: float = 1.0, start_pref_f: float = 1000, limit_pref_f: float = 1.0, start_pref_v: float = 0.0, limit_pref_v: float = 0.0, start_pref_ae: float = 0.0, limit_pref_ae: float = 0.0, start_pref_pf: float = 0.0, limit_pref_pf: float = 0.0, relative_f: Optional[float] = None, enable_atom_ener_coeff: bool = False, start_pref_gf: float = 0.0, limit_pref_gf: float = 0.0, numb_generalized_coord: int = 0, **kwargs)[source]
Bases: Loss

Standard loss function for DP models.
- Parameters
  - starter_learning_rate : float
    The learning rate at the start of the training.
  - start_pref_e : float
    The prefactor of energy loss at the start of the training.
  - limit_pref_e : float
    The prefactor of energy loss at the end of the training.
  - start_pref_f : float
    The prefactor of force loss at the start of the training.
  - limit_pref_f : float
    The prefactor of force loss at the end of the training.
  - start_pref_v : float
    The prefactor of virial loss at the start of the training.
  - limit_pref_v : float
    The prefactor of virial loss at the end of the training.
  - start_pref_ae : float
    The prefactor of atomic energy loss at the start of the training.
  - limit_pref_ae : float
    The prefactor of atomic energy loss at the end of the training.
  - start_pref_pf : float
    The prefactor of atomic prefactor force loss at the start of the training.
  - limit_pref_pf : float
    The prefactor of atomic prefactor force loss at the end of the training.
  - relative_f : float
    If provided, the relative force error is used in the loss: the force difference is normalized by the magnitude of the labeled force, with a shift given by relative_f.
  - enable_atom_ener_coeff : bool
    If true, the energy is computed as sum_i c_i E_i.
  - start_pref_gf : float
    The prefactor of generalized force loss at the start of the training.
  - limit_pref_gf : float
    The prefactor of generalized force loss at the end of the training.
  - numb_generalized_coord : int
    The dimension of generalized coordinates.
  - **kwargs
    Other keyword arguments.
Attributes
- starter_learning_rate
Methods

build(learning_rate, natoms, model_dict, ...): Build the loss function graph.
display_if_exist(loss, find_property): Display NaN if labeled property is not found.
eval(sess, feed_dict, natoms): Eval the loss function.
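The start_pref_*/limit_pref_* pairs above define prefactors that move from their start value to their limit value as training proceeds. A minimal sketch of the schedule, assuming (as in DeePMD-kit's energy losses) that each prefactor follows the learning-rate decay:

```python
def loss_prefactor(start_pref: float, limit_pref: float,
                   current_lr: float, starter_lr: float) -> float:
    """Interpolate a loss prefactor between its start and limit values.

    The prefactor equals start_pref at the beginning of training
    (current_lr == starter_lr) and approaches limit_pref as the
    learning rate decays toward zero.
    """
    return limit_pref + (start_pref - limit_pref) * current_lr / starter_lr

# With the EnerStdLoss force defaults (start_pref_f=1000, limit_pref_f=1.0):
print(loss_prefactor(1000.0, 1.0, 1e-3, 1e-3))  # 1000.0 at the start
print(loss_prefactor(1000.0, 1.0, 0.0, 1e-3))   # 1.0 once the lr has decayed
```

With these defaults the force term dominates early training and is gradually de-emphasized relative to the energy term, whose prefactor grows from 0.02 toward 1.0 over the same schedule.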
- class deepmd.loss.TensorLoss(jdata, **kwarg)[source]
Bases: Loss

Loss function for tensorial properties.

Methods

build(learning_rate, natoms, model_dict, ...): Build the loss function graph.
display_if_exist(loss, find_property): Display NaN if labeled property is not found.
eval(sess, feed_dict, natoms): Eval the loss function.
Submodules
deepmd.loss.dos module
- class deepmd.loss.dos.DOSLoss(starter_learning_rate: float, numb_dos: int = 500, start_pref_dos: float = 1.0, limit_pref_dos: float = 1.0, start_pref_cdf: float = 1000, limit_pref_cdf: float = 1.0, start_pref_ados: float = 0.0, limit_pref_ados: float = 0.0, start_pref_acdf: float = 0.0, limit_pref_acdf: float = 0.0, protect_value: float = 1e-08, log_fit: bool = False, **kwargs)[source]
Bases: Loss

Loss function for DeepDOS models.

Methods

build(learning_rate, natoms, model_dict, ...): Build the loss function graph.
display_if_exist(loss, find_property): Display NaN if labeled property is not found.
eval(sess, feed_dict, natoms): Eval the loss function.
deepmd.loss.ener module
- class deepmd.loss.ener.EnerDipoleLoss(starter_learning_rate: float, start_pref_e: float = 0.1, limit_pref_e: float = 1.0, start_pref_ed: float = 1.0, limit_pref_ed: float = 1.0)[source]
Bases: Loss

Methods

build(learning_rate, natoms, model_dict, ...): Build the loss function graph.
display_if_exist(loss, find_property): Display NaN if labeled property is not found.
eval(sess, feed_dict, natoms): Eval the loss function.
- class deepmd.loss.ener.EnerSpinLoss(starter_learning_rate: float, start_pref_e: float = 0.02, limit_pref_e: float = 1.0, start_pref_fr: float = 1000, limit_pref_fr: float = 1.0, start_pref_fm: float = 10000, limit_pref_fm: float = 10.0, start_pref_v: float = 0.0, limit_pref_v: float = 0.0, start_pref_ae: float = 0.0, limit_pref_ae: float = 0.0, start_pref_pf: float = 0.0, limit_pref_pf: float = 0.0, relative_f: Optional[float] = None, enable_atom_ener_coeff: bool = False, use_spin: Optional[list] = None)[source]
Bases: Loss

Methods

build(learning_rate, natoms, model_dict, ...): Build the loss function graph.
display_if_exist(loss, find_property): Display NaN if labeled property is not found.
eval(sess, feed_dict, natoms): Eval the loss function.
print_header
print_on_training
- build(learning_rate, natoms, model_dict, label_dict, suffix)[source]
Build the loss function graph.
- Parameters
  learning_rate, natoms, model_dict, label_dict, suffix (see Loss.build)
- Returns
  Tuple[Tensor, Dict[str, Tensor]]: the total loss tensor and a dict of component losses for display.
- class deepmd.loss.ener.EnerStdLoss(starter_learning_rate: float, start_pref_e: float = 0.02, limit_pref_e: float = 1.0, start_pref_f: float = 1000, limit_pref_f: float = 1.0, start_pref_v: float = 0.0, limit_pref_v: float = 0.0, start_pref_ae: float = 0.0, limit_pref_ae: float = 0.0, start_pref_pf: float = 0.0, limit_pref_pf: float = 0.0, relative_f: Optional[float] = None, enable_atom_ener_coeff: bool = False, start_pref_gf: float = 0.0, limit_pref_gf: float = 0.0, numb_generalized_coord: int = 0, **kwargs)[source]
Bases: Loss

Standard loss function for DP models.
- Parameters
  - starter_learning_rate : float
    The learning rate at the start of the training.
  - start_pref_e : float
    The prefactor of energy loss at the start of the training.
  - limit_pref_e : float
    The prefactor of energy loss at the end of the training.
  - start_pref_f : float
    The prefactor of force loss at the start of the training.
  - limit_pref_f : float
    The prefactor of force loss at the end of the training.
  - start_pref_v : float
    The prefactor of virial loss at the start of the training.
  - limit_pref_v : float
    The prefactor of virial loss at the end of the training.
  - start_pref_ae : float
    The prefactor of atomic energy loss at the start of the training.
  - limit_pref_ae : float
    The prefactor of atomic energy loss at the end of the training.
  - start_pref_pf : float
    The prefactor of atomic prefactor force loss at the start of the training.
  - limit_pref_pf : float
    The prefactor of atomic prefactor force loss at the end of the training.
  - relative_f : float
    If provided, the relative force error is used in the loss: the force difference is normalized by the magnitude of the labeled force, with a shift given by relative_f.
  - enable_atom_ener_coeff : bool
    If true, the energy is computed as sum_i c_i E_i.
  - start_pref_gf : float
    The prefactor of generalized force loss at the start of the training.
  - limit_pref_gf : float
    The prefactor of generalized force loss at the end of the training.
  - numb_generalized_coord : int
    The dimension of generalized coordinates.
  - **kwargs
    Other keyword arguments.
Attributes
- starter_learning_rate
Methods

build(learning_rate, natoms, model_dict, ...): Build the loss function graph.
display_if_exist(loss, find_property): Display NaN if labeled property is not found.
eval(sess, feed_dict, natoms): Eval the loss function.
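The keyword arguments above correspond to keys of the loss section in a DeePMD-kit training input file. A minimal sketch, written as a Python dict; the key names are taken from the signature above, the values are its documented defaults, and the "type" key is an assumption based on DeePMD-kit's JSON input format:

```python
# Sketch of the "loss" section of a DeePMD-kit input file, as a Python dict.
loss_config = {
    "type": "ener",        # selects the standard energy loss (assumption)
    "start_pref_e": 0.02,  # energy prefactor at the start of training
    "limit_pref_e": 1.0,   # energy prefactor at the end of training
    "start_pref_f": 1000,  # force prefactor at the start of training
    "limit_pref_f": 1.0,   # force prefactor at the end of training
    "start_pref_v": 0.0,   # virial terms disabled by default
    "limit_pref_v": 0.0,
}
print(loss_config["start_pref_f"])  # 1000
```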
deepmd.loss.loss module
- class deepmd.loss.loss.Loss[source]
Bases: object

The abstract class for the loss function.

Methods

build(learning_rate, natoms, model_dict, ...): Build the loss function graph.
display_if_exist(loss, find_property): Display NaN if labeled property is not found.
eval(sess, feed_dict, natoms): Eval the loss function.
- abstract build(learning_rate: Tensor, natoms: Tensor, model_dict: Dict[str, Tensor], label_dict: Dict[str, Tensor], suffix: str) → Tuple[Tensor, Dict[str, Tensor]][source]
Build the loss function graph.
- Parameters
  - learning_rate : Tensor
  - natoms : Tensor
  - model_dict : Dict[str, Tensor]
  - label_dict : Dict[str, Tensor]
  - suffix : str
- Returns
  Tuple[Tensor, Dict[str, Tensor]]: the total loss tensor and a dict of component losses for display.
- static display_if_exist(loss: Tensor, find_property: float) → Tensor[source]
Display NaN if labeled property is not found.
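When a batch carries no label for some property, its loss value is meaningless; display_if_exist reports NaN in that case instead of a misleading number. A minimal plain-Python sketch of this behavior (the actual method does the equivalent on TensorFlow tensors):

```python
import math

def display_if_exist(loss: float, find_property: float) -> float:
    """Return the loss unchanged when the labeled property was found
    (find_property != 0); otherwise return NaN so that training logs
    show the component as missing rather than as a spurious value."""
    return loss if find_property else math.nan

print(display_if_exist(0.25, 1.0))  # 0.25
print(display_if_exist(0.25, 0.0))  # nan
```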
deepmd.loss.tensor module
- class deepmd.loss.tensor.TensorLoss(jdata, **kwarg)[source]
Bases: Loss

Loss function for tensorial properties.

Methods

build(learning_rate, natoms, model_dict, ...): Build the loss function graph.
display_if_exist(loss, find_property): Display NaN if labeled property is not found.
eval(sess, feed_dict, natoms): Eval the loss function.