4.3. Training Parameters
Note
One can load, modify, and export the input file using the web-based tool DP-GUI online, or host it locally via the command-line interface dp gui. All training parameters below can be set in DP-GUI. By clicking “SAVE JSON”, one can download the input file for further training.
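For orientation, the skeleton below shows how the three top-level sections documented here fit together. This is a minimal sketch, not a complete input file: the type_map values are illustrative, and the remaining values are simply the defaults listed in the reference below.

```json
{
    "model": {
        "type_map": ["O", "H"],
        "descriptor": {"type": "se_e2_a", "sel": "auto", "rcut": 6.0},
        "fitting_net": {"type": "ener", "neuron": [120, 120, 120]}
    },
    "learning_rate": {"type": "exp", "start_lr": 0.001, "stop_lr": 1e-08, "decay_steps": 5000},
    "loss": {"type": "ener", "start_pref_e": 0.02, "limit_pref_e": 1.0}
}
```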
- model:
- type:
dict
argument path:model
- type_map:
- type:
typing.List[str]
, optional
argument path:model/type_map
A list of strings. Gives the name of each atom type. Note that the number of atom types in the training system must be less than 128 in a GPU environment. If not given, type.raw in each system should use the same type indexes, and type_map.raw will take no effect.
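For instance, a two-type water-like system could declare (the element names are illustrative):

```json
"type_map": ["O", "H"]
```

With this map, indexes 0 and 1 in type.raw refer to “O” and “H”, respectively.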
- data_stat_nbatch:
- type:
int
, optional, default:10
argument path:model/data_stat_nbatch
The model determines the normalization from the statistics of the data. This key specifies the number of frames in each system used for statistics.
- data_stat_protect:
- type:
float
, optional, default:0.01
argument path:model/data_stat_protect
Protect parameter for atomic energy regression.
- data_bias_nsample:
- type:
int
, optional, default:10
argument path:model/data_bias_nsample
The number of training samples in a system to compute and change the energy bias.
- use_srtab:
- type:
str
, optional
argument path:model/use_srtab
The table for the short-range pairwise interaction added on top of DP. The table is a text data file with (N_t + 1) * N_t / 2 + 1 columns. The first column is the distance between atoms. The second to the last columns are energies for pairs of certain types. For example, if we have two atom types, 0 and 1, the columns from the 2nd to the 4th are for the 0-0, 0-1 and 1-1 pairs, respectively.
- smin_alpha:
- type:
float
, optional
argument path:model/smin_alpha
The short-range tabulated interaction will be switched according to the distance of the nearest neighbor. This distance is calculated by a softmin. This parameter is the decaying parameter in the softmin. It is only required when use_srtab is provided.
- sw_rmin:
- type:
float
, optional
argument path:model/sw_rmin
The lower boundary of the interpolation between short-range tabulated interaction and DP. It is only required when use_srtab is provided.
- sw_rmax:
- type:
float
, optional
argument path:model/sw_rmax
The upper boundary of the interpolation between short-range tabulated interaction and DP. It is only required when use_srtab is provided.
- srtab_add_bias:
- type:
bool
, optional, default:True
argument path:model/srtab_add_bias
Whether to add the energy bias from the statistics of the data to the short-range tabulated atomic energy. It only takes effect when use_srtab is provided.
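A hedged sketch combining the short-range-table keys above; the file name H2O_tab.txt and all boundary/decay values are placeholders, not recommendations:

```json
"model": {
    "use_srtab": "H2O_tab.txt",
    "smin_alpha": 0.1,
    "sw_rmin": 0.8,
    "sw_rmax": 1.0,
    "srtab_add_bias": true
}
```

Within [sw_rmin, sw_rmax] the model interpolates between the tabulated interaction and DP, with the switching distance chosen by the softmin controlled by smin_alpha.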
- type_embedding:
- type:
dict
, optional
argument path:model/type_embedding
The type embedding.
- neuron:
- type:
typing.List[int]
, optional, default:[8]
argument path:model/type_embedding/neuron
Number of neurons in each hidden layer of the embedding net. When two layers are of the same size or one layer is twice as large as the previous layer, a skip connection is built.
- activation_function:
- type:
str
, optional, default:tanh
argument path:model/type_embedding/activation_function
The activation function in the embedding net. Supported activation functions are “relu”, “relu6”, “softplus”, “sigmoid”, “tanh”, “gelu”, “gelu_tf”, “None”, “none”. Note that “gelu” denotes the custom operator version, and “gelu_tf” denotes the TF standard version. If you set “None” or “none” here, no activation function will be used.
- resnet_dt:
- type:
bool
, optional, default:False
argument path:model/type_embedding/resnet_dt
Whether to use a “Timestep” in the skip connection
- precision:
- type:
str
, optional, default:default
argument path:model/type_embedding/precision
The precision of the embedding net parameters, supported options are “default”, “float16”, “float32”, “float64”, “bfloat16”. Default follows the interface precision.
- trainable:
- type:
bool
, optional, default:True
argument path:model/type_embedding/trainable
If the parameters in the embedding net are trainable
- seed:
- type:
NoneType
|int
, optional, default:None
argument path:model/type_embedding/seed
Random seed for parameter initialization
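Spelling out the defaults listed above, an explicit type_embedding block reads:

```json
"type_embedding": {
    "neuron": [8],
    "activation_function": "tanh",
    "resnet_dt": false,
    "precision": "default",
    "trainable": true,
    "seed": null
}
```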
- modifier:
- type:
dict
, optional
argument path:model/modifier
The modifier of model output.
Depending on the value of type, different sub args are accepted.
- type:
The type of modifier. See explanation below.
- dipole_charge: Use WFCC to model the electronic structure of the system and correct the long-range interaction.
When type is set to dipole_charge:
- model_name:
- type:
str
argument path:model/modifier[dipole_charge]/model_name
The name of the frozen dipole model file.
- model_charge_map:
- type:
typing.List[float]
argument path:model/modifier[dipole_charge]/model_charge_map
The charge of the WFCC. The list length should be the same as the sel_type (model/fitting_net[dipole]/sel_type).
- sys_charge_map:
- type:
typing.List[float]
argument path:model/modifier[dipole_charge]/sys_charge_map
The charge of real atoms. The list length should be the same as the type_map
- ewald_beta:
- type:
float
, optional, default:0.4
argument path:model/modifier[dipole_charge]/ewald_beta
The splitting parameter of Ewald sum. Unit is A^-1
- ewald_h:
- type:
float
, optional, default:1.0
argument path:model/modifier[dipole_charge]/ewald_h
The grid spacing of the FFT grid. Unit is A
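As a hedged illustration for a two-type system, a dipole_charge modifier might look as follows; the frozen-model file name and the charge values are assumptions for illustration only:

```json
"modifier": {
    "type": "dipole_charge",
    "model_name": "dipole.pb",
    "model_charge_map": [-8],
    "sys_charge_map": [6, 1],
    "ewald_beta": 0.4,
    "ewald_h": 1.0
}
```

Here model_charge_map has one entry per selected type of the dipole fitting, and sys_charge_map has one entry per entry in type_map.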
- compress:
- type:
dict
, optional
argument path:model/compress
Model compression configurations
- spin:
- type:
dict
, optional
argument path:model/spin
The settings for systems with spin.
- use_spin:
- type:
typing.List[bool]
argument path:model/spin/use_spin
Whether to use atomic spin model for each atom type
- spin_norm:
- type:
typing.List[float]
argument path:model/spin/spin_norm
The magnitude of atomic spin for each atom type with spin
- virtual_len:
- type:
typing.List[float]
argument path:model/spin/virtual_len
The distance between virtual atom representing spin and its corresponding real atom for each atom type with spin
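A sketch for a two-type system in which only the first type carries spin; all numbers are placeholders, and spin_norm and virtual_len list one entry per spin-carrying type:

```json
"spin": {
    "use_spin": [true, false],
    "spin_norm": [1.0],
    "virtual_len": [0.4]
}
```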
Depending on the value of type, different sub args are accepted.
- type:
- type:
str
(flag key), default:standard
argument path:model/type
When type is set to standard:
Standard model, which contains a descriptor and a fitting.
- descriptor:
- type:
dict
argument path:model[standard]/descriptor
The descriptor of atomic environment.
Depending on the value of type, different sub args are accepted.
- type:
- type:
str
(flag key)
argument path:model[standard]/descriptor/type
possible choices: loc_frame, se_e2_a, se_e3, se_a_tpe, se_e2_r, hybrid, se_atten, se_atten_v2, se_a_ebd_v2, se_a_mask
The type of the descriptor. See explanation below.
loc_frame: Defines a local frame at each atom, and then computes the descriptor as local coordinates under this frame.
se_e2_a: Used by the smooth edition of Deep Potential. The full relative coordinates are used to construct the descriptor.
se_e2_r: Used by the smooth edition of Deep Potential. Only the distance between atoms is used to construct the descriptor.
se_e3: Used by the smooth edition of Deep Potential. The full relative coordinates are used to construct the descriptor. Three-body embedding will be used by this descriptor.
se_a_tpe: Used by the smooth edition of Deep Potential. The full relative coordinates are used to construct the descriptor. Type embedding will be used by this descriptor.
se_atten: Used by the smooth edition of Deep Potential. The full relative coordinates are used to construct the descriptor. Attention mechanism will be used by this descriptor.
se_atten_v2: Used by the smooth edition of Deep Potential. The full relative coordinates are used to construct the descriptor. Attention mechanism with new modifications will be used by this descriptor.
se_a_mask: Used by the smooth edition of Deep Potential. It can accept a variable number of atoms in a frame (non-PBC system). aparam is required as an indicator matrix for the real/virtual sign of input atoms.
hybrid: Concatenation of a list of descriptors as a new descriptor.
When type is set to loc_frame:
- sel_a:
- type:
typing.List[int]
argument path:model[standard]/descriptor[loc_frame]/sel_a
A list of integers. The length of the list should be the same as the number of atom types in the system. sel_a[i] gives the selected number of type-i neighbors. The full relative coordinates of the neighbors are used by the descriptor.
- sel_r:
- type:
typing.List[int]
argument path:model[standard]/descriptor[loc_frame]/sel_r
A list of integers. The length of the list should be the same as the number of atom types in the system. sel_r[i] gives the selected number of type-i neighbors. Only relative distance of the neighbors are used by the descriptor. sel_a[i] + sel_r[i] is recommended to be larger than the maximally possible number of type-i neighbors in the cut-off radius.
- rcut:
- type:
float
, optional, default:6.0
argument path:model[standard]/descriptor[loc_frame]/rcut
The cut-off radius. The default value is 6.0
- axis_rule:
- type:
typing.List[int]
argument path:model[standard]/descriptor[loc_frame]/axis_rule
A list of integers. The length should be 6 times the number of types.
axis_rule[i*6+0]: class of the atom defining the first axis of type-i atom. 0 for neighbors with full coordinates and 1 for neighbors only with relative distance.
axis_rule[i*6+1]: type of the atom defining the first axis of type-i atom.
axis_rule[i*6+2]: index of the axis atom defining the first axis. Note that the neighbors with the same class and type are sorted according to their relative distance.
axis_rule[i*6+3]: class of the atom defining the second axis of type-i atom. 0 for neighbors with full coordinates and 1 for neighbors only with relative distance.
axis_rule[i*6+4]: type of the atom defining the second axis of type-i atom.
axis_rule[i*6+5]: index of the axis atom defining the second axis. Note that the neighbors with the same class and type are sorted according to their relative distance.
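A sketch of a loc_frame descriptor for a two-type system. The axis_rule below defines, for both atom types, the first axis by the nearest full-coordinate type-0 neighbor and the second axis by the second-nearest one; all numbers are illustrative, not recommendations:

```json
"descriptor": {
    "type": "loc_frame",
    "sel_a": [16, 32],
    "sel_r": [0, 0],
    "rcut": 6.0,
    "axis_rule": [0, 0, 0, 0, 0, 1,
                  0, 0, 0, 0, 0, 1]
}
```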
When type is set to se_e2_a (or its alias se_a):
- sel:
- type:
typing.List[int]
|str
, optional, default:auto
argument path:model[standard]/descriptor[se_e2_a]/sel
This parameter sets the number of selected neighbors for each type of atom. It can be:
List[int]. The length of the list should be the same as the number of atom types in the system. sel[i] gives the selected number of type-i neighbors. sel[i] is recommended to be larger than the maximum possible number of type-i neighbors within the cut-off radius. It is noted that the total sel value must be less than 4096 in a GPU environment.
str. Can be “auto:factor” or “auto”. “factor” is a float number larger than 1. This option will automatically determine the sel. In detail, it counts the maximal number of neighbors within the cutoff radius for each type of neighbor, then multiplies the maximum by the “factor”. Finally the number is rounded up to be divisible by 4. The option “auto” is equivalent to “auto:1.1”.
- rcut:
- type:
float
, optional, default:6.0
argument path:model[standard]/descriptor[se_e2_a]/rcut
The cut-off radius.
- rcut_smth:
- type:
float
, optional, default:0.5
argument path:model[standard]/descriptor[se_e2_a]/rcut_smth
Where to start smoothing. For example the 1/r term is smoothed from rcut to rcut_smth
- neuron:
- type:
typing.List[int]
, optional, default:[10, 20, 40]
argument path:model[standard]/descriptor[se_e2_a]/neuron
Number of neurons in each hidden layer of the embedding net. When two layers are of the same size or one layer is twice as large as the previous layer, a skip connection is built.
- axis_neuron:
- type:
int
, optional, default:4
, alias: n_axis_neuron
argument path:model[standard]/descriptor[se_e2_a]/axis_neuron
Size of the submatrix of G (embedding matrix).
- activation_function:
- type:
str
, optional, default:tanh
argument path:model[standard]/descriptor[se_e2_a]/activation_function
The activation function in the embedding net. Supported activation functions are “relu”, “relu6”, “softplus”, “sigmoid”, “tanh”, “gelu”, “gelu_tf”, “None”, “none”. Note that “gelu” denotes the custom operator version, and “gelu_tf” denotes the TF standard version. If you set “None” or “none” here, no activation function will be used.
- resnet_dt:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_e2_a]/resnet_dt
Whether to use a “Timestep” in the skip connection
- type_one_side:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_e2_a]/type_one_side
If true, the embedding network parameters vary by types of neighbor atoms only, so there will be $N_\text{types}$ sets of embedding network parameters. Otherwise, the embedding network parameters vary by types of centric atoms and types of neighbor atoms, so there will be $N_\text{types}^2$ sets of embedding network parameters.
- precision:
- type:
str
, optional, default:default
argument path:model[standard]/descriptor[se_e2_a]/precision
The precision of the embedding net parameters, supported options are “default”, “float16”, “float32”, “float64”, “bfloat16”. Default follows the interface precision.
- trainable:
- type:
bool
, optional, default:True
argument path:model[standard]/descriptor[se_e2_a]/trainable
If the parameters in the embedding net are trainable
- seed:
- type:
NoneType
|int
, optional
argument path:model[standard]/descriptor[se_e2_a]/seed
Random seed for parameter initialization
- exclude_types:
- type:
typing.List[typing.List[int]]
, optional, default:[]
argument path:model[standard]/descriptor[se_e2_a]/exclude_types
The excluded pairs of types which have no interaction with each other. For example, [[0, 1]] means no interaction between type 0 and type 1.
- set_davg_zero:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_e2_a]/set_davg_zero
Set the normalization average to zero. This option should be set when atom_ener in the energy fitting is used
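Collecting the defaults above, an explicit se_e2_a descriptor block reads ("auto" may be replaced by a per-type list; see the sel description):

```json
"descriptor": {
    "type": "se_e2_a",
    "sel": "auto",
    "rcut": 6.0,
    "rcut_smth": 0.5,
    "neuron": [10, 20, 40],
    "axis_neuron": 4,
    "activation_function": "tanh",
    "resnet_dt": false,
    "type_one_side": false,
    "precision": "default",
    "trainable": true,
    "exclude_types": [],
    "set_davg_zero": false
}
```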
When type is set to se_e3 (or its aliases se_at, se_a_3be, se_t):
- sel:
- type:
typing.List[int]
|str
, optional, default:auto
argument path:model[standard]/descriptor[se_e3]/sel
This parameter sets the number of selected neighbors for each type of atom. It can be:
List[int]. The length of the list should be the same as the number of atom types in the system. sel[i] gives the selected number of type-i neighbors. sel[i] is recommended to be larger than the maximum possible number of type-i neighbors within the cut-off radius. It is noted that the total sel value must be less than 4096 in a GPU environment.
str. Can be “auto:factor” or “auto”. “factor” is a float number larger than 1. This option will automatically determine the sel. In detail, it counts the maximal number of neighbors within the cutoff radius for each type of neighbor, then multiplies the maximum by the “factor”. Finally the number is rounded up to be divisible by 4. The option “auto” is equivalent to “auto:1.1”.
- rcut:
- type:
float
, optional, default:6.0
argument path:model[standard]/descriptor[se_e3]/rcut
The cut-off radius.
- rcut_smth:
- type:
float
, optional, default:0.5
argument path:model[standard]/descriptor[se_e3]/rcut_smth
Where to start smoothing. For example the 1/r term is smoothed from rcut to rcut_smth
- neuron:
- type:
typing.List[int]
, optional, default:[10, 20, 40]
argument path:model[standard]/descriptor[se_e3]/neuron
Number of neurons in each hidden layer of the embedding net. When two layers are of the same size or one layer is twice as large as the previous layer, a skip connection is built.
- activation_function:
- type:
str
, optional, default:tanh
argument path:model[standard]/descriptor[se_e3]/activation_function
The activation function in the embedding net. Supported activation functions are “relu”, “relu6”, “softplus”, “sigmoid”, “tanh”, “gelu”, “gelu_tf”, “None”, “none”. Note that “gelu” denotes the custom operator version, and “gelu_tf” denotes the TF standard version. If you set “None” or “none” here, no activation function will be used.
- resnet_dt:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_e3]/resnet_dt
Whether to use a “Timestep” in the skip connection
- precision:
- type:
str
, optional, default:default
argument path:model[standard]/descriptor[se_e3]/precision
The precision of the embedding net parameters, supported options are “default”, “float16”, “float32”, “float64”, “bfloat16”. Default follows the interface precision.
- trainable:
- type:
bool
, optional, default:True
argument path:model[standard]/descriptor[se_e3]/trainable
If the parameters in the embedding net are trainable
- seed:
- type:
NoneType
|int
, optional
argument path:model[standard]/descriptor[se_e3]/seed
Random seed for parameter initialization
- set_davg_zero:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_e3]/set_davg_zero
Set the normalization average to zero. This option should be set when atom_ener in the energy fitting is used
When type is set to se_a_tpe (or its alias se_a_ebd):
- sel:
- type:
typing.List[int]
|str
, optional, default:auto
argument path:model[standard]/descriptor[se_a_tpe]/sel
This parameter sets the number of selected neighbors for each type of atom. It can be:
List[int]. The length of the list should be the same as the number of atom types in the system. sel[i] gives the selected number of type-i neighbors. sel[i] is recommended to be larger than the maximum possible number of type-i neighbors within the cut-off radius. It is noted that the total sel value must be less than 4096 in a GPU environment.
str. Can be “auto:factor” or “auto”. “factor” is a float number larger than 1. This option will automatically determine the sel. In detail, it counts the maximal number of neighbors within the cutoff radius for each type of neighbor, then multiplies the maximum by the “factor”. Finally the number is rounded up to be divisible by 4. The option “auto” is equivalent to “auto:1.1”.
- rcut:
- type:
float
, optional, default:6.0
argument path:model[standard]/descriptor[se_a_tpe]/rcut
The cut-off radius.
- rcut_smth:
- type:
float
, optional, default:0.5
argument path:model[standard]/descriptor[se_a_tpe]/rcut_smth
Where to start smoothing. For example the 1/r term is smoothed from rcut to rcut_smth
- neuron:
- type:
typing.List[int]
, optional, default:[10, 20, 40]
argument path:model[standard]/descriptor[se_a_tpe]/neuron
Number of neurons in each hidden layer of the embedding net. When two layers are of the same size or one layer is twice as large as the previous layer, a skip connection is built.
- axis_neuron:
- type:
int
, optional, default:4
, alias: n_axis_neuron
argument path:model[standard]/descriptor[se_a_tpe]/axis_neuron
Size of the submatrix of G (embedding matrix).
- activation_function:
- type:
str
, optional, default:tanh
argument path:model[standard]/descriptor[se_a_tpe]/activation_function
The activation function in the embedding net. Supported activation functions are “relu”, “relu6”, “softplus”, “sigmoid”, “tanh”, “gelu”, “gelu_tf”, “None”, “none”. Note that “gelu” denotes the custom operator version, and “gelu_tf” denotes the TF standard version. If you set “None” or “none” here, no activation function will be used.
- resnet_dt:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_a_tpe]/resnet_dt
Whether to use a “Timestep” in the skip connection
- type_one_side:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_a_tpe]/type_one_side
If true, the embedding network parameters vary by types of neighbor atoms only, so there will be $N_\text{types}$ sets of embedding network parameters. Otherwise, the embedding network parameters vary by types of centric atoms and types of neighbor atoms, so there will be $N_\text{types}^2$ sets of embedding network parameters.
- precision:
- type:
str
, optional, default:default
argument path:model[standard]/descriptor[se_a_tpe]/precision
The precision of the embedding net parameters, supported options are “default”, “float16”, “float32”, “float64”, “bfloat16”. Default follows the interface precision.
- trainable:
- type:
bool
, optional, default:True
argument path:model[standard]/descriptor[se_a_tpe]/trainable
If the parameters in the embedding net are trainable
- seed:
- type:
NoneType
|int
, optional
argument path:model[standard]/descriptor[se_a_tpe]/seed
Random seed for parameter initialization
- exclude_types:
- type:
typing.List[typing.List[int]]
, optional, default:[]
argument path:model[standard]/descriptor[se_a_tpe]/exclude_types
The excluded pairs of types which have no interaction with each other. For example, [[0, 1]] means no interaction between type 0 and type 1.
- set_davg_zero:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_a_tpe]/set_davg_zero
Set the normalization average to zero. This option should be set when atom_ener in the energy fitting is used
- type_nchanl:
- type:
int
, optional, default:4
argument path:model[standard]/descriptor[se_a_tpe]/type_nchanl
Number of channels for type embedding
- type_nlayer:
- type:
int
, optional, default:2
argument path:model[standard]/descriptor[se_a_tpe]/type_nlayer
Number of hidden layers of the type embedding net
- numb_aparam:
- type:
int
, optional, default:0
argument path:model[standard]/descriptor[se_a_tpe]/numb_aparam
Dimension of the atomic parameter. If set to a value > 0, the atomic parameters are embedded.
When type is set to se_e2_r (or its alias se_r):
- sel:
- type:
typing.List[int]
|str
, optional, default:auto
argument path:model[standard]/descriptor[se_e2_r]/sel
This parameter sets the number of selected neighbors for each type of atom. It can be:
List[int]. The length of the list should be the same as the number of atom types in the system. sel[i] gives the selected number of type-i neighbors. sel[i] is recommended to be larger than the maximum possible number of type-i neighbors within the cut-off radius. It is noted that the total sel value must be less than 4096 in a GPU environment.
str. Can be “auto:factor” or “auto”. “factor” is a float number larger than 1. This option will automatically determine the sel. In detail, it counts the maximal number of neighbors within the cutoff radius for each type of neighbor, then multiplies the maximum by the “factor”. Finally the number is rounded up to be divisible by 4. The option “auto” is equivalent to “auto:1.1”.
- rcut:
- type:
float
, optional, default:6.0
argument path:model[standard]/descriptor[se_e2_r]/rcut
The cut-off radius.
- rcut_smth:
- type:
float
, optional, default:0.5
argument path:model[standard]/descriptor[se_e2_r]/rcut_smth
Where to start smoothing. For example the 1/r term is smoothed from rcut to rcut_smth
- neuron:
- type:
typing.List[int]
, optional, default:[10, 20, 40]
argument path:model[standard]/descriptor[se_e2_r]/neuron
Number of neurons in each hidden layer of the embedding net. When two layers are of the same size or one layer is twice as large as the previous layer, a skip connection is built.
- activation_function:
- type:
str
, optional, default:tanh
argument path:model[standard]/descriptor[se_e2_r]/activation_function
The activation function in the embedding net. Supported activation functions are “relu”, “relu6”, “softplus”, “sigmoid”, “tanh”, “gelu”, “gelu_tf”, “None”, “none”. Note that “gelu” denotes the custom operator version, and “gelu_tf” denotes the TF standard version. If you set “None” or “none” here, no activation function will be used.
- resnet_dt:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_e2_r]/resnet_dt
Whether to use a “Timestep” in the skip connection
- type_one_side:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_e2_r]/type_one_side
If true, the embedding network parameters vary by types of neighbor atoms only, so there will be $N_\text{types}$ sets of embedding network parameters. Otherwise, the embedding network parameters vary by types of centric atoms and types of neighbor atoms, so there will be $N_\text{types}^2$ sets of embedding network parameters.
- precision:
- type:
str
, optional, default:default
argument path:model[standard]/descriptor[se_e2_r]/precision
The precision of the embedding net parameters, supported options are “default”, “float16”, “float32”, “float64”, “bfloat16”. Default follows the interface precision.
- trainable:
- type:
bool
, optional, default:True
argument path:model[standard]/descriptor[se_e2_r]/trainable
If the parameters in the embedding net are trainable
- seed:
- type:
NoneType
|int
, optional
argument path:model[standard]/descriptor[se_e2_r]/seed
Random seed for parameter initialization
- exclude_types:
- type:
typing.List[typing.List[int]]
, optional, default:[]
argument path:model[standard]/descriptor[se_e2_r]/exclude_types
The excluded pairs of types which have no interaction with each other. For example, [[0, 1]] means no interaction between type 0 and type 1.
- set_davg_zero:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_e2_r]/set_davg_zero
Set the normalization average to zero. This option should be set when atom_ener in the energy fitting is used
When type is set to hybrid:
- list:
- type:
list
argument path:model[standard]/descriptor[hybrid]/list
A list of descriptor definitions
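A hedged hybrid sketch that concatenates a full se_e2_a descriptor with a cheaper distance-only se_e2_r descriptor at a larger cutoff; the cutoff values are illustrative:

```json
"descriptor": {
    "type": "hybrid",
    "list": [
        {"type": "se_e2_a", "sel": "auto", "rcut": 6.0, "rcut_smth": 0.5},
        {"type": "se_e2_r", "sel": "auto", "rcut": 8.0, "rcut_smth": 0.5}
    ]
}
```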
When type is set to se_atten:
- sel:
- type:
typing.List[int]
|str
|int
, optional, default:auto
argument path:model[standard]/descriptor[se_atten]/sel
This parameter sets the number of selected neighbors. Note that this parameter is a little different from that in other descriptors: instead of separating each type of atoms, only the summation matters. This number is closely tied to efficiency, so one should not make it too large. Usually 200 or less is enough, far below the GPU limitation of 4096. It can be:
int. The maximum number of neighbor atoms to be considered. We recommend it to be less than 200.
List[int]. The length of the list should be the same as the number of atom types in the system. sel[i] gives the selected number of type-i neighbors. Only the summation of sel[i] matters, and it is recommended to be less than 200.
str. Can be “auto:factor” or “auto”. “factor” is a float number larger than 1. This option will automatically determine the sel. In detail, it counts the maximal number of neighbors within the cutoff radius for each type of neighbor, then multiplies the maximum by the “factor”. Finally the number is rounded up to be divisible by 4. The option “auto” is equivalent to “auto:1.1”.
- rcut:
- type:
float
, optional, default:6.0
argument path:model[standard]/descriptor[se_atten]/rcut
The cut-off radius.
- rcut_smth:
- type:
float
, optional, default:0.5
argument path:model[standard]/descriptor[se_atten]/rcut_smth
Where to start smoothing. For example the 1/r term is smoothed from rcut to rcut_smth
- neuron:
- type:
typing.List[int]
, optional, default:[10, 20, 40]
argument path:model[standard]/descriptor[se_atten]/neuron
Number of neurons in each hidden layer of the embedding net. When two layers are of the same size or one layer is twice as large as the previous layer, a skip connection is built.
- axis_neuron:
- type:
int
, optional, default:4
, alias: n_axis_neuron
argument path:model[standard]/descriptor[se_atten]/axis_neuron
Size of the submatrix of G (embedding matrix).
- activation_function:
- type:
str
, optional, default:tanh
argument path:model[standard]/descriptor[se_atten]/activation_function
The activation function in the embedding net. Supported activation functions are “relu”, “relu6”, “softplus”, “sigmoid”, “tanh”, “gelu”, “gelu_tf”, “None”, “none”. Note that “gelu” denotes the custom operator version, and “gelu_tf” denotes the TF standard version. If you set “None” or “none” here, no activation function will be used.
- resnet_dt:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_atten]/resnet_dt
Whether to use a “Timestep” in the skip connection
- type_one_side:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_atten]/type_one_side
If true, the embedding network parameters vary by types of neighbor atoms only, so there will be $N_\text{types}$ sets of embedding network parameters. Otherwise, the embedding network parameters vary by types of centric atoms and types of neighbor atoms, so there will be $N_\text{types}^2$ sets of embedding network parameters.
- precision:
- type:
str
, optional, default:default
argument path:model[standard]/descriptor[se_atten]/precision
The precision of the embedding net parameters, supported options are “default”, “float16”, “float32”, “float64”, “bfloat16”. Default follows the interface precision.
- trainable:
- type:
bool
, optional, default:True
argument path:model[standard]/descriptor[se_atten]/trainable
If the parameters in the embedding net are trainable
- seed:
- type:
NoneType
|int
, optional
argument path:model[standard]/descriptor[se_atten]/seed
Random seed for parameter initialization
- exclude_types:
- type:
typing.List[typing.List[int]]
, optional, default:[]
argument path:model[standard]/descriptor[se_atten]/exclude_types
The excluded pairs of types which have no interaction with each other. For example, [[0, 1]] means no interaction between type 0 and type 1.
- attn:
- type:
int
, optional, default:128
argument path:model[standard]/descriptor[se_atten]/attn
The length of hidden vectors in attention layers
- attn_layer:
- type:
int
, optional, default:2
argument path:model[standard]/descriptor[se_atten]/attn_layer
The number of attention layers.
- attn_dotr:
- type:
bool
, optional, default:True
argument path:model[standard]/descriptor[se_atten]/attn_dotr
Whether to do dot product with the normalized relative coordinates
- attn_mask:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_atten]/attn_mask
Whether to do mask on the diagonal in the attention matrix
- stripped_type_embedding:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_atten]/stripped_type_embedding
Whether to strip the type embedding into a separate embedding network. Setting it to False will fall back to the previous version of se_atten, which is non-compressible.
- smooth_type_embedding:
- type:
bool
, optional, default:False
, alias: smooth_type_embdding
argument path:model[standard]/descriptor[se_atten]/smooth_type_embedding
When using stripped type embedding, whether to multiply a smooth factor into the network output of the type embedding to keep the network smooth, instead of setting set_davg_zero to True.
- set_davg_zero:
- type:
bool
, optional, default:True
argument path:model[standard]/descriptor[se_atten]/set_davg_zero
Set the normalization average to zero. This option should be set when se_atten descriptor or atom_ener in the energy fitting is used
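A sketch of an se_atten descriptor with the attention defaults spelled out; the sel value of 120 is an illustrative choice following the recommendation of 200 or less above:

```json
"descriptor": {
    "type": "se_atten",
    "sel": 120,
    "rcut": 6.0,
    "rcut_smth": 0.5,
    "neuron": [10, 20, 40],
    "axis_neuron": 4,
    "attn": 128,
    "attn_layer": 2,
    "attn_dotr": true,
    "attn_mask": false,
    "set_davg_zero": true
}
```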
When type is set to se_atten_v2:
- sel:
- type:
typing.List[int]
|str
|int
, optional, default:auto
argument path:model[standard]/descriptor[se_atten_v2]/sel
This parameter sets the number of selected neighbors. Note that this parameter is a little different from that in other descriptors: instead of separating each type of atoms, only the summation matters. This number is closely tied to efficiency, so one should not make it too large. Usually 200 or less is enough, far below the GPU limitation of 4096. It can be:
int. The maximum number of neighbor atoms to be considered. We recommend it to be less than 200.
List[int]. The length of the list should be the same as the number of atom types in the system. sel[i] gives the selected number of type-i neighbors. Only the summation of sel[i] matters, and it is recommended to be less than 200.
str. Can be “auto:factor” or “auto”. “factor” is a float number larger than 1. This option will automatically determine the sel. In detail, it counts the maximal number of neighbors within the cutoff radius for each type of neighbor, then multiplies the maximum by the “factor”. Finally the number is rounded up to be divisible by 4. The option “auto” is equivalent to “auto:1.1”.
- rcut:
- type:
float
, optional, default:6.0
argument path:model[standard]/descriptor[se_atten_v2]/rcut
The cut-off radius.
- rcut_smth:
- type:
float
, optional, default:0.5
argument path:model[standard]/descriptor[se_atten_v2]/rcut_smth
Where to start smoothing. For example the 1/r term is smoothed from rcut to rcut_smth
- neuron:
- type:
typing.List[int]
, optional, default:[10, 20, 40]
argument path:model[standard]/descriptor[se_atten_v2]/neuron
Number of neurons in each hidden layer of the embedding net. When two layers are of the same size or one layer is twice as large as the previous layer, a skip connection is built.
- axis_neuron:
- type:
int
, optional, default:4
, alias: n_axis_neuron
argument path:model[standard]/descriptor[se_atten_v2]/axis_neuron
Size of the submatrix of G (embedding matrix).
- activation_function:
- type:
str
, optional, default:tanh
argument path:model[standard]/descriptor[se_atten_v2]/activation_function
The activation function in the embedding net. Supported activation functions are “relu”, “relu6”, “softplus”, “sigmoid”, “tanh”, “gelu”, “gelu_tf”, “None”, “none”. Note that “gelu” denotes the custom operator version, and “gelu_tf” denotes the TF standard version. If you set “None” or “none” here, no activation function will be used.
- resnet_dt:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_atten_v2]/resnet_dt
Whether to use a “Timestep” in the skip connection
- type_one_side:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_atten_v2]/type_one_side
If true, the embedding network parameters vary by types of neighbor atoms only, so there will be $N_\text{types}$ sets of embedding network parameters. Otherwise, the embedding network parameters vary by types of centric atoms and types of neighbor atoms, so there will be $N_\text{types}^2$ sets of embedding network parameters.
- precision:
- type:
str
, optional, default:default
argument path:model[standard]/descriptor[se_atten_v2]/precision
The precision of the embedding net parameters, supported options are “default”, “float16”, “float32”, “float64”, “bfloat16”. Default follows the interface precision.
- trainable:
- type:
bool
, optional, default:True
argument path:model[standard]/descriptor[se_atten_v2]/trainable
If the parameters in the embedding net are trainable
- seed:
- type:
NoneType
|int
, optional
argument path:model[standard]/descriptor[se_atten_v2]/seed
Random seed for parameter initialization
- exclude_types:
- type:
typing.List[typing.List[int]]
, optional, default:[]
argument path:model[standard]/descriptor[se_atten_v2]/exclude_types
The excluded pairs of types which have no interaction with each other. For example, [[0, 1]] means no interaction between type 0 and type 1.
- attn:
- type:
int
, optional, default:128
argument path:model[standard]/descriptor[se_atten_v2]/attn
The length of hidden vectors in attention layers
- attn_layer:
- type:
int
, optional, default:2
argument path:model[standard]/descriptor[se_atten_v2]/attn_layer
The number of attention layers.
- attn_dotr:
- type:
bool
, optional, default:True
argument path:model[standard]/descriptor[se_atten_v2]/attn_dotr
Whether to do dot product with the normalized relative coordinates
- attn_mask:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_atten_v2]/attn_mask
Whether to do mask on the diagonal in the attention matrix
- set_davg_zero:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_atten_v2]/set_davg_zero
Set the normalization average to zero. This option should be set when se_atten descriptor or atom_ener in the energy fitting is used
When type is set to se_a_ebd_v2 (or its alias se_a_tpe_v2):
- sel:
- type:
typing.List[int]
|str
, optional, default:auto
argument path:model[standard]/descriptor[se_a_ebd_v2]/sel
This parameter sets the number of selected neighbors for each type of atom. It can be:
List[int]. The length of the list should be the same as the number of atom types in the system. sel[i] gives the selected number of type-i neighbors. sel[i] is recommended to be larger than the maximum possible number of type-i neighbors within the cut-off radius. It is noted that the total sel value must be less than 4096 in a GPU environment.
str. Can be “auto:factor” or “auto”. “factor” is a float number larger than 1. This option will automatically determine the sel. In detail, it counts the maximal number of neighbors within the cutoff radius for each type of neighbor, then multiplies the maximum by the “factor”. Finally the number is rounded up to be divisible by 4. The option “auto” is equivalent to “auto:1.1”.
- rcut:
- type:
float
, optional, default:6.0
argument path:model[standard]/descriptor[se_a_ebd_v2]/rcut
The cut-off radius.
- rcut_smth:
- type:
float
, optional, default:0.5
argument path:model[standard]/descriptor[se_a_ebd_v2]/rcut_smth
Where to start smoothing. For example the 1/r term is smoothed from rcut to rcut_smth
- neuron:
- type:
typing.List[int]
, optional, default:[10, 20, 40]
argument path:model[standard]/descriptor[se_a_ebd_v2]/neuron
Number of neurons in each hidden layer of the embedding net. When two layers are of the same size or one layer is twice as large as the previous layer, a skip connection is built.
- axis_neuron:
- type:
int
, optional, default:4
, alias: n_axis_neuron
argument path:model[standard]/descriptor[se_a_ebd_v2]/axis_neuron
Size of the submatrix of G (embedding matrix).
- activation_function:
- type:
str
, optional, default:tanh
argument path:model[standard]/descriptor[se_a_ebd_v2]/activation_function
The activation function in the embedding net. Supported activation functions are “relu”, “relu6”, “softplus”, “sigmoid”, “tanh”, “gelu”, “gelu_tf”, “None”, “none”. Note that “gelu” denotes the custom operator version, and “gelu_tf” denotes the TF standard version. If you set “None” or “none” here, no activation function will be used.
- resnet_dt:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_a_ebd_v2]/resnet_dt
Whether to use a “Timestep” in the skip connection
- type_one_side:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_a_ebd_v2]/type_one_side
If true, the embedding network parameters vary by types of neighbor atoms only, so there will be $N_\text{types}$ sets of embedding network parameters. Otherwise, the embedding network parameters vary by types of centric atoms and types of neighbor atoms, so there will be $N_\text{types}^2$ sets of embedding network parameters.
- precision:
- type:
str
, optional, default:default
argument path:model[standard]/descriptor[se_a_ebd_v2]/precision
The precision of the embedding net parameters, supported options are “default”, “float16”, “float32”, “float64”, “bfloat16”. Default follows the interface precision.
- trainable:
- type:
bool
, optional, default:True
argument path:model[standard]/descriptor[se_a_ebd_v2]/trainable
If the parameters in the embedding net are trainable
- seed:
- type:
NoneType
|int
, optional
argument path:model[standard]/descriptor[se_a_ebd_v2]/seed
Random seed for parameter initialization
- exclude_types:
- type:
typing.List[typing.List[int]]
, optional, default:[]
argument path:model[standard]/descriptor[se_a_ebd_v2]/exclude_types
The excluded pairs of types which have no interaction with each other. For example, [[0, 1]] means no interaction between type 0 and type 1.
- set_davg_zero:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_a_ebd_v2]/set_davg_zero
Set the normalization average to zero. This option should be set when atom_ener in the energy fitting is used
When type is set to se_a_mask:
- sel:
- type:
typing.List[int]
|str
, optional, default:auto
argument path:model[standard]/descriptor[se_a_mask]/sel
This parameter sets the number of selected neighbors for each type of atom. It can be:
List[int]. The length of the list should be the same as the number of atom types in the system. sel[i] gives the selected number of type-i neighbors. sel[i] is recommended to be larger than the maximum possible number of type-i neighbors within the cut-off radius. It is noted that the total sel value must be less than 4096 in a GPU environment.
str. Can be “auto:factor” or “auto”. “factor” is a float number larger than 1. This option will automatically determine the sel. In detail, it counts the maximal number of neighbors within the cutoff radius for each type of neighbor, then multiplies the maximum by the “factor”. Finally the number is rounded up to be divisible by 4. The option “auto” is equivalent to “auto:1.1”.
- neuron:
- type:
typing.List[int]
, optional, default:[10, 20, 40]
argument path:model[standard]/descriptor[se_a_mask]/neuron
Number of neurons in each hidden layer of the embedding net. When two layers are of the same size or one layer is twice as large as the previous layer, a skip connection is built.
- axis_neuron:
- type:
int
, optional, default:4
, alias: n_axis_neuron
argument path:model[standard]/descriptor[se_a_mask]/axis_neuron
Size of the submatrix of G (embedding matrix).
- activation_function:
- type:
str
, optional, default:tanh
argument path:model[standard]/descriptor[se_a_mask]/activation_function
The activation function in the embedding net. Supported activation functions are “relu”, “relu6”, “softplus”, “sigmoid”, “tanh”, “gelu”, “gelu_tf”, “None”, “none”. Note that “gelu” denotes the custom operator version, and “gelu_tf” denotes the TF standard version. If you set “None” or “none” here, no activation function will be used.
- resnet_dt:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_a_mask]/resnet_dt
Whether to use a “Timestep” in the skip connection
- type_one_side:
- type:
bool
, optional, default:False
argument path:model[standard]/descriptor[se_a_mask]/type_one_side
If true, the embedding network parameters vary by types of neighbor atoms only, so there will be $N_\text{types}$ sets of embedding network parameters. Otherwise, the embedding network parameters vary by types of centric atoms and types of neighbor atoms, so there will be $N_\text{types}^2$ sets of embedding network parameters.
- exclude_types:
- type:
typing.List[typing.List[int]]
, optional, default:[]
argument path:model[standard]/descriptor[se_a_mask]/exclude_types
The excluded pairs of types which have no interaction with each other. For example, [[0, 1]] means no interaction between type 0 and type 1.
- precision:
- type:
str
, optional, default:default
argument path:model[standard]/descriptor[se_a_mask]/precision
The precision of the embedding net parameters, supported options are “default”, “float16”, “float32”, “float64”, “bfloat16”. Default follows the interface precision.
- trainable:
- type:
bool
, optional, default:True
argument path:model[standard]/descriptor[se_a_mask]/trainable
If the parameters in the embedding net are trainable
- seed:
- type:
NoneType
|int
, optional
argument path:model[standard]/descriptor[se_a_mask]/seed
Random seed for parameter initialization
- fitting_net:
- type:
dict
argument path:model[standard]/fitting_net
The fitting of physical properties.
Depending on the value of type, different sub args are accepted.
- type:
- type:
str
(flag key), default:ener
argument path:model[standard]/fitting_net/type
The type of the fitting. See explanation below.
ener: Fit an energy model (potential energy surface).
dos: Fit a density of states model. The total density of states / site-projected density of states labels should be provided by dos.npy or atom_dos.npy in each data system. The file has the number of frames as rows and the number of energy grid points as columns (times the number of atoms for atom_dos.npy). See the loss parameter.
dipole: Fit an atomic dipole model. Global dipole labels or atomic dipole labels for all the selected atoms (see sel_type) should be provided by dipole.npy in each data system. The file has the number of frames as rows, and either 3 times the number of selected atoms or 3 columns. See the loss parameter.
polar: Fit an atomic polarizability model. Global polarizability labels or atomic polarizability labels for all the selected atoms (see sel_type) should be provided by polarizability.npy in each data system. The file has the number of frames as rows, and either 9 times the number of selected atoms or 9 columns. See the loss parameter.
When type is set to ener:
- numb_fparam:
- type:
int
, optional, default:0
argument path:model[standard]/fitting_net[ener]/numb_fparam
The dimension of the frame parameter. If set to >0, file fparam.npy should be included to provide the input fparams.
- numb_aparam:
- type:
int
, optional, default:0
argument path:model[standard]/fitting_net[ener]/numb_aparam
The dimension of the atomic parameter. If set to >0, file aparam.npy should be included to provide the input aparams.
- neuron:
- type:
typing.List[int]
, optional, default:[120, 120, 120]
, alias: n_neuron
argument path:model[standard]/fitting_net[ener]/neuron
The number of neurons in each hidden layer of the fitting net. When two hidden layers are of the same size, a skip connection is built.
- activation_function:
- type:
str
, optional, default:tanh
argument path:model[standard]/fitting_net[ener]/activation_function
The activation function in the fitting net. Supported activation functions are “relu”, “relu6”, “softplus”, “sigmoid”, “tanh”, “gelu”, “gelu_tf”, “None”, “none”. Note that “gelu” denotes the custom operator version, and “gelu_tf” denotes the TF standard version. If you set “None” or “none” here, no activation function will be used.
- precision:
- type:
str
, optional, default:default
argument path:model[standard]/fitting_net[ener]/precision
The precision of the fitting net parameters, supported options are “default”, “float16”, “float32”, “float64”, “bfloat16”. Default follows the interface precision.
- resnet_dt:
- type:
bool
, optional, default:True
argument path:model[standard]/fitting_net[ener]/resnet_dt
Whether to use a “Timestep” in the skip connection
- trainable:
- type:
bool
|typing.List[bool]
, optional, default:True
argument path:model[standard]/fitting_net[ener]/trainable
Whether the parameters in the fitting net are trainable. This option can be
bool: True if all parameters of the fitting net are trainable, False otherwise.
list of bool: Specifies if each layer is trainable. Since the fitting net is composed of hidden layers followed by an output layer, the length of this list should be equal to len(neuron)+1.
- rcond:
- type:
float
|NoneType
, optional, default:None
argument path:model[standard]/fitting_net[ener]/rcond
The condition number used to determine the initial energy shift for each type of atoms. See rcond in
numpy.linalg.lstsq()
for more details.
- seed:
- type:
NoneType
|int
, optional
argument path:model[standard]/fitting_net[ener]/seed
Random seed for parameter initialization of the fitting net
- atom_ener:
- type:
typing.List[typing.Optional[float]]
, optional, default:[]
argument path:model[standard]/fitting_net[ener]/atom_ener
Specify the atomic energy in vacuum for each type
- layer_name:
- type:
typing.List[str]
, optional
argument path:model[standard]/fitting_net[ener]/layer_name
The name of each layer. The length of this list should be equal to n_neuron + 1. If two layers, either in the same fitting or different fittings, have the same name, they will share the same neural network parameters. The shape of these layers should be the same. If null is given for a layer, parameters will not be shared.
- use_aparam_as_mask:
- type:
bool
, optional, default:False
argument path:model[standard]/fitting_net[ener]/use_aparam_as_mask
Whether to use the aparam as a mask in the input. If True, the aparam will not be used in the fitting net for embedding. When the descriptor is se_a_mask, the aparam will be used as a mask to indicate whether an input atom is real or virtual, and use_aparam_as_mask should be set to True.
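An ener fitting net with the defaults listed above made explicit:

```json
"fitting_net": {
    "type": "ener",
    "numb_fparam": 0,
    "numb_aparam": 0,
    "neuron": [120, 120, 120],
    "activation_function": "tanh",
    "precision": "default",
    "resnet_dt": true,
    "trainable": true,
    "rcond": null,
    "atom_ener": []
}
```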
When type is set to dos:
- numb_fparam:
- type:
int
, optional, default:0
argument path:model[standard]/fitting_net[dos]/numb_fparam
The dimension of the frame parameter. If set to >0, file fparam.npy should be included to provide the input fparams.
- numb_aparam:
- type:
int
, optional, default:0
argument path:model[standard]/fitting_net[dos]/numb_aparam
The dimension of the atomic parameter. If set to >0, file aparam.npy should be included to provide the input aparams.
- neuron:
- type:
typing.List[int]
, optional, default:[120, 120, 120]
argument path:model[standard]/fitting_net[dos]/neuron
The number of neurons in each hidden layer of the fitting net. When two hidden layers are of the same size, a skip connection is built.
- activation_function:
- type:
str
, optional, default:tanh
argument path:model[standard]/fitting_net[dos]/activation_function
The activation function in the fitting net. Supported activation functions are “relu”, “relu6”, “softplus”, “sigmoid”, “tanh”, “gelu”, “gelu_tf”, “None”, “none”. Note that “gelu” denotes the custom operator version, and “gelu_tf” denotes the TF standard version. If you set “None” or “none” here, no activation function will be used.
- precision:
- type:
str
, optional, default:float64
argument path:model[standard]/fitting_net[dos]/precision
The precision of the fitting net parameters, supported options are “default”, “float16”, “float32”, “float64”, “bfloat16”. Default follows the interface precision.
- resnet_dt:
- type:
bool
, optional, default:True
argument path:model[standard]/fitting_net[dos]/resnet_dt
Whether to use a “Timestep” in the skip connection
- trainable:
- type:
bool
|typing.List[bool]
, optional, default:True
argument path:model[standard]/fitting_net[dos]/trainable
Whether the parameters in the fitting net are trainable. This option can be
bool: True if all parameters of the fitting net are trainable, False otherwise.
list of bool: Specifies if each layer is trainable. Since the fitting net is composed of hidden layers followed by an output layer, the length of this list should be equal to len(neuron)+1.
- rcond:
- type:
float
|NoneType
, optional, default:None
argument path:model[standard]/fitting_net[dos]/rcond
The condition number used to determine the initial energy shift for each type of atoms. See rcond in
numpy.linalg.lstsq()
for more details.
- seed:
- type:
NoneType
|int
, optional
argument path:model[standard]/fitting_net[dos]/seed
Random seed for parameter initialization of the fitting net
- numb_dos:
- type:
int
, optional, default:300
argument path:model[standard]/fitting_net[dos]/numb_dos
The number of gridpoints on which the DOS is evaluated (NEDOS in VASP)
When type is set to polar:
- neuron:
- type:
typing.List[int]
, optional, default:[120, 120, 120]
, alias: n_neuron
argument path:model[standard]/fitting_net[polar]/neuron
The number of neurons in each hidden layer of the fitting net. When two hidden layers are of the same size, a skip connection is built.
- activation_function:
- type:
str
, optional, default:tanh
argument path:model[standard]/fitting_net[polar]/activation_function
The activation function in the fitting net. Supported activation functions are “relu”, “relu6”, “softplus”, “sigmoid”, “tanh”, “gelu”, “gelu_tf”, “None”, “none”. Note that “gelu” denotes the custom operator version, and “gelu_tf” denotes the TF standard version. If you set “None” or “none” here, no activation function will be used.
- resnet_dt:
- type:
bool
, optional, default:True
argument path:model[standard]/fitting_net[polar]/resnet_dt
Whether to use a “Timestep” in the skip connection
- precision:
- type:
str
, optional, default:default
argument path:model[standard]/fitting_net[polar]/precision
The precision of the fitting net parameters, supported options are “default”, “float16”, “float32”, “float64”, “bfloat16”. Default follows the interface precision.
- fit_diag:
- type:
bool
, optional, default:True
argument path:model[standard]/fitting_net[polar]/fit_diag
Fit the diagonal part of the rotational invariant polarizability matrix, which will be converted to normal polarizability matrix by contracting with the rotation matrix.
- scale:
- type:
float
|typing.List[float]
, optional, default:1.0
argument path:model[standard]/fitting_net[polar]/scale
The output of the fitting net (polarizability matrix) will be scaled by scale.
- shift_diag:
- type:
bool
, optional, default:True
argument path:model[standard]/fitting_net[polar]/shift_diag
Whether to shift the diagonal of polar, which is beneficial to training. Default is true.
- sel_type:
- type:
typing.List[int]
|NoneType
|int
, optional, alias: pol_type
argument path:model[standard]/fitting_net[polar]/sel_type
The atom types for which the atomic polarizability will be provided. If not set, all types will be selected.
- seed:
- type:
NoneType
|int
, optional
argument path:model[standard]/fitting_net[polar]/seed
Random seed for parameter initialization of the fitting net
When type is set to dipole:
- neuron:
- type:
typing.List[int]
, optional, default:[120, 120, 120]
, alias: n_neuron
argument path:model[standard]/fitting_net[dipole]/neuron
The number of neurons in each hidden layer of the fitting net. When two hidden layers are of the same size, a skip connection is built.
- activation_function:
- type:
str
, optional, default:tanh
argument path:model[standard]/fitting_net[dipole]/activation_function
The activation function in the fitting net. Supported activation functions are “relu”, “relu6”, “softplus”, “sigmoid”, “tanh”, “gelu”, “gelu_tf”, “None”, “none”. Note that “gelu” denotes the custom operator version, and “gelu_tf” denotes the TF standard version. If you set “None” or “none” here, no activation function will be used.
- resnet_dt:
- type:
bool
, optional, default:True
argument path:model[standard]/fitting_net[dipole]/resnet_dt
Whether to use a “Timestep” in the skip connection
- precision:
- type:
str
, optional, default:default
argument path:model[standard]/fitting_net[dipole]/precision
The precision of the fitting net parameters, supported options are “default”, “float16”, “float32”, “float64”, “bfloat16”. Default follows the interface precision.
- sel_type:
- type:
typing.List[int]
|NoneType
|int
, optional, alias: dipole_type
argument path:model[standard]/fitting_net[dipole]/sel_type
The atom types for which the atomic dipole will be provided. If not set, all types will be selected.
- seed:
- type:
NoneType
|int
, optional
argument path:model[standard]/fitting_net[dipole]/seed
Random seed for parameter initialization of the fitting net
When type is set to multi:
Multiple-task model.
- descriptor:
- type:
dict
argument path:model[multi]/descriptor
The descriptor of atomic environment. See model[standard]/descriptor for details.
- fitting_net_dict:
- type:
dict
argument path:model[multi]/fitting_net_dict
The dictionary of multiple fitting nets in multi-task mode. Each fitting_net_dict[fitting_key] is the single definition of fitting of physical properties with user-defined name fitting_key.
When type is set to frozen:
- model_file:
- type:
str
argument path:model[frozen]/model_file
Path to the frozen model file.
When type is set to pairtab:
Pairwise tabulation energy model.
- tab_file:
- type:
str
argument path:model[pairtab]/tab_file
Path to the tabulation file.
- rcut:
- type:
float
argument path:model[pairtab]/rcut
The cut-off radius.
- sel:
- type:
typing.List[int]
|str
|int
argument path:model[pairtab]/sel
This parameter sets the number of selected neighbors. Note that this parameter is a little different from that in other descriptors: instead of separating each type of atoms, only the summation matters. This number is closely tied to efficiency, so one should not make it too large. Usually 200 or less is enough, far below the GPU limitation of 4096. It can be:
int. The maximum number of neighbor atoms to be considered. We recommend it to be less than 200.
List[int]. The length of the list should be the same as the number of atom types in the system. sel[i] gives the selected number of type-i neighbors. Only the summation of sel[i] matters, and it is recommended to be less than 200.
str. Can be “auto:factor” or “auto”. “factor” is a float number larger than 1. This option will automatically determine the sel. In detail, it counts the maximal number of neighbors within the cutoff radius for each type of neighbor, then multiplies the maximum by the “factor”. Finally the number is rounded up to be divisible by 4. The option “auto” is equivalent to “auto:1.1”.
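A sketch of a pairtab model; the tabulation file name is a placeholder, and the rcut and sel values (both required here) are illustrative:

```json
"model": {
    "type": "pairtab",
    "tab_file": "pair_table.txt",
    "rcut": 6.0,
    "sel": 200
}
```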
When type is set to pairwise_dprc:
- qm_model:
- type:
dict
argument path:model[pairwise_dprc]/qm_model
- qmmm_model:
- type:
dict
argument path:model[pairwise_dprc]/qmmm_model
When type is set to linear_ener:
- models:
- type:
dict
|list
argument path:model[linear_ener]/models
The sub-models.
- weights:
- type:
list
|str
argument path:model[linear_ener]/weights
If the type is a list of floats, it gives the weight of each model. If “mean”, the weights are all set to 1 / len(models). If “sum”, the weights are all set to 1.
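For instance, a linear_ener model that averages two frozen sub-models could be written as below; the model file names are hypothetical:

```python
import json

# Linear combination of two frozen models with equal weights.
model = {
    "type": "linear_ener",
    "models": [
        {"type": "frozen", "model_file": "model_a.pb"},  # hypothetical paths
        {"type": "frozen", "model_file": "model_b.pb"},
    ],
    "weights": "mean",  # each weight becomes 1 / len(models) = 0.5
}
print(json.dumps({"model": model}, indent=2))
```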
- learning_rate:
- type:
dict
, optional
argument path:learning_rate
The definition of the learning rate.
- scale_by_worker:
- type:
str
, optional, default:linear
argument path:learning_rate/scale_by_worker
How to alter the learning rate during parallel training or when the batch size is scaled. Valid values are linear (default), sqrt, or none.
Depending on the value of type, different sub args are accepted.
- type:
The type of the learning rate.
When type is set to exp:
- start_lr:
- type:
float
, optional, default:0.001
argument path:learning_rate[exp]/start_lr
The learning rate at the start of the training.
- stop_lr:
- type:
float
, optional, default:1e-08
argument path:learning_rate[exp]/stop_lr
The desired learning rate at the end of the training.
- decay_steps:
- type:
int
, optional, default:5000
argument path:learning_rate[exp]/decay_steps
The learning rate decays every this number of training steps.
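A sketch of an exponential learning-rate schedule; all values below are the documented defaults, under which the rate decays from start_lr toward stop_lr as training proceeds:

```python
import json

# Exponential learning-rate schedule with the documented defaults.
learning_rate = {
    "type": "exp",
    "start_lr": 1.0e-3,
    "stop_lr": 1.0e-8,
    "decay_steps": 5000,  # the rate is decayed once every 5000 training steps
}
print(json.dumps({"learning_rate": learning_rate}, indent=2))
```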
- learning_rate_dict:
- type:
dict
, optional
argument path:learning_rate_dict
The dictionary of definitions of learning rates in multi-task mode. Each learning_rate_dict[fitting_key], with user-defined name fitting_key in model/fitting_net_dict, is the single definition of learning rate.
- loss:
- type:
dict
, optional
argument path:loss
The definition of the loss function. The loss type should be set to tensor, ener, or left unset.
Depending on the value of type, different sub args are accepted.
- type:
When type is set to ener:
- start_pref_e:
- type:
float
|int
, optional, default:0.02
argument path:loss[ener]/start_pref_e
The prefactor of energy loss at the start of the training. Should be larger than or equal to 0. If set to a non-zero value, the energy label should be provided by the file energy.npy in each data system. If both start_pref_e and limit_pref_e are set to 0, the energy will be ignored.
- limit_pref_e:
- type:
float
|int
, optional, default:1.0
argument path:loss[ener]/limit_pref_e
The prefactor of energy loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0.
- start_pref_f:
- type:
float
|int
, optional, default:1000
argument path:loss[ener]/start_pref_f
The prefactor of force loss at the start of the training. Should be larger than or equal to 0. If set to a non-zero value, the force label should be provided by the file force.npy in each data system. If both start_pref_f and limit_pref_f are set to 0, the force will be ignored.
- limit_pref_f:
- type:
float
|int
, optional, default:1.0
argument path:loss[ener]/limit_pref_f
The prefactor of force loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0.
- start_pref_v:
- type:
float
|int
, optional, default:0.0
argument path:loss[ener]/start_pref_v
The prefactor of virial loss at the start of the training. Should be larger than or equal to 0. If set to a non-zero value, the virial label should be provided by the file virial.npy in each data system. If both start_pref_v and limit_pref_v are set to 0, the virial will be ignored.
- limit_pref_v:
- type:
float
|int
, optional, default:0.0
argument path:loss[ener]/limit_pref_v
The prefactor of virial loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0.
- start_pref_ae:
- type:
float
|int
, optional, default:0.0
argument path:loss[ener]/start_pref_ae
The prefactor of atomic energy loss at the start of the training. Should be larger than or equal to 0. If set to a non-zero value, the atom_ener label should be provided by the file atom_ener.npy in each data system. If both start_pref_ae and limit_pref_ae are set to 0, the atomic energy will be ignored.
- limit_pref_ae:
- type:
float
|int
, optional, default:0.0
argument path:loss[ener]/limit_pref_ae
The prefactor of atomic energy loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0.
- start_pref_pf:
- type:
float
|int
, optional, default:0.0
argument path:loss[ener]/start_pref_pf
The prefactor of atomic prefactor force loss at the start of the training. Should be larger than or equal to 0. If set to a non-zero value, the atom_pref label should be provided by the file atom_pref.npy in each data system. If both start_pref_pf and limit_pref_pf are set to 0, the atomic prefactor force will be ignored.
- limit_pref_pf:
- type:
float
|int
, optional, default:0.0
argument path:loss[ener]/limit_pref_pf
The prefactor of atomic prefactor force loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0.
- relative_f:
- type:
float
|NoneType
, optional
argument path:loss[ener]/relative_f
If provided, relative force error will be used in the loss. The difference of force will be normalized by the magnitude of the force in the label with a shift given by relative_f, i.e. DF_i / ( || F || + relative_f ) with DF denoting the difference between prediction and label and || F || denoting the L2 norm of the label.
- enable_atom_ener_coeff:
- type:
bool
, optional, default:False
argument path:loss[ener]/enable_atom_ener_coeff
If true, the energy will be computed as sum_i c_i E_i. c_i should be provided by file atom_ener_coeff.npy in each data system, otherwise it’s 1.
- start_pref_gf:
- type:
float
, optional, default:0.0
argument path:loss[ener]/start_pref_gf
The prefactor of generalized force loss at the start of the training. Should be larger than or equal to 0. If set to a non-zero value, the drdq label should be provided by the file drdq.npy in each data system. If both start_pref_gf and limit_pref_gf are set to 0, the generalized force will be ignored.
- limit_pref_gf:
- type:
float
, optional, default:0.0
argument path:loss[ener]/limit_pref_gf
The prefactor of generalized force loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0.
- numb_generalized_coord:
- type:
int
, optional, default:0
argument path:loss[ener]/numb_generalized_coord
The dimension of generalized coordinates. Required when generalized force loss is used.
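A common pattern with the ener loss is to start with a large force prefactor and let the energy prefactor grow toward its limit. The sketch below mirrors the documented defaults and is illustrative, not a recommendation:

```python
import json

# Energy/force/virial loss prefactors for the "ener" loss type.
loss = {
    "type": "ener",
    "start_pref_e": 0.02, "limit_pref_e": 1.0,  # energy term grows during training
    "start_pref_f": 1000, "limit_pref_f": 1.0,  # force term shrinks during training
    "start_pref_v": 0.0,  "limit_pref_v": 0.0,  # both 0: virial labels are ignored
}
print(json.dumps({"loss": loss}, indent=2))
```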
When type is set to ener_spin:
- start_pref_e:
- type:
float
|int
, optional, default:0.02
argument path:loss[ener_spin]/start_pref_e
The prefactor of energy loss at the start of the training. Should be larger than or equal to 0. If set to a non-zero value, the energy label should be provided by the file energy.npy in each data system. If both start_pref_e and limit_pref_e are set to 0, the energy will be ignored.
- limit_pref_e:
- type:
float
|int
, optional, default:1.0
argument path:loss[ener_spin]/limit_pref_e
The prefactor of energy loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0.
- start_pref_fr:
- type:
float
|int
, optional, default:1000
argument path:loss[ener_spin]/start_pref_fr
The prefactor of force_real_atom loss at the start of the training. Should be larger than or equal to 0. If set to a non-zero value, the force_real_atom label should be provided by the file force_real_atom.npy in each data system. If both start_pref_fr and limit_pref_fr are set to 0, the force_real_atom term will be ignored.
- limit_pref_fr:
- type:
float
|int
, optional, default:1.0
argument path:loss[ener_spin]/limit_pref_fr
The prefactor of force_real_atom loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0.
- start_pref_fm:
- type:
float
|int
, optional, default:10000
argument path:loss[ener_spin]/start_pref_fm
The prefactor of force_magnetic loss at the start of the training. Should be larger than or equal to 0. If set to a non-zero value, the force_magnetic label should be provided by the file force_magnetic.npy in each data system. If both start_pref_fm and limit_pref_fm are set to 0, the force_magnetic term will be ignored.
- limit_pref_fm:
- type:
float
|int
, optional, default:10.0
argument path:loss[ener_spin]/limit_pref_fm
The prefactor of force_magnetic loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0.
- start_pref_v:
- type:
float
|int
, optional, default:0.0
argument path:loss[ener_spin]/start_pref_v
The prefactor of virial loss at the start of the training. Should be larger than or equal to 0. If set to a non-zero value, the virial label should be provided by the file virial.npy in each data system. If both start_pref_v and limit_pref_v are set to 0, the virial will be ignored.
- limit_pref_v:
- type:
float
|int
, optional, default:0.0
argument path:loss[ener_spin]/limit_pref_v
The prefactor of virial loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0.
- start_pref_ae:
- type:
float
|int
, optional, default:0.0
argument path:loss[ener_spin]/start_pref_ae
The prefactor of atom_ener loss at the start of the training. Should be larger than or equal to 0. If set to a non-zero value, the atom_ener label should be provided by the file atom_ener.npy in each data system. If both start_pref_ae and limit_pref_ae are set to 0, the atom_ener term will be ignored.
- limit_pref_ae:
- type:
float
|int
, optional, default:0.0
argument path:loss[ener_spin]/limit_pref_ae
The prefactor of atom_ener loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0.
- start_pref_pf:
- type:
float
|int
, optional, default:0.0
argument path:loss[ener_spin]/start_pref_pf
The prefactor of atom_pref loss at the start of the training. Should be larger than or equal to 0. If set to a non-zero value, the atom_pref label should be provided by the file atom_pref.npy in each data system. If both start_pref_pf and limit_pref_pf are set to 0, the atom_pref term will be ignored.
- limit_pref_pf:
- type:
float
|int
, optional, default:0.0
argument path:loss[ener_spin]/limit_pref_pf
The prefactor of atom_pref loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0.
- relative_f:
- type:
float
|NoneType
, optional
argument path:loss[ener_spin]/relative_f
If provided, relative force error will be used in the loss. The difference of force will be normalized by the magnitude of the force in the label with a shift given by relative_f, i.e. DF_i / ( || F || + relative_f ) with DF denoting the difference between prediction and label and || F || denoting the L2 norm of the label.
- enable_atom_ener_coeff:
- type:
bool
, optional, default:False
argument path:loss[ener_spin]/enable_atom_ener_coeff
If true, the energy will be computed as sum_i c_i E_i. c_i should be provided by file atom_ener_coeff.npy in each data system, otherwise it’s 1.
When type is set to dos:
- start_pref_dos:
- type:
float
|int
, optional, default:0.0
argument path:loss[dos]/start_pref_dos
The prefactor of the Density of States (DOS) loss at the start of the training. Should be larger than or equal to 0. If set to a non-zero value, the DOS label should be provided by the file dos.npy in each data system. If both start_pref_dos and limit_pref_dos are set to 0, the DOS term will be ignored.
- limit_pref_dos:
- type:
float
|int
, optional, default:0.0
argument path:loss[dos]/limit_pref_dos
The prefactor of the DOS loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0.
- start_pref_cdf:
- type:
float
|int
, optional, default:0.0
argument path:loss[dos]/start_pref_cdf
The prefactor of the cumulative distribution function (CDF, the cumulative integral of the DOS) loss at the start of the training. Should be larger than or equal to 0. If both start_pref_cdf and limit_pref_cdf are set to 0, the CDF term will be ignored.
- limit_pref_cdf:
- type:
float
|int
, optional, default:0.0
argument path:loss[dos]/limit_pref_cdf
The prefactor of the CDF loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0.
- start_pref_ados:
- type:
float
|int
, optional, default:1.0
argument path:loss[dos]/start_pref_ados
The prefactor of the atomic DOS (site-projected DOS) loss at the start of the training. Should be larger than or equal to 0. If set to a non-zero value, the atomic DOS label should be provided by the file atom_dos.npy in each data system. If both start_pref_ados and limit_pref_ados are set to 0, the atomic-DOS term will be ignored.
- limit_pref_ados:
- type:
float
|int
, optional, default:1.0
argument path:loss[dos]/limit_pref_ados
The prefactor of the atomic DOS loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0.
- start_pref_acdf:
- type:
float
|int
, optional, default:0.0
argument path:loss[dos]/start_pref_acdf
The prefactor of the cumulative integral of the atomic DOS loss at the start of the training. Should be larger than or equal to 0. If both start_pref_acdf and limit_pref_acdf are set to 0, this term will be ignored.
- limit_pref_acdf:
- type:
float
|int
, optional, default:0.0
argument path:loss[dos]/limit_pref_acdf
The prefactor of the cumulative integral of the atomic DOS loss at the limit of the training, i.e. as the training step goes to infinity. Should be larger than or equal to 0.
When type is set to tensor:
- pref:
- type:
float
|int
argument path:loss[tensor]/pref
The prefactor of the weight of the global loss. It should be larger than or equal to 0. It controls the weight of the loss corresponding to the global label, i.e. polarizability.npy or dipole.npy, whose shape should be #frames x [9 or 3]. If it is larger than 0.0, this npy file must be included.
- pref_atomic:
- type:
float
|int
argument path:loss[tensor]/pref_atomic
The prefactor of the weight of the atomic loss. It should be larger than or equal to 0. It controls the weight of the loss corresponding to the atomic label, i.e. atomic_polarizability.npy or atomic_dipole.npy, whose shape should be #frames x ([9 or 3] x #selected atoms). If it is larger than 0.0, this npy file must be included. Both pref and pref_atomic should be provided, and either can be set to 0.0.
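For example, a tensor loss that trains on both global and atomic dipole labels might be configured as follows; the prefactor values are hypothetical:

```python
import json

# Tensor loss weighting both the global and the atomic labels.
loss = {
    "type": "tensor",
    "pref": 1.0,         # weight of dipole.npy / polarizability.npy (hypothetical)
    "pref_atomic": 1.0,  # weight of atomic_dipole.npy / atomic_polarizability.npy
}
print(json.dumps({"loss": loss}, indent=2))
```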
- loss_dict:
- type:
dict
, optional
argument path:loss_dict
The dictionary of definitions of multiple loss functions in multi-task mode. Each loss_dict[fitting_key], with user-defined name fitting_key in model/fitting_net_dict, is the single definition of loss function, whose type should be set to tensor, ener or left unset.
- training:
- type:
dict
argument path:training
The training options.
- training_data:
- type:
dict
, optional
argument path:training/training_data
Configurations of training data.
- systems:
- type:
typing.List[str]
|str
argument path:training/training_data/systems
The data systems for training. This key can be provided with a list that specifies the systems, or with a string giving the prefix of all systems, from which the list of systems is generated automatically.
- set_prefix:
- type:
str
, optional, default:set
argument path:training/training_data/set_prefix
The prefix of the sets in the systems.
- batch_size:
- type:
typing.List[int]
|str
|int
, optional, default:auto
argument path:training/training_data/batch_size
This key can be
- list: the length of which is the same as the systems. The batch size of each system is given by the elements of the list.
- int: all systems use the same batch size.
- string “auto”: automatically determines the batch size so that batch_size times the number of atoms in the system is no less than 32.
- string “auto:N”: automatically determines the batch size so that batch_size times the number of atoms in the system is no less than N.
- string “mixed:N”: the batch data will be sampled from all systems and merged into a mixed system with batch size N. Only supported by the se_atten descriptor.
If MPI is used, the value should be considered as the batch size per task.
- auto_prob:
- type:
str
, optional, default:prob_sys_size
, alias: auto_prob_style
argument path:training/training_data/auto_prob
Determine the probability of systems automatically. The method is assigned by this key and can be
- “prob_uniform”: the probabilities of all systems are equal, namely 1.0/self.get_nsystems()
- “prob_sys_size”: the probability of a system is proportional to the number of batches in the system
- “prob_sys_size;stt_idx:end_idx:weight;stt_idx:end_idx:weight;…”: the list of systems is divided into blocks. A block is specified by stt_idx:end_idx:weight, where stt_idx is the starting index of the system, end_idx is the ending index (exclusive), the probabilities of the systems in this block sum up to weight, and the relative probabilities within the block are proportional to the number of batches in each system. An example string appears in the sketch after sys_probs below.
- sys_probs:
- type:
NoneType
|typing.List[float]
, optional, default:None
, alias: sys_weights
argument path:training/training_data/sys_probs
A list of floats, if specified. It should have the same length as systems, specifying the probability of each system.
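As referenced under auto_prob, here is a sketch of a training_data block; the system paths and the probability split are hypothetical:

```python
import json

# Three hypothetical systems; the first two share 80% of the sampling
# probability, the third gets the remaining 20%.
training_data = {
    "systems": ["data/sys0", "data/sys1", "data/sys2"],
    "batch_size": "auto:64",  # batch_size * natoms >= 64 in every system
    "auto_prob": "prob_sys_size;0:2:0.8;2:3:0.2",
}
print(json.dumps({"training_data": training_data}, indent=2))
```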
- validation_data:
- type:
NoneType
|dict
, optional, default:None
argument path:training/validation_data
Configurations of validation data. Similar to that of training data, except that a numb_btch argument may be configured.
- systems:
- type:
typing.List[str]
|str
argument path:training/validation_data/systems
The data systems for validation. This key can be provided with a list that specifies the systems, or with a string giving the prefix of all systems, from which the list of systems is generated automatically.
- set_prefix:
- type:
str
, optional, default:set
argument path:training/validation_data/set_prefix
The prefix of the sets in the systems.
- batch_size:
- type:
typing.List[int]
|str
|int
, optional, default:auto
argument path:training/validation_data/batch_size
This key can be
- list: the length of which is the same as the systems. The batch size of each system is given by the elements of the list.
- int: all systems use the same batch size.
- string “auto”: automatically determines the batch size so that batch_size times the number of atoms in the system is no less than 32.
- string “auto:N”: automatically determines the batch size so that batch_size times the number of atoms in the system is no less than N.
- auto_prob:
- type:
str
, optional, default:prob_sys_size
, alias: auto_prob_style
argument path:training/validation_data/auto_prob
Determine the probability of systems automatically. The method is assigned by this key and can be
- “prob_uniform”: the probabilities of all systems are equal, namely 1.0/self.get_nsystems()
- “prob_sys_size”: the probability of a system is proportional to the number of batches in the system
- “prob_sys_size;stt_idx:end_idx:weight;stt_idx:end_idx:weight;…”: the list of systems is divided into blocks. A block is specified by stt_idx:end_idx:weight, where stt_idx is the starting index of the system, end_idx is the ending index (exclusive), the probabilities of the systems in this block sum up to weight, and the relative probabilities within the block are proportional to the number of batches in each system.
- sys_probs:
- type:
NoneType
|typing.List[float]
, optional, default:None
, alias: sys_weights
argument path:training/validation_data/sys_probs
A list of floats, if specified. It should have the same length as systems, specifying the probability of each system.
- numb_btch:
- type:
int
, optional, default:1
, alias: numb_batchargument path:training/validation_data/numb_btch
An integer that specifies the number of batches to be sampled for each validation period.
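A minimal validation_data sketch with an explicit number of validation batches; the system path is hypothetical:

```python
import json

# Validate on 3 batches drawn from one hypothetical system each period.
validation_data = {
    "systems": ["data/validation"],
    "batch_size": 1,
    "numb_btch": 3,
}
print(json.dumps({"validation_data": validation_data}, indent=2))
```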
- mixed_precision:
- type:
dict
, optional
argument path:training/mixed_precision
Configurations of mixed precision.
- output_prec:
- type:
str
, optional, default:float32
argument path:training/mixed_precision/output_prec
The precision of the trainable variables during the mixed-precision training process. Currently only float32 is supported.
- compute_prec:
- type:
str
argument path:training/mixed_precision/compute_prec
The compute precision during the mixed-precision training process. Currently float16 and bfloat16 are supported.
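Combining the two keys, a mixed-precision block could look like this sketch:

```python
import json

# Mixed precision: float32 parameters, half-precision compute.
mixed_precision = {
    "output_prec": "float32",   # currently the only supported option
    "compute_prec": "float16",  # "bfloat16" is the documented alternative
}
print(json.dumps({"mixed_precision": mixed_precision}, indent=2))
```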
- numb_steps:
- type:
int
, alias: stop_batch
argument path:training/numb_steps
The number of training steps. Each step uses one batch of data.
- seed:
- type:
NoneType
|int
, optional
argument path:training/seed
The random seed for getting frames from the training data set.
- disp_file:
- type:
str
, optional, default:lcurve.out
argument path:training/disp_file
The file for printing learning curve.
- disp_freq:
- type:
int
, optional, default:1000
argument path:training/disp_freq
The frequency of printing learning curve.
- save_freq:
- type:
int
, optional, default:1000
argument path:training/save_freq
The frequency of saving checkpoints.
- save_ckpt:
- type:
str
, optional, default:model.ckpt
argument path:training/save_ckpt
The path prefix for saving checkpoint files.
- max_ckpt_keep:
- type:
int
, optional, default:5
argument path:training/max_ckpt_keep
The maximum number of checkpoints to keep. The oldest checkpoints will be deleted once the number of checkpoints exceeds max_ckpt_keep. Defaults to 5.
- disp_training:
- type:
bool
, optional, default:True
argument path:training/disp_training
Displaying verbose information during training.
- time_training:
- type:
bool
, optional, default:True
argument path:training/time_training
Timing during training.
- profiling:
- type:
bool
, optional, default:False
argument path:training/profiling
Profiling during training.
- profiling_file:
- type:
str
, optional, default:timeline.json
argument path:training/profiling_file
Output file for profiling.
- enable_profiler:
- type:
bool
, optional, default:False
argument path:training/enable_profiler
Enable TensorFlow Profiler (available in TensorFlow 2.3) to analyze performance. The log will be saved to tensorboard_log_dir.
- tensorboard:
- type:
bool
, optional, default:False
argument path:training/tensorboard
Enable tensorboard
- tensorboard_log_dir:
- type:
str
, optional, default:log
argument path:training/tensorboard_log_dir
The log directory of tensorboard outputs
- tensorboard_freq:
- type:
int
, optional, default:1
argument path:training/tensorboard_freq
The frequency of writing tensorboard events.
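Collecting the bookkeeping options above, a training section (data blocks omitted) might be sketched as follows; the step counts and frequencies are hypothetical:

```python
import json

# General training options; training_data/validation_data omitted for brevity.
training = {
    "numb_steps": 1000000,   # hypothetical number of training steps
    "seed": 1,               # hypothetical random seed
    "disp_file": "lcurve.out",
    "disp_freq": 1000,
    "save_freq": 10000,      # hypothetical checkpoint frequency
    "save_ckpt": "model.ckpt",
    "max_ckpt_keep": 5,
    "tensorboard": False,
}
print(json.dumps({"training": training}, indent=2))
```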
- data_dict:
- type:
dict
, optional
argument path:training/data_dict
The dictionary of multi DataSystems in multi-task mode. Each data_dict[fitting_key], with user-defined name fitting_key in model/fitting_net_dict, contains training data and optional validation data definitions.
- fitting_weight:
- type:
dict
, optional
argument path:training/fitting_weight
Each fitting_weight[fitting_key], with user-defined name fitting_key in model/fitting_net_dict, is the training weight of the fitting net fitting_key. Fitting nets with higher weights are selected with higher probability for training in each step. Weights are normalized, and negative weights are ignored. If not set, each fitting net is selected with equal probability during training.
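To show how the multi-task dictionaries refer to one another, here is a structural sketch; the keys water_ener and water_dipole are user-chosen names, the empty dicts stand for the full definitions documented elsewhere, and the weights are hypothetical:

```python
import json

# Skeleton of a multi-task input; {} placeholders stand for full definitions.
config = {
    "model": {
        "type": "multi",
        "descriptor": {},  # one shared descriptor
        "fitting_net_dict": {"water_ener": {}, "water_dipole": {}},
    },
    "learning_rate_dict": {"water_ener": {}, "water_dipole": {}},
    "loss_dict": {"water_ener": {}, "water_dipole": {}},
    "training": {
        "data_dict": {"water_ener": {}, "water_dipole": {}},
        "fitting_weight": {"water_ener": 0.7, "water_dipole": 0.3},  # hypothetical
    },
}
print(json.dumps(config, indent=2))
```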
- nvnmd:
- type:
dict
, optional
argument path:nvnmd
The nvnmd options.
- version:
- type:
int
argument path:nvnmd/version
Configure the NVNMD version (0 or 1): 0 supports 4 atom types, 1 supports 32 atom types.
- max_nnei:
- type:
int
argument path:nvnmd/max_nnei
Configure the maximum number of neighbors: 128 or 256 for version 0, 128 for version 1.
- net_size:
- type:
int
argument path:nvnmd/net_size
Configure the number of nodes of the fitting_net; the only supported value is 128.
- map_file:
- type:
str
argument path:nvnmd/map_file
A file containing the mapping tables that replace the calculation of the embedding nets.
- config_file:
- type:
str
argument path:nvnmd/config_file
A file containing the parameters describing how to implement the model on specific hardware.
- weight_file:
- type:
str
argument path:nvnmd/weight_file
A *.npy file containing the weights of the model.
- enable:
- type:
bool
argument path:nvnmd/enable
Enable NVNMD training.
- restore_descriptor:
- type:
bool
argument path:nvnmd/restore_descriptor
Enable restoring the embedding_net parameters from weight.npy.
- restore_fitting_net:
- type:
bool
argument path:nvnmd/restore_fitting_net
Enable restoring the fitting_net parameters from weight.npy.
- quantize_descriptor:
- type:
bool
argument path:nvnmd/quantize_descriptor
Enable quantization of the descriptor.
- quantize_fitting_net:
- type:
bool
argument path:nvnmd/quantize_fitting_net
Enable quantization of the fitting_net.
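Finally, an nvnmd section sketch using only the keys documented above; the file paths are hypothetical:

```python
import json

# NVNMD options; version 0 supports 4 atom types.
nvnmd = {
    "version": 0,
    "max_nnei": 128,
    "net_size": 128,              # the only supported value
    "map_file": "map.npy",        # hypothetical paths
    "config_file": "config.npy",
    "weight_file": "weight.npy",
    "enable": True,
    "restore_descriptor": False,
    "restore_fitting_net": False,
    "quantize_descriptor": False,
    "quantize_fitting_net": False,
}
print(json.dumps({"nvnmd": nvnmd}, indent=2))
```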