deepmd.utils package

class deepmd.utils.DeepmdData(sys_path: str, set_prefix: str = 'set', shuffle_test: bool = True, type_map: Optional[List[str]] = None, optional_type_map: bool = True, modifier=None, trn_all_set: bool = False, sort_atoms: bool = True)[source]

Bases: object

Class for a data system.

It loads data from the hard disk and maintains the data as a data_dict

Parameters
sys_path

Path to the data system

set_prefix

Prefix for the directories of different sets

shuffle_test

If the test data are shuffled

type_map

Gives the name of different atom types

optional_type_map

If the type_map.raw in each system is optional

modifier

Data modifier that has the method modify_data

trn_all_set

Use all sets as training dataset. Otherwise, if the number of sets is more than 1, the last set is left for test.

sort_atoms : bool

Sort atoms by atom types. Sorting must be enabled when the data is fed directly to descriptors, except for mixed types.

Methods

add(key, ndof[, atomic, must, high_prec, ...])

Add a data item to be loaded.

avg(key)

Return the average value of an item.

check_batch_size(batch_size)

Check if the system can get a batch of data with batch_size frames.

check_test_size(test_size)

Check if the system can get a test dataset with test_size frames.

get_atom_type()

Get atom types.

get_batch(batch_size)

Get a batch of data with batch_size frames.

get_data_dict()

Get the data_dict.

get_natoms()

Get number of atoms.

get_natoms_vec(ntypes)

Get number of atoms and number of atoms in different types.

get_ntypes()

Number of atom types in the system.

get_numb_batch(batch_size, set_idx)

Get the number of batches in a set.

get_numb_set()

Get number of training sets.

get_sys_numb_batch(batch_size)

Get the number of batches in the data system.

get_test([ntests])

Get the test data with ntests frames.

get_type_map()

Get the type map.

reduce(key_out, key_in)

Generate a new item from the reduction of another item.

reset_get_batch

add(key: str, ndof: int, atomic: bool = False, must: bool = False, high_prec: bool = False, type_sel: Optional[List[int]] = None, repeat: int = 1, default: float = 0.0, dtype: Optional[dtype] = None)[source]

Add a data item to be loaded.

Parameters
key

The key of the item. The corresponding data is stored in sys_path/set.*/key.npy

ndof

The number of degrees of freedom (dof)

atomic

The item is an atomic property. If False, the size of the data should be nframes x ndof. If True, the size of the data should be nframes x natoms x ndof.

must

The data file sys_path/set.*/key.npy must exist. If must is False and the data file does not exist, data_dict[find_key] is set to 0.0.

high_prec

Load the data and store it in float64; otherwise in float32

type_sel

Select certain types of atoms

repeat

The data will be repeated repeat times.

default : float, default=0.

default value of data

dtype : np.dtype, optional

the dtype of data, overwrites high_prec if provided
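As a sketch of the on-disk layout implied by add(), each item key is stored as sys_path/set.*/key.npy. The snippet below builds a hypothetical system directory with numpy; the flattened (nframes, natoms * ndof) shape for atomic data is an assumption about the raw file layout, not taken from this page.

```python
# Hypothetical sketch of the sys_path/set.*/key.npy layout described by add():
# non-atomic items have shape (nframes, ndof); atomic items are assumed to be
# stored flattened as (nframes, natoms * ndof). Paths and values are made up.
import os
import tempfile

import numpy as np

sys_path = tempfile.mkdtemp()
set_dir = os.path.join(sys_path, "set.000")
os.makedirs(set_dir)

nframes, natoms = 4, 5
energy = np.zeros((nframes, 1))           # e.g. add("energy", 1)
coord = np.zeros((nframes, natoms * 3))   # e.g. add("coord", 3, atomic=True)
np.save(os.path.join(set_dir, "energy.npy"), energy)
np.save(os.path.join(set_dir, "coord.npy"), coord)

loaded = np.load(os.path.join(set_dir, "coord.npy"))
print(loaded.shape)  # (4, 15)
```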

avg(key)[source]

Return the average value of an item.

check_batch_size(batch_size)[source]

Check if the system can get a batch of data with batch_size frames.

check_test_size(test_size)[source]

Check if the system can get a test dataset with test_size frames.

get_atom_type() List[int][source]

Get atom types.

get_batch(batch_size: int) dict[source]

Get a batch of data with batch_size frames. The frames are randomly picked from the data system.

Parameters
batch_size

size of the batch

get_data_dict() dict[source]

Get the data_dict.

get_natoms()[source]

Get number of atoms.

get_natoms_vec(ntypes: int)[source]

Get number of atoms and number of atoms in different types.

Parameters
ntypes

Number of types (may be larger than the actual number of types in the system).

Returns
natoms

natoms[0]: number of local atoms
natoms[1]: total number of atoms held by this processor
natoms[i]: 2 <= i < Ntypes+2, number of type i atoms
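The layout above can be sketched in pure Python, assuming a serial setting where the local and total atom counts coincide; the atom types below are hypothetical.

```python
# Build the natoms vector described above: [n_local, n_total, count per type].
atom_types = [0, 0, 1, 1, 1]  # 5 atoms: two of type 0, three of type 1
ntypes = 3                    # may exceed the types actually present

natoms = [len(atom_types), len(atom_types)]
natoms += [atom_types.count(t) for t in range(ntypes)]
print(natoms)  # [5, 5, 2, 3, 0]
```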

get_ntypes() int[source]

Number of atom types in the system.

get_numb_batch(batch_size: int, set_idx: int) int[source]

Get the number of batches in a set.

get_numb_set() int[source]

Get number of training sets.

get_sys_numb_batch(batch_size: int) int[source]

Get the number of batches in the data system.

get_test(ntests: int = -1) dict[source]

Get the test data with ntests frames.

Parameters
ntests

Size of the test data set. If ntests is -1, all test data will be returned.

get_type_map() List[str][source]

Get the type map.

reduce(key_out: str, key_in: str)[source]

Generate a new item from the reduction of another item.

Parameters
key_out

The name of the reduced item

key_in

The name of the data item to be reduced

reset_get_batch()[source]
class deepmd.utils.DeepmdDataSystem(systems: List[str], batch_size: int, test_size: int, rcut: Optional[float] = None, set_prefix: str = 'set', shuffle_test: bool = True, type_map: Optional[List[str]] = None, optional_type_map: bool = True, modifier=None, trn_all_set=False, sys_probs=None, auto_prob_style='prob_sys_size', sort_atoms: bool = True)[source]

Bases: object

Class for manipulating many data systems.

It is implemented with the help of DeepmdData

Attributes
default_mesh

Mesh for each system.

Methods

add(key, ndof[, atomic, must, high_prec, ...])

Add a data item to be loaded.

add_dict(adict)

Add items to the data system by a dict.

get_batch([sys_idx])

Get a batch of data from the data systems.

get_batch_mixed()

Get a batch of data from the data systems in the mixed way.

get_batch_size()

Get the batch size.

get_batch_standard([sys_idx])

Get a batch of data from the data systems in the standard way.

get_nbatches()

Get the total number of batches.

get_nsystems()

Get the number of data systems.

get_ntypes()

Get the number of types.

get_sys(idx)

Get a certain data system.

get_sys_ntest([sys_idx])

Get number of tests for the currently selected system, or one defined by sys_idx.

get_test([sys_idx, n_test])

Get test data from the data systems.

get_type_map()

Get the type map.

reduce(key_out, key_in)

Generate a new item from the reduction of another item.

compute_energy_shift

get_data_dict

print_summary

set_sys_probs

add(key: str, ndof: int, atomic: bool = False, must: bool = False, high_prec: bool = False, type_sel: Optional[List[int]] = None, repeat: int = 1, default: float = 0.0)[source]

Add a data item to be loaded.

Parameters
key

The key of the item. The corresponding data is stored in sys_path/set.*/key.npy

ndof

The number of degrees of freedom (dof)

atomic

The item is an atomic property. If False, the size of the data should be nframes x ndof. If True, the size of the data should be nframes x natoms x ndof.

must

The data file sys_path/set.*/key.npy must exist. If must is False and the data file does not exist, data_dict[find_key] is set to 0.0.

high_prec

Load the data and store it in float64; otherwise in float32

type_sel

Select certain types of atoms

repeat

The data will be repeated repeat times.

default : float, default=0.

Default value of data

add_dict(adict: dict) None[source]

Add items to the data system by a dict. adict should have items like:

adict[key] = {
    "ndof": ndof,
    "atomic": atomic,
    "must": must,
    "high_prec": high_prec,
    "type_sel": type_sel,
    "repeat": repeat,
}

For the explanation of the keys see add

compute_energy_shift(rcond=None, key='energy')[source]
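compute_energy_shift carries no docstring here. A common way to obtain per-type energy shifts is a least-squares fit of frame energies against per-type atom counts; the sketch below illustrates that idea with numpy.linalg.lstsq (the rcond name mirrors the parameter above). It is an assumption about the approach, not the actual implementation.

```python
import numpy as np

# Hypothetical data: 3 frames, 2 atom types, true per-type energies 1.5 and 0.5.
type_count = np.array([[2, 1], [1, 2], [3, 0]], dtype=float)  # atoms per type
energies = np.array([2 * 1.5 + 1 * 0.5, 1 * 1.5 + 2 * 0.5, 3 * 1.5])

# Solve type_count @ shift ~= energies in the least-squares sense.
shift, *_ = np.linalg.lstsq(type_count, energies, rcond=None)
print(shift)  # approximately [1.5, 0.5]
```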
property default_mesh: List[ndarray]

Mesh for each system.

get_batch(sys_idx: Optional[int] = None) dict[source]

Get a batch of data from the data systems.

Parameters
sys_idx : int

The index of the system from which the batch is obtained. If sys_idx is not None, sys_probs and auto_prob_style are ignored. If sys_idx is None, the system is determined automatically according to sys_probs or auto_prob_style; see the following. This option does not work for mixed systems.

Returns
dict

The batch data

get_batch_mixed() dict[source]

Get a batch of data from the data systems in the mixed way.

Returns
dict

The batch data

get_batch_size() int[source]

Get the batch size.

get_batch_standard(sys_idx: Optional[int] = None) dict[source]

Get a batch of data from the data systems in the standard way.

Parameters
sys_idx : int

The index of the system from which the batch is obtained. If sys_idx is not None, sys_probs and auto_prob_style are ignored. If sys_idx is None, the system is determined automatically according to sys_probs or auto_prob_style; see the following.

Returns
dict

The batch data

get_data_dict(ii: int = 0) dict[source]
get_nbatches() int[source]

Get the total number of batches.

get_nsystems() int[source]

Get the number of data systems.

get_ntypes() int[source]

Get the number of types.

get_sys(idx: int) DeepmdData[source]

Get a certain data system.

get_sys_ntest(sys_idx=None)[source]

Get number of tests for the currently selected system, or one defined by sys_idx.

get_test(sys_idx: Optional[int] = None, n_test: int = -1)[source]

Get test data from the data systems.

Parameters
sys_idx

The test data of the system with index sys_idx will be returned. If it is None, data of the currently selected system will be returned.

n_test

Number of test data. If set to -1, all test data will be returned.

get_type_map() List[str][source]

Get the type map.

print_summary(name)[source]
reduce(key_out, key_in)[source]

Generate a new item from the reduction of another item.

Parameters
key_out

The name of the reduced item

key_in

The name of the data item to be reduced

set_sys_probs(sys_probs=None, auto_prob_style: str = 'prob_sys_size')[source]
class deepmd.utils.LearningRateExp(start_lr: float, stop_lr: float = 5e-08, decay_steps: int = 5000, decay_rate: float = 0.95)[source]

Bases: object

The exponentially decaying learning rate.

The learning rate at step \(t\) is given by

\[\alpha(t) = \alpha_0 \lambda ^ { t / \tau }\]

where \(\alpha\) is the learning rate, \(\alpha_0\) is the starting learning rate, \(\lambda\) is the decay rate, and \(\tau\) is the decay steps.

Parameters
start_lr

Starting learning rate \(\alpha_0\)

stop_lr

Stop learning rate \(\alpha_1\)

decay_steps

Learning rate decay every this number of steps \(\tau\)

decay_rate

The decay rate \(\lambda\). If stop_step is provided in build, then it will be determined automatically and overwritten.
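The schedule above can be sketched in pure Python. The auto_decay_rate helper shows one way the decay rate could be chosen so the schedule reaches stop_lr at a given stop step, consistent with the decay_rate description; this is a sketch of the formula, not the library's implementation.

```python
# alpha(t) = alpha0 * lam ** (t / tau), per the formula above.
def lr_value(step, start_lr=1e-3, decay_rate=0.95, decay_steps=5000):
    return start_lr * decay_rate ** (step / decay_steps)

# Choose lam so that lr_value(stop_step) == stop_lr (assumed behavior of
# the automatic determination mentioned for decay_rate).
def auto_decay_rate(start_lr, stop_lr, decay_steps, stop_step):
    return (stop_lr / start_lr) ** (decay_steps / stop_step)

print(lr_value(0))  # 0.001
print(lr_value(5000))
```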

Methods

build(global_step[, stop_step])

Build the learning rate.

start_lr()

Get the start lr.

value(step)

Get the lr at a certain step.

build(global_step: Tensor, stop_step: Optional[int] = None) Tensor[source]

Build the learning rate.

Parameters
global_step

The tf.Tensor providing the global training step

stop_step

The stop step. If provided, the decay_rate will be determined automatically and overwritten.

Returns
learning_rate

The learning rate

start_lr() float[source]

Get the start lr.

value(step: int) float[source]

Get the lr at a certain step.

class deepmd.utils.PairTab(filename: str)[source]

Bases: object

Pairwise tabulated potential.

Parameters
filename

File name for the short-range tabulated potential. The table is a text data file with (N_t + 1) * N_t / 2 + 1 columns. The first column is the distance between atoms. The second to the last columns are energies for pairs of certain types. For example, with two atom types, 0 and 1, the columns from the 2nd to the 4th are for 0-0, 0-1 and 1-1 correspondingly.
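A worked example of the column count: one distance column plus one energy column per unordered type pair gives (N_t + 1) * N_t / 2 + 1 columns in total.

```python
# Column count of the PairTab text file for n_types atom types:
# 1 distance column + one energy column per unordered type pair.
def n_columns(n_types):
    return (n_types + 1) * n_types // 2 + 1

print(n_columns(2))  # 4: distance, then 0-0, 0-1, 1-1
```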

Methods

get()

Get the serialized table.

reinit(filename)

Initialize the tabulated interaction.

get() Tuple[array, array][source]

Get the serialized table.

reinit(filename: str) None[source]

Initialize the tabulated interaction.

Parameters
filename

File name for the short-range tabulated potential. The table is a text data file with (N_t + 1) * N_t / 2 + 1 columns. The first column is the distance between atoms. The second to the last columns are energies for pairs of certain types. For example, with two atom types, 0 and 1, the columns from the 2nd to the 4th are for 0-0, 0-1 and 1-1 correspondingly.

class deepmd.utils.Plugin[source]

Bases: object

A class to register and restore plugins.

Examples

>>> plugin = Plugin()
>>> @plugin.register("xx")
... def xxx():
...     pass
>>> print(plugin.plugins['xx'])
Attributes
plugins : Dict[str, object]

plugins

Methods

get_plugin(key)

Visit a plugin by key.

register(key)

Register a plugin.

get_plugin(key) object[source]

Visit a plugin by key.

Parameters
key : str

key of the plugin

Returns
object

the plugin

register(key: str) Callable[[object], object][source]

Register a plugin.

Parameters
key : str

key of the plugin

Returns
Callable[[object], object]

decorator
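A minimal sketch of how such a decorator-based registry can work, with the same shape as the documented API (register, get_plugin, plugins); this is an illustration, not the actual deepmd implementation.

```python
# Minimal plugin registry: register(key) returns a decorator that stores the
# decorated object under key and returns it unchanged.
class Plugin:
    def __init__(self):
        self.plugins = {}  # Dict[str, object]

    def register(self, key):
        def decorator(obj):
            self.plugins[key] = obj
            return obj
        return decorator

    def get_plugin(self, key):
        return self.plugins[key]

plugin = Plugin()

@plugin.register("xx")
def xxx():
    pass

print(plugin.get_plugin("xx") is xxx)  # True
```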

class deepmd.utils.PluginVariant(*args, **kwargs)[source]

Bases: object

A class to remove type from input arguments.

Submodules

deepmd.utils.argcheck module

Alias for backward compatibility.

deepmd.utils.argcheck.gen_args(**kwargs) List[Argument][source]
deepmd.utils.argcheck.gen_doc(*, make_anchor=True, make_link=True, **kwargs)[source]
deepmd.utils.argcheck.gen_json(**kwargs)[source]
deepmd.utils.argcheck.list_to_doc(xx)[source]
deepmd.utils.argcheck.normalize(data)[source]
deepmd.utils.argcheck.type_embedding_args()[source]

deepmd.utils.batch_size module

class deepmd.utils.batch_size.AutoBatchSize(initial_batch_size: int = 1024, factor: float = 2.0)[source]

Bases: AutoBatchSize

Methods

execute(callable, start_index, natoms)

Execute a method with a given batch size.

execute_all(callable, total_size, natoms, ...)

Execute a method on all given data.

is_gpu_available()

Check if GPU is available.

is_oom_error(e)

Check if the exception is an OOM error.

is_gpu_available() bool[source]

Check if GPU is available.

Returns
bool

True if GPU is available

is_oom_error(e: Exception) bool[source]

Check if the exception is an OOM error.

Parameters
e : Exception

Exception
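The adaptive strategy suggested by this class can be sketched as: attempt a batch, and on an out-of-memory error halve the batch size and retry. The OOM condition below is simulated with a plain MemoryError; the real class inspects framework-specific exceptions via is_oom_error.

```python
# Process total_size items in batches, halving the batch size on simulated OOM.
def execute_all(fn, total_size, batch_size):
    results = []
    start = 0
    while start < total_size:
        nb = min(batch_size, total_size - start)
        try:
            results.append(fn(start, nb))
            start += nb
        except MemoryError:
            batch_size = max(1, batch_size // 2)  # back off and retry
    return results, batch_size

def fake_compute(start, nb):
    if nb > 4:  # simulated OOM for batches larger than 4
        raise MemoryError
    return list(range(start, start + nb))

out, final_bs = execute_all(fake_compute, 10, 16)
print(final_bs)  # 4
```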

deepmd.utils.compat module

Alias for backward compatibility.

deepmd.utils.compat.convert_input_v0_v1(jdata: Dict[str, Any], warning: bool = True, dump: Optional[Union[str, Path]] = None) Dict[str, Any][source]

Convert input from v0 format to v1.

Parameters
jdata : Dict[str, Any]

loaded json/yaml file

warning : bool, optional

whether to show deprecation warning, by default True

dump : Optional[Union[str, Path]], optional

whether to dump converted file, by default None

Returns
Dict[str, Any]

converted output

deepmd.utils.compat.convert_input_v1_v2(jdata: Dict[str, Any], warning: bool = True, dump: Optional[Union[str, Path]] = None) Dict[str, Any][source]
deepmd.utils.compat.deprecate_numb_test(jdata: Dict[str, Any], warning: bool = True, dump: Optional[Union[str, Path]] = None) Dict[str, Any][source]

Deprecate numb_test since v2.1. It has taken no effect since v2.0.

See #1243.

Parameters
jdata : Dict[str, Any]

loaded json/yaml file

warning : bool, optional

whether to show deprecation warning, by default True

dump : Optional[Union[str, Path]], optional

whether to dump converted file, by default None

Returns
Dict[str, Any]

converted output

deepmd.utils.compat.update_deepmd_input(jdata: Dict[str, Any], warning: bool = True, dump: Optional[Union[str, Path]] = None) Dict[str, Any][source]

deepmd.utils.compress module

deepmd.utils.compress.get_extra_side_embedding_net_variable(self, graph_def, type_side_suffix, varialbe_name, suffix)[source]
deepmd.utils.compress.get_two_side_type_embedding(self, graph)[source]
deepmd.utils.compress.get_type_embedding(self, graph)[source]
deepmd.utils.compress.make_data(self, xx)[source]

deepmd.utils.convert module

deepmd.utils.convert.convert_012_to_21(input_model: str, output_model: str)[source]

Convert DP 0.12 graph to 2.1 graph.

Parameters
input_model : str

filename of the input graph

output_model : str

filename of the output graph

deepmd.utils.convert.convert_10_to_21(input_model: str, output_model: str)[source]

Convert DP 1.0 graph to 2.1 graph.

Parameters
input_model : str

filename of the input graph

output_model : str

filename of the output graph

deepmd.utils.convert.convert_12_to_21(input_model: str, output_model: str)[source]

Convert DP 1.2 graph to 2.1 graph.

Parameters
input_model : str

filename of the input graph

output_model : str

filename of the output graph

deepmd.utils.convert.convert_13_to_21(input_model: str, output_model: str)[source]

Convert DP 1.3 graph to 2.1 graph.

Parameters
input_model : str

filename of the input graph

output_model : str

filename of the output graph

deepmd.utils.convert.convert_20_to_21(input_model: str, output_model: str)[source]

Convert DP 2.0 graph to 2.1 graph.

Parameters
input_model : str

filename of the input graph

output_model : str

filename of the output graph

deepmd.utils.convert.convert_dp012_to_dp10(file: str)[source]

Convert DP 0.12 graph text to 1.0 graph text.

Parameters
file : str

filename of the graph text

deepmd.utils.convert.convert_dp10_to_dp11(file: str)[source]

Convert DP 1.0 graph text to 1.1 graph text.

Parameters
file : str

filename of the graph text

deepmd.utils.convert.convert_dp12_to_dp13(file: str)[source]

Convert DP 1.2 graph text to 1.3 graph text.

Parameters
file : str

filename of the graph text

deepmd.utils.convert.convert_dp13_to_dp20(fname: str)[source]

Convert DP 1.3 graph text to 2.0 graph text.

Parameters
fname : str

filename of the graph text

deepmd.utils.convert.convert_dp20_to_dp21(fname: str)[source]
deepmd.utils.convert.convert_pb_to_pbtxt(pbfile: str, pbtxtfile: str)[source]

Convert DP graph to graph text.

Parameters
pbfile : str

filename of the input graph

pbtxtfile : str

filename of the output graph text

deepmd.utils.convert.convert_pbtxt_to_pb(pbtxtfile: str, pbfile: str)[source]

Convert DP graph text to graph.

Parameters
pbtxtfile : str

filename of the input graph text

pbfile : str

filename of the output graph

deepmd.utils.convert.convert_to_21(input_model: str, output_model: str, version: Optional[str] = None)[source]

Convert DP graph to 2.1 graph.

Parameters
input_model : str

filename of the input graph

output_model : str

filename of the output graph

version : str

version of the input graph; if not specified, it will be detected automatically

deepmd.utils.convert.detect_model_version(input_model: str)[source]

Detect DP graph version.

Parameters
input_model : str

filename of the input graph

deepmd.utils.data module

Alias for backward compatibility.

class deepmd.utils.data.DeepmdData(sys_path: str, set_prefix: str = 'set', shuffle_test: bool = True, type_map: Optional[List[str]] = None, optional_type_map: bool = True, modifier=None, trn_all_set: bool = False, sort_atoms: bool = True)[source]


deepmd.utils.data_system module

Alias for backward compatibility.

class deepmd.utils.data_system.DeepmdDataSystem(systems: List[str], batch_size: int, test_size: int, rcut: Optional[float] = None, set_prefix: str = 'set', shuffle_test: bool = True, type_map: Optional[List[str]] = None, optional_type_map: bool = True, modifier=None, trn_all_set=False, sys_probs=None, auto_prob_style='prob_sys_size', sort_atoms: bool = True)[source]

deepmd.utils.data_system.prob_sys_size_ext(keywords, nsystems, nbatch)[source]
deepmd.utils.data_system.process_sys_probs(sys_probs, nbatch)[source]

deepmd.utils.errors module

exception deepmd.utils.errors.GraphTooLargeError[source]

Bases: Exception

The graph is too large, exceeding protobuf’s hard limit of 2GB.

exception deepmd.utils.errors.GraphWithoutTensorError[source]

Bases: Exception

exception deepmd.utils.errors.OutOfMemoryError[source]

Bases: Exception

This error is caused by out-of-memory (OOM).

deepmd.utils.finetune module

deepmd.utils.finetune.replace_model_params_with_pretrained_model(jdata: Dict[str, Any], pretrained_model: str)[source]

Replace the model params in input script according to pretrained model.

Parameters
jdata : Dict[str, Any]

input script

pretrained_model : str

filename of the pretrained model

deepmd.utils.graph module

deepmd.utils.graph.get_attention_layer_nodes_from_graph_def(graph_def: GraphDef, suffix: str = '') Dict[source]

Get the attention layer nodes with the given tf.GraphDef object.

Parameters
graph_def

The input tf.GraphDef object

suffix : str, optional

The scope suffix

Returns
Dict

The attention layer nodes within the given tf.GraphDef object

deepmd.utils.graph.get_attention_layer_variables_from_graph_def(graph_def: GraphDef, suffix: str = '') Dict[source]

Get the attention layer variables with the given tf.GraphDef object.

Parameters
graph_def : tf.GraphDef

The input tf.GraphDef object

suffix : str, optional

The suffix of the scope

Returns
Dict

The attention layer variables within the given tf.GraphDef object

deepmd.utils.graph.get_embedding_net_nodes(model_file: str, suffix: str = '') Dict[source]

Get the embedding net nodes with the given frozen model (model_file).

Parameters
model_file

The input frozen model path

suffix : str, optional

The suffix of the scope

Returns
Dict

The embedding net nodes with the given frozen model

deepmd.utils.graph.get_embedding_net_nodes_from_graph_def(graph_def: GraphDef, suffix: str = '') Dict[source]

Get the embedding net nodes with the given tf.GraphDef object.

Parameters
graph_def

The input tf.GraphDef object

suffix : str, optional

The scope suffix

Returns
Dict

The embedding net nodes within the given tf.GraphDef object

deepmd.utils.graph.get_embedding_net_variables(model_file: str, suffix: str = '') Dict[source]

Get the embedding net variables with the given frozen model (model_file).

Parameters
model_file

The input frozen model path

suffix : str, optional

The suffix of the scope

Returns
Dict

The embedding net variables within the given frozen model

deepmd.utils.graph.get_embedding_net_variables_from_graph_def(graph_def: GraphDef, suffix: str = '') Dict[source]

Get the embedding net variables with the given tf.GraphDef object.

Parameters
graph_def

The input tf.GraphDef object

suffix : str, optional

The suffix of the scope

Returns
Dict

The embedding net variables within the given tf.GraphDef object

deepmd.utils.graph.get_extra_embedding_net_suffix(type_one_side: bool)[source]

Get the extra embedding net suffix according to the value of type_one_side.

Parameters
type_one_side

The value of type_one_side

Returns
str

The extra embedding net suffix
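Based on the suffix values documented for get_extra_embedding_net_variables_from_graph_def below ("_one_side_ebd" / "_two_side_ebd"), the selection can be sketched as follows; the exact mapping from type_one_side to each value is an assumption.

```python
# Sketch: pick the extra embedding net suffix from type_one_side (assumed mapping).
def get_extra_embedding_net_suffix(type_one_side):
    return "_one_side_ebd" if type_one_side else "_two_side_ebd"

print(get_extra_embedding_net_suffix(True))  # _one_side_ebd
```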

deepmd.utils.graph.get_extra_embedding_net_variables_from_graph_def(graph_def: GraphDef, suffix: str, extra_suffix: str, layer_size: int)[source]

Get extra embedding net variables from the given tf.GraphDef object. The “extra embedding net” means the embedding net with only type embeddings as input, which occurs in the “se_atten_v2” and “se_a_ebd_v2” descriptors.

Parameters
graph_def

The input tf.GraphDef object

suffix : str

The “common” suffix in the descriptor

extra_suffix : str

This value depends on the value of “type_one_side”. It should always be “_one_side_ebd” or “_two_side_ebd”

layer_size : int

The layer size of the embedding net

Returns
Dict

The extra embedding net variables within the given tf.GraphDef object

deepmd.utils.graph.get_fitting_net_nodes(model_file: str) Dict[source]

Get the fitting net nodes with the given frozen model (model_file).

Parameters
model_file

The input frozen model path

Returns
Dict

The fitting net nodes with the given frozen model

deepmd.utils.graph.get_fitting_net_nodes_from_graph_def(graph_def: GraphDef, suffix: str = '') Dict[source]

Get the fitting net nodes with the given tf.GraphDef object.

Parameters
graph_def

The input tf.GraphDef object

suffix

suffix of the scope

Returns
Dict

The fitting net nodes within the given tf.GraphDef object

deepmd.utils.graph.get_fitting_net_variables(model_file: str, suffix: str = '') Dict[source]

Get the fitting net variables from the given frozen model (model_file).

Parameters
model_file

The input frozen model path

suffix

suffix of the scope

Returns
Dict

The fitting net variables within the given frozen model

deepmd.utils.graph.get_fitting_net_variables_from_graph_def(graph_def: GraphDef, suffix: str = '') Dict[source]

Get the fitting net variables with the given tf.GraphDef object.

Parameters
graph_def

The input tf.GraphDef object

suffix

suffix of the scope

Returns
Dict

The fitting net variables within the given tf.GraphDef object

deepmd.utils.graph.get_pattern_nodes_from_graph_def(graph_def: GraphDef, pattern: str) Dict[source]

Get the pattern nodes with the given tf.GraphDef object.

Parameters
graph_def

The input tf.GraphDef object

pattern

The node pattern within the graph_def

Returns
Dict

The matched nodes within the given tf.GraphDef object

deepmd.utils.graph.get_tensor_by_name(model_file: str, tensor_name: str) Tensor[source]

Load a tensor value from the frozen model (model_file).

Parameters
model_file : str

The input frozen model path

tensor_name : str

Indicates which tensor will be loaded from the frozen model

Returns
tf.Tensor

The tensor which was loaded from the frozen model

Raises
GraphWithoutTensorError

Raised if the tensor_name is not found within the frozen model

deepmd.utils.graph.get_tensor_by_name_from_graph(graph: Graph, tensor_name: str) Tensor[source]

Load tensor value from the given tf.Graph object.

Parameters
graph : tf.Graph

The input TensorFlow graph

tensor_name : str

Indicates which tensor will be loaded from the given graph

Returns
tf.Tensor

The tensor which was loaded from the given graph

Raises
GraphWithoutTensorError

Raised if the tensor_name is not found within the given graph

deepmd.utils.graph.get_tensor_by_type(node, data_type: dtype) Tensor[source]

Get the tensor value within the given node according to the input data_type.

Parameters
node

The given TensorFlow graph node

data_type

The data type of the node

Returns
tf.Tensor

The tensor value of the given node

deepmd.utils.graph.get_type_embedding_net_nodes_from_graph_def(graph_def: GraphDef, suffix: str = '') Dict[source]

Get the type embedding net nodes with the given tf.GraphDef object.

Parameters
graph_def

The input tf.GraphDef object

suffix : str, optional

The scope suffix

Returns
Dict

The type embedding net nodes within the given tf.GraphDef object

deepmd.utils.graph.get_type_embedding_net_variables_from_graph_def(graph_def: GraphDef, suffix: str = '') Dict[source]

Get the type embedding net variables with the given tf.GraphDef object.

Parameters
graph_def : tf.GraphDef

The input tf.GraphDef object

suffix : str, optional

The suffix of the scope

Returns
Dict

The type embedding net variables within the given tf.GraphDef object

deepmd.utils.graph.get_variables_from_graph_def_as_numpy_array(graph_def: GraphDef, pattern: str)[source]

Get variables from the given tf.GraphDef object, returned as a NumPy array.

Parameters
graph_def

The input tf.GraphDef object

pattern : str

The name of the variable

Returns
np.ndarray

The numpy array of the variable

deepmd.utils.graph.load_graph_def(model_file: str) Tuple[Graph, GraphDef][source]

Load the graph as well as the graph_def from the frozen model (model_file).

Parameters
model_file : str

The input frozen model path

Returns
tf.Graph

The graph loaded from the frozen model

tf.GraphDef

The graph_def loaded from the frozen model

deepmd.utils.learning_rate module

class deepmd.utils.learning_rate.LearningRateExp(start_lr: float, stop_lr: float = 5e-08, decay_steps: int = 5000, decay_rate: float = 0.95)[source]

Bases: object

The exponentially decaying learning rate.

The learning rate at step \(t\) is given by

\[\alpha(t) = \alpha_0 \lambda ^ { t / \tau }\]

where \(\alpha\) is the learning rate, \(\alpha_0\) is the starting learning rate, \(\lambda\) is the decay rate, and \(\tau\) is the decay steps.
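For reference, the schedule can be reproduced in plain Python. The function names below are illustrative, not part of the deepmd API, and the sketch uses the continuous form of the formula (the actual implementation may staircase the exponent):

```python
def exp_decay_lr(step, start_lr, decay_rate, decay_steps):
    # alpha(t) = alpha_0 * lambda ** (t / tau)
    return start_lr * decay_rate ** (step / decay_steps)

def auto_decay_rate(start_lr, stop_lr, decay_steps, stop_step):
    # When stop_step is given to build(), lambda is chosen so that
    # alpha(stop_step) == stop_lr:
    #   lambda = (stop_lr / start_lr) ** (decay_steps / stop_step)
    return (stop_lr / start_lr) ** (decay_steps / stop_step)

rate = auto_decay_rate(1e-3, 5e-8, 5000, 1_000_000)
lr_mid = exp_decay_lr(500_000, 1e-3, rate, 5000)  # halfway: sqrt(start_lr * stop_lr)
```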

Parameters
start_lr

Starting learning rate \(\alpha_0\)

stop_lr

Stop learning rate \(\alpha_1\)

decay_steps

The learning rate decays every this number of steps \(\tau\)

decay_rate

The decay rate \(\lambda\). If stop_step is provided in build, then it will be determined automatically and overwritten.

Methods

build(global_step[, stop_step])

Build the learning rate.

start_lr()

Get the start lr.

value(step)

Get the lr at a certain step.

build(global_step: Tensor, stop_step: Optional[int] = None) Tensor[source]

Build the learning rate.

Parameters
global_step

The tf Tensor providing the global training step

stop_step

The stop step. If provided, the decay_rate will be determined automatically and overwritten.

Returns
learning_rate

The learning rate

start_lr() float[source]

Get the start lr.

value(step: int) float[source]

Get the lr at a certain step.

deepmd.utils.multi_init module

deepmd.utils.multi_init.replace_model_params_with_frz_multi_model(jdata: Dict[str, Any], pretrained_model: str)[source]

Replace the model params in the input script according to the pretrained frozen multi-task united model.

Parameters
jdata : Dict[str, Any]

The input script

pretrained_model : str

Filename of the pretrained frozen multi-task united model

deepmd.utils.neighbor_stat module

class deepmd.utils.neighbor_stat.NeighborStat(ntypes: int, rcut: float, one_type: bool = False)[source]

Bases: object

Class for getting training data information.

It loads data from a DeepmdData object, and measures the data info, including the nearest nbor distance between atoms, the max nbor size of atoms and the output data range of the environment matrix.

Parameters
ntypes

The number of atom types

rcut

The cut-off radius

one_type : bool, optional, default=False

Treat all types as a single type.

Methods

get_stat(data)

Get the data statistics of the training data, including nearest nbor distance between atoms, max nbor size of atoms.

get_stat(data: DeepmdDataSystem) Tuple[float, List[int]][source]

Get the data statistics of the training data, including nearest nbor distance between atoms, max nbor size of atoms.

Parameters
data

Class for manipulating many data systems. It is implemented with the help of DeepmdData.

Returns
min_nbor_dist

The nearest distance between neighbor atoms

max_nbor_size

A list with ntypes integers, denotes the actual achieved max sel

deepmd.utils.network module

deepmd.utils.network.embedding_net(xx, network_size, precision, activation_fn=<function tanh>, resnet_dt=False, name_suffix='', stddev=1.0, bavg=0.0, seed=None, trainable=True, uniform_seed=False, initial_variables=None, mixed_prec=None)[source]

The embedding network.

The embedding network function \(\mathcal{N}\) is constructed as the composition of multiple layers \(\mathcal{L}^{(i)}\):

\[\mathcal{N} = \mathcal{L}^{(n)} \circ \mathcal{L}^{(n-1)} \circ \cdots \circ \mathcal{L}^{(1)}\]

A layer \(\mathcal{L}\) is given by one of the following forms, depending on the number of nodes: [1]

\[\begin{split}\mathbf{y}=\mathcal{L}(\mathbf{x};\mathbf{w},\mathbf{b})= \begin{cases} \boldsymbol{\phi}(\mathbf{x}^T\mathbf{w}+\mathbf{b}) + \mathbf{x}, & N_2=N_1 \\ \boldsymbol{\phi}(\mathbf{x}^T\mathbf{w}+\mathbf{b}) + (\mathbf{x}, \mathbf{x}), & N_2 = 2N_1\\ \boldsymbol{\phi}(\mathbf{x}^T\mathbf{w}+\mathbf{b}), & \text{otherwise} \\ \end{cases}\end{split}\]

where \(\mathbf{x} \in \mathbb{R}^{N_1}\) is the input vector and \(\mathbf{y} \in \mathbb{R}^{N_2}\) is the output vector. \(\mathbf{w} \in \mathbb{R}^{N_1 \times N_2}\) and \(\mathbf{b} \in \mathbb{R}^{N_2}\) are weights and biases, respectively, both of which are trainable if trainable is True. \(\boldsymbol{\phi}\) is the activation function.
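The three branch cases above can be sketched in NumPy; names and shapes here are illustrative, and the real function builds TensorFlow graph ops rather than evaluating arrays:

```python
import numpy as np

def layer(x, w, b, phi=np.tanh):
    """One embedding-net layer with the shortcut rules above."""
    y = phi(x @ w + b)                    # phi(x^T w + b)
    n1, n2 = w.shape
    if n2 == n1:                          # N2 == N1: identity shortcut
        return y + x
    if n2 == 2 * n1:                      # N2 == 2 N1: duplicated-input shortcut
        return y + np.concatenate([x, x], axis=-1)
    return y                              # otherwise: plain layer

# N = L2 o L1 for a [16, 32] network acting on input of shape [-1, 1]
x = np.ones((4, 1))
w1, b1 = np.full((1, 16), 0.1), np.zeros(16)
w2, b2 = np.full((16, 32), 0.1), np.zeros(32)
out = layer(layer(x, w1, b1), w2, b2)     # shape (4, 32)
```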

Parameters
xx : Tensor

Input tensor \(\mathbf{x}\) of shape [-1,1]

network_size : list of int

Size of the embedding network. For example [16,32,64]

precision

Precision of network weights. For example, tf.float64

activation_fn

Activation function \(\boldsymbol{\phi}\)

resnet_dt : bool

Using time-step in the ResNet construction

name_suffix : str

The name suffix appended to each variable.

stddev : float

Standard deviation for initializing network parameters

bavg : float

Mean of the network initial bias

seed : int

Random seed for initializing network parameters

trainable : bool

If the network is trainable

uniform_seed : bool

Only for the purpose of backward compatibility, retrieves the old behavior of using the random seed

initial_variables : dict

The input dict which stores the embedding net variables

mixed_prec

The input dict which stores the mixed precision setting for the embedding net

References

1

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In Computer Vision - ECCV 2016, pages 630-645. Springer International Publishing, 2016.

deepmd.utils.network.embedding_net_rand_seed_shift(network_size)[source]
deepmd.utils.network.one_layer(inputs, outputs_size, activation_fn=<function tanh>, precision=tf.float64, stddev=1.0, bavg=0.0, name='linear', scope='', reuse=None, seed=None, use_timestep=False, trainable=True, useBN=False, uniform_seed=False, initial_variables=None, mixed_prec=None, final_layer=False)[source]
deepmd.utils.network.one_layer_rand_seed_shift()[source]
deepmd.utils.network.variable_summaries(var: VariableV1, name: str)[source]

Attach a lot of summaries to a Tensor (for TensorBoard visualization).

Parameters
var : tf.Variable

[description]

name : str

variable name

deepmd.utils.pair_tab module

Alias for backward compatibility.

class deepmd.utils.pair_tab.PairTab(filename: str)[source]

Bases: object

Pairwise tabulated potential.

Parameters
filename

File name for the short-range tabulated potential. The table is a text data file with (N_t + 1) * N_t / 2 + 1 columns. The first column is the distance between atoms. The second to the last columns are energies for pairs of certain types. For example, if we have two atom types, 0 and 1, the columns from 2nd to 4th are for 0-0, 0-1 and 1-1 correspondingly.
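The column bookkeeping described above can be checked with a short sketch; the helper names are illustrative, not part of the PairTab API:

```python
def n_table_columns(n_types):
    # one distance column + one energy column per unordered type pair (i <= j)
    return n_types * (n_types + 1) // 2 + 1

def pair_column(i, j, n_types):
    # 0-based column index of the energy for the pair (i, j), assuming the
    # pairs are laid out as 0-0, 0-1, ..., 0-(n-1), 1-1, 1-2, ... after the
    # distance column (column 0)
    if i > j:
        i, j = j, i
    return i * n_types - i * (i - 1) // 2 + (j - i) + 1

ncols = n_table_columns(2)   # 4 columns in total for two atom types
```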

Methods

get()

Get the serialized table.

reinit(filename)

Initialize the tabulated interaction.

get() Tuple[array, array][source]

Get the serialized table.

reinit(filename: str) None[source]

Initialize the tabulated interaction.

Parameters
filename

File name for the short-range tabulated potential. The table is a text data file with (N_t + 1) * N_t / 2 + 1 columns. The first column is the distance between atoms. The second to the last columns are energies for pairs of certain types. For example, if we have two atom types, 0 and 1, the columns from 2nd to 4th are for 0-0, 0-1 and 1-1 correspondingly.

deepmd.utils.parallel_op module

class deepmd.utils.parallel_op.ParallelOp(builder: Callable[[...], Tuple[Dict[str, Tensor], Tuple[Tensor]]], nthreads: Optional[int] = None, config: Optional[ConfigProto] = None)[source]

Bases: object

Run an op with data parallelism.

Parameters
builder : Callable[..., Tuple[Dict[str, tf.Tensor], Tuple[tf.Tensor]]]

Returns two objects: a dict which stores placeholders by key, and a tuple with the final op(s)

nthreads : int, optional

The number of threads

config : tf.ConfigProto, optional

tf.ConfigProto

Examples

>>> from deepmd.env import tf
>>> from deepmd.utils.parallel_op import ParallelOp
>>> def builder():
...     x = tf.placeholder(tf.int32, [1])
...     return {"x": x}, (x + 1)
...
>>> p = ParallelOp(builder, nthreads=4)
>>> def feed():
...     for ii in range(10):
...         yield {"x": [ii]}
...
>>> print(*p.generate(tf.Session(), feed()))
[1] [2] [3] [4] [5] [6] [7] [8] [9] [10]

Methods

generate(sess, feed)

Returns a generator.

generate(sess: Session, feed: Generator[Dict[str, Any], None, None]) Generator[Tuple, None, None][source]

Returns a generator.

Parameters
sess : tf.Session

TensorFlow session

feed : Generator[dict, None, None]

generator which yields feed_dict

Yields
Generator[Tuple, None, None]

generator which yields session returns

deepmd.utils.path module

Alias for backward compatibility.

class deepmd.utils.path.DPH5Path(path: str)[source]

Bases: DPPath

The path class to data system (DeepmdData) for HDF5 files.

Parameters
path : str

path

Notes

OS - HDF5 relationship:

directory - Group

file - Dataset

Methods

glob(pattern)

Search path using the glob pattern.

is_dir()

Check if self is directory.

is_file()

Check if self is file.

load_numpy()

Load NumPy array.

load_txt([dtype])

Load NumPy array from text.

rglob(pattern)

This is like calling DPPath.glob() with **/ added in front of the given relative pattern.

glob(pattern: str) List[DPPath][source]

Search path using the glob pattern.

Parameters
pattern : str

glob pattern

Returns
List[DPPath]

list of paths

is_dir() bool[source]

Check if self is directory.

is_file() bool[source]

Check if self is file.

load_numpy() ndarray[source]

Load NumPy array.

Returns
np.ndarray

loaded NumPy array

load_txt(dtype: Optional[dtype] = None, **kwargs) ndarray[source]

Load NumPy array from text.

Returns
np.ndarray

loaded NumPy array

rglob(pattern: str) List[DPPath][source]

This is like calling DPPath.glob() with **/ added in front of the given relative pattern.

Parameters
pattern : str

glob pattern

Returns
List[DPPath]

list of paths

class deepmd.utils.path.DPOSPath(path: str)[source]

Bases: DPPath

The OS path class to data system (DeepmdData) for real directories.

Parameters
path : str

path

Methods

glob(pattern)

Search path using the glob pattern.

is_dir()

Check if self is directory.

is_file()

Check if self is file.

load_numpy()

Load NumPy array.

load_txt(**kwargs)

Load NumPy array from text.

rglob(pattern)

This is like calling DPPath.glob() with **/ added in front of the given relative pattern.

glob(pattern: str) List[DPPath][source]

Search path using the glob pattern.

Parameters
pattern : str

glob pattern

Returns
List[DPPath]

list of paths

is_dir() bool[source]

Check if self is directory.

is_file() bool[source]

Check if self is file.

load_numpy() ndarray[source]

Load NumPy array.

Returns
np.ndarray

loaded NumPy array

load_txt(**kwargs) ndarray[source]

Load NumPy array from text.

Returns
np.ndarray

loaded NumPy array

rglob(pattern: str) List[DPPath][source]

This is like calling DPPath.glob() with **/ added in front of the given relative pattern.

Parameters
pattern : str

glob pattern

Returns
List[DPPath]

list of paths

class deepmd.utils.path.DPPath(path: str)[source]

Bases: ABC

The path class to data system (DeepmdData).

Parameters
path : str

path

Methods

glob(pattern)

Search path using the glob pattern.

is_dir()

Check if self is directory.

is_file()

Check if self is file.

load_numpy()

Load NumPy array.

load_txt(**kwargs)

Load NumPy array from text.

rglob(pattern)

This is like calling DPPath.glob() with **/ added in front of the given relative pattern.

abstract glob(pattern: str) List[DPPath][source]

Search path using the glob pattern.

Parameters
pattern : str

glob pattern

Returns
List[DPPath]

list of paths

abstract is_dir() bool[source]

Check if self is directory.

abstract is_file() bool[source]

Check if self is file.

abstract load_numpy() ndarray[source]

Load NumPy array.

Returns
np.ndarray

loaded NumPy array

abstract load_txt(**kwargs) ndarray[source]

Load NumPy array from text.

Returns
np.ndarray

loaded NumPy array

abstract rglob(pattern: str) List[DPPath][source]

This is like calling DPPath.glob() with **/ added in front of the given relative pattern.

Parameters
pattern : str

glob pattern

Returns
List[DPPath]

list of paths

deepmd.utils.plugin module

Alias for backward compatibility.

class deepmd.utils.plugin.Plugin[source]

Bases: object

A class to register and restore plugins.

Examples

>>> plugin = Plugin()
>>> @plugin.register("xx")
... def xxx():
...     pass
...
>>> print(plugin.plugins['xx'])

Attributes
plugins : Dict[str, object]

plugins

Methods

get_plugin(key)

Visit a plugin by key.

register(key)

Register a plugin.

get_plugin(key) object[source]

Visit a plugin by key.

Parameters
key : str

key of the plugin

Returns
object

the plugin

register(key: str) Callable[[object], object][source]

Register a plugin.

Parameters
key : str

key of the plugin

Returns
Callable[[object], object]

decorator

class deepmd.utils.plugin.PluginVariant(*args, **kwargs)[source]

Bases: object

A class to remove type from input arguments.

class deepmd.utils.plugin.VariantABCMeta(name, bases, namespace, **kwargs)[source]

Bases: VariantMeta, ABCMeta

Methods

__call__(*args, **kwargs)

Remove type and keys that start with an underscore.

mro(/)

Return a type's method resolution order.

register(subclass)

Register a virtual subclass of an ABC.

class deepmd.utils.plugin.VariantMeta[source]

Bases: object

Methods

__call__(*args, **kwargs)

Remove type and keys that start with an underscore.

deepmd.utils.random module

Alias for backward compatibility.

deepmd.utils.random.choice(a: ndarray, p: Optional[ndarray] = None)[source]

Generates a random sample from a given 1-D array.

Parameters
a : np.ndarray

A random sample is generated from its elements.

p : np.ndarray

The probabilities associated with each entry in a.

Returns
np.ndarray

Arrays with results and their shapes.

deepmd.utils.random.random(size=None)[source]

Return random floats in the half-open interval [0.0, 1.0).

Parameters
size

Output shape.

Returns
np.ndarray

Arrays with results and their shapes.

deepmd.utils.random.seed(val: Optional[int] = None)[source]

Seed the generator.

Parameters
val : int

Seed.

deepmd.utils.random.shuffle(x: ndarray)[source]

Modify a sequence in-place by shuffling its contents.

Parameters
x : np.ndarray

The array or list to be shuffled.
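The functions in this module mirror the NumPy random API, so the seeding contract can be illustrated with NumPy directly. This is a sketch, not the deepmd implementation; the size argument is only for illustration (the deepmd choice wrapper takes just a and p):

```python
import numpy as np

a = np.arange(10)
p = np.full(10, 0.1)                   # uniform probabilities summing to 1

rng = np.random.RandomState(42)        # plays the role of seed(42)
first = rng.choice(a, size=5, p=p)     # plays the role of choice(a, p)

rng = np.random.RandomState(42)        # re-seeding restores the stream
second = rng.choice(a, size=5, p=p)    # identical to `first`
```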

deepmd.utils.sess module

deepmd.utils.sess.run_sess(sess: Session, *args, **kwargs)[source]

Run session with errors caught.

Parameters
sess : tf.Session

TensorFlow Session

*args

Variable length argument list.

**kwargs

Arbitrary keyword arguments.

Returns
Any

the result of sess.run()

deepmd.utils.spin module

class deepmd.utils.spin.Spin(use_spin: Optional[List[bool]] = None, spin_norm: Optional[List[float]] = None, virtual_len: Optional[List[float]] = None)[source]

Bases: object

Class for spin.

Parameters
use_spin

Whether to use atomic spin model for each atom type

spin_norm

The magnitude of atomic spin for each atom type with spin

virtual_len

The distance between virtual atom representing spin and its corresponding real atom for each atom type with spin

Methods

build([reuse, suffix])

Build the computational graph for the spin.

get_ntypes_spin()

Returns the number of atom types which contain spin.

get_spin_norm()

Returns the list of magnitude of atomic spin for each atom type.

get_use_spin()

Returns the list of whether to use spin for each atom type.

get_virtual_len()

Returns the list of distance between real atom and virtual atom for each atom type.

build(reuse=None, suffix='')[source]

Build the computational graph for the spin.

Parameters
reuse

The weights in the networks should be reused when getting the variable.

suffix

Name suffix to identify this descriptor

Returns
embedded_types

The computational graph for embedded types

get_ntypes_spin() int[source]

Returns the number of atom types which contain spin.

get_spin_norm() List[float][source]

Returns the list of magnitude of atomic spin for each atom type.

get_use_spin() List[bool][source]

Returns the list of whether to use spin for each atom type.

get_virtual_len() List[float][source]

Returns the list of distance between real atom and virtual atom for each atom type.

deepmd.utils.tabulate module

class deepmd.utils.tabulate.DPTabulate(descrpt: ~deepmd.descriptor.descriptor.Descriptor, neuron: ~typing.List[int], graph: ~tensorflow.python.framework.ops.Graph, graph_def: ~tensorflow.core.framework.graph_pb2.GraphDef, type_one_side: bool = False, exclude_types: ~typing.List[~typing.List[int]] = [], activation_fn: ~typing.Callable[[~tensorflow.python.framework.tensor.Tensor], ~tensorflow.python.framework.tensor.Tensor] = <function tanh>, suffix: str = '')[source]

Bases: object

Class for tabulation.

Compress a model, which includes tabulating the embedding net. The table is composed of fifth-order polynomial coefficients and is assembled from two sub-tables. The first table takes the stride (parameter) as its uniform stride, while the second table takes 10 * stride as its uniform stride. The range of the first table is automatically detected by deepmd-kit, while the second table ranges from the first table’s upper boundary (upper) to the extrapolate (parameter) * upper.
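The resulting table geometry can be sketched as follows; table_sizes and the numbers used are illustrative, not part of the DPTabulate API:

```python
def table_sizes(lower, upper, extrapolate, stride0, stride1):
    # First sub-table covers [lower, upper) with uniform stride `stride0`;
    # second sub-table covers [upper, extrapolate * upper) with `stride1`
    # (10 * stride0 in the description above).
    n0 = round((upper - lower) / stride0)
    n1 = round((extrapolate * upper - upper) / stride1)
    return n0, n1

# e.g. detected range [0.5, 5.0), extrapolate = 5, strides 0.01 and 0.1
n_first, n_second = table_sizes(0.5, 5.0, 5.0, 0.01, 0.1)
```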

Parameters
descrpt

Descriptor of the original model

neuron

Number of neurons in each hidden layer of the embedding net \(\mathcal{N}\)

graph : tf.Graph

The graph of the original model

graph_def : tf.GraphDef

The graph_def of the original model

type_one_side

If true, build N_types tables; otherwise, build N_types^2 tables

exclude_types : List[List[int]]

The excluded pairs of types which have no interaction with each other. For example, [[0, 1]] means no interaction between type 0 and type 1.

activation_function

The activation function in the embedding net. Supported options are {“tanh”,”gelu”} in common.ACTIVATION_FN_DICT.

suffix : str, optional

The suffix of the scope

Methods

build(min_nbor_dist, extrapolate, stride0, ...)

Build the tables for model compression.

build(min_nbor_dist: float, extrapolate: float, stride0: float, stride1: float) Tuple[Dict[str, int], Dict[str, int]][source]

Build the tables for model compression.

Parameters
min_nbor_dist

The nearest distance between neighbor atoms

extrapolate

The scale of model extrapolation

stride0

The uniform stride of the first table

stride1

The uniform stride of the second table

Returns
lower : dict[str, int]

The lower boundary of the environment matrix, per net

upper : dict[str, int]

The upper boundary of environment matrix by net

deepmd.utils.type_embed module

class deepmd.utils.type_embed.TypeEmbedNet(neuron: List[int] = [], resnet_dt: bool = False, activation_function: Optional[str] = 'tanh', precision: str = 'default', trainable: bool = True, seed: Optional[int] = None, uniform_seed: bool = False, padding: bool = False, **kwargs)[source]

Bases: object

Type embedding network.

Parameters
neuron : list[int]

Number of neurons in each hidden layer of the embedding net

resnet_dt

Time-step dt in the resnet construction: y = x + dt * phi (Wx + b)

activation_function

The activation function in the embedding net. Supported options are “relu”, “relu6”, “softplus”, “sigmoid”, “tanh”, “gelu”, “gelu_tf”, “None”, “none”.

precision

The precision of the embedding net parameters. Supported options are “default”, “float16”, “float32”, “float64”, “bfloat16”.

trainable

If the weights of embedding net are trainable.

seed

Random seed for initializing the network parameters.

uniform_seed

Only for the purpose of backward compatibility, retrieves the old behavior of using the random seed

padding

Concat the zero padding to the output, as the default embedding of empty type.

Methods

build(ntypes[, reuse, suffix])

Build the computational graph for the descriptor.

init_variables(graph, graph_def[, suffix, ...])

Init the type embedding net variables with the given dict.

build(ntypes: int, reuse=None, suffix='')[source]

Build the computational graph for the descriptor.

Parameters
ntypes

Number of atom types.

reuse

The weights in the networks should be reused when getting the variable.

suffix

Name suffix to identify this descriptor

Returns
embedded_types

The computational graph for embedded types

init_variables(graph: Graph, graph_def: GraphDef, suffix='', model_type='original_model') None[source]

Init the type embedding net variables with the given dict.

Parameters
graph : tf.Graph

The input frozen model graph

graph_def : tf.GraphDef

The input frozen model graph_def

suffix

Name suffix to identify this descriptor

model_type

Indicator of whether this model is a compressed model

deepmd.utils.type_embed.embed_atom_type(ntypes: int, natoms: Tensor, type_embedding: Tensor)[source]

Make the embedded type for the atoms in the system. The atoms are assumed to be sorted according to type, thus their types are described by the tf.Tensor natoms; see the explanation below.

Parameters
ntypes:

Number of types.

natoms:

The number of atoms. This tensor has the length of Ntypes + 2:
natoms[0]: number of local atoms;
natoms[1]: total number of atoms held by this processor;
natoms[i], 2 <= i < Ntypes+2: number of type i atoms.

type_embedding:

The type embedding. It has the shape of [ntypes, embedding_dim]

Returns
atom_embedding

The embedded type of each atom. It has the shape of [numb_atoms, embedding_dim]
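The lookup can be sketched in NumPy; embed_atom_type_np is an illustrative re-implementation, not the deepmd function, which operates on TF tensors:

```python
import numpy as np

def embed_atom_type_np(ntypes, natoms, type_embedding):
    # natoms = [n_local, n_all, n_type_0, n_type_1, ...]; atoms are sorted by
    # type, so row i of type_embedding is repeated natoms[2 + i] times.
    counts = natoms[2:2 + ntypes]
    idx = np.repeat(np.arange(ntypes), counts)
    return type_embedding[idx]            # shape: (sum(counts), embedding_dim)

type_embedding = np.eye(2, 4)             # 2 types, embedding_dim = 4
atom_embedding = embed_atom_type_np(2, [3, 3, 2, 1], type_embedding)
```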

deepmd.utils.weight_avg module

Alias for backward compatibility.

deepmd.utils.weight_avg.weighted_average(errors: List[Dict[str, Tuple[float, float]]]) Dict[source]

Compute the weighted average of prediction errors (MAE or RMSE) for a model.

Parameters
errors : List[Dict[str, Tuple[float, float]]]

List: the error of systems. Dict: the error of quantities, name given by the key. str: the name of the quantity, which must start with ‘mae’ or ‘rmse’. Tuple: (error, weight).

Returns
Dict

weighted averages
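A sketch of the computation in plain Python; weighted_average_sketch is illustrative, under the assumption that MAE entries average linearly while RMSE entries average in the squared domain:

```python
from collections import defaultdict

def weighted_average_sketch(errors):
    # errors: one dict per system, mapping "mae_*" / "rmse_*" keys
    # to (error, weight) tuples
    sums = defaultdict(float)
    weights = defaultdict(float)
    for system in errors:
        for key, (err, w) in system.items():
            if key.startswith("mae"):
                sums[key] += err * w          # MAE: weighted mean of errors
            else:
                sums[key] += err * err * w    # RMSE: weighted mean of squares
            weights[key] += w
    out = {}
    for key, total in sums.items():
        mean = total / weights[key]
        out[key] = mean if key.startswith("mae") else mean ** 0.5
    return out

avg = weighted_average_sketch([{"mae_e": (1.0, 2.0)}, {"mae_e": (4.0, 1.0)}])
```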