deepmd.train package

Submodules

deepmd.train.run_options module

Module taking care of important package constants.

class deepmd.train.run_options.RunOptions(init_model: Optional[str] = None, init_frz_model: Optional[str] = None, restart: Optional[str] = None, log_path: Optional[str] = None, log_level: int = 0, mpi_log: str = 'master')[source]

Bases: object

Class with info on how to run training (cluster, MPI and GPU config).

Attributes
gpus: Optional[List[int]]

list of GPUs if any are present, else None

is_chief: bool

in distributed training it is True for the main MPI process; in serial it is always True

world_size: int

total worker count

my_rank: int

index of the MPI task

nodename: str

name of the node

nodelist: List[str]

the list of nodes of the current mpirun

my_device: str

device type - gpu or cpu

Methods

print_resource_summary()

Print build and current running cluster configuration summary.

gpus: Optional[List[int]]
property is_chief

Whether my rank is 0.

my_device: str
my_rank: int
nodelist: List[str]
nodename: str
print_resource_summary()[source]

Print build and current running cluster configuration summary.

world_size: int

deepmd.train.trainer module

class deepmd.train.trainer.DPTrainer(jdata, run_opt, is_compress=False)[source]

Bases: object

Methods

save_compressed()

Save the compressed graph.

build

get_evaluation_results

get_feed_dict

get_global_step

print_header

print_on_training

save_checkpoint

train

valid_on_the_fly

build(data=None, stop_batch=0)[source]
get_evaluation_results(batch_list)[source]
get_feed_dict(batch, is_training)[source]
get_global_step()[source]
static print_header(fp, train_results, valid_results)[source]
static print_on_training(fp, train_results, valid_results, cur_batch, cur_lr)[source]
save_checkpoint(cur_batch: int)[source]
save_compressed()[source]

Save the compressed graph.

train(train_data=None, valid_data=None)[source]
valid_on_the_fly(fp, train_batches, valid_batches, print_header=False)[source]