deepmd.tf.utils.batch_size

Module Contents

Classes

AutoBatchSize

This class allows DeePMD-kit to automatically decide the maximum batch size that will not cause an OOM error.

class deepmd.tf.utils.batch_size.AutoBatchSize(initial_batch_size: int = 1024, factor: float = 2.0)[source]

Bases: deepmd.utils.batch_size.AutoBatchSize

This class allows DeePMD-kit to automatically decide the maximum batch size that will not cause an OOM error.

Parameters:
initial_batch_size : int, default: 1024

initial batch size (number of total atoms) when the environment variable DP_INFER_BATCH_SIZE is not set

factor : float, default: 2.0

factor by which the batch size is increased after a successful execution

Notes

In some CPU environments, the program may be directly killed when OOM. In this case, by default the batch size will not be increased for CPUs. The environment variable DP_INFER_BATCH_SIZE can be set as the batch size.

In other cases, we assume all OOM error will raise OutOfMemoryError.
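The behavior described above (start from an initial size, grow by `factor` on success, back off on OOM, and honor DP_INFER_BATCH_SIZE when it is set) can be illustrated with a minimal, self-contained sketch. This is a hypothetical simplification for illustration, not DeePMD-kit's actual implementation; the class name `AutoBatchSizeSketch` and the `on_success`/`on_oom` hooks are assumptions made here.

```python
import os


class AutoBatchSizeSketch:
    """Sketch of the auto-batch-size idea: grow the batch size by
    `factor` after each success, halve it after an OOM error, and pin
    it when DP_INFER_BATCH_SIZE is set.  Illustrative only; not the
    DeePMD-kit implementation."""

    def __init__(self, initial_batch_size: int = 1024, factor: float = 2.0):
        env = os.environ.get("DP_INFER_BATCH_SIZE", "")
        # A user-provided DP_INFER_BATCH_SIZE fixes the batch size.
        self.fixed = bool(env)
        self.current_batch_size = int(env) if env else initial_batch_size
        self.factor = factor
        self.maximum_working_batch_size = 0
        self.minimal_not_working_batch_size = None

    def on_success(self) -> None:
        """Record a successful execution and try a larger batch."""
        self.maximum_working_batch_size = max(
            self.maximum_working_batch_size, self.current_batch_size
        )
        if not self.fixed:
            grown = int(self.current_batch_size * self.factor)
            limit = self.minimal_not_working_batch_size
            # Only grow if we have not already seen this size fail.
            if limit is None or grown < limit:
                self.current_batch_size = grown

    def on_oom(self) -> None:
        """Record an OOM error and back off to a smaller batch."""
        self.minimal_not_working_batch_size = self.current_batch_size
        self.current_batch_size = max(1, self.current_batch_size // 2)
```

With `initial_batch_size=4` and `factor=2.0`, a success raises the batch size to 8; an OOM at 8 records 8 as the minimal non-working size and drops back to 4, after which the size is never grown back into the known-failing range.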

Attributes:
current_batch_size : int

current batch size (number of total atoms)

maximum_working_batch_size : int

maximum working batch size

minimal_not_working_batch_size : int

minimal batch size known not to work

is_gpu_available() → bool[source]

Check if GPU is available.

Returns:
bool

True if GPU is available

is_oom_error(e: Exception) → bool[source]

Check if the exception is an OOM error.

Parameters:
e : Exception

the exception to check
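One common way to recognize an OOM condition, when the exception class itself is not known in advance, is to match characteristic substrings in the exception message (GPU OOM errors typically mention "OOM" or "out of memory"). The helper below is a hypothetical sketch of that idea, not DeePMD-kit's actual is_oom_error:

```python
def is_oom_error_sketch(e: Exception) -> bool:
    """Sketch: decide whether an exception looks like an OOM error by
    matching common marker strings in its message.  Hypothetical helper
    for illustration; not DeePMD-kit's implementation."""
    oom_markers = ("out of memory", "OOM", "CUDA_ERROR_OUT_OF_MEMORY")
    return any(marker in str(e) for marker in oom_markers)
```

A real implementation would normally check the exception type first (e.g. a framework-specific resource-exhausted error class) and fall back to message matching only for generic exceptions.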