12. Runtime environment variables#

Note

For build-time environment variables, see Install from source code.

12.1. All interfaces#

DP_INTER_OP_PARALLELISM_THREADS#

Alias: TF_INTER_OP_PARALLELISM_THREADS; Default: 0

Control inter-operator parallelism (concurrent execution of independent OPs) within TensorFlow (when TensorFlow is built against Eigen) and PyTorch native OPs on CPU devices. See How to control the parallelism of a job for details.

DP_INTRA_OP_PARALLELISM_THREADS#

Alias: TF_INTRA_OP_PARALLELISM_THREADS; Default: 0

Control intra-operator parallelism (threads used inside a single OP) within TensorFlow (when TensorFlow is built against Eigen) and PyTorch native OPs on CPU devices. See How to control the parallelism of a job for details.
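
Both variables can be exported in the shell before launching a job or, as a minimal sketch, set from Python before the backend initializes. The values below are purely illustrative:

```python
import os

# Threads used to run independent OPs concurrently.
os.environ["DP_INTER_OP_PARALLELISM_THREADS"] = "2"
# Threads used inside a single OP (e.g. a large matrix multiplication).
os.environ["DP_INTRA_OP_PARALLELISM_THREADS"] = "4"

# Import the compute libraries only after setting the variables: the thread
# pools are typically sized when the backend initializes.
```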

12.2. Environment variables of dependencies#

12.3. Python interface only#

DP_INTERFACE_PREC#

Choices: high, low; Default: high

Control high (double) or low (float) precision of training.

DP_AUTO_PARALLELIZATION#

Choices: 0, 1; Default: 0

TensorFlow backend only. Enable auto parallelization for CPU operators.

DP_JIT#

Choices: 0, 1; Default: 0

TensorFlow backend only. Enable JIT compilation. Note that this option may either improve or degrade performance. Requires a TensorFlow build with JIT support.

DP_INFER_BATCH_SIZE#

Default: 1024 on CPUs; on GPUs, as large as possible without running out of memory

Inference batch size, counted as the number of frames multiplied by the number of atoms per frame.
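
To make the frames-times-atoms accounting concrete, here is an illustrative calculation assuming a hypothetical system with 192 atoms per frame; the actual batching is handled internally by the library:

```python
import os

os.environ["DP_INFER_BATCH_SIZE"] = "1024"   # the documented CPU default

atoms_per_frame = 192                        # hypothetical system size
budget = int(os.environ["DP_INFER_BATCH_SIZE"])

# The batch size counts frames multiplied by atoms, so the number of frames
# evaluated per inference batch is the integer quotient.
frames_per_batch = max(1, budget // atoms_per_frame)
print(frames_per_batch)                      # -> 5
```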

DP_BACKEND#

Default: tensorflow

Default backend.

NUM_WORKERS#

Default: 8 or the number of cores (whichever is smaller)

PyTorch backend only. Number of subprocesses to use for data loading. See the PyTorch documentation for details.
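
A minimal sketch combining several of the Python-only variables; the values are example choices, and the variables should be present in the environment before the package is imported:

```python
import os

os.environ["DP_INTERFACE_PREC"] = "low"   # float instead of double precision
os.environ["DP_BACKEND"] = "pytorch"      # override the default backend
os.environ["NUM_WORKERS"] = "4"           # data-loading subprocesses (PyTorch backend)
```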

12.4. C++ interface only#

These environment variables also apply to third-party programs using the C++ interface, such as LAMMPS.

DP_PLUGIN_PATH#

Type: list of paths, separated by : on Unix and by ; on Windows

List of customized OP plugin libraries to load, such as /path/to/plugin1.so:/path/to/plugin2.so on Linux and /path/to/plugin1.dll;/path/to/plugin2.dll on Windows.
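
When the plugin list is assembled programmatically, Python's os.pathsep matches the separator convention described above, so a portable sketch looks like this (the plugin file names are the placeholders from the example above):

```python
import os

plugins = ["/path/to/plugin1.so", "/path/to/plugin2.so"]

# os.pathsep is ":" on Unix and ";" on Windows, matching DP_PLUGIN_PATH.
os.environ["DP_PLUGIN_PATH"] = os.pathsep.join(plugins)
```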