Installation

Easy installation methods

There are various easy methods to install DeePMD-kit. Choose the one you prefer. If you want to build it yourself, skip to the following two sections.

After the easy installation, DeePMD-kit (dp) and LAMMPS (lmp) will be available to execute. You can try dp -h and lmp -h to see the help messages. mpirun is also available, in case you want to run LAMMPS in parallel.
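For example, a LAMMPS run on 4 MPI ranks would look like this (in.lammps is a placeholder for your own input script):

mpirun -np 4 lmp -in in.lammps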

Install off-line packages

Offline packages for both the CPU and GPU versions are available on the Releases page.

Some packages are split into two files due to the size limit of GitHub. One may merge them into one after downloading:

cat deepmd-kit-2.0.0-cuda11.1_gpu-Linux-x86_64.sh.0 deepmd-kit-2.0.0-cuda11.1_gpu-Linux-x86_64.sh.1 > deepmd-kit-2.0.0-cuda11.1_gpu-Linux-x86_64.sh
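After merging, run the resulting installer and follow its prompts (a sketch assuming a bash-compatible shell; the file name matches the example above):

sh deepmd-kit-2.0.0-cuda11.1_gpu-Linux-x86_64.sh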

Install with conda

DeePMD-kit is available with conda. Install Anaconda or Miniconda first.

One may create an environment that contains the CPU version of DeePMD-kit and LAMMPS:

conda create -n deepmd deepmd-kit=*=*cpu lammps-dp=*=*cpu -c https://conda.deepmodeling.org

Or one may want to create a GPU environment containing CUDA Toolkit:

conda create -n deepmd deepmd-kit=*=*gpu lammps-dp=*=*gpu cudatoolkit=11.1 -c https://conda.deepmodeling.org -c nvidia

One could change the CUDA Toolkit version from 11.1 to 10.1 or 10.0.
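For example, the same command with CUDA Toolkit 10.1:

conda create -n deepmd deepmd-kit=*=*gpu lammps-dp=*=*gpu cudatoolkit=10.1 -c https://conda.deepmodeling.org -c nvidia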

One may specify the DeePMD-kit version, such as 2.0.0, using

conda create -n deepmd deepmd-kit=2.0.0=*cpu lammps-dp=2.0.0=*cpu -c https://conda.deepmodeling.org

One may activate the environment using

conda activate deepmd
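Once the environment is activated, a quick sanity check is to print the help messages mentioned above:

dp -h
lmp -h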

Install with docker

A Docker image for installing DeePMD-kit is available here.

To pull the CPU version:

docker pull ghcr.io/deepmodeling/deepmd-kit:2.0.0_cpu

To pull the GPU version:

docker pull ghcr.io/deepmodeling/deepmd-kit:2.0.0_cuda10.1_gpu
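To start an interactive shell in a container from the pulled image (a minimal sketch using the standard docker run command, assuming the image provides bash; adjust the tag to the version you pulled):

docker run -it ghcr.io/deepmodeling/deepmd-kit:2.0.0_cpu /bin/bash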

Install from source code

Please follow our GitHub webpage to download the latest released version or the development version.

Or get the DeePMD-kit source code by git clone

cd /some/workspace
git clone --recursive https://github.com/deepmodeling/deepmd-kit.git deepmd-kit

The --recursive option clones all submodules needed by DeePMD-kit.
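If you cloned without --recursive, the submodules can still be fetched afterwards from inside the repository using the standard git command:

git submodule update --init --recursive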

For convenience, you may want to record the location of the source code in a variable, say deepmd_source_dir, by

cd deepmd-kit
deepmd_source_dir=`pwd`

Install the Python interface

Install TensorFlow's Python interface

First, check the Python version on your machine:

python --version

We follow the virtual environment approach to install TensorFlow's Python interface. Full instructions can be found on TensorFlow's official website. Here we assume that the Python interface will be installed into the virtual environment directory $tensorflow_venv:

virtualenv -p python3 $tensorflow_venv
source $tensorflow_venv/bin/activate
pip install --upgrade pip
pip install --upgrade tensorflow==2.3.0

Note that every time a new shell is started and one wants to use DeePMD-kit, the virtual environment should be activated by

source $tensorflow_venv/bin/activate

To exit the virtual environment, run

deactivate

If one has multiple Python interpreters named like python3.x, a specific one can be selected by, for example

virtualenv -p python3.7 $tensorflow_venv

If one does not need GPU support for DeePMD-kit and is concerned about package size, the CPU-only version of TensorFlow can be installed instead:

pip install --upgrade tensorflow-cpu==2.3.0

To verify the installation, run

python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"

Remember to activate the virtual environment every time you use DeePMD-kit.

Install the DeePMD-kit's Python interface

Execute

cd $deepmd_source_dir
pip install .

One may set the following environment variables before executing pip (a usage example follows the table):

| Environment variables | Allowed value | Default value | Usage |
| --- | --- | --- | --- |
| DP_VARIANT | cpu, cuda, rocm | cpu | Build CPU variant or GPU variant with CUDA or ROCM support. |
| CUDA_TOOLKIT_ROOT_DIR | Path | Detected automatically | The path to the CUDA toolkit directory. |
| ROCM_ROOT | Path | Detected automatically | The path to the ROCM toolkit directory. |
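For example, to build the CUDA variant with an explicit toolkit location (the path below is a placeholder; substitute your own):

DP_VARIANT=cuda CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda pip install .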

To test the installation, one should first leave the source directory

cd /some/other/workspace

then execute

dp -h

It will print help information like:

usage: dp [-h] {train,freeze,test} ...

DeePMD-kit: A deep learning package for many-body potential energy
representation and molecular dynamics

optional arguments:
  -h, --help           show this help message and exit

Valid subcommands:
  {train,freeze,test}
    train              train a model
    freeze             freeze the model
    test               test the model

Install the C++ interface

If one does not need to use DeePMD-kit with LAMMPS or i-PI, then the Python interface installed in the previous section does everything, and this section can be safely skipped.

Install TensorFlow's C++ interface

Check the compiler version on your machine

gcc --version

The C++ interface of DeePMD-kit has been tested with gcc >= 4.8. Note that the i-PI support is compiled only with gcc >= 4.9.

First, the C++ interface of TensorFlow should be installed. Note that the TensorFlow version must be consistent with that of the Python interface. You may follow the instructions to install the corresponding C++ interface.

Install the DeePMD-kit’s C++ interface

Now go to the source code directory of DeePMD-kit and create a build directory.

cd $deepmd_source_dir/source
mkdir build 
cd build

Assuming you want to install DeePMD-kit into the path $deepmd_root, execute cmake:

cmake -DTENSORFLOW_ROOT=$tensorflow_root -DCMAKE_INSTALL_PREFIX=$deepmd_root ..

where the variable tensorflow_root stores the location where TensorFlow's C++ interface is installed.
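For example, one might set the two variables like this before running cmake (both paths are placeholders; substitute the locations on your machine):

tensorflow_root=/path/to/tensorflow
deepmd_root=/path/to/deepmd_root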

One may add the following arguments to cmake (an example follows the table):

| CMake arguments | Allowed value | Default value | Usage |
| --- | --- | --- | --- |
| -DTENSORFLOW_ROOT=&lt;value&gt; | Path | - | The path to TensorFlow's C++ interface. |
| -DCMAKE_INSTALL_PREFIX=&lt;value&gt; | Path | - | The path where DeePMD-kit will be installed. |
| -DUSE_CUDA_TOOLKIT=&lt;value&gt; | TRUE or FALSE | FALSE | If TRUE, build GPU support with the CUDA toolkit. |
| -DCUDA_TOOLKIT_ROOT_DIR=&lt;value&gt; | Path | Detected automatically | The path to the CUDA toolkit directory. |
| -DUSE_ROCM_TOOLKIT=&lt;value&gt; | TRUE or FALSE | FALSE | If TRUE, build GPU support with the ROCM toolkit. |
| -DROCM_ROOT=&lt;value&gt; | Path | Detected automatically | The path to the ROCM toolkit directory. |
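For example, to build with CUDA support one may combine the arguments from the table above (a sketch; the two variables are set as before):

cmake -DTENSORFLOW_ROOT=$tensorflow_root -DUSE_CUDA_TOOLKIT=TRUE -DCMAKE_INSTALL_PREFIX=$deepmd_root ..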

If cmake has executed successfully, then run

make -j4
make install

The option -j4 means using 4 processes in parallel. You may want to use a different number according to your hardware.

If everything works fine, you will have the following executable and libraries installed in $deepmd_root/bin and $deepmd_root/lib

$ ls $deepmd_root/bin
dp_ipi
$ ls $deepmd_root/lib
libdeepmd_ipi.so  libdeepmd_op.so  libdeepmd.so

Install LAMMPS’s DeePMD-kit module

DeePMD-kit provides a module for running MD simulations with LAMMPS. Now build the DeePMD-kit module for LAMMPS:

cd $deepmd_source_dir/source/build
make lammps

DeePMD-kit will generate a module called USER-DEEPMD in the build directory. If you need the low-precision version, move env_low.sh to env.sh in that directory. Now download the LAMMPS source code (29Oct2020 or later) and uncompress it:

cd /some/workspace
wget https://github.com/lammps/lammps/archive/stable_29Oct2020.tar.gz
tar xf stable_29Oct2020.tar.gz

The LAMMPS source code is now in the directory lammps-stable_29Oct2020. Go into the LAMMPS source and copy the DeePMD-kit module there:

cd lammps-stable_29Oct2020/src/
cp -r $deepmd_source_dir/source/build/USER-DEEPMD .

Now build LAMMPS

make yes-kspace
make yes-user-deepmd
make mpi -j4

If everything works fine, you will end up with an executable lmp_mpi.

./lmp_mpi -h

The DeePMD-kit module can be removed from LAMMPS source code by

make no-user-deepmd

Install i-PI

i-PI works in a client-server model: i-PI provides the server, which integrates the replica positions of the atoms, while DeePMD-kit provides a client named dp_ipi that computes the interactions (including energy, force and virial). The server and client communicate via a Unix domain socket or an Internet socket. Full instructions for i-PI can be found here. The source code and complete installation instructions for i-PI can be found here. To use i-PI with already existing drivers, install and update it using pip:

pip install -U i-PI

Test with Pytest:

pip install pytest
pytest --pyargs ipi.tests

Building conda packages

One may want both the convenience and the customizability of DeePMD-kit. To achieve this, one can consider building conda packages. We provide build scripts in the deepmd-kit-recipes organization. These build tools are driven by conda-build and conda-smithy.

For example, if one wants to turn on the MPIIO package in LAMMPS, go to the lammps-dp-feedstock repository and modify recipe/build.sh: -D PKG_MPIIO=OFF should be changed to -D PKG_MPIIO=ON. Then go to the main directory and execute

./build-locally.py

This requires that Docker be installed. After the build finishes, the packages will be generated in build_artifacts/linux-64 and build_artifacts/noarch, and one can then install them by executing

conda create -n deepmd lammps-dp -c file:///path/to/build_artifacts -c https://conda.deepmodeling.org -c nvidia

One may also upload the packages to one's own Anaconda channel so that they can be installed on other machines:

anaconda upload /path/to/build_artifacts/linux-64/*.tar.bz2 /path/to/build_artifacts/noarch/*.tar.bz2