1.1. Easy install
There are various easy methods to install DeePMD-kit. Choose one that you prefer. If you want to build by yourself, jump to the next two sections.
After your easy installation, DeePMD-kit (dp) and LAMMPS (lmp) will be available to execute. You can try dp -h and lmp -h to see the help. mpirun is also available in case you want to train models or run LAMMPS in parallel.
Note
The off-line packages and conda packages require the GNU C Library 2.17 or above. The GPU version requires a compatible NVIDIA driver to be installed in advance. It is possible to force conda to override the detection during installation, but these requirements still apply at runtime. You can refer to the DeepModeling conda FAQ for more information.
1.1.1. Install off-line packages
Both CPU and GPU offline packages are available on the Releases page.
Some packages are split into two files due to GitHub's file size limit. One may merge them into one after downloading:
cat deepmd-kit-2.2.9-cuda118-Linux-x86_64.sh.0 deepmd-kit-2.2.9-cuda118-Linux-x86_64.sh.1 > deepmd-kit-2.2.9-cuda118-Linux-x86_64.sh
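Concatenating the parts in order restores a byte-identical installer. A minimal sketch of the same technique with dummy data (the file names below are placeholders, not the real package):

```shell
# Sketch: split a file into fixed-size parts and reassemble it with cat,
# mirroring how the .0/.1 installer parts above are merged.
printf 'example installer payload\n' > installer.sh
split -b 10 installer.sh installer.sh.part.     # produces installer.sh.part.aa, .ab, ...
cat installer.sh.part.* > installer-merged.sh   # shell glob order preserves part order
cmp installer.sh installer-merged.sh && echo "merge OK"
```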
One may enable the environment using
conda activate /path/to/deepmd-kit
1.1.2. Install with conda
DeePMD-kit is available with conda. Install Anaconda, Miniconda, or miniforge first. You can refer to the DeepModeling conda FAQ for how to set up a conda environment.
1.1.2.1. conda-forge channel
DeePMD-kit is available on the conda-forge channel:
conda create -n deepmd deepmd-kit lammps horovod -c conda-forge
The supported platforms include Linux x86-64, macOS x86-64, and macOS arm64. Read conda-forge FAQ to learn how to install CUDA-enabled packages.
1.1.2.2. Official channel
Danger
Deprecated since version 3.0.0: The official channel is deprecated as of version 3.0.0. Old packages will still be available at https://conda.deepmodeling.com. Maintainers will build packages in the conda-forge organization together with other conda-forge members.
One may create an environment that contains the CPU version of DeePMD-kit and LAMMPS:
conda create -n deepmd deepmd-kit=*=*cpu libdeepmd=*=*cpu lammps -c https://conda.deepmodeling.com -c defaults
Or one may want to create a GPU environment containing CUDA Toolkit:
conda create -n deepmd deepmd-kit=*=*gpu libdeepmd=*=*gpu lammps cudatoolkit=11.6 horovod -c https://conda.deepmodeling.com -c defaults
One could change the CUDA Toolkit version to 10.2 or 11.6.
One may specify the DeePMD-kit version, such as 2.2.9, using
conda create -n deepmd deepmd-kit=2.2.9=*cpu libdeepmd=2.2.9=*cpu lammps horovod -c https://conda.deepmodeling.com -c defaults
One may enable the environment using
conda activate deepmd
1.1.3. Install with docker
A docker image for installing DeePMD-kit is available here.
To pull the CPU version:
docker pull ghcr.io/deepmodeling/deepmd-kit:2.1.1_cpu
To pull the GPU version:
docker pull ghcr.io/deepmodeling/deepmd-kit:2.1.1_cuda11.6_gpu
To pull the ROCm version:
docker pull deepmodeling/dpmdkit-rocm:dp2.0.3-rocm4.5.2-tf2.6-lmp29Sep2021
1.1.4. Install Python interface with pip
If you have no existing TensorFlow installed, you can use pip to install the pre-built package of the Python interface with CUDA 12 support:
pip install deepmd-kit[gpu,cu12]
The cu12 extra is required only when the CUDA Toolkit and cuDNN are not already installed.
To install the package built against CUDA 11.8, use
pip install deepmd-kit-cu11[gpu,cu11]
Or install the CPU version without CUDA support:
pip install deepmd-kit[cpu]
The LAMMPS module and the i-PI driver are only provided on Linux and macOS. To install LAMMPS and/or i-PI, add lmp and/or ipi to the extras:
pip install deepmd-kit[gpu,cu12,lmp,ipi]
MPICH is required for parallel running. (The macOS arm64 package doesn’t support MPI yet.)
It is suggested to install the package into an isolated environment. The supported platforms include Linux x86-64 and aarch64 with GNU C Library 2.28 or above, macOS x86-64 and arm64, and Windows x86-64. A specific version of TensorFlow compatible with DeePMD-kit will also be installed.
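The isolated-environment suggestion can be sketched with Python's standard venv module (the environment name deepmd-venv is an arbitrary placeholder; the install command inside it is the one from this section and is left commented):

```shell
# Sketch: create an isolated virtual environment before installing
# (assumes python3 is on PATH; "deepmd-venv" is an arbitrary name).
python3 -m venv deepmd-venv
. deepmd-venv/bin/activate
# Inside the environment, install the package as shown above, e.g.:
# pip install deepmd-kit[gpu,cu12]
deactivate
```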
Warning
If your platform is not supported, or you want to build against an already installed TensorFlow, or you want to enable ROCm support, please build from source.