diff --git a/deepmd/pd/model/descriptor/repflows.py b/deepmd/pd/model/descriptor/repflows.py
index 4ea72daf02..756562e333 100644
--- a/deepmd/pd/model/descriptor/repflows.py
+++ b/deepmd/pd/model/descriptor/repflows.py
@@ -69,8 +69,9 @@ def border_op(
         argument8,
     ) -> paddle.Tensor:
         raise NotImplementedError(
-            "border_op is not available since customized Paddle OP library is not built when freezing the model. "
-            "See documentation for DPA3 for details."
+            "The 'border_op' operator is unavailable because the custom Paddle OP library was not built when freezing the model.\n"
+            "To install 'border_op', run: cd source/op/pd && python setup.py install\n"
+            "For more information, please refer to the DPA3 documentation."
         )
 
     # Note: this hack cannot actually save a model that can be run using LAMMPS.
diff --git a/deepmd/pd/model/descriptor/repformers.py b/deepmd/pd/model/descriptor/repformers.py
index 09e9b51c83..4151833f35 100644
--- a/deepmd/pd/model/descriptor/repformers.py
+++ b/deepmd/pd/model/descriptor/repformers.py
@@ -66,8 +66,9 @@ def border_op(
         argument8,
     ) -> paddle.Tensor:
         raise NotImplementedError(
-            "border_op is not available since customized Paddle OP library is not built when freezing the model. "
-            "See documentation for DPA3 for details."
+            "The 'border_op' operator is unavailable because the custom Paddle OP library was not built when freezing the model.\n"
+            "To install 'border_op', run: cd source/op/pd && python setup.py install\n"
+            "For more information, please refer to the DPA3 documentation."
         )
 
     # Note: this hack cannot actually save a model that can be run using LAMMPS.
diff --git a/doc/install/easy-install.md b/doc/install/easy-install.md
index bbf8d9b1d9..9f4d6d8861 100644
--- a/doc/install/easy-install.md
+++ b/doc/install/easy-install.md
@@ -185,7 +185,10 @@ Switch to the TensorFlow {{ tensorflow_icon }} tab for more information.
 ::::{tab-item} CUDA 12.6
 
 ```bash
-pip install paddlepaddle-gpu==3.0.0 -i https://www.paddlepaddle.org.cn/packages/stable/cu126/
+# release version
+pip install paddlepaddle-gpu==3.1.1 -i https://www.paddlepaddle.org.cn/packages/stable/cu126/
+# nightly-build version
+# pip install --pre paddlepaddle-gpu -i https://www.paddlepaddle.org.cn/packages/nightly/cu126/
 pip install deepmd-kit
 ```
 
@@ -194,7 +197,10 @@ pip install deepmd-kit
 ::::{tab-item} CUDA 11.8
 
 ```bash
-pip install paddlepaddle-gpu==3.0.0 -i https://www.paddlepaddle.org.cn/packages/stable/cu118/
+# release version
+pip install paddlepaddle-gpu==3.1.1 -i https://www.paddlepaddle.org.cn/packages/stable/cu118/
+# nightly-build version
+# pip install --pre paddlepaddle-gpu -i https://www.paddlepaddle.org.cn/packages/nightly/cu118/
 pip install deepmd-kit
 ```
 
@@ -203,7 +209,10 @@ pip install deepmd-kit
 ::::{tab-item} CPU
 
 ```bash
-pip install paddlepaddle==3.0.0 -i https://www.paddlepaddle.org.cn/packages/stable/cpu/
+# release version
+pip install paddlepaddle==3.1.1 -i https://www.paddlepaddle.org.cn/packages/stable/cpu/
+# nightly-build version
+# pip install --pre paddlepaddle -i https://www.paddlepaddle.org.cn/packages/nightly/cpu/
 pip install deepmd-kit
 ```
 
diff --git a/doc/install/install-from-source.md b/doc/install/install-from-source.md
index 1dc72c51fa..a21b8913db 100644
--- a/doc/install/install-from-source.md
+++ b/doc/install/install-from-source.md
@@ -99,11 +99,22 @@ To install Paddle, run
 
 ```sh
 # cu126
-pip install paddlepaddle-gpu==3.0.0 -i https://www.paddlepaddle.org.cn/packages/stable/cu126/
+# release version
+pip install paddlepaddle-gpu==3.1.1 -i https://www.paddlepaddle.org.cn/packages/stable/cu126/
+# nightly-build version
+# pip install --pre paddlepaddle-gpu -i https://www.paddlepaddle.org.cn/packages/nightly/cu126/
+
 # cu118
-pip install paddlepaddle-gpu==3.0.0 -i https://www.paddlepaddle.org.cn/packages/stable/cu118/
+# release version
+pip install paddlepaddle-gpu==3.1.1 -i https://www.paddlepaddle.org.cn/packages/stable/cu118/
+# nightly-build version
+# pip install --pre paddlepaddle-gpu -i https://www.paddlepaddle.org.cn/packages/nightly/cu118/
+
 # cpu
-pip install paddlepaddle==3.0.0 -i https://www.paddlepaddle.org.cn/packages/stable/cpu/
+# release version
+pip install paddlepaddle==3.1.1 -i https://www.paddlepaddle.org.cn/packages/stable/cpu/
+# nightly-build version
+# pip install --pre paddlepaddle -i https://www.paddlepaddle.org.cn/packages/nightly/cpu/
 ```
 
 :::
diff --git a/doc/model/dpa3.md b/doc/model/dpa3.md
index c63b26f90a..0ff46c438f 100644
--- a/doc/model/dpa3.md
+++ b/doc/model/dpa3.md
@@ -1,4 +1,4 @@
-# Descriptor DPA3 {{ pytorch_icon }} {{ jax_icon }} {{ dpmodel_icon }}
+# Descriptor DPA3 {{ pytorch_icon }} {{ jax_icon }} {{ paddle_icon }} {{ dpmodel_icon }}
 
 :::{note}
-**Supported backends**: PyTorch {{ pytorch_icon }}, JAX {{ jax_icon }}, DP {{ dpmodel_icon }}
+**Supported backends**: PyTorch {{ pytorch_icon }}, JAX {{ jax_icon }}, Paddle {{ paddle_icon }}, DP {{ dpmodel_icon }}
@@ -40,7 +40,11 @@ Virial RMSEs were averaged exclusively for systems containing virial labels (`Al
 
 Note that we set `float32` in all DPA3 models, while `float64` in other models by default.
 
-## Requirements of installation from source code {{ pytorch_icon }}
+## Requirements of installation from source code {{ pytorch_icon }} {{ paddle_icon }}
+
+::::{tab-set}
+
+:::{tab-item} PyTorch {{ pytorch_icon }}
 
 To run the DPA3 model on LAMMPS via source code installation (users can skip this step if using [easy installation](../install/easy-install.md)),
 
@@ -53,6 +57,25 @@ If one runs LAMMPS with MPI, the customized OP library for the C++ interface sho
 If one runs LAMMPS with MPI and CUDA devices, it is recommended to compile the customized OP library for the C++ interface with a [CUDA-Aware MPI](https://developer.nvidia.com/mpi-solutions-gpus) library and CUDA,
 otherwise the communication between GPU cards falls back to the slower CPU implementation.
 
+:::
+
+:::{tab-item} Paddle {{ paddle_icon }}
+
+The customized OP library for the Python interface can be installed by running
+
+```sh
+cd deepmd-kit/source/op/pd
+python setup.py install
+```
+
+If one runs LAMMPS with MPI, the customized OP library for the C++ interface should be compiled against the same MPI library as the runtime MPI.
+If one runs LAMMPS with MPI and CUDA devices, it is recommended to compile the customized OP library for the C++ interface with a [CUDA-Aware MPI](https://developer.nvidia.com/mpi-solutions-gpus) library and CUDA,
+otherwise the communication between GPU cards falls back to the slower CPU implementation.
+
+:::
+
+::::
+
 ## Limitations of the JAX backend with LAMMPS {{ jax_icon }}
 
 When using the JAX backend, 2 or more MPI ranks are not supported. One must set `map` to `yes` using the [`atom_modify`](https://docs.lammps.org/atom_modify.html) command.
diff --git a/doc/train/training.md b/doc/train/training.md
index ca0b46c0ef..6ccb43bbd7 100644
--- a/doc/train/training.md
+++ b/doc/train/training.md
@@ -29,10 +29,10 @@ $ dp --pt train input.json
 :::{tab-item} Paddle {{ paddle_icon }}
 
 ```bash
-# training model in eager mode
+# training model
 $ dp --pd train input.json
 
-# [experimental] training model with CINN compiler for better performance,
+# [experimental] training the model with the CINN compiler (~40% speedup),
 # see: https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/guides/paddle_v3_features/cinn_cn.html
 ## If the shape(s) of batch input data are dynamic during training(default).
 $ CINN=1 dp --pd train input.json
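
As a quick follow-up to the install commands above, the short sketch below checks that the PaddlePaddle wheel (release or nightly) works on the target machine before running `dp --pd train input.json` or freezing a model. It is an illustrative check using only public PaddlePaddle APIs (`paddle.utils.run_check`, `paddle.device.is_compiled_with_cuda`); it is not part of this patch and does not verify the custom `border_op` OP library built from `source/op/pd/setup.py`.

```python
# Illustrative sanity check for the PaddlePaddle install referenced above.
# Not part of this patch; it does not test the custom border_op OP library.
import paddle

print("Paddle version:", paddle.__version__)  # e.g. 3.1.1 for the release wheel
print("Compiled with CUDA:", paddle.device.is_compiled_with_cuda())

# run_check() runs a small built-in program to confirm the wheel works
# on the available device (CPU or GPU) and prints a success message.
paddle.utils.run_check()
```

If this check passes but freezing a DPA3 model still raises the `border_op` error message changed in this patch, the custom OP library from `source/op/pd` has not been built; follow the Paddle tab added to `doc/model/dpa3.md` above.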