From d0e6ccac5e35330a11a7afb9cc6464d3bfaab744 Mon Sep 17 00:00:00 2001
From: wangyan
Date: Fri, 1 Nov 2024 22:18:41 +0800
Subject: [PATCH 1/2] add pt compression info

---
 doc/freeze/compress.md | 19 +++++++++++++++++--
 1 file changed, 17 insertions(+), 2 deletions(-)

diff --git a/doc/freeze/compress.md b/doc/freeze/compress.md
index e26c85e45a..d7fdc92326 100644
--- a/doc/freeze/compress.md
+++ b/doc/freeze/compress.md
@@ -1,7 +1,7 @@
-# Compress a model {{ tensorflow_icon }}
+# Compress a model {{ tensorflow_icon }} {{ pytorch_icon }}
 
 :::{note}
-**Supported backends**: TensorFlow {{ tensorflow_icon }}
+**Supported backends**: TensorFlow {{ tensorflow_icon }}, PyTorch {{ pytorch_icon }}
 :::
 
 ## Theory
@@ -64,10 +64,25 @@ In the compressed DP model, the number of neighbors is precisely indexed at the
 Once the frozen model is obtained from DeePMD-kit, we can get the neural network structure and its parameters (weights, biases, etc.) from the trained model, and compress it in the following way:
 
+::::{tab-set}
+
+:::{tab-item} TensorFlow {{ tensorflow_icon }}
+
 ```bash
 dp compress -i graph.pb -o graph-compress.pb
 ```
 
+:::
+
+:::{tab-item} PyTorch {{ pytorch_icon }}
+
+```bash
+dp compress -i model.pt -o model-compress.pt
+```
+
+:::
+::::
+
 where `-i` gives the original frozen model, `-o` gives the compressed model.
 
 Several other command line options can be passed to `dp compress`, which can be checked with
 ```bash

From 186b87167826368121e8db66f2ea0b62d6985ac2 Mon Sep 17 00:00:00 2001
From: wangyan
Date: Sat, 2 Nov 2024 09:58:41 +0800
Subject: [PATCH 2/2] change file .pt to .pth

---
 doc/freeze/compress.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/freeze/compress.md b/doc/freeze/compress.md
index d7fdc92326..4f30458df1 100644
--- a/doc/freeze/compress.md
+++ b/doc/freeze/compress.md
@@ -77,7 +77,7 @@ dp compress -i graph.pb -o graph-compress.pb
 :::{tab-item} PyTorch {{ pytorch_icon }}
 
 ```bash
-dp compress -i model.pt -o model-compress.pt
+dp compress -i model.pth -o model-compress.pth
 ```
 
 :::