
Commit 89127c9

docs: update DPA-2 citation (#4483)
## Summary by CodeRabbit

- **New Features**
  - Updated the bibliography entry for the DPA-2 model to the new 2024 article.
  - Added a new reference for an attention-based descriptor.
- **Bug Fixes**
  - Corrected reference links in documentation to point to updated DOI links instead of arXiv.
- **Documentation**
  - Revised entries in the credits and model documentation to reflect the latest citations and details.
  - Enhanced clarity and detail in fine-tuning documentation for the TensorFlow and PyTorch implementations.

Signed-off-by: Jinzhe Zeng <jinzhe.zeng@rutgers.edu>

(cherry picked from commit deaeec9)
Parent: 3c2db6a

7 files changed: +32 −22 lines

CITATIONS.bib

Lines changed: 16 additions & 16 deletions
```diff
@@ -128,26 +128,26 @@ @article{Zhang_NpjComputMater_2024_v10_p94
   doi = {10.1038/s41524-024-01278-7},
 }
 
-@misc{Zhang_2023_DPA2,
+@article{Zhang_npjComputMater_2024_v10_p293,
   annote = {DPA-2},
   author = {
     Duo Zhang and Xinzijian Liu and Xiangyu Zhang and Chengqian Zhang and Chun
-    Cai and Hangrui Bi and Yiming Du and Xuejian Qin and Jiameng Huang and
-    Bowen Li and Yifan Shan and Jinzhe Zeng and Yuzhi Zhang and Siyuan Liu and
-    Yifan Li and Junhan Chang and Xinyan Wang and Shuo Zhou and Jianchuan Liu
-    and Xiaoshan Luo and Zhenyu Wang and Wanrun Jiang and Jing Wu and Yudi Yang
-    and Jiyuan Yang and Manyi Yang and Fu-Qiang Gong and Linshuang Zhang and
-    Mengchao Shi and Fu-Zhi Dai and Darrin M. York and Shi Liu and Tong Zhu and
-    Zhicheng Zhong and Jian Lv and Jun Cheng and Weile Jia and Mohan Chen and
-    Guolin Ke and Weinan E and Linfeng Zhang and Han Wang
+    Cai and Hangrui Bi and Yiming Du and Xuejian Qin and Anyang Peng and
+    Jiameng Huang and Bowen Li and Yifan Shan and Jinzhe Zeng and Yuzhi Zhang
+    and Siyuan Liu and Yifan Li and Junhan Chang and Xinyan Wang and Shuo Zhou
+    and Jianchuan Liu and Xiaoshan Luo and Zhenyu Wang and Wanrun Jiang and
+    Jing Wu and Yudi Yang and Jiyuan Yang and Manyi Yang and Fu-Qiang Gong and
+    Linshuang Zhang and Mengchao Shi and Fu-Zhi Dai and Darrin M. York and Shi
+    Liu and Tong Zhu and Zhicheng Zhong and Jian Lv and Jun Cheng and Weile Jia
+    and Mohan Chen and Guolin Ke and Weinan E and Linfeng Zhang and Han Wang
   },
-  title = {
-    {DPA-2: Towards a universal large atomic model for molecular and material
-    simulation}
-  },
-  publisher = {arXiv},
-  year = 2023,
-  doi = {10.48550/arXiv.2312.15492},
+  title = {{DPA-2: a large atomic model as a multi-task learner}},
+  journal = {npj Comput. Mater},
+  year = 2024,
+  volume = 10,
+  number = 1,
+  pages = 293,
+  doi = {10.1038/s41524-024-01493-2},
 }
 
 @article{Zhang_PhysPlasmas_2020_v27_p122704,
```

deepmd/dpmodel/descriptor/dpa2.py

Lines changed: 6 additions & 1 deletion
```diff
@@ -387,7 +387,7 @@ def __init__(
         use_tebd_bias: bool = False,
         type_map: Optional[list[str]] = None,
     ) -> None:
-        r"""The DPA-2 descriptor. see https://arxiv.org/abs/2312.15492.
+        r"""The DPA-2 descriptor[1]_.
 
         Parameters
         ----------
@@ -434,6 +434,11 @@ def __init__(
         sw: torch.Tensor
             The switch function for decaying inverse distance.
 
+        References
+        ----------
+        .. [1] Zhang, D., Liu, X., Zhang, X. et al. DPA-2: a
+           large atomic model as a multi-task learner. npj
+           Comput Mater 10, 293 (2024). https://doi.org/10.1038/s41524-024-01493-2
         """
 
         def init_subclass_params(sub_data, sub_class):
```

deepmd/pt/model/descriptor/dpa2.py

Lines changed: 6 additions & 1 deletion
```diff
@@ -100,7 +100,7 @@ def __init__(
         use_tebd_bias: bool = False,
         type_map: Optional[list[str]] = None,
     ) -> None:
-        r"""The DPA-2 descriptor. see https://arxiv.org/abs/2312.15492.
+        r"""The DPA-2 descriptor[1]_.
 
         Parameters
         ----------
@@ -147,6 +147,11 @@ def __init__(
         sw: torch.Tensor
             The switch function for decaying inverse distance.
 
+        References
+        ----------
+        .. [1] Zhang, D., Liu, X., Zhang, X. et al. DPA-2: a
+           large atomic model as a multi-task learner. npj
+           Comput Mater 10, 293 (2024). https://doi.org/10.1038/s41524-024-01493-2
         """
         super().__init__()
 
```
doc/credits.rst

Lines changed: 1 addition & 1 deletion
```diff
@@ -54,7 +54,7 @@ Cite DeePMD-kit and methods
 .. bibliography::
    :filter: False
 
-   Zhang_2023_DPA2
+   Zhang_npjComputMater_2024_v10_p293
 
 - If frame-specific parameters (`fparam`, e.g. electronic temperature) is used,
```
doc/model/dpa2.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -4,7 +4,7 @@
 **Supported backends**: PyTorch {{ pytorch_icon }}, JAX {{ jax_icon }}, DP {{ dpmodel_icon }}
 :::
 
-The DPA-2 model implementation. See https://arxiv.org/abs/2312.15492 for more details.
+The DPA-2 model implementation. See https://doi.org/10.1038/s41524-024-01493-2 for more details.
 
 Training example: `examples/water/dpa2/input_torch_medium.json`, see [README](../../examples/water/dpa2/README.md) for inputs in different levels.
```
doc/train/finetuning.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -94,7 +94,7 @@ The model section will be overwritten (except the `type_map` subsection) by that
 
 #### Fine-tuning from a multi-task pre-trained model
 
-Additionally, within the PyTorch implementation and leveraging the flexibility offered by the framework and the multi-task training process proposed in DPA2 [paper](https://arxiv.org/abs/2312.15492),
+Additionally, within the PyTorch implementation and leveraging the flexibility offered by the framework and the multi-task training process proposed in DPA2 [paper](https://doi.org/10.1038/s41524-024-01493-2),
 we also support more general multitask pre-trained models, which includes multiple datasets for pre-training. These pre-training datasets share a common descriptor while maintaining their individual fitting nets,
 as detailed in the paper above.
```

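The layout described in this hunk (one descriptor shared across all pre-training datasets, each dataset keeping its own fitting net) can be pictured with a minimal sketch in plain PyTorch. This is an illustration only, not DeePMD-kit's API: `MultiTaskModel`, `feat_in`, `feat_dim`, and `n_tasks` are invented names for the example.

```python
import torch
import torch.nn as nn


class MultiTaskModel(nn.Module):
    """Toy multi-task model: shared descriptor, per-dataset fitting nets."""

    def __init__(self, feat_in: int = 64, feat_dim: int = 32, n_tasks: int = 3):
        super().__init__()
        # Shared descriptor: one embedding reused by every pre-training dataset.
        self.descriptor = nn.Sequential(nn.Linear(feat_in, feat_dim), nn.Tanh())
        # Individual fitting nets: one independent head per dataset.
        self.fitting_nets = nn.ModuleList(
            nn.Linear(feat_dim, 1) for _ in range(n_tasks)
        )

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        # A batch from dataset `task_id` flows through the shared descriptor
        # and through only that dataset's fitting net.
        return self.fitting_nets[task_id](self.descriptor(x))


model = MultiTaskModel()
batch = torch.randn(8, 64)      # toy batch of per-frame features
pred = model(batch, task_id=1)  # uses the fitting net of dataset 1
```

The sketch only shows the parameter-sharing pattern; the actual fine-tuning entry points are those described in the surrounding documentation.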
doc/train/multi-task-training.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -26,7 +26,7 @@ and the Adam optimizer is executed to minimize $L^{(t)}$ for one step to update
 In the case of multi-GPU parallel training, different GPUs will independently select their tasks.
 In the DPA-2 model, this multi-task training framework is adopted.[^1]
 
-[^1]: Duo Zhang, Xinzijian Liu, Xiangyu Zhang, Chengqian Zhang, Chun Cai, Hangrui Bi, Yiming Du, Xuejian Qin, Jiameng Huang, Bowen Li, Yifan Shan, Jinzhe Zeng, Yuzhi Zhang, Siyuan Liu, Yifan Li, Junhan Chang, Xinyan Wang, Shuo Zhou, Jianchuan Liu, Xiaoshan Luo, Zhenyu Wang, Wanrun Jiang, Jing Wu, Yudi Yang, Jiyuan Yang, Manyi Yang, Fu-Qiang Gong, Linshuang Zhang, Mengchao Shi, Fu-Zhi Dai, Darrin M. York, Shi Liu, Tong Zhu, Zhicheng Zhong, Jian Lv, Jun Cheng, Weile Jia, Mohan Chen, Guolin Ke, Weinan E, Linfeng Zhang, Han Wang, [arXiv preprint arXiv:2312.15492 (2023)](https://arxiv.org/abs/2312.15492) licensed under a [Creative Commons Attribution (CC BY) license](http://creativecommons.org/licenses/by/4.0/).
+[^1]: Duo Zhang, Xinzijian Liu, Xiangyu Zhang, Chengqian Zhang, Chun Cai, Hangrui Bi, Yiming Du, Xuejian Qin, Anyang Peng, Jiameng Huang, Bowen Li, Yifan Shan, Jinzhe Zeng, Yuzhi Zhang, Siyuan Liu, Yifan Li, Junhan Chang, Xinyan Wang, Shuo Zhou, Jianchuan Liu, Xiaoshan Luo, Zhenyu Wang, Wanrun Jiang, Jing Wu, Yudi Yang, Jiyuan Yang, Manyi Yang, Fu-Qiang Gong, Linshuang Zhang, Mengchao Shi, Fu-Zhi Dai, Darrin M. York, Shi Liu, Tong Zhu, Zhicheng Zhong, Jian Lv, Jun Cheng, Weile Jia, Mohan Chen, Guolin Ke, Weinan E, Linfeng Zhang, Han Wang, DPA-2: a large atomic model as a multi-task learner. npj Comput Mater 10, 293 (2024). [DOI: 10.1038/s41524-024-01493-2](https://doi.org/10.1038/s41524-024-01493-2) licensed under a [Creative Commons Attribution (CC BY) license](http://creativecommons.org/licenses/by/4.0/).
 
 Compared with the previous TensorFlow implementation, the new support in PyTorch is more flexible and efficient.
 In particular, it makes multi-GPU parallel training and even tasks beyond DFT possible,
```

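The hunk context above summarizes the procedure: at each training step a task $t$ is selected, and the Adam optimizer minimizes $L^{(t)}$ for one step. A minimal sketch of that loop in plain PyTorch follows, reusing the toy `MultiTaskModel` from the previous section; the sampling weights, data loaders, and loss functions are assumed inputs, not DeePMD-kit names.

```python
import torch


def multi_task_train(model, loaders, loss_fns, weights, n_steps=1000):
    """Sketch: sample a task t each step, take one Adam step on L^(t)."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    probs = torch.tensor(weights, dtype=torch.float)
    for _ in range(n_steps):
        t = int(torch.multinomial(probs, 1))  # pick task t with given weights
        x, y = next(loaders[t])               # a batch from task t's dataset
        loss = loss_fns[t](model(x, task_id=t), y)  # L^(t) on this batch
        opt.zero_grad()
        loss.backward()
        # One Adam step updates the shared descriptor plus fitting net t.
        opt.step()
```

Under multi-GPU data parallelism, each rank would run this loop with its own independently sampled `t`, matching the note above that different GPUs select their tasks independently.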