Commit ac9d2aa

Updates citation format and adds acknowledgment
Converts the BibTeX citation to proper arXiv format with an eprint ID and primary classification. Adds OpenSeek to the acknowledgments for kernel development support. Adds Liangdong Wang and Guang Liu as additional authors in the citation.
1 parent b101b0e commit ac9d2aa

1 file changed (+9 −5 lines)

1 file changed

+9
-5
lines changed

README.md

Lines changed: 9 additions & 5 deletions
@@ -235,18 +235,22 @@ This project is licensed under the BSD 3-Clause License. See [LICENSE](LICENSE)
 If you use Flash-DMA in your research, please cite:
 
 ```bibtex
-@misc{flash_dma_2025,
-  title={Trainable Dynamic Mask Sparse Attention},
-  author={Jingze Shi and Yifan Wu and Bingheng Wu and Yiran Peng and Yuyu Luo},
-  year={2025},
-  url={https://github.com/SmallDoges/flash-dmattn}
+@misc{shi2025trainabledynamicmasksparse,
+  title={Trainable Dynamic Mask Sparse Attention},
+  author={Jingze Shi and Yifan Wu and Bingheng Wu and Yiran Peng and Liangdong Wang and Guang Liu and Yuyu Luo},
+  year={2025},
+  eprint={2508.02124},
+  archivePrefix={arXiv},
+  primaryClass={cs.AI},
+  url={https://arxiv.org/abs/2508.02124},
 }
 ```
 
 ## Acknowledgments
 
 This project builds upon and integrates several excellent works:
 
+- **[OpenSeek](https://github.com/FlagAI-Open/OpenSeek)** - Kernel development support
 - **[Flash-Attention](https://github.com/Dao-AILab/flash-attention)** - Memory-efficient attention computation
 - **[NVIDIA CUTLASS](https://github.com/NVIDIA/cutlass)** - High-performance matrix operations library
 
