Commit df69839

Updates flash attention integration
Points the integration at the renamed sparse attention package so the setup guidance stays accurate.
1 parent 3dd3392 commit df69839

File tree

1 file changed: +2 -2 lines changed

examples/modeling/modeling_doge.py

Lines changed: 2 additions & 2 deletions
@@ -45,9 +45,9 @@
 from .configuration_doge import DogeConfig

 try:
-    from flash_dmattn.integrations.flash_dynamic_mask_attention import flash_dynamic_mask_attention_forward
+    from flash_sparse_attn.integrations.flash_sparse_attention import flash_dynamic_mask_attention_forward
 except ImportError:
-    print("Please install flash_dmattn to use this model: pip install flash-dmattn")
+    print("Please install flash_sparse_attn to use this model: pip install flash-sparse-attn")

 if is_torch_flex_attn_available():
     from torch.nn.attention.flex_attention import BlockMask
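
A quick way to confirm that the renamed package resolves the way the updated import expects is sketched below. The package, module, and function names are taken from the diff above; the snippet itself is only an illustrative check and is not part of the commit.

# Sketch: verify the renamed package exposes the forward function the model imports.
# Names come from the diff above; this check is illustrative, not part of the commit.
import importlib.util

if importlib.util.find_spec("flash_sparse_attn") is None:
    print("Please install flash_sparse_attn to use this model: pip install flash-sparse-attn")
else:
    from flash_sparse_attn.integrations.flash_sparse_attention import (
        flash_dynamic_mask_attention_forward,
    )
    print("Found:", flash_dynamic_mask_attention_forward)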
