Commit 0ff5746

change back to warning
1 parent f5019c8

File tree

1 file changed (+2 -1 lines)

megatron/model/transformer.py

Lines changed: 2 additions & 1 deletion
@@ -604,7 +604,8 @@ def __init__(self, init_method,
                 raise ImportError('einops is not installed, please install with pip install einops')
 
             if self.checkpoint_core_attention:
-                raise NotImplementedError("Using selective recomputation with flash-attn: this is not implemented.")
+                print_rank_0(" Warning, using selective recomputation with flash-attn: this is already handled in the "
+                             "flash-attn library and has no effect.")
             self.core_attention_flash = FlashSelfAttention(
                 causal=True, attention_dropout=args.attention_dropout
             )
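
For context on why the error was downgraded to a warning: selective recomputation (activation checkpointing of the core attention) is redundant with flash-attn, because the flash-attn backward pass never stores the full attention matrix and instead recomputes attention blocks from q, k, v and the saved softmax statistics. Wrapping FlashSelfAttention in an outer checkpoint would therefore only repeat work that the kernel already does. Below is a minimal PyTorch sketch of the idea, not the Megatron source; core_attention and the tensor shapes are illustrative stand-ins.

# Minimal sketch (not Megatron code): what "selective recomputation" does.
# torch.utils.checkpoint drops the intermediates of core_attention during the
# forward pass and recomputes them when backward() runs. flash-attn's kernel
# already behaves this way internally, which is why an outer checkpoint
# around it has no effect.
import torch
from torch.utils.checkpoint import checkpoint

def core_attention(q, k, v):
    # Toy stand-in for FlashSelfAttention; the real kernel never
    # materializes the full scores matrix.
    scores = torch.softmax(q @ k.transpose(-2, -1) / q.size(-1) ** 0.5, dim=-1)
    return scores @ v

q, k, v = (torch.randn(2, 8, 64, requires_grad=True) for _ in range(3))

# Selective recomputation: intermediates of core_attention are not kept
# in the forward pass; they are recomputed during backward.
out = checkpoint(core_attention, q, k, v, use_reentrant=False)
out.sum().backward()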
