
Commit c318686

Update attention_backends.md to format kernels (#12757)
1 parent 6028613 commit c318686

File tree

1 file changed: +2 −2 lines changed


docs/source/en/optimization/attention_backends.md

Lines changed: 2 additions & 2 deletions
@@ -32,7 +32,7 @@ This guide will show you how to set and use the different attention backends.
 
 The [`~ModelMixin.set_attention_backend`] method iterates through all the modules in the model and sets the appropriate attention backend to use. The attention backend setting persists until [`~ModelMixin.reset_attention_backend`] is called.
 
-The example below demonstrates how to enable the `_flash_3_hub` implementation for FlashAttention-3 from the [kernel](https://github.com/huggingface/kernels) library, which allows you to instantly use optimized compute kernels from the Hub without requiring any setup.
+The example below demonstrates how to enable the `_flash_3_hub` implementation for FlashAttention-3 from the [`kernels`](https://github.com/huggingface/kernels) library, which allows you to instantly use optimized compute kernels from the Hub without requiring any setup.
 
 > [!NOTE]
 > FlashAttention-3 is not supported for non-Hopper architectures, in which case, use FlashAttention with `set_attention_backend("flash")`.
@@ -156,4 +156,4 @@ Refer to the table below for a complete list of available attention backends and
 | `_sage_qk_int8_pv_fp16_triton` | [SageAttention](https://github.com/thu-ml/SageAttention) | INT8 QK + FP16 PV (Triton) |
 | `xformers` | [xFormers](https://github.com/facebookresearch/xformers) | Memory-efficient attention |
 
-</details>
+</details>
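
For context, the documentation line touched by this commit points to an example that is not included in the diff. Below is a minimal sketch of what such usage looks like with the diffusers API described in the changed paragraph; the pipeline checkpoint and prompt are illustrative assumptions, not taken from this commit.

```py
# Minimal sketch (assumed usage, not part of this diff): enable the FlashAttention-3
# kernel fetched from the Hub via the `kernels` library.
import torch
from diffusers import DiffusionPipeline

# Illustrative checkpoint; any model whose transformer supports set_attention_backend works.
pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Requires the `kernels` package and a Hopper GPU; on other architectures use "flash".
pipeline.transformer.set_attention_backend("_flash_3_hub")

image = pipeline("cinematic photo of a lighthouse at dusk").images[0]

# The setting persists until the backend is reset.
pipeline.transformer.reset_attention_backend()
```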
