Conversation

@underfituu (Contributor)

What this PR does / why we need it?

A bug in the code caused an empty bubble in the device stream: when the npu_paged_cache_load operator was called, it forcibly transferred seq_len2 to the device, which triggered a synchronization and interrupted the CPU's operator-launch stream.
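For illustration, a minimal sketch of the failure mode and of the fix pattern, assuming CUDA/NPU-style stream semantics; the device selection and tensor values are illustrative, not taken from the actual patch:

import torch

# Assumption: an accelerator backend is available. On Ascend, torch_npu
# registers the "npu" device; this sketch falls back to CUDA otherwise.
device = torch.device("npu" if hasattr(torch, "npu") else "cuda")

# Before the fix: the metadata tensor stays on the CPU and the kernel
# forces it onto the device itself. That implicit copy synchronizes the
# host with the device stream, stalling kernel launches (the "bubble").
seq_len2 = torch.tensor([128, 256], dtype=torch.int32)

# After the fix (the pattern this PR adopts): transfer the tensor early
# and non-blocking, so the host keeps launching kernels while the copy
# is in flight and no implicit synchronization is triggered.
seq_len2_device = seq_len2.to(device, non_blocking=True)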

Does this PR introduce any user-facing change?

How was this patch tested?

Signed-off-by: underfituu <hzhucong@163.com>
@gemini-code-assist bot left a comment

Code Review

This pull request aims to improve performance by avoiding unnecessary device synchronization in prefix cache handling for Ascend NPUs. The changes primarily involve adjusting the device placement of tensors before they are passed to NPU kernels.

The modifications in vllm_ascend/attention/mla_v1.py correctly move a tensor to the NPU earlier to prevent a blocking transfer. The changes in vllm_ascend/torchair/torchair_mla.py remove an explicit tensor transfer, relying on the kernel to handle it, which also addresses the synchronization issue.

My review identifies a potential missed optimization in vllm_ascend/torchair/torchair_mla.py where a tensor is still being created on the CPU and passed to an NPU kernel, which could cause similar performance issues. I've provided a suggestion to align it with the approach taken in mla_v1.py.
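(Aside, not from the review: implicit synchronizations like these can also be surfaced mechanically. A sketch using PyTorch's CUDA-side debug knob; whether torch_npu exposes an equivalent is an assumption to verify.)

import torch

# Flag every operation that forces a host/device synchronization:
# "warn" prints a warning, "error" raises instead. This is a CUDA-only
# API in stock PyTorch (torch.cuda.set_sync_debug_mode).
torch.cuda.set_sync_debug_mode("warn")

device = torch.device("cuda")
seq_lens = torch.tensor([128, 256], dtype=torch.int32).to(device, non_blocking=True)

# .item() forces a device-to-host sync, so warn mode flags this line,
# helping locate unintended synchronization points during profiling.
print(seq_lens.sum().item())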

Comment on lines 788 to 789
seq_len_chunk = prefill_metadata.chunked_context.chunk_seq_lens[i]
seq_len = torch.stack([seq_len_base, seq_len_chunk])

Severity: high

The tensor seq_len is constructed on the CPU and subsequently passed to the torch_npu.atb.npu_ring_mla kernel at line 824. This might lead to implicit synchronization and performance degradation, which is similar to the issue this PR aims to fix for npu_paged_cache_load.

In vllm_ascend/attention/mla_v1.py, a similar pattern was addressed by ensuring the tensor is on the NPU device before being used in the kernel. To maintain consistency and prevent potential performance bottlenecks, seq_len should be moved to the NPU.

I suggest moving the tensor to the correct device during its creation.

Suggested change:
- seq_len_chunk = prefill_metadata.chunked_context.chunk_seq_lens[i]
- seq_len = torch.stack([seq_len_base, seq_len_chunk])
+ seq_len_chunk = prefill_metadata.chunked_context.chunk_seq_lens[i]
+ seq_len = torch.stack([seq_len_base, seq_len_chunk]).to(q_nope.device, non_blocking=True)
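One caveat on the suggestion, drawn from CUDA semantics and not verified for Ascend: non_blocking=True only truly overlaps a host-to-device copy when the source tensor is in pinned host memory; from pageable memory the copy is effectively synchronous. A minimal self-contained sketch:

import torch

device = torch.device("cuda")  # "npu" on Ascend via torch_npu

# Pageable source tensor: non_blocking=True is accepted, but the copy
# may still block the host until the data has been staged.
seq_lens = torch.tensor([512, 1024], dtype=torch.int32)

# Pinned source tensor: the host-to-device copy can genuinely run
# asynchronously, so kernel launches continue uninterrupted.
seq_lens_pinned = seq_lens.pin_memory()
seq_lens_dev = seq_lens_pinned.to(device, non_blocking=True)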

@github-actions bot commented Nov 6, 2025

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Fill out the PR description and write a clear commit message to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to the Contributing and Testing guides.

Signed-off-by: underfituu <hzhucong@163.com>
Signed-off-by: underfituu <hzhucong@163.com>
@underfituu force-pushed the dev_bugfix_prefix_cache_pref branch 2 times, most recently from b5b624e to 8269e8e on November 6, 2025 at 08:08
@yiz-liu added the ready and ready-for-test labels on Nov 6, 2025
@underfituu force-pushed the dev_bugfix_prefix_cache_pref branch from 8269e8e to 1a01fc0 on November 6, 2025 at 08:59
Signed-off-by: underfituu <hzhucong@163.com>
@underfituu force-pushed the dev_bugfix_prefix_cache_pref branch from 1a01fc0 to 57de640 on November 6, 2025 at 13:48
@underfituu (Contributor, Author)

Cherry-pick of #4022 (main).

@wangxiyuan merged commit 7ea17fb into vllm-project:v0.11.0-dev on Nov 10, 2025
16 checks passed

Labels

module:tests, ready, ready-for-test


3 participants