
Conversation

@dragondream-chen
Contributor

What this PR does / why we need it?

Does this PR introduce any user-facing change?

How was this patch tested?

Signed-off-by: chenmenglong <chenmenglong1@huawei.com>

github-actions bot commented Nov 7, 2025

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling out the PR description to help reviewers and future developers understand the change.

If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist bot left a comment


Code Review

This pull request introduces support for shared expert data parallelism for DeepSeek models on Ascend hardware. The changes span model configuration, custom operators, and the model implementation, focusing primarily on sequence parallelism adjustments. I've identified a critical issue in the speculative decoding proposer where an input tensor is not correctly handled for sequence parallelism, which will lead to incorrect model execution. There is also a high-severity issue with an incorrect fake implementation for a custom operator, which could cause problems with torch.compile.

Comment on lines 487 to 489
hidden_states = self.model(input_ids=input_ids,
                           positions=positions,
                           hidden_states=hidden_states)

critical

The hidden_states tensor, which is an input to the model, is not being reduced with maybe_pad_and_reduce before the model call. This is inconsistent with how positions is handled just a few lines above, and also inconsistent with the dummy_run implementation where previous_hidden_states is correctly reduced.

When sequence parallelism is enabled, all per-token input tensors to the model should be sharded. Failing to reduce hidden_states will lead to incorrect shapes and data being fed into the model, causing incorrect outputs. This is a critical correctness bug.

hidden_states = torch.ops.vllm.maybe_pad_and_reduce(hidden_states)
hidden_states = self.model(input_ids=input_ids,
                           positions=positions,
                           hidden_states=hidden_states)
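
For background on what a reduce step like this typically does under sequence parallelism: the per-token tensor is padded so the token dimension divides evenly across the tensor-parallel ranks, then reduce-scattered so each rank keeps only its shard. The sketch below is illustrative only; it does not reproduce vllm-ascend's actual maybe_pad_and_reduce, and the function name, padding policy, and process-group handling are assumptions.

# Illustrative sketch only -- not vllm-ascend's maybe_pad_and_reduce.
# Assumes torch.distributed is initialized and `group` is the
# tensor-parallel process group.
import torch
import torch.distributed as dist

def pad_and_reduce_scatter(x: torch.Tensor, group=None) -> torch.Tensor:
    """Pad the token dim to a multiple of the world size, then reduce-scatter
    so each rank holds one contiguous shard of the summed tensor."""
    world_size = dist.get_world_size(group=group)
    pad = (-x.shape[0]) % world_size
    if pad:
        # Append zero rows so the token dimension splits evenly across ranks.
        x = torch.nn.functional.pad(x, (0, 0, 0, pad))
    out = torch.empty(x.shape[0] // world_size, *x.shape[1:],
                      dtype=x.dtype, device=x.device)
    dist.reduce_scatter_tensor(out, x.contiguous(), op=dist.ReduceOp.SUM,
                               group=group)
    return out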


direct_register_custom_op(op_name="maybe_chunk_residual",
                          op_func=_maybe_chunk_residual_impl,
                          fake_impl=lambda x, residual: x,

high

The fake_impl for the maybe_chunk_residual custom operator is incorrect. It returns x, but the actual function returns a modified residual. While x might have the correct shape for shape inference, this is semantically wrong and breaks the convention used by other fake implementations in this file, which return placeholder tensors (e.g., from torch.empty). This can lead to subtle bugs or incorrect behavior when using torch.compile's meta backend, as it misrepresents the data flow.

A more correct approach would be to return a placeholder tensor that has the correct shape and dtype, similar to other fake implementations in this file.

Suggested change
fake_impl=lambda x, residual: x,
fake_impl=lambda x, residual: torch.empty_like(x, dtype=residual.dtype),
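
As general background, independent of this file's direct_register_custom_op helper: under torch.compile the fake implementation is traced with meta/fake tensors, so only the output's shape, dtype, and device are consumed, but they should still describe what the real op returns. Below is a minimal, hypothetical sketch using PyTorch's torch.library.custom_op API (PyTorch 2.4+); the op name and slicing logic are assumptions and do not mirror _maybe_chunk_residual_impl.

# Hypothetical example of pairing a real op with a metadata-correct fake impl.
import torch

@torch.library.custom_op("demo::chunk_residual", mutates_args=())
def chunk_residual(x: torch.Tensor, residual: torch.Tensor) -> torch.Tensor:
    # "Real" implementation: keep the rows of residual that line up with x.
    return residual[: x.shape[0]].contiguous()

@chunk_residual.register_fake
def _(x: torch.Tensor, residual: torch.Tensor) -> torch.Tensor:
    # Only metadata is used during tracing, but it must match the real op:
    # same shape, dtype, and device as the slice returned above.
    return torch.empty(x.shape[0], *residual.shape[1:],
                       dtype=residual.dtype, device=residual.device)

The review's point is the same: the fake implementation should report the metadata of what the op actually returns, rather than echoing an input for convenience.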

Signed-off-by: chenmenglong <chenmenglong1@huawei.com>