[Feat] shared expert dp for deepseek and deepseek_mtp for v0.11.0dev #4064
Conversation
Signed-off-by: chenmenglong <chenmenglong1@huawei.com>
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.
Code Review
This pull request introduces support for shared expert data parallelism for DeepSeek models on Ascend hardware. The changes span model configuration, custom operators, and the model implementation, focusing primarily on sequence parallelism adjustments. I've identified a critical issue in the speculative decoding proposer where an input tensor is not handled correctly for sequence parallelism, which will lead to incorrect model execution. There is also a high-severity issue with an incorrect fake implementation for a custom operator, which could cause problems with torch.compile.
hidden_states = self.model(input_ids=input_ids,
                           positions=positions,
                           hidden_states=hidden_states)
The hidden_states tensor, which is an input to the model, is not being reduced with maybe_pad_and_reduce before the model call. This is inconsistent with how positions is handled just a few lines above, and also inconsistent with the dummy_run implementation where previous_hidden_states is correctly reduced.
When sequence parallelism is enabled, all per-token input tensors to the model should be sharded. Failing to reduce hidden_states will lead to incorrect shapes and data being fed into the model, causing incorrect outputs. This is a critical correctness bug.
hidden_states = torch.ops.vllm.maybe_pad_and_reduce(hidden_states)
hidden_states = self.model(input_ids=input_ids,
                           positions=positions,
                           hidden_states=hidden_states)
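For intuition, here is a minimal, self-contained sketch of why every per-token input must be padded and sharded consistently under sequence parallelism. The helper name pad_and_shard_tokens, the padding rule, and the group size are illustrative assumptions; this is not the actual implementation behind torch.ops.vllm.maybe_pad_and_reduce, which additionally performs the collective communication across the parallel group.

# Illustrative sketch only (not the vLLM Ascend implementation): every
# per-token input must be padded to a multiple of the sequence-parallel
# world size and sliced to the local shard before entering the model.
import torch
import torch.nn.functional as F

def pad_and_shard_tokens(x: torch.Tensor, sp_size: int, sp_rank: int) -> torch.Tensor:
    # Hypothetical helper: pad the token dimension up to a multiple of
    # sp_size, then return this rank's contiguous slice. The real
    # maybe_pad_and_reduce op also handles the communication step.
    num_tokens = x.shape[0]
    padded = (num_tokens + sp_size - 1) // sp_size * sp_size
    if padded != num_tokens:
        x = F.pad(x, (0, 0, 0, padded - num_tokens))
    shard = padded // sp_size
    return x[sp_rank * shard:(sp_rank + 1) * shard]

hidden_states = torch.randn(10, 4)          # 10 tokens, hidden size 4
positions = torch.arange(10).unsqueeze(-1)  # per-token positions
# Both per-token inputs must be sharded the same way; a full-length
# hidden_states next to sharded positions produces mismatched shapes.
hs_local = pad_and_shard_tokens(hidden_states, sp_size=4, sp_rank=0)
pos_local = pad_and_shard_tokens(positions, sp_size=4, sp_rank=0)
print(hs_local.shape, pos_local.shape)      # torch.Size([3, 4]) torch.Size([3, 1])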
direct_register_custom_op(op_name="maybe_chunk_residual",
                          op_func=_maybe_chunk_residual_impl,
                          fake_impl=lambda x, residual: x,
The fake_impl for the maybe_chunk_residual custom operator is incorrect. It returns x, but the actual function returns a modified residual. While x might have the correct shape for shape inference, this is semantically wrong and breaks the convention used by other fake implementations in this file, which return placeholder tensors (e.g., from torch.empty). This can lead to subtle bugs or incorrect behavior when using torch.compile's meta backend, as it misrepresents the data flow.
A more correct approach would be to return a placeholder tensor that has the correct shape and dtype, similar to other fake implementations in this file.
Suggested change:
-    fake_impl=lambda x, residual: x,
+    fake_impl=lambda x, residual: torch.empty_like(x, dtype=residual.dtype),
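For background, here is a minimal sketch of how a custom op's fake implementation interacts with meta/fake tracing under torch.compile, using the public torch.library API rather than vLLM's direct_register_custom_op helper. The op name demo::chunk_residual and its body are hypothetical stand-ins for maybe_chunk_residual; the only point is that the fake impl must report the same output shape and dtype as the real op.

# Hypothetical example using the public torch.library API (PyTorch >= 2.4);
# the real code registers its ops through vLLM's direct_register_custom_op.
import torch

@torch.library.custom_op("demo::chunk_residual", mutates_args=())
def chunk_residual(x: torch.Tensor, residual: torch.Tensor) -> torch.Tensor:
    # Stand-in eager implementation: take the residual rows matching the
    # local chunk of x (assumes x and residual share the hidden dimension).
    return residual[: x.shape[0]].clone()

@chunk_residual.register_fake
def _(x: torch.Tensor, residual: torch.Tensor) -> torch.Tensor:
    # The fake impl runs during meta/fake tracing (e.g. under torch.compile):
    # it must return a placeholder with the real op's output shape and dtype,
    # here following residual's dtype, mirroring the suggestion above.
    return torch.empty_like(x, dtype=residual.dtype)

x = torch.randn(4, 8, dtype=torch.float16)
residual = torch.randn(16, 8, dtype=torch.bfloat16)
out = torch.ops.demo.chunk_residual(x, residual)
print(out.shape, out.dtype)   # torch.Size([4, 8]) torch.bfloat16

In this sketch, returning x directly from the fake impl would report float16 even though the real op produces bfloat16, which is exactly the kind of shape/dtype misreporting the suggested change avoids.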
What this PR does / why we need it?
Does this PR introduce any user-facing change?
How was this patch tested?