
Conversation

@1092626063 (Contributor) commented Nov 8, 2025

What this PR does / why we need it?

  • Support piecewise + mtp; compared with pure piecewise, performance improves by about 30%.
  • fullgraph + mtp: accuracy issue, the output is garbled.
  • allgatherEP + mtp: combining the two raises a tensor shape error.

Requires the following modifications to vLLM (see the sketch after this list):

  1. Use vLLM at tag v0.11.0.
  2. vllm/vllm/config/speculative.py: add "glm4_moe_mtp" to the SpeculativeMethod list.
  3. vllm/vllm/model_executor/models/glm4_moe_mtp.py:
    SharedHead.__init__(): pass prefix=maybe_prefix(prefix, "head") to ParallelLMHead
    Glm4MoeMultiTokenPredictorLayer.__init__(): pass prefix=maybe_prefix(prefix, "shared_head") to SharedHead
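
A minimal sketch of the two changes, assuming the v0.11.0 layout of both files; the class signatures, imports, and the other SpeculativeMethod entries are abbreviated and illustrative only:

    # vllm/config/speculative.py (sketch; the other Literal entries are elided)
    SpeculativeMethod = Literal[..., "deepseek_mtp", "glm4_moe_mtp"]  # new entry added

    # vllm/model_executor/models/glm4_moe_mtp.py (sketch; unrelated arguments elided)
    class SharedHead(nn.Module):
        def __init__(self, config, quant_config=None, prefix: str = ""):
            super().__init__()
            self.head = ParallelLMHead(
                config.vocab_size,
                config.hidden_size,
                quant_config=quant_config,
                prefix=maybe_prefix(prefix, "head"))  # prefix argument added

    class Glm4MoeMultiTokenPredictorLayer(nn.Module):
        def __init__(self, config, prefix: str, quant_config=None):
            super().__init__()
            ...
            self.shared_head = SharedHead(
                config=config,
                quant_config=quant_config,
                prefix=maybe_prefix(prefix, "shared_head"))  # prefix argument added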

Does this PR introduce any user-facing change?

How was this patch tested?

vllm serve /home/glm4.5_w8a8_with_float_mtp/ \
  --data-parallel-size 2 \
  --tensor-parallel-size 8 \
  --seed 1024 \
  --served-model-name dsv3 \
  --max-model-len 35000 \
  --max-num-batched-tokens 16384 \
  --max-num-seqs 32 \
  --no-enable-prefix-caching \
  --quantization ascend \
  --trust-remote-code \
  --enable-expert-parallel \
  --gpu-memory-utilization 0.9 \
  --speculative-config '{"num_speculative_tokens": 1, "model": "/home/glm4.5_w8a8_with_float_mtp", "method": "glm4_moe_mtp"}' \
  --additional-config '{"ascend_scheduler_config": {"enabled": true, "enable_chunked_prefill": true}}' \
  --compilation-config '{"cudagraph_capture_sizes": [1,4,8,16,32,48,64]}'
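
For reference, a hypothetical smoke test against the server started above, assuming vLLM's OpenAI-compatible completions endpoint on the default port 8000 (the model name dsv3 matches --served-model-name):

    import requests

    # Send a short completion request to the locally served model.
    resp = requests.post(
        "http://localhost:8000/v1/completions",
        json={"model": "dsv3", "prompt": "Hello, my name is", "max_tokens": 32},
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["text"])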

@github-actions bot commented Nov 8, 2025

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling out the PR description to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist bot left a comment


Code Review

This pull request adds support for glm4_moe_mtp in speculative decoding. The changes correctly integrate the new model type across the configuration, proposer selection, and model execution logic. The fix to separate glm4_moe_mtp from the deepseek_mtp method is appropriate. My main feedback concerns code duplication in the configuration file, where adding support for the new model type has exacerbated a pre-existing maintainability issue. A refactoring to consolidate similar logic would be beneficial.

Comment on lines +138 to +146
elif (self.draft_model_config.hf_config.model_type ==
        "glm4_moe_mtp"):
    self.method = "glm4_moe_mtp"
    if self.num_speculative_tokens > 1:
        logger.warning(
            "All GLM4 MTP models only have "
            "one layer. Might need some code changes "
            "to support multiple layers."
        )

Severity: high

This elif block for glm4_moe_mtp is nearly identical to the blocks for ernie_mtp, qwen3_next_mtp, and longcat_flash_mtp. This code duplication makes the code harder to maintain and can lead to inconsistencies if one block is updated but others are not.

A refactoring to group these similar MTP model types would improve maintainability. For example, you could use a dictionary to handle these model types in a single, generalized block:

# Example of a possible refactoring
MTP_MODEL_TYPES_WITH_OWN_METHOD = {
    "ernie_mtp": "Ernie",
    "glm4_moe_mtp": "GLM4",
    "qwen3_next_mtp": "Qwen3Next",
    "longcat_flash_mtp": "LongCat",
}
model_type = self.draft_model_config.hf_config.model_type

# ...
elif model_type in MTP_MODEL_TYPES_WITH_OWN_METHOD:
    self.method = model_type
    if self.num_speculative_tokens > 1:
        model_name = MTP_MODEL_TYPES_WITH_OWN_METHOD[model_type]
        logger.warning(
            f"All {model_name} MTP models only have "
            "one layer. Might need some code changes "
            "to support multiple layers."
        )
# ...

While a full refactoring is outside the scope of the current changes, it's a valuable improvement to consider for code health.

@1092626063 changed the title from 【WIP】glm4.5 support mtp with piecewise graph to 【WIP】glm4.5 support mtp on Nov 8, 2025
