[bugfix] Fixed the bug in retrieving the quantization method for mlp.experts (e.g., DeepSeek_v3.2_exp w8a8) #4035
base: v0.11.0-dev
Conversation
…experts (e.g., DeepSeek_v3.2_exp w8a8)
Signed-off-by: yangqinghao-cmss <yangqinghao_yewu@cmss.chinamobile.com>
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Code Review
This pull request addresses a `KeyError` that occurs when retrieving quantization methods for Mixture-of-Experts (MoE) layers whose quantization configurations are defined on a per-expert basis. The changes handle this correctly by iterating over expert-specific keys. My review includes two high-severity comments to improve the correctness and robustness of the implementation: I've suggested using `startswith` for more precise prefix matching, and adding a check for the case where no expert layers are found, to prevent potential runtime errors.
```python
experts_quant_description = [
    self.quant_description[layer]
    for layer in self.quant_description if prefix in layer
]
is_skipped = any(
    quantization == "FLOAT"
    for quantization in experts_quant_description
)
```
This logic can be improved for correctness and robustness:

- Correctness: Using `prefix in layer` is less precise than `layer.startswith(prefix + '.')` for hierarchical module names. It could lead to incorrect matches if another module name contains the expert prefix as a substring.
- Robustness: If no expert layers are found for the given `prefix`, `experts_quant_description` will be an empty list. `any()` on an empty list returns `False`, causing `is_skipped` to be `False`. This will likely lead to a `RuntimeError` later in `get_linear_quant_type`. It's safer to treat this case as skipped (unquantized).
Here is a suggested improvement that addresses these points.
```diff
-experts_quant_description = [
-    self.quant_description[layer]
-    for layer in self.quant_description if prefix in layer
-]
-is_skipped = any(
-    quantization == "FLOAT"
-    for quantization in experts_quant_description
-)
+experts_quant_description = [
+    self.quant_description[layer]
+    for layer in self.quant_description if layer.startswith(prefix + '.')
+]
+if not experts_quant_description:
+    is_skipped = True
+else:
+    is_skipped = any(
+        quantization == "FLOAT"
+        for quantization in experts_quant_description)
```
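A minimal, runnable sketch of both failure modes, using hypothetical layer names (illustrative only, not taken from a real quantization file):

```python
# Hypothetical quantization description; key names are illustrative only.
quant_description = {
    "model.layers.1.mlp.experts.0.gate_proj.weight": "W8A8",
    "model.layers.1.mlp.experts_probe.weight": "FLOAT",  # name extends the prefix
}
prefix = "model.layers.1.mlp.experts"

# Substring matching also picks up the unrelated "experts_probe" module:
loose = [k for k in quant_description if prefix in k]
print(len(loose))  # 2 -- both keys contain the prefix as a substring

# Prefix matching with a trailing dot only matches true submodules:
strict = [k for k in quant_description if k.startswith(prefix + '.')]
print(len(strict))  # 1 -- only the ".0.gate_proj.weight" key

# And any() over an empty sequence is False, so a prefix with no matching
# expert keys would silently be treated as quantized rather than skipped:
print(any(q == "FLOAT" for q in []))  # False
```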
```python
experts_quant_description = set(
    quant_description[layer]
    for layer in quant_description if prefix in layer
)
```
Using `prefix in layer` is less precise than `layer.startswith(prefix + '.')` for hierarchical module names. It could lead to incorrect matches if another module name contains the expert prefix as a substring. Using `startswith` is safer and more correct. Additionally, using a set comprehension `{...}` is more idiomatic than `set(...)`.
```diff
-experts_quant_description = set(
-    quant_description[layer]
-    for layer in quant_description if prefix in layer
-)
+experts_quant_description = {
+    quant_description[layer]
+    for layer in quant_description if layer.startswith(prefix + '.')
+}
```
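One practical effect of collecting the labels into a set (my reading of the snippet, not stated in the PR) is deduplication: a uniformly quantized expert group collapses to a single entry. A small sketch with hypothetical key names:

```python
quant_description = {
    "model.layers.1.mlp.experts.0.gate_proj.weight": "W8A8",
    "model.layers.1.mlp.experts.1.gate_proj.weight": "W8A8",
    "model.layers.1.mlp.experts.2.gate_proj.weight": "W8A8",
}
prefix = "model.layers.1.mlp.experts"

labels = {
    quant_description[layer]
    for layer in quant_description if layer.startswith(prefix + '.')
}
print(labels)  # {'W8A8'} -- one entry for a homogeneous expert group
```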
What this PR does / why we need it?
When retrieving the quantization method for MoE layers (e.g., when the quantization file of DeepSeek v3.2 exp does not match the model's naming convention in eager mode), a KeyError is raised: "model.layers.3.mlp.experts.weight not in self.quant_description". The quantization file defines its entries on a per-expert basis instead, for example:
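A minimal sketch of the mismatch, assuming per-expert key names of the form `...experts.<i>.<proj>.weight` (illustrative, not copied from an actual DeepSeek v3.2 quantization file):

```python
# The layer looks up a single aggregate key...
requested = "model.layers.3.mlp.experts.weight"

# ...but the quantization description only contains per-expert entries:
quant_description = {
    "model.layers.3.mlp.experts.0.gate_proj.weight": "W8A8",
    "model.layers.3.mlp.experts.1.gate_proj.weight": "W8A8",
}

quant_description[requested]  # KeyError: the aggregate key does not exist
```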
Does this PR introduce any user-facing change?
How was this patch tested?
vllm-ascend: 0.11.0rc0