
Conversation

@ngxson (Collaborator) commented Dec 2, 2025

Adding this message to avoid users flooding us with issues.
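
For illustration, here is a minimal sketch of the kind of guard this implies, assuming the converter reads the checkpoint's `config.json`; the function name, the supported-method set, and the error wording are placeholders, not the actual change in this PR:

```python
# Hypothetical sketch, not the actual llama.cpp diff: fail early with a clear
# message when the checkpoint declares a quantization scheme the converter
# cannot handle, so users get an explanation instead of a cryptic tensor error.
import json
from pathlib import Path

# Illustrative placeholder set; the real converter supports a different list.
SUPPORTED_QUANT_METHODS = {"gptq", "awq"}


def check_quantization_config(model_dir: str) -> None:
    config = json.loads((Path(model_dir) / "config.json").read_text(encoding="utf-8"))

    quant_cfg = config.get("quantization_config")
    if quant_cfg is None:
        return  # unquantized checkpoint, nothing to check

    method = quant_cfg.get("quant_method", "unknown")
    if method not in SUPPORTED_QUANT_METHODS:
        raise NotImplementedError(
            f"quant_method '{method}' is not supported by this converter yet; "
            "please use the pre-quantized GGUF files once they are available "
            "instead of opening an issue."
        )
```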

@ngxson ngxson marked this pull request as ready for review December 2, 2025 10:24
@ngxson ngxson requested a review from CISC as a code owner December 2, 2025 10:24
@CISC (Collaborator) commented Dec 2, 2025

Just out of curiosity, what quantization are they using?

@ngxson (Collaborator, Author) commented Dec 2, 2025

It seems like a custom version of FP8, which is why we don't support it yet.
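
For readers who want to check for themselves, a hypothetical way to see which quantization scheme a checkpoint declares (the repo id below is a placeholder, not the actual model discussed here):

```python
# Hypothetical snippet to inspect what quantization scheme a checkpoint declares.
# "org/model-fp8" is a placeholder repo id, not the actual model in question.
import json

from huggingface_hub import hf_hub_download

config_path = hf_hub_download("org/model-fp8", "config.json")
with open(config_path, encoding="utf-8") as f:
    quant_cfg = json.load(f).get("quantization_config", {})

# Stock FP8 checkpoints produced by transformers typically report
# quant_method == "fp8"; a custom variant may carry extra, non-standard fields.
print(quant_cfg.get("quant_method"), quant_cfg)
```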

@ngxson (Collaborator, Author) commented Dec 2, 2025

More info here: huggingface/transformers#42518 (that PR was closed, but the released weights will still use this quantization).

@ngxson ngxson merged commit 2c453c6 into ggml-org:master Dec 2, 2025
6 of 7 checks passed
@github-actions github-actions bot added the python python script changes label Dec 2, 2025
@ngxson (Collaborator, Author) commented Dec 2, 2025

Hmm, this doesn't seem to work anymore with the latest version of their quant. But I think it's not important, as pre-quantized GGUFs will be provided.

We will fix this error message when the model is released.
