Hey, I'm not sure whether llama-cpp-rs is supposed to run any GGUF model, but I tried it with the latest Ministral model from Unsloth (huggingface.co/unsloth/Ministral-3-3B-Instruct-2512-GGUF) and it failed.
I ran:

```shell
cargo run --release --bin simple -- --prompt "The way to kill a linux process is" local ~/Downloads/Ministral-3-3B-Instruct-2512-UD-Q4_K_XL.gguf
```

and got the following error:

```
Error: unable to load model

Caused by:
    null result from llama cpp
```
Any idea what's going wrong?