imatrix Quantization of mistralai/Devstral-2-123B-Instruct-2512
NOTE: ik_llama.cpp can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc., if you want to try it out before downloading my quants.
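For example, here is a minimal sketch of serving an already-downloaded third-party GGUF with ik_llama.cpp's llama-server (the model path is a placeholder; adjust context size and -ngl for your hardware):

# minimal sketch: try ik_llama.cpp with an existing GGUF before grabbing these quants
# (the model path below is a placeholder, not a file from this repo)
./build/bin/llama-server \
    --model /path/to/your-existing-model.gguf \
    --ctx-size 8192 \
    -ngl 99 \
    --host 127.0.0.1 \
    --port 8080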
Some of ik's new quants are supported by the Nexesenex/croco.cpp fork of KoboldCPP, which has Windows builds for CUDA 12.9. Also check for Windows builds by Thireus here, which have been targeting CUDA 12.8.
These quants provide best-in-class perplexity for the given memory footprint.
Big Thanks
Shout out to Wendell and the Level1Techs crew, the community Forums, and the YouTube Channel! BIG thanks for providing BIG hardware expertise and access to run these experiments and make these great quants available to the community!!!
Also thanks to all the folks in the quanting and inferencing community on BeaverAI Club Discord and on r/LocalLLaMA for tips and tricks helping each other run, test, and benchmark all the fun new models! Thanks to huggingface for hosting all these big quants!
Finally, I really appreciate the support from aifoundry.org, so check out their open source RISC-V based solutions!
Quant Collection
Perplexity computed against wiki.test.raw.
- Q8_0 123.723 GiB (8.500 BPW), baseline
  - Final estimate: PPL = 3.7919 +/- 0.01980
- IQ4_KSS 68.536 GiB (4.709 BPW)
  - Final estimate: PPL over 594 chunks for n_ctx=512 = 3.8832 +/- 0.02076
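The "Final estimate" lines above are the standard output of llama-perplexity. A minimal sketch of reproducing such a measurement with ik_llama.cpp (paths are placeholders, and the exact flags used for the numbers above are not recorded in this card):

# sketch: measure perplexity against wiki.test.raw
# n_ctx=512 matches the chunk size reported above; paths are placeholders
./build/bin/llama-perplexity \
    --model /path/to/Devstral-2-123B-Instruct-2512-IQ4_KSS.gguf \
    -f wiki.test.raw \
    --ctx-size 512 \
    -ngl 99 \
    --threads 16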
Secret Recipe
#!/usr/bin/env bash
custom="
## Attention [0-87]
## Keep qkv the same to allow --merge-qkv
blk\..*\.attn_q.*\.weight=iq6_k
blk\..*\.attn_k.*\.weight=iq6_k
blk\..*\.attn_v.*\.weight=iq6_k
blk\..*\.attn_output.*\.weight=iq6_k
## Dense Layers [0-87]
blk\..*\.ffn_down\.weight=iq4_ks
blk\..*\.ffn_(gate|up)\.weight=iq4_kss
## Non-Repeating layers
token_embd\.weight=iq4_k
output\.weight=iq6_k
"
custom=$(
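# strip the comment lines from $custom and join the remaining
# rules into a single comma-separated string for --custom-q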
echo "$custom" | grep -v '^#' | \
sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
--custom-q "$custom" \
--imatrix /mnt/data/models/ubergarm/Devstral-2-123B-Instruct-2512-GGUF/imatrix-Devstral-2-123B-Instruct-2512-Q8_0.dat \
/mnt/data/models/ubergarm/Devstral-2-123B-Instruct-2512-GGUF/Devstral-2-123B-Instruct-2512-BF16-00001-of-00006.gguf \
/mnt/data/models/ubergarm/Devstral-2-123B-Instruct-2512-GGUF/Devstral-2-123B-Instruct-2512-IQ4_KSS.gguf \
IQ4_KSS \
128
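The imatrix referenced above was computed separately. A hedged sketch of that step with ik_llama.cpp's llama-imatrix (the source Q8_0 GGUF filename and the calibration corpus path are assumptions, not taken from this card):

# sketch: generate the importance matrix consumed by the recipe above
# (source GGUF name and calibration text path are assumptions)
./build/bin/llama-imatrix \
    --model /mnt/data/models/ubergarm/Devstral-2-123B-Instruct-2512-GGUF/Devstral-2-123B-Instruct-2512-Q8_0.gguf \
    -f calibration_data.txt \
    -o imatrix-Devstral-2-123B-Instruct-2512-Q8_0.dat \
    --ctx-size 512 \
    --threads 32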
Quick Start
This is a DENSE model, not an MoE, so you will want as many of the 88 layers as possible running in VRAM.
If you can fit the entire model across 2x GPUs, try adding -sm graph to use the new ik_llama.cpp tensor parallel implementation (see the note after the first example below).
# Example running full offload on 2x GPUs on ik_llama.cpp
./build/bin/llama-server \
--model "$model" \
--alias ubergarm/Devstral-2-123B-Instruct-2512-GGUF \
-ctk q8_0 -ctv q8_0 \
--ctx-size 32768 \
--merge-qkv \
-ngl 99 \
--threads 1 \
--host 127.0.0.1 \
--port 8080 \
--parallel 1 \
--jinja
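As noted above, if both GPUs can hold the entire model you can try ik_llama.cpp's tensor parallel path by appending the split-mode flag to the command above (a suggested variation from this card, not a separately benchmarked configuration):

# optional: append to the full-offload command above for tensor parallel across the 2 GPUs
    -sm graph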
# Example running Hybrid CPU+GPU(s) on ik_llama.cpp
# adjust the -ngl to fit as many of the 88 layers as possible without OOMing for your desired context
# adjust the threads to match your number of physical cores
./build/bin/llama-server \
--model "$model" \
--alias ubergarm/Devstral-2-123B-Instruct-2512-GGUF \
-ctk q8_0 -ctv q8_0 \
--ctx-size 32768 \
--merge-qkv \
-ngl 20 \
--threads 16 \
--host 127.0.0.1 \
--port 8080 \
--parallel 1 \
--no-mmap \
--jinja
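To pick a starting -ngl for hybrid inference, a rough rule of thumb is that each of the 88 layers of the IQ4_KSS quant costs roughly 68.5 GiB / 88, or about 0.78 GiB, plus headroom for KV cache and compute buffers. A back-of-the-envelope sketch (the VRAM and headroom numbers are illustrative assumptions):

# rough sketch: estimate a starting -ngl from available VRAM for the IQ4_KSS quant
model_gib=68.5      # IQ4_KSS size from the table above
layers=88           # repeating layers in this model
vram_gib=24         # example: a single 24 GiB GPU; set to your total VRAM
headroom_gib=4      # assumed allowance for KV cache and compute buffers
ngl=$(awk -v v="$vram_gib" -v h="$headroom_gib" -v m="$model_gib" -v l="$layers" \
    'BEGIN { printf "%d\n", (v - h) / (m / l) }')
echo "try -ngl $ngl"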