# Qwen3-VL-8B-Thinking GGUF

Supports image-to-text and text-to-text inference.

## Quantized model comparison

| Type | Bits | Quality | Description |
|------|------|---------|-------------|
| IQ1 | 1-bit | 🟥 Very Low | Minimal footprint; worse than Q2/IQ2 |
| Q2/IQ2 | 2-bit | 🟥 Low | Minimal footprint; only for tests |
| Q3/IQ3 | 3-bit | 🟧 Low–Med | "Medium" variant at 3-bit |
| Q4/IQ4 | 4-bit | 🟩 Med–High | "Medium" 4-bit; good quality/size balance |
| Q5 | 5-bit | 🟩🟩 High | Excellent general-purpose quant |
| Q6_K | 6-bit | 🟩🟩🟩 Very High | Almost FP16 quality, larger size |
| Q8 | 8-bit | 🟩🟩🟩🟩 Highest | Near-lossless baseline |
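As a rough rule of thumb, a quantized GGUF file weighs in at about parameter count × bits per weight ÷ 8 bytes (the real files differ somewhat: K-quants use fractional effective bits per weight, and some tensors such as embeddings are kept at higher precision). A minimal sketch of that estimate for this 8B model; the numbers are approximations, not the actual file sizes in this repo:

```python
# Rough GGUF size estimate: params * bits / 8 bytes. Ignores metadata
# overhead and mixed-precision tensors, so treat results as ballpark only.
PARAMS = 8e9  # 8B parameters

def approx_size_gb(bits_per_weight: float) -> float:
    """Approximate file size in GB for a given quantization width."""
    return PARAMS * bits_per_weight / 8 / 1e9

for name, bits in [("Q2", 2), ("Q4", 4), ("Q5", 5), ("Q6_K", 6), ("Q8", 8)]:
    print(f"{name}: ~{approx_size_gb(bits):.1f} GB")
# Q4: ~4.0 GB, Q8: ~8.0 GB, etc.
```

This is why the 4-bit and 5-bit tiers are the usual sweet spot: roughly half the footprint of Q8 with only a modest quality drop.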
Downloads last month: 253

- Format: GGUF
- Model size: 8B params
- Architecture: qwen3vl

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit.
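To use one of these quantizations with llama.cpp, you download the GGUF for the tier you want plus the multimodal projector (mmproj) file that vision models require. A hedged sketch: the filenames below (`Qwen3-VL-8B-Thinking-Q5_K_M.gguf`, `mmproj-Qwen3-VL-8B-Thinking.gguf`) are assumptions based on common GGUF naming, so check the repo's file list for the actual names:

```shell
#!/bin/sh
# Sketch: choose a quant tier and assemble the download/run commands.
# All filenames are assumptions; verify against the repo's file list.
REPO="John1604/Qwen3-VL-8B-Thinking-gguf"
QUANT="Q5_K_M"                             # good quality/size balance
FILE="Qwen3-VL-8B-Thinking-${QUANT}.gguf"  # hypothetical filename

# Print the commands rather than running them (they need the files present):
echo "huggingface-cli download ${REPO} ${FILE}"
echo "llama-mtmd-cli -m ${FILE} --mmproj mmproj-Qwen3-VL-8B-Thinking.gguf \
  --image photo.jpg -p 'Describe this image.'"
```

`llama-mtmd-cli` is llama.cpp's multimodal CLI; text-only chat works with the regular `llama-cli`/`llama-server` binaries and just the `-m` model file.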

Inference Providers: this model is not deployed by any Inference Provider.

Model tree for John1604/Qwen3-VL-8B-Thinking-gguf: this model is one of 26 quantized versions of the base model.