genai-archive/Qwopus3.5-9B-v3-mlx-mxfp4

This model was converted to MLX format from Jackrong/Qwopus3.5-9B-v3 using mlx-vlm version 0.4.0. Refer to the original model card for more details on the model.
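The conversion itself can be reproduced with mlx-vlm's convert entry point. The command below is a sketch following the usual mlx_vlm.convert pattern; the output path is illustrative, and the exact flag for selecting the mxfp4 quantization mode may differ between mlx-vlm releases (-q alone typically applies the default 4-bit quantization):

python -m mlx_vlm.convert --hf-path Jackrong/Qwopus3.5-9B-v3 --mlx-path Qwopus3.5-9B-v3-mlx-mxfp4 -q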

Use with mlx

pip install -U mlx-vlm
python -m mlx_vlm.generate --model genai-archive/Qwopus3.5-9B-v3-mlx-mxfp4 --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
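For programmatic use, mlx-vlm also exposes a Python API. The sketch below follows the load/generate pattern from the mlx-vlm README; keyword names and signatures can vary between releases, and the image path is a placeholder:

from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "genai-archive/Qwopus3.5-9B-v3-mlx-mxfp4"

# Load the quantized weights and the matching processor from the Hub
model, processor = load(model_path)
config = load_config(model_path)

# Build a chat-formatted prompt for a single image
image = ["path/to/image.jpg"]  # placeholder; replace with a real image path or URL
prompt = apply_chat_template(processor, config, "Describe this image.", num_images=len(image))

# Greedy decoding to match the CLI example above
output = generate(model, processor, prompt, image, max_tokens=100, temperature=0.0, verbose=False)
print(output)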