Qwen2.5-0.5B-Instruct (MLX, 4-bit)
This repository contains an MLX-converted and 4-bit quantized version of Qwen/Qwen2.5-0.5B-Instruct.
- No fine-tuning or training was performed
- Format conversion + post-training quantization only
- Recommended default for on-device usage
Usage
pip install -U mlx-lm
mlx_lm.generate \
  --model Irfanuruchi/Qwen2.5-0.5B-Instruct-MLX-4bit \
  --prompt "Write a helpful onboarding message for an iOS app in 3 bullet points."
Bench notes (MacBook Pro M3 Pro)
- Prompt tokens: 45
- Generation tokens: 100
- Generation speed: ~292.9 tokens/sec
- Peak memory: ~0.319 GB
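The bench numbers above imply an end-to-end generation time well under a second; a quick back-of-the-envelope check in plain Python (values copied from the bench notes, ignoring prompt-processing time):

```python
# Rough latency estimate from the bench notes above.
gen_tokens = 100        # generation tokens
tok_per_sec = 292.9     # ~ measured generation speed

gen_time_s = gen_tokens / tok_per_sec
print(f"~{gen_time_s:.2f} s to generate {gen_tokens} tokens")  # ~0.34 s
```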
Tooling
- mlx-lm: 0.30.2
- mlx: bundled with Apple MLX (no public version string)
Related models
- 8-bit variant (higher quality):
https://huggingface.co/Irfanuruchi/Qwen2.5-0.5B-Instruct-MLX-8bit
Model size
- 77.3M params
Tensor type
- BF16 · U32