This model has some serious quality issues. It is not broken and can still chat, but it significantly underperforms the original model on complicated tasks.
This is LFM2.5-1.2B-Instruct quantized to NVFP4 with llm-compressor. The model is compatible with vLLM (tested with v0.13.0 on an RTX 4090).
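As a minimal sketch of how this checkpoint might be served with vLLM: the model card only states compatibility with vLLM v0.13.0, and the sampling settings and prompt below are illustrative, not part of the original card.

```python
from vllm import LLM, SamplingParams

# Load the quantized checkpoint; vLLM picks up the quantization
# config stored in the model files, so no extra flags are needed.
llm = LLM(model="kaitchup/LFM2.5-1.2B-Instruct-NVFP4")

# Illustrative sampling settings (not specified in the model card).
sampling_params = SamplingParams(temperature=0.7, max_tokens=256)

# llm.chat() applies the model's chat template before generating.
messages = [{"role": "user", "content": "Summarize NVFP4 quantization in one paragraph."}]
outputs = llm.chat(messages, sampling_params)
print(outputs[0].outputs[0].text)
```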
- Developed by: The Kaitchup
- License: Apache 2.0
How to Support My Work
"buy me a kofi" Subscribe to The Kaitchup. This helps me a lot to continue quantizing and evaluating models for free.
Model tree for kaitchup/LFM2.5-1.2B-Instruct-NVFP4
- Base model: LiquidAI/LFM2.5-1.2B-Base
- Finetuned from: LiquidAI/LFM2.5-1.2B-Instruct