# Qwen2.5-Coder-1.5B LoRA (DIVERSE)

LoRA adapter fine-tuned on the CodeGen-Diverse-5K dataset.
## Performance

- Pass@1: 21.9%
- Best checkpoint: step-500
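For reference, pass@k is conventionally computed with the unbiased estimator from the HumanEval line of work: draw `n` samples per problem, count the `c` that pass the tests, and average `1 - C(n-c, k) / C(n, k)` over problems. The exact sampling setup used for the 21.9% figure above is not stated here; this is a generic sketch of the estimator, not the evaluation script used for this adapter.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator for one problem:
    n = total samples drawn, c = samples that passed the tests."""
    if n - c < k:
        # Every size-k subset must contain at least one passing sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With 10 samples and 3 passing, the pass@1 estimate is 3/10.
print(pass_at_k(10, 3, 1))  # 0.3
```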
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the instruct base model, then apply the LoRA adapter on top.
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Coder-1.5B-Instruct")
model = PeftModel.from_pretrained(base_model, "erdem_kandilci/qwen2.5-coder-1.5b-lora-diverse")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-1.5B-Instruct")
```
## Model tree for erdem12345/qwen2.5-coder-1.5b-lora-diverse

- Base model: Qwen/Qwen2.5-1.5B
- Finetuned: Qwen/Qwen2.5-Coder-1.5B
- Finetuned: Qwen/Qwen2.5-Coder-1.5B-Instruct