# Model Card for results/LinalgZero-SFT-LoRA

This model is a fine-tuned version of atomwalk12/LinalgZero-SFT on the atomwalk12/linalgzero-grpo dataset. It has been trained using ART.
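
The training data can be pulled directly from the Hub for inspection with 🤗 Datasets. The sketch below assumes a `train` split exists; check the dataset card for the actual split names.

```python
from datasets import load_dataset

# Load the GRPO dataset named above for inspection.
# NOTE: "train" is an assumed split name, not confirmed by this card.
dataset = load_dataset("atomwalk12/linalgzero-grpo", split="train")
print(dataset)
print(dataset[0])
```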

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="atomwalk12/LinalgZero-GRPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
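
If the `atomwalk12/LinalgZero-GRPO` repository hosts only a LoRA adapter rather than merged weights (as the card title suggests), it can also be loaded explicitly on top of its base checkpoint with PEFT. This is a minimal sketch under that assumption, taking `atomwalk12/LinalgZero-SFT` as the base model; if the repository contains merged weights, the pipeline example above is sufficient.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the adapter's base model is the SFT checkpoint named in this card.
base_id = "atomwalk12/LinalgZero-SFT"
adapter_id = "atomwalk12/LinalgZero-GRPO"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Illustrative prompt only; any chat-formatted question works here.
messages = [{"role": "user", "content": "Compute the determinant of [[1, 2], [3, 4]]."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```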

## Training procedure

The training run can be visualized in Weights & Biases.

This model was trained with ART.
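
The exact ART training script is not included in this card. As a rough illustration only, the sketch below shows a GRPO-style run over the same dataset using TRL's `GRPOTrainer` (TRL is listed under Framework versions); it is not the actual procedure used for this model, and the reward function, LoRA settings, and hyperparameters are placeholders.

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer

# Assumptions: the dataset has a "train" split and a "prompt" column, as GRPOTrainer expects.
dataset = load_dataset("atomwalk12/linalgzero-grpo", split="train")

# Placeholder reward function; the reward actually used to train this model is not documented here.
def dummy_reward(completions, **kwargs):
    # Favor non-empty completions (illustrative only).
    return [float(len(str(c)) > 0) for c in completions]

trainer = GRPOTrainer(
    model="atomwalk12/LinalgZero-SFT",             # base checkpoint named in this card
    reward_funcs=dummy_reward,
    args=GRPOConfig(output_dir="LinalgZero-GRPO-sketch"),
    train_dataset=dataset,
    peft_config=LoraConfig(r=16, lora_alpha=32),   # assumed LoRA hyperparameters
)
trainer.train()
```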

### Framework versions

- TRL: 0.20.0
- Transformers: 4.56.2
- Pytorch: 2.7.1
- Datasets: 4.4.1
- Tokenizers: 0.22.1
- ART: 0.5.3