
JacobiForcing

community
https://github.com/hao-ai-lab/JacobiForcing

AI & ML interests

efficient LLM inference

Members: Lanxiang Hu, Yichao Fu, KouSiqi

Organization Card

Jacobi Forcing is a new training technique that converts LLMs into native causal parallel decoders. Jacobi Forcing models achieve up to 4.5x higher tokens-per-forward and a 4x wall-clock speedup on coding and math tasks, while retaining near-AR generation quality.
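The decoding pattern behind these speedup numbers can be sketched with a toy fixed-point (Jacobi) iteration: guess a block of future tokens, refine all positions in one parallel pass, and stop when the block no longer changes. This is a minimal illustration only; the `next_token` function below is a hypothetical deterministic stand-in for an LLM forward pass, not the actual Jacobi Forcing implementation.

```python
def next_token(prev: int) -> int:
    # Hypothetical stand-in for an LLM forward pass at one position.
    return (prev * 3 + 1) % 17

def ar_decode(prompt_tok: int, n: int) -> list[int]:
    # Baseline autoregressive decoding: n strictly sequential steps.
    out, tok = [], prompt_tok
    for _ in range(n):
        tok = next_token(tok)
        out.append(tok)
    return out

def jacobi_decode(prompt_tok: int, n: int) -> tuple[list[int], int]:
    # Start from an arbitrary n-token draft, then refine every position
    # in parallel until the block reaches a fixed point. Each loop body
    # corresponds to ONE parallel forward pass over the whole block.
    draft, iters = [0] * n, 0
    while True:
        prev = [prompt_tok] + draft[:-1]
        new = [next_token(p) for p in prev]  # all positions at once
        iters += 1
        if new == draft:                     # fixed point reached
            return new, iters
        draft = new

seq_ar = ar_decode(5, 8)
seq_jacobi, iters = jacobi_decode(5, 8)
assert seq_jacobi == seq_ar   # identical output to autoregressive decoding
assert iters <= 9             # converges in at most n+1 parallel passes
```

In the worst case each pass only fixes one more position, matching sequential decoding, but whenever several consecutive guesses happen to be right the iteration accepts multiple tokens per forward pass; training the model to make such runs of guesses accurate is what raises tokens-per-forward.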

Models (2)

JacobiForcing/JacobiForcing_Math_7B_v1

Updated Dec 15, 2025 • 5

JacobiForcing/JacobiForcing_Coder_7B_v1

8B • Updated Dec 15, 2025 • 26

Datasets (6)

JacobiForcing/OpenThoughts_Math_training_data_n64w32

Viewer • Updated Dec 15, 2025 • 230k • 26

JacobiForcing/OpenThoughts_Math_training_data_n16w16

Viewer • Updated Dec 15, 2025 • 251k • 27

JacobiForcing/OpenCodeInstruct_training_data_n32w16

Viewer • Updated Dec 15, 2025 • 2.16M • 55

JacobiForcing/OpenCodeInstruct_training_data_n16w16

Viewer • Updated Dec 15, 2025 • 624k • 38

JacobiForcing/OpenCodeInstruct_length_sorted

Viewer • Updated Aug 26, 2025 • 1.64M • 41

JacobiForcing/OpenThought2_length_bucketed

Viewer • Updated Aug 7, 2025 • 1.14M • 60