mlx-community/YanoljaNEXT-Rosetta-12B-2510-mlx-bf16

This model mlx-community/YanoljaNEXT-Rosetta-12B-2510-mlx-bf16 was converted to MLX format from yanolja/YanoljaNEXT-Rosetta-12B-2510 using mlx-lm version 0.28.1.

More translation-related MLX model quants for Apple silicon (e.g. a Mac Studio) can be found at https://huggingface.co/bibproj

Model Description

This model is a 12-billion parameter, decoder-only language model built on the Gemma3 architecture and fine-tuned by Yanolja NEXT. It is specifically designed to translate structured data (JSON format) while preserving the original data structure.

The model was trained on a multilingual dataset covering the following languages equally:

  • Arabic
  • Bulgarian
  • Chinese
  • Czech
  • Danish
  • Dutch
  • English
  • Finnish
  • French
  • German
  • Greek
  • Gujarati
  • Hebrew
  • Hindi
  • Hungarian
  • Indonesian
  • Italian
  • Japanese
  • Korean
  • Persian
  • Polish
  • Portuguese
  • Romanian
  • Russian
  • Slovak
  • Spanish
  • Swedish
  • Tagalog
  • Thai
  • Turkish
  • Ukrainian
  • Vietnamese

While optimized for these languages, it may also perform effectively on other languages supported by the base Gemma3 model.
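Because the model targets structured (JSON) translation, a prompt typically embeds the source JSON directly. The helper below is a hypothetical sketch of how such a prompt could be assembled; the wording of the instruction is an illustrative assumption, not the official prompt format from the model card:

```python
import json

def build_translation_prompt(payload: dict, target_language: str) -> str:
    """Build an illustrative prompt asking the model to translate the
    values of a JSON object while keeping its structure intact.
    NOTE: this template is a hypothetical example, not the official
    prompt format for YanoljaNEXT-Rosetta."""
    return (
        f"Translate the JSON values below into {target_language}. "
        "Return only valid JSON with the same keys and structure.\n"
        + json.dumps(payload, ensure_ascii=False, indent=2)
    )

prompt = build_translation_prompt(
    {"title": "Ocean view room", "amenities": ["Wi-Fi", "Breakfast"]},
    "Korean",
)
print(prompt)
```

The resulting string can be passed as `prompt` to the mlx-lm snippet below (via the chat template when one is present).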

Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/YanoljaNEXT-Rosetta-12B-2510-mlx-bf16")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
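Since the model is expected to return JSON with the original structure preserved, it is worth validating the response before using it downstream. The recursive check below is an illustrative helper (not part of mlx-lm or the model's tooling) that compares keys and nesting while allowing leaf values to differ:

```python
import json

def same_structure(original, translated) -> bool:
    """Recursively check that a translated JSON value has the same
    keys and nesting as the original; leaf values may differ."""
    if isinstance(original, dict):
        return (
            isinstance(translated, dict)
            and original.keys() == translated.keys()
            and all(same_structure(original[k], translated[k]) for k in original)
        )
    if isinstance(original, list):
        return (
            isinstance(translated, list)
            and len(original) == len(translated)
            and all(same_structure(a, b) for a, b in zip(original, translated))
        )
    return True  # leaves: translated text is allowed to differ

# Hypothetical source payload and model response for illustration.
source = {"room": {"name": "Deluxe", "tags": ["sea view", "quiet"]}}
response_text = '{"room": {"name": "디럭스", "tags": ["바다 전망", "조용함"]}}'
print(same_structure(source, json.loads(response_text)))
```

If the check fails, the output can be rejected or the request retried rather than silently corrupting the data pipeline.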
Safetensors · Model size: 13B params · Tensor type: BF16
