Shisa V2.1

Shisa V2.1 is an update to our Shisa V2 family of bilingual Japanese and English (JA/EN) general-purpose chat models trained by Shisa.AI. These models aim to excel in Japanese language tasks while retaining robust English capabilities.

Following the release of our Shisa V2 405B model, we resampled a new version of our Shisa V2 dataset and found a noticeable performance uplift. V2.1 started as a "quick" update incorporating this resampled data and some other data fixes, but it grew into updates and improvements to almost all of our Shisa V2 datasets, as well as several new additions for better instruction following, translation, politeness, and other Japanese-specific language behaviors. Beyond re-tuned SFT and DPO recipes, some of these new models integrate additional model-merging and RL stages as well.

The initial Shisa V2.1 release includes improved versions of our most popular 14B and 70B class models as well as new models at 1.2B, 3B, and 8B sizes suitable for local and edge-based use cases. Each has class-leading Japanese-language performance at its respective size.

| License | Model | Parameters | Context Length | JA AVG | EN AVG |
|---|---|---|---|---|---|
| LFM | shisa-v2.1-lfm2-1.2b | 1.2B | 32K | 43.4 | 27.6 |
| Llama 3.2 | shisa-v2.1-llama3.2-3b | 3B | 128K | 57.9 | 43.2 |
| Apache 2.0 | shisa-v2.1-qwen3-8b | 8B | 32K/128K | 67.8 | 57.8 |
| MIT | shisa-v2.1-unphi4-14b | 14B | 16K | 72.6 | 57.7 |
| Llama 3.3 | shisa-v2.1-llama3.3-70b | 70B | 128K | 73.1 | 66.0 |

Performance

Shisa V2.1 model development was guided by the latest version (V2.1) of our internal MultiEval test battery, which is not directly comparable to our previous Shisa V2 testing, so for convenience we have included a table showing direct comparisons to our prior models:

| Model | JA AVG | EN AVG | Shaberi V2.1 |
|---|---|---|---|
| shisa-v2-llama3.1-405b | 74.7 | 67.5 | 8.31 |
| shisa-v2.1-llama3.3-70b | 73.1 | 66.0 | 8.03 |
| shisa-v2.1-unphi4-14b | 72.6 | 57.7 | 7.71 |
| shisa-v2-llama3.3-70b | 69.0 | 64.3 | 7.68 |
| shisa-v2-unphi4-14b | 68.7 | 66.7 | 7.62 |
| shisa-v2.1-qwen3-8b | 67.8 | 57.8 | 7.35 |
| shisa-v2-llama3.1-8b | 58.7 | 55.1 | 6.43 |
| shisa-v2.1-llama3.2-3b | 57.9 | 43.2 | 6.23 |
| shisa-v2.1-lfm2-1.2b | 43.4 | 27.6 | 5.35 |

Shisa V2.1 14B exceeds Shisa V2 70B in Japanese language benchmark performance, and our Shisa V2.1 70B is approaching Shisa V2 405B performance. These gains were achieved without any benchmark-specific targeted training, and based on both the broad coverage of our evals as well as our own production deployments of variants of these models, we believe they reflect actual improvements in real-world Japanese-language capabilities.

Besides the changes to our MultiEval composition, our Shaberi scores are also not comparable to earlier scores: we have made some modifications to the prompts and now use GPT-5.1 (gpt-5.1-2025-11-13) as our Shaberi LLM Judge. GPT-5.1 scores answers significantly lower than the GPT-4.1 (gpt-4.1-2025-04-14) judge we used for V2.
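
For reference, below is a minimal sketch of the kind of LLM-judge call this style of scoring makes. The rubric prompt and helper function are illustrative only, not the actual Shaberi prompts or harness; the judge model name is the one used for Shisa V2.1 scoring.

```python
# Minimal sketch of an LLM-judge call for Shaberi-style scoring.
# The rubric prompt below is illustrative, not the actual Shaberi rubric.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_MODEL = "gpt-5.1-2025-11-13"  # judge model used for Shisa V2.1 Shaberi scores

def judge_answer(question: str, answer: str) -> float:
    """Ask the judge model to grade one answer on a 1-10 scale and return the score."""
    prompt = (
        "あなたは厳格な採点者です。以下の質問に対する回答を1〜10で採点し、"
        "数字のみを返してください。\n\n"
        f"質問:\n{question}\n\n回答:\n{answer}"
    )
    resp = client.chat.completions.create(
        model=JUDGE_MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return float(resp.choices[0].message.content.strip())
```

Because the absolute scale depends entirely on the judge, switching judges shifts all scores, which is why V2.1 Shaberi numbers are lower across the board than V2 numbers.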

Japanese MT-Bench

As pointed out in our Shisa V2 405B Overview Report, as our models have approached and surpassed the commonly used LLM Judge (GPT-4) in Japanese language capabilities, those judgments have become less useful for model development. For Shisa V2, we switched our Japanese MT-Bench testing to using GPT-4.1 for stricter and more discriminating scores, and for Shisa V2.1 we switched again to GPT-5.1, which we found to be even better at helping us judge model quality.

However, for easier reference and comparison to other models, we’ve included a table of our JA MT-Bench results using a dedicated "canonical version" test harness that should be more directly comparable with scores posted for other models online. Our repo also contains the raw answers and judgments for our tested models for replication or future re-judging. (It also includes a number of custom statistical analysis scripts which may be of interest.)

| Model | GPT-4-Turbo | GPT-4o | GPT-4.1 | GPT-5.1 |
|---|---|---|---|---|
| shisa-ai/shisa-v2-llama3.1-405b | 9.43 | 8.92 | 9.13 | 7.66 |
| shisa-ai/shisa-v2.1-llama3.3-70b | 9.26 | 8.74 | 8.69 | 7.24 |
| shisa-ai/shisa-v2.1-unphi4-14b | 9.28 | 8.50 | 8.68 | 7.07 |
| shisa-ai/shisa-v2-llama3.3-70b | 9.07 | 8.44 | 8.41 | 6.82 |
| shisa-ai/shisa-v2-unphi4-14b | 8.69 | 8.38 | 8.30 | 6.51 |
| shisa-ai/shisa-v2.1-qwen3-8b | 8.93 | 8.10 | 8.04 | 6.39 |
| shisa-ai/shisa-v2-qwen2.5-7b | 8.16 | 7.62 | 7.31 | 5.79 |
| shisa-ai/shisa-v2-llama3.1-8b | 7.97 | 7.41 | 6.99 | 5.44 |
| shisa-ai/shisa-v2.1-llama3.2-3b | 7.55 | 6.94 | 6.42 | 4.92 |
| shisa-ai/shisa-v2.1-lfm2-1.2b | 6.69 | 6.13 | 5.55 | 4.19 |

Cross-Lingual Token Leakage

While reviewing eval results, we noticed that many models can score highly on Japanese language benchmarks but still output non-Japanese words or sub-words (tokens). Internally we refer to this as Cross-Lingual Token Leakage (CLTL). It has also been referred to more generally as "word-level language confusion" (Marchisio et al., "Understanding and Mitigating Language Confusion in LLMs," Cohere).

We see many strong multilingual models exhibit this language confusion behavior, but quantifying (and reliably identifying) the issue is harder than one might expect: not only do Japanese and Chinese share large portions of the CJK Unicode ranges, but many valid English words also legitimately appear in Japanese text (think "AI", "VR", or common words and acronyms like "Google" or "NATO"). This is compounded by the fact that even frontier models suffer from "token blindness": they often cannot disentangle the meaning of tokens from the language they are written in, and frequently fail to recognize wrong-language tokens.
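
To make the problem concrete, here is a minimal sketch of a naive wrong-language-token heuristic. The allowlist and regex are purely illustrative; our actual CLTL detector is considerably more involved.

```python
# Naive CLTL heuristic sketch: flag Latin-script "words" in Japanese output,
# minus an allowlist of terms that legitimately appear in Japanese text.
# Purely illustrative; a Latin-only check like this cannot catch Chinese
# characters leaking into Japanese, since kanji and hanzi share the same
# Unicode ranges, which is exactly what makes real CLTL detection hard.
import re

ALLOWLIST = {"AI", "VR", "IT", "OK", "SNS", "Google", "NATO"}  # illustrative only

LATIN_WORD = re.compile(r"[A-Za-z][A-Za-z0-9'-]*")

def naive_leak_candidates(japanese_text: str) -> list[str]:
    """Return Latin-script tokens that are not on the allowlist."""
    return [w for w in LATIN_WORD.findall(japanese_text) if w not in ALLOWLIST]

print(naive_leak_candidates("AIを活用したtranslationサービスをGoogleが発表した。"))
# -> ['translation']  ("AI" and "Google" are allowlisted and not flagged)
```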

For Shisa V2.1, we have developed a new class of Japanese evaluation benchmark designed to detect CLTL, which can both measure overall leakage and pinpoint the specific wrong-language tokens.

| Base Model | Shisa V2.1 Model | Base Leak % | Shisa V2.1 Leak % | Leakage Improvement |
|---|---|---|---|---|
| Llama-3.2-3B-Instruct | shisa-v2.1-llama3.2-3b | 11.48% | 0.24% | 47.8× |
| LFM2-1.2B | shisa-v2.1-lfm2-1.2b | 4.32% | 0.32% | 13.5× |
| Qwen3-8B | shisa-v2.1-qwen3-8b | 2.18% | 0.44% | 5.0× |
| Llama-3.3-70B-Instruct | shisa-v2.1-llama3.3-70b | 1.90% | 0.36% | 5.3× |
| phi-4 | shisa-v2.1-unphi4-14b | 0.12% | 0.06% | 2.0× |

We believe eliminating CLTL and language confusion in general is of the utmost importance for deploying LLMs in most Japanese-language production use cases (e.g., translation, customer service, or even basic writing tasks). We plan to continue improving our detection heuristics, to integrate them into all of our future evaluation grading, and to use better CLTL detection to further improve our training methods. We will publish more in-depth details in a future writeup.

MultiEval 2.1

Our primary MultiEval V2.1 suite is a mixed battery of 10 Japanese and 7 English/general evaluations designed to give a broad picture of overall model performance across a variety of common general language tasks.

Japanese

  • Shaberi v2.1 - Our public fork of LightBlue’s Shaberi suite, extended for reasoning models, updated judges, output viewing, and errata fixes; despite some known issues, it remains our primary functional benchmark for quickly evaluating general Japanese LLM performance. All Shaberi scores in V2.1 are judged by GPT‑5.1 (gpt-5.1-2025-11-13).
    • ELYZA Tasks 100 - A set of 100 complex Japanese instructions and tasks graded on a 5‑point rubric, targeting realistic instruction-following and generation quality.
    • Japanese MT-Bench - A high quality Japanese adaptation of MT-Bench with eight categories of conversational and writing outputs, evaluated by an LLM judge on a 1–10 scale to capture stylistic and qualitative differences.
    • Rakuda - An adaptation of Rakuda, a set of open-ended Japanese questions covering Japan-focused factual knowledge; like the other Shaberi subtests, answers are scored by the LLM judge.
    • Tengu - A heterogeneous grab-bag of Japanese tasks (reasoning, QA, and writing) that is a useful secondary stress test for general capability.
  • M‑IFEval (JA slice) - Our public fork of LightBlue’s multilingual IFEval which fixes a number of errata; in the main Japanese composite we currently use only the Japanese subset, exposing it as M‑IFja, with rule-based instruction‑compliance scoring. We report the loose score.
  • shisa-jp-ifeval - Shisa.AI's own Japanese-specific IFEval variant, carefully replacing English‑centric constraints (spelling, capitalization, etc.) with verifiable Japanese constraints (mora counting, script choice, honorifics, etc.) and rule-based scoring.
  • shisa-jp-rp-bench - A Japanese roleplay/persona benchmark based on Aratako’s Japanese-RP-Bench, using multi-turn conversations and pairwise LLM judging (Gemini 2.0 Flash) with a Bradley-Terry model to produce stable RP rankings.
  • shisa-jp-tl-bench - English↔Japanese translation shootout: the target model’s translations are compared pairwise against a frozen base set and judged by a dedicated LLM judge (Gemini 2.5 Flash), then aggregated with a Bradley-Terry logistic model into a 0–10‑style score (see the Bradley-Terry aggregation sketch after this list).
  • kiseki-eval - A private Shisa.AI translation eval focused on subtle aspects of Japanese such as tone, gendering, and terms of endearment; translations are judged by Gemini 2.5 Pro using an Ultrafeedback‑style 1–5 rubric.
  • chotto-eval - Another internal Shisa.AI eval: a cross‑lingual, multi‑turn interaction set mimicking real conversational flows. We do pairwise LLM‑vs‑LLM comparison against a fixed, strong internal chotto.chat baseline model (which has roughly a 50% win/loss rate against Claude Opus 4.1 and Gemini 2.5 Flash), using Gemini 2.5 Pro as the evaluator.
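
Both shisa-jp-rp-bench and shisa-jp-tl-bench turn pairwise judge preferences into a single score via a Bradley-Terry fit. Below is a minimal sketch of that kind of aggregation; the function, model names, and win counts are illustrative, not our actual harness.

```python
# Minimal Bradley-Terry sketch: fit per-model strengths from pairwise win counts
# with simple iterative MM updates. Illustrative only; the real benches use a
# logistic-regression formulation, handle ties and judge-order effects, anchor
# against frozen baselines, and map the fitted strengths onto a 0-10-style scale.
def fit_bradley_terry(pairwise_wins: dict[tuple[str, str], int], iters: int = 200) -> dict[str, float]:
    """pairwise_wins[(a, b)] = number of times model a beat model b."""
    models = {m for pair in pairwise_wins for m in pair}
    strength = {m: 1.0 for m in models}
    for _ in range(iters):
        new = {}
        for m in models:
            wins = sum(w for (a, _), w in pairwise_wins.items() if a == m)
            denom = 0.0
            for other in models:
                if other == m:
                    continue
                n = pairwise_wins.get((m, other), 0) + pairwise_wins.get((other, m), 0)
                if n:
                    denom += n / (strength[m] + strength[other])
            new[m] = wins / denom if denom else strength[m]
        total = sum(new.values())
        strength = {m: s * len(models) / total for m, s in new.items()}  # rescale for stability
    return strength

# Hypothetical example: one candidate compared pairwise against two frozen baselines.
wins = {("candidate", "baseline-a"): 30, ("baseline-a", "candidate"): 10,
        ("candidate", "baseline-b"): 25, ("baseline-b", "candidate"): 15}
print(fit_bradley_terry(wins))  # the candidate gets the highest fitted strength
```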

English/General

  • MixEval Easy - A fast, mixed English reasoning benchmark (mixeval_easy task in Lighteval/Inspect) combining free‑form and multiple‑choice questions with 0.96 correlation with 2024 Chatbot Arena rankings; scored both by the task’s exact metrics and by LLM judges (Flow‑Judge flowaicom/Flow-Judge-v0.1 and a GPT judge, default gpt-4.1-mini-2025-04-14) via the HF Lighteval runner.
  • MixEval Hard - A harder subset of MixEval (mixeval_hard) designed to better separate strong models, run through the same Lighteval/Inspect pipeline and Flow‑Judge + GPT‑judge scoring as MixEval Easy.
  • LiveBench - Our public fork of LiveBench, a contamination‑aware, continually updated English benchmark covering coding, math, reasoning, language, data analysis, and instruction following, judged with LiveBench's own rule-based and ground-truth scoring (no LLM judge). We use the latest public dataset LiveBench-2024-11-25. Our fork supports concurrent runs, GPT-5.1 reasoning semantics, and other fixes.
  • GPQA Diamond - PhD‑level multiple‑choice science QA from the Diamond split of GPQA (Lighteval gpqa:diamond task using Idavidrein/gpqa); we score with Inspect’s multiple‑choice choice metric and an additional robustness pass that recovers bare letter answers, so this remains a pure reference‑based metric (no LLM judge).
  • Google IFEval (EN) – An English-language instruction‑following benchmark from Google Research (ifeval task in Lighteval/Inspect over google/IFEval), scored with the original rule-based check_following functions; we report the loose prompt‑level accuracy, with no LLM judge involved.
  • IFBench – Our public fork of AI2's IFBench, an IFEval-inspired (but less saturated) instruction‑following suite; we report loose prompt-level accuracy using IFBench's own verification functions (no LLM judge). Our fork fixes some evaluation bugs and adds a response generation script for parallel execution against OpenAI-compatible endpoints (see the sketch after this list).
  • HumanEval+ – Using our public fork of EvalPlus, which adds direct OpenAI/Gemini support as well as parallel generation support, we run HumanEval+ and report the plus-pass@1 score (reference/test-based judgment).
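
As referenced above, several of our eval forks generate responses in parallel against OpenAI-compatible endpoints. Here is a minimal sketch of that pattern; the endpoint URL, model name, and prompts are placeholders, not our actual scripts.

```python
# Minimal sketch of parallel response generation against an OpenAI-compatible
# endpoint (e.g. a locally served model). The endpoint URL, model name, and
# prompts are placeholders; the actual eval forks add retries, sampling
# parameters, and result caching on top of this pattern.
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder endpoint
MODEL = "shisa-ai/shisa-v2.1-qwen3-8b"  # model under evaluation

def generate(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

prompts = ["日本の首都はどこですか?", "Explain the Bradley-Terry model in one sentence."]
with ThreadPoolExecutor(max_workers=8) as pool:
    responses = list(pool.map(generate, prompts))
```

The full per-eval MultiEval 2.1 results for the five release models are:
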
| Eval | V2.1 1.2B | V2.1 3B | V2.1 8B | V2.1 14B | V2.1 70B |
|---|---|---|---|---|---|
| JA AVG | 43.4 | 57.9 | 67.8 | 72.6 | 73.1 |
| EN AVG | 27.6 | 43.2 | 57.8 | 57.7 | 66.0 |
| Shaberi v2.1 | 5.347 | 6.231 | 7.353 | 7.713 | 8.029 |
| ELYZA 100 | 5.480 | 6.700 | 7.660 | 8.270 | 8.360 |
| JA MT-Bench | 5.496 | 6.026 | 7.783 | 7.867 | 7.967 |
| Rakuda | 5.500 | 6.425 | 7.150 | 7.562 | 8.275 |
| Tengu | 4.913 | 5.774 | 6.817 | 7.152 | 7.513 |
| M-IFEval (JA) | 0.355 | 0.424 | 0.471 | 0.529 | 0.587 |
| shisa-jp-ifeval | 0.167 | 0.260 | 0.347 | 0.407 | 0.453 |
| shisa-jp-rp-bench | 2.521 | 4.714 | 4.792 | 4.745 | 4.656 |
| shisa-jp-tl-bench | 3.425 | 7.886 | 8.917 | 9.494 | 9.551 |
| kiseki-eval | 3.182 | 3.318 | 3.580 | 3.977 | 3.970 |
| chotto-eval | 0.200 | 0.218 | 0.455 | 0.545 | 0.382 |
| MixEval Easy | 0.422 | 0.719 | 0.802 | 0.848 | 0.915 |
| MixEval Hard | 0.266 | 0.493 | 0.607 | 0.655 | 0.789 |
| LiveBench | 13.2 | 21.7 | 45.7 | 38.8 | 50.0 |
| GPQA Diamond | 0.141 | 0.303 | 0.328 | 0.429 | 0.419 |
| IFEval | 0.512 | 0.584 | 0.791 | 0.623 | 0.880 |
| IFBench | 0.173 | 0.214 | 0.259 | 0.265 | 0.340 |
| HumanEval+ | 0.287 | 0.494 | 0.805 | 0.829 | 0.780 |

Credits

The Shisa V2.1 models were developed by Leonard Lin and Adam Lensenmayer of Shisa.AI.

Our models would not exist without the contributions of the open source and open models community and our generous compute sponsors.

Primary compute for Shisa V2.1 model training was provided by AMD; all of our final Shisa V2.1 models (and several hundred ablations) were trained with Axolotl on an AMD MI300X node provided by the AMD Developer Cloud. RL was done directly with Hugging Face TRL, and merging with Arcee AI's mergekit.

Additional compute and credits used for ablations, data processing, and evaluations were provided by: Chutes, Hot Aisle, Emerald Compute System, Lambda, and Strata.

We are also thankful for the active support from the AWS Activate, Google for Startups, and NVIDIA Inception programs.

Additional Models

While not part of our official Shisa V2.1 release, over the course of training we have created a few other interesting models along the way.

  • 037-rakuten-2.0-mini-instruct-1.5b-v2new-dpo405b - trained to commemorate a Rakuten AI office visit/onsite talk, this was our first experiment trying our new Shisa V2.x dataset on a smaller model (1.5B) - its performance is superseded by our Shisa V2.1 1.2B model, but it further enhanced our confidence that our training could improve even very strong Japanese SLMs.
  • shisa-v2.1c-lfm2-350m - this small model was trained for the LiquidAI Tokyo Hackathon and again surprised us. It is SOTA for Japanese at <1B (and appears to largely fix the base model's CLTL issues). There is also a dedicated JA-EN translation tune that is surprisingly capable.
  • 031-swallow-8b-0.5-base-v2new-dpo405b - trained on the latest Swallow 0.5 CPT base model, this was SOTA when published, but is superseded by our Shisa V2.1 8B model. Due to the Gemma + Llama dual licensing, we do not believe this model is actually suitable for commercial or research use, but we were curious to see the results of our new training on a strong Japanese non-instruction-tuned base model.
  • shisa-v1-7b-v2.1 - our original Shisa 7B V1 model was trained two years ago, and we celebrated the anniversary by experimenting with additional re-training of the original model to see how much it could be improved (a fair bit, actually), but its performance is still much lower than modern models. We recommend using our Shisa V2.1 8B model instead.

During the course of Shisa V2.1 development we ported the HF megablocks kernel to shisa-ai/megablocks-hip, with numerically validated 4-5X performance gains for HF/TRL-based trainers on AMD MI300X GPUs. We have also ported Aux Loss Free Balancing as an Axolotl plugin for multiple MoE architectures, along with a distributed-compatible (FSDP2 & DSZ3) Muon-clip optimizer.
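
As a rough illustration of the aux-loss-free balancing idea, here is a minimal sketch of the per-expert bias-update rule from the published technique; this is not our Axolotl plugin, and the function names and update rate are illustrative.

```python
# Minimal sketch of auxiliary-loss-free MoE load balancing: a per-expert bias is
# added to router scores for top-k *selection only*, and nudged after each batch
# toward equalizing expert load. Gate weights still come from the unbiased scores.
# Illustrative of the published technique, not our Axolotl plugin.
import torch

def select_experts(router_logits, expert_bias, top_k=2):
    scores = router_logits.sigmoid()                      # [tokens, num_experts]
    _, idx = (scores + expert_bias).topk(top_k, dim=-1)   # bias affects routing only
    gate = scores.gather(-1, idx)                         # gate values from unbiased scores
    return idx, gate

def update_bias(expert_bias, expert_idx, num_experts, update_rate=1e-3):
    load = torch.bincount(expert_idx.flatten(), minlength=num_experts).float()
    error = load.mean() - load                            # positive if an expert is underloaded
    return expert_bias + update_rate * error.sign()       # raise bias of underloaded experts
```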

For more details on our development and insights, please visit the Shisa V2 Github repository and the Shisa.AI website.


Per the Llama Community License Agreements, the official names of the Llama-based models are "Llama 3.2 Shisa V2.1 3B" and "Llama 3.3 Shisa V2.1 70B".
