Merge method reference: [Model Stock: All we need is just a few fine-tuned models](https://arxiv.org/abs/2403.19522) (arXiv:2403.19522).
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
This model was merged using the Model Stock merge method, with Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8 as the base.
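For intuition: Model Stock interpolates each weight tensor between the average of the fine-tuned models and the base model, with the interpolation ratio derived from the angle between the fine-tuned weight deltas. Below is a minimal per-tensor sketch of that rule in PyTorch, following the formula in the paper above; the function name is hypothetical and this is an illustration of the idea, not mergekit's actual implementation.

```python
import torch

def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Per-tensor sketch of the Model Stock rule (arXiv:2403.19522).

    Interpolates between the mean of the fine-tuned weights and the base
    weights, using the average pairwise cosine similarity of the deltas.
    """
    n = len(finetuned)
    assert n >= 2, "Model Stock needs at least two fine-tuned models"
    deltas = [w - base for w in finetuned]

    # Average pairwise cosine similarity between the fine-tuned deltas.
    cos_sum, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            cos_sum += torch.nn.functional.cosine_similarity(
                deltas[i].flatten(), deltas[j].flatten(), dim=0
            ).item()
            pairs += 1
    cos_theta = cos_sum / pairs

    # Interpolation ratio from the paper: t = N*cos(theta) / ((N-1)*cos(theta) + 1).
    # Aligned deltas (cos -> 1) give t -> 1 (trust the average);
    # orthogonal deltas (cos -> 0) give t -> 0 (fall back to the base).
    t = n * cos_theta / ((n - 1) * cos_theta + 1)
    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```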
The following models were included in the merge:

* Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8.7
* deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
* Qwen/Qwen2.5-14B-Instruct
* Qwen/Qwen2.5-14B-Instruct-1M
* Qwen/Qwen2.5-Coder-14B-Instruct
* prithivMLmods/Equuleus-Opus-14B-Exp
* sometimesanotion/Lamarck-14B-v0.7-Fusion
* sometimesanotion/LamarckInfusion-14B-v1
* suayptalha/Lamarckvergence-14B
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8
  - model: Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8.7
  - model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
  - model: Qwen/Qwen2.5-14B-Instruct
  - model: Qwen/Qwen2.5-14B-Instruct-1M
  - model: Qwen/Qwen2.5-Coder-14B-Instruct
  - model: prithivMLmods/Equuleus-Opus-14B-Exp
  - model: sometimesanotion/Lamarck-14B-v0.7-Fusion
  - model: sometimesanotion/LamarckInfusion-14B-v1
  - model: suayptalha/Lamarckvergence-14B
base_model: Lunzima/NQLSG-Qwen2.5-14B-MegaFusion-v8
chat_template: auto
dtype: bfloat16
merge_method: model_stock
parameters:
  int8_mask: true
tokenizer:
  source: base
```
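To reproduce the merge, the configuration above can be saved to a file and passed to mergekit. A minimal sketch using mergekit's documented Python entry point follows; the file and output paths are placeholders, and the options shown should be adjusted to your hardware.

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (path is a placeholder).
with open("config.yaml", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

# Write the merged model to ./merged. The tokenizer is copied from the
# base model, matching `tokenizer: source: base` in the config.
run_merge(
    merge_config,
    out_path="./merged",
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)
```

Equivalently, the `mergekit-yaml config.yaml ./merged` CLI performs the same merge without any Python code.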