rammurmu committed
Commit c6d36b4 · verified · 1 parent: 5bb5ee6

Update README.md

Files changed (1): README.md (+172 −3)
README.md CHANGED

```diff
@@ -1,5 +1,5 @@
 ---
-title: LibreChat
+title: RunAsh Chat
 emoji: 🪶
 colorFrom: pink
 colorTo: blue
@@ -9,5 +9,174 @@ license: mit
 app_port: 3080
 thumbnail: >-
   https://cdn-uploads.huggingface.co/production/uploads/6380f0cd471a4550ff258598/ASSYdCAc5M0h_eiuoZNd3.png
-short_description: LibreChat - All-In-One AI Conversations
+short_description: RunAsh-Chat - All-In-One AI Conversations
 ---
```
---
title: RunAsh Chat
emoji: 🪶
colorFrom: pink
colorTo: blue
license: mit
app_port: 3080
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/6380f0cd471a4550ff258598/ASSYdCAc5M0h_eiuoZNd3.png
short_description: RunAsh-Chat - All-In-One AI Conversations
---
# 🚀 RunAsh-Chat: A LibreChat-Inspired Open-Source Conversational AI

![RunAsh-Chat Logo](https://huggingface.co/datasets/runash-ai/logo/resolve/main/runash-chat-logo.png)
*Built for freedom, powered by openness.*

> *“LibreChat, but better — faster, smarter, and fully yours.”*

---

## Model Description

**RunAsh-Chat** is an open-source, instruction-tuned large language model designed to replicate and extend the conversational capabilities of the popular [LibreChat](https://github.com/LibreChat/LibreChat) ecosystem, while introducing improved reasoning, safety, and multi-turn dialogue handling.

Built on either the **Mistral-7B** or the **Llama-3-8B** base model (depending on the variant), RunAsh-Chat is fine-tuned on a curated dataset of high-quality, human-aligned conversations and code-assistance prompts, with safety filtering applied during curation. It is optimized for self-hosted AI chat interfaces such as LibreChat, Ollama, Text Generation WebUI, and local LLM APIs.

Unlike many closed or commercial alternatives, **RunAsh-Chat is 100% free to use, modify, and deploy**, including commercially, under the Apache 2.0 license.

### Key Features

✅ **LibreChat-Ready**: Seamless drop-in replacement for models used in LibreChat deployments
✅ **Multi-Turn Context**: Excellent memory of conversation history (up to 8K tokens)
✅ **Code & Math Ready**: Strong performance on programming, logic, and quantitative reasoning
✅ **Safety-Enhanced**: Built-in moderation to avoid harmful, biased, or toxic outputs
✅ **Lightweight & Fast**: Optimized for CPU/GPU inference with GGUF, AWQ, and GPTQ support
✅ **Multilingual**: Supports English, Spanish, French, German, Portuguese, Russian, Chinese, and more
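The 8K multi-turn window fills up fast in long chats; one common pattern is to trim the oldest non-system turns before each request. A minimal sketch (the 4-characters-per-token estimate is a rough heuristic, not this model's tokenizer):

```python
def trim_history(messages, max_tokens=8192, chars_per_token=4):
    """Drop the oldest non-system turns until the estimated token count fits.
    Token counts are estimated at ~4 characters/token (a rough heuristic)."""
    def estimate(msgs):
        return sum(len(m["content"]) for m in msgs) // chars_per_token

    system = [m for m in messages if m["role"] == "system"]
    turns = [m for m in messages if m["role"] != "system"]
    while turns and estimate(system + turns) > max_tokens:
        turns.pop(0)  # discard the oldest turn first
    return system + turns

history = [{"role": "system", "content": "You are RunAsh-Chat."}]
history += [{"role": "user", "content": "x" * 10000},
            {"role": "assistant", "content": "y" * 10000},
            {"role": "user", "content": "latest question"}]
trimmed = trim_history(history, max_tokens=4000)
print([m["role"] for m in trimmed])
```

In production, count tokens with the model's actual tokenizer instead of the character heuristic.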

---

## Model Variants

| Variant | Base Model | Quantization | Context Length | Link |
|--------|------------|--------------|----------------|------|
| `RunAsh-Chat-v1.0-Mistral-7B` | Mistral-7B-v0.1 | Q4_K_M GGUF | 8K | [🤗 Hugging Face](https://huggingface.co/runash-ai/RunAsh-Chat-v1.0-Mistral-7B) |
| `RunAsh-Chat-v1.0-Llama3-8B` | Llama-3-8B-Instruct | Q4_K_S GGUF | 8K | [🤗 Hugging Face](https://huggingface.co/runash-ai/RunAsh-Chat-v1.0-Llama3-8B) |
| `RunAsh-Chat-v1.0-Mistral-7B-AWQ` | Mistral-7B-v0.1 | AWQ (4-bit) | 8K | [🤗 Hugging Face](https://huggingface.co/runash-ai/RunAsh-Chat-v1.0-Mistral-7B-AWQ) |

> 💡 **Tip**: Use GGUF variants for CPU/Apple Silicon; AWQ/GPTQ for NVIDIA GPUs.
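To pick a variant for your hardware, a rough disk/RAM estimate helps. The sketch below assumes approximate average bit-widths (Q4_K_M averages roughly 4.8 bits/weight, AWQ 4-bit about 4) and an illustrative 10% overhead for scales and metadata; actual file sizes vary:

```python
def approx_quantized_size_gb(n_params_billion, bits_per_weight, overhead=1.1):
    """Rough on-disk size of a quantized model: params * bits / 8 bytes,
    plus ~10% for scales, metadata, and non-quantized layers (illustrative)."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8 * overhead
    return bytes_total / 1e9

for name, bits in [("Q4_K_M GGUF", 4.8), ("AWQ 4-bit", 4.0), ("FP16", 16.0)]:
    print(f"7B @ {name}: ~{approx_quantized_size_gb(7, bits):.1f} GB")
```

Leave headroom on top of this for the KV cache, which grows with context length.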

---

## Usage Examples

### Using with Hugging Face `transformers`

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "runash-ai/RunAsh-Chat-v1.0-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are RunAsh-Chat, a helpful assistant."},
    {"role": "user", "content": "Explain quantum computing in simple terms."},
]

# add_generation_prompt=True appends the assistant header so the model
# answers instead of continuing the user turn
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, temperature=0.7, do_sample=True)

# decode only the newly generated tokens, not the echoed prompt
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```
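If you need the raw prompt string (for example, in a runtime without chat-template support), the `[INST]` instruct format applied by `apply_chat_template` can be reproduced by hand. This sketch assumes the standard Mistral-7B instruct convention of folding the system prompt into the first user turn; check the model's actual `tokenizer_config.json` template before relying on it:

```python
def build_mistral_prompt(messages):
    """Render chat messages into the Mistral instruct format:
    <s>[INST] {system + user} [/INST] {assistant}</s>[INST] ... [/INST]"""
    system = ""
    parts = []
    pending_user = None
    for msg in messages:
        if msg["role"] == "system":
            system = msg["content"].strip() + "\n\n"
        elif msg["role"] == "user":
            # fold the system prompt into the first user turn
            pending_user = system + msg["content"].strip()
            system = ""
        elif msg["role"] == "assistant":
            parts.append(f"[INST] {pending_user} [/INST] {msg['content'].strip()}</s>")
            pending_user = None
    if pending_user is not None:
        parts.append(f"[INST] {pending_user} [/INST]")  # open turn awaiting a reply
    return "<s>" + "".join(parts)

prompt = build_mistral_prompt([
    {"role": "system", "content": "You are RunAsh-Chat, a helpful assistant."},
    {"role": "user", "content": "Explain quantum computing in simple terms."},
])
print(prompt)
```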

### Using with Ollama

```bash
ollama pull runash-chat:7b
ollama run runash-chat:7b "What's the capital of Canada?"
```
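Ollama also exposes a local REST API (default port 11434), which is how front-ends like LibreChat typically talk to it. A minimal sketch using only the standard library; the `runash-chat:7b` tag mirrors the pull command above, and the network call is left commented out so the snippet runs without a live server:

```python
import json
import urllib.request

def build_chat_request(model, user_message, url="http://localhost:11434/api/chat"):
    """Build a POST request for Ollama's /api/chat endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,  # return one JSON object instead of a stream
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("runash-chat:7b", "What's the capital of Canada?")
print(req.full_url, json.loads(req.data)["model"])

# with Ollama running locally, send it like this:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["message"]["content"])
```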

### Using with LibreChat

1. Download the GGUF model file (e.g., `RunAsh-Chat-v1.0-Mistral-7B.Q4_K_M.gguf`)
2. Place it in your `models/` folder
3. In `config.yml`:

```yaml
model: "RunAsh-Chat-v1.0-Mistral-7B"
provider: "ollama" # or "local"
```

---

## Training Data & Fine-Tuning

RunAsh-Chat was fine-tuned on a hybrid dataset that includes:

- **Alpaca** and **Alpaca-CoT** datasets
- **OpenAssistant** conversations
- **Self-Instruct** and **Dolly** data
- **Human-curated chat logs** from open-source AI communities
- **Ethical filtering**: toxic, biased, or harmful examples were removed using rule-based and model-based moderation

Fine-tuning was performed with **LoRA** (using **QLoRA** for memory efficiency) on 4× A100 40GB GPUs over 3 epochs.
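For a sense of why LoRA keeps fine-tuning cheap: a rank-r adapter on a d_out × d_in weight trains only r·(d_in + d_out) parameters instead of d_out·d_in. The numbers below use Mistral-7B's published hidden size (4096); the rank of 16 is an illustrative assumption, not the value used for this model:

```python
def lora_trainable_params(d_in, d_out, rank):
    """A rank-r LoRA adapter on a (d_out x d_in) weight trains two small
    matrices, A (r x d_in) and B (d_out x r): r * (d_in + d_out) params."""
    return rank * (d_in + d_out)

d_model = 4096  # Mistral-7B hidden size
rank = 16       # illustrative; common choices range from 8 to 64

full = d_model * d_model  # one full attention projection matrix
adapter = lora_trainable_params(d_model, d_model, rank)
print(f"full projection: {full:,} params, LoRA adapter: {adapter:,} params")
print(f"adapter is {adapter / full:.3%} of the full matrix")
```

QLoRA goes further by holding the frozen base weights in 4-bit precision while training these small adapters in higher precision.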

---

## Limitations & Ethical Considerations

⚠️ **Not a replacement for human judgment**: always validate outputs for critical applications.
⚠️ **May hallucinate** facts, especially in niche domains; verify with trusted sources.
⚠️ **Bias mitigation is ongoing**: while trained for fairness, residual biases may persist.
⚠️ **Not designed for medical or legal advice**: consult professionals.

RunAsh-Chat is **not** a general-purpose AI agent. It is intended primarily for educational, personal, and research use, although commercial use is permitted under the Apache 2.0 license.

---

## License

This model is released under the **Apache License 2.0**, the same license as the Mistral-7B base weights. (Note that Llama 3 is distributed under the Meta Llama 3 Community License, whose terms also apply to the Llama-3-based variant.) You are free to:

- Use it commercially
- Modify and redistribute
- Build derivative models

**Attribution is appreciated but not required.**

> *“LibreChat inspired us. We built something better — and gave it back to the community.”*

---

## Citation

If you use RunAsh-Chat in your research or project, please cite:

```bibtex
@software{runash_chat_2024,
  author    = {RunAsh AI Collective},
  title     = {RunAsh-Chat: A LibreChat-Inspired Open-Source Chat Model},
  year      = {2024},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/runash-ai/RunAsh-Chat-v1.0-Mistral-7B}
}
```

---

## Community & Support

🔗 **GitHub**: https://github.com/runash-ai/runash-chat
💬 **Discord**: https://discord.gg/runash-ai
🐞 **Report Issues**: https://github.com/runash-ai/runash-chat/issues
🚀 **Contribute**: We welcome fine-tuning datasets, translations, and optimizations!

---

## Acknowledgments

We gratefully acknowledge the work of:

- Mistral AI for Mistral-7B
- Meta for Llama 3
- The LibreChat community for inspiring accessible AI
- Hugging Face for open model hosting and tools

---

*RunAsh-Chat — Because freedom shouldn’t come with a price tag.*
*Made with ❤️ by the RunAsh AI Collective*

---