---
title: RunAsh Chat
emoji: 🪶
colorFrom: pink
colorTo: blue
sdk: docker
pinned: true
license: apache-2.0
app_port: 3080
thumbnail: >-
https://cdn-uploads.huggingface.co/production/uploads/6380f0cd471a4550ff258598/ASSYdCAc5M0h_eiuoZNd3.png
short_description: RunAsh-Chat - All-In-One AI Conversations
---
# 🚀 RunAsh-Chat: A LibreChat-Inspired Open-Source Conversational AI

*Built for freedom, powered by openness.*
> *“LibreChat, but better — faster, smarter, and fully yours.”*
---
## Model Description
**RunAsh-Chat** is an open-source, instruction-tuned large language model designed to replicate and enhance the conversational capabilities of the popular [LibreChat](https://github.com/danny-avila/LibreChat) ecosystem, while introducing improved reasoning, safety, and multi-turn dialogue handling.
Built upon the **Mistral-7B** or **Llama-3-8B** base architecture (depending on variant), RunAsh-Chat is fine-tuned on a curated dataset of high-quality, human-aligned conversations, code assistance prompts, and ethical safety filters. It is optimized for use in self-hosted AI chat interfaces like LibreChat, Ollama, Text Generation WebUI, and local LLM APIs.
Unlike many closed or commercial alternatives, **RunAsh-Chat is 100% free to use, modify, and deploy** — even commercially — under the Apache 2.0 license.
### Key Features
- ✅ **LibreChat-Ready**: Seamless drop-in replacement for models used in LibreChat deployments
- ✅ **Multi-Turn Context**: Excellent memory of conversation history (up to 8K tokens)
- ✅ **Code & Math Ready**: Strong performance on programming, logic, and quantitative reasoning
- ✅ **Safety-Enhanced**: Built-in moderation to avoid harmful, biased, or toxic outputs
- ✅ **Lightweight & Fast**: Optimized for CPU/GPU inference with GGUF, AWQ, and GPTQ support
- ✅ **Multilingual**: Supports English, Spanish, French, German, Portuguese, Russian, Chinese, and more
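Multi-turn context still requires the client to keep the conversation inside the 8K window. Below is a minimal, library-free sketch of one common approach: keep the system prompt, drop the oldest turns when over budget. The helper names and the word-count token estimate are illustrative, not part of RunAsh-Chat; a real client would count tokens with the model's tokenizer.

```python
# Minimal sketch of client-side multi-turn history management.
# Token counts are approximated by whitespace word count here.

MAX_CONTEXT_TOKENS = 8192  # matches the advertised 8K context length

def estimate_tokens(message: dict) -> int:
    """Rough token estimate: one token per whitespace-separated word."""
    return len(message["content"].split())

def trim_history(messages: list[dict], budget: int = MAX_CONTEXT_TOKENS) -> list[dict]:
    """Keep the system prompt and drop the oldest turns until the
    remaining conversation fits in the context budget."""
    system, turns = messages[:1], messages[1:]
    while turns and sum(map(estimate_tokens, system + turns)) > budget:
        turns = turns[1:]  # drop the oldest user/assistant turn
    return system + turns

history = [{"role": "system", "content": "You are RunAsh-Chat."}]
history.append({"role": "user", "content": "Hello!"})
history = trim_history(history)
```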
---
## Model Variants
| Variant | Base Model | Quantization | Context Length | Link |
|--------|------------|--------------|----------------|------|
| `RunAsh-Chat-v1.0-Mistral-7B` | Mistral-7B-v0.1 | Q4_K_M GGUF | 8K | [🤗 Hugging Face](https://huggingface.co/runash-ai/RunAsh-Chat-v1.0-Mistral-7B) |
| `RunAsh-Chat-v1.0-Llama3-8B` | Llama-3-8B-Instruct | Q4_K_S GGUF | 8K | [🤗 Hugging Face](https://huggingface.co/runash-ai/RunAsh-Chat-v1.0-Llama3-8B) |
| `RunAsh-Chat-v1.0-Mistral-7B-AWQ` | Mistral-7B-v0.1 | AWQ (4-bit) | 8K | [🤗 Hugging Face](https://huggingface.co/runash-ai/RunAsh-Chat-v1.0-Mistral-7B-AWQ) |
> 💡 **Tip**: Use GGUF variants for CPU/Apple Silicon; AWQ/GPTQ for NVIDIA GPUs.
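For sizing downloads, a rough rule of thumb is file size ≈ parameter count × effective bits per weight ÷ 8. The bits-per-weight figures below are approximate averages for llama.cpp quantization formats, not exact specifications:

```python
# Back-of-the-envelope file-size estimate for quantized weights.
# Effective bits-per-weight values are approximate llama.cpp averages.
BITS_PER_WEIGHT = {"Q4_K_S": 4.25, "Q4_K_M": 4.5, "Q8_0": 8.5, "F16": 16.0}

def approx_size_gb(n_params: float, quant: str) -> float:
    """Approximate on-disk size in GB for a given quantization format."""
    return n_params * BITS_PER_WEIGHT[quant] / 8 / 1e9

print(f"{approx_size_gb(7.2e9, 'Q4_K_M'):.1f} GB")  # roughly 4 GB for a 7B model
```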
---
## Usage Examples
### Using with Hugging Face `transformers`
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "runash-ai/RunAsh-Chat-v1.0-Mistral-7B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are RunAsh-Chat, a helpful assistant."},
    {"role": "user", "content": "Explain quantum computing in simple terms."},
]

# Build the prompt with the model's chat template; add_generation_prompt
# appends the assistant header so the model replies as the assistant.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, temperature=0.7, do_sample=True)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```
### Using with Ollama
```bash
ollama pull runash-chat:7b
ollama run runash-chat "What's the capital of Canada?"
```
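If the tag is not available in the Ollama registry, a downloaded GGUF file can be imported locally with a Modelfile. The filename and parameter values below are illustrative:

```
FROM ./RunAsh-Chat-v1.0-Mistral-7B.Q4_K_M.gguf
PARAMETER temperature 0.7
PARAMETER num_ctx 8192
SYSTEM """You are RunAsh-Chat, a helpful assistant."""
```

Then build the local tag with `ollama create runash-chat:7b -f Modelfile`.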
### Using with LibreChat
1. Download the GGUF model file (e.g., `RunAsh-Chat-v1.0-Mistral-7B.Q4_K_M.gguf`)
2. Place it in your `models/` folder
3. In `librechat.yaml`:
```yaml
model: "RunAsh-Chat-v1.0-Mistral-7B"
provider: "ollama" # or "local"
```
---
## Training Data & Fine-Tuning
RunAsh-Chat was fine-tuned using a hybrid dataset including:
- **Alpaca** and **Alpaca-CoT** datasets
- **OpenAssistant** conversations
- **Self-instruct** and **Dolly** data
- **Human-curated chat logs** from open-source AI communities
- **Ethical filtering**: Removed toxic, biased, or harmful examples using rule-based and model-based moderation
Fine-tuning was performed with **LoRA** adapters using the **QLoRA** recipe (4-bit quantized base model) for memory efficiency, on 4× A100 40GB GPUs for 3 epochs.
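For intuition on why LoRA keeps memory low: a rank-r adapter on a d×k weight matrix trains only r(d+k) parameters instead of the full dk. A small sketch follows; the 4096 hidden size matches Mistral-7B's attention projections, but the rank and target choice here are illustrative, not the actual training recipe.

```python
# LoRA factorizes the weight update as B (d x r) @ A (r x k), so the
# trainable parameter count per matrix drops from d*k to r*(d + k).

def lora_params(d: int, k: int, r: int) -> int:
    """Trainable parameters for a rank-r LoRA adapter on a d x k weight."""
    return r * (d + k)

d = k = 4096                      # e.g. a q_proj weight in Mistral-7B
full = d * k                      # ~16.8M params if tuned directly
lora = lora_params(d, k, r=16)    # 131,072 params with a rank-16 adapter
print(f"trainable fraction: {lora / full:.4%}")
```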
---
## Limitations & Ethical Considerations
- ⚠️ **Not a replacement for human judgment**: always validate outputs for critical applications.
- ⚠️ **May hallucinate facts**, especially in niche domains; verify with trusted sources.
- ⚠️ **Bias mitigation is ongoing**: while trained for fairness, residual biases may persist.
- ⚠️ **Not designed for medical or legal advice**: consult professionals.
RunAsh-Chat is **not** a general-purpose AI agent. It is intended primarily for **educational, personal, and research use**, though commercial use is permitted under Apache 2.0.
---
## License
This model is released under the **Apache License 2.0**, the same license as the Mistral-7B base weights. (Llama 3-based variants additionally inherit the terms of the Meta Llama 3 Community License.) Under Apache 2.0 you are free to:
- Use it commercially
- Modify and redistribute
- Build derivative models
**Attribution is appreciated but not required.**
> *“LibreChat inspired us. We built something better — and gave it back to the community.”*
---
## Citation
If you use RunAsh-Chat in your research or project, please cite:
```bibtex
@software{runash_chat_2024,
  author    = {RunAsh AI Collective},
  title     = {RunAsh-Chat: A LibreChat-Inspired Open-Source Chat Model},
  year      = {2024},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/runash-ai/RunAsh-Chat-v1.0-Mistral-7B}
}
```
---
## Community & Support
- 🔗 **GitHub**: https://github.com/runash-ai/runash-chat
- 💬 **Discord**: https://discord.gg/runash-ai
- 🐞 **Report Issues**: https://github.com/runash-ai/runash-chat/issues
- 🚀 **Contribute**: We welcome fine-tuning datasets, translations, and optimizations!
---
## Acknowledgments
We gratefully acknowledge the work of:
- Mistral AI for Mistral-7B
- Meta for Llama 3
- The LibreChat community for inspiring accessible AI
- Hugging Face for open model hosting and tools
---
*RunAsh-Chat — Because freedom shouldn’t come with a price tag.*
*Made with ❤️ by the RunAsh AI Collective*
---