---
title: RunAsh Chat
emoji: πŸͺΆ
colorFrom: pink
colorTo: blue
sdk: docker
pinned: true
license: mit
app_port: 3080
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/6380f0cd471a4550ff258598/ASSYdCAc5M0h_eiuoZNd3.png
short_description: RunAsh-Chat - All-In-One AI Conversations
---

# πŸš€ RunAsh-Chat: A LibreChat-Inspired Open-Source Conversational AI

![RunAsh-Chat Logo](https://huggingface.co/datasets/runash-ai/logo/resolve/main/runash-chat-logo.png)

*Built for freedom, powered by openness.*

> *β€œLibreChat, but better β€” faster, smarter, and fully yours.”*

---

## Model Description

**RunAsh-Chat** is an open-source, instruction-tuned large language model designed to replicate and enhance the conversational capabilities of the popular [LibreChat](https://github.com/LibreChat/LibreChat) ecosystem β€” while introducing improved reasoning, safety, and multi-turn dialogue handling.

Built upon the **Mistral-7B** or **Llama-3-8B** base architecture (depending on variant), RunAsh-Chat is fine-tuned on a curated dataset of high-quality, human-aligned conversations and code-assistance prompts, with ethical safety filtering applied to the training data. It is optimized for use in self-hosted AI chat interfaces such as LibreChat, Ollama, Text Generation WebUI, and local LLM APIs.

Unlike many closed or commercial alternatives, **RunAsh-Chat is 100% free to use, modify, and deploy** β€” even commercially β€” under the Apache 2.0 license.

### Key Features

βœ… **LibreChat-Ready**: Seamless drop-in replacement for models used in LibreChat deployments
βœ… **Multi-Turn Context**: Excellent memory of conversation history (up to 8K tokens)
βœ… **Code & Math Ready**: Strong performance on programming, logic, and quantitative reasoning
βœ… **Safety-Enhanced**: Built-in moderation to avoid harmful, biased, or toxic outputs
βœ… **Lightweight & Fast**: Optimized for CPU/GPU inference with GGUF, AWQ, and GPTQ support
βœ… **Multilingual**: Supports English, Spanish, French, German, Portuguese, Russian, Chinese, and more

---

## Model Variants

| Variant | Base Model | Quantization | Context Length | Link |
|---------|------------|--------------|----------------|------|
| `RunAsh-Chat-v1.0-Mistral-7B` | Mistral-7B-v0.1 | Q4_K_M GGUF | 8K | [πŸ€— Hugging Face](https://huggingface.co/runash-ai/RunAsh-Chat-v1.0-Mistral-7B) |
| `RunAsh-Chat-v1.0-Llama3-8B` | Llama-3-8B-Instruct | Q4_K_S GGUF | 8K | [πŸ€— Hugging Face](https://huggingface.co/runash-ai/RunAsh-Chat-v1.0-Llama3-8B) |
| `RunAsh-Chat-v1.0-Mistral-7B-AWQ` | Mistral-7B-v0.1 | AWQ (4-bit) | 8K | [πŸ€— Hugging Face](https://huggingface.co/runash-ai/RunAsh-Chat-v1.0-Mistral-7B-AWQ) |

> πŸ’‘ **Tip**: Use GGUF variants for CPU/Apple Silicon; AWQ/GPTQ for NVIDIA GPUs.
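
For a quick local test of a GGUF variant, `llama-cpp-python` (installable via `pip install llama-cpp-python`) runs well on CPU and Apple Silicon. The sketch below is illustrative only: the local file path, context size, and generation settings are assumptions, and the filename should match whichever quantized file you actually download.

```python
# Minimal sketch: running a GGUF quant of RunAsh-Chat locally with llama-cpp-python.
# Assumes the quantized file has already been downloaded; adjust the path/filename as needed.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/RunAsh-Chat-v1.0-Mistral-7B.Q4_K_M.gguf",  # local GGUF file (assumed location)
    n_ctx=8192,       # match the model's 8K context window
    n_gpu_layers=0,   # CPU-only; set to -1 to offload all layers on a GPU-enabled llama.cpp build
)

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are RunAsh-Chat, a helpful assistant."},
        {"role": "user", "content": "Summarize the difference between GGUF and AWQ in two sentences."},
    ],
    max_tokens=256,
    temperature=0.7,
)

# OpenAI-style response dict: the generated text lives under choices[0].message.content
print(result["choices"][0]["message"]["content"])
```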
---

## Usage Examples

### Using with Hugging Face `transformers`

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "runash-ai/RunAsh-Chat-v1.0-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto"
)

messages = [
    {"role": "system", "content": "You are RunAsh-Chat, a helpful assistant."},
    {"role": "user", "content": "Explain quantum computing in simple terms."}
]

# Build the prompt with the chat template and append the assistant turn marker
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, temperature=0.7, do_sample=True)

# Decode only the newly generated tokens, skipping the echoed prompt
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```

### Using with Ollama

```bash
ollama pull runash-chat:7b
ollama run runash-chat:7b "What's the capital of Canada?"
```

### Using with LibreChat

1. Download the GGUF model file (e.g., `RunAsh-Chat-v1.0-Mistral-7B.Q4_K_M.gguf`)
2. Place it in your `models/` folder
3. In `config.yml`:

```yaml
model: "RunAsh-Chat-v1.0-Mistral-7B"
provider: "ollama"  # or "local"
```

---

## Training Data & Fine-Tuning

RunAsh-Chat was fine-tuned using a hybrid dataset including:

- **Alpaca** and **Alpaca-CoT** datasets
- **OpenAssistant** conversations
- **Self-Instruct** and **Dolly** data
- **Human-curated chat logs** from open-source AI communities
- **Ethical filtering**: Removed toxic, biased, or harmful examples using rule-based and model-based moderation

Fine-tuning was performed with **LoRA**, using **QLoRA** for memory efficiency, on 4Γ— A100 40GB GPUs over 3 epochs.

---

## Limitations & Ethical Considerations

⚠️ **Not a replacement for human judgment** β€” always validate outputs for critical applications.
⚠️ **May hallucinate** facts, especially in niche domains β€” verify with trusted sources.
⚠️ **Bias mitigation is ongoing** β€” while trained for fairness, residual biases may persist.
⚠️ **Not designed for medical/legal advice** β€” consult professionals.

RunAsh-Chat is **not** a general-purpose AI agent. It is intended primarily for **educational, personal, and research use**, though commercial use is permitted under Apache 2.0.

---

## License

This model is released under the **Apache License 2.0**, the same license as the Mistral-7B base model. (Llama 3-based variants remain subject to Meta's Llama 3 Community License.) You are free to:

- Use it commercially
- Modify and redistribute
- Build derivative models

**Attribution is appreciated but not required.**

> *β€œLibreChat inspired us. We built something better β€” and gave it back to the community.”*

---

## Citation

If you use RunAsh-Chat in your research or project, please cite:

```bibtex
@software{runash_chat_2024,
  author    = {RunAsh AI Collective},
  title     = {RunAsh-Chat: A LibreChat-Inspired Open-Source Chat Model},
  year      = {2024},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/runash-ai/RunAsh-Chat-v1.0-Mistral-7B}
}
```

---

## Community & Support

πŸ”— **GitHub**: https://github.com/runash-ai/runash-chat
πŸ’¬ **Discord**: https://discord.gg/runash-ai
🐞 **Report Issues**: https://github.com/runash-ai/runash-chat/issues
πŸš€ **Contribute**: We welcome fine-tuning datasets, translations, and optimizations!

---

## Acknowledgments

We gratefully acknowledge the work of:

- Mistral AI for Mistral-7B
- Meta for Llama 3
- The LibreChat community for inspiring accessible AI
- Hugging Face for open model hosting and tools

---

*RunAsh-Chat β€” Because freedom shouldn’t come with a price tag.*

*Made with ❀️ by the RunAsh AI Collective*

---