---
license: mit
tags:
- unsloth
- Uncensored
- text-generation-inference
- transformers
- llama
- trl
datasets:
- openerotica/mixed-rp
- kingbri/PIPPA-shareGPT
language:
- en
base_model:
- N-Bot-Int/OpenRP3B-Llama3.2
new_version: N-Bot-Int/OpenElla3-Llama3.2B
pipeline_tag: text-generation
library_name: peft
---
<a href="https://ibb.co/GvDjFcVp"><img src="https://i.ibb.co/pvTpnJ3w/image.webp" alt="image" border="0"></a>

# Llama3.2 - OpenElla3A

- OpenElla is a Llama3.2 **3B**-parameter model
  fine-tuned for roleplaying purposes, even with its limited parameter count.
  This is achieved through a series of fine-tunes on two datasets with different
  weights, aiming to counter Llama3.2's generalist approach and to specialize it in
  roleplaying and acting.

- OpenElla3A excels at producing **RAW** and **UNCENSORED** output,
  however it **LACKS PROPER OBEDIENCE TRAINING**. Due to this, OpenElla3 model **A**
  is intended for training purposes only. If you seek to train or distill a Llama model to
  force it to generate uncensored content, please do so with care and ethical consideration.

- OpenElla3A is
  - **Developed by:** N-Bot-Int
  - **License:** apache-2.0
  - **Parent model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
  - **Sequentially trained from:** N-Bot-Int/OpenRP3B-Llama3.2
  - **Datasets combined using:** Mosher-R1 (proprietary software)

- OpenElla3A is **NOT YET RANKED WITH ANY METRICS**
- Feel free to support by emailing me: [nexus.networkinteractives@gmail.com](mailto:nexus.networkinteractives@gmail.com)
- # Notice
  - **For a good experience, please use**
    - temperature = 1.5, min_p = 0.1, and max_new_tokens = 128 (see the example below)
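A minimal generation sketch using those settings, assuming this repo hosts a PEFT (LoRA) adapter (per `library_name: peft`) on top of the N-Bot-Int/OpenRP3B-Llama3.2 base; the adapter repo id below is a placeholder, so substitute this model's actual id. `min_p` sampling requires a recent `transformers` release.

```python
# Sketch: load the PEFT adapter on its base model, then generate with the
# settings recommended in the Notice above. Assumes `transformers`, `peft`,
# and `accelerate` are installed; repo ids may need adjusting.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "N-Bot-Int/OpenRP3B-Llama3.2"       # base_model from the card
adapter_id = "N-Bot-Int/OpenElla3A-Llama3.2"  # placeholder adapter id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)

messages = [{"role": "user", "content": "Stay in character as a weary innkeeper."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings from the Notice section.
outputs = model.generate(
    input_ids,
    do_sample=True,
    temperature=1.5,
    min_p=0.1,
    max_new_tokens=128,
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```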
- # Detail card:
  - Parameters
    - 3 billion parameters
    - (Please check with your GPU vendor whether you can run 3B models)

  - Training (a sketch of this schedule follows this section)
    - 500 steps
      - Mixed-RP startup dataset
    - 200 steps
      - PIPPA-ShareGPT, for bypassing Llama's censorship
    - 150 steps (re-fining)
      - PIPPA-ShareGPT, to further increase the weight of uncensored prompts

  - Finetuning tool:
    - Unsloth AI
      - This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
        [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

  - Fine-tuned using:
    - Google Colab
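A sketch of the sequential schedule above (500 steps on mixed-rp, then 200 and 150 steps on PIPPA-shareGPT) with Unsloth and TRL. Only the step counts, datasets, and parent model come from this card; the LoRA rank, batch size, learning rate, and the `text` column name are assumptions, and shareGPT-style data may need converting to plain text first.

```python
# Hedged sketch of the sequential fine-tuning schedule from the Detail card.
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig
from datasets import load_dataset

# Parent model named in the card, loaded in 4-bit.
model, tokenizer = FastLanguageModel.from_pretrained(
    "unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # assumed LoRA rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

def train_stage(dataset_id: str, max_steps: int) -> None:
    """Run one stage of the sequential schedule on a single dataset."""
    dataset = load_dataset(dataset_id, split="train")
    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        args=SFTConfig(
            dataset_text_field="text",  # assumed column; adapt to the real schema
            max_steps=max_steps,
            per_device_train_batch_size=2,
            gradient_accumulation_steps=4,
            learning_rate=2e-4,
            output_dir="outputs",
        ),
    )
    trainer.train()

# Schedule from the Detail card: startup, censorship bypass, re-fining.
train_stage("openerotica/mixed-rp", 500)
train_stage("kingbri/PIPPA-shareGPT", 200)
train_stage("kingbri/PIPPA-shareGPT", 150)
```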