Upload 5 files
- README.md +77 -1
- config.json +25 -0
- special_tokens_map.json +7 -0
- tokenizer.json +0 -0
- tokenizer_config.json +10 -0
README.md CHANGED
@@ -1,3 +1,79 @@
 ---
-license:
+license:
+- creativeml-openrail-m
+language:
+- en
+tags:
+- generated_from_trainer
+- text generation
+- pytorch
+- causal-lm
+metrics:
+- accuracy
+model-index:
+- name: openchatgpt-neox-r1
+  results: []
 ---
+
+# openchatgpt-neox-r1
+
+This model is a fine-tuned version of [EleutherAI/pythia-125m-deduped](https://huggingface.co/EleutherAI/pythia-125m-deduped) on the openchatgpt safe-r1 dataset.
+It achieves the following results on the evaluation set:
+- Loss: 1.3585
+- Accuracy: 0.9169
+
+## Model description
+
+A finetune based on the inner workings of ChatGPT. I won't elaborate on that; you must have at least a faint idea of how the prompt is constructed for it to spit out anything that isn't a garbled mess.
+
+This is effectively a schizophrenic idea that saw the light of day: practically a collab of 3 students in a virtual shed.
+
+BTW, Pythia is so much better omg.
+
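For reference, a minimal loading/generation sketch with `transformers`; the local path is a placeholder for wherever this repo is cloned, and the prompt string is purely illustrative, since the real prompt format is deliberately not documented here:

```python
# Minimal loading/generation sketch; "./openchatgpt-neox-r1" is a placeholder
# path to a local clone of this repo, and the prompt is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "./openchatgpt-neox-r1"
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path)  # GPTNeoXForCausalLM

inputs = tokenizer("Hello, who are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```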
+## Intended uses & limitations
+
+Intended uses & limitations fall in line with OpenAI's. The dataset used consists of safe texts (i.e. not highly sexual/erotica-type material). An NSFW version of the dataset is not planned at the moment.
+
+Keep in mind that this is the 125m version of GPT-NeoX (Pythia). My 1050 Ti Mobile couldn't even handle that without gradient accumulation, and 8BitAdam was also used. If anyone knows how to effectively finetune larger models on free Colabs, feel free to let me know. The Pile tokenizer also has one downside compared to the native GPT-2/3 one: `Assistant` is not 1 token, but 2.
+
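To see that tokenizer quirk for yourself, a quick check (it only assumes both tokenizers can be pulled from the Hub):

```python
# Compare how the Pile/NeoX tokenizer and the GPT-2 tokenizer split "Assistant".
from transformers import AutoTokenizer

neox_tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-125m-deduped")
gpt2_tok = AutoTokenizer.from_pretrained("gpt2")

for name, tok in (("NeoX/Pile", neox_tok), ("GPT-2", gpt2_tok)):
    ids = tok.encode("Assistant")
    print(name, len(ids), tok.convert_ids_to_tokens(ids))
```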
+## Training and evaluation data
+
+Data was split in a ratio of 95%/5%. Preprocessing included removing mentions of OpenAI wherever they were not deemed appropriate (GPT-2 has one of the appropriate mentions). The whole dataset consists of just shy of 3k input-output pairs. One input can have multiple outputs (read as: one message has multiple variants of an answer). Well under 1% (3 total) are curated lines (i.e. a huge mistake was spotted that needed correcting). At least 3 lines (well under 1% of the line count, but more by byte count) are broken.
+
+There is a heavy bias towards IT.
+
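A sketch of how such a 95%/5% split can be made with `datasets`; the JSON file name is a placeholder, since the safe-r1 dataset itself is not published:

```python
# Illustrative 95%/5% split with the `datasets` library; the data file name is
# a placeholder since the openchatgpt safe-r1 dataset is not published.
from datasets import load_dataset

ds = load_dataset("json", data_files="openchatgpt-safe-r1.jsonl", split="train")
splits = ds.train_test_split(test_size=0.05, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
print(len(train_ds), len(eval_ds))
```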
+## Training procedure
+
+Input and output were straight-up concatenated, due to the nature of how ChatGPT works.
+
+This time the dataset was batched into groups of 2048 tokens, meaning I got 628/31 groups for training/eval. Maybe that's what made the difference. EOS was also used after the final separator.
+
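A rough sketch of that packing step. Only the `<|STK_SP|>` separator and `<|endoftext|>` EOS come from this repo's tokenizer files; the exact per-pair layout is an assumption:

```python
# Rough sketch of the concatenation + 2048-token packing described above.
# Only the <|STK_SP|> separator and <|endoftext|> EOS come from this repo's
# tokenizer files; the exact per-pair layout is an assumption.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-125m-deduped")
tok.add_special_tokens({"sep_token": "<|STK_SP|>"})  # embeddings must cover this (vocab_size 50278 in config.json)
BLOCK = 2048  # max_position_embeddings from config.json

def pack(pairs):
    """Concatenate (input, output) pairs with separators, append EOS after the
    final separator, then chunk the token stream into fixed 2048-token groups."""
    ids = []
    for prompt, answer in pairs:
        text = prompt + tok.sep_token + answer + tok.sep_token + tok.eos_token
        ids.extend(tok(text)["input_ids"])
    return [ids[i:i + BLOCK] for i in range(0, len(ids), BLOCK)]
```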
+### Training hyperparameters
+
+The following hyperparameters were used during training:
+- learning_rate: 5e-05
+- train_batch_size: 1
+- eval_batch_size: 1
+- seed: 42
+- gradient_accumulation_steps: 2
+- total_train_batch_size: 2
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- lr_scheduler_type: linear
+- num_epochs: 3.0
+- mixed_precision_training: Native AMP
+
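These map roughly onto `transformers`' `TrainingArguments`; a sketch with `output_dir` as a placeholder (the 8BitAdam optimizer mentioned earlier is not captured here):

```python
# Approximate TrainingArguments for the hyperparameters listed above
# (a sketch; output_dir is a placeholder, 8BitAdam would be supplied separately).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="openchatgpt-neox-r1",
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,  # total train batch size of 2
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "Native AMP" mixed precision
)
```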
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss | Accuracy |
+|:-------------:|:-----:|:----:|:---------------:|:--------:|
+| 1.1311        | 1.0   | 1377 | 1.3116          | 0.9127   |
+| 0.6691        | 2.0   | 2754 | 1.2978          | 0.9160   |
+| 0.3463        | 3.0   | 4131 | 1.3585          | 0.9169   |
+
+
+### Framework versions
+
+- Transformers 4.25.1
+- Pytorch 1.13.1+cu116
+- Datasets 2.8.0
+- Tokenizers 0.13.2
config.json ADDED
@@ -0,0 +1,25 @@
+{
+  "_name_or_path": "EleutherAI/pythia-125m-deduped",
+  "architectures": [
+    "GPTNeoXForCausalLM"
+  ],
+  "bos_token_id": 0,
+  "eos_token_id": 0,
+  "hidden_act": "gelu",
+  "hidden_size": 768,
+  "initializer_range": 0.02,
+  "intermediate_size": 3072,
+  "layer_norm_eps": 1e-05,
+  "max_position_embeddings": 2048,
+  "model_type": "gpt_neox",
+  "num_attention_heads": 12,
+  "num_hidden_layers": 12,
+  "rotary_emb_base": 10000,
+  "rotary_pct": 0.25,
+  "tie_word_embeddings": false,
+  "torch_dtype": "float32",
+  "transformers_version": "4.25.1",
+  "use_cache": false,
+  "use_parallel_residual": true,
+  "vocab_size": 50278
+}
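A quick sanity check of the numbers in this config (a sketch; it assumes this repo's files sit in the current working directory):

```python
# Sanity-check config.json against the architecture described above
# (assumes this repo's files are in the current working directory).
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained(".")
print(cfg.model_type)                               # gpt_neox
print(cfg.num_hidden_layers, cfg.hidden_size)       # 12 768
print(cfg.max_position_embeddings, cfg.vocab_size)  # 2048 50278
```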
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+{
+  "bos_token": "<|endoftext|>",
+  "eos_token": "<|endoftext|>",
+  "pad_token": "<|endoftext|>",
+  "sep_token": "<|STK_SP|>",
+  "unk_token": "<|endoftext|>"
+}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff.
tokenizer_config.json ADDED
@@ -0,0 +1,10 @@
+{
+  "add_prefix_space": false,
+  "bos_token": "<|endoftext|>",
+  "eos_token": "<|endoftext|>",
+  "model_max_length": 1000000000000000019884624838656,
+  "name_or_path": "EleutherAI/pythia-125m-deduped",
+  "special_tokens_map_file": "/fsx/home-hailey/.cache/huggingface/hub/models--EleutherAI--gpt-neox-20b/snapshots/3523781c8df75f7741687a4284f6f70e1afa12f4/special_tokens_map.json",
+  "tokenizer_class": "GPTNeoXTokenizer",
+  "unk_token": "<|endoftext|>"
+}
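Loading the tokenizer from these files should expose the custom separator defined in special_tokens_map.json; a small check, assuming the repo files sit in the current working directory:

```python
# Check the special tokens defined above after loading the tokenizer
# (assumes this repo's files are in the current working directory).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(".")
print(tok.sep_token)     # <|STK_SP|>
print(tok.eos_token)     # <|endoftext|>
print(tok.eos_token_id)  # 0, matching eos_token_id in config.json
```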