---
language:
- or
---
# Model Card for odiagenAI-model-v0
Licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
## Model description
odiagenAI-model-v0 is based on LLaMA-7b and fine-tuned on 52k Odia instructions translated from the open-source Stanford Alpaca dataset, giving it good Odia instruction-understanding and response-generation capabilities.
The code for Odia data generation and other detailed information can be found in our GitHub project repository: https://github.com/shantipriyap/OdiaGenAI.
This repo contains a low-rank (LoRA) adapter for LLaMA-7b trained on the translated Stanford Alpaca dataset.
## Training hyper-parameters
| Parameter | Value |
| ------ | ------ |
| Batch size | 128 |
| Learning rate | 3e-4 |
| Epochs | 2 |
| Cutoff length | 256 |
| Weight decay | 0.001 |
| Warmup rate | 0.1 |
| LR scheduler | linear |
| LoRA r | 16 |
| LoRA target modules | q_proj, k_proj, v_proj, o_proj |
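For reference, here is a minimal sketch of how these hyper-parameters might map onto a PEFT fine-tuning setup. The exact training script lives in the GitHub repository; the `lora_alpha` and `lora_dropout` values and the batch-size split below are assumptions, not values from the table.
```python
from peft import LoraConfig, get_peft_model
from transformers import LlamaForCausalLM, TrainingArguments

# LoRA configuration matching the table above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,     # assumption: not listed in the table
    lora_dropout=0.05, # assumption: not listed in the table
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)

base = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = get_peft_model(base, lora_config)

# Optimiser and schedule settings matching the table above.
training_args = TrainingArguments(
    output_dir="odiagenai-lora",
    per_device_train_batch_size=8,   # assumption: 8 x 16 accumulation
    gradient_accumulation_steps=16,  # = effective batch size 128
    learning_rate=3e-4,
    num_train_epochs=2,
    weight_decay=0.001,
    warmup_ratio=0.1,
    lr_scheduler_type="linear",
)
```
Inputs would be tokenized with `max_length=256` to match the cutoff length in the table.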
The model can be loaded with `AutoModelForCausalLM` and the LoRA adapter applied with `PeftModel`:
```python
import torch
from peft import PeftModel
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    GenerationConfig,
)

base_model_path = "meta-llama/Llama-2-7b-hf"
adapter_path = "OdiaGenAI/odiagenAI-model-v0"

tokenizer = AutoTokenizer.from_pretrained(base_model_path, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token

# Load the base model in 4-bit to reduce memory usage.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_path,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)

# Apply the low-rank adapter on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_path)

# Odia instruction: "Tell me something about India".
instruction = "ଭାରତ ବିଷୟରେ କିଛି କୁହନ୍ତୁ"
device = "cuda" if torch.cuda.is_available() else "cpu"
input_ids = tokenizer(instruction, return_tensors="pt").input_ids.to(device)

generation_config = GenerationConfig(
    temperature=0.1,
    top_p=0.75,
    top_k=40,
    num_beams=4,
)

with torch.no_grad():
    generation_output = model.generate(
        input_ids=input_ids,
        generation_config=generation_config,
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=128,
    )
output = tokenizer.decode(generation_output.sequences[0])
print(output)
```
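Since the adapter was fit on Alpaca-format data, wrapping the raw instruction in the Alpaca prompt template may yield better responses. A minimal sketch, assuming the standard Stanford Alpaca template was used during training (check the GitHub repository for the exact format):
```python
# Standard Stanford Alpaca prompt template (assumption: matches training format).
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(instruction=instruction)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
```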
Instructions for running the model can be found at https://github.com/shantipriyap/OdiaGenAI.