wwe180 committed
Commit 0e90f01 · verified · 1 Parent(s): cd7ff52

Update README.md

Files changed (1)
  1. README.md +29 -21

README.md CHANGED
@@ -1,15 +1,18 @@
---
base_model:
- - Sao10K/L3-8B-Stheno-v3.1
- - NousResearch/Meta-Llama-3-8B-Instruct
- - openchat/openchat-3.6-8b-20240522
- - hfl/llama-3-chinese-8b-instruct-v2-lora
library_name: transformers
tags:
- mergekit
- merge
-
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

@@ -25,19 +28,24 @@ The following models were included in the merge:
* [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1)
* [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522) + [hfl/llama-3-chinese-8b-instruct-v2-lora](https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v2-lora)
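
The `+` in the second entry is mergekit's notation for applying a LoRA adapter to a base model before that model enters the merge. As a rough illustration of what that step amounts to, here is the equivalent operation with `peft`'s merge utilities (an illustration of the idea, not mergekit's actual internals):

```python
# Illustrative sketch: fold the Chinese-instruct LoRA into the OpenChat
# weights first, which is roughly what mergekit's "base+lora" notation does.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("openchat/openchat-3.6-8b-20240522")
lora_applied = PeftModel.from_pretrained(base, "hfl/llama-3-chinese-8b-instruct-v2-lora")
merged = lora_applied.merge_and_unload()  # plain Transformers model, LoRA baked in
```
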
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- slices:
-   - sources:
-       - model: "Sao10K/L3-8B-Stheno-v3.1"
-         layer_range: [0, 22]
-   - sources:
-       - model: "openchat/openchat-3.6-8b-20240522+hfl/llama-3-chinese-8b-instruct-v2-lora"
-         layer_range: [10, 32]
- merge_method: passthrough
- base_model: "NousResearch/Meta-Llama-3-8B-Instruct"
- dtype: bfloat16
- ```
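
For orientation: this passthrough config stacks the first 22 layers of Stheno (`layer_range: [0, 22]`) on the last 22 layers of the OpenChat+LoRA model (`layer_range: [10, 32]`), giving 44 layers against Llama-3-8B's 32, which is presumably where the 10B in the model name comes from. A minimal, notebook-style sketch of reproducing the merge, assuming the YAML above is saved as `config.yaml` (the output path here is illustrative):

```python
# Install mergekit, then run the passthrough merge described by config.yaml.
# --copy-tokenizer copies the base model's tokenizer into the output directory.
!pip install -qU mergekit
!mergekit-yaml config.yaml ./Llama3-10B-lingyang-v1 --copy-tokenizer
```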

---
base_model:
+ - wwe180/Llama3-10B-lingyang-v1
library_name: transformers
tags:
- mergekit
- merge
+ license:
+ - other
---
+
+ # The model is experimental, so results cannot be guaranteed.
+
+ After simple testing, it performs well and is stronger than Llama-3-8B!
+
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

The following models were included in the merge:

* [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1)
* [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522) + [hfl/llama-3-chinese-8b-instruct-v2-lora](https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v2-lora)

+ ## 💻 Usage
+
+ ```python
+ # Notebook-style install of the runtime dependencies
+ !pip install -qU transformers accelerate
+
+ from transformers import AutoTokenizer
+ import transformers
+ import torch
+
+ # Load this model by its full Hub id
+ model = "wwe180/Llama3-10B-lingyang-v1"
+ messages = [{"role": "user", "content": "What is a large language model?"}]
+
+ # Format the chat with the model's chat template
+ tokenizer = AutoTokenizer.from_pretrained(model)
+ prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model,
+     torch_dtype=torch.float16,
+     device_map="auto",
+ )
+
+ # Generate a completion from the formatted prompt (sampling settings are illustrative)
+ outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
+ print(outputs[0]["generated_text"])
+ ```
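
Note: by default the text-generation pipeline returns the prompt together with the completion; pass `return_full_text=False` in the pipeline call to get only the model's reply.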