nicolay-r committed
Commit 4af8f0c · verified · 1 Parent(s): 437a611

Update README.md

Files changed (1):
  1. README.md +55 -71
README.md CHANGED
@@ -39,9 +39,62 @@ This is the model card of a 🤗 transformers model that has been pushed on the

### Direct Use

- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
+ Please run the following example, **which relies purely on transformers and torch**, on Google Colab or here:
+
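+ (Presumably the only extra setup needed is `pip install transformers torch`; the card does not pin versions.)
+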
+ 1. Set up the `ask` method for inferring FlanT5 as follows:
+ ```python
+ def ask(prompt):
+     # Tokenize the prompt and move the tensors onto the model's device.
+     inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
+     inputs.to(device)
+     # Generate up to 320 tokens and return the single decoded string.
+     output = model.generate(**inputs, max_length=320, temperature=1)
+     return tokenizer.batch_decode(output, skip_special_tokens=True)[0]
+ ```
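+
+ Note that with the settings above the helper decodes greedily, since `temperature` has no effect unless sampling is enabled. A sampling variant would presumably look as follows (hypothetical, not part of the original card):
+ ```python
+ def ask_sampled(prompt, temperature=0.7):
+     # Variant of ask() that samples from the output distribution instead of decoding greedily.
+     inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)
+     inputs.to(device)
+     output = model.generate(**inputs, max_length=320, do_sample=True, temperature=temperature)
+     return tokenizer.batch_decode(output, skip_special_tokens=True)[0]
+ ```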
+
+ 2. Set up the chain and the expected output labels:
+ ```python
+ def emotion_extraction_chain(context, target):
+     # Set up the expected emotion labels.
+     labels_list = ["anger", "disgust", "fear", "joy", "sadness", "surprise", "neutral"]
+     # Set up the Chain-of-Thought: each step appends the previous prompt and answer.
+     step1 = f"Given the conversation {context}, which text spans are possibly causes emotion on {target}?"
+     span = ask(step1)
+     step2 = f"{step1}. The mentioned text spans are about {span}. Based on the common sense, what " + f"is the implicit opinion towards the mentioned text spans that causes emotion on {target}, and why?"
+     opinion = ask(step2)
+     step3 = f"{step2}. The opinion towards the text spans that causes emotion on {target} is {opinion}. " + f"Based on such opinion, what is the emotion state of {target}?"
+     emotion_state = ask(step3)
+     step4 = f"{step3}. The emotion state is {emotion_state}. Based on these contexts, summarize and return the emotion cause only. " + "Choose from: {}.".format(", ".join(labels_list))
+     # Return the final response.
+     return ask(step4)
+ ```
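+
+ Because every step embeds the previous prompt and answer verbatim, the intermediate hops are easy to inspect. A minimal sketch (the `ask_logged` helper is hypothetical, not part of the original card):
+ ```python
+ def ask_logged(prompt):
+     # Drop-in replacement for ask() that prints each prompt/answer hop of the chain.
+     answer = ask(prompt)
+     print(f">>> {prompt}\n<<< {answer}\n")
+     return answer
+ ```
+ Substituting `ask_logged` for `ask` inside `emotion_extraction_chain` exposes the full chain without altering its result.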
+
+ 3. Initialize `device`, `model` and `tokenizer` as follows:
+ ```python
+ from transformers import AutoTokenizer, T5ForConditionalGeneration
+
+ model_path = "nicolay-r/flan-t5-emotion-cause-thor-base"
+ device = "cuda:0"
+
+ model = T5ForConditionalGeneration.from_pretrained(model_path)
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
+ model.to(device)
+ ```
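+
+ If a GPU may be unavailable, `device` can presumably be selected dynamically rather than hard-coded (a small optional variant):
+ ```python
+ import torch
+
+ # Fall back to CPU when no CUDA device is available; generation is slower but functional.
+ device = "cuda:0" if torch.cuda.is_available() else "cpu"
+ model.to(device)
+ ```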
+
+ 4. Apply it!
+ ```python
+ # Set up the history context (conv_turn_1).
+ conv_turn_1 = "John: ohh you made up!"
+ # Set up the utterance.
+ conv_turn_2 = "Jake: yaeh, I could not be mad at him for too long!"
+ context = conv_turn_1 + conv_turn_2
+ # The target is the whole conversational turn mentioned in the context.
+ target = conv_turn_2
+ flant5_response = emotion_extraction_chain(context, target)
+ print(f"Emotion state of the speaker of `{target}` is: {flant5_response}")
+ ```
+
+ The response is as follows:
+ >>> Emotion state of the speaker of `Jake: yaeh, I could not be mad at him for too long!` is: **anger**
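+
+ To analyze every utterance of a longer conversation, the same chain can be applied once per turn (a hypothetical extension of the example above):
+ ```python
+ turns = [conv_turn_1, conv_turn_2]
+ context = "".join(turns)
+ # Run the chain once per conversational turn, treating each turn as the target.
+ for turn in turns:
+     state = emotion_extraction_chain(context, turn)
+     print(f"{turn} -> {state}")
+ ```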

### Downstream Use [optional]

@@ -128,74 +181,5 @@ Use the code below to get started with the model.

[More Information Needed]

- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
