amphora committed on
Commit: 6d8281a
1 Parent(s): a10d621

Update README.md

Files changed (1):
  1. README.md +37 -1
README.md CHANGED
---
widget:
 - text: " Chinese stocks’ plunge on Monday over fears about China’s new leadership team may be misguided, consulting firm Teneo said. Chinese stocks in Hong Kong and New York, especially internet tech giants such as [TGT], dropped on the first trading day after Chinese President Xi Jinping cemented his firm grip on power with a new core leadership team filled with his loyalists."
 - text: "[TGT] stocks dropped 42% while Samsung rallied."
 - text: "Tesla stocks dropped 42% while [TGT] rallied."

tags:
- t5
license: apache-2.0
---

## Model Description

FinABSA is a T5-Large model trained for Aspect-Based Sentiment Analysis (ABSA) tasks using [SEntFiN 1.0](https://asistdl.onlinelibrary.wiley.com/doi/10.1002/asi.24634?af=R). Unlike traditional sentiment analysis models, which predict a single sentiment label for each sentence, FinABSA has been trained to disambiguate sentences containing multiple aspects. By replacing the target aspect with a [TGT] token, the model predicts the sentiment with respect to that aspect.
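For example, to score the sentiment toward Samsung in a sentence that also mentions Tesla, you replace "Samsung" with [TGT] before running the model. A minimal sketch of that masking step (the `mask_aspect` helper is illustrative, not part of this repository):

```python
# Illustrative preprocessing: replace one aspect mention with the [TGT]
# token before passing the sentence to FinABSA.
def mask_aspect(sentence: str, aspect: str) -> str:
    return sentence.replace(aspect, "[TGT]")

print(mask_aspect("Tesla stocks dropped 42% while Samsung rallied.", "Samsung"))
# Tesla stocks dropped 42% while [TGT] rallied.
```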

## How to use

You can use this model directly with the `AutoModelForSeq2SeqLM` class.

```python
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("amphora/FinABSA")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("amphora/FinABSA")

>>> input_str = "[TGT] stocks dropped 42% while Samsung rallied."
>>> inputs = tokenizer(input_str, return_tensors="pt")
>>> output = model.generate(**inputs, max_length=20)
>>> print(tokenizer.decode(output[0], skip_special_tokens=True))
The sentiment for [TGT] in the given sentence is NEGATIVE.

>>> input_str = "Tesla stocks dropped 42% while [TGT] rallied."
>>> inputs = tokenizer(input_str, return_tensors="pt")
>>> output = model.generate(**inputs, max_length=20)
>>> print(tokenizer.decode(output[0], skip_special_tokens=True))
The sentiment for [TGT] in the given sentence is POSITIVE.
```
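If you only need the polarity label rather than the full generated sentence, one simple option (assuming, as in the examples above, that the label is the last word of the output) is to parse it out:

```python
# Illustrative post-processing: pull the polarity label (e.g. POSITIVE,
# NEGATIVE) out of the generated sentence, assuming it is the final word.
def extract_label(generated: str) -> str:
    return generated.strip().rstrip(".").split()[-1]

print(extract_label("The sentiment for [TGT] in the given sentence is NEGATIVE."))
# NEGATIVE
```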

## Evaluation Results

Using a test split arbitrarily extracted from [SEntFiN 1.0](https://asistdl.onlinelibrary.wiley.com/doi/10.1002/asi.24634?af=R), the model scores an average accuracy of 87%.
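A minimal sketch of how such an accuracy figure could be computed, assuming a held-out list of (masked sentence, gold label) pairs; the `test_set` entries below are placeholders, not the actual SEntFiN split:

```python
# Illustrative evaluation loop: compare the label parsed from each generated
# sentence against the gold label and report overall accuracy.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("amphora/FinABSA")
model = AutoModelForSeq2SeqLM.from_pretrained("amphora/FinABSA")

# Placeholder examples: (sentence with the target aspect masked as [TGT], gold label)
test_set = [
    ("[TGT] stocks dropped 42% while Samsung rallied.", "NEGATIVE"),
    ("Tesla stocks dropped 42% while [TGT] rallied.", "POSITIVE"),
]

correct = 0
for sentence, gold in test_set:
    inputs = tokenizer(sentence, return_tensors="pt")
    output = model.generate(**inputs, max_length=20)
    generated = tokenizer.decode(output[0], skip_special_tokens=True)
    predicted = generated.strip().rstrip(".").split()[-1]  # label is the last word
    correct += int(predicted == gold)

print(f"Accuracy: {correct / len(test_set):.2%}")
```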