---
license: mit
datasets:
- oliverwang15/fingpt_chatglm2_sentiment_instruction_lora_ft_dataset
language:
- en
metrics:
- accuracy
- f1
---

## [FinGPT_ChatGLM2_Sentiment_Instruction_LoRA_FT (FinGPT v3)](https://github.com/AI4Finance-Foundation/FinGPT/tree/master/fingpt/FinGPT-v3) is an LLM finetuned with the LoRA method on news and tweet sentiment analysis datasets, and it achieves the best scores on most of the financial sentiment analysis benchmarks.

## Ⅰ. Try our model

```python
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the LoRA adapter
base_model = "THUDM/chatglm2-6b"
peft_model = "oliverwang15/FinGPT_ChatGLM2_Sentiment_Instruction_LoRA_FT"
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
model = AutoModel.from_pretrained(base_model, trust_remote_code=True, device_map="auto")
model = PeftModel.from_pretrained(model, peft_model)

# Make prompts
prompt = [
'''Instruction: What is the sentiment of this news? Please choose an answer from {negative/neutral/positive}
Input: FINANCING OF ASPOCOMP 'S GROWTH Aspocomp is aggressively pursuing its growth strategy by increasingly focusing on technologically more demanding HDI printed circuit boards PCBs .
Answer: ''',
'''Instruction: What is the sentiment of this news? Please choose an answer from {negative/neutral/positive}
Input: According to Gran , the company has no plans to move all production to Russia , although that is where the company is growing .
Answer: ''',
'''Instruction: What is the sentiment of this news? Please choose an answer from {negative/neutral/positive}
Input: A tinyurl link takes users to a scamming site promising that users can earn thousands of dollars by becoming a Google ( NASDAQ : GOOG ) Cash advertiser .
Answer: ''',
]

# Generate results
tokens = tokenizer(prompt, return_tensors='pt', padding=True, truncation=True, max_length=512)
res = model.generate(**tokens, max_length=512)
res_sentences = [tokenizer.decode(i) for i in res]
out_text = [o.split("Answer: ")[1] for o in res_sentences]

# Show results
for sentiment in out_text:
    print(sentiment)

# Output:
# positive
# neutral
# negative
```

## Ⅱ. Benchmark Results

| **ACC / Micro F1** | BloombergGPT | ChatGLM2 | ChatGLM2 (8-bit*) | FinGPT v3 | FinGPT v3 (8-bit*) |
| ------------------ | ------------ | -------- | ----------------- | --------- | ------------------ |
| FPB [1]            | -            | 0.464    | 0.476             | **0.8**   | 0.784              |
| FiQA-SA [2]        | -            | 0.822    | **0.833**         | 0.815     | 0.818              |
| TFNS [3]           | -            | 0.331    | 0.332             | **0.738** | 0.721              |
| NWGI [4]           | -            | 0.560    | 0.561             | **0.588** | **0.588**          |
| **Macro F1**       |              |          |                   |           |                    |
| FPB [1]            | -            | 0.487    | 0.5               | **0.774** | 0.754              |
| FiQA-SA [2]        | -            | 0.56     | 0.57              | **0.665** | 0.645              |
| TFNS [3]           | -            | 0.34     | 0.34              | **0.681** | 0.652              |
| NWGI [4]           | -            | 0.489    | 0.492             | **0.579** | 0.576              |
| **Weighted F1**    |              |          |                   |           |                    |
| FPB [1]            | 0.511        | 0.381    | 0.398             | **0.795** | 0.778              |
| FiQA-SA [2]        | 0.751        | 0.79     | 0.801             | **0.806** | 0.801              |
| TFNS [3]           | -            | 0.189    | 0.19              | **0.74**  | 0.721              |
| NWGI [4]           | -            | 0.449    | 0.452             | **0.578** | **0.578**          |

* '8-bit' does not refer to finetuning in 8-bit; it refers to loading the trained model and running inference in 8-bit mode.

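For reference, 8-bit inference only changes how the base model is loaded; a minimal sketch, assuming the `bitsandbytes` package is installed (`load_in_8bit` is a standard `transformers` loading flag, not something specific to this adapter):

```python
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

# Same model as in Section I, but the base model is loaded in 8-bit
# for cheaper inference; the LoRA adapter is attached unchanged
base_model = "THUDM/chatglm2-6b"
peft_model = "oliverwang15/FinGPT_ChatGLM2_Sentiment_Instruction_LoRA_FT"
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
model = AutoModel.from_pretrained(
    base_model, trust_remote_code=True, load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(model, peft_model)
```
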
[[1] Financial_Phrasebank (FPB)](https://huggingface.co/datasets/financial_phrasebank) is a financial news sentiment analysis benchmark with the labels "positive", "negative", and "neutral". We use the same split as BloombergGPT. BloombergGPT used only 5-shot prompting in its test to show the model's performance without further finetuning; in our task, however, all data in the 'train' split were used for finetuning, so our results are far better than BloombergGPT's.

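To make the ACC / Micro / Macro / Weighted F1 columns above concrete, here is a minimal scoring sketch with scikit-learn; the `labels` and `preds` lists are illustrative placeholders, not our actual evaluation script:

```python
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical gold labels and model predictions, for illustration only
labels = ["positive", "neutral", "negative", "neutral"]
preds  = ["positive", "neutral", "neutral",  "neutral"]

print("ACC         :", accuracy_score(labels, preds))
print("Micro F1    :", f1_score(labels, preds, average="micro"))
print("Macro F1    :", f1_score(labels, preds, average="macro"))
print("Weighted F1 :", f1_score(labels, preds, average="weighted"))
```
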
[[2] FiQA SA](https://huggingface.co/datasets/pauri32/fiqa-2018) consists of 17k sentences from microblog headlines and financial news. The labels were mapped to "positive", "negative", and "neutral" following BloombergGPT's paper. We tried to use the same split as BloombergGPT's paper; however, the label counts do not match exactly when the seed is set to 42.

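A minimal sketch of the seeded split described above, using the Hugging Face `datasets` library; the 20% test fraction here is an illustrative assumption, only the seed (42) comes from the text:

```python
from datasets import load_dataset

# Load FiQA SA and make a seeded split; only the seed (42) is fixed by
# the description above -- the test fraction is assumed for illustration
dataset = load_dataset("pauri32/fiqa-2018")
split = dataset["train"].train_test_split(test_size=0.2, seed=42)
train_set, test_set = split["train"], split["test"]
print(train_set.num_rows, test_set.num_rows)
```
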
[[3] Twitter Financial News Sentiment (TFNS)](https://huggingface.co/datasets/zeroshot/twitter-financial-news-sentiment) is an English-language dataset of finance-related tweets annotated for sentiment classification. It holds 11,932 documents annotated with 3 labels: "Bearish" ("negative"), "Bullish" ("positive"), and "Neutral".

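To compare TFNS annotations against the model's {negative/neutral/positive} answers, the label names need to be remapped; a small sketch, assuming the integer-to-name order given on the dataset card:

```python
from datasets import load_dataset

# TFNS stores labels as integers; map them onto the model's answer
# vocabulary (id order {0: Bearish, 1: Bullish, 2: Neutral} is assumed
# from the dataset card)
id_to_model_label = {0: "negative", 1: "positive", 2: "neutral"}

tfns = load_dataset("zeroshot/twitter-financial-news-sentiment")
labels = [id_to_model_label[i] for i in tfns["validation"]["label"]]
```
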
[[4] News With GPT Instruction (NWGI)](https://huggingface.co/datasets/oliverwang15/news_with_gpt_instructions) is a dataset whose labels were generated by ChatGPT. The train set has 16.2k samples and the test set has 4.05k samples. The dataset not only contains 7 classification labels ("strong negative", "moderately negative", "mildly negative", "neutral", "mildly positive", "moderately positive", "strong positive"), but also gives the reasoning behind each label, which might be helpful for instruction finetuning.

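Scoring NWGI on the same 3-way task requires collapsing the 7 fine-grained labels; one natural grouping is sketched below (this particular mapping is our illustrative assumption, not a documented rule):

```python
# Collapse NWGI's 7 fine-grained labels into the model's 3-way scheme;
# this grouping is an assumption made for illustration
nwgi_to_model = {
    "strong negative": "negative",
    "moderately negative": "negative",
    "mildly negative": "negative",
    "neutral": "neutral",
    "mildly positive": "positive",
    "moderately positive": "positive",
    "strong positive": "positive",
}
```
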
## Ⅲ. How to Train

Coming Soon.