mstafam committed
Commit 54ee276
1 Parent(s): f0ef3b5
Files changed (1)
  1. README.md +16 -15
README.md CHANGED
@@ -10,31 +10,32 @@ tags:
  ---

  ### Model Overview:
- This NLP model is fine-tuned with a focus on analyzing sentiment in financial text and news headlines. It was trained using the [bert-base-uncased](https://huggingface.co/bert-base-uncased) model on the [financial_phrasebank](https://huggingface.co/datasets/financial_phrasebank) and [auditor_sentiment](https://huggingface.co/datasets/FinanceInc/auditor_sentiment) datasets. It achieves the following accuracies in the trained datasets:

- **financial_phrasebank accuracy:** 0.993
- **auditor_senitment accuracy:** 0.974

  ### Training Hyperparameters:

- **Learning Rate:** 2e-05
- **Train Batch Size:** 16
- **Eval Batch Size:** 16
- **Random Seed:** 42
- **Optimizer:** AdamW-betas(0.9, 0.999)
- **Learning Rate Scheduler:** Linear
- **Number of Epochs:** 6
- **Number of Warmup Steps:** 0.2 * Number of Training Steps

  ### How To Use:

  ```
- >> from transformers import pipeline
- >> pipe = pipeline("sentiment-analysis", model="mstafam/fine-tuned-bert-financial-sentimental-analysis")

- >> text = "Example company has seen a 5% increase in revenue this quarter."

- >> pipe(text)

  [{'label': 'Positive', 'score': 0.9993795156478882}]
  ```
 
  ---

  ### Model Overview:
+ This NLP model is fine-tuned with a focus on analyzing sentiment in financial text and news headlines. It was trained using the [bert-base-uncased](https://huggingface.co/bert-base-uncased) model on the [financial_phrasebank](https://huggingface.co/datasets/financial_phrasebank) and [auditor_sentiment](https://huggingface.co/datasets/FinanceInc/auditor_sentiment) datasets.

+ **Accuracies:**
+ **financial_phrasebank accuracy:** 0.993\
+ **auditor_sentiment accuracy:** 0.974\
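
Not part of the commit: below is a rough sketch of how an accuracy figure like the ones above could be checked against the released model. The dataset config ("sentences_allagree"), the use of the train split, and the assumption that the pipeline's label names match the dataset's label names up to capitalization are all guesses, and since the model was trained on this data the number mostly reflects training-set accuracy.

```python
from datasets import load_dataset
from transformers import pipeline

pipe = pipeline(
    "sentiment-analysis",
    model="mstafam/fine-tuned-bert-financial-sentimental-analysis",
)

# Assumed config and split; the card does not say which were used for the reported 0.993.
ds = load_dataset("financial_phrasebank", "sentences_allagree", split="train")

id2name = ds.features["label"].names  # ['negative', 'neutral', 'positive']
preds = [out["label"].lower() for out in pipe(ds["sentence"], truncation=True)]
refs = [id2name[i] for i in ds["label"]]

accuracy = sum(p == r for p, r in zip(preds, refs)) / len(refs)
print(f"accuracy: {accuracy:.3f}")
```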

  ### Training Hyperparameters:

+ **Learning Rate:** 2e-05\
+ **Train Batch Size:** 16\
+ **Eval Batch Size:** 16\
+ **Random Seed:** 42\
+ **Optimizer:** AdamW (betas: 0.9, 0.999)\
+ **Learning Rate Scheduler:** Linear\
+ **Number of Epochs:** 6\
+ **Number of Warmup Steps:** 0.2 * Number of Training Steps\
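
These settings map fairly directly onto `TrainingArguments` in the `transformers` Trainer API. The sketch below is illustrative and not taken from the commit: the dataset config, the 90/10 split, the three-label setup, and the output path are assumptions, and the author's exact preprocessing is not described in the card.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

# Assumed dataset config and held-out split; the card does not specify either.
raw = load_dataset("financial_phrasebank", "sentences_allagree", split="train")
splits = raw.train_test_split(test_size=0.1, seed=42)
tokenized = splits.map(lambda batch: tokenizer(batch["sentence"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="bert-financial-sentiment",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=6,
    lr_scheduler_type="linear",
    warmup_ratio=0.2,   # warmup steps = 0.2 * total training steps
    adam_beta1=0.9,     # AdamW betas
    adam_beta2=0.999,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
```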

  ### How To Use:

  ```
+ from transformers import pipeline
+ pipe = pipeline("sentiment-analysis", model="mstafam/fine-tuned-bert-financial-sentimental-analysis")

+ text = "Example company has seen a 5% increase in revenue this quarter."

+ print(pipe(text))

  # Output: [{'label': 'Positive', 'score': 0.9993795156478882}]
  ```
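
The pipeline call above is the simplest path. For batch scoring or more control over tokenization, the same checkpoint can also be loaded directly; the sketch below is illustrative only (the second example sentence is made up, and label names are read from the model config, which the card shows producing 'Positive').

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "mstafam/fine-tuned-bert-financial-sentimental-analysis"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

texts = [
    "Example company has seen a 5% increase in revenue this quarter.",
    "The firm warned that margins will shrink next year.",  # hypothetical second example
]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)

for text, p in zip(texts, probs):
    label_id = int(p.argmax())
    print(text, "->", model.config.id2label[label_id], round(float(p[label_id]), 4))
```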