Commit 0c4dfaa by ahmedrachid (parent 7866e58): Update README.md
---

### FinancialBERT for Sentiment Analysis

[*FinancialBERT*](https://huggingface.co/ahmedrachid/FinancialBERT) is a BERT model pre-trained on a large corpus of financial texts. Its purpose is to advance NLP research and practice in the financial domain, so that financial practitioners and researchers can benefit from the model without needing the significant computational resources required to train it.

The model was fine-tuned for the sentiment analysis task on the _Financial PhraseBank_ dataset. Experiments show that it outperforms the general-domain BERT and other financial domain-specific models.

### Training data
FinancialBERT was fine-tuned on [Financial PhraseBank](https://www.researchgate.net/publication/251231364_FinancialPhraseBank-v10), a dataset of 4,840 financial news sentences categorised by sentiment (negative, neutral, positive).
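As a quick illustration of the dataset's three-class label scheme (the example sentences below are made up for the sketch, not drawn from the actual corpus), records in a sentiment corpus like Financial PhraseBank can be represented and tallied like this:

```python
from collections import Counter

# Toy in-memory sample mirroring the sentence/label structure of a
# three-class financial sentiment corpus (hypothetical sentences).
sample = [
    {"sentence": "Operating profit rose to EUR 13.1 mn.", "label": "positive"},
    {"sentence": "The company laid off 50 employees.", "label": "negative"},
    {"sentence": "The firm is headquartered in Helsinki.", "label": "neutral"},
]

# Class distribution: useful for spotting label imbalance before fine-tuning.
counts = Counter(item["label"] for item in sample)
print(counts)  # each class appears once in this toy sample
```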

The evaluation metrics used are: Precision, Recall and F1-score. The following is the weighted average over the test set:

|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| weighted avg | 0.98      | 0.98   | 0.98     | 970     |
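The "weighted avg" row is the support-weighted mean of the per-class scores, in the style of a scikit-learn classification report. A minimal sketch of how such a weighted F1 is computed, on toy labels (not the model's actual evaluation data):

```python
from collections import Counter

def weighted_f1(y_true, y_pred, classes=("negative", "neutral", "positive")):
    """Support-weighted F1: per-class F1 averaged with weights
    proportional to each class's number of true examples."""
    support = Counter(y_true)
    total, weighted = len(y_true), 0.0
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        pred_c = sum(1 for p in y_pred if p == c)
        prec = tp / pred_c if pred_c else 0.0
        rec = tp / support[c] if support[c] else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        weighted += f1 * support[c] / total
    return weighted

y_true = ["positive", "positive", "neutral", "negative"]
y_pred = ["positive", "neutral", "neutral", "negative"]
print(round(weighted_f1(y_true, y_pred), 3))  # 0.75
```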

### How to use
The model can be used with the Transformers `pipeline` for sentiment analysis:

```python
>>> from transformers import BertTokenizer, BertForSequenceClassification
>>> from transformers import pipeline