ahmedrachid committed on
Commit
a37b975
1 Parent(s): fdb01cc

Update README.md

Files changed (1): README.md (+4, -2)
 
widget:
- text: Bids or offers include at least 1,000 shares and the value of the shares must correspond to at least EUR 4,000.
- text: Raute reported a loss per share of EUR 0.86 for the first half of 2009, against EPS of EUR 0.74 in the corresponding period of 2008.
---
### FinancialBERT for Sentiment Analysis

*FinancialBERT* is a BERT model pre-trained on a large corpus of financial texts. Its purpose is to advance NLP research and practice in the financial domain; we hope financial practitioners and researchers can benefit from it without needing the significant computational resources required to train such a model from scratch.

We fine-tuned our model on a sentiment analysis task using the _FinancialPhraseBank_ dataset; experiments show that it outperforms general BERT and other financial domain-specific models.

### Training data
The FinancialBERT model was fine-tuned on Financial PhraseBank, a dataset of 4,840 financial news sentences categorised by sentiment (negative, neutral, positive).
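Fine-tuned classifiers expose these three classes as integer ids. A minimal sketch of decoding a predicted class id into its sentiment label — the 0 = negative, 1 = neutral, 2 = positive ordering used here is an assumption; check the model's `config.id2label` for the actual mapping:

```python
# Hypothetical id-to-label mapping for the three Financial PhraseBank
# sentiment classes; confirm the ordering against the model's config.id2label.
ID2LABEL = {0: "negative", 1: "neutral", 2: "positive"}

def decode_sentiment(class_id: int) -> str:
    """Map an integer class id to its sentiment label."""
    return ID2LABEL[class_id]

print(decode_sentiment(2))  # prints "positive"
```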

### How to use
Our model can be used with the Transformers `pipeline` for sentiment analysis.
```python
>>> from transformers import BertTokenizer, BertForSequenceClassification