widget:
- text: "Amazon reports BLOWOUT earnings, beating revenue estimates and raising Q3 guidance"
- text: "Company went through great loss due to lawsuit in Q1"
---

## What is an Earnings Call Transcript?

An earnings call is a teleconference or webcast in which a public company discusses the financial results of a reporting period. The name comes from earnings per share, the bottom-line figure in the income statement divided by the number of shares outstanding.

Example of an earnings call transcript: https://www.fool.com/earnings/call-transcripts/2022/04/29/apple-aapl-q2-2022-earnings-call-transcript

We scraped 10 years of earnings call transcripts for 10 companies, including Apple, Google, Microsoft, Nvidia, Amazon, Intel, and Cisco, and annotated each sentence with one of five categories: Negative, Positive, Litigious, Constraining, and Uncertainty.

Annotation was guided by the Loughran-McDonald sentiment lexicon and the FinancialPhraseBank [Malo, P., Sinha, A., Korhonen, P., Wallenius, J., & Takala, P. (2014). Good debt or bad debt: Detecting semantic orientations in economic texts. Journal of the Association for Information Science and Technology, 65(4), 782-796.].

## What is RoBERTa?

RoBERTa builds on BERT's language-masking strategy and modifies key hyperparameters, removing BERT's next-sentence pretraining objective and training with much larger mini-batches and learning rates. RoBERTa was also trained on an order of magnitude more data than BERT, for a longer time. This allows RoBERTa's representations to generalize even better to downstream tasks than BERT's.
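
Fine-tuning for this task presumably starts from a pretrained RoBERTa checkpoint with a classification head over the five categories above. A minimal sketch of that setup follows; the `roberta-base` checkpoint and the label names are assumptions, not confirmed details of this repository.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed label set, taken from the five annotation categories described above
labels = ["Negative", "Positive", "Litigious", "Constraining", "Uncertainty"]

# Start from a pretrained RoBERTa checkpoint (assumed here to be roberta-base)
# and attach a fresh 5-way classification head for fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
```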

## Hyperparameters

| Parameter      | Value |
| -------------- | :---: |
| Learning rate  | 1e-5  |
| Epochs         | 12    |
| Max Seq Length | 240   |
| Batch size     | 128   |
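
As a rough illustration of how these values could be wired into a Hugging Face training setup (the `TrainingArguments`-based workflow, the tokenizer checkpoint, and the output directory name are assumptions; only the numbers come from the table above):

```python
from transformers import AutoTokenizer, TrainingArguments

# Training configuration matching the table above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="roberta-earning-call",
    learning_rate=1e-5,
    num_train_epochs=12,
    per_device_train_batch_size=128,
)

# Max Seq Length = 240 would typically be applied when tokenizing the data.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # assumed base checkpoint
encoded = tokenizer(
    "Amazon reports BLOWOUT earnings, beating revenue estimates and raising Q3 guidance",
    truncation=True,
    padding="max_length",
    max_length=240,
)
```

These arguments would then typically be passed to a `Trainer` together with the tokenized train and validation sets.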

## Results

Best `Micro F1` score: 91.8%

## Usage

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the fine-tuned tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("NLPScholars/Roberta-Earning-Call-Transcript-Classification")
model = AutoModelForSequenceClassification.from_pretrained("NLPScholars/Roberta-Earning-Call-Transcript-Classification")
```
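
Continuing from the snippet above, a sentence can then be classified roughly as follows; the example text is illustrative, and the label names are read from the model's own config (if the repository did not set them, they may appear as generic `LABEL_0` to `LABEL_4`):

```python
import torch

# Classify a single sentence (example text is illustrative)
text = "Company went through great loss due to lawsuit in Q1"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=240)

with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)[0]
# Label names come from the model config; they may be generic if not set upstream
predicted = model.config.id2label[int(probs.argmax())]
print(predicted, float(probs.max()))
```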