nlpaueb committed
Commit ef5977c · 1 Parent(s): ffdcb8b

Update README.md

Files changed (1): README.md (+3 -3)

README.md CHANGED
@@ -131,7 +131,7 @@ The dataset was curated by [Loukas et al. (2022)](https://arxiv.org/abs/2203.064
 FiNER-139 is compiled from approx. 10k annual and quarterly English reports (filings) of publicly traded companies downloaded from the [US Securities
 and Exchange Commission's (SEC)](https://www.sec.gov/) [Electronic Data Gathering, Analysis, and Retrieval (EDGAR)](https://www.sec.gov/edgar.shtml) system. <br>
 The reports span a 5-year period, from 2016 to 2020. They are annotated with XBRL tags by professional auditors and describe the performance and projections of the companies. XBRL defines approx. 6k entity types from the US-GAAP taxonomy. FiNER-139 is annotated with the 139 most frequent XBRL entity types with at least 1,000 appearances. <br>
-We used regular expressions to extract the text notes from the Financial Statements Item of each filing, which is the primary source of XBRL tags in annual and quarterly reports. We used the **IOB2** annotation scheme to distinguish tokens at the beginning, inside, or outside of tagged expressions, which leads to 279 possible token labels.
+We used regular expressions to extract the text notes from the Financial Statements Item of each filing, which is the primary source of XBRL tags in annual and quarterly reports. We used the <strong>IOB2</strong> annotation scheme to distinguish tokens at the beginning, inside, or outside of tagged expressions, which leads to 279 possible token labels.
 </div>
 
 ### Annotations
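
Note on the changed line above: under the IOB2 scheme each of the 139 XBRL entity types yields a `B-` (beginning) and an `I-` (inside) label, and a single `O` label covers tokens outside any tagged expression, which is where the 279 (139 × 2 + 1) possible token labels come from. A minimal sketch, with hypothetical entity-type names standing in for the real 139-tag list:

```python
# Sketch of how FiNER-139's 279 token labels arise from the IOB2 scheme.
# The two entity types below are placeholders; the dataset defines 139.
entity_types = [
    "CashAndCashEquivalentsAtCarryingValue",        # placeholder example
    "DebtInstrumentInterestRateStatedPercentage",   # placeholder example
]

labels = ["O"]  # tokens outside any tagged expression
for etype in entity_types:
    labels.append(f"B-{etype}")  # first token of a tagged expression
    labels.append(f"I-{etype}")  # remaining tokens of that expression

# With all 139 entity types: 139 * 2 + 1 = 279 possible token labels.
print(len(labels))  # 5 here; 279 for the full tag set
```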
@@ -193,8 +193,8 @@ In the Proceedings of the 60th Annual Meeting of the Association for Computation
 <img align="center" src="https://i.ibb.co/0yz81K9/sec-bert-logo.png" alt="SEC-BERT" width="400"/>
 
 <div style="text-align: justify">
-We also pre-train our own BERT models (**SEC-BERT**) for the financial domain, intended to assist financial NLP research and FinTech applications. <br>
-**SEC-BERT** consists of the following models:
+We also pre-train our own BERT models (<strong>SEC-BERT</strong>) for the financial domain, intended to assist financial NLP research and FinTech applications. <br>
+<strong>SEC-BERT</strong> consists of the following models:
 
 * [**SEC-BERT-BASE**](https://huggingface.co/nlpaueb/sec-bert-base): Same architecture as BERT-BASE trained on financial documents.
 * [**SEC-BERT-NUM**](https://huggingface.co/nlpaueb/sec-bert-num): Same as SEC-BERT-BASE but we replace every number token with a [NUM] pseudo-token handling all numeric expressions in a uniform manner, disallowing their fragmentation
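
The SEC-BERT-NUM context line above describes replacing every number token with a `[NUM]` pseudo-token before the model sees the text. A minimal sketch of that idea, assuming whitespace pre-tokenization and a simple numeric regex (the authors' exact preprocessing may differ):

```python
# Minimal sketch of the [NUM] replacement idea behind SEC-BERT-NUM
# (assumptions: whitespace pre-tokenization and this numeric regex; the
# authors' exact preprocessing may differ).
import re

NUM_RE = re.compile(r"(\d+[\d,.]*)|([,.]\d+)")

def mask_numbers(sentence: str) -> str:
    """Replace every number-like token with the [NUM] pseudo-token."""
    return " ".join(
        "[NUM]" if NUM_RE.fullmatch(tok) else tok
        for tok in sentence.split()
    )

print(mask_numbers("Total revenue increased by 9.4 % to $ 1,234 million ."))
# -> "Total revenue increased by [NUM] % to $ [NUM] million ."
```

Masking numbers this way keeps each numeric expression as a single unit instead of letting WordPiece fragment it into sub-word pieces.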
 