## Dataset Summary
The LLM-ADE-fin dataset is curated for next-token-prediction training in LLM-ADE research and is specifically designed to test for financial domain expertise.
It consists of 75,849 sequences, amounting to approximately 16.8 million tokens as measured with the Llama tokenizer.
The data focuses on the 500 constituent companies of the S&P 500 index as of January 2024 and includes:
1. The Management's Discussion and Analysis and Risk Factors sections of the most recent 10-K filings.
2. Transcripts from earnings calls over the last five years, sourced from the companies' investor relations sections.
3. Transcripts from various investor events, including analyst day presentations, company-hosted or industry conferences, and business updates.
We have deliberately excluded financial statements due to their graphical and tabular format, which is not compatible with next-token prediction training methods.
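As a rough sanity check, the sequence and token counts above can be reproduced with a short script. The sketch below assumes each JSONL record stores its sequence under a `text` key and that a Llama-family tokenizer checkpoint is available locally; neither the field name nor the exact tokenizer checkpoint is specified here, so treat both as assumptions.

```python
# Rough reproduction of the sequence/token counts quoted above.
# Assumptions: records carry their text under a "text" key, and a
# Llama-family tokenizer checkpoint (possibly gated) is accessible.
import json

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # assumed checkpoint

num_sequences = 0
num_tokens = 0
with open("earnings_calls_10k_disclosures.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        num_sequences += 1
        num_tokens += len(tokenizer(record["text"])["input_ids"])

print(f"{num_sequences:,} sequences, ~{num_tokens / 1e6:.1f}M tokens")
```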
The original data, predominantly in PDF format, underwent the following preprocessing steps after Optical Character Recognition (OCR):
1. Conversion of Unicode/HTML entities to ASCII characters.
2. Correction of spacing errors and punctuation mistakes.
3. Removal of sequences with excessive references to images or tables.
4. Exclusion of sequences with excessive OCR artifacts.
5. Separation of incorrectly merged words.
6. Deduplication using locality-sensitive hashing with MinHash (threshold of 0.95), as sketched below.
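The normalization and deduplication steps can be approximated with off-the-shelf tooling. The sketch below uses Python's standard `html` and `unicodedata` modules for steps 1 and 2, and the `datasketch` package with character 5-gram shingles for step 6; the actual implementation and shingling scheme used for this dataset are not documented here, so these are illustrative choices only.

```python
# Illustrative versions of preprocessing steps 1-2 and 6.
import html
import re
import unicodedata

from datasketch import MinHash, MinHashLSH


def normalize(text: str) -> str:
    """Steps 1-2: fold Unicode/HTML entities to ASCII and fix spacing."""
    text = html.unescape(text)
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    return re.sub(r"\s+", " ", text).strip()


def minhash(text: str, num_perm: int = 128) -> MinHash:
    """Build a MinHash signature over character 5-gram shingles (a choice made here)."""
    m = MinHash(num_perm=num_perm)
    for shingle in {text[i:i + 5] for i in range(len(text) - 4)}:
        m.update(shingle.encode("utf-8"))
    return m


def deduplicate(texts):
    """Step 6: drop near-duplicates via LSH over MinHash signatures (threshold 0.95)."""
    lsh = MinHashLSH(threshold=0.95, num_perm=128)
    kept = []
    for i, text in enumerate(texts):
        sig = minhash(text)
        if not lsh.query(sig):  # no existing near-duplicate found
            lsh.insert(str(i), sig)
            kept.append(text)
    return kept
```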
While we have made efforts to ensure the integrity and cleanliness of the dataset, some imperfections may persist. This was a deliberate decision, so that the dataset remains representative of real-world applications. Our preprocessing was biased towards exclusion, resulting in the removal of approximately 35% of the tokens initially captured through OCR in order to maintain a high-quality corpus.
Looking ahead, we are committed to expanding our dataset by:
1. Broadening the number of companies included and extending the historical data.
2. Refining our filtering techniques to produce cleaner data and reduce the need for data exclusion.
3. Implementing semantic deduplication to enhance the dataset's utility.
- .gitattributes +1 -0
- earnings_calls_10k_disclosures.jsonl +3 -0
.gitattributes:
@@ -53,3 +53,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.jpg filter=lfs diff=lfs merge=lfs -text
 *.jpeg filter=lfs diff=lfs merge=lfs -text
 *.webp filter=lfs diff=lfs merge=lfs -text
+earnings_calls_10k_disclosures.jsonl filter=lfs diff=lfs merge=lfs -text

earnings_calls_10k_disclosures.jsonl:
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e58c16d9c7fc574d276fc512183ecf26cbd2e206845c1b0b966c405e97bf2da1
+size 69300404
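Note that the JSONL file itself is stored with Git LFS; the three lines added above are only the LFS pointer (a SHA-256 and a size of roughly 69 MB), not the data. One way to fetch the actual file is sketched below; the repository id is a placeholder, since it is not stated on this page.

```python
# Download the LFS-backed data file rather than its pointer.
# The repo_id below is a placeholder; substitute the real dataset repository.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="<owner>/LLM-ADE-fin",  # placeholder, not the actual repo id
    filename="earnings_calls_10k_disclosures.jsonl",
    repo_type="dataset",
)
print(path)
```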