---
language:
  - en
license: apache-2.0
---

## Dataset Summary

This dataset is curated for next-token-prediction training of the LLM-ADE model (https://arxiv.org/abs/2404.13028), and is specifically designed to imbue it with financial domain expertise. It consists of 75,849 sequences, amounting to approximately 16.8 million tokens under the Llama tokenizer. We have deliberately left the sequences unlabeled with respect to the company they cover, both to reflect real-world data and to train the model to absorb knowledge from unlabeled data.
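For reference, here is a minimal sketch of how one might load the dataset and reproduce these counts. The dataset ID (`stepchoi/earnings_10k`), the `text` column name, and the stand-in Llama tokenizer checkpoint are assumptions, since the exact tokenizer variant is not pinned in this card:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Assumed dataset ID and text column; adjust to the actual schema.
ds = load_dataset("stepchoi/earnings_10k", split="train")

# Stand-in Llama tokenizer (the exact checkpoint is an assumption).
tok = AutoTokenizer.from_pretrained("hf-internal-testing/llama-tokenizer")

n_tokens = sum(len(tok(row["text"])["input_ids"]) for row in ds)
print(f"{len(ds):,} sequences, ~{n_tokens / 1e6:.1f}M tokens")
# Expected per this card: 75,849 sequences, ~16.8M tokens.
```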

The data focuses on the 500 constituent companies of the S&P 500 index as of January 2024 and includes:

  1. The Management's Discussion and Analysis and Risk Factors sections of the most recent 10-K filings.
  2. Transcripts from earnings calls over the last five years, sourced from the companies' investor relations sections.
  3. Transcripts from various investor events, including analyst day presentations, company-hosted or industry conferences, and business updates.

We have deliberately excluded financial statements due to their graphical and tabular format, which is not compatible with next-token prediction training methods.

The original data, predominantly in PDF format, underwent the following preprocessing steps after Optical Character Recognition (OCR); steps 1 and 6 are sketched in code after the list:

  1. Conversion of Unicode/HTML entities to ASCII characters.
  2. Correction of spacing errors and punctuation mistakes.
  3. Removal of sequences with excessive references to images or tables.
  4. Exclusion of sequences with excessive OCR artifacts.
  5. Separation of incorrectly merged words.
  6. Deduplication using locality-sensitive hashing with MinHash (threshold of 0.95).
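As an illustration, a minimal sketch of steps 1 and 6, assuming word-level shingles, `num_perm=128`, and the `datasketch` library; only the 0.95 LSH threshold comes from this card:

```python
import html
import unicodedata
from datasketch import MinHash, MinHashLSH

def to_ascii(text: str) -> str:
    """Step 1: map HTML entities and Unicode to plain ASCII."""
    text = html.unescape(text)                      # e.g. "&amp;" -> "&"
    text = unicodedata.normalize("NFKD", text)      # decompose accents, ligatures
    return text.encode("ascii", "ignore").decode()  # drop what cannot be mapped

def dedup(sequences: list[str], threshold: float = 0.95) -> list[str]:
    """Step 6: near-duplicate removal with MinHash LSH."""
    lsh = MinHashLSH(threshold=threshold, num_perm=128)
    kept = []
    for i, seq in enumerate(sequences):
        m = MinHash(num_perm=128)
        for word in seq.split():        # word shingles (an assumption)
            m.update(word.encode("utf8"))
        if not lsh.query(m):            # no near-duplicate already kept
            lsh.insert(str(i), m)
            kept.append(seq)
    return kept
```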

While we have made efforts to ensure the integrity and cleanliness of the dataset, some imperfections may persist. This is intentional, so that the dataset reflects real-world conditions. Our preprocessing was nevertheless biased towards exclusion: approximately 35% of the tokens initially captured through OCR were removed to maintain a high-quality corpus.

Looking ahead, we are committed to expanding our dataset by:

  1. Broadening the set of companies included and extending the historical coverage.
  2. Refining our filtering techniques to produce cleaner data and reduce the need for exclusion.
  3. Implementing semantic deduplication to enhance the dataset's utility (see the sketch below).
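For item 3, an illustrative sketch of what semantic deduplication could look like, not a committed implementation: embed each sequence and drop near-duplicates above a cosine-similarity cutoff. The embedding model and the 0.9 cutoff are assumptions:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def semantic_dedup(sequences: list[str], cutoff: float = 0.9) -> list[str]:
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    emb = model.encode(sequences, normalize_embeddings=True)  # unit vectors
    kept_idx: list[int] = []
    for i, v in enumerate(emb):
        # On normalized vectors, cosine similarity is just a dot product.
        if not kept_idx or np.max(emb[kept_idx] @ v) < cutoff:
            kept_idx.append(i)
    return [sequences[i] for i in kept_idx]
```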