---
license: apache-2.0
configs:
  - config_name: default
    data_files:
      - split: train
        path:
          - ott-qa/train.json
          - phrase-bank/train.json
          - tat-qa/train.json
          - template/train.json
          - wiki-sql/train.json
      - split: test
        path:
          - ott-qa/test.json
          - phrase-bank/test.json
          - tat-qa/test.json
          - template/test.json
          - wiki-sql/test.json
      - split: val
        path:
          - ott-qa/dev.json
          - phrase-bank/dev.json
          - tat-qa/dev.json
          - template/dev.json
          - wiki-sql/dev.json
---

# Raven Dataset

The dataset that we use to fine-tune Raven is composed of four distinct question-answering datasets. Two come from the financial domain; the remaining two are generic and include questions over both tables and text.
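The splits declared in the metadata header can be loaded directly with the Hugging Face `datasets` library. A minimal example is below; the repository id is assumed from the page and may need adjusting for a local copy.

```python
from datasets import load_dataset

# Repository id assumed; point this at a local path if you have cloned the data.
raven = load_dataset("adriantheuma/raven-data")

print(raven)              # DatasetDict with "train", "test" and "val" splits
print(raven["train"][0])  # a single converted sample
```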

## TAT-QA

Table-and-Text Question Answering consists of 16,552 questions, generated by financial experts, associated with 2,757 hybrid contexts drawn from real-world financial reports.

## Financial PhraseBank

Financial PhraseBank consists of 4,846 phrases derived from English financial news on companies listed on OMX Helsinki. The dataset contains phrase-level annotations by financial-market experts, who categorise each sample sentence, exclusively from an investor's standpoint, as positive, negative, or neutral.

## Wiki-SQL

Wiki-SQL consists of 80,654 crowd-sourced, manually annotated examples of natural language questions and corresponding SQL queries over 24,241 Wikipedia tables.

## OTT-QA

Similar to TAT-QA, Open Table-and-Text Question Answering consists of 43,683 questions over tabular data and unstructured text across diverse domains. The majority of questions require multi-hop inference involving both forms of data.

## Data preparation

The datasets described above have diverse formats and are not suited for fine-tuning Raven as-is. We employ a data conversion pipeline to convert these four datasets into a homogeneous dataset suitable for fine-tuning our financial model. In general, we extract up to four key attributes from the original datasets:

1. **instruction**: describes the task to perform, for example *Determine the sentiment of the following phrase*, or the question *What is the percentage change in revenue after the adoption of ASC 606?*
2. **input**: provides additional context, such as the phrase to classify or a passage.
3. **data**: tabular data that accompanies the context.
4. **derivation**: produces the answer or expected response.

Refer to templates for examples of the full prompt.
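As a rough illustration, a single converted sample might look like the sketch below. The field names follow the list above; the concrete values are invented for illustration and are not taken from the released files.

```python
# Hypothetical converted sample: field names follow the four attributes above,
# the values are illustrative only.
sample = {
    "instruction": "Determine the sentiment of the following phrase.",
    "input": "Operating profit rose compared to the previous quarter.",
    "data": "",  # tabular context serialised as text; empty when not needed
    "derivation": "positive",
}
```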

To obtain a balanced dataset, we randomly sub-sample the larger sources so that samples are uniformly distributed across sources. The final dataset contains 47.6K training, 5.26K validation, and 5.81K test samples.
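One simple way to implement the balancing step is sketched below, assuming each source has already been converted to a list of samples. The `balance` helper is hypothetical: it down-samples every source to the size of the smallest one and shuffles the result.

```python
import random

def balance(sources: dict, seed: int = 42) -> list:
    """Down-sample each source to the size of the smallest one, then shuffle,
    giving a roughly uniform distribution of samples across sources."""
    rng = random.Random(seed)
    target = min(len(samples) for samples in sources.values())
    balanced = []
    for samples in sources.values():
        balanced.extend(rng.sample(samples, target))
    rng.shuffle(balanced)
    return balanced
```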