---
paperswithcode_id: null
pretty_name: hansards
dataset_info:
  - config_name: senate
    features:
      - name: fr
        dtype: string
      - name: en
        dtype: string
    splits:
      - name: test
        num_bytes: 5711686
        num_examples: 25553
      - name: train
        num_bytes: 40324278
        num_examples: 182135
    download_size: 15247360
    dataset_size: 46035964
  - config_name: house
    features:
      - name: fr
        dtype: string
      - name: en
        dtype: string
    splits:
      - name: test
        num_bytes: 22906629
        num_examples: 122290
      - name: train
        num_bytes: 191459584
        num_examples: 947969
    download_size: 67584000
    dataset_size: 214366213
---

# Dataset Card for "hansards"

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [https://www.isi.edu/natural-language/download/hansard/](https://www.isi.edu/natural-language/download/hansard/)

### Dataset Summary

This release contains 1.3 million pairs of aligned text chunks (sentences or smaller fragments) from the official records (Hansards) of the 36th Canadian Parliament.

The complete Hansards of the debates in the House and Senate of the 36th Canadian Parliament, as far as available, were aligned. The corpus was then split into five sets of sentence pairs: training (80% of the sentence pairs), two sets for testing (5% each), and two sets for final evaluation (5% each). The current release consists of the training and testing sets; the evaluation sets are reserved for future MT evaluation purposes and are currently not available.

Caveats

  1. This release contains only sentence pairs. Even though the order of the sentences is the same as in the original, there may be gaps resulting from many-to-one, many-to-many, or one-to-many alignments that were filtered out. Therefore, this release may not be suitable for discourse-related research.
  2. Neither the sentence splitting nor the alignments are perfect. In particular, watch out for pairs that differ considerably in length; you may want to filter these out before doing any statistical training (see the sketch after this list).
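
One way to act on the second caveat is to drop pairs whose lengths diverge sharply. The sketch below is a minimal illustration, assuming the Hugging Face `datasets` library and a hypothetical length-ratio threshold of 2; neither the threshold nor the helper function is part of the release.

```python
from datasets import load_dataset

# Load one configuration of the corpus; "house" and "senate" are the two configs.
train = load_dataset("hansards", "house", split="train")

def well_aligned(example, max_ratio=2.0):
    # Hypothetical heuristic: keep pairs whose character lengths differ
    # by at most a factor of max_ratio.
    len_en, len_fr = len(example["en"]), len(example["fr"])
    shorter = min(len_en, len_fr)
    return shorter > 0 and max(len_en, len_fr) / shorter <= max_ratio

filtered = train.filter(well_aligned)
print(f"Kept {len(filtered)} of {len(train)} sentence pairs")
```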

The alignment of the Hansards was performed as part of the ReWrite project under funding from the DARPA TIDES program.

### Supported Tasks and Leaderboards

More Information Needed

### Languages

More Information Needed

## Dataset Structure

### Data Instances

#### house

- **Size of downloaded dataset files:** 67.58 MB
- **Size of the generated dataset:** 214.37 MB
- **Total amount of disk used:** 281.95 MB

An example of 'train' looks as follows.

```json
{
    "en": "Mr. Walt Lastewka (Parliamentary Secretary to Minister of Industry, Lib.):",
    "fr": "M. Walt Lastewka (secrétaire parlementaire du ministre de l'Industrie, Lib.):"
}
```

#### senate

- **Size of downloaded dataset files:** 15.25 MB
- **Size of the generated dataset:** 46.03 MB
- **Total amount of disk used:** 61.28 MB

An example of 'train' looks as follows.

```json
{
    "en": "Mr. Walt Lastewka (Parliamentary Secretary to Minister of Industry, Lib.):",
    "fr": "M. Walt Lastewka (secrétaire parlementaire du ministre de l'Industrie, Lib.):"
}
```
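
A minimal loading sketch, assuming the `datasets` library and the `hansards` dataset id on the Hugging Face Hub; each config loads as a `DatasetDict` with `train` and `test` splits:

```python
from datasets import load_dataset

senate = load_dataset("hansards", "senate")
print(senate["train"][0])  # {"fr": "...", "en": "..."} as in the example above
```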

### Data Fields

The data fields are the same among all splits.

#### house

- `fr`: a string feature.
- `en`: a string feature.

#### senate

- `fr`: a string feature.
- `en`: a string feature.

### Data Splits

| name   |  train |   test |
| ------ | -----: | -----: |
| house  | 947969 | 122290 |
| senate | 182135 |  25553 |
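
A quick sketch to reproduce these counts, under the same `datasets` assumption as above:

```python
from datasets import load_dataset

for config in ("house", "senate"):
    ds = load_dataset("hansards", config)
    print(config, {split: ds[split].num_rows for split in ds})
# expected: house  {'train': 947969, 'test': 122290}
#           senate {'train': 182135, 'test': 25553}
```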

## Dataset Creation

### Curation Rationale

More Information Needed

### Source Data

#### Initial Data Collection and Normalization

More Information Needed

#### Who are the source language producers?

More Information Needed

### Annotations

#### Annotation process

More Information Needed

#### Who are the annotators?

More Information Needed

### Personal and Sensitive Information

More Information Needed

## Considerations for Using the Data

### Social Impact of Dataset

More Information Needed

### Discussion of Biases

More Information Needed

### Other Known Limitations

More Information Needed

## Additional Information

### Dataset Curators

More Information Needed

### Licensing Information

More Information Needed

### Citation Information


### Contributions

Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), and [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.