---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: validation
    path: data/validation-*
dataset_info:
  features:
  - name: source
    dtype: string
  - name: source_labels
    dtype: string
  - name: rouge_scores
    dtype: string
  - name: paper_id
    dtype: string
  - name: target
    dtype: string
  - name: full_source_text
    dtype: string
  - name: input_ids
    sequence: int32
  - name: attention_mask
    sequence: int8
  - name: labels
    sequence: int64
  splits:
  - name: train
    num_bytes: 8025935
    num_examples: 1992
  - name: test
    num_bytes: 3099651
    num_examples: 618
  - name: validation
    num_bytes: 2892738
    num_examples: 619
  download_size: 6363296
  dataset_size: 14018324
---
# Dataset Card for "tokenized_T5_base"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
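
## Usage

The `input_ids`, `attention_mask`, and `labels` columns are already tokenized, so the splits can be fed to a T5-style model without further preprocessing. A minimal loading sketch, assuming the dataset is hosted on the Hub under a hypothetical `your-username` namespace (replace with the actual repository path):

```python
from datasets import load_dataset

# Hypothetical repository path; substitute the real namespace hosting this dataset.
dataset = load_dataset("your-username/tokenized_T5_base")

print(dataset)  # train (1,992), test (618), validation (619) examples

# Expose the pre-tokenized columns as PyTorch tensors so the splits
# can go straight into a DataLoader for training.
dataset.set_format(type="torch", columns=["input_ids", "attention_mask", "labels"])
```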