ccdv committed
Commit b62b9af
1 Parent(s): 85273ec
Files changed (1)
  1. README.md +20 -1
README.md CHANGED
@@ -15,12 +15,31 @@ task_ids:
 
 
Adapted from this [repo](https://github.com/armancohan/long-summarization).\
- Note that original data are pre-tokenized. This dataset returns ' '.join(text).\
+ Note that original data are pre-tokenized so this dataset returns " ".join(text).\
+ This dataset returns .\
This dataset is compatible with the [`run_summarization.py`](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) script from Transformers if you add this line to the `summarization_name_mapping` variable:
```python
"ccdv/pubmed-summarization": ("article", "abstract")
```
 
+ ### Data Fields
+
+ - `id`: paper id
+ - `article`: a string containing the body of the paper
+ - `abstract`: a string containing the abstract of the paper
+
+ ### Data Splits
+
+ This dataset has 3 splits: _train_, _validation_, and _test_. \
+ Token counts are whitespace-based.
+
+ | Dataset Split | Number of Instances | Avg. tokens (article / abstract) |
+ | ------------- | ------------------- | -------------------------------- |
+ | Train | 119,924 | 3043 / 215 |
+ | Validation | 6,633 | 3111 / 216 |
+ | Test | 6,658 | 3092 / 219 |
+
+
# Cite original article
```
@inproceedings{cohan-etal-2018-discourse,
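
For context on the `summarization_name_mapping` line shown in the diff above: `run_summarization.py` looks the dataset name up in that dictionary to decide which columns hold the source text and the target summary. A minimal sketch of that lookup; `resolve_columns` is an illustrative helper, not part of the script:

```python
# Sketch of how the mapping entry added in this commit gets consumed.
summarization_name_mapping = {
    "ccdv/pubmed-summarization": ("article", "abstract"),
}

def resolve_columns(dataset_name):
    # Illustrative helper: when the dataset is not listed, the script instead
    # relies on its --text_column / --summary_column arguments.
    return summarization_name_mapping.get(dataset_name, (None, None))

text_column, summary_column = resolve_columns("ccdv/pubmed-summarization")
print(text_column, summary_column)  # article abstract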
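The Data Fields and Data Splits sections added in this commit can be spot-checked with the `datasets` library. A minimal sketch, assuming the dataset loads without an explicit configuration name (the split sizes and average token counts in the table come from the README, not from re-measuring here):

```python
from datasets import load_dataset

# Load the dataset described in this commit; pass a configuration name to
# load_dataset if the loading script requires one (assumption: default works).
dataset = load_dataset("ccdv/pubmed-summarization")

for split_name, split in dataset.items():
    example = split[0]
    # Fields from the new "Data Fields" section: id, article, abstract.
    article_tokens = len(example["article"].split())    # whitespace tokens
    abstract_tokens = len(example["abstract"].split())
    print(f"{split_name}: {len(split)} instances; first example has "
          f"{article_tokens} article tokens and {abstract_tokens} abstract tokens")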