Languages: English
Multilinguality: monolingual
Size Categories: 100K<n<1M
Language Creators: found
Annotations Creators: machine-generated
Source Datasets: original
Tags: split-and-rephrase
system (HF staff) committed
Commit
dc5ef44
1 Parent(s): 1432695

Update files from the datasets library (from 1.7.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.7.0

Files changed (1)
  1. README.md +17 -4
README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+paperswithcode_id: wikisplit
 ---
 
 # Dataset Card for "wiki_split"
@@ -6,12 +7,12 @@
 ## Table of Contents
 - [Dataset Description](#dataset-description)
   - [Dataset Summary](#dataset-summary)
-  - [Supported Tasks](#supported-tasks)
+  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
   - [Languages](#languages)
 - [Dataset Structure](#dataset-structure)
   - [Data Instances](#data-instances)
   - [Data Fields](#data-fields)
-  - [Data Splits Sample Size](#data-splits-sample-size)
+  - [Data Splits](#data-splits)
 - [Dataset Creation](#dataset-creation)
   - [Curation Rationale](#curation-rationale)
   - [Source Data](#source-data)
@@ -43,7 +44,7 @@ One million English sentences, each split into two sentences that together prese
 Google's WikiSplit dataset was constructed automatically from the publicly available Wikipedia revision history. Although
 the dataset contains some inherent noise, it can serve as valuable training data for models that split or merge sentences.
 
-### Supported Tasks
+### Supported Tasks and Leaderboards
 
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
@@ -81,7 +82,7 @@ The data fields are the same among all splits.
 - `simple_sentence_1`: a `string` feature.
 - `simple_sentence_2`: a `string` feature.
 
-### Data Splits Sample Size
+### Data Splits
 
 | name |train |validation|test|
 |-------|-----:|---------:|---:|
@@ -95,10 +96,22 @@ The data fields are the same among all splits.
 
 ### Source Data
 
+#### Initial Data Collection and Normalization
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+#### Who are the source language producers?
+
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
 ### Annotations
 
+#### Annotation process
+
+[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+#### Who are the annotators?
+
 [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 
 ### Personal and Sensitive Information
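For readers skimming the diff, a minimal sketch of what one record in this schema looks like. The two `simple_sentence_*` field names come from the card's "Data Fields" section; the `complex_sentence` field is an assumption inferred from the summary ("each split into two sentences"), and the sentence text below is invented for illustration:

```python
# One WikiSplit-style record: a complex sentence paired with the two
# simpler sentences it was split into. `simple_sentence_1/2` are named
# in the card; `complex_sentence` and the text are illustrative assumptions.
record = {
    "complex_sentence": (
        "Street Rod is the first in a series of two games released for "
        "the PC and Commodore 64 in 1989."
    ),
    "simple_sentence_1": "Street Rod is the first in a series of two games.",
    "simple_sentence_2": "It was released for the PC and Commodore 64 in 1989.",
}

# All three features are plain strings, as the card states for the two
# simple-sentence fields.
assert all(isinstance(v, str) for v in record.values())
```

A model trained for the card's `split-and-rephrase` task would map `complex_sentence` to the pair of simple sentences (splitting), or the pair back to the complex sentence (merging).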