Languages: English
Multilinguality: monolingual
Size Categories: n<1K
Language Creators: found
Annotations Creators: found
Source Datasets: original
Commit f268737, committed by system (HF staff)
1 parent: a79c060

Update files from the datasets library (from 1.7.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.7.0

Files changed (2):
  1. README.md (+4 −3)
  2. onestop_english.py (+2 −3)
README.md CHANGED
@@ -19,6 +19,7 @@ task_categories:
 task_ids:
 - multi-class-classification
 - text-simplification
+paperswithcode_id: onestopenglish
 ---
 
 # Dataset Card for OneStopEnglish corpus
@@ -26,12 +27,12 @@ task_ids:
 ## Table of Contents
 - [Dataset Description](#dataset-description)
 - [Dataset Summary](#dataset-summary)
-- [Supported Tasks](#supported-tasks-and-leaderboards)
+- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
 - [Languages](#languages)
 - [Dataset Structure](#dataset-structure)
 - [Data Instances](#data-instances)
-- [Data Fields](#data-instances)
-- [Data Splits](#data-instances)
+- [Data Fields](#data-fields)
+- [Data Splits](#data-splits)
 - [Dataset Creation](#dataset-creation)
 - [Curation Rationale](#curation-rationale)
 - [Source Data](#source-data)
onestop_english.py CHANGED
@@ -128,7 +128,6 @@ class OnestopEnglish(datasets.GeneratorBasedBuilder):
     def _generate_examples(self, split_key, data_dir):
         """Yields examples for a given split of dataset."""
         split_text, split_labels = self._get_examples_from_split(split_key, data_dir)
-        for text, label in zip(split_text, split_labels):
-            data_key = split_key + "_" + text
+        for id_, (text, label) in enumerate(zip(split_text, split_labels)):
             feature_dict = {"text": text, "label": label}
-            yield data_key, feature_dict
+            yield id_, feature_dict
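The change above replaces keys derived from the full example text (`split_key + "_" + text`) with `enumerate`-based integer keys, which are compact and unique per split. A minimal standalone sketch of the new pattern (a plain generator, not the actual `datasets.GeneratorBasedBuilder` method):

```python
# Sketch of the key-generation pattern from the diff above: enumerate()
# supplies a unique, stable integer key per example, instead of building
# the key from the (potentially very long) document text itself.

def generate_examples(split_text, split_labels):
    """Yield (key, feature_dict) pairs, mirroring _generate_examples."""
    for id_, (text, label) in enumerate(zip(split_text, split_labels)):
        feature_dict = {"text": text, "label": label}
        yield id_, feature_dict


examples = list(generate_examples(["An easy text.", "A harder text."], [0, 2]))
# each yielded pair is (integer key, {"text": ..., "label": ...})
```

Hugging Face `datasets` uses the yielded key only for de-duplication and bookkeeping, so a running integer is sufficient and avoids key collisions when two examples share the same text.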