lhoestq (HF staff) committed
Commit cc9f890
1 Parent(s): 7741454

Dataset infos in yaml (#4926)


* wip

* fix Features yaml

* splits to yaml

* add _to_yaml_list

* style

* example: conll2000

* example: crime_and_punish

* add pyyaml dependency

* remove unused imports

* remove validation tests

* style

* allow dataset_infos to be struct or list in YAML

* fix test

* style

* update "datasets-cli test" + remove "version"

* remove config definitions in conll2000 and crime_and_punish

* remove versions for conll2000 and crime_and_punish

* move conll2000 and cap dummy data

* fix test

* add tests

* comments and tests

* more test

* don't mention the dataset_infos.json file in docs

* nit in docs

* docs

* dataset_infos -> dataset_info

* again

* use id2label in class_label

* update conll2000

* fix utf-8 yaml dump

* --save_infos -> --save_info

* Apply suggestions from code review

Co-authored-by: Polina Kazakova <polina@huggingface.co>

* style

* fix reloading a single dataset_info

* push info to README.md in push_to_hub

* update test

Co-authored-by: Polina Kazakova <polina@huggingface.co>

Commit from https://github.com/huggingface/datasets/commit/67e65c90e9490810b89ee140da11fdd13c356c9c
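This commit moves the dataset metadata out of `dataset_infos.json` and into a `dataset_info` section of the YAML header in `README.md`, regenerated via `datasets-cli test` with the renamed `--save_info` flag (per the commit bullets). As a minimal sketch — using only the public `datasets` API, not the new internal YAML serialization added here — the same fields can be inspected like this:

```python
# Minimal sketch (not the library's internal serialization code): inspect the
# fields that now appear under `dataset_info` in the card's YAML header.
from datasets import load_dataset_builder

builder = load_dataset_builder("crime_and_punish")
info = builder.info

print(info.features)                           # {'line': Value(dtype='string', ...)}
print(info.splits)                             # per-split num_bytes / num_examples, once computed
print(info.download_size, info.dataset_size)   # sizes recorded in the card header
```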

README.md CHANGED
@@ -2,10 +2,20 @@
 language:
 - en
 paperswithcode_id: null
-pretty_name: Crime and Punishment
+pretty_name: CrimeAndPunish
+dataset_info:
+  dataset_size: 1270540
+  download_size: 1201735
+  features:
+  - dtype: string
+    name: line
+  splits:
+  - name: train
+    num_bytes: 1270540
+    num_examples: 21969
 ---
 
-# Dataset Card for Crime and Punishment
+# Dataset Card for "crime_and_punish"
 
 ## Table of Contents
 - [Dataset Description](#dataset-description)
@@ -144,4 +154,4 @@ The data fields are the same among all splits.
 
 ### Contributions
 
-Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
+Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
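The header added above is ordinary YAML front matter delimited by `---`, so it can be read back with PyYAML (which this commit adds as a dependency). A minimal sketch, assuming a locally downloaded `README.md` with the `---`-delimited header shown in the diff:

```python
# Sketch only: parse the YAML front matter of a dataset card and pull out
# the `dataset_info` block. The local README.md path is hypothetical.
import yaml

with open("README.md", encoding="utf-8") as f:
    text = f.read()

# The header sits between the first two `---` markers.
_, header, _body = text.split("---", 2)
metadata = yaml.safe_load(header)

info = metadata["dataset_info"]
print(info["dataset_size"], info["download_size"])  # 1270540 1201735
print(info["features"])  # [{'dtype': 'string', 'name': 'line'}]
print(info["splits"])    # [{'name': 'train', 'num_bytes': 1270540, 'num_examples': 21969}]
```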
crime_and_punish.py CHANGED
@@ -8,36 +8,7 @@ _URL = "https://www.gutenberg.org/files/2554/2554-h/2554-h.htm"
 _DATA_URL = "https://raw.githubusercontent.com/patrickvonplaten/datasets/master/crime_and_punishment.txt"
 
 
-class CrimeAndPunishConfig(datasets.BuilderConfig):
-    """BuilderConfig for Crime and Punish."""
-
-    def __init__(self, data_url, **kwargs):
-        """BuilderConfig for BlogAuthorship
-
-        Args:
-          data_url: `string`, url to the dataset (word or raw level)
-          **kwargs: keyword arguments forwarded to super.
-        """
-        super(CrimeAndPunishConfig, self).__init__(
-            version=datasets.Version(
-                "1.0.0",
-            ),
-            **kwargs,
-        )
-        self.data_url = data_url
-
-
 class CrimeAndPunish(datasets.GeneratorBasedBuilder):
-
-    VERSION = datasets.Version("0.1.0")
-    BUILDER_CONFIGS = [
-        CrimeAndPunishConfig(
-            name="crime-and-punish",
-            data_url=_DATA_URL,
-            description="word level dataset. No processing is needed other than replacing newlines with <eos> tokens.",
-        ),
-    ]
-
     def _info(self):
         return datasets.DatasetInfo(
             # This is the description that will appear on the datasets page.
@@ -58,17 +29,14 @@ class CrimeAndPunish(datasets.GeneratorBasedBuilder):
     def _split_generators(self, dl_manager):
         """Returns SplitGenerators."""
 
-        if self.config.name == "crime-and-punish":
-            data = dl_manager.download_and_extract(self.config.data_url)
+        data = dl_manager.download_and_extract(_DATA_URL)
 
-            return [
-                datasets.SplitGenerator(
-                    name=datasets.Split.TRAIN,
-                    gen_kwargs={"data_file": data, "split": "train"},
-                ),
-            ]
-        else:
-            raise ValueError(f"{self.config.name} does not exist")
+        return [
+            datasets.SplitGenerator(
+                name=datasets.Split.TRAIN,
+                gen_kwargs={"data_file": data, "split": "train"},
+            ),
+        ]
 
     def _generate_examples(self, data_file, split):
 
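With `CrimeAndPunishConfig`, `BUILDER_CONFIGS`, and the explicit `version` removed, the script exposes a single default config, so loading it no longer involves the old `crime-and-punish` config name. A short usage sketch:

```python
# Usage sketch: the builder now has a single default config, so no config
# name needs to be passed.
from datasets import load_dataset

ds = load_dataset("crime_and_punish", split="train")
print(ds.num_rows)    # 21969, matching the splits info in the card header
print(ds.features)    # {'line': Value(dtype='string', ...)}
print(ds[0]["line"])  # first line of the text

# Per the "push info to README.md in push_to_hub" bullet, pushing a dataset
# also writes the computed info into the target repo's README.md header.
# ds.push_to_hub("username/crime_and_punish")   # hypothetical repo id
```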
dataset_infos.json DELETED
@@ -1 +0,0 @@
-{"crime-and-punish": {"description": "\n", "citation": "", "homepage": "https://www.gutenberg.org/files/2554/2554-h/2554-h.htm", "license": "", "features": {"line": {"dtype": "string", "id": null, "_type": "Value"}}, "supervised_keys": null, "builder_name": "crime_and_punish", "config_name": "crime-and-punish", "version": {"version_str": "1.0.0", "description": null, "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1270540, "num_examples": 21969, "dataset_name": "crime_and_punish"}}, "download_checksums": {"https://raw.githubusercontent.com/patrickvonplaten/datasets/master/crime_and_punishment.txt": {"num_bytes": 1201735, "checksum": "3582bcff83e5e24ae5acb2935a191ea5ead66b11fc12fa19b0397834e8296c83"}}, "download_size": 1201735, "dataset_size": 1270540, "size_in_bytes": 2472275}}
dummy/{crime-and-punish/1.0.0 → 0.0.0}/dummy_data.zip RENAMED
File without changes
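One of the commit bullets is "fix utf-8 yaml dump". For context only (this shows PyYAML behaviour, not the library's internal code): by default `yaml.safe_dump` escapes non-ASCII characters, and `allow_unicode=True` is the usual way to keep them readable, which matters once card metadata such as `pretty_name` contains non-ASCII text.

```python
# Illustration of the PyYAML behaviour behind the "fix utf-8 yaml dump" bullet;
# the dictionary below is a hypothetical example, not taken from the commit.
import yaml

info = {"pretty_name": "Преступление и наказание"}

print(yaml.safe_dump(info))                      # non-ASCII escaped as \uXXXX sequences
print(yaml.safe_dump(info, allow_unicode=True))  # keeps the UTF-8 text as-is
```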