Languages: English
Multilinguality: monolingual
Size Categories: 100K<n<1M
Language Creators: crowdsourced
Annotations Creators: machine-generated
Source Datasets: original
albertvillanova committed
Commit 2c5d0b3
Parent: 3144f92

Convert dataset to Parquet (#3)


- Convert dataset to Parquet (faea15e3a81e54c2d0f84718653887a5b22eeb6d)
- Delete loading script (089b5ac71ebbba2bd0a91fc715200239c1365ef7)
- Delete legacy dataset_infos.json (795c8517b5a917074e413c5d37158a5cfc9988e2)
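With the loading script gone, consumers resolve the data straight from the Parquet files. A minimal smoke test, assuming the `datasets` library is installed and the repo is published on the Hub under the id `dbpedia_14`:

from datasets import load_dataset

# Loads from the Parquet files added in this commit; no script execution needed.
ds = load_dataset("dbpedia_14")
print(ds)  # DatasetDict with train (560000 rows) and test (70000 rows)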

README.md CHANGED
@@ -20,6 +20,7 @@ task_ids:
 paperswithcode_id: dbpedia
 pretty_name: DBpedia
 dataset_info:
+  config_name: dbpedia_14
   features:
   - name: label
     dtype:
@@ -43,7 +44,6 @@ dataset_info:
     dtype: string
   - name: content
     dtype: string
-  config_name: dbpedia_14
   splits:
   - name: train
     num_bytes: 178428970
@@ -51,8 +51,16 @@ dataset_info:
   - name: test
     num_bytes: 22310285
     num_examples: 70000
-  download_size: 68341743
+  download_size: 119424374
   dataset_size: 200739255
+configs:
+- config_name: dbpedia_14
+  data_files:
+  - split: train
+    path: dbpedia_14/train-*
+  - split: test
+    path: dbpedia_14/test-*
+  default: true
 ---
 
 # Dataset Card for DBpedia14
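The new `configs` block is what maps the `dbpedia_14` config onto the Parquet files. Loading the files by hand with the generic `parquet` builder is roughly equivalent; a sketch, assuming the two Parquet files have been downloaded into a local `dbpedia_14/` directory:

from datasets import load_dataset

# Mirrors the data_files globs declared in the README YAML above.
ds = load_dataset(
    "parquet",
    data_files={"train": "dbpedia_14/train-*", "test": "dbpedia_14/test-*"},
)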
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"dbpedia_14": {"description": "The DBpedia ontology classification dataset is constructed by picking 14 non-overlapping classes\nfrom DBpedia 2014. They are listed in classes.txt. From each of thse 14 ontology classes, we\nrandomly choose 40,000 training samples and 5,000 testing samples. Therefore, the total size\nof the training dataset is 560,000 and testing dataset 70,000.\nThere are 3 columns in the dataset (same for train and test splits), corresponding to class index\n(1 to 14), title and content. The title and content are escaped using double quotes (\"), and any\ninternal double quote is escaped by 2 double quotes (\"\"). There are no new lines in title or content.\n", "citation": "@article{lehmann2015dbpedia,\n title={DBpedia--a large-scale, multilingual knowledge base extracted from Wikipedia},\n author={Lehmann, Jens and Isele, Robert and Jakob, Max and Jentzsch, Anja and Kontokostas,\n Dimitris and Mendes, Pablo N and Hellmann, Sebastian and Morsey, Mohamed and Van Kleef,\n Patrick and Auer, S{\"o}ren and others},\n journal={Semantic web},\n volume={6},\n number={2},\n pages={167--195},\n year={2015},\n publisher={IOS Press}\n}\n", "homepage": "https://wiki.dbpedia.org/develop/datasets", "license": "Creative Commons Attribution-ShareAlike 3.0 and the GNU Free Documentation License", "features": {"label": {"num_classes": 14, "names": ["Company", "EducationalInstitution", "Artist", "Athlete", "OfficeHolder", "MeanOfTransportation", "Building", "NaturalPlace", "Village", "Animal", "Plant", "Album", "Film", "WrittenWork"], "id": null, "_type": "ClassLabel"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "content": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "d_bpedia14", "config_name": "dbpedia_14", "version": {"version_str": "2.0.0", "description": null, "major": 2, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 178428970, "num_examples": 560000, "dataset_name": "d_bpedia14"}, "test": {"name": "test", "num_bytes": 22310285, "num_examples": 70000, "dataset_name": "d_bpedia14"}}, "download_checksums": {"https://s3.amazonaws.com/fast-ai-nlp/dbpedia_csv.tgz": {"num_bytes": 68341743, "checksum": "42db5221ddedddb673a4cabcc5f3a7d869714c878bcfe4ba94b29d14aa38e417"}}, "download_size": 68341743, "post_processing_size": null, "dataset_size": 200739255, "size_in_bytes": 269080998}}
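The deleted JSON duplicated metadata that now lives in the README YAML; its `label` feature is a 14-way ClassLabel whose integer indices follow the order of the names list. A small sketch of that mapping using the `datasets` API:

from datasets import ClassLabel

label = ClassLabel(names=[
    "Company", "EducationalInstitution", "Artist", "Athlete", "OfficeHolder",
    "MeanOfTransportation", "Building", "NaturalPlace", "Village", "Animal",
    "Plant", "Album", "Film", "WrittenWork",
])
print(label.str2int("Company"))  # 0
print(label.int2str(13))         # WrittenWork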
 
dbpedia_14.py DELETED
@@ -1,150 +0,0 @@
-# coding=utf-8
-# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""The DBpedia dataset for text classification."""
-
-
-import csv
-
-import datasets
-
-
-# TODO: Add BibTeX citation
-# Find for instance the citation on arxiv or on the dataset repo/website
-_CITATION = """\
-@article{lehmann2015dbpedia,
-  title={DBpedia--a large-scale, multilingual knowledge base extracted from Wikipedia},
-  author={Lehmann, Jens and Isele, Robert and Jakob, Max and Jentzsch, Anja and Kontokostas,
-  Dimitris and Mendes, Pablo N and Hellmann, Sebastian and Morsey, Mohamed and Van Kleef,
-  Patrick and Auer, S{\"o}ren and others},
-  journal={Semantic web},
-  volume={6},
-  number={2},
-  pages={167--195},
-  year={2015},
-  publisher={IOS Press}
-}
-"""
-
-_DESCRIPTION = """\
-The DBpedia ontology classification dataset is constructed by picking 14 non-overlapping classes
-from DBpedia 2014. They are listed in classes.txt. From each of thse 14 ontology classes, we
-randomly choose 40,000 training samples and 5,000 testing samples. Therefore, the total size
-of the training dataset is 560,000 and testing dataset 70,000.
-There are 3 columns in the dataset (same for train and test splits), corresponding to class index
-(1 to 14), title and content. The title and content are escaped using double quotes ("), and any
-internal double quote is escaped by 2 double quotes (""). There are no new lines in title or content.
-"""
-
-_HOMEPAGE = "https://wiki.dbpedia.org/develop/datasets"
-
-_LICENSE = "Creative Commons Attribution-ShareAlike 3.0 and the GNU Free Documentation License"
-
-_URLs = {
-    "dbpedia_14": "https://s3.amazonaws.com/fast-ai-nlp/dbpedia_csv.tgz",
-}
-
-
-class DBpedia14Config(datasets.BuilderConfig):
-    """BuilderConfig for DBpedia."""
-
-    def __init__(self, **kwargs):
-        """BuilderConfig for DBpedia.
-
-        Args:
-            **kwargs: keyword arguments forwarded to super.
-        """
-        super(DBpedia14Config, self).__init__(**kwargs)
-
-
-class DBpedia14(datasets.GeneratorBasedBuilder):
-    """DBpedia 2014 Ontology Classification Dataset."""
-
-    VERSION = datasets.Version("2.0.0")
-
-    BUILDER_CONFIGS = [
-        DBpedia14Config(
-            name="dbpedia_14", version=VERSION, description="DBpedia 2014 Ontology Classification Dataset."
-        ),
-    ]
-
-    def _info(self):
-        features = datasets.Features(
-            {
-                "label": datasets.features.ClassLabel(
-                    names=[
-                        "Company",
-                        "EducationalInstitution",
-                        "Artist",
-                        "Athlete",
-                        "OfficeHolder",
-                        "MeanOfTransportation",
-                        "Building",
-                        "NaturalPlace",
-                        "Village",
-                        "Animal",
-                        "Plant",
-                        "Album",
-                        "Film",
-                        "WrittenWork",
-                    ]
-                ),
-                "title": datasets.Value("string"),
-                "content": datasets.Value("string"),
-            }
-        )
-        return datasets.DatasetInfo(
-            description=_DESCRIPTION,
-            features=features,
-            supervised_keys=None,
-            homepage=_HOMEPAGE,
-            license=_LICENSE,
-            citation=_CITATION,
-        )
-
-    def _split_generators(self, dl_manager):
-        """Returns SplitGenerators."""
-        my_urls = _URLs[self.config.name]
-        archive = dl_manager.download(my_urls)
-        return [
-            datasets.SplitGenerator(
-                name=datasets.Split.TRAIN,
-                gen_kwargs={
-                    "filepath": "dbpedia_csv/train.csv",
-                    "files": dl_manager.iter_archive(archive),
-                },
-            ),
-            datasets.SplitGenerator(
-                name=datasets.Split.TEST,
-                gen_kwargs={
-                    "filepath": "dbpedia_csv/test.csv",
-                    "files": dl_manager.iter_archive(archive),
-                },
-            ),
-        ]
-
-    def _generate_examples(self, filepath, files):
-        """Yields examples."""
-
-        for path, f in files:
-            if path == filepath:
-                lines = (line.decode("utf-8") for line in f)
-                data = csv.reader(lines, delimiter=",", quoting=csv.QUOTE_NONNUMERIC)
-                for id_, row in enumerate(data):
-                    yield id_, {
-                        "title": row[1],
-                        "content": row[2],
-                        "label": int(row[0]) - 1,
-                    }
-                break
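Note that `_generate_examples` shifted the CSV's 1-based class index to the 0-based ClassLabel used by `datasets`; the Parquet files preserve the 0-based labels. A standalone sketch of the same parse, assuming the fast.ai mirror URL from `_URLs` is still reachable:

import csv
import io
import tarfile
import urllib.request

path, _ = urllib.request.urlretrieve("https://s3.amazonaws.com/fast-ai-nlp/dbpedia_csv.tgz")
with tarfile.open(path) as tar:
    member = tar.extractfile("dbpedia_csv/train.csv")
    lines = io.TextIOWrapper(member, encoding="utf-8")
    # QUOTE_NONNUMERIC parses the unquoted class index as a float.
    for row in csv.reader(lines, delimiter=",", quoting=csv.QUOTE_NONNUMERIC):
        print({"title": row[1], "content": row[2], "label": int(row[0]) - 1})
        break  # first example only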
dbpedia_14/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:05fed41640e97f93ffd442757f6a84170348cf0c7500ecbda9e95ddcd928c631
+size 13272475
dbpedia_14/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0640e4664a99cc94c47db1d7b2e01c14455d5bbecb8183ad1f93bde59f3f28ee
+size 106151899
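These are git-lfs pointer stubs, not the Parquet bytes themselves; `size` should match the downloaded file. Once fetched locally, the table can be inspected directly, e.g. with pyarrow (a sketch; the path assumes the file layout above):

import pyarrow.parquet as pq

table = pq.read_table("dbpedia_14/train-00000-of-00001.parquet")
print(table.num_rows)      # 560000 examples per the README metadata
print(table.column_names)  # ['label', 'title', 'content']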