system HF staff committed on
Commit
1ae4115
0 Parent(s):

Update files from the datasets library (from 1.17.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.17.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,201 @@
+ ---
+ annotations_creators:
+ - found
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - unknown
+ multilinguality:
+ - monolingual
+ paperswithcode_id: null
+ pretty_name: ELI5-Category
+ size_categories:
+ - 100K<n<1M
+ source_datasets:
+ - extended|eli5
+ task_categories:
+ - question-answering
+ task_ids:
+ - abstractive-qa
+ - open-domain-qa
+ ---
+
+ # Dataset Card for ELI5-Category
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [ELI5-Category homepage](https://celeritasml.netlify.app/posts/2021-12-01-eli5c/)
+ - **Repository:** [ELI5-Category repository](https://github.com/rexarski/ANLY580-final-project)
+ - **Point of Contact:** [Jingsong Gao](mailto:jg2109@georgetown.edu)
+
+ ### Dataset Summary
+
+ The ELI5-Category dataset is a smaller but newer and categorized version of the original ELI5 dataset. It is an English-language dataset of questions and answers gathered from the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit, where users ask factual questions requiring paragraph-length or longer answers. After 2017, a tagging system was introduced to this subreddit so that questions can be categorized into different topics according to their tags. Since the training and validation sets are built from questions in different topics, the dataset is expected to alleviate the train/validation overlap issue in the original [ELI5 dataset](https://huggingface.co/datasets/eli5).
+
+ ### Supported Tasks and Leaderboards
+
+ - `abstractive-qa`, `open-domain-qa`: The dataset can be used to train a model for Open-Domain Long-Form Question Answering. An LFQA model is presented with a non-factoid question and asked to retrieve relevant information from a knowledge source (such as [Wikipedia](https://www.wikipedia.org/)), then use it to generate a multi-sentence answer.
+
+ ### Languages
+
+ The text in the dataset is in English, as spoken by Reddit users on the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit. The associated BCP-47 code is `en`.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ The structure of this dataset is very similar to that of the original [ELI5 dataset](https://huggingface.co/datasets/eli5). A typical data point comprises a question, with a `title` containing the main question and a `selftext` that sometimes elaborates on it, and a list of answers from the forum sorted by the scores they obtained. Additionally, the URLs in each of the text fields have been extracted to respective lists and replaced by generic tokens in the text.
+ In addition to the fields of the original ELI5 dataset, each data point also has a `category` field. There are 11 common values of `category` in this dataset: `Biology`, `Chemistry`, `Culture`, `Earth Science`, `Economics`, `Engineering`, `Mathematics`, `Other`, `Physics`, `Psychology`, `Technology`, and a special value, `Repost`, which indicates that the same question has been asked before.
+
+ An example from the ELI5-Category set looks as follows:
+ ```
+ {'q_id': '5lcm18',
+  'title': 'Why do old games running on new hardware still have technical issues ?',
+  'selftext': 'I am playing some mega man games on my Xbox One and experience slowdown when there are a lot of enemies on screen . but the Xbox One is significantly more powerful than the NES , so why is there still slowdown on this hardware ?',
+  'category': 'Engineering',
+  'subreddit': 'explainlikeimfive',
+  'answers': {'a_id': ['dbuo48e', 'dbusfve'],
+   'text': ["The XBox is emulating NES hardware and running the emulation at a set speed . If it ran it at as fast as possible , then it would be several times faster than the original NES game and would be unplayable . I ca n't speak for Mega Man exactly , but older games tended to run on a cycle locked to the screen refresh which was a fixed 60Hz or 50Hz . There was only one piece of hardware they ran on , so there was no need to adjust for different hardware speeds .",
+    "In that case , it 's probably on purpose - they want to emulate the experience as closely as possible , even including the slowdown and sprite flickering . Some emulators let you turn it off , but it 's usually turned on by default . In other cases , like if you 're trying to emulate PS2 games on your PC , the game might just run really slow in general . Even though your PC is way more powerful than a PS2 , it has to \" translate \" from PS2 language to PC language in realtime , which is much more difficult than running PS2 code on the PS2 itself ."],
+   'score': [13, 3],
+   'text_urls': [[],[]]},
+  'title_urls': {'url': []},
+  'selftext_urls': {'url': []}}
+ ```
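
Once loaded (for example via `datasets.load_dataset("eli5_category")`), each record can be handled as a plain dict. A minimal sketch that picks the highest-scored answer, using an abridged copy of the record above (the answer texts are shortened here for brevity):

```python
# Abridged version of the example record shown above.
example = {
    "q_id": "5lcm18",
    "title": "Why do old games running on new hardware still have technical issues ?",
    "answers": {
        "a_id": ["dbuo48e", "dbusfve"],
        "text": ["The XBox is emulating NES hardware ...", "In that case , it 's probably on purpose ..."],
        "score": [13, 3],
    },
}

# The answer fields are parallel lists; zip them into triples and
# keep the one with the highest score.
answers = example["answers"]
best = max(
    zip(answers["a_id"], answers["text"], answers["score"]),
    key=lambda triple: triple[2],
)
print(best[0], best[2])  # dbuo48e 13
```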
+
+ ### Data Fields
+
+ - `q_id`: a string question identifier for each example, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/submissions/) Reddit submission dumps
+ - `subreddit`: always `explainlikeimfive`, indicating which subreddit the question came from
+ - `category`: tag of the question; the possible values are listed above
+ - `title`: title of the question, with URLs extracted and replaced by `URL_n` tokens
+ - `title_urls`: list of the extracted URLs, where the `n`th element of the list was replaced by `URL_n`
+ - `selftext`: either an empty string or an elaboration of the question
+ - `selftext_urls`: similar to `title_urls`, but for `selftext`
+ - `answers`: a list of answers, where each answer has:
+   - `a_id`: a string answer identifier, corresponding to its ID in the [Pushshift.io](https://files.pushshift.io/reddit/comments/) Reddit comments dumps
+   - `text`: the answer text with the URLs normalized
+   - `score`: the number of upvotes minus the number of downvotes the answer had received when the dumps were created
+   - `text_urls`: lists of the extracted URLs for every answer
+
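The URL extraction described above can be undone mechanically. A small sketch under the assumption that the placeholder is spelled exactly `URL_n` as in the field descriptions (the raw data may format the token slightly differently, and the sample strings here are hypothetical):

```python
import re

def restore_urls(text, urls):
    # Replace each URL_n token with the n-th extracted URL.
    # Assumes the token spelling given in the field descriptions above.
    return re.sub(r"URL_(\d+)", lambda m: urls[int(m.group(1))], text)

title = "Explained here URL_0 and here URL_1"
urls = ["https://example.com/a", "https://example.com/b"]
print(restore_urls(title, urls))
# Explained here https://example.com/a and here https://example.com/b
```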
+ ### Data Splits
+
+ To avoid duplicate questions across sets, three non-overlapping subsets of `category` are used for the training, validation, and test sets. In addition, a second validation set contains all the questions in the `Repost` category. A valid retriever-generator model should have consistent performance on both validation sets.
+ The final split sizes are as follows:
+
+ |                 | Train | Valid | Valid2 | Test |
+ | --------------- | ----- | ----- | ------ | ---- |
+ | `Biology`       | 32769 |       |        |      |
+ | `Chemistry`     | 6633  |       |        |      |
+ | `Culture`       |       | 5446  |        |      |
+ | `Earth Science` | 677   |       |        |      |
+ | `Economics`     | 5901  |       |        |      |
+ | `Engineering`   |       |       |        | 5411 |
+ | `Mathematics`   | 1912  |       |        |      |
+ | `Other`         | 19312 |       |        |      |
+ | `Physics`       | 10196 |       |        |      |
+ | `Psychology`    | 338   |       |        |      |
+ | `Technology`    | 14034 |       |        |      |
+ | `Repost`        |       |       | 2375   |      |
+ | **Total**       | 91772 | 5446  | 2375   | 5411 |
+
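The table above can be sanity-checked in code: the category subsets assigned to train, valid, and test are disjoint, and the per-category counts sum to the Total row. A small sketch transcribing the table's numbers:

```python
# Per-category example counts, transcribed from the split table above.
train = {"Biology": 32769, "Chemistry": 6633, "Earth Science": 677,
         "Economics": 5901, "Mathematics": 1912, "Other": 19312,
         "Physics": 10196, "Psychology": 338, "Technology": 14034}
valid1 = {"Culture": 5446}
valid2 = {"Repost": 2375}
test = {"Engineering": 5411}

# The category subsets are disjoint across train / valid / test...
assert not (set(train) & set(valid1) | set(train) & set(test) | set(valid1) & set(test))

# ...and the per-category counts sum to the Total row.
print(sum(train.values()), sum(valid1.values()), sum(valid2.values()), sum(test.values()))
# 91772 5446 2375 5411
```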
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ ELI5-Category was built to provide a testbed for machines to learn how to answer more complex questions, which requires them to find and combine information in a coherent manner. The dataset was built by gathering questions that were asked by community members of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit, along with the answers that were provided by other users. The [rules of the subreddit](https://www.reddit.com/r/explainlikeimfive/wiki/detailed_rules) make this data particularly well suited to training a model for abstractive question answering: the questions need to seek an objective explanation about well-established facts, and the answers provided need to be understandable to a layperson without any particular domain knowledge.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The data was obtained by filtering submissions and comments from the subreddits of interest from the XML dumps of the [Reddit forum](https://www.reddit.com/) hosted on [Pushshift.io](https://files.pushshift.io/reddit/).
+
+ To further improve the quality of the selected examples, only questions with a score of at least 2 and at least one answer with a score of at least 2 were selected for the dataset. The dataset's questions and answers span the period from January 2017 to June 2021.
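
The score filter described above can be sketched as follows (a simplified stand-in with hypothetical records; the actual collection pipeline is not part of this repository):

```python
def keep(question):
    # Keep questions scoring >= 2 that have at least one answer scoring >= 2,
    # mirroring the quality filter described above.
    return question["score"] >= 2 and any(s >= 2 for s in question["answers"]["score"])

# Hypothetical raw submissions.
raw = [
    {"q_id": "a", "score": 5, "answers": {"score": [3, 1]}},  # kept
    {"q_id": "b", "score": 1, "answers": {"score": [4]}},     # question score too low
    {"q_id": "c", "score": 7, "answers": {"score": [1]}},     # no good answer
]
selected = [q["q_id"] for q in raw if keep(q)]
print(selected)  # ['a']
```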
+
+ #### Who are the source language producers?
+
+ The language producers are users of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit between 2017 and 2021. No further demographic information was available from the data source.
+
+ ### Annotations
+
+ The dataset contains `category` as an additional annotation for the topic of each question.
+
+ #### Annotation process
+
+ The dataset is automatically annotated using the tags of posts in the [Reddit forum](https://www.reddit.com/).
+
+ #### Who are the annotators?
+
+ The annotators are users/administrators of the [r/explainlikeimfive](https://www.reddit.com/r/explainlikeimfive/) subreddit between 2017 and 2021. No further demographic information was available from the data source.
+
+ ### Personal and Sensitive Information
+
+ The authors removed the speaker IDs from the [Pushshift.io](https://files.pushshift.io/reddit/) dumps but did not otherwise anonymize the data. Some questions and answers concern contemporary public figures or individuals who appeared in the news.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ The dataset has a social impact similar to that of the original ELI5 dataset; see its [Social Impact of Dataset](https://huggingface.co/datasets/eli5#social-impact-of-dataset) section.
+
+ ### Discussion of Biases
+
+ The dataset has bias considerations similar to those of the original ELI5 dataset; see its [Discussion of Biases](https://huggingface.co/datasets/eli5#discussion-of-biases) section.
+
+ ### Other Known Limitations
+
+ The dataset has limitations similar to those of the original ELI5 dataset; see its [Other Known Limitations](https://huggingface.co/datasets/eli5#other-known-limitations) section.
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ The dataset was initially created by Jingsong Gao, Qinren Zhou, and Rui Qiu during a course project for `ANLY 580`: NLP for Data Analytics at Georgetown University.
+
+ ### Licensing Information
+
+ The licensing status of the dataset hinges on the legal status of the [Pushshift.io](https://files.pushshift.io/reddit/) data, which is unclear.
+
+ ### Citation Information
+
+ ```
+ @inproceedings{eli5-category,
+   author    = {Jingsong Gao and
+                Qingren Zhou and
+                Rui Qiu},
+   title     = {{ELI5-Category:} A categorized open-domain QA dataset},
+   year      = {2021}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@jingshenSN2](https://github.com/jingshenSN2), [@QinrenZhou](https://github.com/QinrenZhou), [@rexarski](https://github.com/rexarski) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "The ELI5-Category dataset is a smaller but newer and categorized version of the original ELI5 dataset. After 2017, a tagging system was introduced to this subreddit so that the questions can be categorized into different topics according to their tags. Since the training and validation set is built by questions in different topics, the dataset is expected to alleviate the train/validation overlapping issue in the original ELI5 dataset.\n", "citation": "@inproceedings{eli5-category,\n author = {Jingsong Gao and\n Qingren Zhou and\n Rui Qiu},\n title = {{ELI5-Category:} A categorized open-domain QA dataset},\n year = {2021}\n}\n", "homepage": "", "license": "", "features": {"q_id": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "selftext": {"dtype": "string", "id": null, "_type": "Value"}, "category": {"dtype": "string", "id": null, "_type": "Value"}, "subreddit": {"dtype": "string", "id": null, "_type": "Value"}, "answers": {"a_id": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "text": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "score": {"feature": {"dtype": "int32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "text_urls": {"feature": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}}, "title_urls": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "selftext_urls": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "eli5_category", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 166409797, "num_examples": 91772, "dataset_name": "eli5_category"}, "validation1": {"name": "validation1", "num_bytes": 13150585, "num_examples": 5446, "dataset_name": "eli5_category"}, "validation2": {"name": "validation2", "num_bytes": 4737744, "num_examples": 2375, "dataset_name": "eli5_category"}, "test": {"name": "test", "num_bytes": 10419098, "num_examples": 5411, "dataset_name": "eli5_category"}}, "download_checksums": {"https://jingshensn2.github.io/eli5c/datasets/eli5-category-train.json.gz": {"num_bytes": 62314944, "checksum": "9bfbed0d20608978fde8889f6383bfb695af575c81c3a3b2ec87c644928c725b"}, "https://jingshensn2.github.io/eli5c/datasets/eli5-category-validation-1.json.gz": {"num_bytes": 4997144, "checksum": "4b29d4c6eae0d474b629d1e3a825a2543491afec08da0fb0bb06cd573ce718cf"}, "https://jingshensn2.github.io/eli5c/datasets/eli5-category-validation-2.json.gz": {"num_bytes": 1759160, "checksum": "309b7d49de11494c7c7179980f710cbcb5c9a6ebed6f0a30fe2e02dca4f10009"}, "https://jingshensn2.github.io/eli5c/datasets/eli5-category-test.json.gz": {"num_bytes": 3850581, "checksum": "a2f30aa666e1570adc34315de286a67a20eaede4ba82a040521a8571c05d4d7b"}}, "download_size": 72921829, "post_processing_size": null, "dataset_size": 194717224, "size_in_bytes": 267639053}}
dummy/default/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:196da7c3a1da00997b57654a9112c55b2dc6a3855ab26c5a50c5279e8a601b2f
+ size 22695
eli5_category.py ADDED
@@ -0,0 +1,116 @@
+ # coding=utf-8
+ # Lint as: python3
+ """ELI5-Category: A categorized open-domain QA dataset."""
+
+
+ import json
+
+ import datasets
+
+
+ logger = datasets.logging.get_logger(__name__)
+
+ _CITATION = """\
+ @inproceedings{eli5-category,
+   author    = {Jingsong Gao and
+                Qingren Zhou and
+                Rui Qiu},
+   title     = {{ELI5-Category:} A categorized open-domain QA dataset},
+   year      = {2021}
+ }
+ """
+
+ _DESCRIPTION = """\
+ The ELI5-Category dataset is a smaller but newer and categorized version of the original ELI5 dataset. \
+ After 2017, a tagging system was introduced to this subreddit so that the questions can be categorized \
+ into different topics according to their tags. Since the training and validation set is built by questions \
+ in different topics, the dataset is expected to alleviate the train/validation overlapping issue \
+ in the original ELI5 dataset.
+ """
+
+
+ class ELI5CategoryConfig(datasets.BuilderConfig):
+     """BuilderConfig for ELI5Category."""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for ELI5Category.
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(ELI5CategoryConfig, self).__init__(**kwargs)
+
+
+ class ELI5Category(datasets.GeneratorBasedBuilder):
+     """ELI5-Category: A categorized open-domain QA dataset."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         ELI5CategoryConfig(
+             name="default",
+             version=datasets.Version("1.0.0"),
+             description="Default config",
+         ),
+     ]
+
+     DEFAULT_CONFIG_NAME = "default"
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "q_id": datasets.Value("string"),
+                     "title": datasets.Value("string"),
+                     "selftext": datasets.Value("string"),
+                     "category": datasets.Value("string"),
+                     "subreddit": datasets.Value("string"),
+                     "answers": {
+                         "a_id": datasets.features.Sequence(datasets.Value("string")),
+                         "text": datasets.features.Sequence(datasets.Value("string")),
+                         "score": datasets.features.Sequence(datasets.Value("int32")),
+                         "text_urls": datasets.features.Sequence(datasets.features.Sequence(datasets.Value("string"))),
+                     },
+                     "title_urls": datasets.features.Sequence(datasets.Value("string")),
+                     "selftext_urls": datasets.features.Sequence(datasets.Value("string")),
+                 }
+             ),
+             supervised_keys=None,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         _URL = "https://jingshensn2.github.io/eli5c/datasets/"
+         downloaded_files = dl_manager.download_and_extract(
+             {
+                 "train": _URL + "eli5-category-train.json.gz",
+                 "val1": _URL + "eli5-category-validation-1.json.gz",
+                 "val2": _URL + "eli5-category-validation-2.json.gz",
+                 "test": _URL + "eli5-category-test.json.gz",
+             }
+         )
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"filepath": downloaded_files["train"]},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split("validation1"),
+                 gen_kwargs={"filepath": downloaded_files["val1"]},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split("validation2"),
+                 gen_kwargs={"filepath": downloaded_files["val2"]},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={"filepath": downloaded_files["test"]},
+             ),
+         ]
+
+     def _generate_examples(self, filepath):
+         logger.info("generating examples from = %s", filepath)
+         with open(filepath, encoding="utf-8") as f:
+             example = json.load(f)
+             for id_, row in enumerate(example):
+                 yield id_, row
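
The `_generate_examples` step above amounts to reading a JSON array and yielding each record keyed by its index. A standalone sketch using only the standard library (the sample records here are hypothetical stand-ins for the downloaded `eli5-category-*.json.gz` payloads):

```python
import json
import tempfile

def generate_examples(filepath):
    # Mirror of the builder's _generate_examples: the file holds a JSON
    # array of question records, each yielded with its index as the key.
    with open(filepath, encoding="utf-8") as f:
        examples = json.load(f)
    for id_, row in enumerate(examples):
        yield id_, row

# Tiny stand-in payload (hypothetical records).
records = [
    {"q_id": "5lcm18", "category": "Engineering"},
    {"q_id": "abc123", "category": "Biology"},
]

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    json.dump(records, tmp)
    path = tmp.name

pairs = list(generate_examples(path))
print(pairs[0])  # (0, {'q_id': '5lcm18', 'category': 'Engineering'})
```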