system (HF staff) committed
Commit fe022c3
0 parents

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,169 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ - machine-generated
+ language_creators:
+ - machine-generated
+ languages:
+ - en
+ licenses:
+ - mit
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - other
+ - sequence-modeling
+ task_ids:
+ - other-other-token-classification-of-text-errors
+ - slot-filling
+ ---
+
+ # Dataset Card for YouTube Caption Corrections
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://github.com/2dot71mily/youtube_captions_corrections
+ - **Repository:** https://github.com/2dot71mily/youtube_captions_corrections
+ - **Paper:** [N/A]
+ - **Leaderboard:** [N/A]
+ - **Point of Contact:** Emily McMilin
+
+ ### Dataset Summary
+
+ This dataset is built from pairs of YouTube captions where both an auto-generated and a manually-corrected caption are available for a single specified language. It is currently English-only, but the scripts at the repo support other languages. The motivation for creating it came from seeing errors in auto-generated captions at a recent virtual conference, with the hope that there could be some way to help correct those errors.
+
+ The dataset in the repo at https://github.com/2dot71mily/youtube_captions_corrections records, in a non-destructive manner, all the differences between an auto-generated and a manually-corrected caption for thousands of videos. The dataset here focuses on the subset of those differences that are mutual and of the same token length, which means it excludes token insertions and deletions between the two captions. The dataset here therefore remains a non-destructive representation of the original auto-generated captions, but omits some of the differences found in the manually-corrected captions.
+
+ ### Supported Tasks and Leaderboards
+
+ - `token-classification`: The tokens in `default_seq` are from the auto-generated YouTube captions. If `diff_type` is labeled greater than `0` at a given index, then the token at the same index in `default_seq` differs from the token in the manually-corrected YouTube caption, and we therefore assume it is an error. A model can be trained to detect errors in the auto-generated captions.
+
+ - `slot-filling`: The `correction_seq` is sparsely populated with tokens from the manually-corrected YouTube captions at the locations where they differ from the auto-generated captions. The 'incorrect' tokens in `default_seq` can be masked at the locations where `diff_type` is greater than `0`, so that a model can be trained to fill in a better word than the 'incorrect' one.
+
+ End to end, such models could first identify and then replace (with suitable alternatives) errors in YouTube and other auto-generated captions that lack manual corrections, as in the sketch below.
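+
+ As a concrete starting point, here is a minimal sketch of deriving both task views from one example, assuming the dataset loads under the name `youtube_caption_corrections` via the `datasets` library:
+
+ ```python
+ from datasets import load_dataset
+
+ # Dataset name assumed from this repo; it ships a single train split.
+ ds = load_dataset("youtube_caption_corrections", split="train")
+ example = ds[0]
+
+ # token-classification view: binary per-token error labels.
+ error_labels = [int(d > 0) for d in example["diff_type"]]
+
+ # slot-filling view: mask the tokens flagged as errors and keep the
+ # manually-corrected tokens as targets. "[MASK]" is a placeholder here;
+ # use your model tokenizer's actual mask token.
+ masked_seq = [
+     "[MASK]" if d > 0 else tok
+     for tok, d in zip(example["default_seq"], example["diff_type"])
+ ]
+ targets = [c for c, d in zip(example["correction_seq"], example["diff_type"]) if d > 0]
+ ```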
+
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ If `diff_type` is labeled greater than `0` at a given index, then the token at the same index in `default_seq` differs from the token in the manually-corrected YouTube caption. The `correction_seq` is sparsely populated with tokens from the manually-corrected YouTube captions at those locations of difference.
+
+ `diff_type` labels for tokens are as follows:
+
+ - 0: No difference
+ - 1: Case-based difference, e.g. `hello` vs `Hello`
+ - 2: Punctuation difference, e.g. `hello` vs `hello,`
+ - 3: Case and punctuation difference, e.g. `hello` vs `Hello,`
+ - 4: Word difference with same stem, e.g. `thank` vs `thanked`
+ - 5: Digit difference, e.g. `2` vs `two`
+ - 6: Intra-word punctuation difference, e.g. `autogenerated` vs `auto-generated`
+ - 7: Unknown type of difference, e.g. `laughter` vs `draft`
+ - 8: Reserved for unspecified difference
+
+ An example instance:
+
+ {
+     'video_ids': '_QUEXsHfsA0',
+     'default_seq': ['you', 'see', "it's", 'a', 'laughter', 'but', 'by', 'the', 'time', 'you', 'see', 'this', 'it', "won't", 'be', 'so', 'we', 'have', 'a', 'big'],
+     'correction_seq': ['', 'see,', '', '', 'draft,', '', '', '', '', '', 'read', 'this,', '', '', 'be.', 'So', '', '', '', ''],
+     'diff_type': [0, 2, 0, 0, 7, 0, 0, 0, 0, 0, 7, 2, 0, 0, 2, 1, 0, 0, 0, 0]
+ }
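+
+ The integer `diff_type` values map to named classes in the dataset's features; a small sketch of decoding them (dataset name assumed from this repo):
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("youtube_caption_corrections", split="train")
+ # `diff_type` is a Sequence of ClassLabel; `.feature` exposes the ClassLabel.
+ diff_label = ds.features["diff_type"].feature
+
+ example = ds[0]
+ for tok, d in zip(example["default_seq"], example["diff_type"]):
+     if d > 0:
+         print(tok, "->", diff_label.int2str(d))  # e.g. "see -> PUNCUATION_DIFF"
+ ```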
+
+ ### Data Fields
+
+ - 'video_ids': Unique ID used by YouTube for each video. Paste it into `https://www.youtube.com/watch?v={video_ids}` to see the video
+ - 'default_seq': Tokenized auto-generated YouTube captions for the video
+ - 'correction_seq': Tokenized manually-corrected YouTube captions, populated only at those locations where they differ from the auto-generated captions
+ - 'diff_type': A value greater than `0` at every token where there is a difference between the auto-generated and manually-corrected captions
+
+ ### Data Splits
+
+ No data splits: the dataset ships as a single `train` split, as sketched below.
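+
+ A minimal sketch of loading it and, if needed, carving out a validation set yourself (the split size here is an arbitrary choice):
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("youtube_caption_corrections", split="train")
+ splits = ds.train_test_split(test_size=0.1, seed=42)
+ train_ds, val_ds = splits["train"], splits["test"]
+ ```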
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ It was created after viewing errors in auto-generated captions at a recent virtual conference, with the hope that there could be some way to help correct those errors.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ All captions are requested via `googleapiclient` and `youtube_transcript_api` at the `channel_id` and language granularity, using scripts written at https://github.com/2dot71mily/youtube_captions_corrections.
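+
+ For illustration, a minimal sketch of collecting one such caption pair for a single video with `youtube_transcript_api` (the repo's scripts do this at channel and language granularity; the video ID below is just the example from above):
+
+ ```python
+ from youtube_transcript_api import YouTubeTranscriptApi
+
+ video_id = "_QUEXsHfsA0"  # example ID taken from the data instance above
+ transcripts = YouTubeTranscriptApi.list_transcripts(video_id)
+
+ # A pair only exists when both an auto-generated and a manually-created
+ # transcript are available in the target language.
+ auto = transcripts.find_generated_transcript(["en"]).fetch()
+ manual = transcripts.find_manually_created_transcript(["en"]).fetch()
+
+ auto_text = " ".join(snippet["text"] for snippet in auto)
+ manual_text = " ".join(snippet["text"] for snippet in manual)
+ ```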
+
+ The captions are tokenized on spaces, and the manually-corrected sequence has been reduced here to include only its differences from the auto-generated sequence.
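+
+ An illustrative sketch (not the repo's actual code) of that reduction for two space-tokenized sequences that happen to be the same length:
+
+ ```python
+ default_seq = "so we have a big".split()  # auto-generated tokens
+ manual_seq = "So we have a big".split()   # manually-corrected tokens
+
+ # Keep a corrected token only where it differs; empty string elsewhere.
+ correction_seq = [
+     manual if manual != default else ""
+     for default, manual in zip(default_seq, manual_seq)
+ ]
+ # correction_seq == ['So', '', '', '', '']
+ ```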
+
+ #### Who are the source language producers?
+
+ The auto-generated captions are from YouTube; the manually-corrected captions are from the video creators and any support they may have (e.g. community or software support)
+
+ ### Annotations
+
+ #### Annotation process
+
+ Scripts at the repo, https://github.com/2dot71mily/youtube_captions_corrections, take a diff of the two captions and use it to create the annotations, as sketched below.
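+
+ An illustrative sketch (not the repo's actual code) of isolating the two-way, same-length token differences that this dataset keeps, using Python's `difflib`:
+
+ ```python
+ import difflib
+
+ default_seq = "you see it's a laughter".split()
+ manual_seq = "you see, it's a draft,".split()
+
+ matcher = difflib.SequenceMatcher(a=default_seq, b=manual_seq)
+ for op, a0, a1, b0, b1 in matcher.get_opcodes():
+     # 'replace' spans of equal length are mutual differences with no
+     # token insertion or deletion; these are the ones annotated here.
+     if op == "replace" and (a1 - a0) == (b1 - b0):
+         print(default_seq[a0:a1], "->", manual_seq[b0:b1])
+ ```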
+
+ #### Who are the annotators?
+
+ YouTube creators, and any support they may have (e.g. community or software support)
+
+ ### Personal and Sensitive Information
+
+ All content is publicly available on YouTube
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Emily McMilin
+
+ ### Licensing Information
+
+ MIT License
+
+ ### Citation Information
+
+ https://github.com/2dot71mily/youtube_captions_corrections
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "Dataset built from pairs of YouTube captions where both 'auto-generated' and\n'manually-corrected' captions are available for a single specified language.\nThis dataset labels two-way (e.g. ignoring single-sided insertions) same-length\ntoken differences in the `diff_type` column. The `default_seq` is composed of\ntokens from the 'auto-generated' captions. When a difference occurs between\nthe 'auto-generated' vs 'manually-corrected' captions types, the `correction_seq`\ncontains tokens from the 'manually-corrected' captions.\n", "citation": "", "homepage": "https://github.com/2dot71mily/youtube_captions_corrections", "license": "MIT License", "features": {"video_ids": {"dtype": "string", "id": null, "_type": "Value"}, "default_seq": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "correction_seq": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "diff_type": {"feature": {"num_classes": 9, "names": ["NO_DIFF", "CASE_DIFF", "PUNCUATION_DIFF", "CASE_AND_PUNCUATION_DIFF", "STEM_BASED_DIFF", "DIGIT_DIFF", "INTRAWORD_PUNC_DIFF", "UNKNOWN_TYPE_DIFF", "RESERVED_DIFF"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": {"input": "correction_seq", "output": "diff_type"}, "builder_name": "youtube_caption_corrections", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 355978939, "num_examples": 10769, "dataset_name": "youtube_caption_corrections"}}, "download_checksums": {"https://raw.githubusercontent.com/2dot71mily/youtube_captions_corrections/v1.0/data/transcripts/en/split/youtube_caption_corrections_0.json": {"num_bytes": 59854510, "checksum": "2ad86bec2bbc13275115f0aec2dba73985576efc580fdf02fd33502d99ece5f5"}, "https://raw.githubusercontent.com/2dot71mily/youtube_captions_corrections/v1.0/data/transcripts/en/split/youtube_caption_corrections_1.json": {"num_bytes": 54259781, "checksum": "41c1084beaea852789bf6946411b45b90ca0b3d047f23812848584b76850bcdc"}, "https://raw.githubusercontent.com/2dot71mily/youtube_captions_corrections/v1.0/data/transcripts/en/split/youtube_caption_corrections_2.json": {"num_bytes": 51478061, "checksum": "4c7af974482402009fccbf793021fa88a4d30b087e7f0280099a6d0dbb41d1b4"}, "https://raw.githubusercontent.com/2dot71mily/youtube_captions_corrections/v1.0/data/transcripts/en/split/youtube_caption_corrections_3.json": {"num_bytes": 56887103, "checksum": "4e462ac96437966d45561b34ebb7776d28b693ffe26d64f24510c6b8d4da8ad4"}}, "download_size": 222479455, "post_processing_size": null, "dataset_size": 355978939, "size_in_bytes": 578458394}}
dummy/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:092df39025cb986bbdba38f8a9f51105e9a916b6b86ae50938242f3d078a652a
+ size 33080
youtube_caption_corrections.py ADDED
@@ -0,0 +1,105 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Dataset built from <auto-generated, manually corrected> caption pairs of
+ YouTube videos with labels capturing the differences between the two."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import json
+
+ import datasets
+
+
+ _CITATION = ""
+
+ _DESCRIPTION = """\
+ Dataset built from pairs of YouTube captions where both 'auto-generated' and
+ 'manually-corrected' captions are available for a single specified language.
+ This dataset labels two-way (e.g. ignoring single-sided insertions) same-length
+ token differences in the `diff_type` column. The `default_seq` is composed of
+ tokens from the 'auto-generated' captions. When a difference occurs between
+ the 'auto-generated' vs 'manually-corrected' captions types, the `correction_seq`
+ contains tokens from the 'manually-corrected' captions.
+ """
+
+ _LICENSE = "MIT License"
+
+ _RELEASE_TAG = "v1.0"
+ _NUM_FILES = 4
+ _URLS = [
+     f"https://raw.githubusercontent.com/2dot71mily/youtube_captions_corrections/{_RELEASE_TAG}/data/transcripts/en/split/youtube_caption_corrections_{i}.json"
+     for i in range(_NUM_FILES)
+ ]
+
+
+ class YoutubeCaptionCorrections(datasets.GeneratorBasedBuilder):
+     """YouTube captions corrections."""
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "video_ids": datasets.Value("string"),
+                     "default_seq": datasets.Sequence(datasets.Value("string")),
+                     "correction_seq": datasets.Sequence(datasets.Value("string")),
+                     "diff_type": datasets.Sequence(
+                         datasets.features.ClassLabel(
+                             names=[
+                                 "NO_DIFF",
+                                 "CASE_DIFF",
+                                 "PUNCUATION_DIFF",
+                                 "CASE_AND_PUNCUATION_DIFF",
+                                 "STEM_BASED_DIFF",
+                                 "DIGIT_DIFF",
+                                 "INTRAWORD_PUNC_DIFF",
+                                 "UNKNOWN_TYPE_DIFF",
+                                 "RESERVED_DIFF",
+                             ]
+                         )
+                     ),
+                 }
+             ),
+             supervised_keys=("correction_seq", "diff_type"),
+             homepage="https://github.com/2dot71mily/youtube_captions_corrections",
+             license=_LICENSE,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         downloaded_filepaths = dl_manager.download_and_extract(_URLS)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"filepaths": downloaded_filepaths},
+             ),
+         ]
+
+     def _generate_examples(self, filepaths):
+         """Yields examples."""
+         # Each downloaded file is JSON-lines: every line holds a JSON list
+         # of caption-pair records. A single running key keeps example keys
+         # unique across files and lines.
+         key = 0
+         for fp in filepaths:
+             with open(fp, "r", encoding="utf-8") as json_file:
+                 json_lists = list(json_file)
+             for json_list_str in json_lists:
+                 json_list = json.loads(json_list_str)
+                 for result in json_list:
+                     response = {
+                         "video_ids": result["video_ids"],
+                         "diff_type": result["diff_type"],
+                         "default_seq": result["default_seq"],
+                         "correction_seq": result["correction_seq"],
+                     }
+                     yield key, response
+                     key += 1