---
annotations_creators:
- expert-generated
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- other
- text-generation
- fill-mask
task_ids:
- slot-filling
pretty_name: YouTube Caption Corrections
tags:
- token-classification-of-text-errors
dataset_info:
  features:
  - name: video_ids
    dtype: string
  - name: default_seq
    sequence: string
  - name: correction_seq
    sequence: string
  - name: diff_type
    sequence:
      class_label:
        names:
          '0': NO_DIFF
          '1': CASE_DIFF
          '2': PUNCUATION_DIFF
          '3': CASE_AND_PUNCUATION_DIFF
          '4': STEM_BASED_DIFF
          '5': DIGIT_DIFF
          '6': INTRAWORD_PUNC_DIFF
          '7': UNKNOWN_TYPE_DIFF
          '8': RESERVED_DIFF
  splits:
  - name: train
    num_bytes: 355978891
    num_examples: 10769
  download_size: 49050406
  dataset_size: 355978891
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Dataset Card for YouTube Caption Corrections

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://github.com/2dot71mily/youtube_captions_corrections
- **Repository:** https://github.com/2dot71mily/youtube_captions_corrections
- **Paper:** [N/A]
- **Leaderboard:** [N/A]
- **Point of Contact:** Emily McMilin

### Dataset Summary

This dataset is built from pairs of YouTube captions where both an auto-generated and a manually-corrected caption are available for a single specified language. It currently covers only English, but the scripts in the repository support other languages. The motivation for creating it came from seeing errors in the auto-generated captions at a recent virtual conference, with the hope that there could be some way to help correct those errors.

The dataset in the repository at https://github.com/2dot71mily/youtube_captions_corrections records, in a non-destructive manner, all the differences between the auto-generated and the manually-corrected caption for thousands of videos. The dataset here focuses on the subset of those differences that are mutual and of the same token length, which means it excludes token insertions and deletions between the two captions. The dataset here therefore remains a non-destructive representation of the original auto-generated captions, but omits some of the differences found in the manually-corrected captions.

### Supported Tasks and Leaderboards

- `token-classification`: The tokens in `default_seq` come from the auto-generated YouTube captions. If `diff_type` is greater than `0` at a given index, the token at the same index in `default_seq` was found to differ from the token in the manually-corrected YouTube caption, and we therefore assume it is an error. A model can be trained to detect these errors in auto-generated captions.

- `slot-filling`: The `correction_seq` is sparsely populated with tokens from the manually-corrected YouTube captions at the locations where they differ from the auto-generated captions. The 'incorrect' tokens in `default_seq` can be masked wherever `diff_type` is greater than `0`, so that a model can be trained to fill in a better word than the 'incorrect' one (see the sketch below).

End to end, such models could first identify errors in YouTube and other auto-generated captions that lack manual corrections, and then replace them with suitable alternatives.
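
For illustration, here is a minimal sketch of turning one example into inputs for both tasks. The Hub identifier `youtube_caption_corrections` and the `[MASK]` placeholder are assumptions; use whatever identifier the dataset is hosted under and your tokenizer's actual mask token.

```python
from datasets import load_dataset

# Assumed Hub identifier; adjust to wherever the dataset is hosted.
ds = load_dataset("youtube_caption_corrections", split="train")

example = ds[0]
tokens = example["default_seq"]    # auto-generated caption tokens
diff_types = example["diff_type"]  # per-token difference labels (0-8)

# Token classification: collapse diff_type into binary "is this token an error?" labels.
error_labels = [int(d > 0) for d in diff_types]

# Slot filling: mask every token flagged as different so a model can predict a replacement.
MASK = "[MASK]"  # placeholder; use your tokenizer's mask token in practice
masked_tokens = [MASK if d > 0 else tok for tok, d in zip(tokens, diff_types)]

print(list(zip(tokens, error_labels))[:10])
print(" ".join(masked_tokens))
```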

### Languages

English

## Dataset Structure

### Data Instances

If `diff_type` is greater than `0` at a given index, the token at the same index in `default_seq` was found to differ from the token in the manually-corrected YouTube caption. The `correction_seq` is sparsely populated with tokens from the manually-corrected YouTube captions at those locations.

`diff_type` labels for tokens are as follows:

- `0`: No difference
- `1`: Case-based difference, e.g. `hello` vs `Hello`
- `2`: Punctuation difference, e.g. `hello` vs `hello,`
- `3`: Case and punctuation difference, e.g. `hello` vs `Hello,`
- `4`: Word difference with same stem, e.g. `thank` vs `thanked`
- `5`: Digit difference, e.g. `2` vs `two`
- `6`: Intra-word punctuation difference, e.g. `autogenerated` vs `auto-generated`
- `7`: Unknown type of difference, e.g. `laughter` vs `draft`
- `8`: Reserved for unspecified difference

{
    'video_ids': '_QUEXsHfsA0',
    'default_seq': ['you', 'see', "it's", 'a', 'laughter', 'but', 'by', 'the', 'time', 'you', 'see', 'this', 'it', "won't", 'be', 'so', 'we', 'have', 'a', 'big'],
    'correction_seq': ['', 'see,', '', '', 'draft,', '', '', '', '', '', 'read', 'this,', '', '', 'be.', 'So', '', '', '', ''],
    'diff_type': [0, 2, 0, 0, 7, 0, 0, 0, 0, 0, 7, 2, 0, 0, 2, 1, 0, 0, 0, 0]
}
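
A minimal sketch of how the three sequences fit together: overlaying the sparse corrections onto the auto-generated tokens recovers the size-preserving part of the manually-corrected caption.

```python
# Overlay the sparse corrections onto the auto-generated tokens wherever
# diff_type flags a difference.
def apply_corrections(default_seq, correction_seq, diff_type):
    return [corr if d > 0 else tok
            for tok, corr, d in zip(default_seq, correction_seq, diff_type)]

default_seq = ['you', 'see', "it's", 'a', 'laughter']
correction_seq = ['', 'see,', '', '', 'draft,']
diff_type = [0, 2, 0, 0, 7]

print(" ".join(apply_corrections(default_seq, correction_seq, diff_type)))
# you see, it's a draft,
```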

### Data Fields

- 'video_ids': The unique ID YouTube uses for each video. Paste it into `https://www.youtube.com/watch?v={video_ids}` to view the video
- 'default_seq': Tokenized auto-generated YouTube captions for the video
- 'correction_seq': Tokenized manually-corrected YouTube captions, populated only at the locations where they differ from the auto-generated captions
- 'diff_type': A per-token label that is greater than `0` wherever the auto-generated and manually-corrected captions differ
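
As a small illustration, a hypothetical helper (not part of the dataset scripts) for building that URL from a `video_ids` value:

```python
# Hypothetical helper: build the watch URL for an example so the video can be
# reviewed alongside its captions.
def watch_url(video_id: str) -> str:
    return f"https://www.youtube.com/watch?v={video_id}"

print(watch_url("_QUEXsHfsA0"))  # video id from the sample instance above
```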

### Data Splits

All examples are in a single `train` split; there are no separate validation or test splits.

## Dataset Creation

### Curation Rationale

The dataset was created after observing errors in the auto-generated captions at a recent virtual conference, with the hope that there could be some way to help correct those errors.

### Source Data

#### Initial Data Collection and Normalization

All captions are requested via `googleapiclient` and `youtube_transcript_api` at `channel_id` and language granularity, using the scripts at https://github.com/2dot71mily/youtube_captions_corrections.

The captions are tokenized on spaces, and the manually-corrected sequence has been reduced here to include only its differences from the auto-generated sequence.
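
A rough sketch of what fetching the two caption tracks for one video might look like with `youtube_transcript_api` (method names reflect older releases of that library and are not taken from the repo's scripts), followed by whitespace tokenization:

```python
from youtube_transcript_api import YouTubeTranscriptApi

video_id = "_QUEXsHfsA0"  # example video id from this dataset
transcripts = YouTubeTranscriptApi.list_transcripts(video_id)

# One auto-generated track and one manually-created track for the same language.
auto_track = transcripts.find_generated_transcript(["en"])
manual_track = transcripts.find_manually_created_transcript(["en"])

# Join the caption snippets and tokenize on spaces, as the dataset does.
auto_tokens = " ".join(item["text"] for item in auto_track.fetch()).split()
manual_tokens = " ".join(item["text"] for item in manual_track.fetch()).split()
```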

#### Who are the source language producers?

The auto-generated captions are produced by YouTube, and the manually-corrected captions come from the video creators and any support they may have (e.g. community or software support).

### Annotations

#### Annotation process

Scripts in the repository at https://github.com/2dot71mily/youtube_captions_corrections take a diff of the two caption sequences and use it to create the annotations.
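
As a simplified sketch of the idea (not the repo's actual implementation), a diff that keeps only same-length replacements, as this dataset does, could look like this:

```python
import difflib

def annotate(default_seq, manual_seq):
    correction_seq = [""] * len(default_seq)
    diff_type = [0] * len(default_seq)
    matcher = difflib.SequenceMatcher(a=default_seq, b=manual_seq, autojunk=False)
    for op, a0, a1, b0, b1 in matcher.get_opcodes():
        # Only mutual, size-preserving differences are annotated; insertions and
        # deletions are ignored, as in this dataset.
        if op == "replace" and (a1 - a0) == (b1 - b0):
            for i, j in zip(range(a0, a1), range(b0, b1)):
                correction_seq[i] = manual_seq[j]
                diff_type[i] = 7  # UNKNOWN_TYPE_DIFF; the real scripts assign finer classes
    return correction_seq, diff_type
```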

#### Who are the annotators?

YouTube creators, and any support they may have (e.g. community or software support)

### Personal and Sensitive Information

All content is publicly available on YouTube.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Emily McMilin

### Licensing Information

MIT License

### Citation Information

https://github.com/2dot71mily/youtube_captions_corrections

### Contributions

Thanks to [@2dot71mily](https://github.com/2dot71mily) for adding this dataset.