Committed by system (HF staff)
Commit: 38bcb9b
Parent: bb7c4b5

Update files from the datasets library (from 1.3.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

Files changed (1): README.md (+153, -0)
README.md ADDED
---
---

# Dataset Card for "ted_multi"

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits Sample Size](#data-splits-sample-size)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## [Dataset Description](#dataset-description)

- **Homepage:** [https://github.com/neulab/word-embeddings-for-nmt](https://github.com/neulab/word-embeddings-for-nmt)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 335.91 MB
- **Size of the generated dataset:** 754.37 MB
- **Total amount of disk used:** 1090.27 MB

### [Dataset Summary](#dataset-summary)

A massively multilingual (60-language) dataset derived from TED Talk transcripts.
Each record consists of parallel arrays of language codes and the corresponding
translated text; missing and incomplete translations are filtered out.

### [Supported Tasks](#supported-tasks)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Languages](#languages)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Dataset Structure](#dataset-structure)

We show detailed information for up to 5 configurations of the dataset.

### [Data Instances](#data-instances)

#### plain_text

- **Size of downloaded dataset files:** 335.91 MB
- **Size of the generated dataset:** 754.37 MB
- **Total amount of disk used:** 1090.27 MB

An example of 'validation' looks as follows.
```
This example was too long and was cropped:

{
    "talk_name": "shabana_basij_rasikh_dare_to_educate_afghan_girls",
    "translations": "{\"language\": [\"ar\", \"az\", \"bg\", \"bn\", \"cs\", \"da\", \"de\", \"el\", \"en\", \"es\", \"fa\", \"fr\", \"he\", \"hi\", \"hr\", \"hu\", \"hy\", \"id\", \"it\", ..."
}
```
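
A full, uncropped record can be inspected directly. The snippet below is a minimal sketch, assuming the dataset is available to `datasets.load_dataset` under the `ted_multi` name; `translations` is accessed here as two parallel lists (`language` and `translation`).

```python
from datasets import load_dataset

# Load all splits of ted_multi ("plain_text" is the configuration shown above).
dataset = load_dataset("ted_multi")

# Look at one validation record: parallel lists of language codes and sentences,
# plus the name of the source talk.
example = dataset["validation"][0]
print(example["talk_name"])
print(example["translations"]["language"][:5])
print(example["translations"]["translation"][:5])
```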

### [Data Fields](#data-fields)

The data fields are the same among all splits.

#### plain_text
- `translations`: a multilingual feature holding parallel lists of language codes and translated strings, with possible languages including `ar`, `az`, `be`, `bg`, `bn` (see the pairing sketch below).
- `talk_name`: a `string` feature.

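Because languages and sentences are stored as parallel lists rather than fixed columns, extracting a specific language pair takes a little glue code. The following is a rough sketch under the same loading assumption as above; the helper `extract_pair` and the `en`/`fr` choice are illustrative, not part of the dataset itself.

```python
from datasets import load_dataset


def extract_pair(example, src="en", tgt="fr"):
    """Return a (src, tgt) sentence pair for one record, or None if either side is missing."""
    lookup = dict(zip(example["translations"]["language"],
                      example["translations"]["translation"]))
    if src in lookup and tgt in lookup:
        return lookup[src], lookup[tgt]
    return None


validation = load_dataset("ted_multi", split="validation")
pairs = [pair for pair in (extract_pair(ex) for ex in validation) if pair is not None]
print(f"{len(pairs)} en-fr pairs out of {len(validation)} records")
```
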
### [Data Splits Sample Size](#data-splits-sample-size)

|   name   |  train | validation | test |
|----------|-------:|-----------:|-----:|
|plain_text| 258098 |       6049 | 7213 |

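The split sizes in the table can be double-checked after loading; a short sketch, again assuming the dataset loads by name as above:

```python
from datasets import load_dataset

dataset = load_dataset("ted_multi")

# Row counts per split; these should match the table above.
for split_name, split in dataset.items():
    print(split_name, split.num_rows)
```
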
## [Dataset Creation](#dataset-creation)

### [Curation Rationale](#curation-rationale)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Source Data](#source-data)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Annotations](#annotations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Personal and Sensitive Information](#personal-and-sensitive-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Considerations for Using the Data](#considerations-for-using-the-data)

### [Social Impact of Dataset](#social-impact-of-dataset)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Discussion of Biases](#discussion-of-biases)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Other Known Limitations](#other-known-limitations)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## [Additional Information](#additional-information)

### [Dataset Curators](#dataset-curators)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Licensing Information](#licensing-information)

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### [Citation Information](#citation-information)

```
@InProceedings{qi-EtAl:2018:N18-2,
  author    = {Qi, Ye and Sachan, Devendra and Felix, Matthieu and Padmanabhan, Sarguna and Neubig, Graham},
  title     = {When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?},
  booktitle = {Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)},
  month     = {June},
  year      = {2018},
  address   = {New Orleans, Louisiana},
  publisher = {Association for Computational Linguistics},
  pages     = {529--535},
  abstract  = {The performance of Neural Machine Translation (NMT) systems often suffers in low-resource scenarios where sufficiently large-scale parallel corpora cannot be obtained. Pre-trained word embeddings have proven to be invaluable for improving performance in natural language analysis tasks, which often suffer from paucity of data. However, their utility for NMT has not been extensively explored. In this work, we perform five sets of experiments that analyze when we can expect pre-trained word embeddings to help in NMT tasks. We show that such embeddings can be surprisingly effective in some cases -- providing gains of up to 20 BLEU points in the most favorable setting.},
  url       = {http://www.aclweb.org/anthology/N18-2084}
}
```

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.