albertvillanova (HF staff) committed on
Commit: 659fabb
Parent(s): 1b3f602

Fix missing tags in dataset cards (#4921)


* Fix missing tags in dataset cards

* Force CI re-run

Commit from https://github.com/huggingface/datasets/commit/a0b6402e8f32a806b1ef0ff4b99fd58e54232d49

Files changed (1):
  1. README.md +46 -11
README.md CHANGED
@@ -1,5 +1,32 @@
 ---
+annotations_creators:
+- crowdsourced
+language:
+- az
+- be
+- en
+- es
+- fr
+- gl
+- he
+- it
+- pt
+- ru
+- "tr"
+language_creators:
+- expert-generated
+license:
+- cc-by-nc-nd-4.0
+multilinguality:
+- translation
 pretty_name: TEDHrlr
+size_categories:
+- 1M<n<10M
+source_datasets:
+- extended|ted_talks_iwslt
+task_categories:
+- translation
+task_ids: []
 paperswithcode_id: null
 ---
 
@@ -31,9 +58,9 @@ paperswithcode_id: null
 
 ## Dataset Description
 
-- **Homepage:** [https://github.com/neulab/word-embeddings-for-nmt](https://github.com/neulab/word-embeddings-for-nmt)
-- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+- **Homepage:**
+- **Repository:** https://github.com/neulab/word-embeddings-for-nmt
+- **Paper:** [When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?](https://aclanthology.org/N18-2084/)
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Size of downloaded dataset files:** 1749.12 MB
 - **Size of the generated dataset:** 268.61 MB
@@ -220,16 +247,24 @@ The data fields are the same among all splits.
 ### Citation Information
 
 ```
-@inproceedings{Ye2018WordEmbeddings,
-  author = {Ye, Qi and Devendra, Sachan and Matthieu, Felix and Sarguna, Padmanabhan and Graham, Neubig},
-  title = {When and Why are pre-trained word embeddings useful for Neural Machine Translation},
-  booktitle = {HLT-NAACL},
-  year = {2018},
-}
-
+@inproceedings{qi-etal-2018-pre,
+    title = "When and Why Are Pre-Trained Word Embeddings Useful for Neural Machine Translation?",
+    author = "Qi, Ye  and
+      Sachan, Devendra  and
+      Felix, Matthieu  and
+      Padmanabhan, Sarguna  and
+      Neubig, Graham",
+    booktitle = "Proceedings of the 2018 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)",
+    month = jun,
+    year = "2018",
+    address = "New Orleans, Louisiana",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/N18-2084",
+    doi = "10.18653/v1/N18-2084",
+    pages = "529--535",
+}
 ```
 
-
 ### Contributions
 
 Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
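The tags this commit adds all live in the card's YAML front matter, the block between the two `---` delimiters at the top of README.md. A minimal sketch of how such missing tags can be detected, assuming a hand-rolled front-matter parse and a hypothetical `REQUIRED_KEYS` list (this is not the actual validation code in huggingface/datasets):

```python
# Hypothetical set of required top-level front-matter keys, matching the
# tags this commit adds to the card; the real validator may differ.
REQUIRED_KEYS = {
    "annotations_creators", "language", "language_creators", "license",
    "multilinguality", "size_categories", "source_datasets",
    "task_categories", "task_ids",
}

def front_matter_keys(readme_text: str) -> set:
    """Return top-level keys of the YAML front matter delimited by '---' lines."""
    lines = readme_text.splitlines()
    assert lines[0].strip() == "---", "card must start with a front-matter block"
    keys = set()
    for line in lines[1:]:
        if line.strip() == "---":  # closing delimiter ends the block
            break
        # Top-level keys start at column 0; skip list items and indentation.
        if line and not line.startswith((" ", "-", "#")):
            keys.add(line.split(":", 1)[0].strip())
    return keys

# Abbreviated version of the front matter this commit produces.
CARD = """---
annotations_creators:
- crowdsourced
language:
- az
- en
language_creators:
- expert-generated
license:
- cc-by-nc-nd-4.0
multilinguality:
- translation
pretty_name: TEDHrlr
size_categories:
- 1M<n<10M
source_datasets:
- extended|ted_talks_iwslt
task_categories:
- translation
task_ids: []
paperswithcode_id: null
---

# Dataset Card for "ted_hrlr"
"""

missing = REQUIRED_KEYS - front_matter_keys(CARD)
print(sorted(missing))  # an empty list means no required tags are missing
```

Before the fix, the front matter carried only `pretty_name` and `paperswithcode_id`, so a check like this would have reported every key in `REQUIRED_KEYS` as missing.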