Pierre Colombo committed on
Commit 23d9f85
1 Parent(s): 2299233

Update documentation card of miam dataset (#4846)


* Update README.md

* Fix dataset card

Co-authored-by: Albert Villanova del Moral <8515462+albertvillanova@users.noreply.github.com>

Commit from https://github.com/huggingface/datasets/commit/5caced4d733d2b49f3bd2572512b7c15cb22d865

Files changed (1)
  1. README.md +21 -10
README.md CHANGED
@@ -240,9 +240,9 @@ For the `vm2` configuration, the different fields are:
 
 ## Additional Information
 
-### Benchmark Curators
+### Dataset Curators
 
-Anonymous
+Anonymous.
 
 ### Licensing Information
 
@@ -251,13 +251,24 @@ This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareA
 ### Citation Information
 
 ```
-@unpublished{
-anonymous2021cross-lingual,
-title={Cross-Lingual Pretraining Methods for Spoken Dialog},
-author={Anonymous},
-journal={OpenReview Preprint},
-year={2021},
-url{https://openreview.net/forum?id=c1oDhu_hagR},
-note={anonymous preprint under review}
+@inproceedings{colombo-etal-2021-code,
+    title = "Code-switched inspired losses for spoken dialog representations",
+    author = "Colombo, Pierre  and
+      Chapuis, Emile  and
+      Labeau, Matthieu  and
+      Clavel, Chlo{\'e}",
+    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
+    month = nov,
+    year = "2021",
+    address = "Online and Punta Cana, Dominican Republic",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2021.emnlp-main.656",
+    doi = "10.18653/v1/2021.emnlp-main.656",
+    pages = "8320--8337",
+    abstract = "Spoken dialogue systems need to be able to handle both multiple languages and multilinguality inside a conversation (\textit{e.g} in case of code-switching). In this work, we introduce new pretraining losses tailored to learn generic multilingual spoken dialogue representations. The goal of these losses is to expose the model to code-switched language. In order to scale up training, we automatically build a pretraining corpus composed of multilingual conversations in five different languages (French, Italian, English, German and Spanish) from OpenSubtitles, a huge multilingual corpus composed of 24.3G tokens. We test the generic representations on MIAM, a new benchmark composed of five dialogue act corpora on the same aforementioned languages as well as on two novel multilingual tasks (\textit{i.e} multilingual mask utterance retrieval and multilingual inconsistency identification). Our experiments show that our new losses achieve a better performance in both monolingual and multilingual settings.",
 }
 ```
+
+### Contributions
+
+Thanks to [@eusip](https://github.com/eusip) and [@PierreColombo](https://github.com/PierreColombo) for adding this dataset.