PedroDKE committed
Commit 6adace5
1 Parent(s): 1dad2d3

Update README.md

Files changed (1)
  1. README.md +12 -8
README.md CHANGED
@@ -83,14 +83,18 @@ the number of hours for each book aligned in this repo:<br>
 
 when using this work, please cite the original paper and the LibrivoxDeEn authors
 ```
-@misc{jeuris2022,
-title = {LibriS2S: A German-English Speech-to-Speech Translation Corpus},
-author = {Jeuris, Pedro and Niehues, Jan},
-doi = {10.48550/ARXIV.2204.10593},
-url = {https://arxiv.org/abs/2204.10593},
-publisher = {arXiv},
-year = {2022},
-copyright = {Creative Commons Attribution Non Commercial Share Alike 4.0 International}
+@inproceedings{jeuris-niehues-2022-libris2s,
+    title = "{L}ibri{S}2{S}: A {G}erman-{E}nglish Speech-to-Speech Translation Corpus",
+    author = "Jeuris, Pedro and
+      Niehues, Jan",
+    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
+    month = jun,
+    year = "2022",
+    address = "Marseille, France",
+    publisher = "European Language Resources Association",
+    url = "https://aclanthology.org/2022.lrec-1.98",
+    pages = "928--935",
+    abstract = "Recently, we have seen an increasing interest in the area of speech-to-text translation. This has led to astonishing improvements in this area. In contrast, the activities in the area of speech-to-speech translation is still limited, although it is essential to overcome the language barrier. We believe that one of the limiting factors is the availability of appropriate training data. We address this issue by creating LibriS2S, to our knowledge the first publicly available speech-to-speech training corpus between German and English. For this corpus, we used independently created audio for German and English leading to an unbiased pronunciation of the text in both languages. This allows the creation of a new text-to-speech and speech-to-speech translation model that directly learns to generate the speech signal based on the pronunciation of the source language. Using this created corpus, we propose Text-to-Speech models based on the example of the recently proposed FastSpeech 2 model that integrates source language information. We do this by adapting the model to take information such as the pitch, energy or transcript from the source speech as additional input.",
 }
 ```
 ```