Update README.md
README.md CHANGED
@@ -135,4 +135,71 @@ configs:
    path: es2en/test_whspbas-*
  - split: test_whsptny
    path: es2en/test_whsptny-*
license: mit
language:
- de
- es
- en
---

# [SpeechQE: Estimating the Quality of Direct Speech Translation](https://aclanthology.org/2024.emnlp-main.1218)

This is a benchmark and training corpus for the task of quality estimation for speech translation (SpeechQE).

*(We provide the test split first, and the training corpus will be provided later. If you want it sooner, please do not hesitate to ping me (hjhan@umd.edu)!)*

## E2E Models Trained with SpeechQE-CoVoST2

| Task | E2E Model | Trained Domain |
|---|---|---|
| SpeechQE for English-to-German Speech Translation | [h-j-han/SpeechQE-TowerInstruct-7B-en2de](https://huggingface.co/h-j-han/SpeechQE-TowerInstruct-7B-en2de) | CoVoST2 |
| SpeechQE for Spanish-to-English Speech Translation | [h-j-han/SpeechQE-TowerInstruct-7B-es2en](https://huggingface.co/h-j-han/SpeechQE-TowerInstruct-7B-es2en) | CoVoST2 |

## Setup

We provide code in the GitHub repo: https://github.com/h-j-han/SpeechQE
```bash
$ git clone https://github.com/h-j-han/SpeechQE.git
$ cd SpeechQE
```
```bash
$ conda create -n speechqe Python=3.11 pytorch=2.0.1 pytorch-cuda=11.7 torchvision torchaudio -c pytorch -c nvidia
$ conda activate speechqe
$ pip install -r requirements.txt
```
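
The SpeechQE checkpoints listed above are ordinary Hugging Face Hub repositories, so they can be pre-fetched before running the evaluation script. This is a generic download sketch, not part of the original instructions; inference itself is handled by the scripts in the GitHub repo.
```python
from huggingface_hub import snapshot_download

# Pre-fetch the Spanish-to-English SpeechQE checkpoint; use the en2de repo id
# for the English-to-German model instead.
local_dir = snapshot_download("h-j-han/SpeechQE-TowerInstruct-7B-es2en")
print(local_dir)
```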

## Download Audio Data

Download the audio data from Common Voice. Here, we use mozilla-foundation/common_voice_4_0.
```python
import datasets

# Spanish source audio for the es2en direction; cache_dir is where the clips are stored.
cv4en = datasets.load_dataset(
    "mozilla-foundation/common_voice_4_0", "es", cache_dir='path/to/cv4/download',
)
```
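
The English-to-German direction uses English Common Voice audio, so if you need that side as well, the same call with the `en` config should work (an assumed mirror of the snippet above, not in the original README):
```python
import datasets

# English source audio for the en2de direction (assumed to follow the same
# layout as the Spanish download above).
cv4_en_audio = datasets.load_dataset(
    "mozilla-foundation/common_voice_4_0", "en", cache_dir='path/to/cv4/download',
)
```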

## Evaluation with SpeechQE-CoVoST2

We provide the SpeechQE benchmark: [h-j-han/SpeechQE-CoVoST2](https://huggingface.co/datasets/h-j-han/SpeechQE-CoVoST2).
BASE_AUDIO_PATH is the path to the downloaded Common Voice dataset.
```bash
$ python speechqe/score_speechqe.py \
    --speechqe_model=h-j-han/SpeechQE-TowerInstruct-7B-es2en \
    --dataset_name=h-j-han/SpeechQE-CoVoST2 \
    --base_audio_path=$BASE_AUDIO_PATH \
    --dataset_config_name=es2en \
    --test_split_name=test
```
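
To inspect the benchmark itself without running the scoring script, it can also be loaded directly with 🤗 Datasets. A minimal sketch; the exact column names depend on the released schema:
```python
import datasets

# Spanish-to-English config of the benchmark; besides "test", the YAML above
# also lists "test_whspbas" and "test_whsptny" splits.
speechqe_test = datasets.load_dataset("h-j-han/SpeechQE-CoVoST2", "es2en", split="test")
print(speechqe_test)
```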

## Reference

Please find details in this paper:
```bibtex
@misc{han2024speechqe,
      title={SpeechQE: Estimating the Quality of Direct Speech Translation},
      author={HyoJung Han and Kevin Duh and Marine Carpuat},
      year={2024},
      eprint={2410.21485},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```