RaphaelOlivier committed
Commit c692cd0
1 Parent(s): 180d08d

reduce number of splits

Files changed (1)
README.md +4 -11
README.md CHANGED
@@ -9,15 +9,7 @@ The dataset contains several splits. Each split consists of the same utterances,
 
 In addition we provide the original inputs (`natural` split)
 
-For each noise level we actually provide a split labeled with the original or "natural" correct transcriptions, and one labeled with our selected "target" transcriptions. For instance, the `adv_0.04_nat_txt` split pairs the modified audio with the original transcription, while the `adv_0.04_tgt_txt` split pairs it with the target one. An ASR model that this dataset fools would get a low target WER and a high natural WER. An ASR model robust to this dataset would get a low natural WER and a high target WER. Therefore we provide 8 splits in total:
-* `natural_nat_txt`
-* `natural_tgt_txt`
-* `adv_0.04_nat_txt`
-* `adv_0.04_tgt_txt`
-* `adv_0.015_nat_txt`
-* `adv_0.015_tgt_txt`
-* `adv_0.015_RIR_nat_txt`
-* `adv_0.015_RIR_tgt_txt`
+For each split we actually provide two text keys: `true_text`, which is the original LibriSpeech label, i.e. the sentence one can actually hear when listening to the audio; and `target_text`, which is the target sentence of our adversarial attack. An ASR model that this dataset fools would get a low WER on `target_text` and a high WER on `true_text`. An ASR model robust to this dataset would get the opposite.
 
 ## Usage
 You should evaluate your model on this dataset as you would evaluate it on LibriSpeech. Here is an example with Wav2Vec2
@@ -47,12 +39,13 @@ def map_to_pred(batch):
 
 result = librispeech_adv_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])
 
-print("WER:", wer(result["text"], result["transcription"]))
+print("WER on correct labels:", wer(result["true_text"], result["transcription"]))
+print("WER on attack targets:", wer(result["target_text"], result["transcription"]))
 ```
 
 *Result (WER)*:
 
-| "0.015 target text" | "0.015 natural text" | "0.04 target text" | "0.04 natural text" |
+| "0.015 target_text" | "0.015 true_text" | "0.04 target_text" | "0.04 true_text" |
 |---|---|---|---|
 | 58.2 | 108 | 49.5 | 108 |
 
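For reference, the hunks above show only the edges of the README's evaluation example, so here is a self-contained sketch of the workflow the new text describes. It is a reconstruction under stated assumptions: the dataset repo id and split name are placeholders not shown in this diff, and the body of `map_to_pred` follows the standard Wav2Vec2 greedy-CTC decoding recipe rather than the exact lines elided here.

```python
# Hypothetical end-to-end version of the README's evaluation example.
import torch
from datasets import load_dataset
from jiwer import wer
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder identifiers: substitute the actual dataset repo and split name.
librispeech_adv_eval = load_dataset("RaphaelOlivier/librispeech_asr_adversarial",
                                    split="adv_0.04")

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to(device)
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")

def map_to_pred(batch):
    # Standard Wav2Vec2 recipe: 16 kHz waveform in, greedy CTC decode out.
    audio = batch["audio"][0]["array"]
    input_values = processor(audio, sampling_rate=16000,
                             return_tensors="pt").input_values.to(device)
    with torch.no_grad():
        logits = model(input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    batch["transcription"] = processor.batch_decode(predicted_ids)
    return batch

result = librispeech_adv_eval.map(map_to_pred, batched=True, batch_size=1,
                                  remove_columns=["audio"])

# Every example carries both references: a fooled model scores low WER on
# target_text and high WER on true_text; a robust model shows the opposite.
print("WER on correct labels:", wer(result["true_text"], result["transcription"]))
print("WER on attack targets:", wer(result["target_text"], result["transcription"]))
```

Running this once per adversarial split yields the pairs of WER columns shown in the result table above.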