Martijn Bartelds committed
Commit: 46d2500
Parent: 3ead97b

Update app files

examples/falling_hood_mobiel_203936.wav CHANGED
Binary files a/examples/falling_hood_mobiel_203936.wav and b/examples/falling_hood_mobiel_203936.wav differ
examples/falling_huud_mobiel_201145.wav CHANGED
Binary files a/examples/falling_huud_mobiel_201145.wav and b/examples/falling_huud_mobiel_201145.wav differ
neural_acoustic_distance.py CHANGED
@@ -11,7 +11,7 @@ from transformers import AutoConfig
 
 st.title("Word-level Neural Acoustic Distance Visualizer")
 
-st.write("This tool visualizes pronunciation differences between two recordings of the same word. The two recordings have to be wave files (mono 16-bit PCM at 16 kHz) containing a single spoken word. \n\n\
+st.write("This tool visualizes pronunciation differences between two recordings of the same word. The two recordings have to be wave files containing a single spoken word. \n\n\
 Choose any wav2vec 2.0 compatible model identifier on the [Hugging Face Model Hub](https://huggingface.co/models?filter=wav2vec2) and select the output layer you want to use.\n\n\
 To upload your own recordings select 'custom upload' in the audio file selection step. The first recording is put on the x-axis of the plot and the second one will be the reference recording for computing distance.\n\
 You should already see an example plot of two sample recordings.\n\n\
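For context on the workflow the diffed message describes (pick a wav2vec 2.0 model identifier, select an output layer, compare two single-word recordings), here is a minimal, hypothetical sketch of the feature-extraction step. It is not the app's actual code: it assumes torch, soundfile, and transformers are installed, and the model identifier, layer index, and helper name are illustrative.

# Hypothetical sketch: per-frame activations from a chosen wav2vec 2.0 layer.
import torch
import soundfile as sf
from transformers import AutoFeatureExtractor, AutoModel

MODEL_ID = "facebook/wav2vec2-large-960h"   # any wav2vec 2.0 compatible identifier
LAYER = 10                                  # output layer selected in the app

extractor = AutoFeatureExtractor.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID).eval()

def layer_features(path):
    """Return a (frames, hidden_size) tensor for one single-word wav file."""
    audio, sr = sf.read(path)               # load the single spoken word recording
    inputs = extractor(audio, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # hidden_states[0] is the transformer input embedding; index i gives layer i's output.
    return out.hidden_states[LAYER].squeeze(0)

x = layer_features("examples/falling_hood_mobiel_203936.wav")   # plotted on the x-axis
y = layer_features("examples/falling_huud_mobiel_201145.wav")   # reference recording

A word-level distance could then be computed between the two returned feature matrices, for example by aligning frames before averaging per-frame distances; the exact procedure used by the app is defined in neural_acoustic_distance.py itself.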