artitsu committed on
Commit
369e7ad
1 Parent(s): c4f2294

Update README.md

Files changed (1)
  1. README.md +3 -5
README.md CHANGED
@@ -24,18 +24,16 @@ The training recipe was based on wsj recipe in [espnet](https://github.com/espnet/espnet)
 
 This model is Hybrid CTC/Attention model with pre-trained HuBERT as the encoder.
 
- This model trained on Thai-central for being the supervised pre-trained model in transfer-based curriculum learning experiment.
+ This model was trained on Thai-central to be used as a supervised pre-trained model in order to be used for finetuning to other Thai dialects. (Experiment 2 in the paper).
 
- you can demo on colab with [this link](https://colab.research.google.com/drive/1stltGdpG9OV-sCl9QgkvEXZV7fGB2Ixe?usp=sharing). (Please note that you cannot inference >4 seconds of audio with free Google colab)
+ We provide some demo code to do inference with this model on colab [here](https://colab.research.google.com/drive/1stltGdpG9OV-sCl9QgkvEXZV7fGB2Ixe?usp=sharing). (Please note that you cannot inference >4 seconds of audio with free Google colab)
 
 
 ## Evaluation
 
 <!-- This section describes the evaluation protocols and provides the results. -->
 
- For evaluation, the metrics are CER and WER. before WER evaluation, transcriptions were re-tokenized using newmm tokenizer in [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)
-
- In this reposirity, we also provide the vocabulary for building the newmm tokenizer using this script:
+ For evaluation, the metrics are CER and WER. Before WER evaluation, transcriptions were re-tokenized using newmm tokenizer in [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)
 
 ```python
 from pythainlp import word_tokenize
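
For readers who cannot open the linked Colab notebook, below is a minimal sketch of what inference with an ESPnet2 hybrid CTC/attention model typically looks like using the `Speech2Text` interface. The config and checkpoint paths, decoding weights, and the audio file name are illustrative assumptions, not files confirmed by this repository.

```python
# Minimal ESPnet2 inference sketch (paths and decoding parameters are assumptions, not from this repo).
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text(
    asr_train_config="exp/asr_train/config.yaml",       # hypothetical path to the training config
    asr_model_file="exp/asr_train/valid.acc.ave.pth",    # hypothetical path to the model checkpoint
    ctc_weight=0.3,   # CTC/attention interpolation weight used at decoding time
    beam_size=10,
)

# Load a short mono recording (the README notes that long inputs may not fit on free Colab).
speech, rate = sf.read("sample_thai_central.wav")
nbests = speech2text(speech)
text, tokens, token_ids, hyp = nbests[0]
print(text)
```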
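The removed line above mentions a script for building the newmm tokenizer from a provided vocabulary, but the hunk cuts off right after the import. Below is a minimal sketch of the re-tokenization step described in the evaluation paragraph, assuming the vocabulary is a plain-text file with one word per line; the file name `vocab.txt` and the helper name `retokenize` are illustrative, not taken from the repository.

```python
from pythainlp import word_tokenize
from pythainlp.util import dict_trie

# Hypothetical vocabulary file (one Thai word per line); the repo's actual file name may differ.
with open("vocab.txt", encoding="utf-8") as f:
    vocab = [line.strip() for line in f if line.strip()]

# Build a custom dictionary trie for the newmm word segmenter.
custom_dict = dict_trie(dict_source=vocab)

def retokenize(text: str) -> str:
    """Re-segment a Thai transcription into space-separated words with newmm."""
    tokens = word_tokenize(text, custom_dict=custom_dict, engine="newmm", keep_whitespace=False)
    return " ".join(tokens)

# Re-tokenize both reference and hypothesis before scoring;
# WER can then be computed on the space-separated strings, e.g. with jiwer.wer(ref, hyp).
ref = retokenize("สวัสดีครับ")
hyp = retokenize("สวัสดีครับ")
```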