huzy0 committed
Commit 0c23626 · verified · Parent(s): 182d1f5

Update README.md

Files changed (1)
  1. README.md +4 -3
README.md CHANGED
@@ -4,7 +4,6 @@ tags:
 - speech
 - best-rq
 - meralion
-license: mit
 language:
 - en
 ---
@@ -14,13 +13,15 @@ language:
 The MERaLiON-SpeechEncoder is a speech foundation model designed to support a wide range of downstream speech applications, like speech recognition, intent classification and speaker identification, among others. This version was trained on **200,000 hours of predominantly English data including 10,000 hours of Singapore-based speech**, to cater to the speech processing needs in Singapore and beyond. Gradual support for other languages, starting with major Southeast Asian ones are planned for subsequent releases.
 
 - **Developed by:** I<sup>2</sup>R, A\*STAR
-- **Funded by:** Singapore NRF
 - **Model type:** Speech Encoder
 - **Language(s):** English (Global & Singapore)
-- **License:** MIT
+- **License:** To be advised
 
 For details on background, pre-training, tuning experiments and evaluation, please refer to our [technical report]().
 
+Acknowledgement:
+This research is supported by the National Research Foundation, Singapore and Infocomm Media Development Authority, Singapore under its National Large Language Models Funding Initiative.
+
 ### Model Description
 
 <img src="bestrq_model_training.png" alt="model_architecture" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/>