Simbolo committed
Commit: 14d47b5
Parent: 981e4f6

Update README.md

Files changed (1): README.md (+8 -3)
README.md CHANGED
@@ -3,6 +3,13 @@ license: mit
 language:
 - my
 pipeline_tag: text-generation
+metrics:
+- code_eval
+library_name: transformers
+tags:
+- burmese
+- gpt2
+- pre-trained
 ---
 
 Simbolo's Myanmar SAR GPT model is trained on a dataset of 1 million Burmese text samples and pre-trained using the GPT-2 architecture. It is intended to serve as a foundational pre-trained model for the Burmese language, facilitating fine-tuning for downstream tasks such as creative writing, chatbots, and machine translation.
@@ -33,6 +40,4 @@ We have yet to thoroughly investigate the potential bias inherent in this model.
 
 ### References and Citations
 1. Jiang, Shengyi, Huang, Xiuwen, Cai, Xiaonan, & Lin, Nankai. (2021). Pre-trained Models and Evaluation Data for the Myanmar Language. doi:10.1007/978-3-030-92310-5_52.
-2. Lin, N., Fu, Y., Chen, C., Yang, Z., & Jiang, S. (2021). LaoPLM: Pre-trained Language Models for Lao. arXiv:2110.05896.
-
-
+2. Lin, N., Fu, Y., Chen, C., Yang, Z., & Jiang, S. (2021). LaoPLM: Pre-trained Language Models for Lao. arXiv:2110.05896.
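
The metadata added in this commit declares `library_name: transformers` and `pipeline_tag: text-generation`, so the model should load through the standard Transformers causal-LM API. Below is a minimal usage sketch under that assumption; the repo id `Simbolo/myanmar-sar-gpt` is a hypothetical placeholder (the commit does not show the repository path), so substitute the model's actual Hugging Face id.

```python
# Minimal sketch: load the Burmese GPT-2 model and generate a continuation.
# NOTE: "Simbolo/myanmar-sar-gpt" is a placeholder repo id, not confirmed
# by this commit; replace it with the actual Hugging Face model id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Simbolo/myanmar-sar-gpt"  # hypothetical id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Encode a Burmese prompt ("Mingalaba" / hello) and sample a continuation.
inputs = tokenizer("မင်္ဂလာပါ", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```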