wannaphong committed
Commit 7547bc8
1 Parent(s): 23e52bb

Update README.md

Files changed (1)
  1. README.md +3 -4
README.md CHANGED
@@ -8,16 +8,15 @@ datasets:
   - wannaphong/mark13
 ---
 
-# NumFaLM 3B
-
-NumFaLM 3B is a bilingual language model trained on Thai and English. The architecture model is Llama model that pretraining from scratch. It was build to open source AI and research for bilingual language model and improve small language model. We released the training srcipt and train datasets to you can research the training and datasets.
+## NumFaLM 3B
+NumFaLM 3B is a bilingual language model trained on Thai and English. It uses the Llama architecture and was pretrained from scratch. It was built to open-source AI and to support research on bilingual and small language models. We release the training script and training datasets so you can study the training process and the data.
 
 - GitHub: [https://github.com/wannaphong/NumFaLM](https://github.com/wannaphong/NumFaLM)
 - Training script: [https://github.com/wannaphong/EasyLM/tree/numfa_pretraining](https://github.com/wannaphong/EasyLM/tree/numfa_pretraining)
 - Train Datasets: [wannaphong/mark13](https://huggingface.co/datasets/wannaphong/mark13)
 
 
-We fork EasyLM and add training by HuggingFace datasets but HuggingFace was down many time in during the time we train the model, so we can trained just 1 epoch. The model trained 1 epoch.
+We forked EasyLM and added training from HuggingFace datasets, but HuggingFace was down many times while we were training the model, so it was trained for only one epoch.
 
 # Acknowledgements
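
A minimal sketch of loading a Llama-architecture checkpoint like NumFaLM 3B with the Hugging Face transformers library; the repo id used here is a hypothetical placeholder, since the actual model repo id is not stated in this diff.

```python
# Minimal sketch: loading a Llama-architecture causal LM with transformers.
# The repo id below is an assumption, not taken from this commit; replace it
# with the actual NumFaLM 3B model repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "wannaphong/NumFaLM-3B"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# Generate a short continuation as a quick sanity check of the bilingual model.
inputs = tokenizer("Bangkok is the capital of", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```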