Text Generation · Adapters · Thai · instruction-finetuning
Thaweewat committed on
Commit 9910946 • 1 Parent(s): 8c5c1c4

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -15,7 +15,7 @@ datasets:
 
 # 🐃🇹🇭 Buffala-LoRa-TH
 
- Buffala-LoRA is a 7B-parameter LLaMA model finetuned to follow instructions. It is trained on the Stanford Alpaca (TH Translated), WikiTH, Pantip and IAppQ&A datasets and makes use of the Hugging Face LLaMA implementation. For more information, please visit [the project's website](https://github.com/tloen/alpaca-lora).
+ Buffala-LoRA is a 7B-parameter LLaMA model finetuned to follow instructions. It is trained on the Stanford Alpaca (TH Translated), Wisesight, WikiTH, Pantip and IAppQ&A datasets and makes use of the Hugging Face LLaMA implementation. For more information, please visit [the project's website](https://github.com/tloen/alpaca-lora).
 
 ## Issues and what next?
 - The model still lacks a significant amount of world knowledge, so it is necessary to fine-tune it on larger Thai datasets > Next version: CCNet, OSCAR, thWiki
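
Since the model is finetuned on Alpaca-style instruction data, prompting it follows the Alpaca template. A minimal sketch, assuming the prompt format from the upstream tloen/alpaca-lora project linked above (the Thai-translated data may use a localized variant of this wording):

```python
def build_prompt(instruction: str, input_text: str = "") -> str:
    """Wrap an instruction (and optional context input) in the
    Alpaca prompt template used by alpaca-lora-style models."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. "
            "Write a response that appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

# The generated text after "### Response:" is the model's answer.
prompt = build_prompt("Translate the following sentence to Thai.", "Hello")
```

The model's completion is everything it generates after the trailing `### Response:` marker, so that suffix should end the prompt exactly.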