# Buffala-LoRa-TH
Buffala-LoRA is a 7B-parameter LLaMA model fine-tuned to follow instructions. It is trained on the Stanford Alpaca (TH translated), WikiTH, Pantip, and IAppQ&A datasets and makes use of the Hugging Face LLaMA implementation. For more information, please visit [the project's website](https://github.com/tloen/alpaca-lora).
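Since the model is instruction-tuned on Alpaca-style data, prompts should follow an instruction template. The README does not spell out the exact template, so the sketch below assumes the standard Stanford Alpaca format used by the upstream alpaca-lora project; adjust it if this checkpoint was trained with a different one.

```python
def build_prompt(instruction: str, input_text: str = "") -> str:
    """Format a request using the standard Stanford Alpaca template
    (an assumption; verify against the training data for this checkpoint)."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

# Build a prompt, then pass it to the tokenizer and model.generate() as usual.
prompt = build_prompt("อธิบายว่า LoRA คืออะไร")
```

The model's answer is whatever it generates after the trailing `### Response:` marker, so strip everything up to that marker when post-processing the output.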
## Issues and what next?
- The model still lacks a significant amount of world knowledge, so it needs to be fine-tuned on larger Thai datasets. > Next version: CCNet, OSCAR, thWiki
- Currently, there is no translation prompt. We plan to fine-tune the model on the SCB Thai-English dataset soon.
- The model works well with the LangChain Search agent (SerpAPI), which serves as a hotfix for its missing world knowledge. > Plan for a Spaces demo with the search chain
- The model lacks chat capabilities; we are waiting for a LangChain implementation.