---

# Traditional Chinese Llama2

- GitHub repo: https://github.com/MIBlue119/traditional_chinese_llama2/
- This is a practice project that fine-tunes the Llama2 chat model on a Traditional Chinese instruction dataset.
- Uses QLoRA and the translated Alpaca dataset to fine-tune the llama2-7b model on an RTX 3090 (24GB VRAM) in 9 hours.

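Fine-tuning the chat model on an instruction dataset means rendering each Alpaca-style record into the Llama 2 chat prompt layout (`[INST]`/`[/INST]` with an optional `<<SYS>>` block). A minimal sketch of that formatting step, assuming the standard Alpaca fields (`instruction`, `input`, `output`); the helper name and system message are illustrative, not taken from this repo:

```python
# Illustrative helper: turn one Alpaca-style record into a single training
# string in the Llama 2 chat prompt format. The system message below is a
# placeholder assumption, not the repo's actual prompt.

def format_alpaca_record(record: dict, system: str = "You are a helpful assistant.") -> str:
    """Build a Llama 2 chat-format training example from one Alpaca record."""
    instruction = record["instruction"]
    # The optional "input" field carries extra context for the instruction.
    if record.get("input"):
        user_turn = f"{instruction}\n\n{record['input']}"
    else:
        user_turn = instruction
    return (
        f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n"
        f"{user_turn} [/INST] {record['output']} </s>"
    )

example = {
    "instruction": "將下列句子翻譯成英文。",
    "input": "今天天氣很好。",
    "output": "The weather is nice today.",
}
print(format_alpaca_record(example))
```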
Thanks for these references:

- NTU NLP Lab's Alpaca dataset: [alpaca-tw_en-align.json](./alpaca-tw-en-align.json): [ntunpllab](https://github.com/ntunlplab/traditional-chinese-alpaca) translated the Stanford Alpaca 52k dataset
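The QLoRA recipe mentioned above (4-bit quantized base model plus trainable LoRA adapters) can be sketched with `transformers` and `peft`. This is a configuration sketch only, not the repo's exact script: the hyperparameters (`r`, `lora_alpha`, target modules) are illustrative assumptions, and running it requires GPU access plus the gated Llama 2 weights.

```python
# Sketch of a QLoRA setup: NF4 4-bit quantization via bitsandbytes,
# LoRA adapters via peft. Hyperparameters here are assumptions for
# illustration, not values confirmed by this repo.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize base weights to 4 bits
    bnb_4bit_quant_type="nf4",              # NormalFloat4, as used by QLoRA
    bnb_4bit_use_double_quant=True,         # quantize the quantization constants too
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype for matmuls
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # adapters on attention projections
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the LoRA weights are trainable
```

Because only the adapter weights are trained while the base model stays frozen in 4-bit, the 7B model fits in the 24GB VRAM of an RTX 3090.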