Update README.md
README.md
CHANGED
@@ -63,8 +63,13 @@ Example:
 * 1 epoch
 * From chat LLaMA-2-7b

-# llama-2-13b-tagalog-v0.3 loras (09/01/2023)
-* Fine tuned on
-* 3
+# llama-2-13b-tagalog-v0.3 loras (09/01-02/2023)
+* Fine tuned on experimental datasets of ~1k items (Tagalog-focused dataset, based off Tagalog sentences augmented by LLaMA-2-13b base to create a 3-turn dialogue dataset between Human and Assistant)
+* 3 fine-tuned for 1 epoch, rank = 16
+* 3a for 1 epoch, rank = 8
+* 3b for 2 epochs
+* 3c for 1 epoch, lr = 1e-4, warmup steps = 0.1
+* 3d for 1 epoch, lr = 2e-4, warmup steps = 0.1, rank = 32, lora alpha = 64
+* 3e for 2 epochs, lr = 2e-4, warmup steps = 0.1, rank = 32, lora alpha = 64
 * From LLaMA-2-13b
 * Trying LLaMA-2-13b chat/other base and curated dataset for next attempts
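The v0.3 variants added in this commit differ only in a handful of training hyperparameters. As a sketch (the training script itself is not part of this commit, and key names like `rank`/`lora_alpha`/`warmup_ratio` follow common PEFT/Trainer conventions rather than anything stated in the README), the settings can be collected into a small mapping:

```python
# Hyperparameters of the v0.3 LoRA variants as listed in the README diff.
# Key names follow common PEFT/Trainer conventions and are assumptions;
# "warmup steps = 0.1" in the README presumably means a warmup *ratio*.
# Fields the README does not state (e.g. lr for 3/3a/3b) are left absent
# rather than guessed.
V03_VARIANTS = {
    "3":  {"epochs": 1, "rank": 16},
    "3a": {"epochs": 1, "rank": 8},
    "3b": {"epochs": 2},
    "3c": {"epochs": 1, "lr": 1e-4, "warmup_ratio": 0.1},
    "3d": {"epochs": 1, "lr": 2e-4, "warmup_ratio": 0.1,
           "rank": 32, "lora_alpha": 64},
    "3e": {"epochs": 2, "lr": 2e-4, "warmup_ratio": 0.1,
           "rank": 32, "lora_alpha": 64},
}

def settings(variant: str) -> dict:
    """Return the recorded settings for one adapter variant."""
    return V03_VARIANTS[variant]
```

For example, `settings("3d")` and `settings("3e")` differ only in epoch count, which makes those two the cleanest pair for comparing 1- vs 2-epoch runs at rank 32.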