Update README.md
README.md CHANGED
@@ -32,3 +32,14 @@ Experimental Tagalog loras: safe or accurate outputs not guaranteed (not for production use!)
 * 30 epochs (v0.1a further trained and cut off before overfitting)
 * From chat LLaMA-2-7b
 * Lora of [chat-tagalog v0.1d](https://huggingface.co/922-Narra/llama-2-7b-chat-tagalog-v0.1d)
+
+# llama-2-7b-tagalog-v0.2 loras (08/26/2023)
+* Fine-tuned on a dataset of ~10k items (mixed)
+* 1/2/3 epochs
+* From chat LLaMA-2-7b
+* Future attempt planned with cleaner chat/dialogue data
+
+# hopia-3b-v0.1 (08/26/2023)
+* Fine-tuned on a small dataset of 14 items, manually edited
+* 20 epochs
+* From Open LLaMA 3b
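For reference, a minimal sketch of how one of these lora adapters might be loaded with Hugging Face `transformers` and `peft`. The adapter repo id comes from the link in the diff above; the base checkpoint (`meta-llama/Llama-2-7b-chat-hf`), the prompt format, and the generation settings are assumptions, not something this README specifies.

```python
# Minimal usage sketch (assumptions flagged in comments), not part of this repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumed base checkpoint; the README only says "chat LLaMA-2-7b".
base_id = "meta-llama/Llama-2-7b-chat-hf"
# Repo id taken from the README link; whether it hosts a bare adapter or
# merged weights is not stated there.
adapter_id = "922-Narra/llama-2-7b-chat-tagalog-v0.1d"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the lora

# Assumed prompt format; check the model card for the actual template.
prompt = "USER: Kumusta ka?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If the linked repo turns out to hold merged weights rather than a bare adapter, it can be loaded directly with `AutoModelForCausalLM.from_pretrained` and the `peft` step dropped.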