This model is a proof of concept. Two Llama-3-8B models were first merged using `Mergekit`, and pre-training was then continued with `QLoRA` and `Unsloth` on 1,000 samples from `roneneldan/TinyStories`.

The loss still decreases with each epoch, so I consider this a successful experiment with plenty of room for further exploration.

[Wandb Report](https://wandb.ai/beratcmn/huggingface/reports/beratcmn-Llama-3-11-5B-v0-1--Vmlldzo3NjUzMTgx)
Llama-3-11.5B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
* [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
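The card does not include the merge configuration itself, but a self-merge that grows two copies of an 8B model into roughly 11.5B parameters is typically done with Mergekit's `passthrough` method, stacking overlapping layer slices from each copy. A minimal sketch of such a config is below; the layer ranges are illustrative assumptions, not the configuration actually used for this model.

```yaml
# Hypothetical Mergekit passthrough config for an ~11.5B self-merge of
# Meta-Llama-3-8B (32 transformer layers). The overlapping layer_range
# values are assumptions for illustration, not the actual recipe.
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B
        layer_range: [0, 24]
  - sources:
      - model: meta-llama/Meta-Llama-3-8B
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```

With a config like this, `mergekit-yaml config.yml ./output-dir` would produce the stacked model; the duplicated middle layers are what push the parameter count past the original 8B.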