Omartificial-Intelligence-Space committed
Commit: bf5f7ad
Parent(s): 75e3d06
Update readme.md
README.md CHANGED
@@ -15,12 +15,12 @@ Al Baka is an Experimental Fine Tuned Model based on the new released LLAMA3-8B
 
 - **Model Type:** Causal decoder-only
 - **Language(s):** Arabic
-- **Base Model:** [LLAMA-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
+- **Base Model:** [LLAMA-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
 - **Dataset:** [Yasbok/Alpaca_arabic_instruct](https://huggingface.co/datasets/Yasbok/Alpaca_arabic_instruct)
 
 ## Model Details
 
-- The model was fine-tuned in 4-bit precision using [unsloth](
+- The model was fine-tuned in 4-bit precision using [unsloth](https://github.com/unslothai/unsloth)
 
 - The run is performed only for 1000 steps with a single Google Colab T4 GPU NVIDIA GPU with 15 GB of available memory.
 
@@ -94,4 +94,4 @@ tokenizer.batch_decode(outputs)
 
 ### Recommendations
 
-- [unsloth](
+- [unsloth](https://github.com/unslothai/unsloth) for finetuning models. You can get a 2x faster finetuned model which can be exported to any format or uploaded to Hugging Face.
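For context on the change above: the README describes a short 4-bit fine-tuning run with unsloth on a Colab T4. Below is a minimal sketch of what such a run might look like, assuming unsloth's `FastLanguageModel` API together with the `SFTTrainer` signature used in unsloth's example notebooks; the LoRA settings, batch size, learning rate, and the Alpaca-style column names are illustrative assumptions, not values taken from this commit.

```python
# Minimal sketch: 4-bit fine-tuning of Llama-3-8B with unsloth, as described in the README.
# NOTE: the hyperparameters and dataset schema below are illustrative assumptions.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the base model in 4-bit precision so it fits in ~15 GB of T4 memory.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Meta-Llama-3-8B",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Build a single "text" column; Alpaca-style columns (instruction/input/output) assumed.
def to_text(example):
    return {
        "text": f"{example['instruction']}\n{example['input']}\n{example['output']}{tokenizer.eos_token}"
    }

dataset = load_dataset("Yasbok/Alpaca_arabic_instruct", split="train").map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=1000,  # the README reports a 1000-step run
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```

The resulting LoRA adapters can then be merged, exported, or pushed to the Hugging Face Hub, which is what the Recommendations section refers to.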