Update README.md
README.md CHANGED
@@ -20,7 +20,9 @@ This repo contains the fine-tuned model for the Turkish Llama 3 Project and its
 
 The actual trained model is an adapter model of [Unsloth's Llama 3-8B quantized model](https://huggingface.co/unsloth/llama-3-8b-bnb-4bit), which is then converted into .gguf format using llama.cpp and into .bin format for vLLM.
 
-You can access the fine-tuning code
+You can access the fine-tuning code [here](https://colab.research.google.com/drive/1QRaqYxjfnFvwA_9jb7V0Z5bJr-PuHH7w?usp=sharing).
+
+Trained on an NVIDIA L4 for 150 steps; training took around 8 minutes.
 
 ## Example Usage
 You can use the adapter model with PEFT.
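The "Example Usage" section added here only states that the adapter works with PEFT; the code itself is not part of this hunk. A minimal sketch of what that usage could look like, assuming the quantized base named above and a placeholder adapter id (the real id would be this repo's model id, which is not shown here):

```python
# Minimal sketch: load the 4-bit base and attach the fine-tuned LoRA adapter via PEFT.
# ADAPTER_ID is a placeholder -- replace it with this repository's model id.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "unsloth/llama-3-8b-bnb-4bit"            # quantized base named in the README
ADAPTER_ID = "your-username/turkish-llama-3-adapter"  # placeholder, not from the source

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")
model = PeftModel.from_pretrained(base, ADAPTER_ID)

prompt = "Türkiye'nin başkenti neresidir?"  # "What is the capital of Turkey?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```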
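The README also mentions converting the model to .gguf with llama.cpp and to .bin for vLLM. Those conversions are usually run on a merged Hugging Face checkpoint rather than on the bare adapter, so the merge step might look roughly like the sketch below; the full-precision base id and the output path are assumptions, not taken from the README:

```python
# Sketch of the merge step that typically precedes GGUF conversion or vLLM serving.
# Both BASE_MODEL (full-precision, assumed) and ADAPTER_ID are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_MODEL = "unsloth/llama-3-8b"                     # assumed full-precision base for merging
ADAPTER_ID = "your-username/turkish-llama-3-adapter"  # placeholder adapter id

base = AutoModelForCausalLM.from_pretrained(BASE_MODEL, torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base, ADAPTER_ID).merge_and_unload()

# Save a plain Hugging Face checkpoint that llama.cpp's HF-to-GGUF converter
# can read, and that vLLM can load directly.
merged.save_pretrained("llama-3-8b-turkish-merged")
AutoTokenizer.from_pretrained(BASE_MODEL).save_pretrained("llama-3-8b-turkish-merged")
```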