Jyotiyadav committed on
Commit
d3c1110
1 Parent(s): 9cc5df1

Update README.md

Files changed (1):
  1. README.md +17 -16
README.md CHANGED
@@ -10,7 +10,7 @@ tags:
  - phi
  ---
 
- # Finetune Mistral, Gemma, Llama 2-5x faster with 70% less memory via Unsloth!
+ # Finetune Phi-3-mini-4k-instruct 2-5x faster with 70% less memory via Unsloth!
 
  We have a Google Colab Tesla T4 notebook for Phi-3 here: https://colab.research.google.com/drive/1NvkBmkHfucGO3Ve9s1NKZvMNlw5p83ym?usp=sharing
 
@@ -18,21 +18,22 @@ We have a Google Colab Tesla T4 notebook for Phi-3 here: https://colab.research.
  [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/buy%20me%20a%20coffee%20button.png" width="200"/>](https://ko-fi.com/unsloth)
  [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
 
- ## ✨ Finetune for Free
+ ## ✨ Results
 
- All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
- 
- | Unsloth supports | Free Notebooks | Performance | Memory use |
- |------------------|----------------|-------------|------------|
- | **Llama-3 8b** | [▶️ Start on Colab](https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing) | 2.4x faster | 58% less |
- | **Gemma 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/10NbwlsRChbma1v55m8LAPYG15uQv6HLo?usp=sharing) | 2.4x faster | 58% less |
- | **Mistral 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
- | **Llama-2 7b** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lBzz5KeZJKXjvivbYvmGarix9Ao6Wxe5?usp=sharing) | 2.2x faster | 43% less |
- | **TinyLlama** | [▶️ Start on Colab](https://colab.research.google.com/drive/1AZghoNBQaMDgWJpi4RbffGM1h6raLUj9?usp=sharing) | 3.9x faster | 74% less |
- | **CodeLlama 34b** A100 | [▶️ Start on Colab](https://colab.research.google.com/drive/1y7A0AxE3y8gdj4AVkl2aZX47Xu3P1wJT?usp=sharing) | 1.9x faster | 27% less |
- | **Mistral 7b** 1xT4 | [▶️ Start on Kaggle](https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook) | 5x faster\* | 62% less |
- | **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
+ | Step | Training Loss |
+ |------|---------------|
+ | 5    | 1.748100      |
+ | 10   | 1.584900      |
+ | 15   | 1.406200      |
+ | 20   | 1.274800      |
+ | 25   | 0.983400      |
+ | 30   | 0.939900      |
+ | 35   | 1.156100      |
+ | 40   | 0.883000      |
+ | 45   | 0.813900      |
+ | 50   | 0.721000      |
 
- - This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- - This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- - \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
+ 
+ ## Base Model
+ 
+ https://huggingface.co/microsoft/Phi-3-mini-4k-instruct
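The training-loss table added in this commit can be summarized with a short script. A minimal sketch, using only the loss values copied from the table above; the "overall reduction" statistic is our own derived figure, not something the README reports:

```python
# Training-loss values from the README's results table (logged every 5 steps).
steps = list(range(5, 55, 5))
losses = [1.7481, 1.5849, 1.4062, 1.2748, 0.9834,
          0.9399, 1.1561, 0.8830, 0.8139, 0.7210]

# Relative reduction from the first to the last logged step.
reduction = (losses[0] - losses[-1]) / losses[0]
print(f"Loss: {losses[0]:.4f} -> {losses[-1]:.4f} "
      f"({reduction:.0%} reduction over {steps[-1] - steps[0]} steps)")
# -> Loss: 1.7481 -> 0.7210 (59% reduction over 45 steps)
```

The loss is not strictly monotonic (it ticks up at step 35), which is normal for per-batch logging; the overall trend is what the table is meant to show.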