---
base_model: unsloth/phi-3-mini-4k-instruct-bnb-4bit
language:
- en
license: apache-2.0
---

- **Developed by:** Deeokay
- **License:** apache-2.0
- **Finetuned from model:** unsloth/phi-3-mini-4k-instruct-bnb-4bit

This Phi-3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

# README

This is a test model, fine-tuned on the following:

- a private dataset aimed at students in the MYP (IB) program, made for my niece
- works with Ollama create using just "FROM path/to/model" as the Modelfile (the standard template works with no issues)

# HOW TO USE

The whole point of the conversion, for me, was being able to use the model through Ollama (or other local options).
For Ollama, the model has to be a GGUF file. Once you have that, it is pretty straightforward, since this model is based on Phi-3 and Ollama can pick up its chat template.

Quick Start:

- You must already have Ollama running on your machine
- Download the unsloth.Q4_K_M.gguf model from Files
- In the same directory, create a file called "Modelfile"
- Inside the "Modelfile", type:

```
# or whichever GGUF file you downloaded
FROM ./unsloth.Q4_K_M.gguf
```

- Save it and go back to the folder (the folder where the model + Modelfile exist)
- Now, in a terminal, make sure you are in that same folder and type the following command:

```
ollama create mycustomai  # "mycustomai" <- you can name it anything you want
```

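To check that the model was registered, you can list your local models; the name you picked above should appear (a quick sanity check, not a required step):

```
ollama list
```
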
This GGUF is based on unsloth/Phi-3-mini-4k-instruct, so Ollama doesn't need anything else to auto-configure this model.

After that you should be able to use this model to chat!

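For example, assuming you kept the name "mycustomai" from the step above, a first chat could look like this (the quoted prompt is just an illustration; leave it off for an interactive session):

```
ollama run mycustomai "Explain the water cycle for an MYP science class."
```
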
# NOTE: DISCLAIMER

Please note this is not meant for production use, but is the result of self-taught fine-tuning.

The special tokens were kept the same, and the training data follows this template:

```
<s><|user|>{question}<|end|>
<|assistant|>{answer}<|end|></s>
```
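
If you ever need to spell that template out for Ollama yourself (say, auto-detection fails for a different quantization), a Modelfile along these lines should work; the TEMPLATE and stop parameter below are my own mapping of the training template onto Ollama's template variables, not something shipped with this model:

```
FROM ./unsloth.Q4_K_M.gguf

# map the training template above onto Ollama's prompt/response variables
TEMPLATE """<s><|user|>{{ .Prompt }}<|end|>
<|assistant|>{{ .Response }}<|end|></s>"""

# stop generating at Phi-3's end-of-turn token
PARAMETER stop "<|end|>"
```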