Sweaterdog committed
Update README.md
README.md CHANGED
@@ -1,7 +1,6 @@
 ---
 base_model:
 - unsloth/Qwen2.5-7B-bnb-4bit
-- unsloth/gemma-2-9b-it-bnb-4bit
 - unsloth/Llama-3.2-3B-Instruct
 tags:
 - text-generation-inference
@@ -22,7 +21,7 @@ datasets:
 
 - **Developed by:** Sweaterdog
 - **License:** apache-2.0
-- **Finetuned from model :** unsloth/Qwen2.5-7B-bnb-4bit and unsloth/
+- **Finetuned from model :** unsloth/Qwen2.5-7B-bnb-4bit and unsloth/Llama-3.2-3B-Instruct
 
 The MindCraft LLM tuning CSV file can be found here, this can be tweaked as needed. [MindCraft-LLM](https://huggingface.co/datasets/Sweaterdog/MindCraft-LLM-tuning/raw/main/Gemini-Minecraft%20-%20training_data_minecraft_updated.csv)
 
@@ -46,7 +45,7 @@ This model is built and designed to play Minecraft via the extension named "[Min
 #
 Well, you see, I do not have the most powerful computer, and Unsloth, the thing I'm using for fine tuning, has a google colab set up, so I am waiting for GPU time to tune the models, but they will be released ASAP, I promise.
 # How to Use
-In order to use this model, A, download the GGUF file of the version you want, either a Qwen, or
+In order to use this model, A, download the GGUF file of the version you want, either a Qwen, or Llama model, and then the Modelfile, after you download both, in the Modelfile, change the directory of the model, to your model. Here is a simple guide if needed for the rest:
 #
 1.Download the .gguf Model u want. For this example it is in the standard Windows "Download" Folder
 
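For anyone following the "How to Use" steps in the updated README: assuming the Modelfile it refers to is an Ollama Modelfile (the usual pairing with a downloaded GGUF), the edit it describes amounts to pointing the FROM line at wherever the .gguf file landed. A minimal sketch, with a placeholder path and filename rather than anything shipped in this commit:

# Minimal Ollama Modelfile sketch -- the path below is a placeholder, swap in your own download location
FROM C:\Users\YourName\Downloads\your-downloaded-model.Q4_K_M.gguf

After saving the Modelfile, registering it with `ollama create <your-model-name> -f Modelfile` and testing it with `ollama run <your-model-name>` should make it selectable like any other local Ollama model.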