---
base_model:
- unsloth/Qwen2.5-7B-bnb-4bit
- unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- llama3
- trl
license: apache-2.0
language:
- en
datasets:
- Sweaterdog/MindCraft-LLM-tuning
---

# Uploaded model

- **Developed by:** Sweaterdog
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-7B-bnb-4bit and unsloth/Llama-3.2-3B-Instruct

The MindCraft LLM tuning CSV file can be found here and can be tweaked as needed: [MindCraft-LLM](https://huggingface.co/datasets/Sweaterdog/MindCraft-LLM-tuning)

# What is the Purpose?

This model is built and designed to play Minecraft via the extension named "[MindCraft](https://github.com/kolbytn/mindcraft)", which allows language models, like the ones provided in the files section, to play Minecraft.
- Why a new model?
  # 
  While models that aren't fine-tuned to play Minecraft *can* play Minecraft, most are slow, inaccurate, and not as smart. The fine-tuning expands reasoning, conversation examples, and command (tool) usage.
- What kind of Dataset was used?
  # 
  I'm naming the first generation of this model Hermesv1; future generations will be named *"Andy"*, after the actual MindCraft plugin's default character. It was trained for reasoning using examples of in-game "vision" as well as examples of spatial reasoning. To expand its thinking, I also added puzzle examples where the model broke down the process step by step to reach the goal.
- Why choose Qwen2.5 for the base model?
  # 
  During testing to find the best local LLM for playing Minecraft, I came across two: Gemma 2 and Qwen2.5. These two were by far the best at playing Minecraft before fine-tuning, and I knew that, once tuned, they would become even better.
- If Gemma 2 and Qwen 2.5 are the best before fine tuning, why include Llama 3.2, especially the lower intelligence, 3B parameter version?
  # 
  That is a great question. Since Llama 3.2 3B has a low parameter count, it is dumb and doesn't play Minecraft well without fine-tuning. However, it is a lot smaller than other models, which makes it suitable for people with less powerful computers, and the hope is that, once the model is tuned, it will become much better at Minecraft.

- Why is it taking so long to release more tuned models?
  # 
  Well, you see, I do not have the most powerful computer, and Unsloth, the tool I'm using for fine-tuning, has a Google Colab set up, so I am waiting for GPU time to tune the models. They will be released ASAP, I promise.

- Will there ever be vision fine tuning?
  #
  Yes! In MindCraft there will be vision support for VLMs *(vision language models)*. Most likely, the model will be Qwen2-VL-7B or Llama-3.2-11B-Vision, since they are relatively new. Yes, I am still holding out hope for Llama 3.2.
# How to Use
In order to use this model, first download the GGUF file of the version you want, either the Qwen or Llama model, and then the Modelfile. After you download both, change the model path in the Modelfile to point to your model. Here is a simple guide if needed for the rest:
# 
1. Download the .gguf model you want. For this example, it is in the standard Windows "Downloads" folder

2. Download the Modelfile

3. Open the Modelfile in Notepad (you can rename it to Modelfile.txt first), and change the GGUF path. For example, this is my path: "C:\Users\SweaterDog\OneDrive\Documents\Raw GGUF Files\Hermes-1.0\Hermes-1.Q8_0.gguf". A sketch of what the finished Modelfile can look like is shown after these steps.

4. Save and close the Modelfile

5. Rename "Modelfile.txt" into "Modelfile" if you changed it before-hand

6. Open CMD and type "ollama create Hermes1 -f Modelfile" (you can change the name to anything you'd like; for this example, I am just using the same name as the GGUF)

7. Wait until finished

8. In the CMD window, type "ollama run Hermes1" (replace the name with whatever you called it)

9. (Optional, needed for versions after the 11/15/24 update) If you downloaded a model that was tuned from Qwen and kept "qwen" in the model name, you need to go into the file "prompter.js" and remove the qwen section. If you named it something that doesn't include "qwen", you can skip this step.
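
For reference, here is a minimal sketch of what a finished Modelfile can look like once the path points to your download. The file path and the temperature value are examples only, not values shipped with this repo; adjust them to your own setup.

```
# Example Modelfile (illustrative path and settings)
FROM C:\Users\YourName\Downloads\Hermes-1.Q8_0.gguf

# Optional sampling setting; tune to taste
PARAMETER temperature 0.7
```

With this saved as "Modelfile", steps 6 and 8 above ("ollama create Hermes1 -f Modelfile", then "ollama run Hermes1") register and run the model.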

# How to fine-tune a Gemini Model
1. Download the CSV for [MindCraft-LLM-tuning](https://huggingface.co/datasets/Sweaterdog/MindCraft-LLM-tuning)
2. Open sheets.google.com and upload the CSV file
3. Go to [API keys and Services](https://aistudio.google.com/app/apikey), then click on "New Tuned Model" on the left popup bar
4. Press "Import" and then select the CSV file you uploaded to google sheets
5. Rename the model to whatever you want, then set the training settings: epochs, learning rate, and batch size
6. Change the model to either Gemini-1.0-pro or Gemini-1.5-flash. **NOTE:** Gemini 1.0 Pro will be deprecated on February 15, 2025, meaning the model WILL BE deleted!
7. Hit tune and wait.
8. After the model is finished training, hit "Add API access" and select the google project you'd like to connect it to
9. Copy the model ID, and paste it into the Gemini.json file in MindCraft, then name the model to whatever you want.
10. (Optional) Test the model by pressing "Use in chat" and asking it basic actions, such as "Grapevine_eater: Come here!", and see the output. If it is not to your liking, train the model again with different settings.
11. (Optional) Since the rates for Gemini models are limited (if you do not have billing enabled), I recommend making a launch.bat file in the MindCraft folder, so that instead of crashing and needing to be started manually, the program restarts itself every time the rate limit is reached. Here is the code I use in launch.bat:
```
@echo off
setlocal enabledelayedexpansion

:loop
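REM Run the bot; when it exits (for example after a crash from the rate limit), wait 10 seconds and restart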
node main.js
timeout /t 10 /nobreak

echo Restarting...
goto loop
```
12. Enjoy having a model play Minecraft with you; hopefully it is smarter than regular Gemini models!
#

**WARNING** The new v3 generation of models suck! That is because they were also trained for building *(coding)* and often do not use commands! I recommend still using the v2 generation; it is in the [deprecated models folder](https://huggingface.co/Sweaterdog/MindCraft-LLM-tuning/tree/main/deprecated-models).

# 

For anybody wondering what the context length is: Hermesv1 has a context window of 8196 tokens. The Qwen version will have a length of 64000 tokens, and the Llama version will have 128000 tokens.
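
If you want Ollama to actually use a larger window than its default, you can set the context size in the Modelfile. This is only a sketch; the value below is illustrative and should match the model you downloaded.

```
# Example: raise Ollama's context window (value is illustrative)
PARAMETER num_ctx 8192
```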

# 

I wanted to include the Google Colab link, in case you want to know how to train models via CSV, or to use my dataset to train your own model with your own settings on a different base model. [Google Colab](https://colab.research.google.com/drive/1VYkncZMfGFkeCEgN2IzbZIKEDkyQuJAS#scrollTo=2eSvM9zX_2d3)
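
If you would rather script the fine-tune yourself instead of using the notebook, the flow is roughly the one below. This is a minimal sketch of the usual Unsloth + TRL pattern, not the exact Colab code: the text column name, sequence length, and training hyperparameters are assumptions you will need to adjust to the CSV and your hardware.

```
# Minimal sketch of an Unsloth + TRL fine-tune (illustrative values, not the exact Colab code)
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base model (swap in unsloth/Llama-3.2-3B-Instruct for the Llama variant)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-7B-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained
model = FastLanguageModel.get_peft_model(model, r=16, lora_alpha=16, lora_dropout=0)

# The MindCraft tuning data; the text column name depends on how the CSV is formatted
dataset = load_dataset("Sweaterdog/MindCraft-LLM-tuning", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumption: adjust to the actual column in the CSV
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
)
trainer.train()
```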

# 

**UPDATE** The Qwen and Llama models are out, with the expanded dataset! I have found the Llama models are incredibly dumb, but changing the Modelfile may provide better results. With the Qwen version of Andy, the Q4_K_M, it took 2 minutes to craft a wooden pickaxe; collecting stone after that took 5 minutes.

# 

These qwen2 and llama3.2 models were trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)