---
base_model:
- unsloth/Qwen2.5-7B-bnb-4bit
- unsloth/gemma-2-9b-it-bnb-4bit
- unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gemma2
- llama3
- trl
license: apache-2.0
language:
- en
datasets:
- Sweaterdog/MindCraft-LLM-tuning
---
# Uploaded model
- **Developed by:** Sweaterdog
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-7B-bnb-4bit, unsloth/gemma-2-9b-it-bnb-4bit, and unsloth/Llama-3.2-3B-Instruct

The MindCraft LLM tuning CSV file can be found here; it can be tweaked as needed: [MindCraft-LLM](https://huggingface.co/datasets/Sweaterdog/MindCraft-LLM-tuning/raw/main/Gemini-Minecraft%20-%20training_data_minecraft_updated.csv)
# What is the Purpose?
This model is built and designed to play Minecraft via the "[MindCraft](https://github.com/kolbytn/mindcraft)" extension, which allows language models, like the ones provided in the Files section, to play Minecraft.
- Why a new model?
#
While models that aren't fine-tuned to play Minecraft *can* play it, most are slow, inaccurate, and not as smart. Fine-tuning expands reasoning, conversation examples, and command (tool) usage.
- What kind of Dataset was used?
#
I'm naming the first generation of this model Hermesv1; future generations will be named *"Andy"*, after the MindCraft plugin's default character. It was trained for reasoning using examples of in-game "vision" as well as examples of spatial reasoning. To expand its thinking, I also added puzzle examples where the model broke the process down step by step to reach the goal.
- Why choose Qwen2.5 for the base model?
#
During testing to find the best local LLM for playing Minecraft, I came across two: Gemma 2 and Qwen2.5. These two were by far the best at playing Minecraft before fine-tuning, and I knew that once tuned, they would become even better.
- If Gemma 2 and Qwen2.5 are the best before fine-tuning, why include Llama 3.2, especially the lower-intelligence 3B-parameter version?
#
That is a great question. Since Llama 3.2 3B has a low parameter count, it is not very smart and doesn't play Minecraft well without fine-tuning. However, it is a lot smaller than the other models, which makes it suitable for people with less powerful computers, and the hope is that once the model is tuned, it will become much better at Minecraft.
- Why is it taking so long to release more tuned models?
#
Well, you see, I do not have the most powerful computer, and Unsloth, the tool I'm using for fine-tuning, has a Google Colab set up, so I am waiting for GPU time to tune the models. They will be released as soon as possible, I promise.
# How to Use
To use this model, first download the GGUF file of the version you want (either a Qwen or Gemma model), then download the Modelfile. After you download both, change the model path in the Modelfile to point to your model. Here is a simple guide for the rest:
#
1. Download the .gguf model you want. For this example, it is in the standard Windows "Downloads" folder.
2. Download the Modelfile.
3. Open the Modelfile in Notepad (or rename it to Modelfile.txt first) and change the GGUF path to your model's location. For example, this is my path: "C:\Users\SweaterDog\OneDrive\Documents\Raw GGUF Files\Hermes-1.0\Hermes-1.Q8_0.gguf"
4. Save and close the Modelfile.
5. Rename "Modelfile.txt" back to "Modelfile" if you renamed it beforehand.
6. Open CMD and type "ollama create Hermes1 -f Modelfile" (you can change the name to anything you'd like; for this example, I am just using the same name as the GGUF).
7. Wait until it finishes.
8. In the CMD window, type "ollama run Hermes1" (replace the 1 in Hermes1 with whichever version you downloaded).
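To summarize steps 3 and 6, a minimal Modelfile only needs a `FROM` line pointing at the downloaded GGUF. This is a sketch using the example path from step 3; substitute your own path and model name:
```
FROM C:\Users\SweaterDog\OneDrive\Documents\Raw GGUF Files\Hermes-1.0\Hermes-1.Q8_0.gguf
```
Then, in CMD:
```
ollama create Hermes1 -f Modelfile
ollama run Hermes1
```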
# How to fine tune a Gemini Model
1. Download the CSV for [MindCraft-LLM-tuning](https://huggingface.co/datasets/Sweaterdog/MindCraft-LLM-tuning)
2. Open sheets.google.com, and upload the CSV file
3. Go to [API keys and Services](https://aistudio.google.com/app/apikey), then click on "New Tuned Model" on the left popup bar
4. Press "Import" and then select the CSV file you uploaded to google sheets
5. Rename the model to whatever you want, set the training settings, epochs, learning rate, and batch size
6. Change the model to either Gemini-1.0-pro or Gemini-1.5-flash **NOTE** Gemini 1.0 pro will be deprecated on February 15, 2025, meaning the model WILL BE deleted!
7. Hit tune and wait.
8. After the model is finished training, hit "Add API access" and select the google project you'd like to connect it to
9. Copy the model ID, and paste it into the Gemini.json file in MindCraft, then name the model to whatever you want.
10. (Optional) Test the model by pressing "Use in chat" and asking it for basic actions, such as "Grapevine_eater: Come here!", and check the output. If it is not to your liking, train the model again with different settings.
11. (Optional) Since the rates for Gemini models are limited (if you do not have billing enabled), I recommend making a launch.bat file in the MindCraft folder so the program restarts automatically, instead of crashing and needing to be started manually every time the rate limit is reached. Here is the code I use in launch.bat:
```
@echo off
setlocal enabledelayedexpansion
:loop
REM Run MindCraft; when it exits (e.g. on a rate-limit crash), fall through
node main.js
REM Wait 10 seconds, then restart
timeout /t 10 /nobreak
echo Restarting...
goto loop
```
12. Enjoy having a model play Minecraft with you, hopefully it is smarter than regular Gemini models!
#
I'm aware the Files section says there are multiple Qwen2.5 files even though there are only two, and that it lists Gemma2 models even though there aren't any yet; I have been trying to train the rest of these models.
#
For anybody wondering about context length: the Hermesv1 models have a context window of 8196 tokens, but when the v2 generation drops, including LLaMa 3.2 and Gemma2, they will use a larger dataset and have a context length of 128,000 tokens.
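If you want Ollama to actually use a given context window, you can set it explicitly in the Modelfile from the "How to Use" section via the `num_ctx` parameter. A sketch, assuming the example path from that section; the `8192` value is an assumption and should be adjusted to the model's actual window:
```
FROM C:\Users\SweaterDog\OneDrive\Documents\Raw GGUF Files\Hermes-1.0\Hermes-1.Q8_0.gguf
PARAMETER num_ctx 8192
```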
#
I wanted to include the Google Colab link in case you want to know how to train models via CSV, or to use my dataset to train your own model, with your own settings, on a different base model. [Google Colab](https://colab.research.google.com/drive/1VYkncZMfGFkeCEgN2IzbZIKEDkyQuJAS#scrollTo=2eSvM9zX_2d3)
#
These qwen2, gemma2, and llama3.2 models were trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |