Sweaterdog committed
Commit e559fb7 • Parent(s): b74bc5c
Update README.md

README.md CHANGED
@@ -32,7 +32,7 @@ This model is built and designed to play Minecraft via the extension named "[Min
 While, yes, models that aren't fine-tuned to play Minecraft *can* play Minecraft, most are slow, inaccurate, and not as smart; fine-tuning expands reasoning, conversation examples, and command (tool) usage.
 - What kind of Dataset was used?
 #
-I'm deeming the first generation of this model Hermesv1; future generations will be named ***"Andy"***, after the actual MindCraft plugin's default character. It was trained for reasoning by using examples of in-game "Vision" as well as examples of
+I'm deeming the first generation of this model Hermesv1; future generations will be named ***"Andy"***, after the actual MindCraft plugin's default character. It was trained for reasoning by using examples of in-game "Vision" as well as examples of spatial reasoning. For expanding thinking, I also added puzzle examples where the model broke down the process step by step to reach the goal.
 - Why choose Qwen2.5 for the base model?
 #
 During testing, to find the best local LLM for playing Minecraft, I came across two: Gemma 2 and Qwen2.5. These two were by far the best at playing Minecraft before fine-tuning, and I knew that, once tuned, they would become better.
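The dataset description in the added line (spatial-reasoning examples plus step-by-step puzzle breakdowns) could be encoded as chat-format JSONL records like the sketch below. This is purely hypothetical: the README does not show the actual dataset schema, and the field names (`messages`, `role`, `content`) and the coordinate puzzle itself assume a standard chat fine-tuning format rather than anything Sweaterdog confirmed.

```python
import json

# Hypothetical fine-tuning record; the real Hermesv1 dataset schema is not
# shown in the README, so the structure here is illustrative only.
sample = {
    "messages": [
        {
            "role": "user",
            "content": "You are at (0, 64, 0). A chest is 3 blocks north "
                       "and 2 blocks up. Describe how to reach it.",
        },
        {
            "role": "assistant",
            # Step-by-step breakdown, mirroring the "puzzle examples where
            # the model broke down the process step by step" described above.
            "content": (
                "Step 1: North decreases Z, so walk from (0, 64, 0) to (0, 64, -3). "
                "Step 2: The chest is 2 blocks up, so pillar-jump twice to y=66. "
                "Step 3: Open the chest at (0, 66, -3)."
            ),
        },
    ]
}

# One JSON object per line is the usual JSONL layout for such datasets.
line = json.dumps(sample)
print(line[:60])
```

Records in this shape drop directly into most chat-style supervised fine-tuning pipelines, which is one reason the format is a reasonable guess here.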