---
license: mit
language:
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
---

* **Model size: 3.21B parameters**

# Gladiator-Mini-exp-1222-Instruct

**Gladiator-Mini-exp-1222** is a 3-billion-parameter language model focused on **complex analytical tasks**. This experimental model builds on the foundation of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) and explores what is possible with smaller, more resource-efficient AI models. We believe small models represent the future of open-source language models, making AI more accessible and adaptable for a wider range of users and applications.

**What's New in This Version?**

We've continued to refine the Gladiator-Mini series, and this version focuses on strengthening the model's analytical and problem-solving capabilities. We've also improved its ability to operate effectively with or without a specific system prompt, increasing its flexibility and adaptability. The model has been trained on a larger and more varied dataset aimed at improving overall performance.

The previous iteration, Gladiator-Mini-exp-1211, tended to underperform the non-fine-tuned base Llama model and required specific prompts to function effectively, making it less versatile. This version addresses those shortcomings.

**How it Performs:**

Gladiator-Mini-exp-1222 demonstrates progress in several areas. It can work through multi-step analytical problems and complete complex calculations when needed. It also shows improved ability to apply logic and reasoning to produce an accurate answer, and it can follow complex instructions effectively, even with minimal or no guidance, showing that its reasoning capabilities are more reliable.

**Current Performance Examples:**

To illustrate the model's current capabilities, here are some specific examples of its performance:

* **Multi-Step Calculations:** When given a mathematical problem combining multiplication, division, and addition, the model accurately identifies the steps needed to solve it and arrives at the correct answer.
* **Logical Analysis:** When given complex problems involving interwoven statements and rules, the model now uses a more structured methodology and can complete the logical deductions needed to reach a conclusion, even if that conclusion is not always completely correct.
* **Instruction Following:** The model can follow complex instructions to produce structured text outputs and can adhere to specific requirements, such as length constraints or specific wording.

These examples represent a small selection of the types of tasks the model can handle.

**Example System Prompts (Optional):**

* **For Complex Tasks:** "You are an advanced AI with strong analytical skills. Approach the problem step-by-step and show your work."
* **For Problem Solving:** "You are an expert problem solver. Explain your process clearly and concisely."

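These optional system prompts slot into the standard chat `messages` format that `transformers` chat templates (and most OpenAI-style inference APIs) accept. A minimal sketch in plain Python, assuming nothing beyond that format; the helper name `build_messages` is illustrative, not part of any library:

```python
# Build a chat request for Gladiator-Mini-exp-1222 using one of the
# optional system prompts above. The role/content dict format is the one
# expected by transformers' tokenizer.apply_chat_template.
from typing import Optional

ANALYTICAL_SYSTEM_PROMPT = (
    "You are an advanced AI with strong analytical skills. "
    "Approach the problem step-by-step and show your work."
)

def build_messages(user_prompt: str, system_prompt: Optional[str] = None) -> list:
    """Return a chat 'messages' list; the system turn is optional,
    since this model also works without a system prompt."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages

messages = build_messages(
    "A train travels 120 km in 1.5 hours, then 80 km in 0.5 hours. "
    "What is its average speed for the whole trip?",
    system_prompt=ANALYTICAL_SYSTEM_PROMPT,
)
```

With `transformers`, such a list can then be passed to `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` before generation.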
**What Are We Still Working On?**

Gladiator-Mini-exp-1222 remains under development. It is not perfect, and some areas still require further work. One notable gap is creative text generation; the model is not designed for those tasks. As an experimental model, its capabilities should not be overestimated. The experimental build date for this model is 12/22/2024.

**Performance:**

This model has shown encouraging results in internal testing, particularly on analytical tasks; however, its performance may vary depending on the specific problem it is given. We welcome community feedback and are continually looking for ways to improve the model's performance and reliability.

**Our Goal:**

We want to create a strong problem solver in a compact model. We believe that smaller, more efficient models are the future of AI, and this experimental version represents an important step towards that goal. We're working towards a model that can perform at a high level without requiring large amounts of computing resources.

**How You Can Help:**

We encourage you to experiment with Gladiator-Mini-exp-1222 and let us know what you find. Your feedback is essential to future development.

**Limitations:**

* System prompts are not strictly needed, but may still be helpful.
* Its reasoning capabilities are still being improved.
* Do not expect strong creative text generation.
* Like any AI model, it can produce biased output or make mistakes.

**Disclaimer:**

Gladiator-Mini-exp-1222 is an experimental model and should be used with caution. Always double-check its outputs and avoid relying on them blindly.

Base model: [https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)

Thanks to Meta for the fantastic Llama-3.2-3B model!

**Finetuning Dataset:**

* The model was fine-tuned on a privately collected dataset. Further details on the training data are withheld.