Remove truncation
- README.md +4 -17
- tokenizer.json +2 -2
README.md
CHANGED

````diff
@@ -13,6 +13,7 @@ tags:
 base_model: chuanli11/Llama-3.2-3B-Instruct-uncensored
 datasets:
 - KingNish/reasoning-base-20k
+- piotr25691/thea-name-overrides
 model-index:
 - name: thea-3b-25r
   results:
@@ -112,9 +113,9 @@ model-index:
 
 # Model Description
 
-
+An uncensored reasoning Llama 3.2 3B model trained on reasoning data.
 
-
+It has been trained using improved training code and gives improved performance.
 Here is the inference code you should use:
 ```py
 from transformers import AutoModelForCausalLM, AutoTokenizer
@@ -138,7 +139,7 @@ reasoning_inputs = tokenizer(reasoning_template, return_tensors="pt").to(model.d
 reasoning_ids = model.generate(**reasoning_inputs, max_new_tokens=MAX_REASONING_TOKENS)
 reasoning_output = tokenizer.decode(reasoning_ids[0, reasoning_inputs.input_ids.shape[1]:], skip_special_tokens=True)
 
-
+print("REASONING: " + reasoning_output)
 
 # Generate answer
 messages.append({"role": "reasoning", "content": reasoning_output})
@@ -158,17 +159,3 @@ print("ANSWER: " + response_output)
 This Llama model was trained faster than [Unsloth](https://github.com/unslothai/unsloth) using [custom training code](https://www.kaggle.com/code/piotr25691/distributed-llama-training-with-2xt4).
 
 Visit https://www.kaggle.com/code/piotr25691/distributed-llama-training-with-2xt4 to find out how you can finetune your models using BOTH of the Kaggle provided GPUs.
-
-# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
-Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_piotr25691__thea-3b-25r)
-
-| Metric             |Value|
-|-------------------|----:|
-|Avg.               |23.74|
-|IFEval (0-Shot)    |73.44|
-|BBH (3-Shot)       |22.55|
-|MATH Lvl 5 (4-Shot)|16.31|
-|GPQA (0-shot)      | 2.35|
-|MuSR (0-shot)      | 3.57|
-|MMLU-PRO (5-shot)  |24.25|
-
````
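The hunks above show only fragments of the README's two-stage inference flow. For orientation, here is a sketch of the full sequence those fragments belong to, assuming the repo id `piotr25691/thea-3b-25r` and filling in the steps the diff does not show (the prompt, the `apply_chat_template` calls including the `add_reasoning_prompt` kwarg, and the token budgets) as plausible reconstructions rather than confirmed README content:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

MAX_REASONING_TOKENS = 1024   # assumed budget for the reasoning pass
MAX_RESPONSE_TOKENS = 512     # assumed budget for the answer pass

model_name = "piotr25691/thea-3b-25r"
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [{"role": "user", "content": "Which is greater, 9.9 or 9.11?"}]

# Pass 1: generate the reasoning trace. add_reasoning_prompt is an assumed
# kwarg forwarded to this model's custom chat template; it is not in the diff.
reasoning_template = tokenizer.apply_chat_template(
    messages, tokenize=False, add_reasoning_prompt=True
)
reasoning_inputs = tokenizer(reasoning_template, return_tensors="pt").to(model.device)
reasoning_ids = model.generate(**reasoning_inputs, max_new_tokens=MAX_REASONING_TOKENS)
# Slice off the prompt tokens so only newly generated text is decoded.
reasoning_output = tokenizer.decode(
    reasoning_ids[0, reasoning_inputs.input_ids.shape[1]:], skip_special_tokens=True
)
print("REASONING: " + reasoning_output)

# Pass 2: feed the trace back as its own "reasoning" turn, then generate the answer.
messages.append({"role": "reasoning", "content": reasoning_output})
response_template = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
response_inputs = tokenizer(response_template, return_tensors="pt").to(model.device)
response_ids = model.generate(**response_inputs, max_new_tokens=MAX_RESPONSE_TOKENS)
response_output = tokenizer.decode(
    response_ids[0, response_inputs.input_ids.shape[1]:], skip_special_tokens=True
)
print("ANSWER: " + response_output)
```

The structure mirrors what the diff adds: the model first emits an explicit reasoning trace, which is appended as a separate chat turn so the answer pass is conditioned on it.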
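The closing note points to the linked Kaggle notebook for dual-GPU training, but the commit itself contains none of that code. As a rough sketch only (not the author's custom training code), a data-parallel finetune across both Kaggle T4s could be launched with `torchrun --nproc_per_node=2 train.py` around a standard `Trainer` setup; the dataset column name, sequence length, and hyperparameters below are placeholders:

```py
# train.py -- minimal DDP finetuning sketch; launch via torchrun, which sets
# up one process per GPU and lets Trainer handle the distributed wrapping.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base_model = "chuanli11/Llama-3.2-3B-Instruct-uncensored"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship no pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

dataset = load_dataset("KingNish/reasoning-base-20k", split="train")

def tokenize(batch):
    # "text" is a placeholder; adapt to the dataset's actual schema.
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="thea-finetune",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    fp16=True,                 # T4s have no bf16 support
    num_train_epochs=1,
    logging_steps=10,
)

trainer = Trainer(
    model=model, args=args, train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```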
tokenizer.json
CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:12487b766b0b1584dcc5311824df327d5ea154939524790c643cdf2a3f6adf9f
+size 17209921
```
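This tokenizer.json change is what the commit title refers to: the Git LFS pointer had been truncated (its oid hash and size were missing), and the commit restores the complete pointer. A pointer file is just those three lines, so a downloaded tokenizer.json can be checked against it directly. A small sketch, not part of the repo:

```py
# Verify a downloaded tokenizer.json against the restored LFS pointer above.
# git-lfs identifies an object by its byte size and the SHA-256 of its content.
import hashlib
import os

EXPECTED_OID = "12487b766b0b1584dcc5311824df327d5ea154939524790c643cdf2a3f6adf9f"
EXPECTED_SIZE = 17209921

def verify_lfs_object(path: str) -> bool:
    if os.path.getsize(path) != EXPECTED_SIZE:
        return False
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in 1 MiB chunks to avoid loading the whole file into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == EXPECTED_OID

print(verify_lfs_object("tokenizer.json"))
```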