DongfuJiang committed
Commit • 06c2800 • 1 Parent(s): 3a2ffd6
End of training
Files changed:
- README.md +3 -2
- all_results.json +12 -0
- eval_results.json +7 -0
- train_results.json +8 -0
- trainer_state.json +0 -0
- training_eval_loss.png +0 -0
- training_loss.png +0 -0
README.md
CHANGED
@@ -4,6 +4,7 @@ license: apache-2.0
 base_model: Qwen/Qwen2.5-Coder-7B-Instruct
 tags:
 - llama-factory
+- full
 - generated_from_trainer
 model-index:
 - name: prm_qwen25_coder_version3_subsample_hf
@@ -15,9 +16,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # prm_qwen25_coder_version3_subsample_hf
 
-This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on
+This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the prm_conversations_prm_version3_math+webinstructsub-mcq+webinstructsub-oe+apps+gsm_mix_ref_subsample_hf dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
+- Loss: 0.1426
 
 ## Model description
 
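The updated card points at the fine-tuned checkpoint described above. A minimal sketch of loading it with the standard transformers causal-LM API follows; the repo id is assumed from the commit author and the model name in the card, and the prompt and generation settings are purely illustrative.

# Minimal sketch: load the fine-tuned checkpoint with transformers.
# Assumption: the repo id below matches the commit author + model name;
# adjust it to wherever the checkpoint is actually published.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "DongfuJiang/prm_qwen25_coder_version3_subsample_hf"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto", device_map="auto")

# The base model is an instruct model, so its chat template applies.
messages = [{"role": "user", "content": "Write a function that reverses a string."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))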
all_results.json
ADDED
@@ -0,0 +1,12 @@
+{
+    "epoch": 0.9999155096178218,
+    "eval_loss": 0.14257754385471344,
+    "eval_runtime": 56.841,
+    "eval_samples_per_second": 50.492,
+    "eval_steps_per_second": 6.316,
+    "total_flos": 995060424065024.0,
+    "train_loss": 0.16356557928304405,
+    "train_runtime": 22100.2501,
+    "train_samples_per_second": 12.853,
+    "train_steps_per_second": 0.201
+}
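The aggregate counts implied by these throughput numbers can be recovered directly from the committed file. A small sketch, assuming all_results.json sits in the current directory:

# Small sketch: derive approximate sample counts from all_results.json.
import json

with open("all_results.json") as f:
    results = json.load(f)

# Throughput * runtime recovers the approximate number of samples processed.
train_samples = results["train_samples_per_second"] * results["train_runtime"]  # ~284,000
eval_samples = results["eval_samples_per_second"] * results["eval_runtime"]     # ~2,870

print(f"eval_loss={results['eval_loss']:.4f}, train_loss={results['train_loss']:.4f}")
print(f"~{train_samples:,.0f} training samples, ~{eval_samples:,.0f} eval samples over ~1 epoch")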
eval_results.json
ADDED
@@ -0,0 +1,7 @@
+{
+    "epoch": 0.9999155096178218,
+    "eval_loss": 0.14257754385471344,
+    "eval_runtime": 56.841,
+    "eval_samples_per_second": 50.492,
+    "eval_steps_per_second": 6.316
+}
train_results.json
ADDED
@@ -0,0 +1,8 @@
+{
+    "epoch": 0.9999155096178218,
+    "total_flos": 995060424065024.0,
+    "train_loss": 0.16356557928304405,
+    "train_runtime": 22100.2501,
+    "train_samples_per_second": 12.853,
+    "train_steps_per_second": 0.201
+}
trainer_state.json
ADDED
The diff for this file is too large to render.
training_eval_loss.png
ADDED
training_loss.png
ADDED