update base models
Note: this view is limited to 50 files because the commit contains too many changes; see the raw diff for the full file list.
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C1-encoder_classification-C1-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only}/.hydra/config.yaml +2 -2
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C3-encoder_classification-C3-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only}/.hydra/hydra.yaml +6 -5
- runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/.hydra/overrides.yaml +1 -0
- runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/bootstrap_confidence_intervals.csv +2 -0
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C1-encoder_classification-C1-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only}/evaluation_results.csv +2 -2
- runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only_inference_results.jsonl +0 -0
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C1-encoder_classification-C1-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only}/run_inference_experiment.log +105 -51
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C2-encoder_classification-C2-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only}/.hydra/config.yaml +4 -4
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C1-encoder_classification-C1-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only}/.hydra/hydra.yaml +6 -5
- runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only/.hydra/overrides.yaml +1 -0
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C1-encoder_classification-C1-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only}/bootstrap_confidence_intervals.csv +1 -1
- runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only/evaluation_results.csv +2 -0
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C2-encoder_classification-C2-essay_only/jbcs2025_bertimbau_base-C2-encoder_classification-C2-essay_only_inference_results.jsonl → jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only/jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only_inference_results.jsonl} +0 -0
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C3-encoder_classification-C3-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only}/run_inference_experiment.log +108 -54
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C3-encoder_classification-C3-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only}/.hydra/config.yaml +4 -4
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C2-encoder_classification-C2-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only}/.hydra/hydra.yaml +6 -5
- runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only/.hydra/overrides.yaml +1 -0
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C3-encoder_classification-C3-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only}/bootstrap_confidence_intervals.csv +1 -1
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C4-encoder_classification-C4-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only}/evaluation_results.csv +2 -2
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C3-encoder_classification-C3-essay_only/jbcs2025_bertimbau_base-C3-encoder_classification-C3-essay_only_inference_results.jsonl → jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only/jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only_inference_results.jsonl} +0 -0
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C2-encoder_classification-C2-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only}/run_inference_experiment.log +108 -54
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C4-encoder_classification-C4-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only}/.hydra/config.yaml +4 -4
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C4-encoder_classification-C4-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only}/.hydra/hydra.yaml +6 -5
- runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only/.hydra/overrides.yaml +1 -0
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C2-encoder_classification-C2-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only}/bootstrap_confidence_intervals.csv +1 -1
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C3-encoder_classification-C3-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only}/evaluation_results.csv +2 -2
- runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only/jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only_inference_results.jsonl +0 -0
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C4-encoder_classification-C4-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only}/run_inference_experiment.log +107 -53
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C5-encoder_classification-C5-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only}/.hydra/config.yaml +4 -4
- runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/.hydra/hydra.yaml +157 -0
- runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/.hydra/overrides.yaml +1 -0
- runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/bootstrap_confidence_intervals.csv +2 -0
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C2-encoder_classification-C2-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only}/evaluation_results.csv +2 -2
- runs/base_models/{mbert/jbcs2025_mbert_base-C5-encoder_classification-C5-essay_only/jbcs2025_mbert_base-C5-encoder_classification-C5-essay_only_inference_results.jsonl → bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only_inference_results.jsonl} +0 -0
- runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C5-encoder_classification-C5-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only}/run_inference_experiment.log +107 -53
- runs/base_models/bertimbau/jbcs2025_bertimbau_base-C1-encoder_classification-C1-essay_only/.hydra/overrides.yaml +0 -1
- runs/base_models/bertimbau/jbcs2025_bertimbau_base-C2-encoder_classification-C2-essay_only/.hydra/overrides.yaml +0 -1
- runs/base_models/bertimbau/jbcs2025_bertimbau_base-C3-encoder_classification-C3-essay_only/.hydra/overrides.yaml +0 -1
- runs/base_models/bertimbau/jbcs2025_bertimbau_base-C4-encoder_classification-C4-essay_only/.hydra/overrides.yaml +0 -1
- runs/base_models/bertimbau/jbcs2025_bertimbau_base-C5-encoder_classification-C5-essay_only/.hydra/hydra.yaml +0 -156
- runs/base_models/bertimbau/jbcs2025_bertimbau_base-C5-encoder_classification-C5-essay_only/.hydra/overrides.yaml +0 -1
- runs/base_models/bertimbau/jbcs2025_bertimbau_base-C5-encoder_classification-C5-essay_only/bootstrap_confidence_intervals.csv +0 -2
- runs/base_models/bertimbau/jbcs2025_bertimbau_base-C5-encoder_classification-C5-essay_only/evaluation_results.csv +0 -2
- runs/base_models/bertugues/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/.hydra/config.yaml +41 -0
- runs/base_models/bertugues/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/.hydra/hydra.yaml +157 -0
- runs/base_models/bertugues/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/.hydra/overrides.yaml +1 -0
- runs/base_models/bertugues/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/bootstrap_confidence_intervals.csv +2 -0
- runs/base_models/bertugues/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/evaluation_results.csv +2 -0
- runs/base_models/{bertimbau/jbcs2025_bertimbau_base-C1-encoder_classification-C1-essay_only/jbcs2025_bertimbau_base-C1-encoder_classification-C1-essay_only_inference_results.jsonl → bertugues/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only_inference_results.jsonl} +0 -0
- runs/base_models/bertugues/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/run_inference_experiment.log +250 -0
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C1-encoder_classification-C1-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only}/.hydra/config.yaml
RENAMED
@@ -20,12 +20,12 @@ post_training_results:
 model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
 experiments:
   model:
-    name: kamel-usp/
+    name: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only
     type: encoder_classification
     num_labels: 6
     output_dir: ./results/
     logging_dir: ./logs/
-    best_model_dir:
+    best_model_dir: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only
   tokenizer:
     name: neuralmind/bert-base-portuguese-cased
   dataset:
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C3-encoder_classification-C3-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only}/.hydra/hydra.yaml
RENAMED
@@ -1,6 +1,6 @@
 hydra:
   run:
-    dir:
+    dir: inference_output/2025-07-10/00-30-43
   sweep:
     dir: multirun/${now:%Y-%m-%d}/${now:%H-%M-%S}
     subdir: ${hydra.job.num}
@@ -110,13 +110,14 @@ hydra:
   output_subdir: .hydra
   overrides:
     hydra:
+    - hydra.run.dir=inference_output/2025-07-10/00-30-43
     - hydra.mode=RUN
     task:
-    - experiments=
+    - experiments=temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only
   job:
     name: run_inference_experiment
     chdir: null
-    override_dirname: experiments=
+    override_dirname: experiments=temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only
     id: ???
     num: ???
     config_name: config
@@ -141,9 +142,9 @@ hydra:
   - path: ''
     schema: structured
     provider: schema
-  output_dir: /workspace/jbcs2025/
+  output_dir: /workspace/jbcs2025/inference_output/2025-07-10/00-30-43
   choices:
-    experiments:
+    experiments: temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only
     hydra/env: default
     hydra/callbacks: null
     hydra/job_logging: default
runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/.hydra/overrides.yaml
ADDED
@@ -0,0 +1 @@
+- experiments=temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only
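For context: each overrides.yaml above is simply Hydra's record of the command-line overrides the inference run was launched with. A minimal sketch of replaying one of these runs through Hydra's compose API — `config_path="configs"` is an assumption about the repo layout; `config_name` and the override string are taken verbatim from the .hydra files in this commit:

```python
# Hypothetical replay of a recorded run config; config_path is assumed,
# config_name and the override come from the .hydra/ files above.
from hydra import initialize, compose

with initialize(config_path="configs", version_base=None):
    cfg = compose(
        config_name="config",
        overrides=[
            "experiments=temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only"
        ],
    )
    print(cfg.experiments.model.name)
```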
runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/bootstrap_confidence_intervals.csv
ADDED
@@ -0,0 +1,2 @@
+experiment_id,timestamp,QWK_mean,QWK_lower_95ci,QWK_upper_95ci,QWK_ci_width,Macro_F1_mean,Macro_F1_lower_95ci,Macro_F1_upper_95ci,Macro_F1_ci_width,Weighted_F1_mean,Weighted_F1_lower_95ci,Weighted_F1_upper_95ci,Weighted_F1_ci_width
+jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only,2025-07-10 00:30:49,0.5958170760500687,0.5050211692312583,0.6837528419789116,0.17873167274765334,0.40342568031357534,0.29995035727435776,0.5366313750336464,0.23668101775928868,0.5194883832140518,0.43316166991474553,0.6049356141130027,0.17177394419825714
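The CI files report, per metric, the bootstrap mean plus a 95% interval and its width. The logs below confirm the metric list (QWK, Macro_F1, Weighted_F1) but not the exact resampling code, so the following is a minimal sketch assuming a plain percentile bootstrap over the test essays:

```python
# Sketch of a percentile bootstrap over per-essay predictions
# (assumed procedure, not the repo's actual implementation).
import numpy as np
from sklearn.metrics import cohen_kappa_score, f1_score

METRICS = {
    "QWK": lambda t, p: cohen_kappa_score(t, p, weights="quadratic"),
    "Macro_F1": lambda t, p: f1_score(t, p, average="macro"),
    "Weighted_F1": lambda t, p: f1_score(t, p, average="weighted"),
}

def bootstrap_ci(y_true, y_pred, n_boot=1000, seed=42):
    """Bootstrap mean and 95% percentile CI for each metric."""
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    out = {}
    for name, metric in METRICS.items():
        samples = []
        for _ in range(n_boot):
            # Resample essays with replacement, keeping (true, pred) pairs aligned.
            idx = rng.integers(0, len(y_true), len(y_true))
            samples.append(metric(y_true[idx], y_pred[idx]))
        samples = np.asarray(samples)
        lo, hi = np.percentile(samples, [2.5, 97.5])
        out[name] = {"mean": samples.mean(), "lower_95ci": lo,
                     "upper_95ci": hi, "ci_width": hi - lo}
    return out
```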
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C1-encoder_classification-C1-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only}/evaluation_results.csv
RENAMED
@@ -1,2 +1,2 @@
-accuracy,RMSE,QWK,HDIV,Macro_F1,Micro_F1,Weighted_F1,TP_0,TN_0,FP_0,FN_0,TP_1,TN_1,FP_1,FN_1,TP_2,TN_2,FP_2,FN_2,TP_3,TN_3,FP_3,FN_3,TP_4,TN_4,FP_4,FN_4,TP_5,TN_5,FP_5,FN_5,timestamp,id
-0.
+accuracy,RMSE,QWK,HDIV,Macro_F1,Micro_F1,Weighted_F1,TP_0,TN_0,FP_0,FN_0,TP_1,TN_1,FP_1,FN_1,TP_2,TN_2,FP_2,FN_2,TP_3,TN_3,FP_3,FN_3,TP_4,TN_4,FP_4,FN_4,TP_5,TN_5,FP_5,FN_5,timestamp,id
+0.5144927536231884,31.20757990421976,0.5980582524271845,0.007246376811594235,0.37408319849679,0.5144927536231884,0.51825410693578,0,137,0,1,0,138,0,0,5,115,13,5,25,65,7,41,34,57,30,17,7,111,17,3,2025-07-10 00:30:49,jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only
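On the column layout: the TP_k/TN_k/FP_k/FN_k columns are one-vs-rest counts for each of the six grade labels, i.e. a flattened 6-class confusion matrix over the 138 test essays. A sketch of how equivalent counts can be produced with scikit-learn (the repo's own code may differ):

```python
# One-vs-rest confusion counts matching the TP_k/TN_k/FP_k/FN_k columns.
import numpy as np
from sklearn.metrics import multilabel_confusion_matrix

def per_class_counts(y_true, y_pred, n_labels=6):
    # One 2x2 matrix per label, laid out as [[TN, FP], [FN, TP]].
    mcm = multilabel_confusion_matrix(y_true, y_pred, labels=np.arange(n_labels))
    cells = {"TN": (0, 0), "FP": (0, 1), "FN": (1, 0), "TP": (1, 1)}
    return {f"{name}_{k}": int(mcm[k][i, j])
            for k in range(n_labels) for name, (i, j) in cells.items()}
```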
runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only_inference_results.jsonl
ADDED
The diff for this file is too large to render; see the raw diff.
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C1-encoder_classification-C1-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only}/run_inference_experiment.log
RENAMED
@@ -1,5 +1,5 @@
-[2025-
-[2025-
+[2025-07-10 00:30:49,125][__main__][INFO] - Starting inference experiment
+[2025-07-10 00:30:49,127][__main__][INFO] - cache_dir: /tmp/
 dataset:
   name: kamel-usp/aes_enem_dataset
   split: JBCS2025
@@ -21,12 +21,12 @@ post_training_results:
 model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
 experiments:
   model:
-    name: kamel-usp/
+    name: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only
    type: encoder_classification
    num_labels: 6
    output_dir: ./results/
    logging_dir: ./logs/
-    best_model_dir:
+    best_model_dir: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only
   tokenizer:
     name: neuralmind/bert-base-portuguese-cased
   dataset:
@@ -41,9 +41,9 @@ experiments:
     gradient_accumulation_steps: 1
     gradient_checkpointing: false

-[2025-
-[2025-
-[2025-
+[2025-07-10 00:30:49,129][__main__][INFO] - Running inference with fine-tuned HF model
+[2025-07-10 00:30:53,775][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
+[2025-07-10 00:30:53,776][transformers.configuration_utils][INFO] - Model config BertConfig {
   "architectures": [
     "BertForMaskedLM"
   ],
@@ -68,20 +68,14 @@ experiments:
   "pooler_size_per_head": 128,
   "pooler_type": "first_token_transform",
   "position_embedding_type": "absolute",
-  "transformers_version": "4.53.
+  "transformers_version": "4.53.1",
   "type_vocab_size": 2,
   "use_cache": true,
   "vocab_size": 29794
 }

-[2025-
-[2025-
-[2025-06-30 23:51:46,722][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/added_tokens.json
-[2025-06-30 23:51:46,722][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/special_tokens_map.json
-[2025-06-30 23:51:46,722][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/tokenizer_config.json
-[2025-06-30 23:51:46,722][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
-[2025-06-30 23:51:46,722][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
-[2025-06-30 23:51:46,723][transformers.configuration_utils][INFO] - Model config BertConfig {
+[2025-07-10 00:30:54,000][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
+[2025-07-10 00:30:54,000][transformers.configuration_utils][INFO] - Model config BertConfig {
   "architectures": [
     "BertForMaskedLM"
   ],
@@ -106,14 +100,20 @@ experiments:
   "pooler_size_per_head": 128,
   "pooler_type": "first_token_transform",
   "position_embedding_type": "absolute",
-  "transformers_version": "4.53.
+  "transformers_version": "4.53.1",
   "type_vocab_size": 2,
   "use_cache": true,
   "vocab_size": 29794
 }

-[2025-
-[2025-
+[2025-07-10 00:30:54,217][transformers.tokenization_utils_base][INFO] - loading file vocab.txt from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/vocab.txt
+[2025-07-10 00:30:54,217][transformers.tokenization_utils_base][INFO] - loading file tokenizer.json from cache at None
+[2025-07-10 00:30:54,217][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/added_tokens.json
+[2025-07-10 00:30:54,217][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/special_tokens_map.json
+[2025-07-10 00:30:54,217][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/tokenizer_config.json
+[2025-07-10 00:30:54,218][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
+[2025-07-10 00:30:54,218][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
+[2025-07-10 00:30:54,218][transformers.configuration_utils][INFO] - Model config BertConfig {
   "architectures": [
     "BertForMaskedLM"
   ],
@@ -138,18 +138,73 @@ experiments:
   "pooler_size_per_head": 128,
   "pooler_type": "first_token_transform",
   "position_embedding_type": "absolute",
-  "transformers_version": "4.53.
+  "transformers_version": "4.53.1",
   "type_vocab_size": 2,
   "use_cache": true,
   "vocab_size": 29794
 }

-[2025-
-[2025-
-
-
-
-
+[2025-07-10 00:30:54,254][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
+[2025-07-10 00:30:54,255][transformers.configuration_utils][INFO] - Model config BertConfig {
+  "architectures": [
+    "BertForMaskedLM"
+  ],
+  "attention_probs_dropout_prob": 0.1,
+  "classifier_dropout": null,
+  "directionality": "bidi",
+  "hidden_act": "gelu",
+  "hidden_dropout_prob": 0.1,
+  "hidden_size": 768,
+  "initializer_range": 0.02,
+  "intermediate_size": 3072,
+  "layer_norm_eps": 1e-12,
+  "max_position_embeddings": 512,
+  "model_type": "bert",
+  "num_attention_heads": 12,
+  "num_hidden_layers": 12,
+  "output_past": true,
+  "pad_token_id": 0,
+  "pooler_fc_size": 768,
+  "pooler_num_attention_heads": 12,
+  "pooler_num_fc_layers": 3,
+  "pooler_size_per_head": 128,
+  "pooler_type": "first_token_transform",
+  "position_embedding_type": "absolute",
+  "transformers_version": "4.53.1",
+  "type_vocab_size": 2,
+  "use_cache": true,
+  "vocab_size": 29794
+}
+
+[2025-07-10 00:30:54,274][__main__][INFO] - Tokenizer function parameters- Padding:longest; Truncation: True; Use Full Context: False
+[2025-07-10 00:30:54,745][__main__][INFO] -
+Token statistics for 'train' split:
+[2025-07-10 00:30:54,745][__main__][INFO] - Total examples: 500
+[2025-07-10 00:30:54,745][__main__][INFO] - Min tokens: 512
+[2025-07-10 00:30:54,746][__main__][INFO] - Max tokens: 512
+[2025-07-10 00:30:54,746][__main__][INFO] - Avg tokens: 512.00
+[2025-07-10 00:30:54,746][__main__][INFO] - Std tokens: 0.00
+[2025-07-10 00:30:54,858][__main__][INFO] -
+Token statistics for 'validation' split:
+[2025-07-10 00:30:54,858][__main__][INFO] - Total examples: 132
+[2025-07-10 00:30:54,858][__main__][INFO] - Min tokens: 512
+[2025-07-10 00:30:54,858][__main__][INFO] - Max tokens: 512
+[2025-07-10 00:30:54,858][__main__][INFO] - Avg tokens: 512.00
+[2025-07-10 00:30:54,858][__main__][INFO] - Std tokens: 0.00
+[2025-07-10 00:30:54,966][__main__][INFO] -
+Token statistics for 'test' split:
+[2025-07-10 00:30:54,966][__main__][INFO] - Total examples: 138
+[2025-07-10 00:30:54,966][__main__][INFO] - Min tokens: 512
+[2025-07-10 00:30:54,966][__main__][INFO] - Max tokens: 512
+[2025-07-10 00:30:54,966][__main__][INFO] - Avg tokens: 512.00
+[2025-07-10 00:30:54,966][__main__][INFO] - Std tokens: 0.00
+[2025-07-10 00:30:54,966][__main__][INFO] - If token statistics are the same (max, avg, min) keep in mind that this is due to batched tokenization and padding.
+[2025-07-10 00:30:54,966][__main__][INFO] - Model max length: 512. If it is the same as stats, then there is a high chance that sequences are being truncated.
+[2025-07-10 00:30:54,967][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only
+[2025-07-10 00:30:54,967][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only
+[2025-07-10 00:30:55,690][__main__][INFO] - Model need ≈ 1.36 GiB to run inference and 2.58 for training
+[2025-07-10 00:30:55,962][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--kamel-usp--jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only/snapshots/7ae84ea1c7bb39379d09e28c0d1de9ed08d5c308/config.json
+[2025-07-10 00:30:55,963][transformers.configuration_utils][INFO] - Model config BertConfig {
   "architectures": [
     "BertForSequenceClassification"
   ],
@@ -190,37 +245,36 @@ experiments:
   "pooler_size_per_head": 128,
   "pooler_type": "first_token_transform",
   "position_embedding_type": "absolute",
-  "problem_type": "single_label_classification",
   "torch_dtype": "float32",
-  "transformers_version": "4.53.
+  "transformers_version": "4.53.1",
   "type_vocab_size": 2,
   "use_cache": true,
   "vocab_size": 29794
 }

-[2025-
-[2025-
-[2025-
-[2025-
+[2025-07-10 00:30:56,173][transformers.modeling_utils][INFO] - loading weights file model.safetensors from cache at /tmp/models--kamel-usp--jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only/snapshots/7ae84ea1c7bb39379d09e28c0d1de9ed08d5c308/model.safetensors
+[2025-07-10 00:30:56,173][transformers.modeling_utils][INFO] - Will use torch_dtype=torch.float32 as defined in model's config object
+[2025-07-10 00:30:56,173][transformers.modeling_utils][INFO] - Instantiating BertForSequenceClassification model under default dtype torch.float32.
+[2025-07-10 00:30:57,523][transformers.modeling_utils][INFO] - All model checkpoint weights were used when initializing BertForSequenceClassification.

-[2025-
+[2025-07-10 00:30:57,523][transformers.modeling_utils][INFO] - All the weights of BertForSequenceClassification were initialized from the model checkpoint at kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only.
 If your task is similar to the task the model of the checkpoint was trained on, you can already use BertForSequenceClassification for predictions without further training.
-[2025-
-[2025-
-[2025-
-[2025-
-[2025-
-[2025-
-[2025-
+[2025-07-10 00:30:57,533][transformers.training_args][INFO] - PyTorch: setting up devices
+[2025-07-10 00:30:57,557][transformers.training_args][INFO] - The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
+[2025-07-10 00:30:57,564][transformers.trainer][INFO] - You have loaded a model on multiple GPUs. `is_model_parallel` attribute will be force-set to `True` to avoid any unexpected behavior such as device placement mismatching.
+[2025-07-10 00:30:57,589][transformers.trainer][INFO] - Using auto half precision backend
+[2025-07-10 00:31:00,916][__main__][INFO] - Running inference on test dataset
+[2025-07-10 00:31:00,917][transformers.trainer][INFO] - The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: supporting_text, id, prompt, reference, essay_year, id_prompt, grades, essay_text. If supporting_text, id, prompt, reference, essay_year, id_prompt, grades, essay_text are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
+[2025-07-10 00:31:00,923][transformers.trainer][INFO] -
 ***** Running Prediction *****
-[2025-
-[2025-
-[2025-
-[2025-
-[2025-
-[2025-
-[2025-
-[2025-
-[2025-
-[2025-
-[2025-
+[2025-07-10 00:31:00,924][transformers.trainer][INFO] - Num examples = 138
+[2025-07-10 00:31:00,924][transformers.trainer][INFO] - Batch size = 16
+[2025-07-10 00:31:01,737][__main__][INFO] - Inference results saved to jbcs2025_bert-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only_inference_results.jsonl
+[2025-07-10 00:31:01,738][__main__][INFO] - Computing bootstrap confidence intervals for metrics: ['QWK', 'Macro_F1', 'Weighted_F1']
+[2025-07-10 00:33:14,131][__main__][INFO] - Bootstrap CI results saved to bootstrap_confidence_intervals.csv
+[2025-07-10 00:33:14,131][__main__][INFO] - Bootstrap Confidence Intervals (95%):
+[2025-07-10 00:33:14,131][__main__][INFO] - QWK: 0.5958 [0.5050, 0.6838]
+[2025-07-10 00:33:14,131][__main__][INFO] - Macro_F1: 0.4034 [0.3000, 0.5366]
+[2025-07-10 00:33:14,131][__main__][INFO] - Weighted_F1: 0.5195 [0.4332, 0.6049]
+[2025-07-10 00:33:14,132][__main__][INFO] - Inference results: {'accuracy': 0.5144927536231884, 'RMSE': 31.20757990421976, 'QWK': 0.5980582524271845, 'HDIV': 0.007246376811594235, 'Macro_F1': 0.37408319849679, 'Micro_F1': 0.5144927536231884, 'Weighted_F1': 0.51825410693578, 'TP_0': np.int64(0), 'TN_0': np.int64(137), 'FP_0': np.int64(0), 'FN_0': np.int64(1), 'TP_1': np.int64(0), 'TN_1': np.int64(138), 'FP_1': np.int64(0), 'FN_1': np.int64(0), 'TP_2': np.int64(5), 'TN_2': np.int64(115), 'FP_2': np.int64(13), 'FN_2': np.int64(5), 'TP_3': np.int64(25), 'TN_3': np.int64(65), 'FP_3': np.int64(7), 'FN_3': np.int64(41), 'TP_4': np.int64(34), 'TN_4': np.int64(57), 'FP_4': np.int64(30), 'FN_4': np.int64(17), 'TP_5': np.int64(7), 'TN_5': np.int64(111), 'FP_5': np.int64(17), 'FN_5': np.int64(3)}
+[2025-07-10 00:33:14,133][__main__][INFO] - Inference experiment completed
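One detail worth flagging in the new log: all three splits report min = max = avg = 512 tokens, and the script itself warns this is an artifact of batched tokenization plus padding, with truncation at the 512-token model limit. A small illustration of why the statistics saturate (tokenizer name from the config above; the essay strings are placeholders):

```python
# Why min == max == 512: with truncation=True sequences are capped at the
# model's 512-token limit, and padding="longest" pads the rest of the batch
# up to that cap whenever any essay exceeds it.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("neuralmind/bert-base-portuguese-cased")
essays = ["Texto curto.", "palavra " * 2000]  # placeholder essays, one very long
batch = tok(essays, padding="longest", truncation=True)
print([len(ids) for ids in batch["input_ids"]])  # -> [512, 512]
```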
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C2-encoder_classification-C2-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only}/.hydra/config.yaml
RENAMED
@@ -20,12 +20,12 @@ post_training_results:
 model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
 experiments:
   model:
-    name: kamel-usp/
+    name: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only
     type: encoder_classification
     num_labels: 6
-    output_dir: ./results/
-    logging_dir: ./logs/
-    best_model_dir:
+    output_dir: ./results/
+    logging_dir: ./logs/
+    best_model_dir: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only
   tokenizer:
     name: neuralmind/bert-base-portuguese-cased
   dataset:
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C1-encoder_classification-C1-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only}/.hydra/hydra.yaml
RENAMED
@@ -1,6 +1,6 @@
 hydra:
   run:
-    dir:
+    dir: inference_output/2025-07-10/00-33-18
   sweep:
     dir: multirun/${now:%Y-%m-%d}/${now:%H-%M-%S}
     subdir: ${hydra.job.num}
@@ -110,13 +110,14 @@ hydra:
   output_subdir: .hydra
   overrides:
     hydra:
+    - hydra.run.dir=inference_output/2025-07-10/00-33-18
     - hydra.mode=RUN
     task:
-    - experiments=
+    - experiments=temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only
   job:
     name: run_inference_experiment
     chdir: null
-    override_dirname: experiments=
+    override_dirname: experiments=temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only
     id: ???
     num: ???
     config_name: config
@@ -141,9 +142,9 @@ hydra:
   - path: ''
     schema: structured
     provider: schema
-  output_dir: /workspace/jbcs2025/
+  output_dir: /workspace/jbcs2025/inference_output/2025-07-10/00-33-18
   choices:
-    experiments:
+    experiments: temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only
     hydra/env: default
     hydra/callbacks: null
     hydra/job_logging: default
runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only/.hydra/overrides.yaml
ADDED
@@ -0,0 +1 @@
+- experiments=temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C1-encoder_classification-C1-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only}/bootstrap_confidence_intervals.csv
RENAMED
@@ -1,2 +1,2 @@
 experiment_id,timestamp,QWK_mean,QWK_lower_95ci,QWK_upper_95ci,QWK_ci_width,Macro_F1_mean,Macro_F1_lower_95ci,Macro_F1_upper_95ci,Macro_F1_ci_width,Weighted_F1_mean,Weighted_F1_lower_95ci,Weighted_F1_upper_95ci,Weighted_F1_ci_width
-
+jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only,2025-07-10 00:33:24,0.34458059523975704,0.1923734854942415,0.48687400610895387,0.2945005206147124,0.2628237843641066,0.18370517105663375,0.3615635882194743,0.17785841716284057,0.34854638571439245,0.2694481582021241,0.4294118073125417,0.1599636491104176
runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only/evaluation_results.csv
ADDED
@@ -0,0 +1,2 @@
+accuracy,RMSE,QWK,HDIV,Macro_F1,Micro_F1,Weighted_F1,TP_0,TN_0,FP_0,FN_0,TP_1,TN_1,FP_1,FN_1,TP_2,TN_2,FP_2,FN_2,TP_3,TN_3,FP_3,FN_3,TP_4,TN_4,FP_4,FN_4,TP_5,TN_5,FP_5,FN_5,timestamp,id
+0.34782608695652173,63.519938670433646,0.3477835723598436,0.1159420289855072,0.24903083028083028,0.34782608695652173,0.34937224611137657,0,137,0,1,17,70,33,18,1,124,9,4,19,66,21,32,6,92,20,20,5,111,7,15,2025-07-10 00:33:24,jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C2-encoder_classification-C2-essay_only/jbcs2025_bertimbau_base-C2-encoder_classification-C2-essay_only_inference_results.jsonl → jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only/jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only_inference_results.jsonl}
RENAMED
The diff for this file is too large to render; see the raw diff.
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C3-encoder_classification-C3-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only}/run_inference_experiment.log
RENAMED
|
@@ -1,5 +1,5 @@
|
|
| 1 |
-
[2025-
|
| 2 |
-
[2025-
|
| 3 |
dataset:
|
| 4 |
name: kamel-usp/aes_enem_dataset
|
| 5 |
split: JBCS2025
|
|
@@ -21,16 +21,16 @@ post_training_results:
|
|
| 21 |
model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
|
| 22 |
experiments:
|
| 23 |
model:
|
| 24 |
-
name: kamel-usp/
|
| 25 |
type: encoder_classification
|
| 26 |
num_labels: 6
|
| 27 |
-
output_dir: ./results/
|
| 28 |
-
logging_dir: ./logs/
|
| 29 |
-
best_model_dir:
|
| 30 |
tokenizer:
|
| 31 |
name: neuralmind/bert-base-portuguese-cased
|
| 32 |
dataset:
|
| 33 |
-
grade_index:
|
| 34 |
use_full_context: false
|
| 35 |
training_params:
|
| 36 |
weight_decay: 0.01
|
|
@@ -41,9 +41,9 @@ experiments:
|
|
| 41 |
gradient_accumulation_steps: 1
|
| 42 |
gradient_checkpointing: false
|
| 43 |
|
| 44 |
-
[2025-
|
| 45 |
-
[2025-
|
| 46 |
-
[2025-
|
| 47 |
"architectures": [
|
| 48 |
"BertForMaskedLM"
|
| 49 |
],
|
|
@@ -68,20 +68,14 @@ experiments:
|
|
| 68 |
"pooler_size_per_head": 128,
|
| 69 |
"pooler_type": "first_token_transform",
|
| 70 |
"position_embedding_type": "absolute",
|
| 71 |
-
"transformers_version": "4.53.
|
| 72 |
"type_vocab_size": 2,
|
| 73 |
"use_cache": true,
|
| 74 |
"vocab_size": 29794
|
| 75 |
}
|
| 76 |
|
| 77 |
-
[2025-
|
| 78 |
-
[2025-
|
| 79 |
-
[2025-06-30 23:55:44,390][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/added_tokens.json
|
| 80 |
-
[2025-06-30 23:55:44,390][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/special_tokens_map.json
|
| 81 |
-
[2025-06-30 23:55:44,390][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/tokenizer_config.json
|
| 82 |
-
[2025-06-30 23:55:44,391][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
|
| 83 |
-
[2025-06-30 23:55:44,391][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
|
| 84 |
-
[2025-06-30 23:55:44,391][transformers.configuration_utils][INFO] - Model config BertConfig {
|
| 85 |
"architectures": [
|
| 86 |
"BertForMaskedLM"
|
| 87 |
],
|
|
@@ -106,14 +100,20 @@ experiments:
|
|
| 106 |
"pooler_size_per_head": 128,
|
| 107 |
"pooler_type": "first_token_transform",
|
| 108 |
"position_embedding_type": "absolute",
|
| 109 |
-
"transformers_version": "4.53.
|
| 110 |
"type_vocab_size": 2,
|
| 111 |
"use_cache": true,
|
| 112 |
"vocab_size": 29794
|
| 113 |
}
|
| 114 |
|
| 115 |
-
[2025-
|
| 116 |
-
[2025-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 117 |
"architectures": [
|
| 118 |
"BertForMaskedLM"
|
| 119 |
],
|
|
@@ -138,18 +138,73 @@ experiments:
|
|
| 138 |
"pooler_size_per_head": 128,
|
| 139 |
"pooler_type": "first_token_transform",
|
| 140 |
"position_embedding_type": "absolute",
|
| 141 |
-
"transformers_version": "4.53.
|
| 142 |
"type_vocab_size": 2,
|
| 143 |
"use_cache": true,
|
| 144 |
"vocab_size": 29794
|
| 145 |
}
|
| 146 |
|
| 147 |
-
[2025-
|
| 148 |
-
[2025-
|
| 149 |
-
|
| 150 |
-
|
| 151 |
-
|
| 152 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 153 |
"architectures": [
|
| 154 |
"BertForSequenceClassification"
|
| 155 |
],
|
|
@@ -190,37 +245,36 @@ experiments:
|
|
| 190 |
"pooler_size_per_head": 128,
|
| 191 |
"pooler_type": "first_token_transform",
|
| 192 |
"position_embedding_type": "absolute",
|
| 193 |
-
"problem_type": "single_label_classification",
|
| 194 |
"torch_dtype": "float32",
|
| 195 |
-
"transformers_version": "4.53.
|
| 196 |
"type_vocab_size": 2,
|
| 197 |
"use_cache": true,
|
| 198 |
"vocab_size": 29794
|
| 199 |
}
|
| 200 |
|
| 201 |
-
[2025-
|
| 202 |
-
[2025-
|
| 203 |
-
[2025-
|
| 204 |
-
[2025-
|
| 205 |
|
| 206 |
-
[2025-
|
| 207 |
If your task is similar to the task the model of the checkpoint was trained on, you can already use BertForSequenceClassification for predictions without further training.
|
| 208 |
-
[2025-
|
| 209 |
-
[2025-
|
| 210 |
-
[2025-
|
| 211 |
-
[2025-
|
| 212 |
-
[2025-
|
| 213 |
-
[2025-
|
| 214 |
-
[2025-
|
| 215 |
***** Running Prediction *****
|
| 216 |
-
[2025-
|
| 217 |
-
[2025-
|
| 218 |
-
[2025-
|
| 219 |
-
[2025-
|
| 220 |
-
[2025-
|
| 221 |
-
[2025-
|
| 222 |
-
[2025-
|
| 223 |
-
[2025-
|
| 224 |
-
[2025-
|
| 225 |
-
[2025-
|
| 226 |
-
[2025-
|
|
|
|
| 1 |
+
[2025-07-10 00:33:24,293][__main__][INFO] - Starting inference experiment
|
| 2 |
+
[2025-07-10 00:33:24,295][__main__][INFO] - cache_dir: /tmp/
|
| 3 |
dataset:
|
| 4 |
name: kamel-usp/aes_enem_dataset
|
| 5 |
split: JBCS2025
|
|
|
|
| 21 |
model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
|
| 22 |
experiments:
|
| 23 |
model:
|
| 24 |
+
name: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only
|
| 25 |
type: encoder_classification
|
| 26 |
num_labels: 6
|
| 27 |
+
output_dir: ./results/
|
| 28 |
+
logging_dir: ./logs/
|
| 29 |
+
best_model_dir: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only
|
| 30 |
tokenizer:
|
| 31 |
name: neuralmind/bert-base-portuguese-cased
|
| 32 |
dataset:
|
| 33 |
+
grade_index: 1
|
| 34 |
use_full_context: false
|
| 35 |
training_params:
|
| 36 |
weight_decay: 0.01
|
|
|
|
| 41 |
gradient_accumulation_steps: 1
|
| 42 |
gradient_checkpointing: false
|
| 43 |
|
| 44 |
+
[2025-07-10 00:33:24,297][__main__][INFO] - Running inference with fine-tuned HF model
|
| 45 |
+
[2025-07-10 00:33:29,586][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
|
| 46 |
+
[2025-07-10 00:33:29,587][transformers.configuration_utils][INFO] - Model config BertConfig {
|
| 47 |
"architectures": [
|
| 48 |
"BertForMaskedLM"
|
| 49 |
],
|
|
|
|
| 68 |
"pooler_size_per_head": 128,
|
| 69 |
"pooler_type": "first_token_transform",
|
| 70 |
"position_embedding_type": "absolute",
|
| 71 |
+
"transformers_version": "4.53.1",
|
| 72 |
"type_vocab_size": 2,
|
| 73 |
"use_cache": true,
|
| 74 |
"vocab_size": 29794
|
| 75 |
}
|
| 76 |
|
| 77 |
+
[2025-07-10 00:33:29,802][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
|
| 78 |
+
[2025-07-10 00:33:29,803][transformers.configuration_utils][INFO] - Model config BertConfig {
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 79 |
"architectures": [
|
| 80 |
"BertForMaskedLM"
|
| 81 |
],
|
|
|
|
| 100 |
"pooler_size_per_head": 128,
|
| 101 |
"pooler_type": "first_token_transform",
|
| 102 |
"position_embedding_type": "absolute",
|
| 103 |
+
"transformers_version": "4.53.1",
|
| 104 |
"type_vocab_size": 2,
|
| 105 |
"use_cache": true,
|
| 106 |
"vocab_size": 29794
|
| 107 |
}
|
| 108 |
|
| 109 |
+
[2025-07-10 00:33:29,988][transformers.tokenization_utils_base][INFO] - loading file vocab.txt from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/vocab.txt
|
| 110 |
+
[2025-07-10 00:33:29,988][transformers.tokenization_utils_base][INFO] - loading file tokenizer.json from cache at None
|
| 111 |
+
[2025-07-10 00:33:29,988][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/added_tokens.json
|
| 112 |
+
[2025-07-10 00:33:29,988][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/special_tokens_map.json
|
| 113 |
+
[2025-07-10 00:33:29,988][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/tokenizer_config.json
|
| 114 |
+
[2025-07-10 00:33:29,988][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
|
| 115 |
+
[2025-07-10 00:33:29,988][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
|
| 116 |
+
[2025-07-10 00:33:29,989][transformers.configuration_utils][INFO] - Model config BertConfig {
|
| 117 |
"architectures": [
|
| 118 |
"BertForMaskedLM"
|
| 119 |
],
|
|
|
|
| 138 |
"pooler_size_per_head": 128,
|
| 139 |
"pooler_type": "first_token_transform",
|
| 140 |
"position_embedding_type": "absolute",
|
| 141 |
+
"transformers_version": "4.53.1",
|
| 142 |
"type_vocab_size": 2,
|
| 143 |
"use_cache": true,
|
| 144 |
"vocab_size": 29794
|
| 145 |
}
|
| 146 |
|
| 147 |
+
[2025-07-10 00:33:30,019][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
|
| 148 |
+
[2025-07-10 00:33:30,019][transformers.configuration_utils][INFO] - Model config BertConfig {
|
| 149 |
+
"architectures": [
|
| 150 |
+
"BertForMaskedLM"
|
| 151 |
+
],
|
| 152 |
+
"attention_probs_dropout_prob": 0.1,
|
| 153 |
+
"classifier_dropout": null,
|
| 154 |
+
"directionality": "bidi",
|
| 155 |
+
"hidden_act": "gelu",
|
| 156 |
+
"hidden_dropout_prob": 0.1,
|
| 157 |
+
"hidden_size": 768,
|
| 158 |
+
"initializer_range": 0.02,
|
| 159 |
+
"intermediate_size": 3072,
|
| 160 |
+
"layer_norm_eps": 1e-12,
|
| 161 |
+
"max_position_embeddings": 512,
|
| 162 |
+
"model_type": "bert",
|
| 163 |
+
"num_attention_heads": 12,
|
| 164 |
+
"num_hidden_layers": 12,
|
| 165 |
+
"output_past": true,
|
| 166 |
+
"pad_token_id": 0,
|
| 167 |
+
"pooler_fc_size": 768,
|
| 168 |
+
"pooler_num_attention_heads": 12,
|
| 169 |
+
"pooler_num_fc_layers": 3,
|
| 170 |
+
"pooler_size_per_head": 128,
|
| 171 |
+
"pooler_type": "first_token_transform",
|
| 172 |
+
"position_embedding_type": "absolute",
|
| 173 |
+
"transformers_version": "4.53.1",
|
| 174 |
+
"type_vocab_size": 2,
|
| 175 |
+
"use_cache": true,
|
| 176 |
+
"vocab_size": 29794
|
| 177 |
+
}
|
| 178 |
+
|
| 179 |
+
[2025-07-10 00:33:30,037][__main__][INFO] - Tokenizer function parameters- Padding:longest; Truncation: True; Use Full Context: False
|
| 180 |
+
[2025-07-10 00:33:30,459][__main__][INFO] -
|
| 181 |
+
Token statistics for 'train' split:
|
| 182 |
+
[2025-07-10 00:33:30,459][__main__][INFO] - Total examples: 500
|
| 183 |
+
[2025-07-10 00:33:30,459][__main__][INFO] - Min tokens: 512
|
| 184 |
+
[2025-07-10 00:33:30,459][__main__][INFO] - Max tokens: 512
|
| 185 |
+
[2025-07-10 00:33:30,459][__main__][INFO] - Avg tokens: 512.00
|
| 186 |
+
[2025-07-10 00:33:30,459][__main__][INFO] - Std tokens: 0.00
|
| 187 |
+
[2025-07-10 00:33:30,550][__main__][INFO] -
|
| 188 |
+
Token statistics for 'validation' split:
|
| 189 |
+
[2025-07-10 00:33:30,550][__main__][INFO] - Total examples: 132
|
| 190 |
+
[2025-07-10 00:33:30,550][__main__][INFO] - Min tokens: 512
|
| 191 |
+
[2025-07-10 00:33:30,550][__main__][INFO] - Max tokens: 512
|
| 192 |
+
[2025-07-10 00:33:30,550][__main__][INFO] - Avg tokens: 512.00
|
| 193 |
+
[2025-07-10 00:33:30,550][__main__][INFO] - Std tokens: 0.00
|
| 194 |
+
[2025-07-10 00:33:30,644][__main__][INFO] -
|
| 195 |
+
Token statistics for 'test' split:
|
| 196 |
+
[2025-07-10 00:33:30,644][__main__][INFO] - Total examples: 138
|
| 197 |
+
[2025-07-10 00:33:30,644][__main__][INFO] - Min tokens: 512
|
| 198 |
+
[2025-07-10 00:33:30,644][__main__][INFO] - Max tokens: 512
|
| 199 |
+
[2025-07-10 00:33:30,644][__main__][INFO] - Avg tokens: 512.00
|
| 200 |
+
[2025-07-10 00:33:30,644][__main__][INFO] - Std tokens: 0.00
|
| 201 |
+
[2025-07-10 00:33:30,644][__main__][INFO] - If token statistics are the same (max, avg, min) keep in mind that this is due to batched tokenization and padding.
|
| 202 |
+
[2025-07-10 00:33:30,644][__main__][INFO] - Model max length: 512. If it is the same as stats, then there is a high chance that sequences are being truncated.
+ [2025-07-10 00:33:30,645][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only
+ [2025-07-10 00:33:30,645][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only
+ [2025-07-10 00:33:31,659][__main__][INFO] - Model need ≈ 1.36 GiB to run inference and 2.58 for training
+ [2025-07-10 00:33:32,456][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--kamel-usp--jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only/snapshots/05b2dbb38d9087976e945d31d5e052862b434715/config.json
+ [2025-07-10 00:33:32,457][transformers.configuration_utils][INFO] - Model config BertConfig {
  "architectures": [
  "BertForSequenceClassification"
  ],
  ...
  "pooler_size_per_head": 128,
  "pooler_type": "first_token_transform",
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
+ "transformers_version": "4.53.1",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 29794
  }

+ [2025-07-10 00:33:41,347][transformers.modeling_utils][INFO] - loading weights file model.safetensors from cache at /tmp/models--kamel-usp--jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only/snapshots/05b2dbb38d9087976e945d31d5e052862b434715/model.safetensors
+ [2025-07-10 00:33:41,348][transformers.modeling_utils][INFO] - Will use torch_dtype=torch.float32 as defined in model's config object
+ [2025-07-10 00:33:41,349][transformers.modeling_utils][INFO] - Instantiating BertForSequenceClassification model under default dtype torch.float32.
+ [2025-07-10 00:33:41,748][transformers.modeling_utils][INFO] - All model checkpoint weights were used when initializing BertForSequenceClassification.

+ [2025-07-10 00:33:41,748][transformers.modeling_utils][INFO] - All the weights of BertForSequenceClassification were initialized from the model checkpoint at kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only.
  If your task is similar to the task the model of the checkpoint was trained on, you can already use BertForSequenceClassification for predictions without further training.
+ [2025-07-10 00:33:41,758][transformers.training_args][INFO] - PyTorch: setting up devices
+ [2025-07-10 00:33:41,783][transformers.training_args][INFO] - The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
+ [2025-07-10 00:33:41,791][transformers.trainer][INFO] - You have loaded a model on multiple GPUs. `is_model_parallel` attribute will be force-set to `True` to avoid any unexpected behavior such as device placement mismatching.
+ [2025-07-10 00:33:41,818][transformers.trainer][INFO] - Using auto half precision backend
+ [2025-07-10 00:33:45,118][__main__][INFO] - Running inference on test dataset
+ [2025-07-10 00:33:45,119][transformers.trainer][INFO] - The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: reference, prompt, essay_year, id_prompt, grades, essay_text, supporting_text, id. If reference, prompt, essay_year, id_prompt, grades, essay_text, supporting_text, id are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
+ [2025-07-10 00:33:45,125][transformers.trainer][INFO] -
  ***** Running Prediction *****
+ [2025-07-10 00:33:45,125][transformers.trainer][INFO] - Num examples = 138
+ [2025-07-10 00:33:45,126][transformers.trainer][INFO] - Batch size = 16
+ [2025-07-10 00:33:45,907][__main__][INFO] - Inference results saved to jbcs2025_bert-base-portuguese-cased-encoder_classification-C2-essay_only-encoder_classification-C2-essay_only_inference_results.jsonl
+ [2025-07-10 00:33:45,908][__main__][INFO] - Computing bootstrap confidence intervals for metrics: ['QWK', 'Macro_F1', 'Weighted_F1']
+ [2025-07-10 00:35:53,783][__main__][INFO] - Bootstrap CI results saved to bootstrap_confidence_intervals.csv
+ [2025-07-10 00:35:53,783][__main__][INFO] - Bootstrap Confidence Intervals (95%):
+ [2025-07-10 00:35:53,783][__main__][INFO] - QWK: 0.3446 [0.1924, 0.4869]
+ [2025-07-10 00:35:53,783][__main__][INFO] - Macro_F1: 0.2628 [0.1837, 0.3616]
+ [2025-07-10 00:35:53,783][__main__][INFO] - Weighted_F1: 0.3485 [0.2694, 0.4294]
+ [2025-07-10 00:35:53,783][__main__][INFO] - Inference results: {'accuracy': 0.34782608695652173, 'RMSE': 63.519938670433646, 'QWK': 0.3477835723598436, 'HDIV': 0.1159420289855072, 'Macro_F1': 0.24903083028083028, 'Micro_F1': 0.34782608695652173, 'Weighted_F1': 0.34937224611137657, 'TP_0': np.int64(0), 'TN_0': np.int64(137), 'FP_0': np.int64(0), 'FN_0': np.int64(1), 'TP_1': np.int64(17), 'TN_1': np.int64(70), 'FP_1': np.int64(33), 'FN_1': np.int64(18), 'TP_2': np.int64(1), 'TN_2': np.int64(124), 'FP_2': np.int64(9), 'FN_2': np.int64(4), 'TP_3': np.int64(19), 'TN_3': np.int64(66), 'FP_3': np.int64(21), 'FN_3': np.int64(32), 'TP_4': np.int64(6), 'TN_4': np.int64(92), 'FP_4': np.int64(20), 'FN_4': np.int64(20), 'TP_5': np.int64(5), 'TN_5': np.int64(111), 'FP_5': np.int64(7), 'FN_5': np.int64(15)}
+ [2025-07-10 00:35:53,783][__main__][INFO] - Inference experiment completed
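The two-minute gap between saving the predictions and writing bootstrap_confidence_intervals.csv is the resampling loop: the 138 test predictions are drawn with replacement many times and each metric is recomputed per draw. A minimal percentile-bootstrap sketch of that computation; the resample count, the seed, and the use of scikit-learn's metric functions are assumptions, since the repo's implementation is not part of this diff:

import numpy as np
from sklearn.metrics import cohen_kappa_score, f1_score

def bootstrap_ci(y_true, y_pred, metric_fn, n_boot=1000, seed=42):
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), size=len(y_true))  # resample with replacement
        scores.append(metric_fn(y_true[idx], y_pred[idx]))
    lower, upper = np.percentile(scores, [2.5, 97.5])  # 95% percentile interval
    return float(np.mean(scores)), float(lower), float(upper)

metrics = {
    "QWK": lambda t, p: cohen_kappa_score(t, p, weights="quadratic"),
    "Macro_F1": lambda t, p: f1_score(t, p, average="macro"),
    "Weighted_F1": lambda t, p: f1_score(t, p, average="weighted"),
}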
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C3-encoder_classification-C3-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only}/.hydra/config.yaml
RENAMED
@@ -20,12 +20,12 @@ post_training_results:
  model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
  experiments:
    model:
-     name: kamel-usp/
+     name: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only
      type: encoder_classification
      num_labels: 6
-     output_dir: ./results/
-     logging_dir: ./logs/
-     best_model_dir:
+     output_dir: ./results/
+     logging_dir: ./logs/
+     best_model_dir: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only
    tokenizer:
      name: neuralmind/bert-base-portuguese-cased
    dataset:
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C2-encoder_classification-C2-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only}/.hydra/hydra.yaml
RENAMED
@@ -1,6 +1,6 @@
  hydra:
    run:
-     dir:
+     dir: inference_output/2025-07-10/00-35-58
    sweep:
      dir: multirun/${now:%Y-%m-%d}/${now:%H-%M-%S}
      subdir: ${hydra.job.num}
@@ -110,13 +110,14 @@ hydra:
    output_subdir: .hydra
    overrides:
      hydra:
+     - hydra.run.dir=inference_output/2025-07-10/00-35-58
      - hydra.mode=RUN
      task:
-     - experiments=
+     - experiments=temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only
    job:
      name: run_inference_experiment
      chdir: null
-     override_dirname: experiments=
+     override_dirname: experiments=temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only
      id: ???
      num: ???
      config_name: config
@@ -141,9 +142,9 @@ hydra:
    - path: ''
      schema: structured
      provider: schema
-   output_dir: /workspace/jbcs2025/
+   output_dir: /workspace/jbcs2025/inference_output/2025-07-10/00-35-58
    choices:
-     experiments:
+     experiments: temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only
      hydra/env: default
      hydra/callbacks: null
      hydra/job_logging: default
runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only/.hydra/overrides.yaml
ADDED
@@ -0,0 +1 @@
+ - experiments=temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only
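Read together with the hydra.yaml above, this override file records a launch along the lines of python run_inference_experiment.py experiments=temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only hydra.run.dir=inference_output/2025-07-10/00-35-58; the script file name is an assumption taken from the hydra job name run_inference_experiment, since only the overrides themselves are recorded in these files.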
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C3-encoder_classification-C3-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only}/bootstrap_confidence_intervals.csv
RENAMED
@@ -1,2 +1,2 @@
  experiment_id,timestamp,QWK_mean,QWK_lower_95ci,QWK_upper_95ci,QWK_ci_width,Macro_F1_mean,Macro_F1_lower_95ci,Macro_F1_upper_95ci,Macro_F1_ci_width,Weighted_F1_mean,Weighted_F1_lower_95ci,Weighted_F1_upper_95ci,Weighted_F1_ci_width
-
+ jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only,2025-07-10 00:36:03,0.34693349366334814,0.1990600571882507,0.4852465288821567,0.286186471693906,0.23118176088630918,0.16413028510813224,0.3143442286349,0.15021394352676778,0.2678663440815833,0.19402110750115098,0.34536043688479334,0.15133932938364236
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C4-encoder_classification-C4-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only}/evaluation_results.csv
RENAMED
@@ -1,2 +1,2 @@
- accuracy,RMSE,QWK,HDIV,Macro_F1,Micro_F1,Weighted_F1,TP_0,TN_0,FP_0,FN_0,TP_1,TN_1,FP_1,FN_1,TP_2,TN_2,FP_2,FN_2,TP_3,TN_3,FP_3,FN_3,TP_4,TN_4,FP_4,FN_4,TP_5,TN_5,FP_5,FN_5,timestamp,id
- 0.
+ accuracy,RMSE,QWK,HDIV,Macro_F1,Micro_F1,Weighted_F1,TP_0,TN_0,FP_0,FN_0,TP_1,TN_1,FP_1,FN_1,TP_2,TN_2,FP_2,FN_2,TP_3,TN_3,FP_3,FN_3,TP_4,TN_4,FP_4,FN_4,TP_5,TN_5,FP_5,FN_5,timestamp,id
+ 0.3115942028985507,51.07539184552491,0.3491384731480316,0.050724637681159424,0.21826640792158034,0.3115942028985507,0.26630624081898446,0,137,0,1,0,107,2,29,14,94,26,4,18,57,36,27,9,81,19,29,2,119,12,5,2025-07-10 00:36:03,jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only
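The CSV flattens the six-class confusion matrix into TP_k/TN_k/FP_k/FN_k columns, which makes the summary metrics reproducible from the row itself. A small sanity-check sketch (the parsing is illustrative; the column names are the ones in the header above):

import csv

def macro_f1_from_counts(row, n_classes=6):
    f1s = []
    for k in range(n_classes):
        tp, fp, fn = (float(row[f"{m}_{k}"]) for m in ("TP", "FP", "FN"))
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)  # per-class F1, then unweighted mean
    return sum(f1s) / n_classes

with open("evaluation_results.csv") as f:
    row = next(csv.DictReader(f))
    print(f"Macro_F1 = {macro_f1_from_counts(row):.4f}")  # ≈ 0.2183, matching the row above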
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C3-encoder_classification-C3-essay_only/jbcs2025_bertimbau_base-C3-encoder_classification-C3-essay_only_inference_results.jsonl → jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only/jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only_inference_results.jsonl}
RENAMED
The diff for this file is too large to render. See raw diff.
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C2-encoder_classification-C2-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only}/run_inference_experiment.log
RENAMED
@@ -1,5 +1,5 @@
- [2025-
- [2025-
+ [2025-07-10 00:36:03,801][__main__][INFO] - Starting inference experiment
+ [2025-07-10 00:36:03,803][__main__][INFO] - cache_dir: /tmp/
  dataset:
  name: kamel-usp/aes_enem_dataset
  split: JBCS2025
@@ -21,16 +21,16 @@ post_training_results:
  model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
  experiments:
  model:
- name: kamel-usp/
+ name: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only
  type: encoder_classification
  num_labels: 6
- output_dir: ./results/
- logging_dir: ./logs/
- best_model_dir:
+ output_dir: ./results/
+ logging_dir: ./logs/
+ best_model_dir: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only
  tokenizer:
  name: neuralmind/bert-base-portuguese-cased
  dataset:
- grade_index:
+ grade_index: 2
  use_full_context: false
  training_params:
  weight_decay: 0.01
@@ -41,9 +41,9 @@ experiments:
  gradient_accumulation_steps: 1
  gradient_checkpointing: false

- [2025-
- [2025-
- [2025-
+ [2025-07-10 00:36:03,805][__main__][INFO] - Running inference with fine-tuned HF model
+ [2025-07-10 00:36:09,107][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
+ [2025-07-10 00:36:09,108][transformers.configuration_utils][INFO] - Model config BertConfig {
  "architectures": [
  "BertForMaskedLM"
  ],
@@ -68,20 +68,14 @@ experiments:
  "pooler_size_per_head": 128,
  "pooler_type": "first_token_transform",
  "position_embedding_type": "absolute",
- "transformers_version": "4.53.
+ "transformers_version": "4.53.1",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 29794
  }

- [2025-
- [2025-
- [2025-06-30 23:53:37,279][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/added_tokens.json
- [2025-06-30 23:53:37,279][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/special_tokens_map.json
- [2025-06-30 23:53:37,279][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/tokenizer_config.json
- [2025-06-30 23:53:37,279][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
- [2025-06-30 23:53:37,279][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
- [2025-06-30 23:53:37,280][transformers.configuration_utils][INFO] - Model config BertConfig {
+ [2025-07-10 00:36:09,328][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
+ [2025-07-10 00:36:09,329][transformers.configuration_utils][INFO] - Model config BertConfig {
  "architectures": [
  "BertForMaskedLM"
  ],
@@ -106,14 +100,20 @@ experiments:
  "pooler_size_per_head": 128,
  "pooler_type": "first_token_transform",
  "position_embedding_type": "absolute",
- "transformers_version": "4.53.
+ "transformers_version": "4.53.1",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 29794
  }

- [2025-
- [2025-
+ [2025-07-10 00:36:09,524][transformers.tokenization_utils_base][INFO] - loading file vocab.txt from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/vocab.txt
+ [2025-07-10 00:36:09,524][transformers.tokenization_utils_base][INFO] - loading file tokenizer.json from cache at None
+ [2025-07-10 00:36:09,524][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/added_tokens.json
+ [2025-07-10 00:36:09,524][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/special_tokens_map.json
+ [2025-07-10 00:36:09,524][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/tokenizer_config.json
+ [2025-07-10 00:36:09,524][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
+ [2025-07-10 00:36:09,525][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
+ [2025-07-10 00:36:09,525][transformers.configuration_utils][INFO] - Model config BertConfig {
  "architectures": [
  "BertForMaskedLM"
  ],
@@ -138,18 +138,73 @@ experiments:
  "pooler_size_per_head": 128,
  "pooler_type": "first_token_transform",
  "position_embedding_type": "absolute",
- "transformers_version": "4.53.
+ "transformers_version": "4.53.1",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 29794
  }

- [2025-
- [2025-
-
-
-
-
+ [2025-07-10 00:36:09,555][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
+ [2025-07-10 00:36:09,556][transformers.configuration_utils][INFO] - Model config BertConfig {
+ "architectures": [
+ "BertForMaskedLM"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "classifier_dropout": null,
+ "directionality": "bidi",
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 768,
+ "initializer_range": 0.02,
+ "intermediate_size": 3072,
+ "layer_norm_eps": 1e-12,
+ "max_position_embeddings": 512,
+ "model_type": "bert",
+ "num_attention_heads": 12,
+ "num_hidden_layers": 12,
+ "output_past": true,
+ "pad_token_id": 0,
+ "pooler_fc_size": 768,
+ "pooler_num_attention_heads": 12,
+ "pooler_num_fc_layers": 3,
+ "pooler_size_per_head": 128,
+ "pooler_type": "first_token_transform",
+ "position_embedding_type": "absolute",
+ "transformers_version": "4.53.1",
+ "type_vocab_size": 2,
+ "use_cache": true,
+ "vocab_size": 29794
+ }
+
+ [2025-07-10 00:36:09,573][__main__][INFO] - Tokenizer function parameters- Padding:longest; Truncation: True; Use Full Context: False
+ [2025-07-10 00:36:09,998][__main__][INFO] -
+ Token statistics for 'train' split:
+ [2025-07-10 00:36:09,998][__main__][INFO] - Total examples: 500
+ [2025-07-10 00:36:09,998][__main__][INFO] - Min tokens: 512
+ [2025-07-10 00:36:09,998][__main__][INFO] - Max tokens: 512
+ [2025-07-10 00:36:09,998][__main__][INFO] - Avg tokens: 512.00
+ [2025-07-10 00:36:09,998][__main__][INFO] - Std tokens: 0.00
+ [2025-07-10 00:36:10,092][__main__][INFO] -
+ Token statistics for 'validation' split:
+ [2025-07-10 00:36:10,092][__main__][INFO] - Total examples: 132
+ [2025-07-10 00:36:10,092][__main__][INFO] - Min tokens: 512
+ [2025-07-10 00:36:10,092][__main__][INFO] - Max tokens: 512
+ [2025-07-10 00:36:10,092][__main__][INFO] - Avg tokens: 512.00
+ [2025-07-10 00:36:10,092][__main__][INFO] - Std tokens: 0.00
+ [2025-07-10 00:36:10,186][__main__][INFO] -
+ Token statistics for 'test' split:
+ [2025-07-10 00:36:10,186][__main__][INFO] - Total examples: 138
+ [2025-07-10 00:36:10,186][__main__][INFO] - Min tokens: 512
+ [2025-07-10 00:36:10,186][__main__][INFO] - Max tokens: 512
+ [2025-07-10 00:36:10,186][__main__][INFO] - Avg tokens: 512.00
+ [2025-07-10 00:36:10,186][__main__][INFO] - Std tokens: 0.00
+ [2025-07-10 00:36:10,186][__main__][INFO] - If token statistics are the same (max, avg, min) keep in mind that this is due to batched tokenization and padding.
+ [2025-07-10 00:36:10,186][__main__][INFO] - Model max length: 512. If it is the same as stats, then there is a high chance that sequences are being truncated.
+ [2025-07-10 00:36:10,187][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only
+ [2025-07-10 00:36:10,187][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only
+ [2025-07-10 00:36:11,179][__main__][INFO] - Model need ≈ 1.36 GiB to run inference and 2.58 for training
+ [2025-07-10 00:36:11,981][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--kamel-usp--jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only/snapshots/b3bbed41224b673570856cd0c37769f629b1161a/config.json
+ [2025-07-10 00:36:11,982][transformers.configuration_utils][INFO] - Model config BertConfig {
  "architectures": [
  "BertForSequenceClassification"
  ],
@@ -190,37 +245,36 @@ experiments:
  "pooler_size_per_head": 128,
  "pooler_type": "first_token_transform",
  "position_embedding_type": "absolute",
- "problem_type": "single_label_classification",
  "torch_dtype": "float32",
- "transformers_version": "4.53.
+ "transformers_version": "4.53.1",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 29794
  }

- [2025-
- [2025-
- [2025-
- [2025-
+ [2025-07-10 00:36:20,667][transformers.modeling_utils][INFO] - loading weights file model.safetensors from cache at /tmp/models--kamel-usp--jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only/snapshots/b3bbed41224b673570856cd0c37769f629b1161a/model.safetensors
+ [2025-07-10 00:36:20,668][transformers.modeling_utils][INFO] - Will use torch_dtype=torch.float32 as defined in model's config object
+ [2025-07-10 00:36:20,668][transformers.modeling_utils][INFO] - Instantiating BertForSequenceClassification model under default dtype torch.float32.
+ [2025-07-10 00:36:21,051][transformers.modeling_utils][INFO] - All model checkpoint weights were used when initializing BertForSequenceClassification.

- [2025-
+ [2025-07-10 00:36:21,051][transformers.modeling_utils][INFO] - All the weights of BertForSequenceClassification were initialized from the model checkpoint at kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only.
  If your task is similar to the task the model of the checkpoint was trained on, you can already use BertForSequenceClassification for predictions without further training.
- [2025-
- [2025-
- [2025-
- [2025-
- [2025-
- [2025-
- [2025-
+ [2025-07-10 00:36:21,060][transformers.training_args][INFO] - PyTorch: setting up devices
+ [2025-07-10 00:36:21,082][transformers.training_args][INFO] - The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
+ [2025-07-10 00:36:21,089][transformers.trainer][INFO] - You have loaded a model on multiple GPUs. `is_model_parallel` attribute will be force-set to `True` to avoid any unexpected behavior such as device placement mismatching.
+ [2025-07-10 00:36:21,117][transformers.trainer][INFO] - Using auto half precision backend
+ [2025-07-10 00:36:24,486][__main__][INFO] - Running inference on test dataset
+ [2025-07-10 00:36:24,487][transformers.trainer][INFO] - The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: id_prompt, reference, prompt, essay_year, id, essay_text, grades, supporting_text. If id_prompt, reference, prompt, essay_year, id, essay_text, grades, supporting_text are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
+ [2025-07-10 00:36:24,493][transformers.trainer][INFO] -
  ***** Running Prediction *****
- [2025-
- [2025-
- [2025-
- [2025-
- [2025-
- [2025-
- [2025-
- [2025-
- [2025-
- [2025-
- [2025-
+ [2025-07-10 00:36:24,493][transformers.trainer][INFO] - Num examples = 138
+ [2025-07-10 00:36:24,494][transformers.trainer][INFO] - Batch size = 16
+ [2025-07-10 00:36:25,277][__main__][INFO] - Inference results saved to jbcs2025_bert-base-portuguese-cased-encoder_classification-C3-essay_only-encoder_classification-C3-essay_only_inference_results.jsonl
+ [2025-07-10 00:36:25,278][__main__][INFO] - Computing bootstrap confidence intervals for metrics: ['QWK', 'Macro_F1', 'Weighted_F1']
+ [2025-07-10 00:38:31,831][__main__][INFO] - Bootstrap CI results saved to bootstrap_confidence_intervals.csv
+ [2025-07-10 00:38:31,832][__main__][INFO] - Bootstrap Confidence Intervals (95%):
+ [2025-07-10 00:38:31,832][__main__][INFO] - QWK: 0.3469 [0.1991, 0.4852]
+ [2025-07-10 00:38:31,832][__main__][INFO] - Macro_F1: 0.2312 [0.1641, 0.3143]
+ [2025-07-10 00:38:31,832][__main__][INFO] - Weighted_F1: 0.2679 [0.1940, 0.3454]
+ [2025-07-10 00:38:31,832][__main__][INFO] - Inference results: {'accuracy': 0.3115942028985507, 'RMSE': 51.07539184552491, 'QWK': 0.3491384731480316, 'HDIV': 0.050724637681159424, 'Macro_F1': 0.21826640792158034, 'Micro_F1': 0.3115942028985507, 'Weighted_F1': 0.26630624081898446, 'TP_0': np.int64(0), 'TN_0': np.int64(137), 'FP_0': np.int64(0), 'FN_0': np.int64(1), 'TP_1': np.int64(0), 'TN_1': np.int64(107), 'FP_1': np.int64(2), 'FN_1': np.int64(29), 'TP_2': np.int64(14), 'TN_2': np.int64(94), 'FP_2': np.int64(26), 'FN_2': np.int64(4), 'TP_3': np.int64(18), 'TN_3': np.int64(57), 'FP_3': np.int64(36), 'FN_3': np.int64(27), 'TP_4': np.int64(9), 'TN_4': np.int64(81), 'FP_4': np.int64(19), 'FN_4': np.int64(29), 'TP_5': np.int64(2), 'TN_5': np.int64(119), 'FP_5': np.int64(12), 'FN_5': np.int64(5)}
+ [2025-07-10 00:38:31,832][__main__][INFO] - Inference experiment completed
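One figure in this log deserves a gloss: accuracy, QWK and the F1 scores live on [0, 1], while RMSE is 51.08, so the squared error is evidently computed on the ENEM grade scale rather than on the 0-5 class indices. Assuming the six labels map to competency grades 0, 40, ..., 200 (an inference from the dataset, not stated in this diff), the computation would look like:

import numpy as np

def rmse_on_grades(y_true_labels, y_pred_labels, step=40):
    # Map class indices 0..5 to assumed competency grades 0..200, then take RMSE.
    t = np.asarray(y_true_labels) * step
    p = np.asarray(y_pred_labels) * step
    return float(np.sqrt(np.mean((t - p) ** 2)))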
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C4-encoder_classification-C4-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only}/.hydra/config.yaml
RENAMED
@@ -20,12 +20,12 @@ post_training_results:
  model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
  experiments:
    model:
-     name: kamel-usp/
+     name: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only
      type: encoder_classification
      num_labels: 6
-     output_dir: ./results/
-     logging_dir: ./logs/
-     best_model_dir:
+     output_dir: ./results/
+     logging_dir: ./logs/
+     best_model_dir: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only
    tokenizer:
      name: neuralmind/bert-base-portuguese-cased
    dataset:
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C4-encoder_classification-C4-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only}/.hydra/hydra.yaml
RENAMED
@@ -1,6 +1,6 @@
  hydra:
    run:
-     dir:
+     dir: inference_output/2025-07-10/00-38-36
    sweep:
      dir: multirun/${now:%Y-%m-%d}/${now:%H-%M-%S}
      subdir: ${hydra.job.num}
@@ -110,13 +110,14 @@ hydra:
    output_subdir: .hydra
    overrides:
      hydra:
+     - hydra.run.dir=inference_output/2025-07-10/00-38-36
      - hydra.mode=RUN
      task:
-     - experiments=
+     - experiments=temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only
    job:
      name: run_inference_experiment
      chdir: null
-     override_dirname: experiments=
+     override_dirname: experiments=temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only
      id: ???
      num: ???
      config_name: config
@@ -141,9 +142,9 @@ hydra:
    - path: ''
      schema: structured
      provider: schema
-   output_dir: /workspace/jbcs2025/
+   output_dir: /workspace/jbcs2025/inference_output/2025-07-10/00-38-36
    choices:
-     experiments:
+     experiments: temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only
      hydra/env: default
      hydra/callbacks: null
      hydra/job_logging: default
runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only/.hydra/overrides.yaml
ADDED
@@ -0,0 +1 @@
+ - experiments=temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C2-encoder_classification-C2-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only}/bootstrap_confidence_intervals.csv
RENAMED
@@ -1,2 +1,2 @@
  experiment_id,timestamp,QWK_mean,QWK_lower_95ci,QWK_upper_95ci,QWK_ci_width,Macro_F1_mean,Macro_F1_lower_95ci,Macro_F1_upper_95ci,Macro_F1_ci_width,Weighted_F1_mean,Weighted_F1_lower_95ci,Weighted_F1_upper_95ci,Weighted_F1_ci_width
-
+ jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only,2025-07-10 00:38:42,0.5435296340232288,0.4184589340416943,0.6551030761025943,0.2366441420609,0.36421051718441505,0.254025794056358,0.5230880773040134,0.2690622832476554,0.5514773744123801,0.46507790634693985,0.6358729541143203,0.17079504776738047
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C3-encoder_classification-C3-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only}/evaluation_results.csv
RENAMED
@@ -1,2 +1,2 @@
- accuracy,RMSE,QWK,HDIV,Macro_F1,Micro_F1,Weighted_F1,TP_0,TN_0,FP_0,FN_0,TP_1,TN_1,FP_1,FN_1,TP_2,TN_2,FP_2,FN_2,TP_3,TN_3,FP_3,FN_3,TP_4,TN_4,FP_4,FN_4,TP_5,TN_5,FP_5,FN_5,timestamp,id
- 0.
+ accuracy,RMSE,QWK,HDIV,Macro_F1,Micro_F1,Weighted_F1,TP_0,TN_0,FP_0,FN_0,TP_1,TN_1,FP_1,FN_1,TP_2,TN_2,FP_2,FN_2,TP_3,TN_3,FP_3,FN_3,TP_4,TN_4,FP_4,FN_4,TP_5,TN_5,FP_5,FN_5,timestamp,id
+ 0.5362318840579711,31.20757990421976,0.5471167369901547,0.007246376811594235,0.31768171092726494,0.5362318840579711,0.5501656251760338,0,137,0,1,0,137,0,1,7,102,27,2,47,41,21,29,16,84,8,30,4,125,8,1,2025-07-10 00:38:42,jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only
runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only/jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only_inference_results.jsonl
ADDED
The diff for this file is too large to render. See raw diff.
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C4-encoder_classification-C4-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only}/run_inference_experiment.log
RENAMED
@@ -1,5 +1,5 @@
- [2025-
- [2025-
+ [2025-07-10 00:38:42,200][__main__][INFO] - Starting inference experiment
+ [2025-07-10 00:38:42,202][__main__][INFO] - cache_dir: /tmp/
  dataset:
  name: kamel-usp/aes_enem_dataset
  split: JBCS2025
@@ -21,12 +21,12 @@ post_training_results:
  model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
  experiments:
  model:
- name: kamel-usp/
+ name: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only
  type: encoder_classification
  num_labels: 6
- output_dir: ./results/
- logging_dir: ./logs/
- best_model_dir:
+ output_dir: ./results/
+ logging_dir: ./logs/
+ best_model_dir: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only
  tokenizer:
  name: neuralmind/bert-base-portuguese-cased
  dataset:
@@ -41,9 +41,9 @@ experiments:
  gradient_accumulation_steps: 1
  gradient_checkpointing: false

- [2025-
- [2025-
- [2025-
+ [2025-07-10 00:38:42,204][__main__][INFO] - Running inference with fine-tuned HF model
+ [2025-07-10 00:38:47,581][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
+ [2025-07-10 00:38:47,582][transformers.configuration_utils][INFO] - Model config BertConfig {
  "architectures": [
  "BertForMaskedLM"
  ],
@@ -68,20 +68,14 @@ experiments:
  "pooler_size_per_head": 128,
  "pooler_type": "first_token_transform",
  "position_embedding_type": "absolute",
- "transformers_version": "4.53.
+ "transformers_version": "4.53.1",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 29794
  }

- [2025-
- [2025-
- [2025-06-30 23:57:50,350][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/added_tokens.json
- [2025-06-30 23:57:50,350][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/special_tokens_map.json
- [2025-06-30 23:57:50,350][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/tokenizer_config.json
- [2025-06-30 23:57:50,350][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
- [2025-06-30 23:57:50,350][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
- [2025-06-30 23:57:50,351][transformers.configuration_utils][INFO] - Model config BertConfig {
+ [2025-07-10 00:38:47,831][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
+ [2025-07-10 00:38:47,832][transformers.configuration_utils][INFO] - Model config BertConfig {
  "architectures": [
  "BertForMaskedLM"
  ],
@@ -106,14 +100,20 @@ experiments:
  "pooler_size_per_head": 128,
  "pooler_type": "first_token_transform",
  "position_embedding_type": "absolute",
- "transformers_version": "4.53.
+ "transformers_version": "4.53.1",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 29794
  }

- [2025-
- [2025-
+ [2025-07-10 00:38:48,390][transformers.tokenization_utils_base][INFO] - loading file vocab.txt from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/vocab.txt
+ [2025-07-10 00:38:48,390][transformers.tokenization_utils_base][INFO] - loading file tokenizer.json from cache at None
+ [2025-07-10 00:38:48,390][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/added_tokens.json
+ [2025-07-10 00:38:48,390][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/special_tokens_map.json
+ [2025-07-10 00:38:48,390][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/tokenizer_config.json
+ [2025-07-10 00:38:48,390][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
+ [2025-07-10 00:38:48,391][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
+ [2025-07-10 00:38:48,391][transformers.configuration_utils][INFO] - Model config BertConfig {
  "architectures": [
  "BertForMaskedLM"
  ],
@@ -138,18 +138,73 @@ experiments:
  "pooler_size_per_head": 128,
  "pooler_type": "first_token_transform",
  "position_embedding_type": "absolute",
- "transformers_version": "4.53.
+ "transformers_version": "4.53.1",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 29794
  }

- [2025-
- [2025-
-
-
-
-
+ [2025-07-10 00:38:48,421][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
+ [2025-07-10 00:38:48,421][transformers.configuration_utils][INFO] - Model config BertConfig {
+ "architectures": [
+ "BertForMaskedLM"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "classifier_dropout": null,
+ "directionality": "bidi",
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 768,
+ "initializer_range": 0.02,
+ "intermediate_size": 3072,
+ "layer_norm_eps": 1e-12,
+ "max_position_embeddings": 512,
+ "model_type": "bert",
+ "num_attention_heads": 12,
+ "num_hidden_layers": 12,
+ "output_past": true,
+ "pad_token_id": 0,
+ "pooler_fc_size": 768,
+ "pooler_num_attention_heads": 12,
+ "pooler_num_fc_layers": 3,
+ "pooler_size_per_head": 128,
+ "pooler_type": "first_token_transform",
+ "position_embedding_type": "absolute",
+ "transformers_version": "4.53.1",
+ "type_vocab_size": 2,
+ "use_cache": true,
+ "vocab_size": 29794
+ }
+
+ [2025-07-10 00:38:48,438][__main__][INFO] - Tokenizer function parameters- Padding:longest; Truncation: True; Use Full Context: False
+ [2025-07-10 00:38:48,876][__main__][INFO] -
+ Token statistics for 'train' split:
+ [2025-07-10 00:38:48,876][__main__][INFO] - Total examples: 500
+ [2025-07-10 00:38:48,876][__main__][INFO] - Min tokens: 512
+ [2025-07-10 00:38:48,876][__main__][INFO] - Max tokens: 512
+ [2025-07-10 00:38:48,876][__main__][INFO] - Avg tokens: 512.00
+ [2025-07-10 00:38:48,876][__main__][INFO] - Std tokens: 0.00
+ [2025-07-10 00:38:48,971][__main__][INFO] -
+ Token statistics for 'validation' split:
+ [2025-07-10 00:38:48,972][__main__][INFO] - Total examples: 132
+ [2025-07-10 00:38:48,972][__main__][INFO] - Min tokens: 512
+ [2025-07-10 00:38:48,972][__main__][INFO] - Max tokens: 512
+ [2025-07-10 00:38:48,972][__main__][INFO] - Avg tokens: 512.00
+ [2025-07-10 00:38:48,972][__main__][INFO] - Std tokens: 0.00
+ [2025-07-10 00:38:49,072][__main__][INFO] -
+ Token statistics for 'test' split:
+ [2025-07-10 00:38:49,072][__main__][INFO] - Total examples: 138
+ [2025-07-10 00:38:49,072][__main__][INFO] - Min tokens: 512
+ [2025-07-10 00:38:49,072][__main__][INFO] - Max tokens: 512
+ [2025-07-10 00:38:49,072][__main__][INFO] - Avg tokens: 512.00
+ [2025-07-10 00:38:49,072][__main__][INFO] - Std tokens: 0.00
+ [2025-07-10 00:38:49,072][__main__][INFO] - If token statistics are the same (max, avg, min) keep in mind that this is due to batched tokenization and padding.
+ [2025-07-10 00:38:49,072][__main__][INFO] - Model max length: 512. If it is the same as stats, then there is a high chance that sequences are being truncated.
+ [2025-07-10 00:38:49,073][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only
+ [2025-07-10 00:38:49,073][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only
+ [2025-07-10 00:38:50,033][__main__][INFO] - Model need ≈ 1.36 GiB to run inference and 2.58 for training
+ [2025-07-10 00:38:50,973][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--kamel-usp--jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only/snapshots/5ed8cb6c9c541d19f43a96b369ea78181d9617f0/config.json
+ [2025-07-10 00:38:50,974][transformers.configuration_utils][INFO] - Model config BertConfig {
  "architectures": [
  "BertForSequenceClassification"
  ],
@@ -190,37 +245,36 @@ experiments:
  "pooler_size_per_head": 128,
  "pooler_type": "first_token_transform",
  "position_embedding_type": "absolute",
- "problem_type": "single_label_classification",
  "torch_dtype": "float32",
- "transformers_version": "4.53.
+ "transformers_version": "4.53.1",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 29794
  }

- [2025-
- [2025-
- [2025-
- [2025-
+ [2025-07-10 00:39:00,410][transformers.modeling_utils][INFO] - loading weights file model.safetensors from cache at /tmp/models--kamel-usp--jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only/snapshots/5ed8cb6c9c541d19f43a96b369ea78181d9617f0/model.safetensors
+ [2025-07-10 00:39:00,411][transformers.modeling_utils][INFO] - Will use torch_dtype=torch.float32 as defined in model's config object
+ [2025-07-10 00:39:00,411][transformers.modeling_utils][INFO] - Instantiating BertForSequenceClassification model under default dtype torch.float32.
+ [2025-07-10 00:39:00,802][transformers.modeling_utils][INFO] - All model checkpoint weights were used when initializing BertForSequenceClassification.

- [2025-
+ [2025-07-10 00:39:00,803][transformers.modeling_utils][INFO] - All the weights of BertForSequenceClassification were initialized from the model checkpoint at kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only.
  If your task is similar to the task the model of the checkpoint was trained on, you can already use BertForSequenceClassification for predictions without further training.
- [2025-
- [2025-
- [2025-
- [2025-
- [2025-
- [2025-
- [2025-
+ [2025-07-10 00:39:00,812][transformers.training_args][INFO] - PyTorch: setting up devices
+ [2025-07-10 00:39:00,836][transformers.training_args][INFO] - The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
+ [2025-07-10 00:39:00,843][transformers.trainer][INFO] - You have loaded a model on multiple GPUs. `is_model_parallel` attribute will be force-set to `True` to avoid any unexpected behavior such as device placement mismatching.
+ [2025-07-10 00:39:00,871][transformers.trainer][INFO] - Using auto half precision backend
+ [2025-07-10 00:39:04,179][__main__][INFO] - Running inference on test dataset
+ [2025-07-10 00:39:04,180][transformers.trainer][INFO] - The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: grades, id, prompt, essay_year, reference, essay_text, supporting_text, id_prompt. If grades, id, prompt, essay_year, reference, essay_text, supporting_text, id_prompt are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
+ [2025-07-10 00:39:04,186][transformers.trainer][INFO] -
  ***** Running Prediction *****
- [2025-
- [2025-
- [2025-
- [2025-
- [2025-
- [2025-
- [2025-
- [2025-
- [2025-
- [2025-
- [2025-
+ [2025-07-10 00:39:04,186][transformers.trainer][INFO] - Num examples = 138
+ [2025-07-10 00:39:04,187][transformers.trainer][INFO] - Batch size = 16
+ [2025-07-10 00:39:05,077][__main__][INFO] - Inference results saved to jbcs2025_bert-base-portuguese-cased-encoder_classification-C4-essay_only-encoder_classification-C4-essay_only_inference_results.jsonl
+ [2025-07-10 00:39:05,078][__main__][INFO] - Computing bootstrap confidence intervals for metrics: ['QWK', 'Macro_F1', 'Weighted_F1']
+ [2025-07-10 00:41:12,570][__main__][INFO] - Bootstrap CI results saved to bootstrap_confidence_intervals.csv
+ [2025-07-10 00:41:12,570][__main__][INFO] - Bootstrap Confidence Intervals (95%):
+ [2025-07-10 00:41:12,570][__main__][INFO] - QWK: 0.5435 [0.4185, 0.6551]
+ [2025-07-10 00:41:12,570][__main__][INFO] - Macro_F1: 0.3642 [0.2540, 0.5231]
+ [2025-07-10 00:41:12,570][__main__][INFO] - Weighted_F1: 0.5515 [0.4651, 0.6359]
+ [2025-07-10 00:41:12,571][__main__][INFO] - Inference results: {'accuracy': 0.5362318840579711, 'RMSE': 31.20757990421976, 'QWK': 0.5471167369901547, 'HDIV': 0.007246376811594235, 'Macro_F1': 0.31768171092726494, 'Micro_F1': 0.5362318840579711, 'Weighted_F1': 0.5501656251760338, 'TP_0': np.int64(0), 'TN_0': np.int64(137), 'FP_0': np.int64(0), 'FN_0': np.int64(1), 'TP_1': np.int64(0), 'TN_1': np.int64(137), 'FP_1': np.int64(0), 'FN_1': np.int64(1), 'TP_2': np.int64(7), 'TN_2': np.int64(102), 'FP_2': np.int64(27), 'FN_2': np.int64(2), 'TP_3': np.int64(47), 'TN_3': np.int64(41), 'FP_3': np.int64(21), 'FN_3': np.int64(29), 'TP_4': np.int64(16), 'TN_4': np.int64(84), 'FP_4': np.int64(8), 'FN_4': np.int64(30), 'TP_5': np.int64(4), 'TN_5': np.int64(125), 'FP_5': np.int64(8), 'FN_5': np.int64(1)}
+ [2025-07-10 00:41:12,572][__main__][INFO] - Inference experiment completed
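The "Model need ≈ 1.36 GiB to run inference and 2.58 for training" line repeats verbatim across these logs, which fits a size estimate derived from the architecture rather than from measurement. A back-of-the-envelope sketch of that kind of accounting (illustrative only; how the run script actually computes it is not shown in this diff): fp32 weights cost 4 bytes per parameter and training adds at least one gradient per parameter, with the rest of the logged figures presumably budgeting activations, buffers and optimizer state.

from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "neuralmind/bert-base-portuguese-cased", num_labels=6
)
n_params = sum(p.numel() for p in model.parameters())
weights_gib = n_params * 4 / 2**30   # fp32: 4 bytes per parameter (~0.41 GiB for BERT-base)
train_gib = 2 * weights_gib          # weights + fp32 gradients, before optimizer state
print(f"{n_params / 1e6:.0f}M params: weights ≈ {weights_gib:.2f} GiB, "
      f"weights+grads ≈ {train_gib:.2f} GiB")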
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C5-encoder_classification-C5-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only}/.hydra/config.yaml
RENAMED
@@ -20,12 +20,12 @@ post_training_results:
  model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
  experiments:
    model:
-     name: kamel-usp/
+     name: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only
      type: encoder_classification
      num_labels: 6
-     output_dir: ./results/
-     logging_dir: ./logs/
-     best_model_dir:
+     output_dir: ./results/
+     logging_dir: ./logs/
+     best_model_dir: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only
    tokenizer:
      name: neuralmind/bert-base-portuguese-cased
    dataset:
runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/.hydra/hydra.yaml
ADDED
@@ -0,0 +1,157 @@
+ hydra:
+   run:
+     dir: inference_output/2025-07-10/00-41-17
+   sweep:
+     dir: multirun/${now:%Y-%m-%d}/${now:%H-%M-%S}
+     subdir: ${hydra.job.num}
+   launcher:
+     _target_: hydra._internal.core_plugins.basic_launcher.BasicLauncher
+   sweeper:
+     _target_: hydra._internal.core_plugins.basic_sweeper.BasicSweeper
+     max_batch_size: null
+     params: null
+   help:
+     app_name: ${hydra.job.name}
+     header: '${hydra.help.app_name} is powered by Hydra.
+
+       '
+     footer: 'Powered by Hydra (https://hydra.cc)
+
+       Use --hydra-help to view Hydra specific help
+
+       '
+     template: '${hydra.help.header}
+
+       == Configuration groups ==
+
+       Compose your configuration from those groups (group=option)
+
+
+       $APP_CONFIG_GROUPS
+
+
+       == Config ==
+
+       Override anything in the config (foo.bar=value)
+
+
+       $CONFIG
+
+
+       ${hydra.help.footer}
+
+       '
+   hydra_help:
+     template: 'Hydra (${hydra.runtime.version})
+
+       See https://hydra.cc for more info.
+
+
+       == Flags ==
+
+       $FLAGS_HELP
+
+
+       == Configuration groups ==
+
+       Compose your configuration from those groups (For example, append hydra/job_logging=disabled
+       to command line)
+
+
+       $HYDRA_CONFIG_GROUPS
+
+
+       Use ''--cfg hydra'' to Show the Hydra config.
+
+       '
+     hydra_help: ???
+   hydra_logging:
+     version: 1
+     formatters:
+       simple:
+         format: '[%(asctime)s][HYDRA] %(message)s'
+     handlers:
+       console:
+         class: logging.StreamHandler
+         formatter: simple
+         stream: ext://sys.stdout
+     root:
+       level: INFO
+       handlers:
+       - console
+     loggers:
+       logging_example:
+         level: DEBUG
+     disable_existing_loggers: false
+   job_logging:
+     version: 1
+     formatters:
+       simple:
+         format: '[%(asctime)s][%(name)s][%(levelname)s] - %(message)s'
+     handlers:
+       console:
+         class: logging.StreamHandler
+         formatter: simple
+         stream: ext://sys.stdout
+       file:
+         class: logging.FileHandler
+         formatter: simple
+         filename: ${hydra.runtime.output_dir}/${hydra.job.name}.log
+     root:
+       level: INFO
+       handlers:
+       - console
+       - file
+     disable_existing_loggers: false
+   env: {}
+   mode: RUN
+   searchpath: []
+   callbacks: {}
+   output_subdir: .hydra
+   overrides:
+     hydra:
+     - hydra.run.dir=inference_output/2025-07-10/00-41-17
+     - hydra.mode=RUN
+     task:
+     - experiments=temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only
+   job:
+     name: run_inference_experiment
+     chdir: null
+     override_dirname: experiments=temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only
+     id: ???
+     num: ???
+     config_name: config
+     env_set: {}
+     env_copy: []
+     config:
+       override_dirname:
+         kv_sep: '='
+         item_sep: ','
+         exclude_keys: []
+   runtime:
+     version: 1.3.2
+     version_base: '1.1'
| 2 |
+
run:
|
| 3 |
+
dir: inference_output/2025-07-10/00-41-17
|
| 4 |
+
sweep:
|
| 5 |
+
dir: multirun/${now:%Y-%m-%d}/${now:%H-%M-%S}
|
| 6 |
+
subdir: ${hydra.job.num}
|
| 7 |
+
launcher:
|
| 8 |
+
_target_: hydra._internal.core_plugins.basic_launcher.BasicLauncher
|
| 9 |
+
sweeper:
|
| 10 |
+
_target_: hydra._internal.core_plugins.basic_sweeper.BasicSweeper
|
| 11 |
+
max_batch_size: null
|
| 12 |
+
params: null
|
| 13 |
+
help:
|
| 14 |
+
app_name: ${hydra.job.name}
|
| 15 |
+
header: '${hydra.help.app_name} is powered by Hydra.
|
| 16 |
+
|
| 17 |
+
'
|
| 18 |
+
footer: 'Powered by Hydra (https://hydra.cc)
|
| 19 |
+
|
| 20 |
+
Use --hydra-help to view Hydra specific help
|
| 21 |
+
|
| 22 |
+
'
|
| 23 |
+
template: '${hydra.help.header}
|
| 24 |
+
|
| 25 |
+
== Configuration groups ==
|
| 26 |
+
|
| 27 |
+
Compose your configuration from those groups (group=option)
|
| 28 |
+
|
| 29 |
+
|
| 30 |
+
$APP_CONFIG_GROUPS
|
| 31 |
+
|
| 32 |
+
|
| 33 |
+
== Config ==
|
| 34 |
+
|
| 35 |
+
Override anything in the config (foo.bar=value)
|
| 36 |
+
|
| 37 |
+
|
| 38 |
+
$CONFIG
|
| 39 |
+
|
| 40 |
+
|
| 41 |
+
${hydra.help.footer}
|
| 42 |
+
|
| 43 |
+
'
|
| 44 |
+
hydra_help:
|
| 45 |
+
template: 'Hydra (${hydra.runtime.version})
|
| 46 |
+
|
| 47 |
+
See https://hydra.cc for more info.
|
| 48 |
+
|
| 49 |
+
|
| 50 |
+
== Flags ==
|
| 51 |
+
|
| 52 |
+
$FLAGS_HELP
|
| 53 |
+
|
| 54 |
+
|
| 55 |
+
== Configuration groups ==
|
| 56 |
+
|
| 57 |
+
Compose your configuration from those groups (For example, append hydra/job_logging=disabled
|
| 58 |
+
to command line)
|
| 59 |
+
|
| 60 |
+
|
| 61 |
+
$HYDRA_CONFIG_GROUPS
|
| 62 |
+
|
| 63 |
+
|
| 64 |
+
Use ''--cfg hydra'' to Show the Hydra config.
|
| 65 |
+
|
| 66 |
+
'
|
| 67 |
+
hydra_help: ???
|
| 68 |
+
hydra_logging:
|
| 69 |
+
version: 1
|
| 70 |
+
formatters:
|
| 71 |
+
simple:
|
| 72 |
+
format: '[%(asctime)s][HYDRA] %(message)s'
|
| 73 |
+
handlers:
|
| 74 |
+
console:
|
| 75 |
+
class: logging.StreamHandler
|
| 76 |
+
formatter: simple
|
| 77 |
+
stream: ext://sys.stdout
|
| 78 |
+
root:
|
| 79 |
+
level: INFO
|
| 80 |
+
handlers:
|
| 81 |
+
- console
|
| 82 |
+
loggers:
|
| 83 |
+
logging_example:
|
| 84 |
+
level: DEBUG
|
| 85 |
+
disable_existing_loggers: false
|
| 86 |
+
job_logging:
|
| 87 |
+
version: 1
|
| 88 |
+
formatters:
|
| 89 |
+
simple:
|
| 90 |
+
format: '[%(asctime)s][%(name)s][%(levelname)s] - %(message)s'
|
| 91 |
+
handlers:
|
| 92 |
+
console:
|
| 93 |
+
class: logging.StreamHandler
|
| 94 |
+
formatter: simple
|
| 95 |
+
stream: ext://sys.stdout
|
| 96 |
+
file:
|
| 97 |
+
class: logging.FileHandler
|
| 98 |
+
formatter: simple
|
| 99 |
+
filename: ${hydra.runtime.output_dir}/${hydra.job.name}.log
|
| 100 |
+
root:
|
| 101 |
+
level: INFO
|
| 102 |
+
handlers:
|
| 103 |
+
- console
|
| 104 |
+
- file
|
| 105 |
+
disable_existing_loggers: false
|
| 106 |
+
env: {}
|
| 107 |
+
mode: RUN
|
| 108 |
+
searchpath: []
|
| 109 |
+
callbacks: {}
|
| 110 |
+
output_subdir: .hydra
|
| 111 |
+
overrides:
|
| 112 |
+
hydra:
|
| 113 |
+
- hydra.run.dir=inference_output/2025-07-10/00-41-17
|
| 114 |
+
- hydra.mode=RUN
|
| 115 |
+
task:
|
| 116 |
+
- experiments=temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only
|
| 117 |
+
job:
|
| 118 |
+
name: run_inference_experiment
|
| 119 |
+
chdir: null
|
| 120 |
+
override_dirname: experiments=temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only
|
| 121 |
+
id: ???
|
| 122 |
+
num: ???
|
| 123 |
+
config_name: config
|
| 124 |
+
env_set: {}
|
| 125 |
+
env_copy: []
|
| 126 |
+
config:
|
| 127 |
+
override_dirname:
|
| 128 |
+
kv_sep: '='
|
| 129 |
+
item_sep: ','
|
| 130 |
+
exclude_keys: []
|
| 131 |
+
runtime:
|
| 132 |
+
version: 1.3.2
|
| 133 |
+
version_base: '1.1'
|
| 134 |
+
cwd: /workspace/jbcs2025
|
| 135 |
+
config_sources:
|
| 136 |
+
- path: hydra.conf
|
| 137 |
+
schema: pkg
|
| 138 |
+
provider: hydra
|
| 139 |
+
- path: /workspace/jbcs2025/configs
|
| 140 |
+
schema: file
|
| 141 |
+
provider: main
|
| 142 |
+
- path: ''
|
| 143 |
+
schema: structured
|
| 144 |
+
provider: schema
|
| 145 |
+
output_dir: /workspace/jbcs2025/inference_output/2025-07-10/00-41-17
|
| 146 |
+
choices:
|
| 147 |
+
experiments: temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only
|
| 148 |
+
hydra/env: default
|
| 149 |
+
hydra/callbacks: null
|
| 150 |
+
hydra/job_logging: default
|
| 151 |
+
hydra/hydra_logging: default
|
| 152 |
+
hydra/hydra_help: default
|
| 153 |
+
hydra/help: default
|
| 154 |
+
hydra/sweeper: basic
|
| 155 |
+
hydra/launcher: basic
|
| 156 |
+
hydra/output: default
|
| 157 |
+
verbose: false
|
runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/.hydra/overrides.yaml
ADDED
@@ -0,0 +1 @@
+- experiments=temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only
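The recorded overrides.yaml holds exactly the command-line choices Hydra composed this run from. A minimal sketch of replaying that composition offline with Hydra's compose API, assuming the repo's configs/ directory and the config_name: config recorded in hydra.yaml above:

```python
# Sketch: recompose the run config from the recorded override list.
# Assumes the configs/ directory and config_name recorded in hydra.yaml above.
from hydra import compose, initialize
from omegaconf import OmegaConf

with initialize(config_path="configs", version_base=None):
    cfg = compose(
        config_name="config",
        overrides=[
            "experiments=temp_inference/kamel-usp_jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only"
        ],
    )
print(OmegaConf.to_yaml(cfg))  # should match .hydra/config.yaml for this run
```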
runs/base_models/bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/bootstrap_confidence_intervals.csv
ADDED
@@ -0,0 +1,2 @@
+experiment_id,timestamp,QWK_mean,QWK_lower_95ci,QWK_upper_95ci,QWK_ci_width,Macro_F1_mean,Macro_F1_lower_95ci,Macro_F1_upper_95ci,Macro_F1_ci_width,Weighted_F1_mean,Weighted_F1_lower_95ci,Weighted_F1_upper_95ci,Weighted_F1_ci_width
+jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only,2025-07-10 00:41:22,0.6257235463885343,0.5090039348253902,0.7265976075113407,0.21759367268595053,0.3040685783009996,0.24405777738239762,0.3809607821665941,0.1369030047841965,0.3590255329781086,0.27462031936437037,0.44809835701310474,0.17347803764873437
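The four columns per metric (mean, lower/upper 95% CI, width) are consistent with a percentile bootstrap over the 138 test predictions. A sketch of that computation, not the repo's actual implementation, using the n_bootstrap: 10000 and bootstrap_seed: 42 values from the config:

```python
# Percentile-bootstrap CI sketch matching the CSV columns; a hypothetical
# stand-in for the repo's implementation, using config values
# n_bootstrap=10000 and bootstrap_seed=42.
import numpy as np
from sklearn.metrics import cohen_kappa_score, f1_score

def bootstrap_ci(y_true, y_pred, metric_fn, n_bootstrap=10000, seed=42):
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    scores = []
    for _ in range(n_bootstrap):
        # resample the test set with replacement and rescore
        idx = rng.integers(0, len(y_true), size=len(y_true))
        scores.append(metric_fn(y_true[idx], y_pred[idx]))
    lower, upper = np.percentile(scores, [2.5, 97.5])
    return float(np.mean(scores)), float(lower), float(upper), float(upper - lower)

metrics = {
    "QWK": lambda t, p: cohen_kappa_score(t, p, weights="quadratic"),
    "Macro_F1": lambda t, p: f1_score(t, p, average="macro"),
    "Weighted_F1": lambda t, p: f1_score(t, p, average="weighted"),
}
```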
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C2-encoder_classification-C2-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only}/evaluation_results.csv
RENAMED
@@ -1,2 +1,2 @@
-accuracy,RMSE,QWK,HDIV,Macro_F1,Micro_F1,Weighted_F1,TP_0,TN_0,FP_0,FN_0,TP_1,TN_1,FP_1,FN_1,TP_2,TN_2,FP_2,FN_2,TP_3,TN_3,FP_3,FN_3,TP_4,TN_4,FP_4,FN_4,TP_5,TN_5,FP_5,FN_5,timestamp,id
-0.
+accuracy,RMSE,QWK,HDIV,Macro_F1,Micro_F1,Weighted_F1,TP_0,TN_0,FP_0,FN_0,TP_1,TN_1,FP_1,FN_1,TP_2,TN_2,FP_2,FN_2,TP_3,TN_3,FP_3,FN_3,TP_4,TN_4,FP_4,FN_4,TP_5,TN_5,FP_5,FN_5,timestamp,id
+0.4057971014492754,52.53018455949943,0.629166290308012,0.07246376811594202,0.30402855742671303,0.4057971014492754,0.359115694090939,15,105,11,7,7,85,21,25,5,93,21,19,2,111,2,23,27,79,27,5,0,135,0,3,2025-07-10 00:41:22,jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only
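The wide header flattens one-vs-rest confusion counts for the six score classes next to the aggregate metrics. A sketch of how the TP_k/TN_k/FP_k/FN_k columns can be derived from a multiclass confusion matrix (per_class_counts is a hypothetical helper, not code from this repo):

```python
# Sketch: derive the per-class TP/TN/FP/FN columns of evaluation_results.csv
# from a multiclass confusion matrix (one-vs-rest counts per score class).
import numpy as np
from sklearn.metrics import confusion_matrix

def per_class_counts(y_true, y_pred, labels=range(6)):
    cm = confusion_matrix(y_true, y_pred, labels=list(labels))
    counts = {}
    for k in labels:
        tp = cm[k, k]
        fp = cm[:, k].sum() - tp   # predicted k, true class differs
        fn = cm[k, :].sum() - tp   # true k, predicted something else
        tn = cm.sum() - tp - fp - fn
        counts[f"TP_{k}"], counts[f"TN_{k}"] = int(tp), int(tn)
        counts[f"FP_{k}"], counts[f"FN_{k}"] = int(fp), int(fn)
    return counts
```

As a sanity check on the row above, TP_4 + FN_4 = 27 + 5 = 32 essays whose true class is 4, and each class's four counts sum to the 138 test essays.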
runs/base_models/{mbert/jbcs2025_mbert_base-C5-encoder_classification-C5-essay_only/jbcs2025_mbert_base-C5-encoder_classification-C5-essay_only_inference_results.jsonl → bertimbau/jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only/jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only_inference_results.jsonl}
RENAMED
The diff for this file is too large to render.
See raw diff
runs/base_models/bertimbau/{jbcs2025_bertimbau_base-C5-encoder_classification-C5-essay_only → jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only}/run_inference_experiment.log
RENAMED
@@ -1,5 +1,5 @@
-[2025-
-[2025-
+[2025-07-10 00:41:22,791][__main__][INFO] - Starting inference experiment
+[2025-07-10 00:41:22,793][__main__][INFO] - cache_dir: /tmp/
 dataset:
   name: kamel-usp/aes_enem_dataset
   split: JBCS2025
@@ -21,12 +21,12 @@ post_training_results:
   model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
 experiments:
   model:
-    name: kamel-usp/
+    name: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only
     type: encoder_classification
     num_labels: 6
-    output_dir: ./results/
-    logging_dir: ./logs/
-    best_model_dir:
+    output_dir: ./results/
+    logging_dir: ./logs/
+    best_model_dir: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only
   tokenizer:
     name: neuralmind/bert-base-portuguese-cased
   dataset:
@@ -41,9 +41,9 @@ experiments:
     gradient_accumulation_steps: 1
     gradient_checkpointing: false

-[2025-
-[2025-07-
-[2025-07-
+[2025-07-10 00:41:22,795][__main__][INFO] - Running inference with fine-tuned HF model
+[2025-07-10 00:41:27,077][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
+[2025-07-10 00:41:27,079][transformers.configuration_utils][INFO] - Model config BertConfig {
   "architectures": [
     "BertForMaskedLM"
   ],
@@ -68,20 +68,14 @@ experiments:
   "pooler_size_per_head": 128,
   "pooler_type": "first_token_transform",
   "position_embedding_type": "absolute",
-  "transformers_version": "4.53.
+  "transformers_version": "4.53.1",
   "type_vocab_size": 2,
   "use_cache": true,
   "vocab_size": 29794
 }

-[2025-07-
-[2025-07-
-[2025-07-01 00:00:00,980][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/added_tokens.json
-[2025-07-01 00:00:00,980][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/special_tokens_map.json
-[2025-07-01 00:00:00,980][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/tokenizer_config.json
-[2025-07-01 00:00:00,980][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
-[2025-07-01 00:00:00,980][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
-[2025-07-01 00:00:00,981][transformers.configuration_utils][INFO] - Model config BertConfig {
+[2025-07-10 00:41:27,284][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
+[2025-07-10 00:41:27,285][transformers.configuration_utils][INFO] - Model config BertConfig {
   "architectures": [
     "BertForMaskedLM"
   ],
@@ -106,14 +100,20 @@ experiments:
   "pooler_size_per_head": 128,
   "pooler_type": "first_token_transform",
   "position_embedding_type": "absolute",
-  "transformers_version": "4.53.
+  "transformers_version": "4.53.1",
   "type_vocab_size": 2,
   "use_cache": true,
   "vocab_size": 29794
 }

-[2025-07-
-[2025-07-
+[2025-07-10 00:41:27,487][transformers.tokenization_utils_base][INFO] - loading file vocab.txt from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/vocab.txt
+[2025-07-10 00:41:27,487][transformers.tokenization_utils_base][INFO] - loading file tokenizer.json from cache at None
+[2025-07-10 00:41:27,487][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/added_tokens.json
+[2025-07-10 00:41:27,487][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/special_tokens_map.json
+[2025-07-10 00:41:27,487][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/tokenizer_config.json
+[2025-07-10 00:41:27,487][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
+[2025-07-10 00:41:27,487][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
+[2025-07-10 00:41:27,488][transformers.configuration_utils][INFO] - Model config BertConfig {
   "architectures": [
     "BertForMaskedLM"
   ],
@@ -138,18 +138,73 @@ experiments:
   "pooler_size_per_head": 128,
   "pooler_type": "first_token_transform",
   "position_embedding_type": "absolute",
-  "transformers_version": "4.53.
+  "transformers_version": "4.53.1",
   "type_vocab_size": 2,
   "use_cache": true,
   "vocab_size": 29794
 }

-[2025-07-
-[2025-07-
-
-
-
-
+[2025-07-10 00:41:27,520][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--neuralmind--bert-base-portuguese-cased/snapshots/94d69c95f98f7d5b2a8700c420230ae10def0baa/config.json
+[2025-07-10 00:41:27,520][transformers.configuration_utils][INFO] - Model config BertConfig {
+  "architectures": [
+    "BertForMaskedLM"
+  ],
+  "attention_probs_dropout_prob": 0.1,
+  "classifier_dropout": null,
+  "directionality": "bidi",
+  "hidden_act": "gelu",
+  "hidden_dropout_prob": 0.1,
+  "hidden_size": 768,
+  "initializer_range": 0.02,
+  "intermediate_size": 3072,
+  "layer_norm_eps": 1e-12,
+  "max_position_embeddings": 512,
+  "model_type": "bert",
+  "num_attention_heads": 12,
+  "num_hidden_layers": 12,
+  "output_past": true,
+  "pad_token_id": 0,
+  "pooler_fc_size": 768,
+  "pooler_num_attention_heads": 12,
+  "pooler_num_fc_layers": 3,
+  "pooler_size_per_head": 128,
+  "pooler_type": "first_token_transform",
+  "position_embedding_type": "absolute",
+  "transformers_version": "4.53.1",
+  "type_vocab_size": 2,
+  "use_cache": true,
+  "vocab_size": 29794
+}
+
+[2025-07-10 00:41:27,539][__main__][INFO] - Tokenizer function parameters- Padding:longest; Truncation: True; Use Full Context: False
+[2025-07-10 00:41:27,963][__main__][INFO] -
+Token statistics for 'train' split:
+[2025-07-10 00:41:27,964][__main__][INFO] - Total examples: 500
+[2025-07-10 00:41:27,964][__main__][INFO] - Min tokens: 512
+[2025-07-10 00:41:27,964][__main__][INFO] - Max tokens: 512
+[2025-07-10 00:41:27,964][__main__][INFO] - Avg tokens: 512.00
+[2025-07-10 00:41:27,964][__main__][INFO] - Std tokens: 0.00
+[2025-07-10 00:41:28,061][__main__][INFO] -
+Token statistics for 'validation' split:
+[2025-07-10 00:41:28,061][__main__][INFO] - Total examples: 132
+[2025-07-10 00:41:28,062][__main__][INFO] - Min tokens: 512
+[2025-07-10 00:41:28,062][__main__][INFO] - Max tokens: 512
+[2025-07-10 00:41:28,062][__main__][INFO] - Avg tokens: 512.00
+[2025-07-10 00:41:28,062][__main__][INFO] - Std tokens: 0.00
+[2025-07-10 00:41:28,162][__main__][INFO] -
+Token statistics for 'test' split:
+[2025-07-10 00:41:28,162][__main__][INFO] - Total examples: 138
+[2025-07-10 00:41:28,162][__main__][INFO] - Min tokens: 512
+[2025-07-10 00:41:28,162][__main__][INFO] - Max tokens: 512
+[2025-07-10 00:41:28,162][__main__][INFO] - Avg tokens: 512.00
+[2025-07-10 00:41:28,162][__main__][INFO] - Std tokens: 0.00
+[2025-07-10 00:41:28,162][__main__][INFO] - If token statistics are the same (max, avg, min) keep in mind that this is due to batched tokenization and padding.
+[2025-07-10 00:41:28,162][__main__][INFO] - Model max length: 512. If it is the same as stats, then there is a high chance that sequences are being truncated.
+[2025-07-10 00:41:28,163][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only
+[2025-07-10 00:41:28,163][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only
+[2025-07-10 00:41:29,268][__main__][INFO] - Model need ≈ 1.36 GiB to run inference and 2.58 for training
+[2025-07-10 00:41:30,165][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--kamel-usp--jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only/snapshots/15a44128caf634f8d6327daaa8b803cd4b8339f8/config.json
+[2025-07-10 00:41:30,165][transformers.configuration_utils][INFO] - Model config BertConfig {
   "architectures": [
     "BertForSequenceClassification"
   ],
@@ -190,37 +245,36 @@ experiments:
   "pooler_size_per_head": 128,
   "pooler_type": "first_token_transform",
   "position_embedding_type": "absolute",
-  "problem_type": "single_label_classification",
   "torch_dtype": "float32",
-  "transformers_version": "4.53.
+  "transformers_version": "4.53.1",
   "type_vocab_size": 2,
   "use_cache": true,
   "vocab_size": 29794
 }

-[2025-07-
-[2025-07-
-[2025-07-
-[2025-07-
+[2025-07-10 00:41:38,557][transformers.modeling_utils][INFO] - loading weights file model.safetensors from cache at /tmp/models--kamel-usp--jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only/snapshots/15a44128caf634f8d6327daaa8b803cd4b8339f8/model.safetensors
+[2025-07-10 00:41:38,559][transformers.modeling_utils][INFO] - Will use torch_dtype=torch.float32 as defined in model's config object
+[2025-07-10 00:41:38,559][transformers.modeling_utils][INFO] - Instantiating BertForSequenceClassification model under default dtype torch.float32.
+[2025-07-10 00:41:38,941][transformers.modeling_utils][INFO] - All model checkpoint weights were used when initializing BertForSequenceClassification.

-[2025-07-
+[2025-07-10 00:41:38,942][transformers.modeling_utils][INFO] - All the weights of BertForSequenceClassification were initialized from the model checkpoint at kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only.
 If your task is similar to the task the model of the checkpoint was trained on, you can already use BertForSequenceClassification for predictions without further training.
-[2025-07-
-[2025-07-
-[2025-07-
-[2025-07-
-[2025-07-
-[2025-07-
-[2025-07-
+[2025-07-10 00:41:38,951][transformers.training_args][INFO] - PyTorch: setting up devices
+[2025-07-10 00:41:38,976][transformers.training_args][INFO] - The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
+[2025-07-10 00:41:38,985][transformers.trainer][INFO] - You have loaded a model on multiple GPUs. `is_model_parallel` attribute will be force-set to `True` to avoid any unexpected behavior such as device placement mismatching.
+[2025-07-10 00:41:39,013][transformers.trainer][INFO] - Using auto half precision backend
+[2025-07-10 00:41:42,342][__main__][INFO] - Running inference on test dataset
+[2025-07-10 00:41:42,344][transformers.trainer][INFO] - The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: id, id_prompt, supporting_text, grades, essay_text, prompt, reference, essay_year. If id, id_prompt, supporting_text, grades, essay_text, prompt, reference, essay_year are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
+[2025-07-10 00:41:42,350][transformers.trainer][INFO] -
 ***** Running Prediction *****
-[2025-07-
-[2025-07-
-[2025-07-
-[2025-07-
-[2025-07-
-[2025-07-
-[2025-07-
-[2025-07-
-[2025-07-
-[2025-07-
-[2025-07-
+[2025-07-10 00:41:42,350][transformers.trainer][INFO] - Num examples = 138
+[2025-07-10 00:41:42,350][transformers.trainer][INFO] - Batch size = 16
+[2025-07-10 00:41:43,340][__main__][INFO] - Inference results saved to jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only-encoder_classification-C5-essay_only_inference_results.jsonl
+[2025-07-10 00:41:43,341][__main__][INFO] - Computing bootstrap confidence intervals for metrics: ['QWK', 'Macro_F1', 'Weighted_F1']
+[2025-07-10 00:43:47,972][__main__][INFO] - Bootstrap CI results saved to bootstrap_confidence_intervals.csv
+[2025-07-10 00:43:47,972][__main__][INFO] - Bootstrap Confidence Intervals (95%):
+[2025-07-10 00:43:47,972][__main__][INFO] - QWK: 0.6257 [0.5090, 0.7266]
+[2025-07-10 00:43:47,972][__main__][INFO] - Macro_F1: 0.3041 [0.2441, 0.3810]
+[2025-07-10 00:43:47,972][__main__][INFO] - Weighted_F1: 0.3590 [0.2746, 0.4481]
+[2025-07-10 00:43:47,972][__main__][INFO] - Inference results: {'accuracy': 0.4057971014492754, 'RMSE': 52.53018455949943, 'QWK': 0.629166290308012, 'HDIV': 0.07246376811594202, 'Macro_F1': 0.30402855742671303, 'Micro_F1': 0.4057971014492754, 'Weighted_F1': 0.359115694090939, 'TP_0': np.int64(15), 'TN_0': np.int64(105), 'FP_0': np.int64(11), 'FN_0': np.int64(7), 'TP_1': np.int64(7), 'TN_1': np.int64(85), 'FP_1': np.int64(21), 'FN_1': np.int64(25), 'TP_2': np.int64(5), 'TN_2': np.int64(93), 'FP_2': np.int64(21), 'FN_2': np.int64(19), 'TP_3': np.int64(2), 'TN_3': np.int64(111), 'FP_3': np.int64(2), 'FN_3': np.int64(23), 'TP_4': np.int64(27), 'TN_4': np.int64(79), 'FP_4': np.int64(27), 'FN_4': np.int64(5), 'TP_5': np.int64(0), 'TN_5': np.int64(135), 'FP_5': np.int64(0), 'FN_5': np.int64(3)}
+[2025-07-10 00:43:47,972][__main__][INFO] - Inference experiment completed
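The new log documents the whole inference path: tokenizer from neuralmind/bert-base-portuguese-cased, a BertForSequenceClassification checkpoint from the kamel-usp hub repo, and Trainer.predict over the 138-example test split at batch size 16. A condensed sketch of that flow under those assumptions (the actual script additionally wires Hydra, logging, and the metric/bootstrap computation):

```python
# Condensed sketch of the inference flow visible in the log above; dataset,
# column, and model names are taken from the config and log, the rest is glue.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "kamel-usp/jbcs2025_bert-base-portuguese-cased-encoder_classification-C5-essay_only"
tokenizer = AutoTokenizer.from_pretrained("neuralmind/bert-base-portuguese-cased", cache_dir="/tmp/")
model = AutoModelForSequenceClassification.from_pretrained(model_id, cache_dir="/tmp/")

data = load_dataset("kamel-usp/aes_enem_dataset", "JBCS2025", cache_dir="/tmp/")
test = data["test"].map(
    lambda batch: tokenizer(batch["essay_text"], truncation=True, padding="longest"),
    batched=True,  # batched tokenization + padding explain the uniform 512-token stats
)

args = TrainingArguments(output_dir="./results/", per_device_eval_batch_size=16)
predictions = Trainer(model=model, args=args).predict(test)
pred_classes = predictions.predictions.argmax(axis=-1)  # 0..5 score classes
```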
runs/base_models/bertimbau/jbcs2025_bertimbau_base-C1-encoder_classification-C1-essay_only/.hydra/overrides.yaml
DELETED
@@ -1 +0,0 @@
-- experiments=base_models/C1

runs/base_models/bertimbau/jbcs2025_bertimbau_base-C2-encoder_classification-C2-essay_only/.hydra/overrides.yaml
DELETED
@@ -1 +0,0 @@
-- experiments=base_models/C2

runs/base_models/bertimbau/jbcs2025_bertimbau_base-C3-encoder_classification-C3-essay_only/.hydra/overrides.yaml
DELETED
@@ -1 +0,0 @@
-- experiments=base_models/C3

runs/base_models/bertimbau/jbcs2025_bertimbau_base-C4-encoder_classification-C4-essay_only/.hydra/overrides.yaml
DELETED
@@ -1 +0,0 @@
-- experiments=base_models/C4
runs/base_models/bertimbau/jbcs2025_bertimbau_base-C5-encoder_classification-C5-essay_only/.hydra/hydra.yaml
DELETED
@@ -1,156 +0,0 @@
-hydra:
-  run:
-    dir: outputs/${now:%Y-%m-%d}/${now:%H-%M-%S}
-  sweep:
-    dir: multirun/${now:%Y-%m-%d}/${now:%H-%M-%S}
-    subdir: ${hydra.job.num}
-  launcher:
-    _target_: hydra._internal.core_plugins.basic_launcher.BasicLauncher
-  sweeper:
-    _target_: hydra._internal.core_plugins.basic_sweeper.BasicSweeper
-    max_batch_size: null
-    params: null
-  help:
-    app_name: ${hydra.job.name}
-    header: '${hydra.help.app_name} is powered by Hydra.
-
-      '
-    footer: 'Powered by Hydra (https://hydra.cc)
-
-      Use --hydra-help to view Hydra specific help
-
-      '
-    template: '${hydra.help.header}
-
-      == Configuration groups ==
-
-      Compose your configuration from those groups (group=option)
-
-
-      $APP_CONFIG_GROUPS
-
-
-      == Config ==
-
-      Override anything in the config (foo.bar=value)
-
-
-      $CONFIG
-
-
-      ${hydra.help.footer}
-
-      '
-  hydra_help:
-    template: 'Hydra (${hydra.runtime.version})
-
-      See https://hydra.cc for more info.
-
-
-      == Flags ==
-
-      $FLAGS_HELP
-
-
-      == Configuration groups ==
-
-      Compose your configuration from those groups (For example, append hydra/job_logging=disabled
-
-      to command line)
-
-
-      $HYDRA_CONFIG_GROUPS
-
-
-      Use ''--cfg hydra'' to Show the Hydra config.
-
-      '
-    hydra_help: ???
-  hydra_logging:
-    version: 1
-    formatters:
-      simple:
-        format: '[%(asctime)s][HYDRA] %(message)s'
-    handlers:
-      console:
-        class: logging.StreamHandler
-        formatter: simple
-        stream: ext://sys.stdout
-    root:
-      level: INFO
-      handlers:
-      - console
-    loggers:
-      logging_example:
-        level: DEBUG
-    disable_existing_loggers: false
-  job_logging:
-    version: 1
-    formatters:
-      simple:
-        format: '[%(asctime)s][%(name)s][%(levelname)s] - %(message)s'
-    handlers:
-      console:
-        class: logging.StreamHandler
-        formatter: simple
-        stream: ext://sys.stdout
-      file:
-        class: logging.FileHandler
-        formatter: simple
-        filename: ${hydra.runtime.output_dir}/${hydra.job.name}.log
-    root:
-      level: INFO
-      handlers:
-      - console
-      - file
-    disable_existing_loggers: false
-  env: {}
-  mode: RUN
-  searchpath: []
-  callbacks: {}
-  output_subdir: .hydra
-  overrides:
-    hydra:
-    - hydra.mode=RUN
-    task:
-    - experiments=base_models/C5
-  job:
-    name: run_inference_experiment
-    chdir: null
-    override_dirname: experiments=base_models/C5
-    id: ???
-    num: ???
-    config_name: config
-    env_set: {}
-    env_copy: []
-    config:
-      override_dirname:
-        kv_sep: '='
-        item_sep: ','
-        exclude_keys: []
-  runtime:
-    version: 1.3.2
-    version_base: '1.1'
-    cwd: /workspace/jbcs2025
-    config_sources:
-    - path: hydra.conf
-      schema: pkg
-      provider: hydra
-    - path: /workspace/jbcs2025/configs
-      schema: file
-      provider: main
-    - path: ''
-      schema: structured
-      provider: schema
-    output_dir: /workspace/jbcs2025/outputs/2025-06-30/23-59-55
-    choices:
-      experiments: base_models/C5
-      hydra/env: default
-      hydra/callbacks: null
-      hydra/job_logging: default
-      hydra/hydra_logging: default
-      hydra/hydra_help: default
-      hydra/help: default
-      hydra/sweeper: basic
-      hydra/launcher: basic
-      hydra/output: default
-  verbose: false
runs/base_models/bertimbau/jbcs2025_bertimbau_base-C5-encoder_classification-C5-essay_only/.hydra/overrides.yaml
DELETED
@@ -1 +0,0 @@
-- experiments=base_models/C5

runs/base_models/bertimbau/jbcs2025_bertimbau_base-C5-encoder_classification-C5-essay_only/bootstrap_confidence_intervals.csv
DELETED
@@ -1,2 +0,0 @@
-experiment_id,timestamp,QWK_mean,QWK_lower_95ci,QWK_upper_95ci,QWK_ci_width,Macro_F1_mean,Macro_F1_lower_95ci,Macro_F1_upper_95ci,Macro_F1_ci_width,Weighted_F1_mean,Weighted_F1_lower_95ci,Weighted_F1_upper_95ci,Weighted_F1_ci_width
-jbcs2025_bertimbau_base-C5-encoder_classification-C5-essay_only,2025-06-30 23:59:55,0.47349799901126716,0.3401973117894254,0.5947975929869902,0.2546002811975648,0.20469588256838514,0.14697576658446224,0.27274642041824704,0.1257706538337848,0.25750931482031114,0.18034272476682853,0.33952288243091566,0.15918015766408714

runs/base_models/bertimbau/jbcs2025_bertimbau_base-C5-encoder_classification-C5-essay_only/evaluation_results.csv
DELETED
@@ -1,2 +0,0 @@
-accuracy,RMSE,QWK,HDIV,Macro_F1,Micro_F1,Weighted_F1,TP_0,TN_0,FP_0,FN_0,TP_1,TN_1,FP_1,FN_1,TP_2,TN_2,FP_2,FN_2,TP_3,TN_3,FP_3,FN_3,TP_4,TN_4,FP_4,FN_4,TP_5,TN_5,FP_5,FN_5,timestamp,id
-0.3188405797101449,61.2904702146299,0.476219483623073,0.13043478260869568,0.2055897809038726,0.3188405797101449,0.25808413038205613,3,113,3,19,9,71,35,23,3,103,11,21,1,108,5,24,28,66,40,4,0,135,0,3,2025-06-30 23:59:55,jbcs2025_bertimbau_base-C5-encoder_classification-C5-essay_only
runs/base_models/bertugues/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/.hydra/config.yaml
ADDED
@@ -0,0 +1,41 @@
+cache_dir: /tmp/
+dataset:
+  name: kamel-usp/aes_enem_dataset
+  split: JBCS2025
+training_params:
+  seed: 42
+  num_train_epochs: 20
+  logging_steps: 100
+  metric_for_best_model: QWK
+  bf16: true
+bootstrap:
+  enabled: true
+  n_bootstrap: 10000
+  bootstrap_seed: 42
+  metrics:
+  - QWK
+  - Macro_F1
+  - Weighted_F1
+post_training_results:
+  model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
+experiments:
+  model:
+    name: kamel-usp/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only
+    type: encoder_classification
+    num_labels: 6
+    output_dir: ./results/
+    logging_dir: ./logs/
+    best_model_dir: kamel-usp/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only
+  tokenizer:
+    name: ricardoz/BERTugues-base-portuguese-cased
+  dataset:
+    grade_index: 0
+    use_full_context: false
+  training_params:
+    weight_decay: 0.01
+    warmup_ratio: 0.1
+    learning_rate: 5.0e-05
+    train_batch_size: 16
+    eval_batch_size: 16
+    gradient_accumulation_steps: 1
+    gradient_checkpointing: false
runs/base_models/bertugues/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/.hydra/hydra.yaml
ADDED
@@ -0,0 +1,157 @@
+hydra:
+  run:
+    dir: inference_output/2025-07-10/00-57-36
+  sweep:
+    dir: multirun/${now:%Y-%m-%d}/${now:%H-%M-%S}
+    subdir: ${hydra.job.num}
+  launcher:
+    _target_: hydra._internal.core_plugins.basic_launcher.BasicLauncher
+  sweeper:
+    _target_: hydra._internal.core_plugins.basic_sweeper.BasicSweeper
+    max_batch_size: null
+    params: null
+  help:
+    app_name: ${hydra.job.name}
+    header: '${hydra.help.app_name} is powered by Hydra.
+
+      '
+    footer: 'Powered by Hydra (https://hydra.cc)
+
+      Use --hydra-help to view Hydra specific help
+
+      '
+    template: '${hydra.help.header}
+
+      == Configuration groups ==
+
+      Compose your configuration from those groups (group=option)
+
+
+      $APP_CONFIG_GROUPS
+
+
+      == Config ==
+
+      Override anything in the config (foo.bar=value)
+
+
+      $CONFIG
+
+
+      ${hydra.help.footer}
+
+      '
+  hydra_help:
+    template: 'Hydra (${hydra.runtime.version})
+
+      See https://hydra.cc for more info.
+
+
+      == Flags ==
+
+      $FLAGS_HELP
+
+
+      == Configuration groups ==
+
+      Compose your configuration from those groups (For example, append hydra/job_logging=disabled
+
+      to command line)
+
+
+      $HYDRA_CONFIG_GROUPS
+
+
+      Use ''--cfg hydra'' to Show the Hydra config.
+
+      '
+    hydra_help: ???
+  hydra_logging:
+    version: 1
+    formatters:
+      simple:
+        format: '[%(asctime)s][HYDRA] %(message)s'
+    handlers:
+      console:
+        class: logging.StreamHandler
+        formatter: simple
+        stream: ext://sys.stdout
+    root:
+      level: INFO
+      handlers:
+      - console
+    loggers:
+      logging_example:
+        level: DEBUG
+    disable_existing_loggers: false
+  job_logging:
+    version: 1
+    formatters:
+      simple:
+        format: '[%(asctime)s][%(name)s][%(levelname)s] - %(message)s'
+    handlers:
+      console:
+        class: logging.StreamHandler
+        formatter: simple
+        stream: ext://sys.stdout
+      file:
+        class: logging.FileHandler
+        formatter: simple
+        filename: ${hydra.runtime.output_dir}/${hydra.job.name}.log
+    root:
+      level: INFO
+      handlers:
+      - console
+      - file
+    disable_existing_loggers: false
+  env: {}
+  mode: RUN
+  searchpath: []
+  callbacks: {}
+  output_subdir: .hydra
+  overrides:
+    hydra:
+    - hydra.run.dir=inference_output/2025-07-10/00-57-36
+    - hydra.mode=RUN
+    task:
+    - experiments=temp_inference/kamel-usp_jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only
+  job:
+    name: run_inference_experiment
+    chdir: null
+    override_dirname: experiments=temp_inference/kamel-usp_jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only
+    id: ???
+    num: ???
+    config_name: config
+    env_set: {}
+    env_copy: []
+    config:
+      override_dirname:
+        kv_sep: '='
+        item_sep: ','
+        exclude_keys: []
+  runtime:
+    version: 1.3.2
+    version_base: '1.1'
+    cwd: /workspace/jbcs2025
+    config_sources:
+    - path: hydra.conf
+      schema: pkg
+      provider: hydra
+    - path: /workspace/jbcs2025/configs
+      schema: file
+      provider: main
+    - path: ''
+      schema: structured
+      provider: schema
+    output_dir: /workspace/jbcs2025/inference_output/2025-07-10/00-57-36
+    choices:
+      experiments: temp_inference/kamel-usp_jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only
+      hydra/env: default
+      hydra/callbacks: null
+      hydra/job_logging: default
+      hydra/hydra_logging: default
+      hydra/hydra_help: default
+      hydra/help: default
+      hydra/sweeper: basic
+      hydra/launcher: basic
+      hydra/output: default
+  verbose: false
runs/base_models/bertugues/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/.hydra/overrides.yaml
ADDED
@@ -0,0 +1 @@
+- experiments=temp_inference/kamel-usp_jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only
runs/base_models/bertugues/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/bootstrap_confidence_intervals.csv
ADDED
@@ -0,0 +1,2 @@
+experiment_id,timestamp,QWK_mean,QWK_lower_95ci,QWK_upper_95ci,QWK_ci_width,Macro_F1_mean,Macro_F1_lower_95ci,Macro_F1_upper_95ci,Macro_F1_ci_width,Weighted_F1_mean,Weighted_F1_lower_95ci,Weighted_F1_upper_95ci,Weighted_F1_ci_width
+jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only,2025-07-10 00:57:41,0.6118239365325281,0.5155564145265364,0.7026110342235472,0.18705461969701076,0.41828021220630307,0.3165853797061227,0.5526803717911853,0.23609499208506263,0.5627013765718267,0.4791976087167935,0.6438201797903147,0.16462257107352118
runs/base_models/bertugues/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/evaluation_results.csv
ADDED
@@ -0,0 +1,2 @@
+accuracy,RMSE,QWK,HDIV,Macro_F1,Micro_F1,Weighted_F1,TP_0,TN_0,FP_0,FN_0,TP_1,TN_1,FP_1,FN_1,TP_2,TN_2,FP_2,FN_2,TP_3,TN_3,FP_3,FN_3,TP_4,TN_4,FP_4,FN_4,TP_5,TN_5,FP_5,FN_5,timestamp,id
+0.5434782608695652,30.45547950507524,0.6139860139860139,0.007246376811594235,0.38733766233766237,0.5434782608695652,0.5620203065855239,0,137,0,1,0,138,0,0,7,109,19,3,35,61,11,31,28,67,20,23,5,115,13,5,2025-07-10 00:57:41,jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only
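In these tables QWK is quadratic-weighted Cohen's kappa over the six score classes, while the RMSE magnitudes (30.46 here, 52.53 for the C5 run above) suggest it is computed on the 0-200 ENEM grade scale rather than on class indices; the class-to-grade mapping appears as id2label in the run log that follows. A sketch of both, under that reading:

```python
# Sketch of the QWK and grade-scale RMSE columns; ID2GRADE mirrors the
# id2label mapping (0, 40, ..., 200) recorded in the run log below. The
# grade-scale reading of RMSE is an inference from the reported magnitudes.
import numpy as np
from sklearn.metrics import cohen_kappa_score

ID2GRADE = {0: 0, 1: 40, 2: 80, 3: 120, 4: 160, 5: 200}

def qwk(y_true, y_pred):
    # Quadratic weights penalise large ordinal disagreements more heavily.
    return cohen_kappa_score(y_true, y_pred, weights="quadratic")

def grade_rmse(y_true, y_pred):
    t = np.array([ID2GRADE[c] for c in y_true], dtype=float)
    p = np.array([ID2GRADE[c] for c in y_pred], dtype=float)
    return float(np.sqrt(np.mean((t - p) ** 2)))
```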
runs/base_models/{bertimbau/jbcs2025_bertimbau_base-C1-encoder_classification-C1-essay_only/jbcs2025_bertimbau_base-C1-encoder_classification-C1-essay_only_inference_results.jsonl → bertugues/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only_inference_results.jsonl}
RENAMED
The diff for this file is too large to render.
See raw diff
runs/base_models/bertugues/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only/run_inference_experiment.log
ADDED
|
@@ -0,0 +1,250 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
[2025-07-10 00:57:41,959][__main__][INFO] - Starting inference experiment
|
| 2 |
+
[2025-07-10 00:57:41,961][__main__][INFO] - cache_dir: /tmp/
|
| 3 |
+
dataset:
|
| 4 |
+
name: kamel-usp/aes_enem_dataset
|
| 5 |
+
split: JBCS2025
|
| 6 |
+
training_params:
|
| 7 |
+
seed: 42
|
| 8 |
+
num_train_epochs: 20
|
| 9 |
+
logging_steps: 100
|
| 10 |
+
metric_for_best_model: QWK
|
| 11 |
+
bf16: true
|
| 12 |
+
bootstrap:
|
| 13 |
+
enabled: true
|
| 14 |
+
n_bootstrap: 10000
|
| 15 |
+
bootstrap_seed: 42
|
| 16 |
+
metrics:
|
| 17 |
+
- QWK
|
| 18 |
+
- Macro_F1
|
| 19 |
+
- Weighted_F1
|
| 20 |
+
post_training_results:
|
| 21 |
+
model_path: /workspace/jbcs2025/outputs/2025-03-24/20-42-59
|
| 22 |
+
experiments:
|
| 23 |
+
model:
|
| 24 |
+
name: kamel-usp/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only
|
| 25 |
+
type: encoder_classification
|
| 26 |
+
num_labels: 6
|
| 27 |
+
output_dir: ./results/
|
| 28 |
+
logging_dir: ./logs/
|
| 29 |
+
best_model_dir: kamel-usp/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only
|
| 30 |
+
tokenizer:
|
| 31 |
+
name: ricardoz/BERTugues-base-portuguese-cased
|
| 32 |
+
dataset:
|
| 33 |
+
grade_index: 0
|
| 34 |
+
use_full_context: false
|
| 35 |
+
training_params:
|
| 36 |
+
weight_decay: 0.01
|
| 37 |
+
warmup_ratio: 0.1
|
| 38 |
+
learning_rate: 5.0e-05
|
| 39 |
+
train_batch_size: 16
|
| 40 |
+
eval_batch_size: 16
|
| 41 |
+
gradient_accumulation_steps: 1
|
| 42 |
+
gradient_checkpointing: false
|
| 43 |
+
|
| 44 |
+
[2025-07-10 00:57:41,963][__main__][INFO] - Running inference with fine-tuned HF model
|
| 45 |
+
[2025-07-10 00:57:47,898][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--ricardoz--BERTugues-base-portuguese-cased/snapshots/76022866e209716d673e144cc9186f7b20830967/config.json
|
| 46 |
+
[2025-07-10 00:57:47,899][transformers.configuration_utils][INFO] - Model config BertConfig {
|
| 47 |
+
"architectures": [
|
| 48 |
+
"BertForPreTraining"
|
| 49 |
+
],
|
| 50 |
+
"attention_probs_dropout_prob": 0.1,
|
| 51 |
+
"classifier_dropout": null,
|
| 52 |
+
"hidden_act": "gelu",
|
| 53 |
+
"hidden_dropout_prob": 0.1,
|
| 54 |
+
"hidden_size": 768,
|
| 55 |
+
"initializer_range": 0.02,
|
| 56 |
+
"intermediate_size": 3072,
|
| 57 |
+
"layer_norm_eps": 1e-12,
|
| 58 |
+
"max_position_embeddings": 512,
|
| 59 |
+
"model_type": "bert",
|
| 60 |
+
"num_attention_heads": 12,
|
| 61 |
+
"num_hidden_layers": 12,
|
| 62 |
+
"pad_token_id": 0,
|
| 63 |
+
"position_embedding_type": "absolute",
|
| 64 |
+
"torch_dtype": "float32",
|
| 65 |
+
"transformers_version": "4.53.1",
|
| 66 |
+
"type_vocab_size": 2,
|
| 67 |
+
"use_cache": true,
|
| 68 |
+
"vocab_size": 30522
|
| 69 |
+
}
|
| 70 |
+
|
| 71 |
+
[2025-07-10 00:57:48,108][transformers.models.auto.tokenization_auto][INFO] - Could not locate the tokenizer configuration file, will try to use the model config instead.
|
| 72 |
+
[2025-07-10 00:57:48,331][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--ricardoz--BERTugues-base-portuguese-cased/snapshots/76022866e209716d673e144cc9186f7b20830967/config.json
|
| 73 |
+
[2025-07-10 00:57:48,332][transformers.configuration_utils][INFO] - Model config BertConfig {
|
| 74 |
+
"architectures": [
|
| 75 |
+
"BertForPreTraining"
|
| 76 |
+
],
|
| 77 |
+
"attention_probs_dropout_prob": 0.1,
|
| 78 |
+
"classifier_dropout": null,
|
| 79 |
+
"hidden_act": "gelu",
|
| 80 |
+
"hidden_dropout_prob": 0.1,
|
| 81 |
+
"hidden_size": 768,
|
| 82 |
+
"initializer_range": 0.02,
|
| 83 |
+
"intermediate_size": 3072,
|
| 84 |
+
"layer_norm_eps": 1e-12,
|
| 85 |
+
"max_position_embeddings": 512,
|
| 86 |
+
"model_type": "bert",
|
| 87 |
+
"num_attention_heads": 12,
|
| 88 |
+
"num_hidden_layers": 12,
|
| 89 |
+
"pad_token_id": 0,
|
| 90 |
+
"position_embedding_type": "absolute",
|
| 91 |
+
"torch_dtype": "float32",
|
| 92 |
+
"transformers_version": "4.53.1",
|
| 93 |
+
"type_vocab_size": 2,
|
| 94 |
+
"use_cache": true,
|
| 95 |
+
"vocab_size": 30522
|
| 96 |
+
}
|
| 97 |
+
|
| 98 |
+
[2025-07-10 00:57:48,944][transformers.tokenization_utils_base][INFO] - loading file vocab.txt from cache at /tmp/models--ricardoz--BERTugues-base-portuguese-cased/snapshots/76022866e209716d673e144cc9186f7b20830967/vocab.txt
|
| 99 |
+
[2025-07-10 00:57:48,944][transformers.tokenization_utils_base][INFO] - loading file tokenizer.json from cache at None
|
| 100 |
+
[2025-07-10 00:57:48,944][transformers.tokenization_utils_base][INFO] - loading file added_tokens.json from cache at None
|
| 101 |
+
[2025-07-10 00:57:48,944][transformers.tokenization_utils_base][INFO] - loading file special_tokens_map.json from cache at None
|
| 102 |
+
[2025-07-10 00:57:48,944][transformers.tokenization_utils_base][INFO] - loading file tokenizer_config.json from cache at None
|
| 103 |
+
[2025-07-10 00:57:48,944][transformers.tokenization_utils_base][INFO] - loading file chat_template.jinja from cache at None
|
| 104 |
+
[2025-07-10 00:57:48,944][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--ricardoz--BERTugues-base-portuguese-cased/snapshots/76022866e209716d673e144cc9186f7b20830967/config.json
|
| 105 |
+
[2025-07-10 00:57:48,945][transformers.configuration_utils][INFO] - Model config BertConfig {
|
| 106 |
+
"architectures": [
|
| 107 |
+
"BertForPreTraining"
|
| 108 |
+
],
|
| 109 |
+
"attention_probs_dropout_prob": 0.1,
|
| 110 |
+
"classifier_dropout": null,
|
| 111 |
+
"hidden_act": "gelu",
|
| 112 |
+
"hidden_dropout_prob": 0.1,
|
| 113 |
+
"hidden_size": 768,
|
| 114 |
+
"initializer_range": 0.02,
|
| 115 |
+
"intermediate_size": 3072,
|
| 116 |
+
"layer_norm_eps": 1e-12,
|
| 117 |
+
"max_position_embeddings": 512,
|
| 118 |
+
"model_type": "bert",
|
| 119 |
+
"num_attention_heads": 12,
|
| 120 |
+
"num_hidden_layers": 12,
|
| 121 |
+
"pad_token_id": 0,
|
| 122 |
+
"position_embedding_type": "absolute",
|
| 123 |
+
"torch_dtype": "float32",
|
| 124 |
+
"transformers_version": "4.53.1",
|
| 125 |
+
"type_vocab_size": 2,
|
| 126 |
+
"use_cache": true,
|
| 127 |
+
"vocab_size": 30522
|
| 128 |
+
}
|
| 129 |
+
|
| 130 |
+
[2025-07-10 00:57:48,977][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--ricardoz--BERTugues-base-portuguese-cased/snapshots/76022866e209716d673e144cc9186f7b20830967/config.json
|
| 131 |
+
[2025-07-10 00:57:48,978][transformers.configuration_utils][INFO] - Model config BertConfig {
|
| 132 |
+
"architectures": [
|
| 133 |
+
"BertForPreTraining"
|
| 134 |
+
],
|
| 135 |
+
"attention_probs_dropout_prob": 0.1,
|
| 136 |
+
"classifier_dropout": null,
|
| 137 |
+
"hidden_act": "gelu",
|
| 138 |
+
"hidden_dropout_prob": 0.1,
|
| 139 |
+
"hidden_size": 768,
|
| 140 |
+
"initializer_range": 0.02,
|
| 141 |
+
"intermediate_size": 3072,
|
| 142 |
+
"layer_norm_eps": 1e-12,
|
| 143 |
+
"max_position_embeddings": 512,
|
| 144 |
+
"model_type": "bert",
|
| 145 |
+
"num_attention_heads": 12,
|
| 146 |
+
"num_hidden_layers": 12,
|
| 147 |
+
"pad_token_id": 0,
|
| 148 |
+
"position_embedding_type": "absolute",
|
| 149 |
+
"torch_dtype": "float32",
|
| 150 |
+
"transformers_version": "4.53.1",
|
| 151 |
+
"type_vocab_size": 2,
|
| 152 |
+
"use_cache": true,
|
| 153 |
+
"vocab_size": 30522
|
| 154 |
+
}
|
| 155 |
+
|
| 156 |
+
[2025-07-10 00:57:48,996][__main__][INFO] - Tokenizer function parameters- Padding:longest; Truncation: True; Use Full Context: False
|
| 157 |
+
[2025-07-10 00:57:49,407][__main__][INFO] -
|
| 158 |
+
Token statistics for 'train' split:
|
| 159 |
+
[2025-07-10 00:57:49,407][__main__][INFO] - Total examples: 500
|
| 160 |
+
[2025-07-10 00:57:49,407][__main__][INFO] - Min tokens: 512
|
| 161 |
+
[2025-07-10 00:57:49,407][__main__][INFO] - Max tokens: 512
|
| 162 |
+
[2025-07-10 00:57:49,407][__main__][INFO] - Avg tokens: 512.00
|
| 163 |
+
[2025-07-10 00:57:49,407][__main__][INFO] - Std tokens: 0.00
|
| 164 |
+
[2025-07-10 00:57:49,497][__main__][INFO] -
|
| 165 |
+
Token statistics for 'validation' split:
|
| 166 |
+
[2025-07-10 00:57:49,497][__main__][INFO] - Total examples: 132
|
| 167 |
+
[2025-07-10 00:57:49,497][__main__][INFO] - Min tokens: 512
|
| 168 |
+
[2025-07-10 00:57:49,497][__main__][INFO] - Max tokens: 512
|
| 169 |
+
[2025-07-10 00:57:49,497][__main__][INFO] - Avg tokens: 512.00
|
| 170 |
+
[2025-07-10 00:57:49,497][__main__][INFO] - Std tokens: 0.00
|
| 171 |
+
[2025-07-10 00:57:49,593][__main__][INFO] -
|
| 172 |
+
Token statistics for 'test' split:
|
| 173 |
+
[2025-07-10 00:57:49,593][__main__][INFO] - Total examples: 138
|
| 174 |
+
[2025-07-10 00:57:49,593][__main__][INFO] - Min tokens: 512
|
| 175 |
+
[2025-07-10 00:57:49,593][__main__][INFO] - Max tokens: 512
|
| 176 |
+
[2025-07-10 00:57:49,593][__main__][INFO] - Avg tokens: 512.00
|
| 177 |
+
[2025-07-10 00:57:49,593][__main__][INFO] - Std tokens: 0.00
|
| 178 |
+
[2025-07-10 00:57:49,593][__main__][INFO] - If token statistics are the same (max, avg, min) keep in mind that this is due to batched tokenization and padding.
|
| 179 |
+
[2025-07-10 00:57:49,593][__main__][INFO] - Model max length: 512. If it is the same as stats, then there is a high chance that sequences are being truncated.
|
| 180 |
+
[2025-07-10 00:57:49,593][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only
|
| 181 |
+
[2025-07-10 00:57:49,593][__main__][INFO] - Loading model from: kamel-usp/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only
|
| 182 |
+
[2025-07-10 00:57:50,956][__main__][INFO] - Model need ≈ 1.36 GiB to run inference and 2.59 for training
|
| 183 |
+
+[2025-07-10 00:57:51,848][transformers.configuration_utils][INFO] - loading configuration file config.json from cache at /tmp/models--kamel-usp--jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only/snapshots/c8b174c3e6521d94d4c8700c5ae1a12ea8a389b1/config.json
+[2025-07-10 00:57:51,849][transformers.configuration_utils][INFO] - Model config BertConfig {
+  "architectures": [
+    "BertForSequenceClassification"
+  ],
+  "attention_probs_dropout_prob": 0.1,
+  "classifier_dropout": null,
+  "hidden_act": "gelu",
+  "hidden_dropout_prob": 0.1,
+  "hidden_size": 768,
+  "id2label": {
+    "0": 0,
+    "1": 40,
+    "2": 80,
+    "3": 120,
+    "4": 160,
+    "5": 200
+  },
+  "initializer_range": 0.02,
+  "intermediate_size": 3072,
+  "label2id": {
+    "0": 0,
+    "40": 1,
+    "80": 2,
+    "120": 3,
+    "160": 4,
+    "200": 5
+  },
+  "layer_norm_eps": 1e-12,
+  "max_position_embeddings": 512,
+  "model_type": "bert",
+  "num_attention_heads": 12,
+  "num_hidden_layers": 12,
+  "pad_token_id": 0,
+  "position_embedding_type": "absolute",
+  "torch_dtype": "float32",
+  "transformers_version": "4.53.1",
+  "type_vocab_size": 2,
+  "use_cache": true,
+  "vocab_size": 30522
+}
+
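The `id2label`/`label2id` maps above encode the six ENEM competence grades (0, 40, 80, 120, 160, 200) as class indices 0-5, so a raw argmax over the logits must be mapped back through `id2label` to recover a grade. A minimal sketch, assuming only the checkpoint named in the log (the essay string is a placeholder):

```python
# Mapping a predicted class index back to an ENEM grade via config.id2label.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "kamel-usp/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only"
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

essay = "..."  # placeholder: a Portuguese ENEM essay
inputs = tokenizer(essay, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_class = int(logits.argmax(dim=-1))    # class index in 0..5
grade = model.config.id2label[pred_class]  # grade: 0, 40, 80, 120, 160, or 200
print(pred_class, grade)
```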
+[2025-07-10 00:58:00,510][transformers.modeling_utils][INFO] - loading weights file model.safetensors from cache at /tmp/models--kamel-usp--jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only/snapshots/c8b174c3e6521d94d4c8700c5ae1a12ea8a389b1/model.safetensors
+[2025-07-10 00:58:00,511][transformers.modeling_utils][INFO] - Will use torch_dtype=torch.float32 as defined in model's config object
+[2025-07-10 00:58:00,511][transformers.modeling_utils][INFO] - Instantiating BertForSequenceClassification model under default dtype torch.float32.
+[2025-07-10 00:58:00,897][transformers.modeling_utils][INFO] - All model checkpoint weights were used when initializing BertForSequenceClassification.
+
+[2025-07-10 00:58:00,897][transformers.modeling_utils][INFO] - All the weights of BertForSequenceClassification were initialized from the model checkpoint at kamel-usp/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only.
+If your task is similar to the task the model of the checkpoint was trained on, you can already use BertForSequenceClassification for predictions without further training.
+[2025-07-10 00:58:00,906][transformers.training_args][INFO] - PyTorch: setting up devices
+[2025-07-10 00:58:00,930][transformers.training_args][INFO] - The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
+[2025-07-10 00:58:00,937][transformers.trainer][INFO] - You have loaded a model on multiple GPUs. `is_model_parallel` attribute will be force-set to `True` to avoid any unexpected behavior such as device placement mismatching.
+[2025-07-10 00:58:00,963][transformers.trainer][INFO] - Using auto half precision backend
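The `--report_to` message above is only a deprecation notice; it disappears once the argument is set explicitly. A minimal sketch, assuming standard `TrainingArguments` (the output directory is a placeholder, not a path from this run):

```python
# Silencing the report_to deprecation notice by setting the argument explicitly.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",               # placeholder
    report_to="none",               # or "all" to keep the current default in v5
    per_device_eval_batch_size=16,  # matches the prediction batch size in this log
)
```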
+[2025-07-10 00:58:04,297][__main__][INFO] - Running inference on test dataset
+[2025-07-10 00:58:04,298][transformers.trainer][INFO] - The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: supporting_text, grades, prompt, id_prompt, id, essay_year, essay_text, reference. If supporting_text, grades, prompt, id_prompt, id, essay_year, essay_text, reference are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
+[2025-07-10 00:58:04,307][transformers.trainer][INFO] -
+***** Running Prediction *****
+[2025-07-10 00:58:04,307][transformers.trainer][INFO] - Num examples = 138
+[2025-07-10 00:58:04,307][transformers.trainer][INFO] - Batch size = 16
+[2025-07-10 00:58:05,176][__main__][INFO] - Inference results saved to jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only-encoder_classification-C1-essay_only_inference_results.jsonl
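The prediction pass above (138 examples, batch size 16) followed by a JSONL dump could be reproduced along these lines. This is a hedged sketch only: the `test_dataset` wiring and the output field names are assumptions, not read from the repository's code.

```python
# Hedged sketch of the prediction + JSONL export step; field names assumed.
import json
import numpy as np
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

MODEL_ID = "kamel-usp/jbcs2025_BERTugues-base-portuguese-cased-encoder_classification-C1-essay_only"
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)

# `test_dataset` is assumed: the tokenized test split (138 examples here),
# whose extra columns (essay_text, grades, ...) Trainer drops, per the log.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_eval_batch_size=16),
)
prediction = trainer.predict(test_dataset)
pred_classes = np.argmax(prediction.predictions, axis=-1)

with open("inference_results.jsonl", "w", encoding="utf-8") as f:
    for row, pred in zip(test_dataset, pred_classes):
        record = {"id": row["id"], "prediction": int(pred)}  # assumed fields
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```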
+[2025-07-10 00:58:05,177][__main__][INFO] - Computing bootstrap confidence intervals for metrics: ['QWK', 'Macro_F1', 'Weighted_F1']
+[2025-07-10 01:00:09,857][__main__][INFO] - Bootstrap CI results saved to bootstrap_confidence_intervals.csv
+[2025-07-10 01:00:09,857][__main__][INFO] - Bootstrap Confidence Intervals (95%):
+[2025-07-10 01:00:09,857][__main__][INFO] - QWK: 0.6118 [0.5156, 0.7026]
+[2025-07-10 01:00:09,857][__main__][INFO] - Macro_F1: 0.4183 [0.3166, 0.5527]
+[2025-07-10 01:00:09,857][__main__][INFO] - Weighted_F1: 0.5627 [0.4792, 0.6438]
+[2025-07-10 01:00:09,857][__main__][INFO] - Inference results: {'accuracy': 0.5434782608695652, 'RMSE': 30.45547950507524, 'QWK': 0.6139860139860139, 'HDIV': 0.007246376811594235, 'Macro_F1': 0.38733766233766237, 'Micro_F1': 0.5434782608695652, 'Weighted_F1': 0.5620203065855239, 'TP_0': np.int64(0), 'TN_0': np.int64(137), 'FP_0': np.int64(0), 'FN_0': np.int64(1), 'TP_1': np.int64(0), 'TN_1': np.int64(138), 'FP_1': np.int64(0), 'FN_1': np.int64(0), 'TP_2': np.int64(7), 'TN_2': np.int64(109), 'FP_2': np.int64(19), 'FN_2': np.int64(3), 'TP_3': np.int64(35), 'TN_3': np.int64(61), 'FP_3': np.int64(11), 'FN_3': np.int64(31), 'TP_4': np.int64(28), 'TN_4': np.int64(67), 'FP_4': np.int64(20), 'FN_4': np.int64(23), 'TP_5': np.int64(5), 'TN_5': np.int64(115), 'FP_5': np.int64(13), 'FN_5': np.int64(5)}
+[2025-07-10 01:00:09,858][__main__][INFO] - Inference experiment completed
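The two-minute gap between "Computing bootstrap confidence intervals" and the CSV being written is the resampling loop. A minimal sketch of a percentile bootstrap over the test predictions is below; the resample count, seed, and metric implementations are assumptions, not values read from the script. Note that the bootstrap means (e.g. QWK 0.6118) need not coincide exactly with the point estimates in the final results line (QWK 0.6140), which is expected behavior.

```python
# Hedged sketch: percentile-bootstrap 95% CIs for QWK / Macro_F1 / Weighted_F1.
import numpy as np
from sklearn.metrics import cohen_kappa_score, f1_score

def bootstrap_ci(y_true, y_pred, n_boot=2000, seed=42, alpha=0.05):
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    stats = {"QWK": [], "Macro_F1": [], "Weighted_F1": []}
    for _ in range(n_boot):
        # Resample test examples with replacement and recompute each metric.
        idx = rng.integers(0, len(y_true), size=len(y_true))
        t, p = y_true[idx], y_pred[idx]
        stats["QWK"].append(cohen_kappa_score(t, p, weights="quadratic"))
        stats["Macro_F1"].append(f1_score(t, p, average="macro", zero_division=0))
        stats["Weighted_F1"].append(f1_score(t, p, average="weighted", zero_division=0))
    # Percentile interval: the central (1 - alpha) mass of the bootstrap distribution.
    return {
        name: (float(np.quantile(vals, alpha / 2)), float(np.quantile(vals, 1 - alpha / 2)))
        for name, vals in stats.items()
    }
```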