## Training procedure
The following bitsandbytes quantization config was used during training (see the sketch after the list):
- quant_method: bitsandbytes
- _load_in_8bit: False
- _load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
- bnb_4bit_quant_storage: uint8
- load_in_4bit: True
- load_in_8bit: False
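For reference, here is a minimal sketch of how the config above could be reconstructed with the `BitsAndBytesConfig` class from `transformers`. This is an assumption-laden reconstruction, not the card author's original code; the `bnb_4bit_quant_storage` argument requires a recent `transformers` release, and the private `_load_in_*` fields are derived from the public `load_in_4bit`/`load_in_8bit` flags rather than set directly.

```python
import torch
from transformers import BitsAndBytesConfig

# Sketch: rebuilds the quantization config listed above via the public API.
# The _load_in_8bit/_load_in_4bit entries in the dump are internal mirrors
# of the load_in_8bit/load_in_4bit arguments and are not passed explicitly.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    load_in_8bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",            # NormalFloat4 quantization
    bnb_4bit_use_double_quant=True,       # quantize the quantization constants
    bnb_4bit_compute_dtype=torch.float16, # matmuls run in fp16
    bnb_4bit_quant_storage="uint8",
)
```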
### Framework versions
- PEFT 0.5.0
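Since the card ships a PEFT adapter trained on a 4-bit base, a minimal loading sketch might look as follows, reusing `bnb_config` from the block above. `base-model-id` and `adapter-id` are hypothetical placeholders, not the actual repositories behind this model.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base checkpoint in 4-bit, then attach the PEFT adapter on top.
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",                 # placeholder for the base checkpoint
    quantization_config=bnb_config,  # the BitsAndBytesConfig defined above
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "adapter-id")  # placeholder adapter repo
```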
## Open Portuguese LLM Leaderboard Evaluation Results
Detailed results can be found here and on the 🚀 Open Portuguese LLM Leaderboard.
| Metric | Value |
|--------|-------|
| Average | 67.11 |
| ENEM Challenge (No Images) | 66.90 |
| BLUEX (No Images) | 57.16 |
| OAB Exams | 45.47 |
| Assin2 RTE | 86.61 |
| Assin2 STS | 71.39 |
| FaQuAD NLI | 67.40 |
| HateBR Binary | 79.81 |
| PT Hate Speech Binary | 63.75 |
| tweetSentBR | 65.49 |
### Evaluation results
- accuracy on ENEM Challenge (No Images), Open Portuguese LLM Leaderboard: 66.90
- accuracy on BLUEX (No Images), Open Portuguese LLM Leaderboard: 57.16
- accuracy on OAB Exams, Open Portuguese LLM Leaderboard: 45.47
- f1-macro on Assin2 RTE (test set), Open Portuguese LLM Leaderboard: 86.61
- pearson on Assin2 STS (test set), Open Portuguese LLM Leaderboard: 71.39
- f1-macro on FaQuAD NLI (test set), Open Portuguese LLM Leaderboard: 67.40
- f1-macro on HateBR Binary (test set), Open Portuguese LLM Leaderboard: 79.81
- f1-macro on PT Hate Speech Binary (test set), Open Portuguese LLM Leaderboard: 63.75
- f1-macro on tweetSentBR (test set), Open Portuguese LLM Leaderboard: 65.49