Training procedure

The following bitsandbytes quantization config was used during training (a loading sketch that reproduces these settings is shown after the list):

  • quant_method: bitsandbytes
  • _load_in_8bit: False
  • _load_in_4bit: True
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: nf4
  • bnb_4bit_use_double_quant: True
  • bnb_4bit_compute_dtype: float16
  • bnb_4bit_quant_storage: uint8
  • load_in_4bit: True
  • load_in_8bit: False
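
For reference, these settings can be reproduced at inference time with a transformers BitsAndBytesConfig. The sketch below is a minimal example; the base model identifier google/gemma-2b is an assumption, since the card does not state the base checkpoint explicitly.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and fp16 compute,
# mirroring the training-time bitsandbytes config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

base_model_id = "google/gemma-2b"  # assumed base model; not stated in this card
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```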

Framework versions

  • PEFT 0.5.0
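
The QLoRA adapter in this repository can then be attached to the quantized base model with PEFT (the pinned 0.5.0 release already provides PeftModel.from_pretrained). This is a sketch that continues the snippet above and assumes the model and tokenizer objects from it; the Portuguese prompt is only an illustrative example.

```python
from peft import PeftModel

# Attach the QLoRA adapter weights from this repository to the
# 4-bit quantized base model loaded in the previous sketch.
model = PeftModel.from_pretrained(model, "recogna-nlp/gembode-2b-ultraalpaca-qlora")
model.eval()

prompt = "Explique o que é aprendizado de máquina."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```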

Open Portuguese LLM Leaderboard Evaluation Results

Detailed results can be found here and on the 🚀 Open Portuguese LLM Leaderboard

| Metric                     | Value |
|----------------------------|-------|
| Average                    | 32.3  |
| ENEM Challenge (No Images) | 24.14 |
| BLUEX (No Images)          | 20.31 |
| OAB Exams                  | 25.56 |
| Assin2 RTE                 | 69.75 |
| Assin2 STS                 | 4.16  |
| FaQuAD NLI                 | 52.63 |
| HateBR Binary              | 33.33 |
| PT Hate Speech Binary      | 41.65 |
| tweetSentBR                | 19.15 |