---
language:
- pt
license: cc-by-3.0
library_name: peft
datasets:
- Gustrd/dolly-15k-hippo-translated-pt-12k
base_model: HachiML/mpt-7b-instruct-for-peft
---

### Cabra: A Portuguese instruction-finetuned Open-LLaMA

LoRA adapter created with the procedure detailed at the GitHub repository: https://github.com/gustrd/cabra.

Training ran for 2 epochs on two T4 GPUs at Kaggle.

This LoRA adapter was created following the procedure below.

## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32

### Framework versions

- PEFT 0.5.0.dev0
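
### Loading the adapter (sketch)

For reference, here is a minimal sketch of how the quantization config above could be reproduced and the adapter loaded for inference. It assumes the base model listed in the metadata; `adapter_id` is a hypothetical placeholder, so replace it with this repository's id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Quantization config mirroring the values listed above
# (4-bit fp4, no double quantization, float32 compute dtype).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

base_model_id = "HachiML/mpt-7b-instruct-for-peft"
adapter_id = "Gustrd/cabra"  # hypothetical placeholder: use this repo's id

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # MPT models ship custom modeling code
)

# Attach the LoRA adapter on top of the quantized base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Explique o que é um modelo de linguagem."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Loading the base model with the same `bitsandbytes` settings used during training keeps the adapter's quantized weights consistent with how they were fine-tuned.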