
Fine-tune LLaMA 2 (7B) with LoRA on meta-math/MetaMathQA

The adapter was trained for one epoch.
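For reference, a minimal PEFT setup matching this adapter might look like the sketch below. The q/k/v target modules follow the adapter's name (`qkv`); the base model ID, rank, alpha, and dropout values are assumptions not stated in this card.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Assumed base model; hyperparameters are illustrative, not from this card
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

config = LoraConfig(
    r=16,                                           # assumed LoRA rank
    lora_alpha=32,                                  # assumed scaling factor
    lora_dropout=0.05,                              # assumed dropout
    target_modules=["q_proj", "k_proj", "v_proj"],  # per the adapter name (qkv)
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()
```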

Result:

After fine-tuning, evaluation on the 1319-question GSM8K test set gives: Invalid outputs: 7, Test set size: 1319, Accuracy: 0.580.
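As a rough illustration of how such numbers are derived (the answer-extraction rule here is an assumption, not this card's actual evaluation script), accuracy is the fraction of test questions answered correctly, with unparseable generations counted as invalid:

```python
import re

def extract_answer(text):
    # Assumed extraction rule: take the last number in the generation
    nums = re.findall(r"-?\d+\.?\d*", text.replace(",", ""))
    return nums[-1] if nums else None

def score(generations, references):
    correct = invalid = 0
    for gen, ref in zip(generations, references):
        pred = extract_answer(gen)
        if pred is None:
            invalid += 1
        elif float(pred) == float(ref):
            correct += 1
    total = len(references)
    print(f"Invalid outputs: {invalid}, Test set size: {total}, "
          f"Accuracy: {correct / total:.3f}")
```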

Comparison

The officially reported accuracy is 0.665, obtained by fine-tuning the full LLaMA 2 7B model for 3 epochs.

Note: This LoRA adapter is released for future research purposes.

🚀 Adapter Usage

```python
from transformers import AutoModelForCausalLM

# Load the base model (assumed here to be meta-llama/Llama-2-7b-hf)
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Load and enable the pre-trained LoRA adapter
model.load_adapter("shuyuej/metamath_lora_qkv_llama2_7b")
model.enable_adapters()
```
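Once the adapter is enabled, generation works as with any causal LM. A brief example (the tokenizer source and prompt format are assumptions):

```python
from transformers import AutoTokenizer

# Assumed: the tokenizer comes from the same base model
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Question: What is 15% of 240?\nAnswer:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```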