
This is LLaMA-2-7B fine-tuned on the math task using CorDA in IPA mode with MetaMath.

| Method | TriviaQA | NQ open | GSM8k | Math |
|---|---|---|---|---|
| LoRA | 44.17 | 1.91 | 42.68 | 5.92 |
| CorDA (KPA with nqopen) | 45.23 | 10.44 | 45.64 | 6.94 |
| CorDA (IPA with MetaMath) | - | - | 54.59 | 8.54 |

You can evaluate the model's performance by following step 3 in the CorDA GitHub repo.

Note: the model trained with the CorDA adapter relies on customized model code. To restore the original LLaMA architecture, run merge_adapter_for_corda.py from the CorDA GitHub repo.
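The actual merge is performed by merge_adapter_for_corda.py, but the underlying idea is the standard one for low-rank adapters: fold the low-rank update into the frozen base weight so the merged model needs no custom adapter code at inference. A minimal NumPy sketch of that identity, with hypothetical layer shapes (all names here are illustrative, not from the CorDA codebase):

```python
import numpy as np

# Hypothetical shapes for a single linear layer of the base model.
d_out, d_in, r = 8, 16, 4

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in))       # low-rank adapter factor (down-projection)
B = rng.standard_normal((d_out, r))      # low-rank adapter factor (up-projection)

# Merging folds the rank-r update B @ A into the base weight in place.
W_merged = W + B @ A

# The merged layer computes exactly what base + adapter computed.
x = rng.standard_normal(d_in)
adapter_out = W @ x + B @ (A @ x)
merged_out = W_merged @ x
print(np.allclose(adapter_out, merged_out))
```

After merging, the checkpoint is a plain dense weight matrix, which is why the restored model loads with the stock LLaMA architecture.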

Model size: 7.06B params (Safetensors, FP16).
