---
license: llama2
---
The LLaMA-2-7B model fine-tuned on the Math task using [CorDA](https://huggingface.co/papers/2406.05223) in knowledge-preserved adaptation (KPA) mode with NQ open.

| Method | TriviaQA | NQ open | GSM8k | Math |
|---|---|---|---|---|
|LoRA|44.17|1.91|42.68|5.92|
|[CorDA (KPA with nqopen)](https://huggingface.co/iboing/CorDA_KPA_nqopen_finetuned_math/tree/main) | **45.23** | **10.44** | 45.64 | 6.94|
|[CorDA (IPA with MetaMath)](https://huggingface.co/iboing/CorDA_IPA_math_finetuned_math/tree/main) | - | - | **54.59** | **8.54** |

You can evaluate the model's performance by following step 3 in the [CorDA GitHub repo](https://github.com/iboing/CorDA).

Note: the model trained with the CorDA adapter relies on customized modeling code. To restore the original LLaMA architecture, run `merge_adapter_for_corda.py` from the [CorDA GitHub repo](https://github.com/iboing/CorDA).
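Once the adapter has been merged back into the standard LLaMA architecture, the checkpoint should load with stock Hugging Face Transformers classes. The sketch below is a hypothetical loading helper, not part of the CorDA codebase; it assumes the merge step has been completed and that `transformers` is installed. The unmerged customized-adapter checkpoint may not load this way.

```python
def load_corda_model(model_id: str = "iboing/CorDA_KPA_nqopen_finetuned_math"):
    """Load the fine-tuned model and tokenizer from the Hugging Face Hub.

    Assumes the checkpoint follows the original LLaMA architecture
    (i.e., merge_adapter_for_corda.py has already been applied).
    """
    # Lazy import so defining this helper does not require transformers.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return tokenizer, model
```

Typical usage would be `tokenizer, model = load_corda_model()` followed by a standard `model.generate(...)` call on a tokenized math prompt.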