---
license: apache-2.0
datasets:
- Open-Orca/OpenOrca
language:
- en
library_name: transformers
pipeline_tag: question-answering
metrics:
- accuracy
---

## Model Details

### Model Description

- **Developed by:** Andron00e
- **Language(s) (NLP):** English
- **Library:** PyTorch (transformers, peft)
- **License:** apache-2.0
- **Finetuned from model:** openlm-research/open_llama_3b

### Model Sources

- **Repository:** https://github.com/Andron00e/Fine-Tuning-project

### Training Data

https://huggingface.co/datasets/Open-Orca/OpenOrca

## Evaluation

The model was evaluated with the EleutherAI [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/e47e01beea79cfe87421e2dac49e64d499c240b4#task-versioning), pinned to the linked commit for task versioning.

#### Testing Data

The hellaswag test set.

#### Metrics

Accuracy (`acc`) and normalized accuracy (`acc_norm`).

### Results and Model Examination

|   Task  |Version| Metric |Value |   |Stderr|
|---------|------:|--------|-----:|---|-----:|
|hellaswag|      0|acc     |0.4899|±  |0.0050|
|         |       |acc_norm|0.6506|±  |0.0048|

## Citations

```
@software{openlm2023openllama,
  author = {Geng, Xinyang and Liu, Hao},
  title = {OpenLLaMA: An Open Reproduction of LLaMA},
  month = may,
  year = 2023,
  url = {https://github.com/openlm-research/open_llama}
}
```

```
@software{eval-harness,
  author = {Gao, Leo and Tow, Jonathan and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and McDonell, Kyle and Muennighoff, Niklas and Phang, Jason and Reynolds, Laria and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy},
  title = {A framework for few-shot language model evaluation},
  month = sep,
  year = 2021,
  publisher = {Zenodo},
  version = {v0.0.1},
  doi = {10.5281/zenodo.5371628},
  url = {https://doi.org/10.5281/zenodo.5371628}
}
```

## Model Card Authors and Contact

[Andron00e](https://github.com/Andron00e)
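For readers comparing the `acc` and `acc_norm` numbers in the results table: in the lm-evaluation-harness, `acc` selects the multiple-choice completion with the highest total log-likelihood, while `acc_norm` first divides each log-likelihood by the completion's byte length, which removes the bias toward shorter answers. A minimal sketch of that selection rule (the function and the example scores below are illustrative, not code from the harness):

```python
# Illustrative sketch of acc vs. acc_norm answer selection.
# The log-likelihood values here are made up for demonstration.

def pick_answer(loglikelihoods, completions, normalize=False):
    """Return the index of the chosen completion.

    normalize=False mimics `acc` (raw total log-likelihood);
    normalize=True mimics `acc_norm` (log-likelihood per byte).
    """
    if normalize:
        scores = [ll / len(c.encode("utf-8"))
                  for ll, c in zip(loglikelihoods, completions)]
    else:
        scores = list(loglikelihoods)
    return max(range(len(scores)), key=scores.__getitem__)

completions = ["a short ending",
               "a much longer and more detailed ending"]
lls = [-12.0, -20.0]  # longer completions tend to get lower totals

print(pick_answer(lls, completions))                  # -> 0 (raw acc)
print(pick_answer(lls, completions, normalize=True))  # -> 1 (acc_norm)
```

The gap between the two rows of the table (0.4899 vs. 0.6506) reflects exactly this length-normalization effect on hellaswag's variable-length endings.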