---
license: mit
datasets:
- GAIR/lima
language:
- en
pipeline_tag: text-generation
---

# lgaalves/gpt2-xl_lima (1.5B)

**lgaalves/gpt2-xl_lima** is an instruction fine-tuned model based on the GPT-2 transformer architecture.

### Benchmark Metrics

| Metric                | gpt2-xl_lima | gpt2-xl (base) |
|-----------------------|--------------|----------------|
| Avg.                  | 36.65        | **36.66**      |
| ARC (25-shot)         | **31.14**    | 30.29          |
| HellaSwag (10-shot)   | 51.28        | **51.38**      |
| MMLU (5-shot)         | 25.43        | **26.43**      |
| TruthfulQA (0-shot)   | **38.74**    | 38.54          |

We use the state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing the benchmark results.

### Model Details

* **Trained by:** Luiz G A Alves
* **Model type:** **lgaalves/gpt2-xl_lima** is an auto-regressive language model based on the GPT-2 transformer architecture.
* **Language(s):** English

### How to use:

```python
# Use a pipeline as a high-level helper
>>> from transformers import pipeline

>>> pipe = pipeline("text-generation", model="lgaalves/gpt2-xl_lima")
>>> question = "What is a large language model?"
>>> answer = pipe(question)
>>> print(answer[0]['generated_text'])
```

Or you can load the model directly:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("lgaalves/gpt2-xl_lima")
model = AutoModelForCausalLM.from_pretrained("lgaalves/gpt2-xl_lima")
```

### Training Dataset

`lgaalves/gpt2-xl_lima` was trained on the [GAIR/lima](https://huggingface.co/datasets/GAIR/lima) dataset.

### Training Procedure

`lgaalves/gpt2-xl_lima` was instruction fine-tuned using LoRA on a single Tesla V100-SXM2-16GB GPU; training took about 10 minutes.

# Intended uses, limitations & biases

You can use the raw model for text generation or fine-tune it for a downstream task. The model was not extensively tested and may produce false information. It contains a lot of unfiltered content from the internet, which is far from neutral.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lgaalves__gpt2-xl_lima).

| Metric                | Value |
|-----------------------|-------|
| Avg.                  | 29.95 |
| ARC (25-shot)         | 31.14 |
| HellaSwag (10-shot)   | 51.28 |
| MMLU (5-shot)         | 25.43 |
| TruthfulQA (0-shot)   | 38.74 |
| Winogrande (5-shot)   | 57.22 |
| GSM8K (5-shot)        | 0.91  |
| DROP (3-shot)         | 4.89  |
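
### Reproducing the benchmark results

The scores above come from the Language Model Evaluation Harness. Below is a minimal sketch using the harness's current `lm_eval` Python API; the task name, few-shot setting, and batch size shown here are illustrative assumptions and may differ from the pinned harness version and configuration used by the HuggingFace LLM Leaderboard.

```python
# Minimal sketch: evaluate this model with the EleutherAI
# Language Model Evaluation Harness (pip install lm-eval).
# Task names and few-shot settings below are assumptions; the
# leaderboard pins a specific harness version and configuration.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=lgaalves/gpt2-xl_lima",
    tasks=["arc_challenge"],  # ARC; other tables use HellaSwag (10-shot), MMLU (5-shot), TruthfulQA (0-shot)
    num_fewshot=25,           # 25-shot ARC, matching the benchmark table above
    batch_size=8,             # illustrative; adjust to available GPU memory
)
print(results["results"])
```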