---
library_name: transformers
language:
- en
license: apache-2.0
base_model: BEE-spoke-data/tFINE-900m-e16-d32-flan
tags:
- instruct
- code
datasets:
- pszemraj/infinity-instruct-7m-T2T_en
pipeline_tag: text2text-generation
---

# tFINE-900m-e16-d32-instruct

## Model description

This model is a fine-tuned version of [BEE-spoke-data/tFINE-900m-e16-d32-flan](https://hf.co/BEE-spoke-data/tFINE-900m-e16-d32-flan) on the [pszemraj/infinity-instruct-7m-T2T_en](https://hf.co/datasets/pszemraj/infinity-instruct-7m-T2T_en) dataset.
It achieves the following results on the evaluation set:

- Loss: 1.3588
- Num Input Tokens Seen: 810,173,896

## Usage Example

> [!TIP]
> You can also run inference with [turboT5](https://github.com/Knowledgator/TurboT5) on Ampere or newer GPUs for better performance. See the [example on Colab](https://colab.research.google.com/gist/pszemraj/59cf088d603a0f62bbb5cedc42a7837b/turbot5-inference.ipynb); a minimal sketch is also included at the end of this card.

```py
from transformers import pipeline

pipe = pipeline(
    "text2text-generation",
    model="BEE-spoke-data/tFINE-900m-e16-d32-instruct",
    # device_map="auto",  # uncomment if you have a GPU and accelerate installed
)

prompt = "Write me a python script that demonstrates an advanced sorting algorithm"

res = pipe(
    prompt,
    max_new_tokens=384,  # generation budget
    num_beams=4,  # beam search for more coherent output
    early_stopping=True,  # stop each beam once it finishes
    no_repeat_ngram_size=6,  # block repeated 6-grams to curb looping
)
print(res[0]["generated_text"])
```

## Evals

### Open LLM Leaderboard 2

| Model                                         | Average ⬆️ | IFEval | BBH  | MATH Lvl 5 | GPQA | MUSR  | MMLU-PRO |
|-----------------------------------------------|------------|--------|------|------------|------|-------|----------|
| 🔶 BEE-spoke-data/tFINE-900m-e16-d32-instruct | 5.82       | 13.21  | 4.74 | 0          | 0.56 | 13.81 | 2.63     |
| 🔶 BEE-spoke-data/tFINE-900m-e16-d32-flan     | 4.43       | 15.06  | 4.41 | 0          | 0    | 3.72  | 3.41     |
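
## turboT5 inference (untested sketch)

The Colab notebook linked above is the tested reference; the block below is only a minimal sketch of the same idea. It assumes the `turbot5` package exposes a drop-in `T5ForConditionalGeneration` that accepts `attention_type` and `use_triton` arguments, as its README shows at the time of writing; check the [TurboT5 repo](https://github.com/Knowledgator/TurboT5) for the current API before relying on it.

```py
# Minimal sketch, not a tested recipe: assumes turbot5's drop-in
# T5ForConditionalGeneration and its attention_type / use_triton arguments.
from transformers import AutoTokenizer
from turbot5 import T5ForConditionalGeneration

model_id = "BEE-spoke-data/tFINE-900m-e16-d32-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(
    model_id,
    attention_type="flash",  # Triton flash-attention kernel (Ampere+ GPUs)
    use_triton=True,
).to("cuda")

prompt = "Write me a python script that demonstrates an advanced sorting algorithm"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(
    input_ids,
    max_new_tokens=384,
    num_beams=4,
    early_stopping=True,
    no_repeat_ngram_size=6,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The generation settings mirror the `pipeline` example above; only the attention backend changes, so outputs should match up to numerical differences between kernels.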