
Model Evaluation

The model currently ranks 3rd on the HellaSwag benchmark and 1st on the TruthfulQA benchmark on the Open LLM Leaderboard!

Expect More High Quality Models Soon!

Experimental Model Warning

This model is an experimental prototype and should not be considered production-ready. Note that this is the base model, not the instruct/chat version.

Reasons for Experimental Status

Potential for Bias: Due to the experimental nature of the model, it may exhibit biases in its output, which could lead to incorrect or unfair results.

Precautions to Take

Use with Caution: Be aware that the model's output may contain factual inaccuracies or biases.

Verify Output: Always verify the model's output with other sources to ensure its accuracy.

Report Issues: If you encounter any issues or biases in the model's output, please report them so that they can be addressed in future updates.

Avoid Sensitive Applications: Do not use the model for applications where accuracy and reliability are critical, such as medical or financial decision-making.

By understanding the experimental nature of this model and taking the necessary precautions, you can help ensure that it is used responsibly and effectively.

License: This model is strictly for non-commercial (cc-by-nc-4.0) use only. The model (i.e., the base model and any derivatives, merges, or mixes) is completely free to use for non-commercial purposes, as long as the cc-by-nc-4.0 license included in any parent repository and the non-commercial use condition remain in place, regardless of other models' licenses. The license may change after a new model is released. If you want to use this model for commercial purposes, contact me.

Disclaimer: By downloading and/or using the model, you fully agree to the license (cc-by-nc-4.0) and its commercial-use restrictions.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 76.48 |
| AI2 Reasoning Challenge (25-shot) | 73.46 |
| HellaSwag (10-shot)               | 89.38 |
| MMLU (5-shot)                     | 64.19 |
| TruthfulQA (0-shot)               | 79.86 |
| Winogrande (5-shot)               | 85.48 |
| GSM8k (5-shot)                    | 66.49 |
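As a sanity check, the leaderboard's "Avg." value is the plain arithmetic mean of the six benchmark scores listed above; the snippet below recomputes it from the table:

```python
# Benchmark scores taken from the Open LLM Leaderboard table above
scores = {
    "ARC (25-shot)": 73.46,
    "HellaSwag (10-shot)": 89.38,
    "MMLU (5-shot)": 64.19,
    "TruthfulQA (0-shot)": 79.86,
    "Winogrande (5-shot)": 85.48,
    "GSM8k (5-shot)": 66.49,
}

# The "Avg." column is the unweighted mean of the six scores
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # → 76.48
```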
Model size: 7.24B params (Safetensors)

Tensor type: BF16