Model Details

  • Model Description: This model is a test model for the data ordering task.
  • Developed by: Juhwan Lee
  • Model Type: Large Language Model

Model Architecture

This model is based on Mistral-7B-v0.1 and was fine-tuned for the data ordering task. A minimal usage sketch follows the architecture list below.

Mistral-7B-v0.1 is a transformer model, with the following architecture choices:

  • Grouped-Query Attention
  • Sliding-Window Attention
  • Byte-fallback BPE tokenizer
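
Below is a minimal loading and generation sketch using the Hugging Face transformers library. The repository id NLUHOPOE/Mistral-test-case-4 is taken from this card; the prompt text and generation settings are illustrative assumptions, since the exact prompt format used in fine-tuning is not documented here.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NLUHOPOE/Mistral-test-case-4"

# Load the fine-tuned checkpoint and its tokenizer from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; not the documented fine-tuning prompt format.
prompt = "Order the following items from earliest to latest: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```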

Dataset

We randomly sampled 100,000 examples from the Open-Orca dataset and fine-tuned on that subset.
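
A sketch of how such a sample could be drawn with the datasets library is shown below. The sample size of 100,000 comes from this card; the dataset id Open-Orca/OpenOrca and the shuffle seed are assumptions for illustration.

```python
from datasets import load_dataset

# Load the OpenOrca training split from the Hugging Face Hub
# (assumed dataset id: Open-Orca/OpenOrca).
orca = load_dataset("Open-Orca/OpenOrca", split="train")

# Shuffle and keep a random sample of 100,000 examples, as described above.
# The seed is an arbitrary reproducibility choice, not taken from this card.
sample = orca.shuffle(seed=42).select(range(100_000))
print(sample)
```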

GitHub

https://github.com/trailerAI

License

Apache License 2.0

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 54.05 |
| AI2 Reasoning Challenge (25-Shot) | 52.99 |
| HellaSwag (10-Shot)               | 78.54 |
| MMLU (5-Shot)                     | 54.79 |
| TruthfulQA (0-shot)               | 45.37 |
| Winogrande (5-shot)               | 75.61 |
| GSM8k (5-shot)                    | 16.98 |
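
The Open LLM Leaderboard computes these scores with EleutherAI's lm-evaluation-harness. Below is a hedged sketch of reproducing the 25-shot ARC number with its Python API; the simple_evaluate call and its arguments reflect recent harness versions and are not taken from this card.

```python
import lm_eval

# Evaluate the model on ARC-Challenge with 25 few-shot examples,
# mirroring the leaderboard setting for that benchmark.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=NLUHOPOE/Mistral-test-case-4",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```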