This model is tuned on the GPT-4 Alpaca data and the LogiCoT data, starting from the LLaMA-7B model.

We train on 2 A100 GPUs. We first instruction-tune LLaMA-7B on the GPT-4 Alpaca data for 3 days, then on the LogiCoT data for 4 days.
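A minimal sketch of this two-stage pipeline, assuming plain supervised fine-tuning with the Hugging Face `Trainer`. The base checkpoint id, dataset file paths, prompt formatting, and all hyperparameters below are illustrative assumptions, not the exact configuration used for this model.

```python
# Illustrative two-stage instruction tuning: GPT-4 Alpaca first, then LogiCoT.
# All ids, paths, and hyperparameters are placeholders, not the exact setup.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE = "huggyllama/llama-7b"  # assumption: any LLaMA-7B checkpoint
tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE)

def tokenize(batch):
    # Assumes each record carries a pre-formatted instruction prompt
    # in a "text" field.
    return tokenizer(batch["text"], truncation=True, max_length=2048)

def tune(model, data_path, output_dir):
    """Run one instruction-tuning stage over a JSON-lines dataset."""
    ds = load_dataset("json", data_files=data_path, split="train")
    ds = ds.map(tokenize, batched=True, remove_columns=ds.column_names)
    args = TrainingArguments(
        output_dir=output_dir,
        per_device_train_batch_size=4,  # illustrative
        gradient_accumulation_steps=8,  # illustrative
        learning_rate=2e-5,             # illustrative
        num_train_epochs=3,             # illustrative
        bf16=True,
    )
    collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
    Trainer(model=model, args=args, train_dataset=ds,
            data_collator=collator).train()
    return model

# Stage 1: GPT-4 Alpaca data; Stage 2: LogiCoT data (placeholder paths).
model = tune(model, "gpt4_alpaca.jsonl", "stage1-gpt4-alpaca")
model = tune(model, "logicot.jsonl", "stage2-logicot")
```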

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric               | Value |
|----------------------|------:|
| Avg.                 | 39.37 |
| ARC (25-shot)        | 47.01 |
| HellaSwag (10-shot)  | 72.56 |
| MMLU (5-shot)        | 38.93 |
| TruthfulQA (0-shot)  | 43.63 |
| Winogrande (5-shot)  | 67.56 |
| GSM8K (5-shot)       |  0.0  |
| DROP (3-shot)        |  5.92 |
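As a usage sketch, the released checkpoint (`datatune/llama-7b-logicot`) can be loaded with `transformers`; the Alpaca-style prompt template and the generation settings below are assumptions for illustration, not a prescribed interface.

```python
# Minimal usage sketch: load the checkpoint and generate a response.
# The prompt format is an assumption (Alpaca-style instruction prompt).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "datatune/llama-7b-logicot"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "If all birds can fly and penguins are birds, can penguins fly?\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Strip the prompt tokens and decode only the newly generated text.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```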