
Code-Mistral-7B

This model is trained on a refined version of my dataset, Code-290k-ShareGPT.

Besides this, it is trained on the following datasets:

Code-Feedback

orca-math-word-problems-200k

Openhermes

The idea was to check how this model performs when trained on both code and maths datasets. The model is very good at coding. Maths is still hit-and-miss, but you can test it out yourself.

This model is trained on massive datasets, so the results are very good. I have used the ChatML prompt format.

Kindly note this is the QLoRA version, a rare exception for my models.

GGUF & Exllama

GGUF: Link

Exllama v2: Link

Special thanks to Bartowski for quantizing this model.
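
If you use the GGUF quants, one easy route is llama-cpp-python, which can apply the ChatML template for you. A minimal sketch, assuming you have downloaded a quant file (the filename below is a placeholder, not an actual file from the repo):

```python
# Minimal local-inference sketch with llama-cpp-python for a GGUF quant.
from llama_cpp import Llama

llm = Llama(
    model_path="Code-Mistral-7B-Q4_K_M.gguf",  # placeholder filename
    n_ctx=4096,               # context window
    chat_format="chatml",     # this model uses the ChatML prompt format
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "Write a C++ function that reverses a string."},
    ],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```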

Training:

The entire dataset was trained on 4 x A100 80GB GPUs. Training for 3 epochs took almost 33 hours. The Axolotl codebase was used for training. All data was trained on the Mistral base model.
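
Axolotl is configured through a YAML file rather than code, so purely for illustration, here is a rough peft/transformers equivalent of a QLoRA setup on the Mistral base. The hyperparameters are typical assumed values, not the actual training config:

```python
# Illustrative QLoRA setup sketch (not the actual Axolotl config used here).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # the "Q" in QLoRA: 4-bit base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # the card reports BF16 tensors
)

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",            # Mistral base, as stated above
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,  # assumed values for illustration
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trained
```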

Example Prompt: This model uses the ChatML prompt format.

<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant

You can modify the above prompt as per your requirements.
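
For inference with transformers, the tokenizer's chat template can build the ChatML prompt shown above for you. A minimal sketch, assuming the repo's tokenizer ships a ChatML chat template (otherwise, format the prompt manually as above):

```python
# Minimal inference sketch with transformers and the tokenizer's chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/Code-Mistral-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Explain matrix multiplication with a short example."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```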

I want to say a special thanks to the open-source community for helping and guiding me to better understand AI/model development.

Thank you for your love & support.

Example Output

[Example output screenshots on the model page: C++, Error Resolving, Matrices, Machine Learning]

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 69.97 |
| AI2 Reasoning Challenge (25-Shot) | 64.59 |
| HellaSwag (10-Shot)               | 85.29 |
| MMLU (5-Shot)                     | 65.00 |
| TruthfulQA (0-shot)               | 54.64 |
| Winogrande (5-shot)               | 82.24 |
| GSM8k (5-shot)                    | 68.08 |