
## Introduction

This model is trained with Masked Thought Fine-Tuning (MFT), a simple variant of standard Supervised Fine-Tuning (SFT) that randomly masks a fraction of the tokens in the reasoning steps during training. See our code and paper below.
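For intuition, here is a minimal sketch of the masking step. The mask probability `p`, the `thought_mask` marking reasoning-step positions, and the dedicated mask token id are illustrative assumptions, not values taken from this card; refer to the paper and code for the exact procedure.

```python
import torch

def mft_mask_inputs(input_ids: torch.Tensor,
                    thought_mask: torch.Tensor,
                    mask_token_id: int,
                    p: float = 0.2):
    """Return (masked_inputs, labels) for one batch.

    A fraction ``p`` of the tokens flagged by ``thought_mask`` (the
    reasoning-step positions) is replaced by ``mask_token_id`` in the
    inputs only; the labels keep the original tokens, so the usual SFT
    cross-entropy loss is still computed against the true text.
    """
    labels = input_ids.clone()                      # targets stay unmasked
    masked_inputs = input_ids.clone()
    drop = (torch.rand(input_ids.shape) < p) & thought_mask.bool()
    masked_inputs[drop] = mask_token_id             # corrupt only thought tokens
    return masked_inputs, labels
```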

## Links

## Results

We evaluate the model with the evaluation scripts provided by MetaMath.

| Model | GSM8K (%) | MATH (%) |
|---|---|---|
| adalaw/MetaMath-Mistral-7B-MFT | 79.90 | 29.0 |
| meta-math/MetaMath-Mistral-7B-SFT | 77.70 | 28.2 |
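
For reference, a minimal generation sketch with the `transformers` library. The Alpaca-style prompt template is an assumption carried over from MetaMath's usage, and the question is a made-up example; adjust both if your evaluation scripts differ.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "adalaw/MetaMath-Mistral-7B-MFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Alpaca-style template commonly used with MetaMath models (assumption).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nJanet has 3 apples and buys 2 more. "
    "How many apples does she have?\n\n### Response:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```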