
Tsunemoto GGUFs of mistral-ft-optimized-1218

This is a GGUF quantization of mistral-ft-optimized-1218.
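As a rough usage sketch, a GGUF file like this can be loaded with llama-cpp-python; the filename below is a placeholder, so substitute whichever quantization level you actually download:

```python
# Minimal sketch: running the GGUF quantization with llama-cpp-python.
# The model filename is hypothetical; use the quantized file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-ft-optimized-1218.Q4_K_M.gguf",  # placeholder filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; set 0 for CPU-only
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF quantization is."}],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```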

Original Repo Link:

Original Repository

Original Model Card:


This model is intended to be a strong base suitable for downstream fine-tuning on a variety of tasks. Based on our internal evaluations, we believe it is one of the strongest models for most downstream tasks. You can read more about our development and evaluation process here.
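For downstream fine-tuning you would typically start from the original full-precision checkpoint rather than the GGUF files. A minimal sketch with transformers follows; the repository id is an assumption based on the model name, so check the original repository link above:

```python
# Minimal sketch: loading the original full-precision checkpoint as a fine-tuning base.
# The repository id is an assumption; verify it against the original repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OpenPipe/mistral-ft-optimized-1218"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to reduce memory use
    device_map="auto",
)
# From here, fine-tune as usual (e.g., with the Trainer API or PEFT/LoRA adapters).
```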

Model size: 7.24B params
Architecture: llama