
GGUF versions of the following model: https://huggingface.co/mridul3301/BioMistral-7B-finetuned

Three precision formats are provided:

  1. fp8
  2. fp16
  3. fp32
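As a rough guide to which file fits in memory, the on-disk size of each export can be estimated from the parameter count (7.24B) times the bytes per parameter. This is a back-of-the-envelope sketch; real GGUF files also carry metadata, so treat these as lower bounds.

```python
# Approximate on-disk size of each export of a 7.24B-parameter model.
PARAMS = 7.24e9

def approx_size_gb(bits_per_param: float) -> float:
    """Lower-bound file size in GB for a given weight precision."""
    return PARAMS * bits_per_param / 8 / 1e9

for name, bits in [("fp8", 8), ("fp16", 16), ("fp32", 32)]:
    print(f"{name}: ~{approx_size_gb(bits):.1f} GB")
# fp8:  ~7.2 GB
# fp16: ~14.5 GB
# fp32: ~29.0 GB
```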

The original safetensors weights were converted to GGUF for CPU inference with llama.cpp.
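A minimal sketch of CPU inference with the `llama-cpp-python` bindings. The GGUF filename, prompt template, and generation parameters are assumptions for illustration, not taken from this repository; adapt them to the file you download and to the fine-tune's actual prompt format.

```python
def build_prompt(question: str) -> str:
    # Simple instruction-style prompt; adjust to match the fine-tune's template.
    return f"Question: {question}\nAnswer:"

if __name__ == "__main__":
    # Requires `pip install llama-cpp-python` and a downloaded GGUF file;
    # the model_path below is a hypothetical filename.
    from llama_cpp import Llama

    llm = Llama(
        model_path="BioMistral-7B-finetuned.fp16.gguf",
        n_ctx=2048,   # context window
        n_threads=8,  # CPU threads
    )
    out = llm(
        build_prompt("What are common symptoms of anemia?"),
        max_tokens=128,
        stop=["Question:"],
    )
    print(out["choices"][0]["text"].strip())
```

The fp16 file needs roughly 15 GB of RAM to load; on machines with less memory, the fp8 export is the practical choice.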

Model details:

  - Format: GGUF
  - Model size: 7.24B params
  - Architecture: llama