The emissions-extraction-lora adapter merged with mistralai/Mistral-7B-Instruct-v0.2, converted to GGUF format and quantized. Can be used with llama.cpp.
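As a minimal sketch of loading the quantized model with llama-cpp-python (the llama.cpp Python bindings): the GGUF filename and the prompt below are assumptions, check the repository files for the exact name of the 5-bit quantized file.

```python
# Minimal sketch, assuming the 5-bit quantized file is named as below
# (the exact filename may differ; see the repository file listing).
from llama_cpp import Llama

llm = Llama(
    model_path="emissions-extraction-lora-merged.Q5_K_M.gguf",  # assumed filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

# Mistral-Instruct-style chat completion; the prompt is a placeholder.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Extract the emissions figures from the following report: ..."}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```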

Model size: 7.24B params
Architecture: llama
Format: GGUF
Quantization: 5-bit
