emissions-extraction-lora merged with mistralai/Mistral-7B-Instruct-v0.2, converted to GGUF format and quantized. It can be used with llama.cpp (see the example sketch below).
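
A minimal sketch of loading the quantized model with the llama-cpp-python bindings. The GGUF file name, generation parameters, and prompt text are assumptions for illustration; substitute the quantized file from this repository and your own report excerpt.

```python
# Minimal sketch using the llama-cpp-python bindings (pip install llama-cpp-python).
# The GGUF file name below is an assumption; use the quantized file from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="emissions-extraction-lora-merged.Q5_K_M.gguf",  # assumed file name
    n_ctx=4096,        # context window; adjust to fit the report excerpt
    n_gpu_layers=-1,   # offload all layers if llama.cpp was built with GPU support
)

# Mistral-Instruct models expect the [INST] ... [/INST] prompt format.
# The task wording below is an assumption based on the model's name.
prompt = "[INST] Extract the scope 1, 2 and 3 emissions from the following report excerpt: ... [/INST]"

output = llm(prompt, max_tokens=512, temperature=0.0)
print(output["choices"][0]["text"])
```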

- Format: GGUF (5-bit quantization)
- Model size: 7.24B params
- Architecture: llama
