
Model Details

The 7B Mistral model from GenAudit, served in Q4_K_S and F16 GGUF formats.
Merged and quantised with Unsloth.AI.

Model Description

Inspired by this paper: https://genaudit.org/
Original Code here: https://github.com/kukrishna/genaudit

Converted to GGUF format to run on Ollama/llama.cpp, which can offload layers that do not fit in VRAM to system RAM (something Hugging Face Transformers cannot currently do).
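As a minimal sketch, the quantised GGUF could be loaded into Ollama with a Modelfile along these lines; the file name `genaudit-mistral-7b.Q4_K_S.gguf` is an assumption, not the actual artifact name:

```
# Hypothetical Modelfile; point FROM at the downloaded GGUF file
FROM ./genaudit-mistral-7b.Q4_K_S.gguf

# Mistral v0.1 instruct uses the [INST] ... [/INST] prompt template
TEMPLATE """[INST] {{ .Prompt }} [/INST]"""

# Low temperature: fact-checking output should be near-deterministic
PARAMETER temperature 0
```

Then `ollama create genaudit -f Modelfile` followed by `ollama run genaudit` would serve the model locally.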

Merged the base mistral_v0.1_instruct with the QLoRA adapter and quantised to Q4_K_S GGUF format.
You may find the base 16-bit model here, but further quantisation is advisable, since the QLoRA module was fine-tuned on the 4-bit NF4 base LLM.
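As a rough back-of-the-envelope sketch (not an exact figure), the on-disk size difference between the F16 and Q4_K_S files can be estimated from the parameter count reported on this card; the ~4.5 bits/weight effective rate for Q4_K_S is an approximation that ignores metadata overhead:

```python
def gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough on-disk size of a GGUF file, ignoring metadata overhead."""
    return n_params * bits_per_weight / 8 / 1e9

N_PARAMS = 7.24e9  # parameter count reported on this card

f16_gb = gguf_size_gb(N_PARAMS, 16.0)  # full-precision copy
q4_gb = gguf_size_gb(N_PARAMS, 4.5)    # Q4_K_S averages roughly 4.5 bits/weight

print(f"F16: ~{f16_gb:.1f} GB, Q4_K_S: ~{q4_gb:.1f} GB")
```

This is why the Q4_K_S file fits comfortably on consumer GPUs where the F16 copy would not.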

Developed by: Nuode Chen
Fine-tuned from model: Mistral_V0.1_instruct

Model Sources

Uses

For evaluating abstractive summaries produced by an LLM against a source article.
The tool extracts evidence supporting each sentence of the summary and, where applicable, suggests edits to correct factual errors.
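As an illustrative sketch only, evidence-and-edit output of this shape could be post-processed as below; the JSON field names (`sentence`, `evidence`, `revised`) are hypothetical and do not reflect GenAudit's actual output schema:

```python
import json

# Hypothetical GenAudit-style response: for each summary sentence, the
# indices of supporting source sentences and an optional correction.
raw = json.dumps([
    {"sentence": "Revenue rose 10% in 2023.",
     "evidence": [2], "revised": "Revenue rose 8% in 2023."},
    {"sentence": "The CEO announced a buyback.",
     "evidence": [5], "revised": None},
])

def apply_edits(response_json: str) -> list[str]:
    """Return the summary with any suggested corrections applied."""
    out = []
    for item in json.loads(response_json):
        # Keep the original sentence unless a revision was proposed
        out.append(item["revised"] if item["revised"] else item["sentence"])
    return out

print(apply_edits(raw))
```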

Refer to original paper for more in-depth information.

Format: GGUF
Model size: 7.24B params
Architecture: llama