
NOTE: DO NOT USE THIS WITH ALPACA ELECTRON. USE WITH LLAMA.CPP IN INTERACTIVE MODE BY ADDING THE -i FLAG; THIS MODEL USES A UNIQUE CONVERSATION TEMPLATE.

MrEagle-LoRA is a large language model (LLM) based on Llama-2-7B-Chat, created by fine-tuning a LoRA adapter on a dataset of 2,126 single-turn conversations artificially generated with GPT-4. The conversations were designed to match the tone of the Discord bot Mr. Eagle.

We recommend running the largest quantization that fits on your device. The Q4_K_M model offers a good balance of quality and speed.
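A minimal sketch of launching the recommended quant with llama.cpp in interactive mode, per the note above (the GGUF filename is an assumption; substitute the file you actually downloaded):

```shell
# Run the model with llama.cpp in interactive mode (-i), as this model's
# conversation template requires. Filename below is hypothetical.
./main -m ./mreagle-lora.Q4_K_M.gguf -i --color -c 2048
```

`-m` points at the GGUF file, `-i` enables interactive mode, and `-c` sets the context window size.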

Model details

- Format: GGUF
- Model size: 6.74B params
- Architecture: llama
- Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
- Downloads last month: 255
