

QuantFactory/Replete-LLM-V2.5-Qwen-7b-GGUF

This is a quantized version of Replete-AI/Replete-LLM-V2.5-Qwen-7b, created using llama.cpp.
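For reference, GGUF quants like these are typically produced with llama.cpp's conversion and quantization tools. The commands below are an illustrative sketch only; the paths, output names, and quant type (Q4_K_M) are assumptions, and the exact settings QuantFactory used are not published.

```shell
# Convert the Hugging Face checkpoint to a full-precision GGUF file,
# then quantize it to one of the supported bit widths (here Q4_K_M).
python convert_hf_to_gguf.py ./Replete-LLM-V2.5-Qwen-7b \
  --outfile replete-llm-v2.5-qwen-7b-f16.gguf

./llama-quantize replete-llm-v2.5-qwen-7b-f16.gguf \
  replete-llm-v2.5-qwen-7b-Q4_K_M.gguf Q4_K_M
```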

Original Model Card

Replete-LLM-V2.5-Qwen-7b


Replete-LLM-V2.5-Qwen-7b is a continuously finetuned version of Qwen2.5-7B. I noticed that the Qwen team did not adopt my continuous-finetuning methods, despite their great benefits and lack of downsides. So I took it upon myself to merge the instruct model with the base model using the TIES merge method.
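A TIES merge like the one described is commonly done with mergekit. The config below is a minimal sketch under assumed settings (the weight, density, and dtype values are illustrative; the author's actual config is not published here):

```yaml
# Hypothetical mergekit config: TIES-merge the instruct model
# back onto the Qwen2.5-7B base.
models:
  - model: Qwen/Qwen2.5-7B-Instruct
    parameters:
      weight: 1
      density: 1
merge_method: ties
base_model: Qwen/Qwen2.5-7B
parameters:
  normalize: true
dtype: bfloat16
```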

This version of the model shows higher performance than the original instruct and base models.

Quants:

GGUF: https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF

Benchmarks: (Coming soon)

Format: GGUF
Model size: 7.62B params
Architecture: qwen2

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit


Base model: Qwen/Qwen2.5-7B