
QuantFactory/TwinLlama-3.1-8B-GGUF

This is a quantized version of mlabonne/TwinLlama-3.1-8B, created using llama.cpp.
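To run a GGUF file from this repo locally, you can download it and load it with llama.cpp. A minimal sketch, assuming the `Q4_K_M` quantization; the exact filename is an assumption here, so check the repo's file listing for the actual names:

```shell
# Download one quantized file from the repo (filename assumed; verify in the repo).
huggingface-cli download QuantFactory/TwinLlama-3.1-8B-GGUF \
  TwinLlama-3.1-8B.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp's CLI.
llama-cli -m TwinLlama-3.1-8B.Q4_K_M.gguf \
  -p "Write a short paragraph about LLM twins:" -n 256
```

Lower-bit files are smaller and faster but lose more quality; the 4-bit and 5-bit variants are a common middle ground.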

Original Model Card


👥 TwinLlama-3.1-8B

TwinLlama-3.1-8B is a model created for the LLM Engineer's Handbook, trained on mlabonne/llmtwin.

It is designed to act as a digital twin: a clone of myself and my co-authors (Paul Iusztin and Alex Vesa) that imitates our writing style and draws knowledge from our articles.


This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Downloads last month: 27
Format: GGUF
Model size: 8.03B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
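The bit width roughly determines file size: a dense estimate is parameters times bits per weight, divided by 8 bytes. Real llama.cpp quantization schemes mix bit widths and add metadata, so actual GGUF files differ somewhat; this sketch only gives ballpark figures from the 8.03B parameter count above:

```python
# Rough GGUF size estimate: params * bits_per_weight / 8 bytes.
# Actual llama.cpp quants (e.g. Q4_K_M) mix precisions, so treat
# these as ballpark figures, not exact file sizes.
PARAMS = 8.03e9  # parameter count from the model card


def approx_size_gb(bits: int, params: float = PARAMS) -> float:
    """Approximate on-disk size in GB for a uniform bit width."""
    return params * bits / 8 / 1e9


for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gb(bits):.1f} GB")
```

By this estimate the files range from roughly 2 GB (2-bit) to about 8 GB (8-bit), which is why lower-bit variants are popular on memory-constrained machines.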


Dataset used to train QuantFactory/TwinLlama-3.1-8B-GGUF: mlabonne/llmtwin