Quantized version of: trillionlabs/Trillion-7B-preview

Note

I've forced llama.cpp to use the llama-bpe tokenizer, as the checksum of the model's original tokenizer was not present in the converter code. The model has produced meaningful output in English and Korean (the two languages I tested).
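For context, llama.cpp's converter (`convert_hf_to_gguf.py`) identifies a model's pre-tokenizer by a checksum of its tokenizer config and aborts when the checksum is unknown. The workaround described above amounts to mapping the unknown checksum to the existing `llama-bpe` type. A minimal sketch of that idea, with a placeholder checksum table (not the real entries from the converter):

```python
# Sketch of the tokenizer override described in the note above.
# The converter normally looks up the tokenizer checksum ("chkhsh") in a
# known table and raises on a miss; here we fall back to "llama-bpe" instead.
# The table contents below are placeholders, not the converter's real hashes.
KNOWN_PRE_TOKENIZERS = {
    "example-checksum-for-llama3": "llama-bpe",
    "example-checksum-for-gpt2": "gpt-2",
}

def get_vocab_base_pre(chkhsh: str) -> str:
    """Return the pre-tokenizer type for a tokenizer checksum.

    Unknown checksums are forced to "llama-bpe" rather than raising,
    mirroring the workaround used for this quantization.
    """
    return KNOWN_PRE_TOKENIZERS.get(chkhsh, "llama-bpe")
```

Forcing an unrecognized tokenizer this way can silently mis-tokenize text, which is why it is worth sanity-checking the output in the model's supported languages, as done here.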

'Make knowledge free for everyone'

Buy Me a Coffee at ko-fi.com

Format: GGUF
Model size: 7.53B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
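A rough way to estimate the download size of each quantization level from the 7.53B parameter count (real GGUF files differ somewhat, since quants mix bit widths per tensor and add metadata):

```python
# Approximate file size per quantization level for a 7.53B-parameter model.
# This ignores GGUF metadata and per-tensor mixed precision, so treat the
# numbers as ballpark figures, not exact file sizes.
PARAMS = 7.53e9

def approx_size_gb(bits: int) -> float:
    """Approximate model file size in GB at a uniform bits-per-weight."""
    return PARAMS * bits / 8 / 1e9

for bits in (2, 3, 4, 5, 6, 8, 16):
    print(f"{bits}-bit: ~{approx_size_gb(bits):.2f} GB")
```

For example, the 8-bit quant works out to roughly 7.5 GB, and the 16-bit file to roughly 15 GB, before metadata overhead.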


Model tree for DevQuasar/trillionlabs.Trillion-7B-preview-GGUF
