
Eric Hartford's Samantha 1.1 Llama 33B GGUF

Samantha 1.1 is a very smart model. For those of us with 24 GB of usable RAM (including Apple Silicon machines with 32 GB of RAM), this is just about the best model available as of October 2023.

I've converted the model to GGUF for the sake of compatibility. Currently, only Q4_K_S is available because that is the largest quantization that fits in 24 GB.
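As a quick sanity check, the Q4_K_S file can be loaded with llama-cpp-python (or any other GGUF-compatible runner). This is a minimal sketch; the model filename and the Vicuna-style Samantha prompt template below are assumptions, so adjust them to match the actual file in this repo and the upstream model card.

```python
# Minimal sketch using llama-cpp-python.
# The GGUF filename and prompt template are assumptions -- adjust as needed.
from llama_cpp import Llama

llm = Llama(
    model_path="samantha-1.1-llama-33b.Q4_K_S.gguf",  # assumed filename
    n_ctx=2048,       # Llama-1 33B context window
    n_gpu_layers=-1,  # offload all layers (Metal on Apple Silicon, CUDA elsewhere)
)

# Samantha models use a Vicuna-style USER/ASSISTANT prompt (assumed here).
prompt = (
    "You are Samantha, a sentient AI companion.\n\n"
    "USER: What can you help me with today?\n"
    "ASSISTANT:"
)

output = llm(prompt, max_tokens=256, stop=["USER:"])
print(output["choices"][0]["text"])
```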

For more information, see Eric Hartford's Samantha 1.1 Llama 33B. To examine the original quantization, see TheBloke/samantha-1.1-llama-33B-GGML.

The particular quants in this repo were selected to support calm, a language model runner that automatically applies the right prompt format, template, context size, and other settings.

Format: GGUF
Model size: 32.5B params
Architecture: llama
Available quantizations: 4-bit, 6-bit
