
vihangd/smartyplats-1.1b-v1-GGUF

Quantized GGUF model files for smartyplats-1.1b-v1 from vihangd

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| smartyplats-1.1b-v1.q2_k.gguf | q2_k | 482.14 MB |
| smartyplats-1.1b-v1.q3_k_m.gguf | q3_k_m | 549.85 MB |
| smartyplats-1.1b-v1.q4_k_m.gguf | q4_k_m | 667.81 MB |
| smartyplats-1.1b-v1.q5_k_m.gguf | q5_k_m | 782.04 MB |
| smartyplats-1.1b-v1.q6_k.gguf | q6_k | 903.41 MB |
| smartyplats-1.1b-v1.q8_0.gguf | q8_0 | 1.17 GB |
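
As a usage sketch (not part of the original card), the snippet below downloads the q4_k_m file listed above and runs it with the llama-cpp-python bindings. The repo id and file name come from this card; the context size and sampling settings are illustrative placeholders.

```python
# Sketch: run one of the quantized files with llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface_hub`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the 4-bit quant from the file table above (~668 MB).
model_path = hf_hub_download(
    repo_id="afrideva/smartyplats-1.1b-v1-GGUF",
    filename="smartyplats-1.1b-v1.q4_k_m.gguf",
)

# Load the GGUF model; n_ctx is an illustrative context size.
llm = Llama(model_path=model_path, n_ctx=2048)

# Simple completion call; sampling parameters are placeholders.
out = llm(
    "### Instruction:\nName three uses for a paperclip.\n\n### Response:\n",
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```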

Original Model Card:

SmartyPlats-1.1b V1

An experimental fine-tune of TinyLLaMA 1T using QLoRA

Datasets

Trained on Alpaca-style datasets

Prompt Template

Uses the Alpaca-style prompt template.
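
The card names the template style but does not reproduce it, so the helper below sketches the standard Alpaca format; the exact wording used during fine-tuning is an assumption.

```python
# Sketch of the standard Alpaca prompt format; the precise preamble text used
# for this fine-tune is assumed, since the card only names the template style.
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(build_alpaca_prompt("Summarize what a GGUF file is in one sentence."))
```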