
QuantFactory/Fox-1-1.6B-GGUF

This is a quantized (GGUF) version of TensorOpera's Fox-1-1.6B, created using llama.cpp.

Model Card for Fox-1-1.6B

This is a base pretrained model that requires further fine-tuning for most use cases. An instruction-tuned version will be released soon.

Fox-1 is a decoder-only transformer-based small language model (SLM) with 1.6B total parameters, developed by TensorOpera AI. The model was trained with a 3-stage data curriculum on 3 trillion tokens of text and code data at an 8K sequence length. Fox-1 uses grouped query attention (GQA) with 4 KV heads and 16 attention heads, and has a deeper architecture than other SLMs.

For full details of this model, please read our release blog post.
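
The following is a minimal inference sketch showing how a GGUF build like this one is typically loaded with llama-cpp-python. The local filename and the generation settings below are hypothetical examples, not part of this release; substitute the quantization file you actually downloaded.

```python
# Minimal sketch: run a GGUF build of Fox-1-1.6B with llama-cpp-python
# (pip install llama-cpp-python). The filename below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="./fox-1-1.6b.Q4_K_M.gguf",  # hypothetical local path to a GGUF file
    n_ctx=8192,   # Fox-1 was trained with an 8K sequence length
    n_threads=8,  # adjust to your CPU
)

# Fox-1-1.6B is a base model, so plain text completion (no chat template) is used here.
output = llm(
    "Small language models are useful because",
    max_tokens=64,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```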

Format: GGUF
Model size: 1.67B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit (see the download sketch below).
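
A quantization level is fetched as a single GGUF file from the Hub. Here is a sketch using huggingface_hub; the filename is a hypothetical example of common GGUF naming, so check the repository's file list for the exact name of the quant you want.

```python
# Sketch: download one quantization of Fox-1-1.6B-GGUF from the Hub
# (pip install huggingface_hub). The filename is hypothetical; verify it
# against the actual file list in QuantFactory/Fox-1-1.6B-GGUF.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="QuantFactory/Fox-1-1.6B-GGUF",
    filename="Fox-1-1.6B.Q4_K_M.gguf",  # hypothetical; pick the quant level you need
)
print(gguf_path)  # local cache path, usable as model_path for llama.cpp
```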

