---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# Model Card for Fox-1-1.6B
This is a base pretrained model that requires further fine-tuning for most use cases. We will release an instruction-tuned version soon.
Fox-1 is a decoder-only, transformer-based small language model (SLM) with 1.6B total parameters, developed by TensorOpera AI. The model was trained with a 3-stage data curriculum on 3 trillion tokens of text and code data at an 8K sequence length. Fox-1 uses grouped-query attention (GQA) with 4 key-value heads and 16 attention heads, and it has a deeper architecture than other SLMs.
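To see the head grouping in practice, the sketch below inspects the attention configuration via `transformers`. Note this is illustrative only: the repository id `tensoropera/Fox-1-1.6B` and the Llama-style attribute names (`num_attention_heads`, `num_key_value_heads`) are assumptions, not confirmed by this card.

```python
from transformers import AutoConfig

# Hypothetical repo id -- adjust to the actual Hugging Face repository.
config = AutoConfig.from_pretrained("tensoropera/Fox-1-1.6B")

# With GQA, several query heads share one key-value head.
# Assuming Llama-style config attribute names:
q_heads = config.num_attention_heads    # expected: 16 query heads
kv_heads = config.num_key_value_heads   # expected: 4 KV heads
print(f"{q_heads} query heads / {kv_heads} KV heads "
      f"-> {q_heads // kv_heads} query heads per KV group")
```

With 16 query heads sharing 4 KV heads, each KV head serves a group of 4 query heads, which shrinks the KV cache relative to standard multi-head attention.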
For the full details of this model, please read our release blog post.
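Below is a minimal text-generation sketch. Since this is a base (non-instruct) model, prompt it for plain completion rather than chat. The repository id `tensoropera/Fox-1-1.6B` is an assumption; substitute the actual Hugging Face repo if it differs.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id -- substitute the actual Hugging Face repository.
model_id = "tensoropera/Fox-1-1.6B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",
)

# Base model: use plain text completion, not a chat template.
prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```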