

TinyKAI 3B is a fine-tuned LLM (Large Language Model) based on OpenLlama 3B v2. The TinyKAI models are a series of lightweight LLMs under 5 billion parameters, intended primarily for research use.

Direct Use

TinyKAI 3B is optimal for research on large language models, specifically the influence of web data on the properties of large language models (fairness, safety, limitations, capabilities, etc.).


This model was trained on a mixture of the Falcon RefinedWeb dataset, the StarCoder dataset, and the Wikipedia, arXiv, book, and StackExchange portions of the RedPajama dataset.

Banned Use

Production use without adequate assessment of risks and mitigations; any use case that may be considered irresponsible, harmful, or insulting to any person or group. TinyKAI-3B is governed by the Apache 2.0 license, so any use the license deems unacceptable is not allowed. We specifically ban the use of ANY AND ALL KAI MODELS for hate speech toward a particular person or group, for legal and ethical reasons.


TinyKAI 3B is trained on English data only and will not generate appropriate content in other languages. Having been trained on a representative sample of the web, it carries the stereotypes and biases commonly encountered online.


We recommend that users of TinyKAI 3B fine-tune it for their own use case, and that precautions be taken before any commercial use.
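For readers considering fine-tuning, a minimal sketch using the transformers Trainer API is below. The `build_trainer` helper, dataset handling, and all hyperparameters are illustrative assumptions, not settings recommended by this card.

```python
# Hypothetical fine-tuning sketch for TinyKAI 3B. Hyperparameters are
# placeholders; `train_dataset` is assumed to be an already-tokenized
# dataset with `input_ids` (and optionally `attention_mask`) columns.

MODEL_NAME = "Keynote-Technology/TinyKAI-3B-v0.1"

def build_trainer(train_dataset, output_dir: str = "tinykai-3b-finetuned"):
    """Construct a Trainer for causal-LM fine-tuning on a tokenized dataset."""
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    # mlm=False: plain causal-LM objective (labels are shifted inputs).
    collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
    args = TrainingArguments(
        output_dir=output_dir,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-5,
    )
    return Trainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        data_collator=collator,
    )
```

Calling `build_trainer(dataset).train()` would then run the fine-tuning loop; a 3B-parameter model will need a GPU with substantial memory or additional memory-saving techniques.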


This model targets an older version of transformers, v4.10.0, and may therefore be unstable on newer releases.
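A minimal generation sketch is below, assuming the model loads through the standard transformers auto classes; the `generate` helper and the prompt are hypothetical, and pinning transformers to the version mentioned above is an assumption about compatibility, not a tested requirement.

```python
# Hypothetical usage sketch for TinyKAI 3B.
# Assumed environment: pip install "transformers==4.10.0" torch

MODEL_NAME = "Keynote-Technology/TinyKAI-3B-v0.1"

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Load the model lazily and return a text completion for `prompt`."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

The first call downloads roughly 3.43B parameters of weights, so expect a large initial download and several gigabytes of RAM or VRAM at inference time.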

Model size: 3.43B parameters
