|
--- |
|
base_model: Keynote-Technology/TinyKAI-3B-beta |
|
inference: false |
|
license: apache-2.0 |
|
model_creator: Keynote-Technology |
|
model_name: TinyKAI-3B-beta |
|
pipeline_tag: text-generation |
|
quantized_by: afrideva |
|
tags: |
|
- gguf |
|
- ggml |
|
- quantized |
|
- q2_k |
|
- q3_k_m |
|
- q4_k_m |
|
- q5_k_m |
|
- q6_k |
|
- q8_0 |
|
--- |
|
# Keynote-Technology/TinyKAI-3B-beta-GGUF |
|
|
|
Quantized GGUF model files for [TinyKAI-3B-beta](https://huggingface.co/Keynote-Technology/TinyKAI-3B-beta) from [Keynote-Technology](https://huggingface.co/Keynote-Technology).
|
|
|
|
|
| Name | Quant method | Size | |
|
| ---- | ---- | ---- | |
|
| [tinykai-3b-beta.q2_k.gguf](https://huggingface.co/afrideva/TinyKAI-3B-beta-GGUF/resolve/main/tinykai-3b-beta.q2_k.gguf) | q2_k | 2.15 GB | |
|
| [tinykai-3b-beta.q3_k_m.gguf](https://huggingface.co/afrideva/TinyKAI-3B-beta-GGUF/resolve/main/tinykai-3b-beta.q3_k_m.gguf) | q3_k_m | 2.27 GB | |
|
| [tinykai-3b-beta.q4_k_m.gguf](https://huggingface.co/afrideva/TinyKAI-3B-beta-GGUF/resolve/main/tinykai-3b-beta.q4_k_m.gguf) | q4_k_m | 2.58 GB | |
|
| [tinykai-3b-beta.q5_k_m.gguf](https://huggingface.co/afrideva/TinyKAI-3B-beta-GGUF/resolve/main/tinykai-3b-beta.q5_k_m.gguf) | q5_k_m | 2.76 GB | |
|
| [tinykai-3b-beta.q6_k.gguf](https://huggingface.co/afrideva/TinyKAI-3B-beta-GGUF/resolve/main/tinykai-3b-beta.q6_k.gguf) | q6_k | 3.64 GB | |
|
| [tinykai-3b-beta.q8_0.gguf](https://huggingface.co/afrideva/TinyKAI-3B-beta-GGUF/resolve/main/tinykai-3b-beta.q8_0.gguf) | q8_0 | 3.64 GB | |
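
As a quick-start sketch (not part of the original card), the snippet below downloads one quant from this repo with `huggingface_hub` and runs it with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the q4_k_m choice, context size, and prompt are illustrative assumptions.

```python
# Minimal sketch: fetch one quant from this repo and run a prompt with
# llama-cpp-python. Assumes `pip install llama-cpp-python huggingface_hub`;
# the q4_k_m pick, n_ctx, and prompt below are illustrative choices only.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="afrideva/TinyKAI-3B-beta-GGUF",
    filename="tinykai-3b-beta.q4_k_m.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)

output = llm(
    "Q: What is a large language model?\nA:",
    max_tokens=128,
    stop=["Q:"],
)
print(output["choices"][0]["text"])
```

As a general rule of thumb for k-quants, lower quants (q2_k, q3_k_m) trade quality for memory, while q5_k_m and above stay closer to the original weights; q4_k_m is a common middle ground.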
|
|
|
|
|
|
|
## Original Model Card: |
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6500c7c912c1442d994c36e5/rav-pUPtWF-u-_A4k9RaZ.png) |
|
|
|
TinyKAI 3B is a fine-tuned LLM (Large Language Model) based on [OpenLlama 3B v2](https://huggingface.co/openlm-research/open_llama_3b_v2).
|
The TinyKAI models are a series of lightweight LLMs, each under 5 billion parameters, intended primarily for research use.
|
|
|
## Direct Use |
|
TinyKAI 3B is best suited for research on large language models, specifically on how web data influences their properties (fairness, safety, limitations, capabilities, etc.).
|
|
|
## Training |
|
This model was trained on a mixture of the [Falcon refined-web dataset](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata), and the Wikipedia, arXiv, book, and StackExchange portions of the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T).
|
|
|
## Banned Use |
|
Production use without an adequate assessment of risks and mitigations, and any use case that may be considered irresponsible, harmful, or insulting to any person or group.
|
|
|
## Limitations |
|
TinyKAI 3B was trained on English data only and will not generate appropriate content in other languages. Because it was trained on data representative of the web, it carries the stereotypes and biases commonly encountered online.
|
|
|
## Recommendations |
|
We recommend that users of TinyKAI 3B consider fine-tuning it for their specific use case and take appropriate precautions before any commercial use.
|
|
|
## WARNING! |
|
This model was built against an older version of transformers (v4.10.0) and may therefore be unstable.
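
As one way to surface this at load time, the check below (a hypothetical guard, not part of the original card) warns when the installed transformers version differs from the v4.10.0 release mentioned above:

```python
# Hypothetical guard: warn if the installed transformers version differs
# from the v4.10.0 release the original model card reports targeting.
import transformers

EXPECTED = "4.10.0"
if transformers.__version__ != EXPECTED:
    print(
        f"Warning: transformers {transformers.__version__} installed; "
        f"TinyKAI-3B-beta was reported against v{EXPECTED} and may be unstable."
    )
```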