---
library_name: transformers
widget:
- messages:
- role: system
content: Anda adalah seorang konselor karir. User akan memberi Anda seorang individu
mencari bimbingan dalam kehidupan profesional mereka, dan tugas Anda adalah
membantu mereka dalam menentukan karir apa yang paling cocok bagi mereka berdasarkan
keterampilan mereka, minat, dan pengalaman. Anda juga harus melakukan penelitian
terhadap berbagai hal tersebut pilihan yang tersedia, jelaskan tren pasar kerja
di berbagai industri, Dan saran tentang kualifikasi mana yang akan bermanfaat
untuk mengejar bidang tertentu.
- role: user
content: Hellow!
- role: assistant
content: Hai! Apa yang bisa saya bantu?
- role: user
content: Saya tertarik untuk mengembangkan karir di bidang rekayasa perangkat
lunak. Apa Anda mau merekomendasikan saya untuk melakukannya?
- messages:
- role: system
content: Anda adalah asisten yang berpengetahuan luas. Bantu user sebanyak yang
Anda bisa.
- role: user
content: Bagaimana caranya menjadi lebih sehat?
- messages:
- role: system
content: Anda adalah asisten yang membantu dan memberikan tanggapan yang cerdas.
- role: user
content: Haloooo Bund!
- role: assistant
content: Halo! Apa yang bisa saya bantu?
- role: user
content: Saya perlu membangun situs web sederhana. Di mana saya harus mulai belajar
tentang pengembangan web?
- messages:
- role: system
content: Anda adalah asisten yang sangat kreatif. Pengguna akan memberi Anda tugas,
yang harus Anda selesaikan dengan seluruh pengetahuan Anda.
- role: user
content: Tulis latar belakang cerita game RPG tentang penyihir dan naga di dunia
fiksi ilmiah.
inference:
parameters:
max_new_tokens: 128
penalty_alpha: 0.5
top_k: 4
pipeline_tag: text-generation
tags:
- conversational
- convAI
- TensorBlock
- GGUF
license: apache-2.0
language:
- id
- en
datasets:
- FreedomIntelligence/evol-instruct-indonesian
base_model: kalisai/Nusantara-0.8b-Indo-Chat
---
## kalisai/Nusantara-0.8b-Indo-Chat - GGUF
This repo contains GGUF format model files for [kalisai/Nusantara-0.8b-Indo-Chat](https://huggingface.co/kalisai/Nusantara-0.8b-Indo-Chat).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4242](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
## Prompt template
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
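This is the standard ChatML layout, so recent llama.cpp builds can apply it for you. The snippet below is a minimal sketch, assuming a llama.cpp build that provides `llama-cli` with conversation mode and named chat templates (flag names can vary between versions); the model path and system prompt are placeholders:
```shell
# Hypothetical local path - substitute whichever quant file you downloaded.
# In conversation mode (-cnv), -p is treated as the system prompt in recent builds.
./llama-cli -m ./Nusantara-0.8b-Indo-Chat-Q4_K_M.gguf \
  --chat-template chatml \
  -cnv \
  -n 128 \
  -p "Anda adalah asisten yang membantu dan memberikan tanggapan yang cerdas."
```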
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Nusantara-0.8b-Indo-Chat-Q2_K.gguf](https://huggingface.co/tensorblock/Nusantara-0.8b-Indo-Chat-GGUF/blob/main/Nusantara-0.8b-Indo-Chat-Q2_K.gguf) | Q2_K | 0.374 GB | smallest, significant quality loss - not recommended for most purposes |
| [Nusantara-0.8b-Indo-Chat-Q3_K_S.gguf](https://huggingface.co/tensorblock/Nusantara-0.8b-Indo-Chat-GGUF/blob/main/Nusantara-0.8b-Indo-Chat-Q3_K_S.gguf) | Q3_K_S | 0.422 GB | very small, high quality loss |
| [Nusantara-0.8b-Indo-Chat-Q3_K_M.gguf](https://huggingface.co/tensorblock/Nusantara-0.8b-Indo-Chat-GGUF/blob/main/Nusantara-0.8b-Indo-Chat-Q3_K_M.gguf) | Q3_K_M | 0.449 GB | very small, high quality loss |
| [Nusantara-0.8b-Indo-Chat-Q3_K_L.gguf](https://huggingface.co/tensorblock/Nusantara-0.8b-Indo-Chat-GGUF/blob/main/Nusantara-0.8b-Indo-Chat-Q3_K_L.gguf) | Q3_K_L | 0.473 GB | small, substantial quality loss |
| [Nusantara-0.8b-Indo-Chat-Q4_0.gguf](https://huggingface.co/tensorblock/Nusantara-0.8b-Indo-Chat-GGUF/blob/main/Nusantara-0.8b-Indo-Chat-Q4_0.gguf) | Q4_0 | 0.511 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Nusantara-0.8b-Indo-Chat-Q4_K_S.gguf](https://huggingface.co/tensorblock/Nusantara-0.8b-Indo-Chat-GGUF/blob/main/Nusantara-0.8b-Indo-Chat-Q4_K_S.gguf) | Q4_K_S | 0.513 GB | small, greater quality loss |
| [Nusantara-0.8b-Indo-Chat-Q4_K_M.gguf](https://huggingface.co/tensorblock/Nusantara-0.8b-Indo-Chat-GGUF/blob/main/Nusantara-0.8b-Indo-Chat-Q4_K_M.gguf) | Q4_K_M | 0.531 GB | medium, balanced quality - recommended |
| [Nusantara-0.8b-Indo-Chat-Q5_0.gguf](https://huggingface.co/tensorblock/Nusantara-0.8b-Indo-Chat-GGUF/blob/main/Nusantara-0.8b-Indo-Chat-Q5_0.gguf) | Q5_0 | 0.595 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Nusantara-0.8b-Indo-Chat-Q5_K_S.gguf](https://huggingface.co/tensorblock/Nusantara-0.8b-Indo-Chat-GGUF/blob/main/Nusantara-0.8b-Indo-Chat-Q5_K_S.gguf) | Q5_K_S | 0.595 GB | large, low quality loss - recommended |
| [Nusantara-0.8b-Indo-Chat-Q5_K_M.gguf](https://huggingface.co/tensorblock/Nusantara-0.8b-Indo-Chat-GGUF/blob/main/Nusantara-0.8b-Indo-Chat-Q5_K_M.gguf) | Q5_K_M | 0.605 GB | large, very low quality loss - recommended |
| [Nusantara-0.8b-Indo-Chat-Q6_K.gguf](https://huggingface.co/tensorblock/Nusantara-0.8b-Indo-Chat-GGUF/blob/main/Nusantara-0.8b-Indo-Chat-Q6_K.gguf) | Q6_K | 0.684 GB | very large, extremely low quality loss |
| [Nusantara-0.8b-Indo-Chat-Q8_0.gguf](https://huggingface.co/tensorblock/Nusantara-0.8b-Indo-Chat-GGUF/blob/main/Nusantara-0.8b-Indo-Chat-Q8_0.gguf) | Q8_0 | 0.883 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions
### Command line
First, install the Hugging Face CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Nusantara-0.8b-Indo-Chat-GGUF --include "Nusantara-0.8b-Indo-Chat-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Nusantara-0.8b-Indo-Chat-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
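Once downloaded, the file can be run locally with llama.cpp. As a rough sketch, assuming a build that includes `llama-server` (the path, quant choice, port, and context size below are placeholders):
```shell
# Hypothetical: expose the Q4_K_M quant over an OpenAI-compatible HTTP API on port 8080.
./llama-server -m MY_LOCAL_DIR/Nusantara-0.8b-Indo-Chat-Q4_K_M.gguf \
  --port 8080 \
  -c 2048
```
In recent builds, the server reads the chat template stored in the GGUF metadata and applies it to `/v1/chat/completions` requests, so clients only need to send role/content messages.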