---
library_name: transformers
tags:
- unsloth
- llama3
- indonesia
license: llama3
datasets:
- catinthebag/TumpengQA
language:
- id
inference: false
---
<center>
<img src="https://imgur.com/9nG5J1T.png" alt="Kancil" width="600" height="300">
<p><em>Kancil is a fine-tuned version of Llama 3 8B, trained on a synthetic QA dataset generated with Llama 3 70B. Version zero of Kancil is the first generative Indonesian LLM to achieve functional instruction-following performance using solely synthetic data.</em></p>
<p><em><a href="https://colab.research.google.com/drive/1526QJYfk32X1CqYKX7IA_FFcIHLXbOkx?usp=sharing" style="color: blue;">Go straight to the colab demo</a></em></p>
</center>
#### Introducing the Kancil family of open models
Selamat datang (welcome)!
I am ultra-overjoyed to introduce you... the 🦌 Kancil! It's a fine-tuned version of Llama 3 8B trained on TumpengQA, an instruction dataset of 6.7 million words. Both the model and the dataset are openly available on Hugging Face.
The dataset was synthetically generated with Llama 3 70B. A big problem with existing Indonesian instruction datasets is that they are often poorly translated versions of English datasets. Llama 3 70B, by contrast, can generate fluent Indonesian (with minor caveats)!
Kancil follows previous efforts to build a collection of open, fine-tuned Indonesian models, such as Merak and Cendol. However, Kancil leverages solely synthetic data in a very creative way, which makes it a unique contribution!
## Version 0.0
This is the very first working prototype, Kancil V0. It supports basic QA functionality only; you cannot chat with it yet.
This model was fine-tuned with QLoRA using the amazing [Unsloth](https://github.com/unslothai/unsloth) framework! It was built on top of [unsloth/llama-3-8b-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-bnb-4bit), and the adapter was then merged back into the base model at 4-bit precision (no visible difference compared to merging back to fp16).
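If you are reproducing this yourself, Unsloth provides a merged-save helper for folding a trained LoRA adapter back into the quantized base. A minimal sketch after fine-tuning (the output directory name is illustrative, not part of this release):
```
# Merge the LoRA adapter back into the base weights and save the result.
# "merged_4bit" keeps the 4-bit quantization; "merged_16bit" saves fp16 instead.
model.save_pretrained_merged(
    "kancil-v0-merged",  # illustrative output path
    tokenizer,
    save_method = "merged_4bit",
)
```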
## Uses
### Direct Use
This model is developed for research purposes, for researchers and general AI hobbyists. However, it has one big application: you can have lots of fun with it!
### Out-of-Scope Use
This is a minimally-functional research preview model with no safety curation. Do not use this model for commercial or practical applications.
You are also not allowed to use this model without having fun.
## Getting started
As mentioned, this model was trained with Unsloth. Please load it with Unsloth for the best experience.
```
# Install dependencies. You need a GPU to run this (at least a T4)
%%capture
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install --no-deps "xformers<0.0.26" trl peft accelerate bitsandbytes
```
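Before loading anything, you may want to confirm that a CUDA GPU is actually visible; a quick sanity check using plain PyTorch:
```
import torch

# The model won't load in 4-bit without a CUDA device.
assert torch.cuda.is_available(), "No CUDA GPU found; a T4 or better is required."
print(torch.cuda.get_device_name(0))
```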
```
# Load the model
from unsloth import FastLanguageModel
import torch

max_seq_length = 2048  # context window to allocate; raise it if you need longer prompts

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "catinthebag/Kancil-V0-llama3",
    max_seq_length = max_seq_length,
    dtype = None,  # auto-detect: bfloat16 where supported, float16 otherwise (e.g., T4)
    load_in_4bit = True,
)
```
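With `dtype = None`, Unsloth picks the precision for you. If you prefer to pin it explicitly, a minimal sketch of the usual selection logic:
```
import torch

# bfloat16 on Ampere or newer GPUs; float16 on older ones such as the T4.
dtype = torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
```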
```
# This model was trained on this specific prompt template. Changing it may degrade performance.
prompt_template = """User: {prompt}
Asisten: {response}"""
EOS_TOKEN = tokenizer.eos_token

def formatting_prompts_func(examples):
    """Format batched prompt/response pairs into training texts ending with EOS."""
    inputs = examples["prompt"]
    outputs = examples["response"]
    texts = []
    for inp, out in zip(inputs, outputs):
        text = prompt_template.format(prompt=inp, response=out) + EOS_TOKEN
        texts.append(text)
    return {"text": texts}
```
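If you want to reproduce the training-style formatting on the dataset itself, the function plugs straight into `datasets.map`. A minimal sketch, assuming TumpengQA exposes `prompt` and `response` columns in a `train` split, as the function above expects:
```
from datasets import load_dataset

# Load the synthetic QA dataset and render each pair with the prompt template.
dataset = load_dataset("catinthebag/TumpengQA", split="train")
dataset = dataset.map(formatting_prompts_func, batched=True)

print(dataset[0]["text"])  # inspect one formatted training example
```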
```
# Start generating!
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path
inputs = tokenizer(
    [
        prompt_template.format(
            prompt="Bagaimana canting dan lilin digunakan untuk menggambar pola batik?",
            response="",
        )
    ],
    return_tensors = "pt",
).to("cuda")
outputs = model.generate(**inputs, max_new_tokens = 600, temperature = 0.8, do_sample = True, use_cache = True)
print(tokenizer.batch_decode(outputs)[0].replace('\\n', '\n'))
```
**Note:** In Version 0 there is an issue with the dataset where newline characters were stored as the literal string `\n`. Very sorry about this! Please keep the `.replace()` call to fix the newlines in the output.
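For convenience, you could wrap the prompt formatting, generation, and the newline fix in one small helper. A minimal sketch (the `ask_kancil` name is ours, purely illustrative):
```
def ask_kancil(question, max_new_tokens=600):
    """Illustrative helper: format the question, generate, and fix V0 newlines."""
    inputs = tokenizer(
        [prompt_template.format(prompt=question, response="")],
        return_tensors="pt",
    ).to("cuda")
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        temperature=0.8,
        do_sample=True,
        use_cache=True,
    )
    text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
    return text.replace("\\n", "\n")  # V0 dataset stored literal "\n" strings

print(ask_kancil("Apa itu batik?"))
```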
## Acknowledgments
- **Developed by:** Afrizal Hasbi Azizy
- **Funded by:** [DF Labs](https://dflabs.id)
- **License:** Llama 3 Community License Agreement |