---
library_name: transformers
tags:
- unsloth
- llama3
- indonesia
license: llama3
datasets:
- catinthebag/TumpengQA
language:
- id
inference: false
---
<h1 style="font-size: 36px; color: navy; font-family: Tahoma; text-align: center;">Introducing the Kancil family of open models</h1>

<center>
    <img src="https://imgur.com/9nG5J1T.png" alt="Kancil" width="600" height="300">
    <p><em>Kancil is a fine-tuned version of Llama 3 8B using a synthetic QA dataset generated with Llama 3 70B. Version zero of Kancil is the first generative Indonesian LLM to gain functional instruction-following performance using solely synthetic data.</em></p>
    <p><strong><a href="https://colab.research.google.com/drive/1526QJYfk32X1CqYKX7IA_FFcIHLXbOkx?usp=sharing" style="color: blue; font-family: Tahoma;">❕Go straight to the colab demo❕</a></strong></p>
    <p><em style="color: black; font-weight: bold;">Beta preview</em></p>
</center>

Selamat datang! (Welcome!)

I am ultra-overjoyed to introduce you to... the 🦌 Kancil! It's a fine-tuned version of Llama 3 8B trained on TumpengQA, an instruction dataset of 6.7 million words. Both the model and the dataset are openly available on Hugging Face.

📚 The dataset was synthetically generated with Llama 3 70B. A big problem with existing Indonesian instruction datasets is that, in practice, they are not-very-good translations of English datasets. Llama 3 70B can generate fluent Indonesian! (with minor caveats 😔)

🦚 This follows previous efforts to build open, fine-tuned Indonesian models, like Merak and Cendol. However, Kancil leverages solely synthetic data in a very creative way, which makes it a unique contribution!

### Version 0.0

This is the very first working prototype, Kancil V0. It supports basic QA functionality only; it does not support multi-turn conversation.

This model was fine-tuned with QLoRA using the amazing Unsloth framework! It was built on top of [unsloth/llama-3-8b-bnb-4bit](https://huggingface.co/unsloth/llama-3-8b-bnb-4bit), and the adapter was subsequently merged back to 4-bit (no visible difference compared with merging back to fp16).

### Uses

This model is developed for research purposes, aimed at researchers and general AI hobbyists. However, it has one big application: you can have lots of fun with it!

### Out-of-Scope Use

This is a research preview model with minimal safety curation. Do not use this model for commercial or practical applications.

You are also not allowed to use this model without having fun.

### Getting started

As mentioned, this model was trained with Unsloth. Please use its code for the best experience.

```python
%%capture
# Install dependencies. You need a GPU to run this (at least a T4).
# Note: %%capture must be the first line of the Colab cell for the output to be suppressed.
!pip install torch==2.2.*
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git@53dbf76391da0aea35bc6b044b2fe85460d9e345"
!pip install --no-deps "xformers<0.0.26" trl peft accelerate bitsandbytes

# Available versions
KancilV0 = "catinthebag/Kancil-V0-llama3"
```

```python
# Load the model
from unsloth import FastLanguageModel
import torch

max_seq_length = 2048  # was undefined in the original snippet; set to your desired context length

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "catinthebag/Kancil-V0-llama3",
    max_seq_length = max_seq_length,
    dtype = torch.bfloat16,  # use torch.float16 if bfloat16 is not supported on your GPU
    load_in_4bit = True,
)
```

```python
# This model was trained on this specific prompt template. Changing it might degrade performance.
prompt_template = """User: {prompt}
Asisten: {response}"""

# Start generating!
FastLanguageModel.for_inference(model)
inputs = tokenizer(
    [
        prompt_template.format(
            prompt="Bagaimana canting dan lilin digunakan untuk menggambar pola batik?",
            response="",
        )
    ],
    return_tensors="pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens=600, temperature=0.8, use_cache=True)
print(tokenizer.batch_decode(outputs)[0].replace('\\n', '\n'))
```

**Note:** For Version 0 there is an issue with the dataset where newline characters were stored as literal `\n` strings. Very sorry about this! 😔 Please keep the `.replace()` call to fix the newline errors.
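To see the fix in isolation: the sample string below is a made-up stand-in for a raw generation (an assumption for illustration only); the repair itself is just a plain string replacement.

```python
# Hypothetical raw generation: because of the dataset issue, the model emits
# the literal two-character sequence backslash + "n" instead of a real newline.
raw_output = "Canting digunakan untuk menorehkan lilin.\\nLilin menahan warna saat pencelupan."

# Replace the literal "\n" sequence with an actual newline character.
fixed = raw_output.replace('\\n', '\n')

print(fixed)  # now prints on two lines
```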

### Acknowledgments

- **Developed by:** Afrizal Hasbi Azizy
- **Funded by:** [DF Labs](https://dflabs.id)
- **License:** Llama 3 Community License Agreement