huseinzol05 committed Create README.md (commit 961b5ab, parent afadc28)

README.md ADDED:
---
language:
- ms
---

# Full Parameter Finetuning 7B 32768 context length Mistral on Malaysian instructions dataset

README at https://github.com/mesolitica/malaya/tree/5.1/session/mistral#instructions-7b-16384-context-length

We use the exact Mistral Instruct chat template.
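
For reference, a minimal sketch of what this template renders a conversation to, using this repo's tokenizer. The format matches the decoded output shown at the end of this README; exact whitespace around the special tokens may differ slightly between the template string and a decoded token sequence:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('mesolitica/malaysian-mistral-7b-32k-instructions-v4')
messages = [
    {'role': 'user', 'content': 'kwsp tu apa'},
    {'role': 'assistant', 'content': 'KWSP bermaksud Kumpulan Wang Simpanan Pekerja.'},
]
# Render the conversation to a plain string instead of token ids
print(tokenizer.apply_chat_template(messages, tokenize=False))
# -> <s>[INST] kwsp tu apa [/INST]KWSP bermaksud Kumpulan Wang Simpanan Pekerja.</s>
```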

WandB: https://wandb.ai/huseinzol05/fpf-mistral-7b-hf-instructions-16k?workspace=user-huseinzol05

WandB report: https://wandb.ai/huseinzol05/fpf-tinyllama-1.1b-hf-instructions-16k/reports/Instruction-finetuning--Vmlldzo2MzQ3OTcz

## Dataset

Dataset gathered at https://huggingface.co/collections/mesolitica/malaysian-synthetic-dataset-656c2673fe7fe0b1e9e25fe2

Notebook to prepare the dataset: https://github.com/mesolitica/malaysian-dataset/blob/master/llm-instruction/combine-malay-no-alignment-multitasks-v5.ipynb

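The linked notebook is the authoritative preparation code. As a purely hypothetical sketch of the general idea (the field names and helper below are illustrative, not taken from the notebook), each instruction pair is packed into the chat format before tokenization, reusing the tokenizer from the template sketch above:

```python
# Hypothetical sketch only: the field names ('instruction', 'output') and this
# helper are illustrative; see the linked notebook for the real preparation.
def to_chat(row):
    return [
        {'role': 'user', 'content': row['instruction']},
        {'role': 'assistant', 'content': row['output']},
    ]

row = {'instruction': 'kwsp tu apa', 'output': 'KWSP bermaksud Kumpulan Wang Simpanan Pekerja.'}
text = tokenizer.apply_chat_template(to_chat(row), tokenize=False)
```
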
## Limitations

This model is a quick demonstration that the base model can be easily fine-tuned to achieve reasonable performance. It has only minimal moderation mechanisms.

## How-to

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
import torch

# Load the model in 4-bit NF4 with double quantization, computing in bfloat16
TORCH_DTYPE = 'bfloat16'
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=getattr(torch, TORCH_DTYPE)
)

tokenizer = AutoTokenizer.from_pretrained('mesolitica/malaysian-mistral-7b-32k-instructions-v4')
model = AutoModelForCausalLM.from_pretrained(
    'mesolitica/malaysian-mistral-7b-32k-instructions-v4',
    use_flash_attention_2=True,
    quantization_config=nf4_config
)

messages = [
    {'role': 'user', 'content': 'kwsp tu apa'}
]
# The chat template already inserts <s>, so tokenize with
# add_special_tokens=False to avoid a duplicate BOS token.
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer([prompt], return_tensors='pt', add_special_tokens=False).to('cuda')
# inputs is dict-like, so this merges input_ids/attention_mask with the sampling kwargs
generate_kwargs = dict(
    inputs,
    max_new_tokens=1024,
    top_p=0.95,
    top_k=50,
    temperature=0.9,
    do_sample=True,
    num_beams=1,
)
r = model.generate(**generate_kwargs)
print(tokenizer.decode(r[0]))
```

```text
<s> [INST] kwsp tu apa [/INST]KWSP bermaksud Kumpulan Wang Simpanan Pekerja. Ia adalah sebuah institusi simpanan persaraan yang ditubuhkan oleh Kementerian Kewangan Malaysia untuk tujuan mengumpul simpanan ahli untuk dibayar pada umur persaraan, penuh atau penuh persaraan penuh. KWSP ditubuhkan pada tahun 1951 dan mula beroperasi pada tahun 1952. KWSP adalah salah satu institusi simpanan persaraan terbesar di dunia, dengan pangkalan ahli sekitar 14 juta ahli.</s>
```

The sample reply is in Malay: it explains that KWSP stands for Kumpulan Wang Simpanan Pekerja (Malaysia's Employees Provident Fund), describes it as a retirement savings institution that it says was established in 1951 and began operating in 1952, and calls it one of the largest retirement funds in the world with around 14 million members.
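
For interactive use, the same generation call can stream tokens as they are produced; a small sketch using transformers' TextStreamer, reusing model, tokenizer, and generate_kwargs from the how-to above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated; skip_prompt=True suppresses
# echoing the input prompt back. Reuses model and generate_kwargs from above.
streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(**generate_kwargs, streamer=streamer)
```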