---
inference: false
license: cc-by-nc-sa-4.0
datasets:
- asyafiqe/orca_mini_v1_indonesia
language:
- en
- id
---
# 🦚Merak-7B-v3-Mini-Orca🐳
<p align="center">
<img src="https://i.imgur.com/39sQd3h.png" alt="Merak Orca" width="300" height="300"/>
</p>

**Merak-7B-v3-Mini-Orca** is Ichsan2895's [Merak-7B-v3](https://huggingface.co/Ichsan2895/Merak-7B-v3) fine-tuned
on a Bahasa Indonesia translation of psmathur's [orca_mini_v1_dataset](https://huggingface.co/datasets/psmathur/orca_mini_v1_dataset).


## Usage
This model fits on a 16GB VRAM GPU (a Google Colab T4 will do); with bitsandbytes quantization it can run on a 6GB VRAM GPU.

[![Open in Google Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/11xmPcRNirGwZcpgmNPNpUioJUG4PQBuh)
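
For the 6GB case, here is a minimal sketch using bitsandbytes 4-bit quantization (the NF4 settings below are illustrative assumptions, not the card's tested configuration; `bitsandbytes` and `accelerate` must be installed):
```
# Sketch: 4-bit loading with bitsandbytes to fit into ~6GB VRAM.
# The quantization settings here are illustrative, not the author's tested config.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained("asyafiqe/Merak-7B-v3-Mini-Orca-Indo")
model = AutoModelForCausalLM.from_pretrained(
    "asyafiqe/Merak-7B-v3-Mini-Orca-Indo",
    quantization_config=bnb_config,
    device_map="auto",
)
```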

**Quantized** versions are available:

GPTQ: https://huggingface.co/asyafiqe/Merak-7B-v3-Mini-Orca-Indo-GPTQ (see the loading sketch below)

GGML/GGUF: I will try to make this version once the GGUF merge is stable.
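
As a hedged sketch, assuming a recent transformers release with the optimum/auto-gptq integration (`pip install optimum auto-gptq`), the GPTQ checkpoint should load through the standard API:
```
# Sketch: loading the GPTQ checkpoint via transformers' GPTQ integration.
# Assumes optimum and auto-gptq are installed; details may vary by transformers version.
from transformers import AutoModelForCausalLM, AutoTokenizer

gptq_repo = "asyafiqe/Merak-7B-v3-Mini-Orca-Indo-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(gptq_repo)
model = AutoModelForCausalLM.from_pretrained(gptq_repo, device_map="auto")
```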



Start chatting with Merak Mini Orca using the following code snippet:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("asyafiqe/Merak-7B-v3-Mini-Orca-Indo")
model = AutoModelForCausalLM.from_pretrained("asyafiqe/Merak-7B-v3-Mini-Orca-Indo", torch_dtype=torch.float16, device_map="auto")

# System prompt (Indonesian): "You are an AI assistant. You will be given a task.
# You must produce a detailed and long answer."
system_prompt = "SYSTEM: Anda adalah asisten AI. Anda akan diberi tugas. Anda harus menghasilkan jawaban yang rinci dan panjang.\n"

# User message (Indonesian): "Make a plan to reduce electricity use at home."
message = "Buatlah rencana untuk mengurangi penggunaan listrik di rumah."

prompt = f"{system_prompt}USER: {message}\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, temperature=0.1, max_new_tokens=200)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

### Prompt format
You can use the [Vicuna 1.1](https://github.com/oobabooga/text-generation-webui/blob/main/instruction-templates/Vicuna-v1.1.yaml)
format in oobabooga's text-generation-webui.

```
SYSTEM: Anda adalah asisten AI. Anda akan diberi tugas. Anda harus memberikan jawaban yang rinci dan panjang.
USER: <prompt> (without the <>)
ASSISTANT:
```
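
As an illustrative sketch, the same format can be assembled programmatically for multi-turn chat (the `build_prompt` helper below is hypothetical, not part of the model's code):
```
# Hypothetical helper that assembles a Vicuna 1.1-style prompt for this model.
SYSTEM = "Anda adalah asisten AI. Anda akan diberi tugas. Anda harus memberikan jawaban yang rinci dan panjang."

def build_prompt(turns):
    """turns: list of (user_message, assistant_reply); the last reply may be None."""
    parts = [f"SYSTEM: {SYSTEM}"]
    for user, assistant in turns:
        parts.append(f"USER: {user}")
        parts.append(f"ASSISTANT: {assistant}" if assistant is not None else "ASSISTANT:")
    return "\n".join(parts)

print(build_prompt([("Apa itu energi terbarukan?", None)]))
```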
## Training details
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="150" height="24"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

Merak-7B-v3-Mini-Orca was instruction fine-tuned on 2 x 3090-24GB GPUs for 6 hours. [LoRA](https://github.com/microsoft/LoRA), [DeepSpeed ZeRO-2](https://github.com/microsoft/DeepSpeed), and [FlashAttention](https://github.com/Dao-AILab/flash-attention) were used during training via [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl); a sketch of the equivalent LoRA configuration follows the table below.
| Hyperparameter | Value |
| ------ | ------ |
| learning rate | 0.0004 |
| batch size | 16 |
| micro batch size | 2 |
| warmup steps | 100 |
| epochs | 2 |
| weight decay | 0.0 |
| lr scheduler | cosine |
| lora alpha | 16 |
| lora rank | 16 |
| lora dropout | 0.05 |
| lora target modules | q_proj, v_proj, k_proj, o_proj |
| cutoff length | 4096 |
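
Training used Axolotl's own config format; purely as a hedged illustration of the LoRA settings above, an equivalent [PEFT](https://github.com/huggingface/peft) configuration would look roughly like this:
```
# Illustrative PEFT translation of the LoRA hyperparameters above
# (training actually used Axolotl, not this exact code).
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                 # lora rank
    lora_alpha=16,        # lora alpha
    lora_dropout=0.05,    # lora dropout
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```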
#### Training loss
| Step | Train Loss |
| ------ | ------ |
| 1 | 0.9578 |
| 100 | 0.816 |
| 200 | 0.7819 |
| 300 | 0.7279 |
| 400 | 0.732 |
| 500 | 0.7139 |
| 600 | 0.6829 |
| 700 | 0.6641 |
| 800 | 0.6553 |

#### Limitations and bias
Llama 2 and its fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, the potential outputs of Llama 2 and any fine-tuned variant cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any application of a Llama 2 variant, developers should perform safety testing and tuning tailored to their specific application of the model.

Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

## Citation
```
@article{touvron2023llama2,
  author  = {Touvron, Hugo and others},
  title   = {Llama 2: Open Foundation and Fine-Tuned Chat Models},
  journal = {arXiv preprint arXiv:2307.09288},
  year    = {2023}
}
@misc{orca_mini_v3_70b,
  author = {Pankaj Mathur},
  title = {orca_mini_v3_70b: An Orca Style Llama2-70b model},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/psmathur/orca_mini_v3_70b}},
}
@article{hu2021lora,
  title={LoRA: Low-Rank Adaptation of Large Language Models},
  author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu},
  journal={CoRR},
  year={2021}
}
```