---
license: cc-by-nc-sa-4.0
datasets:
- HumanF-MarkrAI/Korean-RAG-ver2
language:
- ko
tags:
  - Retrieval Augmented Generation
  - RAG
  - Multi-domain
---

# MarkrAI/RAG-KO-Mixtral-7Bx2-v2.0

# Model Details  

## Model Developers  
MarkrAI - AI Researchers

## Base Model  
[DopeorNope/Ko-Mixtral-v1.4-MoE-7Bx2](https://huggingface.co/DopeorNope/Ko-Mixtral-v1.4-MoE-7Bx2)

## Instruction-tuning Method  
Fine-tuned with QLoRA using the following configuration.  
```
4-bit quantization
Lora_r: 64
Lora_alpha: 64
Lora_dropout: 0.05
Lora_target_modules: [embed_tokens, q_proj, k_proj, v_proj, o_proj, gate, w1, w2, w3, lm_head]
```
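The training script itself is not released. As a rough, hypothetical sketch, the settings above could be expressed with the `peft` and `bitsandbytes` APIs as follows; the compute dtype, the base-model load, and everything not listed in the configuration above are assumptions:
```
# Hypothetical sketch only: the actual training script is not published.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model
import torch

# 4-bit quantization (QLoRA); the compute dtype is an assumption.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA settings taken from the configuration listed above.
lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["embed_tokens", "q_proj", "k_proj", "v_proj",
                    "o_proj", "gate", "w1", "w2", "w3", "lm_head"],
    task_type="CAUSAL_LM",
)

base = AutoModelForCausalLM.from_pretrained(
    "DopeorNope/Ko-Mixtral-v1.4-MoE-7Bx2",
    quantization_config=bnb_config,
    device_map="auto",
)
model = get_peft_model(base, lora_config)
```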

## Hyperparameters  
```
Epoch: 5
Batch size: 64
Learning_rate: 1e-5
Learning scheduler: linear
Warmup_ratio: 0.06
```
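For reference, these hyperparameters map onto `transformers` `TrainingArguments` roughly as below. Whether the batch size of 64 is global or per-device is not stated (per-device is assumed here), and the output directory is a placeholder:
```
# Hypothetical mapping of the hyperparameters above; not the released script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="rag-ko-mixtral-qlora",  # placeholder path
    num_train_epochs=5,
    per_device_train_batch_size=64,     # per-device vs. global is an assumption
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
)
```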

## Datasets
Private dataset: [HumanF-MarkrAI/Korean-RAG-ver2](https://huggingface.co/datasets/HumanF-MarkrAI/Korean-RAG-ver2)  
```
Built using AIHub datasets.
```

## Implementation Code
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "MarkrAI/RAG-KO-Mixtral-7Bx2-v2.0"

# Load the model in float16 and place it automatically across available devices.
markrAI_RAG = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
markrAI_RAG_tokenizer = AutoTokenizer.from_pretrained(repo)
```
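A minimal usage sketch follows. The model's actual prompt template is not documented, so the context/question format below is an assumption:
```
# Hypothetical usage sketch; the real prompt template is undocumented.
context = "..."   # retrieved passage(s) from your retriever
question = "..."  # user question

prompt = f"{context}\n\n์งˆ๋ฌธ: {question}\n๋‹ต๋ณ€:"  # assumed RAG prompt format
inputs = markrAI_RAG_tokenizer(prompt, return_tensors="pt").to(markrAI_RAG.device)
outputs = markrAI_RAG.generate(**inputs, max_new_tokens=256)
print(markrAI_RAG_tokenizer.decode(outputs[0], skip_special_tokens=True))
```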

# Model Benchmark
- Coming soon...