kyujinpy committed 92e3ef9 (parent: 514446a): Upload README.md

Files changed: README.md (+63 -0)
---
license: cc-by-nc-sa-4.0
datasets:
- HumanF-MarkrAI/Korean-RAG-ver2
language:
- ko
tags:
- Retrieval Augmented Generation
- RAG
- Multi-domain
---

# MarkrAI/RAG-KO-Mixtral-7Bx2-v1.15

# Model Details

## Model Developers
MarkrAI - AI Researchers

## Base Model
[DopeorNope/Ko-Mixtral-v1.3-MoE-7Bx2](https://huggingface.co/DopeorNope/Ko-Mixtral-v1.3-MoE-7Bx2).

## Instruction Tuning Method
Instruction-tuned with QLoRA using the following configuration:
```
4-bit quantization
Lora_r: 64
Lora_alpha: 64
Lora_dropout: 0.05
Lora_target_modules: [embed_tokens, q_proj, k_proj, v_proj, o_proj, gate, w1, w2, w3, lm_head]
```
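
For reference, below is a minimal sketch of how this configuration could be expressed with `peft` and `bitsandbytes`. Only the LoRA rank, alpha, dropout, and target modules come from this card; the 4-bit quantization details (NF4, double quantization, compute dtype) and the use of `prepare_model_for_kbit_training` are assumptions.

```
# Sketch of the 4-bit QLoRA setup described above (quantization details are assumptions).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # assumption: NF4 quantization
    bnb_4bit_use_double_quant=True,        # assumption
    bnb_4bit_compute_dtype=torch.bfloat16  # assumption
)

base_model = AutoModelForCausalLM.from_pretrained(
    "DopeorNope/Ko-Mixtral-v1.3-MoE-7Bx2",
    quantization_config=bnb_config,
    device_map="auto"
)
base_model = prepare_model_for_kbit_training(base_model)

# LoRA settings taken from the card.
lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["embed_tokens", "q_proj", "k_proj", "v_proj", "o_proj",
                    "gate", "w1", "w2", "w3", "lm_head"],
    task_type="CAUSAL_LM"
)
model = get_peft_model(base_model, lora_config)
```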

## Hyperparameters
```
Epoch: 3
Batch size: 64
Learning_rate: 1e-5
Learning scheduler: linear
Warmup_ratio: 0.06
```
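
Expressed as `transformers.TrainingArguments`, this roughly corresponds to the sketch below. How the effective batch size of 64 is split between per-device batch size and gradient accumulation, and every option not listed above, are assumptions.

```
# Sketch: the listed hyperparameters as TrainingArguments.
# Batch-size split (8 per device x 8 accumulation steps) is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="rag-ko-mixtral-7bx2-qlora",  # hypothetical output directory
    num_train_epochs=3,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=8,
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    warmup_ratio=0.06
)
```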

## Datasets
Private datasets: [HumanF-MarkrAI/Korean-RAG-ver2](https://huggingface.co/datasets/HumanF-MarkrAI/Korean-RAG-ver2)
```
Created using AI Hub datasets.
```
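
Because the dataset repository is private, loading it requires access rights and an authenticated Hugging Face session (e.g. `huggingface-cli login`); a minimal sketch:

```
# Sketch: loading the (private) training dataset; requires read access to the repo.
from datasets import load_dataset

dataset = load_dataset("HumanF-MarkrAI/Korean-RAG-ver2")
print(dataset)
```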

## Implementation Code
```
# Load RAG-KO-Mixtral-7Bx2-v1.15 and its tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "MarkrAI/RAG-KO-Mixtral-7Bx2-v1.15"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
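
A short usage sketch following the snippet above. The prompt format is an assumption, since the card does not document an instruction or chat template for this model.

```
# Sketch: generating a response with the model and tokenizer loaded above.
# The plain-text prompt format is an assumption.
prompt = "대한민국의 수도는 어디인가요?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```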

# Model Benchmark
- Coming soon...