kyujinpy committed on
Commit c7053d5 • 1 Parent(s): 37706e4

Upload README.md

Files changed (1)
  1. README.md +62 -0
README.md ADDED
@@ -0,0 +1,62 @@
+ ---
+ license: creativeml-openrail-m
+ datasets:
+ - HumanF-MarkrAI/Korean-RAG-ver2
+ language:
+ - ko
+ ---
+
+ # MarkrAI/RAG-KO-Mixtral-7Bx2-v1.1
+
+ # Model Details
+
+ ## Model Developers
+ MarkrAI - AI Researchers
+
+ ## Base Model
+ [DopeorNope/Ko-Mixtral-v1.3-MoE-7Bx2](https://huggingface.co/DopeorNope/Ko-Mixtral-v1.3-MoE-7Bx2).
+
+ ## Instruction Tuning Method
+ Instruction tuning with QLoRA, using the configuration below.
+ ```
+ 4-bit quantization
+ Lora_r: 64
+ Lora_alpha: 64
+ Lora_dropout: 0.05
+ Lora_target_modules: [embed_tokens, q_proj, k_proj, v_proj, o_proj, gate, w1, w2, w3, lm_head]
+ ```
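+
+ A minimal sketch of what this QLoRA setup could look like with `transformers`, `bitsandbytes`, and `peft`; the compute dtype and base-model loading details are assumptions, not the card's exact training script.
+ ```python
+ # Hypothetical QLoRA setup mirroring the listed config.
+ import torch
+ from transformers import AutoModelForCausalLM, BitsAndBytesConfig
+ from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
+
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,                      # 4-bit quantization
+     bnb_4bit_compute_dtype=torch.bfloat16,  # assumed compute dtype
+ )
+ model = AutoModelForCausalLM.from_pretrained(
+     "DopeorNope/Ko-Mixtral-v1.3-MoE-7Bx2",  # the listed base model
+     quantization_config=bnb_config,
+     device_map="auto",
+ )
+ model = prepare_model_for_kbit_training(model)
+
+ lora_config = LoraConfig(
+     r=64,
+     lora_alpha=64,
+     lora_dropout=0.05,
+     target_modules=["embed_tokens", "q_proj", "k_proj", "v_proj",
+                     "o_proj", "gate", "w1", "w2", "w3", "lm_head"],
+     task_type="CAUSAL_LM",
+ )
+ model = get_peft_model(model, lora_config)
+ ```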
+
+ ## Hyperparameters
+ ```
+ Epoch: 5
+ Batch size: 64
+ Learning_rate: 1e-5
+ Learning scheduler: linear
+ Warmup_ratio: 0.06
+ ```
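+
+ As a rough illustration, these hyperparameters could map onto `transformers.TrainingArguments` as below; the output path and the per-device/accumulation split of the batch size are assumptions.
+ ```python
+ # Hypothetical mapping of the listed hyperparameters.
+ from transformers import TrainingArguments
+
+ training_args = TrainingArguments(
+     output_dir="rag-ko-mixtral-qlora",  # assumed output path
+     num_train_epochs=5,
+     per_device_train_batch_size=8,      # assumed split; effective batch = 8 * 8 = 64
+     gradient_accumulation_steps=8,
+     learning_rate=1e-5,
+     lr_scheduler_type="linear",
+     warmup_ratio=0.06,
+ )
+ ```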
+
+ ## Datasets
+ Private dataset: [HumanF-MarkrAI/Korean-RAG-ver2](https://huggingface.co/datasets/HumanF-MarkrAI/Korean-RAG-ver2)
+ ```
+ Built using AIHub datasets.
+ ```
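+
+ Since the dataset is private, loading it would require approved access; a sketch, assuming a valid Hugging Face token is configured locally:
+ ```python
+ # Hypothetical sketch: loading the private training set.
+ # Requires approved access to the gated repository.
+ from datasets import load_dataset
+
+ dataset = load_dataset("HumanF-MarkrAI/Korean-RAG-ver2", token=True)
+ ```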
+
+ ## Implementation Code
+ ```python
+ # Load the model and tokenizer from the Hugging Face Hub.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ import torch
+
+ repo = "MarkrAI/RAG-KO-Mixtral-7Bx2-v1.1"
+ model = AutoModelForCausalLM.from_pretrained(
+     repo,
+     return_dict=True,
+     torch_dtype=torch.float16,  # half precision
+     device_map="auto",
+ )
+ tokenizer = AutoTokenizer.from_pretrained(repo)
+ ```
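+
+ A hypothetical usage example with the model loaded above; the prompt and decoding parameters are illustrative assumptions, not a documented prompt format.
+ ```python
+ # Hypothetical generation sketch; prompt format is an assumption.
+ prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```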
+
+ # Model Benchmark
+ - Coming soon...