Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


EEVE-Korean-Instruct-2.8B-v1.0 - bnb 4bits
- Model creator: https://huggingface.co/yanolja/
- Original model: https://huggingface.co/yanolja/EEVE-Korean-Instruct-2.8B-v1.0/
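
This repository holds a bitsandbytes 4-bit quantization of the original model. A minimal loading sketch is below; the repo id is a placeholder for this upload, and `bitsandbytes` plus `accelerate` (and a CUDA GPU) are assumed to be available.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder id for this 4-bit upload -- substitute this repository's name.
repo_id = "RichardErkhov/EEVE-Korean-Instruct-2.8B-v1.0-4bits"

# The 4-bit bitsandbytes settings are saved in the checkpoint's config,
# so from_pretrained loads the weights quantized without extra arguments.
tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",       # needs `accelerate`; maps weights onto the GPU
    trust_remote_code=True,  # the phi-2 base ships custom modeling code
)
```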

Original model description:
---
license: apache-2.0
tags:
- generated_from_trainer
base_model: yanolja/EEVE-Korean-2.8B-v1.0
model-index:
- name: yanolja/EEVE-Korean-Instruct-2.8B-v1.0
  results: []
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

<p align="left">
  <img src="https://huggingface.co/yanolja/EEVE-Korean-Instruct-2.8B-v1.0/resolve/main/eeve_logo.webp" width="50%"/>
</p>

# EEVE-Korean-Instruct-2.8B-v1.0

## Join Our Community on Discord!

If you're passionate about the field of Large Language Models and wish to exchange knowledge and insights, we warmly invite you to join our Discord server. Note that Korean is the primary language used there. The LLM landscape is evolving rapidly, and without active sharing, our collective knowledge risks becoming outdated swiftly. Let's collaborate and drive greater impact together! Join us here: [Discord Link](https://discord.gg/b27bAHg95m).

## Our Dedicated Team (Alphabetical Order)

| Research        | Engineering  | Product Management | UX Design   |
|-----------------|--------------|--------------------|-------------|
| Myeongho Jeong  | Geon Kim     | Bokyung Huh        | Eunsue Choi |
| Seungduk Kim    | Rifqi Alfi   |                    |             |
| Seungtaek Choi  | Sanghoon Han |                    |             |
|                 | Suhyun Kang  |                    |             |

## About the Model

This model is a fine-tuned version of [yanolja/EEVE-Korean-2.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-2.8B-v1.0), which is a Korean vocabulary-extended version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2). Specifically, we fine-tuned it with Direct Preference Optimization (DPO) using [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl); a sketch of the DPO objective follows below.

For more details, please refer to our technical report: [Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models](https://arxiv.org/abs/2402.14714).
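
For reference, DPO directly optimizes a preference objective over chosen/rejected response pairs. The sketch below is the standard loss from Rafailov et al. (2023), not Axolotl's internal code; `beta` and the log-probability arguments are illustrative names.

```python
import torch.nn.functional as F

def dpo_loss(pi_chosen_logp, pi_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # Implicit rewards: how much more probable the policy makes each
    # response compared to the frozen reference model.
    chosen_reward = beta * (pi_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (pi_rejected_logp - ref_rejected_logp)
    # Maximize the margin between chosen and rejected rewards.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```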

## Prompt Template
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: {prompt}
Assistant:
```
## How to Use it
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code is needed because the phi-2 base ships custom modeling code.
model = AutoModelForCausalLM.from_pretrained("yanolja/EEVE-Korean-Instruct-2.8B-v1.0", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("yanolja/EEVE-Korean-Instruct-2.8B-v1.0", trust_remote_code=True)

prompt_template = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\nHuman: {prompt}\nAssistant:\n"
# "What is the capital of Korea? Choose from the options below.
#  (A) Gyeongseong (B) Busan (C) Pyongyang (D) Seoul (E) Jeonju"
text = '한국의 수도는 어디인가요? 아래 선택지 중 골라주세요.\n\n(A) 경성\n(B) 부산\n(C) 평양\n(D) 서울\n(E) 전주'
model_inputs = tokenizer(prompt_template.format(prompt=text), return_tensors='pt')

outputs = model.generate(**model_inputs, max_new_tokens=256)
output_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(output_text)
```
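
To print tokens as they are generated rather than waiting for the full decode, transformers' `TextStreamer` drops into the same setup; a minimal sketch reusing `model`, `tokenizer`, and `model_inputs` from above:

```python
from transformers import TextStreamer

# Streams decoded tokens to stdout as generation proceeds.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**model_inputs, streamer=streamer, max_new_tokens=256)
```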

### Example Output
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
Human: 한국의 수도는 어디인가요? 아래 선택지 중 골라주세요.

(A) 경성
(B) 부산
(C) 평양
(D) 서울
(E) 전주
Assistant:
한국의 수도는 (D) 서울입니다. 서울은 수도권과 수도권 내의 주요 도시들을 포함하는 광역 행정구역으로, 대한민국의 수도입니다. 서울은 수도권 인구의 약 70%를 차지하며, 대한민국의 경제, 정치, 문화의 중심지입니다.
```

(In English, the answer reads: "The capital of Korea is (D) Seoul. Seoul is a metropolitan administrative region that includes the capital area and its major cities, and it is the capital of the Republic of Korea. Seoul accounts for about 70% of the capital area's population and is the center of Korea's economy, politics, and culture.")

## Training Data
- Korean-translated version of [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
- Korean-translated version of [argilla/ultrafeedback-binarized-preferences-cleaned](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences-cleaned)
- No other dataset was used (the English sources can be inspected as sketched below)
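
The Korean translations used for training are not linked in this card, but the two English source datasets above can be inspected with the `datasets` library; a quick sketch, assuming the default `train` splits:

```python
from datasets import load_dataset

# English sources of the two training corpora; the Korean translations
# used for fine-tuning are not part of these repositories.
slimorca = load_dataset("Open-Orca/SlimOrca-Dedup", split="train")
prefs = load_dataset("argilla/ultrafeedback-binarized-preferences-cleaned", split="train")

print(slimorca[0])  # one SFT conversation
print(prefs[0])     # one chosen/rejected preference pair
```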

## Citation
```
@misc{kim2024efficient,
      title={Efficient and Effective Vocabulary Expansion Towards Multilingual Large Language Models},
      author={Seungduk Kim and Seungtaek Choi and Myeongho Jeong},
      year={2024},
      eprint={2402.14714},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
```
@misc{cui2023ultrafeedback,
      title={UltraFeedback: Boosting Language Models with High-quality Feedback},
      author={Ganqu Cui and Lifan Yuan and Ning Ding and Guanming Yao and Wei Zhu and Yuan Ni and Guotong Xie and Zhiyuan Liu and Maosong Sun},
      year={2023},
      eprint={2310.01377},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
```
@misc{SlimOrcaDedup,
      title = {SlimOrca Dedup: A Deduplicated Subset of SlimOrca},
      author = {Wing Lian and Guan Wang and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium" and Nathan Hoos},
      year = {2023},
      publisher = {HuggingFace},
      url = {https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup/}
}
```
```
@misc{mukherjee2023orca,
      title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
      author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
      year={2023},
      eprint={2306.02707},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_yanolja__EEVE-Korean-Instruct-2.8B-v1.0)

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 58.71 |
| AI2 Reasoning Challenge (25-Shot) | 58.28 |
| HellaSwag (10-Shot)               | 72.42 |
| MMLU (5-Shot)                     | 53.35 |
| TruthfulQA (0-shot)               | 48.32 |
| Winogrande (5-shot)               | 74.82 |
| GSM8k (5-shot)                    | 45.11 |