mav23 committed
Commit
476823f
1 Parent(s): 1c2e8dc

Upload folder using huggingface_hub

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +192 -0
  3. llama-3-youko-8b-instruct.Q4_0.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ llama-3-youko-8b-instruct.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,192 @@
---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: llama3
datasets:
- CohereForAI/aya_dataset
- kunishou/databricks-dolly-15k-ja
- kunishou/HelpSteer-35k-ja
- kunishou/HelpSteer2-20k-ja
- kunishou/hh-rlhf-49k-ja
- kunishou/oasst1-chat-44k-ja
- kunishou/oasst2-chat-68k-ja
- meta-math/MetaMathQA
- OpenAssistant/oasst1
- OpenAssistant/oasst2
- sahil2801/CodeAlpaca-20k
language:
- ja
- en
tags:
- llama
- llama-3
inference: false
base_model:
- rinna/llama-3-youko-8b
- meta-llama/Meta-Llama-3-8B
- meta-llama/Meta-Llama-3-8B-Instruct
base_model_relation: merge
---

# `Llama 3 Youko 8B Instruct (rinna/llama-3-youko-8b-instruct)`

![rinna-icon](./rinna.png)

# Overview
This model is the instruction-tuned version of [rinna/llama-3-youko-8b](https://huggingface.co/rinna/llama-3-youko-8b), built with supervised fine-tuning (SFT), Chat Vector addition, and direct preference optimization (DPO). It adopts the Llama-3 chat format.

| Size | Continual Pre-Training | Instruction-Tuning |
| :- | :- | :- |
| 8B | Llama 3 Youko 8B [[HF]](https://huggingface.co/rinna/llama-3-youko-8b) [[GPTQ]](https://huggingface.co/rinna/llama-3-youko-8b-gptq) | Llama 3 Youko 8B Instruct [[HF]](https://huggingface.co/rinna/llama-3-youko-8b-instruct) [[GPTQ]](https://huggingface.co/rinna/llama-3-youko-8b-instruct-gptq) |
| 70B | Llama 3 Youko 70B [[HF]](https://huggingface.co/rinna/llama-3-youko-70b) [[GPTQ]](https://huggingface.co/rinna/llama-3-youko-70b-gptq) | Llama 3 Youko 70B Instruct [[HF]](https://huggingface.co/rinna/llama-3-youko-70b-instruct) [[GPTQ]](https://huggingface.co/rinna/llama-3-youko-70b-instruct-gptq) |

* **Model architecture**

A 32-layer, 4096-hidden-size transformer-based language model. Refer to the [Llama 3 Model Card](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md) for architecture details.

* **Training: Built with Meta Llama 3**

**Supervised fine-tuning.** The supervised fine-tuning data is a subset of the following datasets.

- [CohereForAI/aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset)
  - The JPN subset was used.
- [FLAN](https://github.com/google-research/FLAN/tree/main/flan/v2)
- [kunishou/databricks-dolly-15k-ja](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
- [kunishou/hh-rlhf-49k-ja](https://huggingface.co/datasets/kunishou/hh-rlhf-49k-ja)
- [kunishou/oasst1-chat-44k-ja](https://huggingface.co/datasets/kunishou/oasst1-chat-44k-ja)
- [kunishou/oasst2-chat-68k-ja](https://huggingface.co/datasets/kunishou/oasst2-chat-68k-ja)
- [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA)
  - The following sections were used: MATH_AnsAug, MATH_Rephrased, MATH_SV, and MATH_FOBAR.
  - The remaining sections, which contain augmented data from commonly used evaluation corpora, were skipped to prevent any possibility of data leakage.
- [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1)
  - The EN and JA subsets were used.
- [OpenAssistant/oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2)
  - The EN and JA subsets were used.
- [sahil2801/CodeAlpaca-20k](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)
- rinna Dataset

**Model merging.** The fine-tuned model (llama-3-youko-8b-sft) was then enhanced through the following chat vector addition. The chat vector was obtained by subtracting the parameter vectors of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) from those of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).

~~~~text
llama-3-youko-8b-sft + 0.5 * (meta-llama/Meta-Llama-3-8B-Instruct - meta-llama/Meta-Llama-3-8B)
~~~~

Here, the embedding layer was skipped while subtracting and adding the parameter vectors.
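
As a rough illustration, this chat-vector addition is plain parameter arithmetic. The sketch below is an assumption-laden reconstruction, not rinna's published merging code: `llama-3-youko-8b-sft` is the intermediate SFT checkpoint named above (not necessarily published under that id), and identifying the embedding layer by the substring `embed_tokens` is likewise an assumption.

~~~~python
import torch
from transformers import AutoModelForCausalLM

# merged = sft + 0.5 * (Meta-Llama-3-8B-Instruct - Meta-Llama-3-8B), embedding layer excluded.
# bfloat16 keeps each 8B checkpoint at roughly 16 GB of CPU memory.
sft  = AutoModelForCausalLM.from_pretrained("llama-3-youko-8b-sft", torch_dtype=torch.bfloat16)  # placeholder id
inst = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16)

inst_params = dict(inst.named_parameters())
base_params = dict(base.named_parameters())

with torch.no_grad():
    for name, param in sft.named_parameters():
        if "embed_tokens" in name:  # assumption: skip the embedding layer, as described above
            continue
        param += 0.5 * (inst_params[name] - base_params[name])

sft.save_pretrained("llama-3-youko-8b-merged")  # placeholder output directory
~~~~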

**Direct preference optimization** was then applied with a subset of the following datasets to build this instruct model.

- [kunishou/HelpSteer-35k-ja](https://huggingface.co/datasets/kunishou/HelpSteer-35k-ja)
- [kunishou/HelpSteer2-20k-ja](https://huggingface.co/datasets/kunishou/HelpSteer2-20k-ja)
- rinna Dataset

* **Contributors**

- [Xinqi Chen](https://huggingface.co/Keely0419)
- [Koh Mitsuda](https://huggingface.co/mitsu-koh)
- [Toshiaki Wakatsuki](https://huggingface.co/t-w)
- [Kei Sawada](https://huggingface.co/keisawada)

---

# Benchmarking

Please refer to [rinna's LM benchmark page](https://rinnakk.github.io/research/benchmarks/lm/index.html).

---

# How to use the model

We found that this instruction-tuned model tends to generate repeated text more often than its base counterpart, and thus we set `repetition_penalty=1.1` for better generation performance. The same repetition penalty was applied to the instruction-tuned model in the aforementioned evaluation experiments.

~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "rinna/llama-3-youko-8b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "あなたは誠実で優秀なアシスタントです。どうか、簡潔かつ正直に答えてください。"},
    {"role": "user", "content": "西田幾多郎とはどんな人物ですか?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.convert_tokens_to_ids("<|end_of_text|>"),
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    repetition_penalty=1.1,
)

response = outputs[0][input_ids.shape[-1]:]
response = tokenizer.decode(response, skip_special_tokens=True)
print(response)
~~~~

---

# Tokenization
The model uses the original [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) tokenizer.
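
If you want to inspect the exact Llama-3 chat format this tokenizer produces, the template can be rendered as a string before tokenization. This is a minimal illustration; the message contents are placeholders.

~~~~python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rinna/llama-3-youko-8b-instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},  # placeholder system prompt
    {"role": "user", "content": "Hello!"},                          # placeholder user turn
]

# Render the prompt as text (instead of token ids) to see the
# <|start_header_id|> ... <|eot_id|> markers of the Llama-3 chat format.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
~~~~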

---

# How to cite
```bibtex
@misc{rinna-llama-3-youko-8b-instruct,
    title = {rinna/llama-3-youko-8b-instruct},
    author = {Chen, Xinqi and Mitsuda, Koh and Wakatsuki, Toshiaki and Sawada, Kei},
    url = {https://huggingface.co/rinna/llama-3-youko-8b-instruct}
}

@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    pages = {13898--13905},
    url = {https://aclanthology.org/2024.lrec-main.1213},
    note = {\url{https://arxiv.org/abs/2404.01657}}
}
```
---

# References
```bibtex
@article{llama3modelcard,
    title = {Llama 3 Model Card},
    author = {AI@Meta},
    year = {2024},
    url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}

@article{huang2023chat,
    title = {Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New Languages},
    author = {Huang, Shih-Cheng and Li, Pin-Zu and Hsu, Yu-Chi and Chen, Kuang-Ming and Lin, Yu Tung and Hsiao, Shih-Kai and Tzong-Han Tsai, Richard and Lee, Hung-yi},
    year = {2023},
    url = {https://arxiv.org/abs/2310.04799}
}
```
---

# License
[Meta Llama 3 Community License](https://llama.meta.com/llama3/license/)
llama-3-youko-8b-instruct.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bae4adfee842b9cca81815e9726bef7256884d7b89427b56689354f548761a69
size 4661212864
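
The uploaded file is a Q4_0 GGUF quantization of the instruct model (about 4.66 GB), intended for llama.cpp-compatible runtimes rather than `transformers`. Below is a minimal usage sketch with `llama-cpp-python`; the context size, GPU-offload setting, and sampling values are illustrative choices (the repetition penalty mirrors the README's recommendation), not settings published with this upload.

~~~~python
from llama_cpp import Llama

# Load the quantized model; n_ctx and n_gpu_layers are illustrative values.
llm = Llama(
    model_path="llama-3-youko-8b-instruct.Q4_0.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

messages = [
    # "You are a sincere and excellent assistant. Please answer concisely and honestly."
    {"role": "system", "content": "あなたは誠実で優秀なアシスタントです。どうか、簡潔かつ正直に答えてください。"},
    # "What kind of person was Kitaro Nishida?"
    {"role": "user", "content": "西田幾多郎とはどんな人物ですか?"},
]

# Recent llama-cpp-python builds pick up the Llama-3 chat template stored in the GGUF
# metadata; older builds may need chat_format="llama-3" passed to Llama(...).
output = llm.create_chat_completion(
    messages,
    max_tokens=512,
    temperature=0.6,
    top_p=0.9,
    repeat_penalty=1.1,
)
print(output["choices"][0]["message"]["content"])
~~~~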