keisawada committed on
Commit 6c13082
1 Parent(s): 237bee2

Update README.md

Files changed (1)
  1. README.md +187 -186
README.md CHANGED
---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: llama3
datasets:
- CohereForAI/aya_dataset
- kunishou/databricks-dolly-15k-ja
- kunishou/HelpSteer-35k-ja
- kunishou/HelpSteer2-20k-ja
- kunishou/hh-rlhf-49k-ja
- kunishou/oasst1-chat-44k-ja
- kunishou/oasst2-chat-68k-ja
- meta-math/MetaMathQA
- OpenAssistant/oasst1
- OpenAssistant/oasst2
- sahil2801/CodeAlpaca-20k
language:
- ja
- en
tags:
- llama
- llama-3
inference: false
base_model: rinna/llama-3-youko-8b
---

# `Llama 3 Youko 8B Instruct (rinna/llama-3-youko-8b-instruct)`

![rinna-icon](./rinna.png)

# Overview

The model is the instruction-tuned version of [rinna/llama-3-youko-8b](https://huggingface.co/rinna/llama-3-youko-8b), trained using supervised fine-tuning (SFT), Chat Vector, and direct preference optimization (DPO). It adopts the Llama-3 chat format.

| Size | Continual Pre-Training | Instruction-Tuning |
| :- | :- | :- |
| 8B | Llama 3 Youko 8B [[HF]](https://huggingface.co/rinna/llama-3-youko-8b) [[GPTQ]](https://huggingface.co/rinna/llama-3-youko-8b-gptq) | Llama 3 Youko 8B Instruct [[HF]](https://huggingface.co/rinna/llama-3-youko-8b-instruct) [[GPTQ]](https://huggingface.co/rinna/llama-3-youko-8b-instruct-gptq) |
| 70B | Llama 3 Youko 70B [[HF]](https://huggingface.co/rinna/llama-3-youko-70b) [[GPTQ]](https://huggingface.co/rinna/llama-3-youko-70b-gptq) | Llama 3 Youko 70B Instruct [[HF]](https://huggingface.co/rinna/llama-3-youko-70b-instruct) [[GPTQ]](https://huggingface.co/rinna/llama-3-youko-70b-instruct-gptq) |

* **Model architecture**

A 32-layer, 4096-hidden-size transformer-based language model. Refer to the [Llama 3 Model Card](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md) for architecture details.
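
If needed, these figures can be double-checked from the released config; the snippet below is only a quick sanity check using the standard Hugging Face config fields.

~~~~python
from transformers import AutoConfig

# Inspect the architecture hyperparameters of the released checkpoint.
config = AutoConfig.from_pretrained("rinna/llama-3-youko-8b-instruct")
print(config.num_hidden_layers, config.hidden_size)  # expected: 32 4096
~~~~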

* **Training: Built with Meta Llama 3**

**Supervised fine-tuning.** The supervised fine-tuning data is a subset of the following datasets.

- [CohereForAI/aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset)
  - The JPN subset was used.
- [FLAN](https://github.com/google-research/FLAN/tree/main/flan/v2)
- [kunishou/databricks-dolly-15k-ja](https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja)
- [kunishou/hh-rlhf-49k-ja](https://huggingface.co/datasets/kunishou/hh-rlhf-49k-ja)
- [kunishou/oasst1-chat-44k-ja](https://huggingface.co/datasets/kunishou/oasst1-chat-44k-ja)
- [kunishou/oasst2-chat-68k-ja](https://huggingface.co/datasets/kunishou/oasst2-chat-68k-ja)
- [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA)
  - The following sections were used: MATH_AnsAug, MATH_Rephrased, MATH_SV, and MATH_FOBAR.
  - The remaining sections, which contain augmented data from commonly used evaluation corpora, were skipped to prevent any possibility of data leakage.
- [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1)
  - The EN and JA subsets were used.
- [OpenAssistant/oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2)
  - The EN and JA subsets were used.
- [sahil2801/CodeAlpaca-20k](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)
- rinna Dataset

**Model merging.** The fine-tuned model (llama-3-youko-8b-sft) was then enhanced through the following chat vector addition. The chat vector was obtained by subtracting the parameter vectors of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) from those of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).

~~~~text
llama-3-youko-8b-sft + 0.5 * (meta-llama/Meta-Llama-3-8B-Instruct - meta-llama/Meta-Llama-3-8B)
~~~~

Here, the embedding layer was skipped while subtracting and adding the parameter vectors.
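
For illustration only, the merge described above could be reproduced roughly as follows. This is a minimal sketch, not rinna's actual merging script: the SFT checkpoint path and the output directory are placeholders, and parameter names follow the Hugging Face Llama implementation.

~~~~python
import torch
from transformers import AutoModelForCausalLM

# Placeholder path: the intermediate SFT checkpoint (llama-3-youko-8b-sft) is not published.
sft_model = AutoModelForCausalLM.from_pretrained("llama-3-youko-8b-sft", torch_dtype=torch.bfloat16)
base_sd = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B", torch_dtype=torch.bfloat16
).state_dict()
inst_sd = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16
).state_dict()

merged_sd = sft_model.state_dict()
for name in merged_sd:
    # The embedding layer is excluded from the chat vector addition, as noted above.
    if "embed_tokens" in name:
        continue
    chat_vector = inst_sd[name] - base_sd[name]
    merged_sd[name] = merged_sd[name] + 0.5 * chat_vector

sft_model.load_state_dict(merged_sd)
sft_model.save_pretrained("llama-3-youko-8b-instruct-merged")  # placeholder output directory
~~~~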
71
+
72
+ **Direct preference optimization** was then applied with a subset of the following datasets to build this instruct model.
73
+
74
+ - [kunishou/HelpSteer-35k-ja](https://huggingface.co/datasets/kunishou/HelpSteer-35k-ja)
75
+ - [kunishou/HelpSteer2-20k-ja](https://huggingface.co/datasets/kunishou/HelpSteer2-20k-ja)
76
+ - rinna Dataset
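
As a rough illustration of what DPO consumes, each training example pairs one preferred and one dispreferred response to the same prompt. The record below is purely hypothetical and only shows the structure; the actual preference data and training framework are not disclosed, and the column names follow the convention used by common libraries such as TRL.

~~~~python
# A single hypothetical preference pair for DPO (illustration only).
preference_pair = {
    "prompt": "日本で一番高い山は何ですか?",  # "What is the highest mountain in Japan?"
    "chosen": "日本で一番高い山は富士山で、標高はおよそ3,776メートルです。",  # preferred: "Mt. Fuji, about 3,776 m."
    "rejected": "日本で一番高い山はエベレストです。",  # dispreferred: "It is Mt. Everest." (incorrect)
}

# DPO fine-tunes the policy so that, relative to a frozen reference model,
# it assigns a larger likelihood margin to "chosen" over "rejected".
~~~~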

* **Contributors**

- [Xinqi Chen](https://huggingface.co/Keely0419)
- [Koh Mitsuda](https://huggingface.co/mitsu-koh)
- [Toshiaki Wakatsuki](https://huggingface.co/t-w)
- [Kei Sawada](https://huggingface.co/keisawada)

---

# Benchmarking

Please refer to [rinna's LM benchmark page](https://rinnakk.github.io/research/benchmarks/lm/index.html).

---

# How to use the model

We found that this instruction-tuned model tends to generate repeated text more often than its base counterpart, so we set `repetition_penalty=1.1` for better generation performance. The same repetition penalty was applied to the instruction-tuned model in the aforementioned evaluation experiments.

~~~~python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "rinna/llama-3-youko-8b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    # System prompt: "You are a sincere and excellent assistant. Please answer concisely and honestly."
    {"role": "system", "content": "あなたは誠実で優秀なアシスタントです。どうか、簡潔かつ正直に答えてください。"},
    # User prompt: "What kind of person is Kitaro Nishida?"
    {"role": "user", "content": "西田幾多郎とはどんな人物ですか?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Stop generation at either the end-of-text or the end-of-turn token.
terminators = [
    tokenizer.convert_tokens_to_ids("<|end_of_text|>"),
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    repetition_penalty=1.1,
)

# Decode only the newly generated tokens, excluding the prompt.
response = outputs[0][input_ids.shape[-1]:]
response = tokenizer.decode(response, skip_special_tokens=True)
print(response)
~~~~
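
To print tokens as they are generated instead of waiting for the full output, the same settings can be combined with transformers' `TextStreamer`; this optional variation reuses the `model`, `tokenizer`, `input_ids`, and `terminators` objects defined above.

~~~~python
from transformers import TextStreamer

# Stream the decoded tokens to stdout as they are produced.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(
    input_ids,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    repetition_penalty=1.1,
    streamer=streamer,
)
~~~~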

---

# Tokenization
The model uses the original [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) tokenizer.
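
For reference, the Llama-3 chat format mentioned in the Overview can be inspected directly through the tokenizer's chat template; the output shown in the comment is indicative, assuming the standard Llama 3 template.

~~~~python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rinna/llama-3-youko-8b-instruct")

# Render a conversation with the bundled chat template without tokenizing it.
messages = [{"role": "user", "content": "こんにちは"}]  # "Hello"
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
# Expected to look roughly like:
# <|begin_of_text|><|start_header_id|>user<|end_header_id|>
#
# こんにちは<|eot_id|><|start_header_id|>assistant<|end_header_id|>
~~~~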

---

# How to cite
```bibtex
@misc{rinna-llama-3-youko-8b-instruct,
    title = {rinna/llama-3-youko-8b-instruct},
    author = {Chen, Xinqi and Mitsuda, Koh and Wakatsuki, Toshiaki and Sawada, Kei},
    url = {https://huggingface.co/rinna/llama-3-youko-8b-instruct}
}

@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    pages = {13898--13905},
    url = {https://aclanthology.org/2024.lrec-main.1213},
    note = {\url{https://arxiv.org/abs/2404.01657}}
}
```
---

# References
```bibtex
@article{llama3modelcard,
    title = {Llama 3 Model Card},
    author = {AI@Meta},
    year = {2024},
    url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}

@article{huang2023chat,
    title = {Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New Languages},
    author = {Huang, Shih-Cheng and Li, Pin-Zu and Hsu, Yu-Chi and Chen, Kuang-Ming and Lin, Yu Tung and Hsiao, Shih-Kai and Tzong-Han Tsai, Richard and Lee, Hung-yi},
    year = {2023},
    url = {https://arxiv.org/abs/2310.04799}
}
```
---

# License
[Meta Llama 3 Community License](https://llama.meta.com/llama3/license/)