RichardErkhov committed on
Commit
88070ae
1 Parent(s): f4b7bbd

uploaded readme

Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

RakutenAI-7B-chat - GGUF
- Model creator: https://huggingface.co/Rakuten/
- Original model: https://huggingface.co/Rakuten/RakutenAI-7B-chat/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [RakutenAI-7B-chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q2_K.gguf) | Q2_K | 2.6GB |
| [RakutenAI-7B-chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.IQ3_XS.gguf) | IQ3_XS | 2.89GB |
| [RakutenAI-7B-chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.IQ3_S.gguf) | IQ3_S | 3.04GB |
| [RakutenAI-7B-chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q3_K_S.gguf) | Q3_K_S | 3.02GB |
| [RakutenAI-7B-chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.IQ3_M.gguf) | IQ3_M | 3.14GB |
| [RakutenAI-7B-chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q3_K.gguf) | Q3_K | 3.35GB |
| [RakutenAI-7B-chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q3_K_M.gguf) | Q3_K_M | 3.35GB |
| [RakutenAI-7B-chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q3_K_L.gguf) | Q3_K_L | 3.64GB |
| [RakutenAI-7B-chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.IQ4_XS.gguf) | IQ4_XS | 3.76GB |
| [RakutenAI-7B-chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q4_0.gguf) | Q4_0 | 3.91GB |
| [RakutenAI-7B-chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.IQ4_NL.gguf) | IQ4_NL | 3.95GB |
| [RakutenAI-7B-chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q4_K_S.gguf) | Q4_K_S | 3.94GB |
| [RakutenAI-7B-chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q4_K.gguf) | Q4_K | 4.15GB |
| [RakutenAI-7B-chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q4_K_M.gguf) | Q4_K_M | 4.15GB |
| [RakutenAI-7B-chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q4_1.gguf) | Q4_1 | 4.33GB |
| [RakutenAI-7B-chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q5_0.gguf) | Q5_0 | 4.75GB |
| [RakutenAI-7B-chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q5_K_S.gguf) | Q5_K_S | 4.75GB |
| [RakutenAI-7B-chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q5_K.gguf) | Q5_K | 4.87GB |
| [RakutenAI-7B-chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q5_K_M.gguf) | Q5_K_M | 4.87GB |
| [RakutenAI-7B-chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q5_1.gguf) | Q5_1 | 5.16GB |
| [RakutenAI-7B-chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q6_K.gguf) | Q6_K | 5.63GB |
| [RakutenAI-7B-chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/Rakuten_-_RakutenAI-7B-chat-gguf/blob/main/RakutenAI-7B-chat.Q8_0.gguf) | Q8_0 | 7.3GB |
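
As a rough guide to the size/quality trade-off, the file sizes in the table can be converted to approximate bits per weight. This is a minimal sketch, assuming roughly 7.4B parameters (an approximation: Mistral-7B plus the extended 48k vocabulary); GGUF files also carry some non-weight overhead, so treat the results as ballpark figures only:

```python
# Rough bits-per-weight estimate for a few of the quants in the table above.
PARAMS = 7.4e9  # assumed parameter count, not an official figure

def bits_per_weight(size_gb: float, params: float = PARAMS) -> float:
    # file size (GB) -> bits, divided by the number of weights
    return size_gb * 1e9 * 8 / params

for name, gb in [("Q2_K", 2.6), ("Q4_K_M", 4.15), ("Q8_0", 7.3)]:
    print(f"{name}: ~{bits_per_weight(gb):.1f} bits/weight")
```

Smaller quants trade precision for memory: Q2_K fits in far less RAM than Q8_0, at a noticeable quality cost.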



Original model description:
---
license: apache-2.0
---
# RakutenAI-7B-chat
## Model Description
RakutenAI-7B is a systematic initiative that brings the latest technologies to the world of Japanese LLMs. RakutenAI-7B achieves the best scores on the Japanese language understanding benchmarks while maintaining competitive performance on the English test sets among similar models such as OpenCalm, Elyza, Youri, Nekomata and Swallow. RakutenAI-7B leverages the Mistral model architecture and is based on the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) pre-trained checkpoint, exemplifying a successful retrofitting of the pre-trained model weights. Moreover, we extend Mistral's vocabulary from 32k to 48k to offer a better character-per-token rate for Japanese.
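
The character-per-token rate mentioned above is simply the number of input characters divided by the number of tokens the tokenizer produces; a larger Japanese vocabulary covers more characters per token, so the same text costs fewer tokens. A toy illustration (the tokenizations below are invented for demonstration and do not come from the real tokenizer):

```python
def chars_per_token(text: str, tokens: list[str]) -> float:
    # Average number of characters covered by one token.
    return len(text) / len(tokens)

text = "今日は良い天気です"                        # 9 characters
coarse = ["今日", "は", "良い", "天気", "です"]    # 5 tokens (word-level vocab)
char_level = list(text)                            # 9 tokens (character fallback)

print(chars_per_token(text, coarse))      # higher rate: fewer tokens per text
print(chars_per_token(text, char_level))  # rate of 1.0: one token per character
```

With the real model you would compute the same ratio using `AutoTokenizer.from_pretrained(...)` on a Japanese corpus.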

*The technical report can be accessed at [arXiv](https://arxiv.org/abs/2403.15484).*

*If you are looking for a foundation model, check [RakutenAI-7B](https://huggingface.co/Rakuten/RakutenAI-7B)*.

*If you are looking for an instruction-tuned model, check [RakutenAI-7B-instruct](https://huggingface.co/Rakuten/RakutenAI-7B-instruct)*.

An independent evaluation by Kamata et al. for [Nejumi LLMリーダーボード Neo](https://wandb.ai/wandb-japan/llm-leaderboard/reports/Nejumi-LLM-Neo--Vmlldzo2MTkyMTU0#総合評価), using a weighted average of [llm-jp-eval](https://github.com/llm-jp/llm-jp-eval) and [Japanese MT-bench](https://github.com/Stability-AI/FastChat/tree/jp-stable/fastchat/llm_judge), also confirms the highest performance of the chat/instruct versions of RakutenAI-7B among open LLMs of similar sizes, with scores of 0.393/0.331 respectively, as of 22 March 2024.

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "Rakuten/RakutenAI-7B-chat"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype="auto", device_map="auto")
model.eval()

requests = [
    "「馬が合う」はどう言う意味ですか",
    "How to make an authentic Spanish Omelette?",
]

system_message = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {user_input} ASSISTANT:"

for req in requests:
    input_req = system_message.format(user_input=req)
    input_ids = tokenizer.encode(input_req, return_tensors="pt").to(device=model.device)
    tokens = model.generate(
        input_ids,
        max_new_tokens=1024,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Decode only the newly generated tokens, skipping the prompt.
    out = tokenizer.decode(tokens[0][len(input_ids[0]):], skip_special_tokens=True)
    print("USER:\n" + req)
    print("ASSISTANT:\n" + out)
    print()
    print()
```
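
The template above covers a single turn. A minimal sketch of a multi-turn prompt builder, under the assumption that conversations simply chain `USER:`/`ASSISTANT:` exchanges in the same format (the multi-turn convention and the `build_prompt` helper are assumptions, not stated in the model card):

```python
# System preamble taken from the single-turn template above.
SYSTEM = ("A chat between a curious user and an artificial intelligence "
          "assistant. The assistant gives helpful, detailed, and polite "
          "answers to the user's questions.")

def build_prompt(turns: list[tuple[str, str]], next_user: str) -> str:
    # Chain completed (user, assistant) exchanges, then open a new turn
    # ending in "ASSISTANT:" so the model continues from there.
    parts = [SYSTEM]
    for user, assistant in turns:
        parts.append(f"USER: {user} ASSISTANT: {assistant}")
    parts.append(f"USER: {next_user} ASSISTANT:")
    return " ".join(parts)

prompt = build_prompt([("Hello!", "Hi, how can I help?")], "Tell me a joke.")
```

With an empty history, `build_prompt([], req)` reproduces the single-turn `system_message.format(user_input=req)` string exactly.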

## Model Details

* **Developed by**: [Rakuten Group, Inc.](https://ai.rakuten.com/)
* **Language(s)**: Japanese, English
* **License**: This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
* **Instruction-Tuning Dataset**: We fine-tune our foundation model to create RakutenAI-7B-instruct and RakutenAI-7B-chat using a mix of open-source and internally hand-crafted datasets. We use the `train` split of the following datasets (CC BY-SA license) for the instruction-tuned and chat-tuned models:
  - [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88)
  - [RTE](https://nlp.ist.i.kyoto-u.ac.jp/?Textual+Entailment+%E8%A9%95%E4%BE%A1%E3%83%87%E3%83%BC%E3%82%BF)
  - [KUCI](https://nlp.ist.i.kyoto-u.ac.jp/?KUCI)
  - [BELEBELE](https://huggingface.co/datasets/facebook/belebele)
  - [JCS](https://aclanthology.org/2022.lrec-1.317/)
  - [JNLI](https://aclanthology.org/2022.lrec-1.317/)
  - [Dolly-15K](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
  - [OpenAssistant1](https://huggingface.co/datasets/OpenAssistant/oasst1)


### Limitations and Bias

The suite of RakutenAI-7B models is capable of generating human-like text on a wide range of topics. However, like all LLMs, they have limitations and can produce biased, inaccurate, or unsafe outputs. Please exercise caution and judgement while interacting with them.

## Citation
For citing our work on the suite of RakutenAI-7B models, please use:

```
@misc{rakutengroup2024rakutenai7b,
      title={RakutenAI-7B: Extending Large Language Models for Japanese},
      author={{Rakuten Group, Inc.} and Aaron Levine and Connie Huang and Chenguang Wang and Eduardo Batista and Ewa Szymanska and Hongyi Ding and Hou Wei Chou and Jean-François Pessiot and Johanes Effendi and Justin Chiu and Kai Torben Ohlhus and Karan Chopra and Keiji Shinzato and Koji Murakami and Lee Xiong and Lei Chen and Maki Kubota and Maksim Tkachenko and Miroku Lee and Naoki Takahashi and Prathyusha Jwalapuram and Ryutaro Tatsushima and Saurabh Jain and Sunil Kumar Yadav and Ting Cai and Wei-Te Chen and Yandi Xia and Yuki Nakayama and Yutaka Higashiyama},
      year={2024},
      eprint={2403.15484},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```