---
license: apache-2.0
---
# RakutenAI-7B
## Model Description
RakutenAI-7B is a systematic initiative that brings the latest technologies to the world of Japanese LLMs. RakutenAI-7B achieves the best scores on Japanese language understanding benchmarks while maintaining competitive performance on English test sets, compared with similar models such as OpenCalm, Elyza, Youri, Nekomata, and Swallow. RakutenAI-7B leverages the Mistral model architecture and is based on the [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) pre-trained checkpoint, exemplifying a successful retrofitting of pre-trained model weights. Moreover, we extend Mistral's vocabulary from 32k to 48k tokens to offer a better character-per-token rate for Japanese.
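
The effect of the extended vocabulary is easy to inspect directly. The sketch below is an illustrative check of our own, not part of the official card (the sample sentence is made up): it loads only the tokenizer and measures the character-per-token rate on a short Japanese string.

```python
from transformers import AutoTokenizer

# Load only the tokenizer; no model weights are needed for this check.
tokenizer = AutoTokenizer.from_pretrained("Rakuten/RakutenAI-7B")
print("vocabulary size:", len(tokenizer))  # the card reports an extension from 32k to 48k

# An arbitrary Japanese sentence chosen for illustration.
text = "東京は日本の首都です。"
tokens = tokenizer.tokenize(text)
print("tokens:", tokens)
print(f"characters per token: {len(text) / len(tokens):.2f}")
```

A higher characters-per-token ratio means fewer tokens per Japanese sentence, which in turn means more text fits in the context window and generation is cheaper.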

*If you are looking for an instruction-tuned model, check [RakutenAI-7B-instruct](https://huggingface.co/Rakuten/RakutenAI-7B-instruct).*

*If you are looking for a chat-tuned model, check [RakutenAI-7B-chat](https://huggingface.co/Rakuten/RakutenAI-7B-chat).*

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "Rakuten/RakutenAI-7B"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# torch_dtype="auto" uses the checkpoint's native precision; device_map="auto"
# places the weights on the available GPU(s), falling back to CPU.
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype="auto", device_map="auto")
model.eval()

# One Japanese and one English prompt, matching the model's two target languages.
requests = [
    "南硫黄島原生自然環境保全地域は、自然",
    "The capybara is a giant cavy rodent",
]

for req in requests:
    input_ids = tokenizer.encode(req, return_tensors="pt").to(device=model.device)
    tokens = model.generate(
        input_ids,
        max_new_tokens=256,
        do_sample=True,
        repetition_penalty=1.1,
        pad_token_id=tokenizer.eos_token_id,
    )
    out = tokenizer.decode(tokens[0], skip_special_tokens=True)
    print("INPUT:\n" + req)
    print("OUTPUT:\n" + out)
    print()
    print()
```
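
The snippet above samples stochastically (`do_sample=True`) with a mild repetition penalty, so outputs vary between runs. For a reproducible spot check you can switch to greedy decoding instead; a minimal variant (our sketch, not part of the official card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "Rakuten/RakutenAI-7B"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype="auto", device_map="auto")
model.eval()

# do_sample=False makes generate() deterministic (greedy decoding).
input_ids = tokenizer.encode("The capybara is a giant cavy rodent", return_tensors="pt").to(model.device)
tokens = model.generate(input_ids, max_new_tokens=64, do_sample=False, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```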

## Model Details

* **Developed by**: [Rakuten Group, Inc.](https://ai.rakuten.com/)
* **Language(s)**: Japanese, English
* **License**: This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).

### Limitations and Bias

The RakutenAI-7B models can generate human-like text on a wide range of topics. However, like all LLMs, they have limitations and can produce biased, inaccurate, or unsafe outputs. Please exercise caution and judgement when interacting with them.

## Citation
For citing our work on the suite of RakutenAI-7B models, please use:

```bibtex
@misc{2024RakutenAI-7B,
    title={RakutenAI-7B: Extending Large Language Models for Japanese},
    author={Rakuten Group, Inc.},
    year={2024},
    eprint={},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```