Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


japanese-stablelm-base-beta-70b - GGUF
- Model creator: https://huggingface.co/stabilityai/
- Original model: https://huggingface.co/stabilityai/japanese-stablelm-base-beta-70b/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [japanese-stablelm-base-beta-70b.Q2_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.Q2_K.gguf) | Q2_K | 23.71GB |
| [japanese-stablelm-base-beta-70b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.IQ3_XS.gguf) | IQ3_XS | 24.37GB |
| [japanese-stablelm-base-beta-70b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.IQ3_S.gguf) | IQ3_S | 6.29GB |
| [japanese-stablelm-base-beta-70b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.Q3_K_S.gguf) | Q3_K_S | 3.98GB |
| [japanese-stablelm-base-beta-70b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.IQ3_M.gguf) | IQ3_M | 0.7GB |
| [japanese-stablelm-base-beta-70b.Q3_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.Q3_K.gguf) | Q3_K | 0.41GB |
| [japanese-stablelm-base-beta-70b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.Q3_K_M.gguf) | Q3_K_M | 0.26GB |
| [japanese-stablelm-base-beta-70b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.Q3_K_L.gguf) | Q3_K_L | 0.14GB |
| [japanese-stablelm-base-beta-70b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.IQ4_XS.gguf) | IQ4_XS | 0.0GB |
| [japanese-stablelm-base-beta-70b.Q4_0.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.Q4_0.gguf) | Q4_0 | 0.0GB |
| [japanese-stablelm-base-beta-70b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.IQ4_NL.gguf) | IQ4_NL | 0.27GB |
| [japanese-stablelm-base-beta-70b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.Q4_K_S.gguf) | Q4_K_S | 0.07GB |
| [japanese-stablelm-base-beta-70b.Q4_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.Q4_K.gguf) | Q4_K | 0.0GB |
| [japanese-stablelm-base-beta-70b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.Q4_K_M.gguf) | Q4_K_M | 0.0GB |
| [japanese-stablelm-base-beta-70b.Q4_1.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.Q4_1.gguf) | Q4_1 | 0.0GB |
| [japanese-stablelm-base-beta-70b.Q5_0.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.Q5_0.gguf) | Q5_0 | 0.0GB |
| [japanese-stablelm-base-beta-70b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.Q5_K_S.gguf) | Q5_K_S | 0.0GB |
| [japanese-stablelm-base-beta-70b.Q5_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.Q5_K.gguf) | Q5_K | 0.0GB |
| [japanese-stablelm-base-beta-70b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.Q5_K_M.gguf) | Q5_K_M | 0.0GB |
| [japanese-stablelm-base-beta-70b.Q5_1.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.Q5_1.gguf) | Q5_1 | 0.0GB |
| [japanese-stablelm-base-beta-70b.Q6_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.Q6_K.gguf) | Q6_K | 0.0GB |
| [japanese-stablelm-base-beta-70b.Q8_0.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf/blob/main/japanese-stablelm-base-beta-70b.Q8_0.gguf) | Q8_0 | 0.0GB |
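To fetch one of the files above programmatically, the direct-download URL can be built from the repository id and the quant name. This is a minimal sketch; the `gguf_url` helper is ours, not part of any library (`huggingface_hub.hf_hub_download` is an alternative if you prefer a managed download with caching and resume support):

```python
# Minimal sketch: build the direct-download URL for a quant listed above.
# The table links to the `blob/main` page; the raw file lives under `resolve/main`.
REPO_ID = "RichardErkhov/stabilityai_-_japanese-stablelm-base-beta-70b-gguf"
BASE_NAME = "japanese-stablelm-base-beta-70b"

def gguf_url(quant: str) -> str:
    """Return the raw-file URL for a quant method such as 'Q4_K_M'."""
    return f"https://huggingface.co/{REPO_ID}/resolve/main/{BASE_NAME}.{quant}.gguf"

print(gguf_url("Q4_K_M"))
```

The resulting URL can be passed to any downloader (e.g. `wget -c` to allow resuming, since these files are tens of gigabytes).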



Original model description:
---
language:
- ja
tags:
- japanese-stablelm
- causal-lm
pipeline_tag: text-generation
datasets:
- wikipedia
- mc4
- cc100
- oscar-corpus/OSCAR-2301
- oscar-corpus/OSCAR-2201
- cerebras/SlimPajama-627B
license:
- llama2
extra_gated_fields:
  Name: text
  Email: text
  Country: text
  Organization or Affiliation: text
  I allow Stability AI to contact me about information related to its models and research: checkbox
---

# Japanese-StableLM-Base-Beta-70B

![A cute robot wearing a kimono writes calligraphy with one single brush](./japanese-stablelm-robot.jpg)

> A cute robot wearing a kimono writes calligraphy with one single brush — [Stable Diffusion XL](https://clipdrop.co/stable-diffusion)

## Model Description

`japanese-stablelm-base-beta-70b` is a 70B-parameter decoder-only language model based on [Llama-2-70b](https://huggingface.co/meta-llama/Llama-2-70b) that has been fine-tuned on a diverse collection of Japanese data, with the intent of maximizing downstream performance on Japanese language tasks.

For an instruction-following model, check [Japanese-StableLM-Instruct-Beta-70B](https://huggingface.co/stabilityai/japanese-stablelm-instruct-beta-70b). The base and instruct models are also available in smaller 7B sizes. For a model with faster inference times, see [Japanese-StableLM-Base-JA_Vocab-Beta-7B](https://huggingface.co/stabilityai/japanese-stablelm-base-ja_vocab-beta-7b) or [the instruction-following version](https://huggingface.co/stabilityai/japanese-stablelm-instruct-ja_vocab-beta-7b).

## Usage

First, install the additional dependencies in [requirements.txt](./requirements.txt):

```sh
pip install -r requirements.txt
```

Then start generating text with `japanese-stablelm-base-beta-70b` using the following code snippet:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "stabilityai/japanese-stablelm-base-beta-70b"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The next line may need to be modified depending on the environment
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    device_map="auto",
)

prompt = """
AI で科学研究を加速するには、
""".strip()

input_ids = tokenizer.encode(
    prompt,
    add_special_tokens=True,
    return_tensors="pt"
)

# Fix the seed for reproducibility;
# feel free to change it to get different results.
seed = 23
torch.manual_seed(seed)

tokens = model.generate(
    input_ids.to(device=model.device),
    max_new_tokens=128,
    temperature=0.99,
    top_p=0.95,
    do_sample=True,
)

out = tokenizer.decode(tokens[0], skip_special_tokens=True)
print(out)
```

We suggest experimenting with different generation configurations (`top_p`, `repetition_penalty`, etc.) to find the best setup for your tasks: for example, a higher temperature for roleplay tasks and a lower temperature for reasoning.
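The suggestion above can be captured as keyword presets passed to `model.generate`. The specific values below are illustrative assumptions for demonstration, not tuned recommendations from the model authors:

```python
# Illustrative sampling presets; the exact values are assumptions,
# not recommendations from the model card.
PRESETS = {
    "roleplay":  dict(do_sample=True, temperature=1.1, top_p=0.95, repetition_penalty=1.1),
    "reasoning": dict(do_sample=True, temperature=0.3, top_p=0.9,  repetition_penalty=1.05),
}

# Hypothetical usage with the snippet above:
# tokens = model.generate(input_ids.to(model.device), max_new_tokens=128, **PRESETS["reasoning"])
```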

## Model Details

* **Model type**: `japanese-stablelm-base-beta-70b` is an auto-regressive language model based on the Llama 2 transformer architecture.
* **Language(s)**: Japanese
* **License**: [Llama2 Community License](https://ai.meta.com/llama/license/).
* **Contact**: For questions and comments about the model, please join [Stable Community Japan](https://discord.gg/StableJP). For future announcements and information about Stability AI models, research, and events, please follow https://twitter.com/StabilityAI_JP.

## Training Dataset

Roughly 100B tokens from a mixture of the following corpora were used for continued pre-training.

- [Japanese/English Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [Japanese mc4](https://huggingface.co/datasets/mc4)
- [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz)
- [Japanese OSCAR](https://oscar-project.github.io/documentation/)
- [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B) (excluding the Books3 subset)

## Use and Limitations

### Intended Use

The model is intended to be used by all individuals as a foundation for application-specific fine-tuning, without strict limitations on commercial use.

### Limitations and bias

The pre-training dataset may have contained offensive or inappropriate content, even after the application of data cleansing filters, and this can be reflected in the model's generated text. We recommend that users exercise reasonable caution when using these models in production systems. Do not use the model for any applications that may cause harm or distress to individuals or groups.

## Authors

This model was developed by the Research & Development team at Stability AI Japan, and the development was co-led by [Takuya Akiba](https://huggingface.co/iwiwi) and [Meng Lee](https://huggingface.co/leemeng). The members of the team are as follows:

- [Meng Lee](https://huggingface.co/leemeng)
- [Fujiki Nakamura](https://huggingface.co/fujiki)
- [Makoto Shing](https://huggingface.co/mkshing)
- [Paul McCann](https://huggingface.co/polm-stability)
- [Takuya Akiba](https://huggingface.co/iwiwi)
- [Naoki Orii](https://huggingface.co/mrorii)

## Acknowledgements

We thank Meta Research for releasing Llama 2 under an open license for others to build on.

We are grateful for the contributions of the EleutherAI Polyglot-JA team in helping us collect a large amount of Japanese pre-training data. Polyglot-JA members include Hyunwoong Ko (Project Lead), Fujiki Nakamura (who originally started this project when he committed to the Polyglot team), Yunho Mo, Minji Jung, KeunSeok Im, and Su-Kyeong Jang.

We also appreciate [AI Novelist/Sta (Bit192, Inc.)](https://ai-novel.com/index.php) and the numerous contributors from [Stable Community Japan](https://discord.gg/VPrcE475HB) for assisting us in gathering a large amount of high-quality Japanese textual data for model training.

## How to cite

```
@misc{JapaneseStableLMBaseBeta70B,
  url={https://huggingface.co/stabilityai/japanese-stablelm-base-beta-70b},
  title={Japanese StableLM Base Beta 70B},
  author={Lee, Meng and Nakamura, Fujiki and Shing, Makoto and McCann, Paul and Akiba, Takuya and Orii, Naoki}
}
```