RichardErkhov committed
Commit 67bbafb
1 Parent(s): 3aaf792

uploaded readme

Files changed (1): README.md added (+149 lines)
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


japanese-stablelm-base-gamma-7b - GGUF
- Model creator: https://huggingface.co/stabilityai/
- Original model: https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [japanese-stablelm-base-gamma-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.Q2_K.gguf) | Q2_K | 2.53GB |
| [japanese-stablelm-base-gamma-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [japanese-stablelm-base-gamma-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [japanese-stablelm-base-gamma-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [japanese-stablelm-base-gamma-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [japanese-stablelm-base-gamma-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.Q3_K.gguf) | Q3_K | 3.28GB |
| [japanese-stablelm-base-gamma-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [japanese-stablelm-base-gamma-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [japanese-stablelm-base-gamma-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [japanese-stablelm-base-gamma-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.Q4_0.gguf) | Q4_0 | 3.83GB |
| [japanese-stablelm-base-gamma-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [japanese-stablelm-base-gamma-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [japanese-stablelm-base-gamma-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.Q4_K.gguf) | Q4_K | 4.07GB |
| [japanese-stablelm-base-gamma-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [japanese-stablelm-base-gamma-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.Q4_1.gguf) | Q4_1 | 4.24GB |
| [japanese-stablelm-base-gamma-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.Q5_0.gguf) | Q5_0 | 4.65GB |
| [japanese-stablelm-base-gamma-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [japanese-stablelm-base-gamma-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.Q5_K.gguf) | Q5_K | 4.78GB |
| [japanese-stablelm-base-gamma-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [japanese-stablelm-base-gamma-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.Q5_1.gguf) | Q5_1 | 5.07GB |
| [japanese-stablelm-base-gamma-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.Q6_K.gguf) | Q6_K | 5.53GB |
| [japanese-stablelm-base-gamma-7b.Q8_0.gguf](https://huggingface.co/RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf/blob/main/japanese-stablelm-base-gamma-7b.Q8_0.gguf) | Q8_0 | 7.17GB |
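
To run one of these quants locally, a common path is to download a single file and load it with a GGUF-capable runtime. Below is a minimal sketch using `huggingface_hub` and `llama-cpp-python`; both package choices and the `Q4_K_M` pick are assumptions, and any llama.cpp-compatible runtime with any filename from the table above works the same way.

```python
# Minimal sketch (assumptions: `huggingface_hub` and `llama-cpp-python` are
# installed, and the filename matches a row in the table above).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch one quant file from this repo; Q4_K_M is a common size/quality trade-off.
model_path = hf_hub_download(
    repo_id="RichardErkhov/stabilityai_-_japanese-stablelm-base-gamma-7b-gguf",
    filename="japanese-stablelm-base-gamma-7b.Q4_K_M.gguf",
)

# Load the GGUF file and sample a short Japanese completion.
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("AI で科学研究を加速するには、", max_tokens=64, temperature=0.75, top_p=0.95)
print(out["choices"][0]["text"])
```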

Original model description:
---
license: apache-2.0
tags:
- japanese-stablelm
- causal-lm
pipeline_tag: text-generation
datasets:
- wikipedia
- mc4
- cc100
- oscar-corpus/OSCAR-2301
- oscar-corpus/OSCAR-2201
- cerebras/SlimPajama-627B
language:
- ja
extra_gated_fields:
  Name: text
  Email: text
  Country: text
  Organization or Affiliation: text
  I allow Stability AI to contact me about information related to its models and research: checkbox
---

# Japanese Stable LM Base Gamma 7B

## Model Description

This is a 7B-parameter decoder-only language model with a focus on maximizing Japanese language modeling performance and Japanese downstream task performance.
We continued pretraining the English language model [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on Japanese data to transfer its knowledge and capabilities to Japanese.

*If you are looking for an instruction-following model, check [Japanese Stable LM Instruct Gamma 7B](https://huggingface.co/stabilityai/japanese-stablelm-instruct-gamma-7b).*

*If you are in search of a smaller model, please check [Japanese StableLM-3B-4E1T Base](https://huggingface.co/stabilityai/japanese-stablelm-3b-4e1t-base).*


## Usage

Ensure you are using Transformers 4.34.0 or newer.
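
One way to fail fast on an older install is a quick version check; this is a small sketch, and `packaging` is assumed available (it is a dependency of `transformers`):

```python
# Sketch: verify the installed transformers version meets the 4.34.0 minimum.
from packaging import version
import transformers

assert version.parse(transformers.__version__) >= version.parse("4.34.0"), (
    f"transformers {transformers.__version__} is too old; upgrade to >= 4.34.0"
)
```

With a suitable version installed, the following generates a short continuation from a Japanese prompt: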
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/japanese-stablelm-base-gamma-7b")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/japanese-stablelm-base-gamma-7b",
    torch_dtype="auto",  # load weights in the checkpoint's native dtype
)
model.cuda()  # move the model to the GPU; requires a CUDA device

# Prompt: "To accelerate scientific research with AI,"
inputs = tokenizer("AI で科学研究を加速するには、", return_tensors="pt").to("cuda")

# Sample a 64-token continuation with moderate temperature and nucleus sampling
tokens = model.generate(
    **inputs,
    max_new_tokens=64,
    temperature=0.75,
    top_p=0.95,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

## Model Details

* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `Japanese Stable LM Base Gamma 7B` is an auto-regressive language model based on the transformer decoder architecture.
* **Language(s)**: Japanese
* **License**: This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
* **Contact**: For questions and comments about the model, please join [Stable Community Japan](https://discord.gg/StableJP). For future announcements and information about Stability AI models, research, and events, please follow https://twitter.com/StabilityAI_JP.

### Model Architecture

For details, please see Mistral AI's [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).
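
Since the card defers architecture details to the Mistral paper, a quick way to see the concrete hyperparameters is to load the model's config from the Hub; this is a small sketch, and because the model is Mistral-based the resolved object is a `MistralConfig`:

```python
# Sketch: print the architecture hyperparameters (hidden size, attention heads,
# number of layers, sliding-window size, etc.) without downloading the weights.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("stabilityai/japanese-stablelm-base-gamma-7b")
print(config)
```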

### Training Dataset

Around 100B tokens from a mixture of the following corpora were used for continued pretraining.

- [Japanese/English Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
- [Japanese mc4](https://huggingface.co/datasets/mc4)
- [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz)
- [Japanese OSCAR](https://oscar-project.github.io/documentation/)
- [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B) without the Books3 subset

## Use and Limitations

### Intended Use

The model is intended to be used by all individuals as a foundational model for application-specific fine-tuning, without strict limitations on commercial use.

### Limitations and bias

The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters, and this can be reflected in model-generated text. We recommend that users exercise reasonable caution when using these models in production systems. Do not use the model for any applications that may cause harm or distress to individuals or groups.

## Credits

The continued pre-training was carried out by [Takuya Akiba](https://huggingface.co/iwiwi).
Other aspects, including data preparation and evaluation, were handled by the Language Team of Stability AI Japan, notably [Meng Lee](https://huggingface.co/leemeng), [Fujiki Nakamura](https://huggingface.co/fujiki), [Makoto Shing](https://huggingface.co/mkshing), [Paul McCann](https://huggingface.co/polm-stability), and [Naoki Orii](https://huggingface.co/mrorii).


## Acknowledgements

This model is based on Mistral-7B-v0.1 released by the Mistral AI team. We are grateful to the Mistral AI team for providing such an excellent base model.

We are grateful to the EleutherAI Polyglot-JA team for helping us collect a large amount of Japanese pre-training data. Polyglot-JA members include Hyunwoong Ko (Project Lead), Fujiki Nakamura (who originally started this project while committed to the Polyglot team), Yunho Mo, Minji Jung, KeunSeok Im, and Su-Kyeong Jang.

We are also appreciative of [AI Novelist/Sta (Bit192, Inc.)](https://ai-novel.com/index.php) and the numerous contributors from [Stable Community Japan](https://discord.gg/VPrcE475HB) for assisting us in gathering a large amount of high-quality Japanese textual data for model training.