mav23 committed on
Commit
2a56cd9
1 Parent(s): 4fc836d

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ japanese-stablelm-3b-4e1t-base.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,109 @@
+ ---
+ license: apache-2.0
+ tags:
+ - japanese-stablelm
+ - causal-lm
+ pipeline_tag: text-generation
+ datasets:
+ - wikipedia
+ - mc4
+ - cc100
+ - oscar-corpus/OSCAR-2301
+ - oscar-corpus/OSCAR-2201
+ - cerebras/SlimPajama-627B
+ language:
+ - ja
+ extra_gated_fields:
+   Name: text
+   Email: text
+   Country: text
+   Organization or Affiliation: text
+   I allow Stability AI to contact me about information related to its models and research: checkbox
+ ---
+
+ # Japanese StableLM-3B-4E1T Base
+
+ ## Model Description
+
+ This is a 3B-parameter decoder-only language model focused on maximizing Japanese language modeling and Japanese downstream task performance.
+ We continued pretraining the English language model [StableLM-3B-4E1T](https://huggingface.co/stabilityai/stablelm-3b-4e1t/) on Japanese data to transfer its knowledge and capabilities to Japanese.
+
+ *If you are looking for an instruction-following model, please check [Japanese StableLM-3B-4E1T Instruct](https://huggingface.co/stabilityai/japanese-stablelm-3b-4e1t-instruct).*
+
+ *If you are in search of a larger model, please check [Japanese Stable LM Base Gamma 7B](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b).*
+
+
+ ## Usage
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("stabilityai/japanese-stablelm-3b-4e1t-base")
+ # trust_remote_code is required because the repository ships custom modeling code.
+ model = AutoModelForCausalLM.from_pretrained(
+     "stabilityai/japanese-stablelm-3b-4e1t-base",
+     trust_remote_code=True,
+     torch_dtype="auto",
+ )
+ model.cuda()
+
+ # Prompt: "To accelerate scientific research with AI,"
+ inputs = tokenizer("AI で科学研究を加速するには、", return_tensors="pt").to("cuda")
+ tokens = model.generate(
+     **inputs,
+     max_new_tokens=64,
+     temperature=0.75,
+     top_p=0.95,
+     do_sample=True,
+ )
+ print(tokenizer.decode(tokens[0], skip_special_tokens=True))
+ ```
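+
+ This repository also adds a 4-bit (Q4_0) GGUF conversion of the model (see the file entry at the end of this commit). As a minimal sketch, assuming the third-party `llama-cpp-python` bindings (`pip install llama-cpp-python`) and a locally downloaded copy of the file, it can be run without a GPU:
+
+ ```python
+ # Hedged sketch, not an official example: run the Q4_0 GGUF via llama.cpp bindings.
+ from llama_cpp import Llama
+
+ llm = Llama(
+     model_path="japanese-stablelm-3b-4e1t-base.Q4_0.gguf",  # file added in this commit
+     n_ctx=4096,  # matches the model's 4096-token sequence length
+ )
+ # Same prompt and sampling settings as the transformers example above.
+ out = llm("AI で科学研究を加速するには、", max_tokens=64, temperature=0.75, top_p=0.95)
+ print(out["choices"][0]["text"])
+ ```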
+
+ ## Model Details
+
+ * **Developed by**: [Stability AI](https://stability.ai/)
+ * **Model type**: `Japanese StableLM-3B-4E1T Base` is an auto-regressive language model based on the transformer decoder architecture.
+ * **Language(s)**: Japanese
+ * **License**: This model is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
+ * **Contact**: For questions and comments about the model, please join [Stable Community Japan](https://discord.gg/StableJP). For future announcements and information about Stability AI models, research, and events, please follow https://twitter.com/StabilityAI_JP.
+
+ ### Model Architecture
+
+ The model is a decoder-only transformer similar to the LLaMA ([Touvron et al., 2023](https://arxiv.org/abs/2307.09288)) architecture, with the following modifications:
+
+ | Parameters | Hidden Size | Layers | Heads | Sequence Length |
+ |----------------|-------------|--------|-------|-----------------|
+ | 2,795,443,200 | 2560 | 32 | 32 | 4096 |
+
+ * **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864)) applied to the first 25% of head embedding dimensions for improved throughput, following [Black et al. (2022)](https://arxiv.org/pdf/2204.06745.pdf); see the sketch after this list.
+ * **Normalization**: LayerNorm ([Ba et al., 2016](https://arxiv.org/abs/1607.06450)) with learned bias terms, as opposed to RMSNorm ([Zhang & Sennrich, 2019](https://arxiv.org/abs/1910.07467)).
+ * **Tokenizer**: GPT-NeoX ([Black et al., 2022](https://arxiv.org/abs/2204.06745)).
+
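+ The partial-rotary detail is easy to miss, so here is an illustrative sketch (assumed shapes and helper names, not the model's actual implementation) of applying rotary embeddings to only the first 25% of each head's dimensions. With hidden size 2560 and 32 heads, each head has 80 dimensions, of which 20 are rotated:
+
+ ```python
+ # Hedged sketch of partial rotary position embeddings (GPT-NeoX style).
+ import torch
+
+ def partial_rope(x: torch.Tensor, rotary_pct: float = 0.25, base: float = 10000.0):
+     """x: (batch, heads, seq, head_dim); rotate the first rotary_pct of dims."""
+     head_dim = x.shape[-1]                # 2560 hidden / 32 heads = 80 here
+     rot_dim = int(head_dim * rotary_pct)  # 25% -> 20 rotated dims
+     x_rot, x_pass = x[..., :rot_dim], x[..., rot_dim:]
+
+     # Standard RoPE frequencies, computed over the rotated dims only.
+     inv_freq = 1.0 / (base ** (torch.arange(0, rot_dim, 2) / rot_dim))
+     pos = torch.arange(x.shape[-2], dtype=torch.float32)
+     freqs = torch.outer(pos, inv_freq)        # (seq, rot_dim / 2)
+     emb = torch.cat((freqs, freqs), dim=-1)   # (seq, rot_dim)
+     cos, sin = emb.cos(), emb.sin()
+
+     # rotate_half: (x1, x2) -> (-x2, x1)
+     x1, x2 = x_rot[..., : rot_dim // 2], x_rot[..., rot_dim // 2 :]
+     rotated = torch.cat((-x2, x1), dim=-1)
+     x_rot = x_rot * cos + rotated * sin
+
+     # The remaining 75% of dimensions pass through unrotated.
+     return torch.cat((x_rot, x_pass), dim=-1)
+ ```
+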
+ ### Training Dataset
+
+ Around 100B tokens from a mixture of the following corpora were used for the continued pretraining:
+
+ - [Japanese/English Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
+ - [Japanese mc4](https://huggingface.co/datasets/mc4)
+ - [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz)
+ - [Japanese OSCAR](https://oscar-project.github.io/documentation/)
+ - [SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B) (excluding the Books3 subset)
+
+
+ ## Use and Limitations
+
+ ### Intended Use
+
+ The model is intended to be used by all individuals as a foundational model for application-specific fine-tuning, without strict limitations on commercial use.
+
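+ As a minimal sketch of what such application-specific fine-tuning could look like (the `Trainer` setup and the `my_corpus.txt` corpus below are illustrative assumptions, not part of the original card):
+
+ ```python
+ # Hedged sketch, not an official recipe: causal-LM fine-tuning with transformers.
+ from datasets import load_dataset
+ from transformers import (
+     AutoModelForCausalLM,
+     AutoTokenizer,
+     DataCollatorForLanguageModeling,
+     Trainer,
+     TrainingArguments,
+ )
+
+ name = "stabilityai/japanese-stablelm-3b-4e1t-base"
+ tokenizer = AutoTokenizer.from_pretrained(name)
+ model = AutoModelForCausalLM.from_pretrained(name, trust_remote_code=True)
+
+ # Placeholder corpus: one training example per line.
+ ds = load_dataset("text", data_files={"train": "my_corpus.txt"})
+ ds = ds.map(
+     lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
+     batched=True,
+     remove_columns=["text"],
+ )
+
+ trainer = Trainer(
+     model=model,
+     args=TrainingArguments(output_dir="ft-out", per_device_train_batch_size=1, num_train_epochs=1),
+     train_dataset=ds["train"],
+     # mlm=False gives the standard next-token (causal) objective.
+     data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
+ )
+ trainer.train()
+ ```
+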
+ ### Limitations and bias
+
+ The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters, and this can be reflected in model-generated text. We recommend that users exercise reasonable caution when using these models in production systems. Do not use the model for any applications that may cause harm or distress to individuals or groups.
+
+ ## Credits
+
+ The continued pre-training was carried out by [Takuya Akiba](https://huggingface.co/iwiwi).
+ Other aspects, including data preparation and evaluation, were handled by the Language Team of Stability AI Japan, notably [Meng Lee](https://huggingface.co/leemeng), [Fujiki Nakamura](https://huggingface.co/fujiki), [Makoto Shing](https://huggingface.co/mkshing), [Paul McCann](https://huggingface.co/polm-stability), and [Naoki Orii](https://huggingface.co/mrorii).
+
+ ## Acknowledgements
+
+ We are grateful to the EleutherAI Polyglot-JA team for helping us collect a large amount of pre-training data in Japanese. The Polyglot-JA members include Hyunwoong Ko (Project Lead), Fujiki Nakamura (who originally started this project during his time on the Polyglot team), Yunho Mo, Minji Jung, KeunSeok Im, and Su-Kyeong Jang.
+
+ We are also grateful to [AI Novelist/Sta (Bit192, Inc.)](https://ai-novel.com/index.php) and the numerous contributors from [Stable Community Japan](https://discord.gg/VPrcE475HB) for assisting us in gathering a large amount of high-quality Japanese textual data for model training.
japanese-stablelm-3b-4e1t-base.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:41a7182c95547e00f7575dcfb584aafcc74db109d655cc380761c58dfb00a379
+ size 1608571520
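
The pointer above records the blob's SHA-256 digest and byte size, which can be used to check a downloaded copy. A minimal sketch (assuming the local file name matches this repository's):

```python
# Hedged sketch: verify the downloaded GGUF against the LFS pointer above.
import hashlib
import os

path = "japanese-stablelm-3b-4e1t-base.Q4_0.gguf"
expected_oid = "41a7182c95547e00f7575dcfb584aafcc74db109d655cc380761c58dfb00a379"
expected_size = 1608571520

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
        h.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert h.hexdigest() == expected_oid, "sha256 mismatch"
print("GGUF verified")
```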