afrideva committed
Commit 1d6f7e1 (1 parent: 3ed0e9a)

Create README.md

---
base_model: stabilityai/stablelm-3b-4e1t
datasets:
- tiiuae/falcon-refinedweb
- togethercomputer/RedPajama-Data-1T
- CarperAI/pilev2-dev
- bigcode/starcoderdata
- allenai/peS2o
extra_gated_fields:
  Country: text
  Email: text
  I ALLOW Stability AI to email me about new model releases: checkbox
  Name: text
  Organization or Affiliation: text
inference: false
language:
- en
license: cc-by-sa-4.0
model_creator: stabilityai
model_name: stablelm-3b-4e1t
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- causal-lm
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# stabilityai/stablelm-3b-4e1t-GGUF

Quantized GGUF model files for [stablelm-3b-4e1t](https://huggingface.co/stabilityai/stablelm-3b-4e1t) from [stabilityai](https://huggingface.co/stabilityai).

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [stablelm-3b-4e1t.q2_k.gguf](https://huggingface.co/afrideva/stablelm-3b-4e1t-GGUF/resolve/main/stablelm-3b-4e1t.q2_k.gguf) | q2_k | 1.20 GB |
| [stablelm-3b-4e1t.q3_k_m.gguf](https://huggingface.co/afrideva/stablelm-3b-4e1t-GGUF/resolve/main/stablelm-3b-4e1t.q3_k_m.gguf) | q3_k_m | 1.39 GB |
| [stablelm-3b-4e1t.q4_k_m.gguf](https://huggingface.co/afrideva/stablelm-3b-4e1t-GGUF/resolve/main/stablelm-3b-4e1t.q4_k_m.gguf) | q4_k_m | 1.71 GB |
| [stablelm-3b-4e1t.q5_k_m.gguf](https://huggingface.co/afrideva/stablelm-3b-4e1t-GGUF/resolve/main/stablelm-3b-4e1t.q5_k_m.gguf) | q5_k_m | 1.99 GB |
| [stablelm-3b-4e1t.q6_k.gguf](https://huggingface.co/afrideva/stablelm-3b-4e1t-GGUF/resolve/main/stablelm-3b-4e1t.q6_k.gguf) | q6_k | 2.30 GB |
| [stablelm-3b-4e1t.q8_0.gguf](https://huggingface.co/afrideva/stablelm-3b-4e1t-GGUF/resolve/main/stablelm-3b-4e1t.q8_0.gguf) | q8_0 | 2.97 GB |
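
To run one of these files locally, a minimal sketch with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) might look like the following (the chosen file, sampling settings, and download command are illustrative; any recent llama.cpp build with StableLM support behaves the same way):

```python
# Minimal sketch: load a quantized file with llama-cpp-python and sample from it.
# Assumes `pip install llama-cpp-python` and that the q4_k_m file has been
# downloaded into the working directory, e.g. with:
#   huggingface-cli download afrideva/stablelm-3b-4e1t-GGUF \
#       stablelm-3b-4e1t.q4_k_m.gguf --local-dir .
from llama_cpp import Llama

llm = Llama(model_path="./stablelm-3b-4e1t.q4_k_m.gguf", n_ctx=4096)
output = llm(
    "The weather is always wonderful",
    max_tokens=64,
    temperature=0.75,
    top_p=0.95,
)
print(output["choices"][0]["text"])
```

As a rule of thumb, the lower-bit files trade output quality for memory: `q4_k_m` is a common middle ground, while `q8_0` stays closest to the original weights.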

## Original Model Card:

# `StableLM-3B-4E1T`

## Model Description

`StableLM-3B-4E1T` is a 3 billion parameter decoder-only language model pre-trained on 1 trillion tokens of diverse English and code datasets for 4 epochs.

## Usage

Get started generating text with `StableLM-3B-4E1T` by using the following code snippet:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model; trust_remote_code lets Transformers run the
# model's custom code from the Hub.
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-3b-4e1t")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-3b-4e1t",
    trust_remote_code=True,
    torch_dtype="auto",
)
model.cuda()

# Sample up to 64 new tokens from a short prompt.
inputs = tokenizer("The weather is always wonderful", return_tensors="pt").to("cuda")
tokens = model.generate(
    **inputs,
    max_new_tokens=64,
    temperature=0.75,
    top_p=0.95,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

## Model Details

* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `StableLM-3B-4E1T` models are auto-regressive language models based on the transformer decoder architecture.
* **Language(s)**: English
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: Model checkpoints are licensed under the Creative Commons license ([CC BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/)). Under this license, you must give [credit](https://creativecommons.org/licenses/by/4.0/#) to Stability AI, provide a link to the license, and [indicate if changes were made](https://creativecommons.org/licenses/by/4.0/#). You may do so in any reasonable manner, but not in any way that suggests Stability AI endorses you or your use.
* **Contact**: For questions and comments about the model, please email `lm@stability.ai`.

### Model Architecture

The model is a decoder-only transformer similar to the LLaMA ([Touvron et al., 2023](https://arxiv.org/abs/2307.09288)) architecture with the following modifications:

| Parameters | Hidden Size | Layers | Heads | Sequence Length |
|----------------|-------------|--------|-------|-----------------|
| 2,795,443,200 | 2560 | 32 | 32 | 4096 |

* **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864)) applied to the first 25% of head embedding dimensions for improved throughput, following [Black et al. (2022)](https://arxiv.org/pdf/2204.06745.pdf); an illustrative sketch follows this list.
* **Normalization**: LayerNorm ([Ba et al., 2016](https://arxiv.org/abs/1607.06450)) with learned bias terms, as opposed to RMSNorm ([Zhang & Sennrich, 2019](https://arxiv.org/abs/1910.07467)).
* **Tokenizer**: GPT-NeoX ([Black et al., 2022](https://arxiv.org/abs/2204.06745)).
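For intuition on the "partial" rotary scheme: with a hidden size of 2560 and 32 heads, each head has 80 dimensions, so 25% rotary coverage rotates the first 20 dimensions per head and passes the rest through unchanged. The sketch below is illustrative only (not Stability AI's training code):

```python
# Illustrative sketch of partial rotary position embeddings: RoPE is applied
# to the first 25% of each head's dimensions, the remainder is left untouched.
import torch

def apply_partial_rope(x: torch.Tensor, rotary_pct: float = 0.25) -> torch.Tensor:
    """x has shape (batch, heads, seq_len, head_dim); head_dim = 2560/32 = 80 here."""
    head_dim = x.shape[-1]
    rotary_dim = int(head_dim * rotary_pct)  # 20 of 80 dims are rotated
    x_rot, x_pass = x[..., :rotary_dim], x[..., rotary_dim:]

    # Standard RoPE frequencies, computed over the rotated slice only.
    inv_freq = 1.0 / (10000.0 ** (torch.arange(0, rotary_dim, 2) / rotary_dim))
    positions = torch.arange(x.shape[-2])
    freqs = torch.outer(positions, inv_freq)   # (seq_len, rotary_dim / 2)
    emb = torch.cat((freqs, freqs), dim=-1)    # (seq_len, rotary_dim)
    cos, sin = emb.cos(), emb.sin()

    def rotate_half(v: torch.Tensor) -> torch.Tensor:
        v1, v2 = v.chunk(2, dim=-1)
        return torch.cat((-v2, v1), dim=-1)

    return torch.cat((x_rot * cos + rotate_half(x_rot) * sin, x_pass), dim=-1)
```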

## Training

For complete dataset and training details, please see the [StableLM-3B-4E1T Technical Report](https://stability.wandb.io/stability-llm/stable-lm/reports/StableLM-3B-4E1T--VmlldzoyMjU4?accessToken=u3zujipenkx5g7rtcj9qojjgxpconyjktjkli2po09nffrffdhhchq045vp0wyfo).

### Training Dataset

The dataset comprises a filtered mixture of open-source large-scale datasets available on the [HuggingFace Hub](https://huggingface.co/datasets): Falcon RefinedWeb extract ([Penedo et al., 2023](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)), RedPajama-Data ([Together Computer, 2023](https://github.com/togethercomputer/RedPajama-Data)) and The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)), both without the *Books3* subset, and StarCoder ([Li et al., 2023](https://arxiv.org/abs/2305.06161)).

* Given the large amount of web data, we recommend fine-tuning the base StableLM-3B-4E1T for your downstream tasks; a minimal fine-tuning sketch follows.
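As a rough illustration of that recommendation, a causal-LM fine-tuning loop with the `transformers` Trainer might look like this; the tiny in-memory corpus and all hyperparameters are placeholders, not values from this card:

```python
# Minimal fine-tuning sketch; dataset and hyperparameters are illustrative only.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-3b-4e1t")
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # set a pad token if missing

model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-3b-4e1t", trust_remote_code=True, torch_dtype="auto"
)

# Tiny stand-in corpus; replace with your downstream-task data.
corpus = Dataset.from_dict({"text": ["Example document one.", "Example document two."]})
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="stablelm-3b-ft",
                           per_device_train_batch_size=1,
                           num_train_epochs=1,
                           bf16=True),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```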

### Training Procedure

The model is pre-trained on the aforementioned datasets in `bfloat16` precision, optimized with AdamW, and trained using the NeoX tokenizer with a vocabulary size of 50,257. We outline the complete hyperparameter choices in the project's [GitHub repository - config](https://github.com/Stability-AI/StableLM/blob/main/configs/stablelm-3b-4e1t.yml).
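For orientation, the stated precision and optimizer map onto PyTorch as in the snippet below; the learning rate, betas, and weight decay are placeholders, and the real hyperparameters live in the linked YAML config:

```python
# Illustrative only: bfloat16 precision + AdamW, per the paragraph above.
import torch
from transformers import AutoTokenizer

# The tokenizer can be inspected directly (requires access to the Hub).
tok = AutoTokenizer.from_pretrained("stabilityai/stablelm-3b-4e1t")
print(tok.vocab_size)  # the card above states a vocabulary size of 50,257

# Stand-in module; lr/betas/weight_decay are placeholders, not the real config.
layer = torch.nn.Linear(2560, 50257, dtype=torch.bfloat16)
optimizer = torch.optim.AdamW(
    layer.parameters(), lr=3e-4, betas=(0.9, 0.95), weight_decay=0.1
)
```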

### Training Infrastructure

* **Hardware**: `StableLM-3B-4E1T` was trained on the Stability AI cluster across 256 NVIDIA A100 40GB GPUs (AWS P4d instances). Training began on August 23, 2023, and took approximately 30 days to complete.
* **Software**: We use a fork of `gpt-neox` ([EleutherAI, 2021](https://github.com/EleutherAI/gpt-neox)), train under 2D parallelism (Data and Tensor Parallel) with ZeRO-1 ([Rajbhandari et al., 2019](https://arxiv.org/abs/1910.02054v3)), and rely on flash-attention as well as SwiGLU and Rotary Embedding kernels from FlashAttention-2 ([Dao et al., 2023](https://tridao.me/publications/flash2/flash2.pdf)). A configuration sketch follows this list.
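For readers unfamiliar with ZeRO-1, a DeepSpeed-style configuration fragment sketches the idea; the values are placeholders, and the actual run used a `gpt-neox` fork rather than this exact config:

```python
# Illustrative ZeRO stage-1 config fragment (DeepSpeed-style); placeholders only.
zero1_config = {
    "train_micro_batch_size_per_gpu": 4,   # placeholder batch size
    "bf16": {"enabled": True},             # matches the bfloat16 training precision
    "zero_optimization": {"stage": 1},     # ZeRO-1 shards optimizer states across ranks
}
```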

## Use and Limitations

### Intended Use

The model is intended to be used as a foundational base model for application-specific fine-tuning. Developers must evaluate and fine-tune the model for safe performance in downstream applications.

### Limitations and Bias

As a base model, this model may exhibit unreliable, unsafe, or other undesirable behaviors that must be corrected through evaluation and fine-tuning prior to deployment. The pre-training dataset may have contained offensive or inappropriate content, even after applying data cleansing filters, which can be reflected in model-generated text. We recommend that users exercise caution when using these models in production systems. Do not use the models if they are unsuitable for your application, or for any applications that may cause deliberate or unintentional harm to others.

## How to Cite

```bibtex
@misc{StableLM-3B-4E1T,
  url={https://huggingface.co/stabilityai/stablelm-3b-4e1t},
  title={StableLM 3B 4E1T},
  author={Tow, Jonathan and Bellagente, Marco and Mahan, Dakota and Riquelme, Carlos}
}
```