jon-tow committed
Commit 586f0cf
1 Parent(s): 96634d3

init: release

README.md CHANGED
@@ -1,132 +1,55 @@
 ---
-license: cc-by-sa-4.0
-datasets:
-- tiiuae/falcon-refinedweb
-- togethercomputer/RedPajama-Data-1T
-- uonlp/CulturaX
-- CarperAI/pilev2-dev
-- bigcode/starcoderdata
-- DataProvenanceInitiative/Commercially-Verified-Licenses
+license: other
 language:
 - en
 tags:
 - causal-lm
 ---
-# `StableLM-1.6B`
+# `Stable LM 2 1.6B` (global_step420000)
 
-## Model Description
+## Description
 
-`StableLM-1.6B` is a 1.6 billion parameter decoder-only language model pre-trained on 2 trillion tokens of diverse multi-lingual and code datasets for two epochs.
+`Stable LM 2 1.6B` is a 1.6 billion parameter decoder-only language model pre-trained on 2 trillion tokens of diverse multilingual and code datasets for two epochs.
 
 ## Usage
 
-Get started generating text with `StableLM-1.6B` by using the following code snippet:
-
-```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-1_6b", trust_remote_code=True)
-model = AutoModelForCausalLM.from_pretrained(
-  "stabilityai/stablelm-1_6b",
-  trust_remote_code=True,
-  torch_dtype="auto",
-)
-model.cuda()
-inputs = tokenizer("The weather is always wonderful", return_tensors="pt").to(model.device)
-tokens = model.generate(
-  **inputs,
-  max_new_tokens=64,
-  temperature=0.70,
-  top_p=0.95,
-  do_sample=True,
-)
-print(tokenizer.decode(tokens[0], skip_special_tokens=True))
-```
-
-### Run with Flash Attention 2 ⚡️
-
-<details>
-<summary> Click to expand </summary>
+This branch contains the training checkpoint for `Stable LM 2 1.6B` at step 420,000. It is the final checkpoint taken before cooldown.
+We provide the following contents in the [`global_step420000`](https://huggingface.co/stabilityai/stablelm-2-1_6b/tree/global_step420000/global_step420000) directory:
+
+- `bf16_zero_pp_mp_rank_00_optim_states.pt`: The Adam states and FP32 weights for each parameter. You will need to port this to your optimizer format when importing into your training process.
+
+- `mp_rank_00_model_states.pt`: The model weights following the [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) convention.
+
+- `config.yml`: The pre-training configuration file for this checkpoint. Linear learning rate cooldown should be taken from `lr=0.0002529` to `lr=0.0`.
+
+The model weights are also converted to HuggingFace `transformers` format and can be loaded with the following code:
 
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
-tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-1_6b", trust_remote_code=True)
+tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-2-1_6b", trust_remote_code=True)
 model = AutoModelForCausalLM.from_pretrained(
-  "stabilityai/stablelm-1_6b",
+  "stabilityai/stablelm-2-1_6b",
   trust_remote_code=True,
   torch_dtype="auto",
-  attn_implementation="flash_attention_2",
+  revision="global_step420000"
 )
 model.cuda()
-inputs = tokenizer("The weather is always wonderful", return_tensors="pt").to(model.device)
-tokens = model.generate(
-  **inputs,
-  max_new_tokens=64,
-  temperature=0.70,
-  top_p=0.95,
-  do_sample=True,
-)
-print(tokenizer.decode(tokens[0], skip_special_tokens=True))
 ```
 
-</details>
-
-
-## Model Details
-
-* **Developed by**: [Stability AI](https://stability.ai/)
-* **Model type**: `StableLM-1.6B` models are auto-regressive language models based on the transformer decoder architecture.
-* **Language(s)**: English
-* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
-* **License**: Model checkpoints are licensed under the Creative Commons license ([CC BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/)). Under this license, you must give [credit](https://creativecommons.org/licenses/by/4.0/#) to Stability AI, provide a link to the license, and [indicate if changes were made](https://creativecommons.org/licenses/by/4.0/#). You may do so in any reasonable manner, but not in any way that suggests that Stability AI endorses you or your use.
-* **Contact**: For questions and comments about the model, please email `lm@stability.ai`
-
-### Model Architecture
-
-The model is a decoder-only transformer similar to the LLaMA ([Touvron et al., 2023](https://arxiv.org/abs/2307.09288)) architecture with the following modifications:
-
-| Parameters    | Hidden Size | Layers | Heads | Sequence Length |
-|---------------|-------------|--------|-------|-----------------|
-| 1,644,417,024 | 2048        | 24     | 32    | 4096            |
-
-* **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864)) applied to the first 25% of head embedding dimensions for improved throughput following [Black et al. (2022)](https://arxiv.org/pdf/2204.06745.pdf).
-* **Normalization**: LayerNorm ([Ba et al., 2016](https://arxiv.org/abs/1607.06450)) with learned bias terms as opposed to RMSNorm ([Zhang & Sennrich, 2019](https://arxiv.org/abs/1910.07467)).
-* **Biases**: We remove all bias terms from the model except for attention Q,K,V projections ([Bai et al., 2023](https://arxiv.org/abs/2309.16609)).
-* **Tokenizer**: We use Arcade100k, a BPE tokenizer extended from OpenAI's [`tiktoken.cl100k_base`](https://github.com/openai/tiktoken). We split digits into individual tokens following findings by [Liu & Low (2023)](https://arxiv.org/abs/2305.14201).
-
-## Training
-
-### Training Dataset
-
-The dataset comprises a filtered mixture of open-source large-scale datasets available on the [HuggingFace Hub](https://huggingface.co/datasets): Falcon RefinedWeb extract ([Penedo et al., 2023](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)), RedPajama-Data ([Together Computer, 2023](https://github.com/togethercomputer/RedPajama-Data)) and The Pile ([Gao et al., 2020](https://arxiv.org/abs/2101.00027)) both without the *Books3* subset, and StarCoder ([Li et al., 2023](https://arxiv.org/abs/2305.06161)). We further supplement our training with multi-lingual data from CulturaX ([Nguyen et al., 2023](https://arxiv.org/abs/2309.09400)) and, in particular, from its OSCAR corpora, as well as restructured data in the style of [Yuan & Liu (2022)](https://arxiv.org/abs/2206.11147).
-
-* Given the large amount of web data, we recommend fine-tuning the base StableLM-1.6B for your downstream tasks.
-
-### Training Procedure
-
-The model is pre-trained on the aforementioned datasets in `bfloat16` precision, optimized with AdamW, and trained using the NeoX tokenizer with a vocabulary size of 100,352. We outline the complete hyperparameter choices in the project's [GitHub repository - config (TODO)](https://github.com/Stability-AI/StableLM/blob/main/configs/stablelm-1.6b.yml).
-
-### Training Infrastructure
-
-* **Hardware**: `StableLM-1.6B` was trained on the Stability AI cluster across 512 NVIDIA A100 40GB GPUs (AWS P4d instances).
-
-* **Software**: We use a fork of `gpt-neox` ([EleutherAI, 2021](https://github.com/EleutherAI/gpt-neox)), train under 2D parallelism (Data and Tensor Parallel) with ZeRO-1 ([Rajbhandari et al., 2019](https://arxiv.org/abs/1910.02054v3)), and rely on flash-attention as well as SwiGLU and Rotary Embedding kernels from FlashAttention-2 ([Dao et al., 2023](https://tridao.me/publications/flash2/flash2.pdf)).
-
-## Use and Limitations
-
-### Intended Use
-
-The model is intended to be used as a foundational base model for application-specific fine-tuning. Developers must evaluate and fine-tune the model for safe performance in downstream applications.
-
-### Limitations and Bias
-
-As a base model, this model may exhibit unreliable, unsafe, or other undesirable behaviors that must be corrected through evaluation and fine-tuning prior to deployment. The pre-training dataset may have contained offensive or inappropriate content, even after applying data cleansing filters, which can be reflected in the model-generated text. We recommend that users exercise caution when using these models in production systems. Do not use the models if they are unsuitable for your application, or for any applications that may cause deliberate or unintentional harm to others.
-
-## How to Cite
+## License
+
+* **Stability AI Non-Commercial Research Community License**. If you'd like to use this model for commercial products or purposes, please contact us [here](https://stability.ai/membership) to learn more.
+
+## Acknowledgements
+
+- Dakota Mahan for creating the ZeRO optimizer state merging script.
+
+## Citation
 
 ```bibtex
-@misc{StableLM-1.6B,
-  url={[https://huggingface.co/stabilityai/stablelm-1.6b](https://huggingface.co/stabilityai/stablelm-1.6b)},
-  title={StableLM 1.6B},
+@misc{StableLM-2-1.6B,
+  url={[https://huggingface.co/stabilityai/stablelm-2-1_6b](https://huggingface.co/stabilityai/stablelm-2-1_6b)},
+  title={Stable LM 2 1.6B},
 author={Stability AI Language Team}
 }
-```
+```
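The updated card's snippet above stops after `model.cuda()`. For reference only (not part of this commit's README), a self-contained sketch that carries over the prompt and sampling settings from the removed example:

```python
# Sketch: load the converted checkpoint and sample a completion.
# The prompt and sampling settings are carried over from the previous
# card's (now removed) example; adjust them for your use case.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-2-1_6b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-2-1_6b",
    trust_remote_code=True,
    torch_dtype="auto",
    revision="global_step420000",
)
model.cuda()

inputs = tokenizer("The weather is always wonderful", return_tensors="pt").to(model.device)
tokens = model.generate(
    **inputs,
    max_new_tokens=64,
    temperature=0.70,
    top_p=0.95,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```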
generation_config.json CHANGED
@@ -2,5 +2,5 @@
   "_from_model_config": true,
   "bos_token_id": 100257,
   "eos_token_id": 100257,
-  "transformers_version": "4.36.2"
+  "transformers_version": "4.35.2"
 }
global_step420000/bf16_zero_pp_mp_rank_00_optim_states.pt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3f24d0cd5a8289d48d9f59899ca029a17ced8257c81403e5fb0bee95ded3bb81
+size 19733203617
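Before porting the Adam states into another trainer, it helps to see what this file actually contains. A minimal inspection sketch, assuming only that the file is a `torch.load`-able pickle (the internal key layout depends on the DeepSpeed version and is not documented here):

```python
# Sketch: inspect the ZeRO-1 optimizer checkpoint before porting it.
# No assumptions are made about key names; we just walk the structure
# and report tensor shapes/dtypes so the FP32 master weights and Adam
# moments can be matched up with your optimizer's format.
import torch

ckpt = torch.load(
    "global_step420000/bf16_zero_pp_mp_rank_00_optim_states.pt",
    map_location="cpu",
)

def summarize(obj, prefix=""):
    """Recursively print the checkpoint layout."""
    if isinstance(obj, torch.Tensor):
        print(f"{prefix}: tensor {tuple(obj.shape)} {obj.dtype}")
    elif isinstance(obj, dict):
        for k, v in obj.items():
            summarize(v, f"{prefix}.{k}" if prefix else str(k))
    elif isinstance(obj, (list, tuple)):
        for i, v in enumerate(obj):
            summarize(v, f"{prefix}[{i}]")
    else:
        print(f"{prefix}: {type(obj).__name__}")

summarize(ckpt)
```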
global_step420000/config.yml ADDED
@@ -0,0 +1,139 @@
+{
+  # parallelism settings
+  "pipe-parallel-size": 0,
+  "model-parallel-size": 1,
+
+  # model settings
+  "num-layers": 24,
+  "hidden-size": 2048,
+  "num-attention-heads": 32,
+  "seq-length": 4096,
+  "max-position-embeddings": 4096,
+
+  # architecture design
+  "attention_head_type": "multihead",
+  "norm": "layernorm",
+  "pos-emb": "rotary",
+  "rotary_pct": 0.25,
+  "rotary_interleaved": false,  # GPT-NeoX style
+  "mlp_multiple_of": 256,
+  "mlp_type": "gated",
+  "activation": "silu",
+  "no-weight-tying": true,
+  "gpt_j_residual": false,
+  "gpt_j_tied": false,
+  "output_layer_parallelism": "column",
+
+  # init methods
+  "init_method": "normal",
+  "output_layer_init_method": "scaled_normal",
+  "init_method_std": 0.02,
+
+  # biases
+  # NOTE: QKV projections were hard-coded with bias=True
+  "use_bias_in_norms": false,
+  "use_bias_in_attn_linear": false,
+  "use_bias_in_mlp": false,
+
+  # fused ops
+  "use_flash_cross_entropy": true,
+  "bias-gelu-fusion": false,
+  "scaled-upper-triang-masked-softmax-fusion": false,
+  "attention-config": [[["flash"], 24]],
+
+  # optimizer settings
+  "optimizer": {
+    "type": "Adam",
+    "params": {
+      "lr": 0.001,
+      "betas": [0.9, 0.95],
+      "eps": 1.0e-8,
+    }
+  },
+  "min_lr": 0.0001,
+  "train-iters": 540_000,
+  "lr-decay-iters": 540_000,
+  "lr-decay-style": "hybrid_cosine_inv_sqrt_2",
+  "warmup": 0.018,
+  "cooldown": 0.,
+
+  "reset_attention_mask": true,
+  "reset_position_ids": true,
+
+  # for all zero_optimization options, see https://www.deepspeed.ai/docs/config-json/#zero-optimizations-for-fp16-training
+  "zero_optimization": {
+    "stage": 1,
+    "allgather_partitions": true,
+    "allgather_bucket_size": 1260000000,
+    "overlap_comm": true,
+    "reduce_scatter": true,
+    "reduce_bucket_size": 1260000000,
+    "contiguous_gradients": true,
+    "cpu_offload": false,
+  },
+
+  # batch / data settings
+  "train_micro_batch_size_per_gpu": 2,
+  "gradient_accumulation_steps": 2,
+  "data-impl": "mmap",
+  "eval-interval": 500_000,
+  "eval-iters": 1,
+  "eval_batch_size": 1,
+  "eval_tasks": [],
+
+  # activation checkpointing
+  "checkpoint-activations": true,
+  "checkpoint-num-layers": 24,
+  "partition-activations": true,
+  "synchronize-each-layer": true,
+
+  # regularization
+  "gradient_clipping": 1,
+  "weight-decay": 0.1,
+  "hidden-dropout": 0.,
+  "attention-dropout": 0.,
+
+  # precision settings
+  "bf16": { "enabled": true },
+  "precision": "bfloat16",
+  "full_precision_lm_cross_entropy": true,
+  "fp32_allreduce": true,
+
+  # misc. training settings
+  "num-workers": 2,
+  "distributed-backend": "nccl",
+
+  # checkpoint settings
+  "checkpoint-factor": 10_000,
+  #"s3_sync_interval": 20_000,
+  "extra-save-iters": [230_001],
+  "save": "",
+  "load": "",
+  #"s3_path": "",
+
+  "train_data_paths": [],
+  "valid-data-paths": [],
+  "test-data-paths": [],
+
+  # tokenizer settings
+  "tokenizer-type": "TiktokenTokenizer",
+  "vocab-file": "arcade100k.tiktoken",
+
+  "log-interval": 10,
+  "steps_per_print": 10,
+  "wall_clock_breakdown": true,
+
+  "use_wandb": true,
+  "wandb_host": "",
+  "wandb_team": "",
+  "wandb_project": "",
+  "wandb_group": "",
+  "wandb_name": "",
+  #"wandb_resume": "must",
+
+  # multi-node launcher
+  "launcher": "slurm",
+  "deepspeed_slurm": true,
+
+  "seed": 1234
+}
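The README states that the linear learning rate cooldown should run from `lr=0.0002529` down to `lr=0.0`. A minimal sketch of that schedule, assuming the cooldown spans the remaining `train-iters` (540,000 - 420,000 = 120,000 steps; the config above does not pin the cooldown length):

```python
# Sketch: linear LR cooldown for resuming from this checkpoint.
# ASSUMPTION: the cooldown covers the remaining 120,000 steps of
# "train-iters"; only the endpoints (0.0002529 -> 0.0) are specified.
CKPT_STEP = 420_000
TOTAL_STEPS = 540_000   # "train-iters" from config.yml
LR_AT_CKPT = 2.529e-4   # LR recorded at global_step420000

def cooldown_lr(step: int) -> float:
    """Linearly anneal from LR_AT_CKPT at CKPT_STEP to 0.0 at TOTAL_STEPS."""
    frac = (step - CKPT_STEP) / (TOTAL_STEPS - CKPT_STEP)
    frac = min(max(frac, 0.0), 1.0)
    return LR_AT_CKPT * (1.0 - frac)

print(cooldown_lr(420_000))  # 0.0002529
print(cooldown_lr(480_000))  # halfway: 0.00012645
print(cooldown_lr(540_000))  # 0.0
```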
global_step420000/mp_rank_00_model_states.pt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:04a05847e6c4a5968e8780b1a0895051c6e1666e3912842ece4f70cddf6395b9
+size 3288906844
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8bdf317e2b35ab5c8009cbb6c7ce495e4e608a6b9b843d44054edf25b8c5860d
+oid sha256:f4cf5c39e33b678995a8b593c1670feed04213371e1840d07b3cdde950a309c7
 size 3289069520