---
license: other
language:
- en
tags:
- causal-lm
---
# `Stable LM 2 1.6B` (global_step420000)

## Description

`Stable LM 2 1.6B` is a 1.6 billion parameter decoder-only language model pre-trained on 2 trillion tokens of diverse multilingual and code datasets for two epochs.

## Usage

This branch contains the training checkpoint for `Stable LM 2 1.6B` at step 420,000. It is the final checkpoint taken before cooldown.
We provide the following contents in the [`global_step420000`](https://huggingface.co/stabilityai/stablelm-2-1_6b/tree/global_step420000/global_step420000) directory:

- `bf16_zero_pp_mp_rank_00_optim_states.pt`: The Adam states and FP32 weights for each parameter. You will need to port these to your own optimizer's format when importing them into your training setup.

- `mp_rank_00_model_states.pt`: The model weights following the [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) convention.

- `config.yml`: The pre-training configuration file for this checkpoint. The learning rate should be cooled down linearly from `lr=0.0002529` to `lr=0.0` (see the sketch after this list).
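
For orientation, here is a minimal sketch of how one might inspect these files and apply the linear cooldown described above. The relative paths mirror the listing, but the internal key layout of the `.pt` files and the cooldown length are assumptions, not part of this card:

```python
import torch

# Inspect the raw checkpoint files on CPU. The exact key layout inside these
# pickled dicts is not documented here, so print the keys and explore.
optim_states = torch.load(
    "global_step420000/bf16_zero_pp_mp_rank_00_optim_states.pt",
    map_location="cpu",
    weights_only=False,  # the checkpoint contains non-tensor objects
)
model_states = torch.load(
    "global_step420000/mp_rank_00_model_states.pt",
    map_location="cpu",
    weights_only=False,
)
print(list(optim_states.keys()))
print(list(model_states.keys()))

# Linear learning rate cooldown from lr=0.0002529 to lr=0.0.
# COOLDOWN_STEPS is a placeholder: the cooldown length is not specified here.
COOLDOWN_START_LR = 2.529e-4
COOLDOWN_STEPS = 10_000  # hypothetical value; choose your own

def cooldown_lr(step_in_cooldown: int) -> float:
    """Learning rate after `step_in_cooldown` steps of linear cooldown."""
    frac = min(step_in_cooldown / COOLDOWN_STEPS, 1.0)
    return COOLDOWN_START_LR * (1.0 - frac)
```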

The model weights have also been converted to the Hugging Face `transformers` format and can be loaded with the following code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the step-420,000 checkpoint from its revision branch.
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-2-1_6b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-2-1_6b",
    trust_remote_code=True,        # allow the repository's custom model code
    torch_dtype="auto",            # keep the dtype stored in the checkpoint
    revision="global_step420000",  # select this training checkpoint branch
)
model.cuda()  # move the model to GPU
```
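
Once loaded, this checkpoint behaves like any other causal LM in `transformers`. A minimal generation sketch follows; the prompt and sampling settings are arbitrary examples, not recommendations from this card:

```python
# Tokenize a prompt and move it to the same device as the model.
inputs = tokenizer("The weather today is", return_tensors="pt").to(model.device)

# Sample a short continuation; adjust the decoding parameters to taste.
tokens = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```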

## License

* **License**: [Stability AI Non-Commercial Research Community License](https://huggingface.co/stabilityai/stablelm-2-1_6b/blob/main/LICENSE). If you'd like to use this model for commercial products or purposes, please contact us [here](https://stability.ai/membership) to learn more.

## Acknowledgements

- Dakota Mahan for creating the ZeRO optimizer state merging script.

## Citation

```bibtex
@misc{StableLM-2-1.6B,
      url={https://huggingface.co/stabilityai/stablelm-2-1_6b},
      title={Stable LM 2 1.6B},
      author={Stability AI Language Team}
}
```