michaelfeil committed on
Commit
0ec0ab5
1 Parent(s): 83aff22

Upload stabilityai/stablelm-base-alpha-7b ctranslate fp16 weights

README.md ADDED
@@ -0,0 +1,127 @@
---
language:
- en
tags:
- ctranslate2
- int8
- float16
- causal-lm
license: cc-by-sa-4.0
---
# Fast-Inference with CTranslate2
Speed up inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.

Quantized version of [stabilityai/stablelm-base-alpha-7b](https://huggingface.co/stabilityai/stablelm-base-alpha-7b)
```bash
pip install hf-hub-ctranslate2>=2.0.8
```
Converted on 2023-05-22 using
```
ct2-transformers-converter --model stabilityai/stablelm-base-alpha-7b --output_dir /home/michael/tmp-ct2fast-stablelm-base-alpha-7b --force --copy_files tokenizer.json README.md tokenizer_config.json generation_config.json special_tokens_map.json .gitattributes --quantization float16
```
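The converted folder can also be loaded directly with the `ctranslate2` Python package instead of the `hf-hub-ctranslate2` wrapper shown below. A minimal sketch, not part of the original card: the local path stands in for the `--output_dir` above, and the prompt and sampling settings are illustrative.
```python
import ctranslate2
from transformers import AutoTokenizer

# Placeholder path: point this at the --output_dir produced by the converter above.
model_dir = "path/to/ct2fast-stablelm-base-alpha-7b"

# Tokenizer comes from the original repo; the converter only copies its JSON files.
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-base-alpha-7b")
generator = ctranslate2.Generator(
    model_dir,
    device="cuda",                # "cpu" also works
    compute_type="int8_float16",  # use "int8" on CPU
)

# CTranslate2 generators expect string tokens rather than token ids.
prompt = tokenizer.convert_ids_to_tokens(tokenizer.encode("def print_hello_world():"))
results = generator.generate_batch([prompt], max_length=64, sampling_topk=1)
print(tokenizer.decode(results[0].sequences_ids[0]))
```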

Checkpoint compatible with [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"` (a CPU sketch follows the example below)

```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer

model_name = "michaelfeil/ct2fast-stablelm-base-alpha-7b"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on the model
model = GeneratorCT2fromHfHub(
    # load in int8 on CUDA
    model_name_or_path=model_name,
    device="cuda",
    compute_type="int8_float16",
    # tokenizer=AutoTokenizer.from_pretrained("stabilityai/stablelm-base-alpha-7b")
)
outputs = model.generate(
    text=["def print_hello_world():", "def hello_name(name:"],
    max_length=64,
)
print(outputs)
```
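The same wrapper also runs without a GPU using the `int8` compute type listed above. A minimal CPU sketch, mirroring the snippet above (the prompt is an illustrative assumption):
```python
from hf_hub_ctranslate2 import GeneratorCT2fromHfHub

# Same API as above, but int8 weights executed on the host CPU.
model = GeneratorCT2fromHfHub(
    model_name_or_path="michaelfeil/ct2fast-stablelm-base-alpha-7b",
    device="cpu",
    compute_type="int8",
)
outputs = model.generate(text=["The capital of France is"], max_length=32)
print(outputs)
```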

# License and other remarks:
This is just a quantized version. License conditions are intended to be identical to those of the original Hugging Face repo.

# Original description


# StableLM-Base-Alpha

## Model Description

`StableLM-Base-Alpha` is a suite of 3B and 7B parameter decoder-only language models pre-trained on a diverse collection of English datasets with a sequence length of 4096 to push beyond the context window limitations of existing open-source language models.

## Usage

Get started generating text with `StableLM-Base-Alpha` by using the following code snippet:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("StabilityAI/stablelm-base-alpha-7b")
model = AutoModelForCausalLM.from_pretrained("StabilityAI/stablelm-base-alpha-7b")
model.half().cuda()

inputs = tokenizer("What's your mood today?", return_tensors="pt").to("cuda")
tokens = model.generate(
  **inputs,
  max_new_tokens=64,
  temperature=0.7,
  do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

## Model Details

* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: StableLM-Base-Alpha models are auto-regressive language models based on the NeoX transformer architecture.
* **Language(s)**: English
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: Base model checkpoints (`StableLM-Base-Alpha`) are licensed under the Creative Commons license ([CC BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/)). Under the license, you must give [credit](https://creativecommons.org/licenses/by/4.0/#) to Stability AI, provide a link to the license, and [indicate if changes were made](https://creativecommons.org/licenses/by/4.0/#). You may do so in any reasonable manner, but not in any way that suggests that Stability AI endorses you or your use.

* **Contact**: For questions and comments about the model, please email `lm@stability.ai`

## Training

| Parameters | Hidden Size | Layers | Heads | Sequence Length |
|------------|-------------|--------|-------|-----------------|
| 3B         | 4096        | 16     | 32    | 4096            |
| 7B         | 6144        | 16     | 48    | 4096            |
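As a rough cross-check of the table above, the standard decoder-only estimate of about 12 × layers × hidden² parameters for the transformer blocks, plus untied input and output embeddings, lands close to the advertised 3B and 7B sizes. This back-of-envelope sketch is not from the original card and assumes the 50,257-token vocabulary mentioned under Training Procedure:
```python
# Back-of-envelope parameter estimate for the two configurations in the table.
# Assumes ~12 * layers * hidden^2 for the attention + MLP blocks and
# untied input/output embeddings of vocab * hidden parameters each.
VOCAB = 50_257

def approx_params_billion(hidden: int, layers: int) -> float:
    blocks = 12 * layers * hidden ** 2
    embeddings = 2 * VOCAB * hidden
    return (blocks + embeddings) / 1e9

print(f"3B config: ~{approx_params_billion(4096, 16):.1f}B parameters")  # ~3.6B
print(f"7B config: ~{approx_params_billion(6144, 16):.1f}B parameters")  # ~7.9B
```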

### Training Dataset

`StableLM-Base-Alpha` is pre-trained on a new experimental dataset built atop [The Pile](https://huggingface.co/datasets/EleutherAI/the_pile) and is three times larger at approximately 1.5T tokens.

### Training Procedure

Models are pre-trained on the aforementioned dataset in mixed precision (FP16), optimized with Adam, and trained using the NeoX tokenizer with a vocabulary size of 50,257. We outline the complete hyperparameter choices in the project's [GitHub repository](https://github.com/Stability-AI/StableLM/blob/main/configs/stablelm-base-alpha-7b.yaml).

## Use and Limitations

### Intended Use

These models are intended to be used by all individuals as foundational models for application-specific fine-tuning without strict limitations on commercial use.

### Limitations and bias

The pre-training dataset may have contained offensive or inappropriate content even after applying data cleansing filters, which can be reflected in model-generated text. We recommend that users exercise reasonable caution when using these models in production systems. Do not use the models for any applications that may cause harm or distress to individuals or groups.

## Citations

```bibtex
@software{gpt-neox-library,
  title = {{GPT-NeoX: Large Scale Autoregressive Language Modeling in PyTorch}},
  author = {Andonian, Alex and Anthony, Quentin and Biderman, Stella and Black, Sid and Gali, Preetham and Gao, Leo and Hallahan, Eric and Levy-Kramer, Josh and Leahy, Connor and Nestler, Lucas and Parker, Kip and Pieler, Michael and Purohit, Shivanshu and Songz, Tri and Phil, Wang and Weinbach, Samuel},
  url = {https://www.github.com/eleutherai/gpt-neox},
  doi = {10.5281/zenodo.5879544},
  month = {8},
  year = {2021},
  version = {0.0.1},
}
```
config.json ADDED
@@ -0,0 +1,5 @@
{
  "bos_token": "<|endoftext|>",
  "eos_token": "<|endoftext|>",
  "unk_token": "<|endoftext|>"
}
generation_config.json ADDED
@@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 0,
  "eos_token_id": 0,
  "transformers_version": "4.27.4"
}
model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b043617c0db655e52f70fe984226076cafcd40dcb6980989994deee806a27ee9
size 15737365355
special_tokens_map.json ADDED
@@ -0,0 +1,5 @@
{
  "bos_token": "<|endoftext|>",
  "eos_token": "<|endoftext|>",
  "unk_token": "<|endoftext|>"
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,9 @@
{
  "add_prefix_space": false,
  "bos_token": "<|endoftext|>",
  "clean_up_tokenization_spaces": true,
  "eos_token": "<|endoftext|>",
  "model_max_length": 1000000000000000019884624838656,
  "tokenizer_class": "GPTNeoXTokenizer",
  "unk_token": "<|endoftext|>"
}
vocabulary.txt ADDED
The diff for this file is too large to render. See raw diff