---
pipeline_tag: text-generation
inference: true
widget:
- text: 'def print_hello_world():'
  example_title: Hello world
  group: Python
- text: 'Gradient descent is'
  example_title: Machine Learning
  group: English
license: bigcode-openrail-m
datasets:
- bigcode/the-stack-dedup
- tiiuae/falcon-refinedweb
metrics:
- code_eval
- mmlu
- arc
- hellaswag
- truthfulqa
library_name: transformers
tags:
- code
model-index:
- name: StarCoderPlus
  results:
  - task:
      type: text-generation
    dataset:
      type: openai_humaneval
      name: HumanEval (Prompted)
    metrics:
    - name: pass@1
      type: pass@1
      value: 26.7
      verified: false
  - task:
      type: text-generation
    dataset:
      type: MMLU (5-shot)
      name: MMLU
    metrics:
    - name: Accuracy
      type: Accuracy
      value: 45.1
      verified: false
  - task:
      type: text-generation
    dataset:
      type: HellaSwag (10-shot)
      name: HellaSwag
    metrics:
    - name: Accuracy
      type: Accuracy
      value: 77.3
      verified: false
  - task:
      type: text-generation
    dataset:
      type: ARC (25-shot)
      name: ARC
    metrics:
    - name: Accuracy
      type: Accuracy
      value: 48.9
      verified: false
  - task:
      type: text-generation
    dataset:
      type: TruthfulQA (0-shot)
      name: TruthfulQA
    metrics:
    - name: Accuracy
      type: Accuracy
      value: 37.9
      verified: false
extra_gated_prompt: >-
  ## Model License Agreement

  Please read the BigCode [OpenRAIL-M
  license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
  agreement before accepting it.
extra_gated_fields:
  I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
---

# StarCoderPlus

Play with the instruction-tuned StarCoderPlus at [StarChat-Beta](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground).

## Table of Contents

1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [License](#license)
6. [Citation](#citation)

## Model Summary

This is the [Megatron-LM](https://github.com/bigcode-project/Megatron-LM) version of [StarCoderPlus](https://huggingface.co/bigcode/starcoderplus/).

StarCoderPlus is a version of [StarCoderBase](https://huggingface.co/bigcode/starcoderbase) fine-tuned on 600B tokens from the English web dataset [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
combined with [StarCoderData](https://huggingface.co/datasets/bigcode/starcoderdata) from [The Stack (v1.2)](https://huggingface.co/datasets/bigcode/the-stack) and a Wikipedia dataset.
It is a 15.5B-parameter language model trained on English and 80+ programming languages. The model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150),
[a context window of 8192 tokens](https://arxiv.org/abs/2205.14135), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 1.6 trillion tokens.

- **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- **Point of Contact:** [contact@bigcode-project.org](mailto:contact@bigcode-project.org)
- **Languages:** English & 80+ Programming languages
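
The context window and attention scheme described above can be checked directly from the checkpoint's configuration. A minimal sanity check, assuming the attribute names of the `GPTBigCodeConfig` class in `transformers` (the printed values are what the model summary above leads us to expect):

```python
from transformers import AutoConfig

# Downloads only the configuration file, not the 15.5B-parameter weights.
config = AutoConfig.from_pretrained("bigcode/starcoderplus")

print(config.model_type)   # expected: "gpt_bigcode"
print(config.n_positions)  # context window size, expected: 8192
print(config.multi_query)  # expected: True (multi-query attention)
```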

## Use

### Intended use

The model was trained on English text and GitHub code. As such it is _not_ an instruction model, and commands like "Write a function that computes the square root." do not work well. However, the instruction-tuned version in [StarChat](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground) makes a capable assistant.

**Feel free to share your generations in the Community tab!**

### Generation

```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoderplus"
device = "cuda"  # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
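
By default, `generate` returns a short greedy continuation. For longer or more varied completions you can pass decoding arguments; a minimal sketch reusing `model`, `tokenizer`, and `device` from the snippet above (the prompt and parameter values are illustrative, not tuned recommendations):

```python
inputs = tokenizer.encode("def fibonacci(n):", return_tensors="pt").to(device)
outputs = model.generate(
    inputs,
    max_new_tokens=128,                   # length of the completion
    do_sample=True,                       # sample instead of greedy decoding
    temperature=0.2,                      # lower = more deterministic
    top_p=0.95,                           # nucleus sampling
    pad_token_id=tokenizer.eos_token_id,  # silences the missing-pad-token warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```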

### Fill-in-the-middle

Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix parts of the input and output:

```python
# Reuses `tokenizer`, `model`, and `device` from the Generation snippet above.
input_text = "<fim_prefix>def print_hello_world():\n <fim_suffix>\n print('Hello world!')<fim_middle>"
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
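
The decoded output repeats the prompt (prefix, suffix, and FIM tokens) followed by the generated middle. If you only want the infilled part, you can decode just the newly generated tokens; a small helper sketched here for convenience (`extract_middle` is not part of the model's API):

```python
def extract_middle(inputs, outputs, tokenizer):
    # Everything past the prompt length is what the model generated for the middle.
    generated_tokens = outputs[0][inputs.shape[1]:]
    return tokenizer.decode(generated_tokens, skip_special_tokens=True)

print(extract_middle(inputs, outputs, tokenizer))
```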

### Attribution & Other Requirements

The code in the model's training dataset was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](https://huggingface.co/spaces/bigcode/starcoder-search) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.

# Limitations

The model has been trained on a mixture of English text from the web and GitHub code. Therefore it might encounter limitations when working with non-English text, and it can carry the stereotypes and biases commonly encountered online.
Additionally, the generated code should be used with caution as it may contain errors, inefficiencies, or potential vulnerabilities. For a more comprehensive understanding of the base model's code limitations, please refer to the [StarCoder paper](https://arxiv.org/abs/2305.06161).

# Training

StarCoderPlus is a version of StarCoderBase fine-tuned on 600B English and code tokens; StarCoderBase itself was pre-trained on 1T code tokens. Below are the fine-tuning details:

## Model

- **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
- **Finetuning steps:** 150k
- **Finetuning tokens:** 600B
- **Precision:** bfloat16 (see the loading sketch below)
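
Because the published weights are stored in bfloat16, the checkpoint can be loaded directly in that precision, roughly halving memory use compared to the float32 default. A minimal loading sketch, assuming `torch` with bfloat16 support and `accelerate` installed for `device_map`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoderplus"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    torch_dtype=torch.bfloat16,  # keep the weights in their bfloat16 storage precision
    device_map="auto",           # requires `accelerate`; places layers on available devices
)
```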

## Hardware

- **GPUs:** 512 Tesla A100
- **Training time:** 14 days

## Software

- **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
- **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
- **BF16 if applicable:** [apex](https://github.com/NVIDIA/apex)

# License

The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).