doberst committed
Commit 5403262
1 Parent(s): 4e1e91d

Upload README.md

Files changed (1)
  1. README.md +1 -74
README.md CHANGED
@@ -6,7 +6,7 @@ license: apache-2.0
 
  <!-- Provide a quick summary of what the model is/does. -->
 
- BLING-1b-0.1 is the first model release in the BLING ("Best Little Instruction-following No-GPU-required") model series.
+ BLING-1.4b-0.1 is the first model release in the BLING ("Best Little Instruction-following No-GPU-required") model series.
 
  BLING models are designed as custom instruct-following laptop-effective GPT decoder-based models (~1B-2.7B parameters). BLING models are currently built on top of Pythia (GPTNeox architecture) base models and other Apache 2.0-licensed GPT-compatible models with primary focus on 'little' models in the range of 1B, 1.3-1.4B, and 2.7B parameters. (Note: in our testing, we have seen relatively limited success with instruct-following models below <1B parameters.)
 
@@ -118,79 +118,6 @@ Use the code below to get started with the model.
 
  - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
 
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Data Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
 
  ## Citation [optional]
 
 
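The README's quick-start code is referenced in the second hunk header above ("Use the code below to get started with the model") but is not part of this diff. As a rough sketch only, loading a Pythia-based BLING checkpoint with Hugging Face `transformers` typically looks like the following; the repo id `llmware/bling-1b-0.1` and the `<human>/<bot>` prompt wrapper are assumptions for illustration, not taken from this commit.

```python
# Hedged sketch: CPU-only generation with a BLING checkpoint via transformers.
# The repo id and the "<human>/<bot>" prompt wrapper are assumptions, not
# confirmed by this commit; substitute the values from the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "llmware/bling-1b-0.1"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# BLING models are instruct-tuned, so we pass a context passage plus a question.
context = "The quarterly report shows revenue of $12M, up 8% year over year."
question = "What was the revenue?"
prompt = f"<human>: {context}\n{question}\n<bot>:"  # assumed prompt format

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=100, do_sample=False)

# Decode only the newly generated tokens (the model's answer).
answer = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer)
```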