---
pipeline_tag: text-generation
inference: true
# widget:
# - text: 'Question: Please write a function in Python that performs bubble sort.\n\nAnswer:'
#   example_title: Bubble sort
#   group: Python
license: apache-2.0
datasets:
# Mentioned in paper
- codeparrot/github-code-clean
- bigcode/starcoderdata
# - Stackexchange
# - CommonCrawl
# - open-web-math/open-web-math # Phase 1
# - math-ai/StackMathQA # Phase 2
# - Arxiv
# - Wikipedia
# - conceptofmind/FLAN_2022 # Original link is broken, we used IBM's filtered version | Phase 2
# - bigcode/commitpackft # Phase 2
# - bigcode/oasst-octopack # Phase 2

# Phase 1 datasets 
- togethercomputer/RedPajama-Data-V2 # Common Crawl - CC (Redpajama v2)
- togethercomputer/RedPajama-Data-1T # Books (Redpajama v1)
- allenai/peS2o 
- open-web-math/open-web-math 
- EleutherAI/proof-pile-2 # Algebraic-stack (HF) 
# - Code pile v2 w/o GPL (dp08)
# - Webhose (dp08)
# - Patents (dp08) 
# - Arxiv (dp08)
# - IEEE (dp08)
# - DMMath (dp08)
# - Financial research paper (dp08)
# - Paper with code (dp08)
# - Wikipedia (dp08)
# - Stackexchange (dp08)
# - doabooks (dp08)
# - Freelaw (dp08)
# - Pubmed (dp08)
# - EDGAR (dp08)
# - Secfiling (dp08)
# - FIDC (dp08)
# - Earning call transcript (dp08)
# 
# Phase 2 datasets: add high quality + instruction tuning datasets into the mixture
# High quality:
# - sap_revised
# - cybersecurity
# - ibm-redbooks
# - ibm.com
# - superknowa
# - multilingual – wikipedia + doabooks (de/es/fr/ja/pt/ar/cs/it/ko/nl/zh)
# Instruction-tuning
- nvidia/HelpSteer
- garage-bAInd/Open-Platypus
- mosaicml/dolly_hhrlhf
- mosaicml/instruct-v3
- conceptofmind/FLAN_2022
- KnutJaegersberg/longinstruct
- bigcode/oasst-octopack
- CohereForAI/xP3x
- math-ai/StackMathQA
- math-ai/TemplateGSM
- bugdaryan/sql-create-context-instruction
- glaiveai/glaive-function-calling-v2
- glaiveai/glaive-code-assistant-v3
- cognitivecomputations/dolphin-coder
- glaiveai/glaive-code-assistant
- TokenBender/code_instructions_122k_alpaca_style
- TIGER-Lab/MathInstruct
- meta-math/MetaMathQA
- tiedong/goat
- bigcode/commitpack
- bigcode/commitpackft
- HuggingFaceTB/cosmopedia
- deepmind/code_contests
- ise-uiuc/Magicoder-Evol-Instruct-110K
- ise-uiuc/Magicoder-OSS-Instruct-75K
- theblackcat102/evol-codealpaca-v1
- ajibawa-2023/Code-290k-ShareGPT
- Locutusque/UltraTextbooks-2.0
- teknium/OpenHermes-2.5
- stingning/ultrachat
# - API Blend
# 
# DATASET LINKS
# NL
# - nvidia/HelpSteer
# - garage-bAInd/Open-Platypus
# - mosaicml/dolly_hhrlhf
# - mosaicml/instruct-v3
# - conceptofmind/FLAN_2022
# - KnutJaegersberg/longinstruct
# - CohereForAI/xP3x
# - HuggingFaceTB/cosmopedia
# - open-web-math/open-web-math
# - EleutherAI/proof-pile-2
# - math-ai/StackMathQA
# - math-ai/TemplateGSM
# - IBM ConvAI 0111
# - IBM Forca 30K
# - IBM Hardcoded
# Code
# - bugdaryan/sql-create-context-instruction
# - glaiveai/glaive-function-calling-v2
# - cognitivecomputations/dolphin-coder
# - glaiveai/glaive-code-
# - bigcode/commitpackft
# - TIGER-Lab/MathInstruct
# - meta-math/MetaMathQA
# - tiedong/goat
# - CohereForAI/xP3x
metrics:
- code_eval
library_name: transformers
tags:
- code
model-index:
- name: granite-3b-code-base
  results:
  - task:
      type: text-generation
    dataset:
        type: openai_humaneval # https://arxiv.org/pdf/2107.03374
        name: HumanEval
    metrics:
    - name: pass@1
      type: pass@1
      value: 34.1
      verified: false # Check
  - task:
      type: text-generation
    dataset:
        type: evalplus/humanevalplus # https://arxiv.org/pdf/2305.01210 https://github.com/evalplus/evalplus
        name: HumanEval+
    metrics:
    - name: pass@1
      type: pass@1
      value: 29.9
      verified: false # Check
  - task:
      type: text-generation
    dataset:
        type: mbpp # https://arxiv.org/abs/2108.07732
        name: MBPP
    metrics:
    - name: pass@1
      type: pass@1
      value: 36.0
      verified: false # Check
  - task:
      type: text-generation
    dataset:
        type: evalplus/mbppplus
        name: MBPP+
    metrics:
    - name: pass@1
      type: pass@1
      value: 45.1
      verified: false # Check
  - task:
      type: text-generation
    dataset:
        type: bigcode/humanevalpack 
        name: HumanEvalSynthesis(Python)
    metrics:
    - name: pass@1
      type: pass@1
      value: 36.0
      verified: false # Check
  - task:
      type: text-generation
    dataset:
        type: bigcode/humanevalpack  
        name: HumanEvalSynthesis(JavaScript)
    metrics:
    - name: pass@1
      type: pass@1
      value: 37.2
      verified: false # Check
  - task:
      type: text-generation
    dataset:
        type: bigcode/humanevalpack  
        name: HumanEvalSynthesis(Java)
    metrics:
    - name: pass@1
      type: pass@1
      value: 40.9
      verified: false # Check
  - task:
      type: text-generation
    dataset:
        type: bigcode/humanevalpack  
        name: HumanEvalSynthesis(Go)
    metrics:
    - name: pass@1
      type: pass@1
      value: 26.2
      verified: false # Check
  - task:
      type: text-generation
    dataset:
        type: bigcode/humanevalpack  
        name: HumanEvalSynthesis(C++)
    metrics:
    - name: pass@1
      type: pass@1
      value: 35.4
      verified: false # Check
  - task:
      type: text-generation
    dataset:
        type: bigcode/humanevalpack  
        name: HumanEvalSynthesis(Rust)
    metrics:
    - name: pass@1
      type: pass@1
      value: 22.0
      verified: false # Check
  - task:
      type: text-generation
    dataset:
        type: bigcode/humanevalpack  
        name:  HumanEvalExplain(Python)
    metrics:
    - name: pass@1
      type: pass@1
      value: 25.0
      verified: false # Check
  - task:
      type: text-generation
    dataset:
        type: bigcode/humanevalpack  
        name:  HumanEvalExplain(JavaScript)
    metrics:
    - name: pass@1
      type: pass@1
      value: 18.9
      verified: false # Check
  - task:
      type: text-generation
    dataset:
        type: bigcode/humanevalpack  
        name:  HumanEvalExplain(Java)
    metrics:
    - name: pass@1
      type: pass@1
      value: 29.9
      verified: false # Check
  - task:
      type: text-generation
    dataset:
        type: bigcode/humanevalpack  
        name:  HumanEvalExplain(Go)
    metrics:
    - name: pass@1
      type: pass@1
      value: 17.1
      verified: false # Check
  - task:
      type: text-generation
    dataset:
        type: bigcode/humanevalpack  
        name:  HumanEvalExplain(C++)
    metrics:
    - name: pass@1
      type: pass@1
      value: 26.8
      verified: false # Check
  - task:
      type: text-generation
    dataset:
        type: bigcode/humanevalpack  
        name:  HumanEvalExplain(Rust)
    metrics:
    - name: pass@1
      type: pass@1
      value: 14.0
      verified: false # Check
  - task:
      type: text-generation
    dataset:
        type: bigcode/humanevalpack  
        name:  HumanEvalFix(Python)
    metrics:
    - name: pass@1
      type: pass@1
      value: 18.3
      verified: false # Check
  - task:
      type: text-generation
    dataset:
        type: bigcode/humanevalpack  
        name:  HumanEvalFix(JavaScript)
    metrics:
    - name: pass@1
      type: pass@1
      value: 23.2
      verified: false # Check
  - task:
      type: text-generation
    dataset:
        type: bigcode/humanevalpack  
        name:  HumanEvalFix(Java)
    metrics:
    - name: pass@1
      type: pass@1
      value: 29.9
      verified: false # Check
  - task:
      type: text-generation
    dataset:
        type: bigcode/humanevalpack  
        name:  HumanEvalFix(Go)
    metrics:
    - name: pass@1
      type: pass@1
      value: 24.4
      verified: false # Check
  - task:
      type: text-generation
    dataset:
        type: bigcode/humanevalpack  
        name:  HumanEvalFix(C++)
    metrics:
    - name: pass@1
      type: pass@1
      value: 16.5 
      verified: false # Check
  - task:
      type: text-generation
    dataset:
        type: bigcode/humanevalpack  
        name:  HumanEvalFix(Rust)
    metrics:
    - name: pass@1
      type: pass@1
      value: 3.7
      verified: false # Check
---
<!-- 
Granite 3B Code Base

Model Summary: few sentences like starcoder
    - Developers:
    - GH repository:
    - Release date:
    - License:

Usage
Intended use
Generation
Fill-in-the-middle

Training Data

Infrastructure

Limitations

Citation
-->

# Granite 3B Code Base
<!-- ![granite](https://github.com/ibm-granite/granite-code-models/blob/main/figures/granite.png) -->

## Model Summary
**Granite 3B Code Base** is a decoder-only code model designed for code-generative tasks (e.g., code generation, code explanation, code fixing). It was trained from scratch on 4 trillion tokens sourced from 116 programming languages, giving it a comprehensive understanding of programming-language syntax and semantics.

- **Developers:** IBM Research
- **GitHub Repository:** [ibm-granite/granite-code-models](https://github.com/ibm-granite/granite-code-models)
- **Paper:** [Granite Code Models: A Family of Open Foundation Models for Code Intelligence](https://arxiv.org/abs/2405.04324)
- **Release Date**: May 6th, 2024
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

## Usage
### Intended use
Prominent enterprise use cases of LLMs in software engineering productivity include code generation, code explanation, code fixing, generating unit tests, generating documentation, addressing technical debt issues, vulnerability detection, code translation, and more. All Granite Code Base models, including the **3B parameter model**, are able to handle these tasks, as they were trained on a large amount of code data from 116 programming languages. 
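
Because this is a base (non-instruction-tuned) model, it works best with completion-style prompts. Below is a minimal sketch using the `transformers` pipeline API; the prompt wording and generation settings are illustrative, not prescriptive:

```python
# A completion-style prompt: base models continue text rather than follow
# instructions, so frame the task as code to complete.
from transformers import pipeline

generator = pipeline("text-generation", model="ibm-granite/granite-3b-code-base", device_map="auto")
prompt = "# Python function that checks whether a number is prime\ndef is_prime(n):"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```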

### Generation
Before proceeding, you need to install the necessary dependencies. You can do this by running the following command:
```shell
pip install -r requirements.txt
```

This is a simple example of how to use the **Granite 3B Code Base** model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # or "cpu"
model_path = "ibm-granite/granite-3b-code-base"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()

input_text = "def generate():"
# tokenize the prompt and move the input tensors to the model's device
input_tokens = tokenizer(input_text, return_tensors="pt").to(device)
# generate a completion and decode it back to text
output = model.generate(**input_tokens)
output = tokenizer.batch_decode(output)
for i in output:
    print(i)
```
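
By default, `generate` produces only a short continuation. Standard `transformers` generation arguments can be passed to control the output; the values below are illustrative:

```python
# sample a longer, lightly randomized completion (illustrative settings)
output = model.generate(**input_tokens, max_new_tokens=128, do_sample=True, temperature=0.2)
```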
### Fill-in-the-middle

Fill-in-the-middle uses special tokens to identify the prefix, suffix, and middle parts of the input; the model generates the missing middle. The snippet below reuses the `model`, `tokenizer`, and `device` from the previous example:

```python
# <fim_prefix> and <fim_suffix> wrap the known code; the model infills the middle
input_text = "<fim_prefix>def print_hello_world():\n    <fim_suffix>\n    print('Hello world!')<fim_middle>"
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
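
For programmatic use you typically want just the generated middle. A minimal sketch, assuming the decoded string retains the FIM sentinels and a StarCoder-style `<|endoftext|>` end token (both assumptions; check the tokenizer config):

```python
def extract_middle(decoded: str) -> str:
    # the generated infill follows the <fim_middle> sentinel in the decoded string
    middle = decoded.split("<fim_middle>", 1)[1]
    # trim at the end-of-text token if the model emitted one
    return middle.split("<|endoftext|>", 1)[0]

print(extract_middle(tokenizer.decode(outputs[0])))
```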

## Training Data
- **Data Collection and Filtering:** Pretraining code data is sourced from a combination of publicly available datasets (e.g., [GitHub Code Clean](https://huggingface.co/datasets/codeparrot/github-code-clean), [Starcoder data](https://huggingface.co/datasets/bigcode/starcoderdata)) and additional public code repositories and issues from GitHub. We filter the raw data to retain code in 116 programming languages. After language filtering, we also filter out low-quality code.
- **Exact and Fuzzy Deduplication:** We adopt an aggressive deduplication strategy that includes both exact and fuzzy deduplication to remove documents having (near) identical code content; a minimal illustration of the idea follows this list.
- **HAP, PII, Malware Filtering:** We apply a HAP content filter that reduces models' likelihood of generating hateful, abusive, or profane language. We also make sure to redact Personally Identifiable Information (PII) by replacing PII content (e.g., names, email addresses, keys, passwords) with corresponding tokens (e.g., ⟨NAME⟩, ⟨EMAIL⟩, ⟨KEY⟩, ⟨PASSWORD⟩). Moreover, we scan all datasets using [ClamAV](https://www.clamav.net/) to identify and remove instances of malware in the source code.
- **Natural Language Datasets:** In addition to collecting code data for model training, we curate several publicly available high-quality natural language datasets to improve models' proficiency in language understanding and mathematical reasoning. Unlike the code data, we do not deduplicate these datasets.
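
The sketch below illustrates near-duplicate detection with MinHash LSH. The `datasketch` library and the similarity threshold are illustrative choices for exposition, not a description of the exact pipeline used in training:

```python
# Near-duplicate detection sketch: MinHash approximates Jaccard similarity of
# token sets; LSH retrieves candidate pairs above a similarity threshold.
from datasketch import MinHash, MinHashLSH

def minhash(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for token in text.split():
        m.update(token.encode("utf-8"))
    return m

docs = {
    "a.py": "def add(x, y): return x + y",
    "b.py": "def add(x, y):  return x + y",  # whitespace-only variant of a.py
    "c.py": "class Stack:\n    pass",
}

# index all documents, then report any pair above the threshold as near-duplicates
lsh = MinHashLSH(threshold=0.8, num_perm=128)
for name, text in docs.items():
    lsh.insert(name, minhash(text))

for name, text in docs.items():
    near_dupes = [hit for hit in lsh.query(minhash(text)) if hit != name]
    if near_dupes:
        print(name, "near-duplicates:", near_dupes)
```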

## Infrastructure
We train the Granite Code models on two of IBM's supercomputing clusters, Vela and Blue Vela, outfitted with NVIDIA A100 and H100 GPUs, respectively. These clusters provide a scalable and efficient infrastructure for training our models over thousands of GPUs.

## Limitations
Large language models are often prone to generating incorrect information, typically referred to as hallucinations, and the **Granite 3B Code Base** model is no exception. Even though it is suited for code-related tasks, having been trained on source code from 116 programming languages, the generated code is not guaranteed to work as intended: it can be inefficient and may contain bugs or exploits. Moreover, Granite Code Base models are _not_ instruction-following models, so commands like *"Write a function that computes the square root"* may not work well.

## Citation
```
@misc{granite-models,
  author = {author 1, author2, ...},
  title = {Granite Code Large Language Models: IBM Foundation Models for Code},
  journal = {},
  volume = {},
  year = {2024},
  url = {https://arxiv.org/abs/0000.00000},
}
```