loubnabnl (HF staff) committed
Commit 8db4e16
1 Parent(s): 305c68b

add examples for loading in other precisions + banner

Files changed (1)
  1. README.md +63 -5
README.md CHANGED
@@ -13,10 +13,11 @@ tags:
- code
---

- # StarCoder
+ # StarCoder2

- TODO
- ![banner]()
+ <center>
+ <img src="https://huggingface.co/datasets/bigcode/admin_private/resolve/main/starcoder2_banner.png" alt="SC2" width="900" height="600">
+ </center>

## Table of Contents
@@ -29,7 +30,7 @@ TODO

## Model Summary

- The StarCoder2-15B model is a 15B parameter model trained on 600+ programming languages from [The Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2-train), with opt-out requests excluded. The model uses [Grouped Query Attention](https://arxiv.org/abs/2305.13245), [a context window of 16,384 tokens](https://arxiv.org/abs/2205.14135) with [a sliding window attention of 4,096 tokens](https://arxiv.org/abs/2004.05150v2), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 4+ trillion tokens.
+ StarCoder2-15B is a 15B parameter model trained on 600+ programming languages from [The Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2-train), with opt-out requests excluded. The model uses [Grouped Query Attention](https://arxiv.org/abs/2305.13245), [a context window of 16,384 tokens](https://arxiv.org/abs/2205.14135) with [sliding window attention of 4,096 tokens](https://arxiv.org/abs/2004.05150v2), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 4+ trillion tokens.

- **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
- **Paper:** TODO
@@ -43,14 +44,24 @@ The StarCoder2-15B model is a 15B parameter model trained on 600+ programming la

The model was trained on GitHub code as well as additional selected data sources such as Arxiv and Wikipedia. As such it is _not_ an instruction model and commands like "Write a function that computes the square root." do not work well.

### Generation
+ Here are some examples to get started with the model. You can find a script for fine-tuning in StarCoder2's [GitHub repository](https://github.com/bigcode-project/starcoder2).
+
+ First, make sure to install `transformers` from source:
+ ```bash
+ pip install git+https://github.com/huggingface/transformers.git
+ ```
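+
+ A quick way to confirm the source install took effect is to check the reported version; source installs carry a dev suffix (the exact string varies):
+ ```python
+ # print the installed transformers version; a source install reports a ".dev0"-style version
+ import transformers
+ print(transformers.__version__)
+ ```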
+
+ #### Running the model on CPU/GPU/multi GPU
+ * _Using full precision_
```python
- # pip install -q transformers # TODO: from main
+ # pip install git+https://github.com/huggingface/transformers.git # TODO: merge PR to main
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder2-15b"
device = "cuda" # for GPU usage or "cpu" for CPU usage

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ # to use multiple GPUs, do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)

@@ -58,6 +69,53 @@ outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
+ * _Using `torch.bfloat16`_
+ ```python
+ # pip install accelerate
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ checkpoint = "bigcode/starcoder2-15b"
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+
+ # for fp16 use `torch_dtype=torch.float16` instead
+ model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16)
+
+ inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to("cuda")
+ outputs = model.generate(inputs)
+ print(tokenizer.decode(outputs[0]))
+ ```
+ ```python
+ >>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
+ Memory footprint: 32251.33 MB
+ ```
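+
+ The snippets above use greedy decoding with `generate`'s default settings. Generation can be tuned through the standard `generate` arguments; a minimal sketch reusing `model`, `tokenizer`, and `inputs` from the example above (the values are illustrative, not tuned recommendations):
+ ```python
+ # sample instead of greedy decoding and cap the completion length
+ outputs = model.generate(
+     inputs,
+     max_new_tokens=64,  # number of new tokens to generate
+     do_sample=True,     # enable sampling
+     temperature=0.2,    # lower temperatures tend to suit code
+     top_p=0.95,         # nucleus sampling cutoff
+ )
+ print(tokenizer.decode(outputs[0]))
+ ```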
+
+ #### Quantized Versions through `bitsandbytes`
+ * _Using 8-bit precision (int8)_
+
+ ```python
+ # pip install bitsandbytes accelerate
+ from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
+
+ # to use 4bit use `load_in_4bit=True` instead
+ quantization_config = BitsAndBytesConfig(load_in_8bit=True)
+
+ checkpoint = "bigcode/starcoder2-15b"
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ model = AutoModelForCausalLM.from_pretrained(checkpoint, quantization_config=quantization_config)
+
+ inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to("cuda")
+ outputs = model.generate(inputs)
+ print(tokenizer.decode(outputs[0]))
+ ```
+ ```python
+ >>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
+ # load_in_8bit
+ Memory footprint: 16900.18 MB
+ # load_in_4bit
+ >>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
+ Memory footprint: 9224.60 MB
+ ```
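+
+ #### Fill-in-the-middle
+ Since the model was trained with a Fill-in-the-Middle objective, it can also infill code between a prefix and a suffix. A minimal sketch, reusing `model` and `tokenizer` from the examples above and assuming the same FIM special tokens as the original StarCoder (`<fim_prefix>`, `<fim_suffix>`, `<fim_middle>`); check `tokenizer.special_tokens_map` if these differ:
+ ```python
+ # ask the model to infill the function body between the prefix and the suffix
+ # (the FIM token names here are assumed, not confirmed by this card)
+ input_text = "<fim_prefix>def fibonacci(n):\n    <fim_suffix>\n    return fibonacci(n - 1) + fibonacci(n - 2)<fim_middle>"
+ inputs = tokenizer.encode(input_text, return_tensors="pt").to("cuda")
+ outputs = model.generate(inputs, max_new_tokens=32)
+ print(tokenizer.decode(outputs[0]))
+ ```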
### Attribution & Other Requirements

The pretraining dataset of the model was filtered for permissive licenses and code with no license only. Nevertheless, the model can generate source code verbatim from the dataset. The code's license might require attribution and/or other specific requirements that must be respected. We provide a [search index](TODO) that lets you search through the pretraining data to identify where generated code came from and apply the proper attribution to your code.