shikhr committed
Commit 34cfbe5
Parent: be515ac

Upload model

Files changed (6)
  1. README.md +199 -0
  2. config.json +19 -0
  3. gpt_model.py +258 -0
  4. mgpt_config.py +26 -0
  5. mgpt_modelling.py +14 -0
  6. pytorch_model.bin +3 -0
README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ library_name: transformers
+ tags: []
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
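+
+ Until the authors add an official snippet, here is a minimal sketch based on the `auto_map` in this repository's `config.json`. The repo id `shikhr/mgpt` is a placeholder for the actual Hub id, and `trust_remote_code=True` is required because `MGPTConfig` and `MusicModel` are defined by the Python files in this repo:
+
+ ```python
+ import torch
+ from transformers import AutoModel
+
+ # Placeholder repo id -- substitute the real id of this repository.
+ model = AutoModel.from_pretrained("shikhr/mgpt", trust_remote_code=True)
+ model.eval()
+
+ # The wrapped GPT consumes (batch, time) LongTensors of token ids (vocab_size=12000).
+ idx = torch.zeros((1, 1), dtype=torch.long)
+ # generate() lives on the inner GPT module, not on the PreTrainedModel wrapper.
+ out = model.model.generate(idx, max_new_tokens=64, temperature=1.0, top_k=50)
+ print(out.shape)  # torch.Size([1, 65]): the prompt plus 64 sampled tokens
+ ```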
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
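+
+ One figure recoverable from this commit: the uploaded `pytorch_model.bin` is 177,743,631 bytes (~178 MB) of float32 weights.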
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
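+
+ From `config.json` and `gpt_model.py` in this commit: a decoder-only GPT with 12 layers, 8 attention heads, 512-dimensional embeddings, a 1024-token context window, a 12000-token vocabulary, weight-tied token embedding and LM head, and no linear/LayerNorm biases, trained with the standard next-token cross-entropy objective. A back-of-the-envelope parameter count from those hyperparameters:
+
+ - token embedding / LM head (tied): 12000 × 512 = 6,144,000
+ - position embedding: 1024 × 512 = 524,288
+ - per block: attention (4 × 512²) + MLP (8 × 512²) + two LayerNorms (2 × 512) = 3,146,752; × 12 layers = 37,761,024
+ - final LayerNorm: 512
+
+ Total ≈ 44.4M parameters, consistent with the ~178 MB float32 checkpoint (4 bytes per parameter).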
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
config.json ADDED
@@ -0,0 +1,19 @@
+ {
+   "architectures": [
+     "MusicModel"
+   ],
+   "auto_map": {
+     "AutoConfig": "mgpt_config.MGPTConfig",
+     "AutoModel": "mgpt_modelling.MusicModel"
+   },
+   "bias": false,
+   "block_size": 1024,
+   "dropout": 0.1,
+   "model_type": "mgpt",
+   "n_embd": 512,
+   "n_head": 8,
+   "n_layer": 12,
+   "torch_dtype": "float32",
+   "transformers_version": "4.41.2",
+   "vocab_size": 12000
+ }
gpt_model.py ADDED
@@ -0,0 +1,258 @@
+ import math
+ import inspect
+ from dataclasses import dataclass
+
+ import torch
+ import torch.nn as nn
+ from torch.nn import functional as F
+
+
+ class LayerNorm(nn.Module):
+     """LayerNorm but with an optional bias. PyTorch doesn't support simply bias=False"""
+
+     def __init__(self, ndim, bias):
+         super().__init__()
+         self.weight = nn.Parameter(torch.ones(ndim))
+         self.bias = nn.Parameter(torch.zeros(ndim)) if bias else None
+
+     def forward(self, input):
+         return F.layer_norm(input, self.weight.shape, self.weight, self.bias, 1e-5)
+
+
+ class CausalSelfAttention(nn.Module):
+
+     def __init__(self, config):
+         super().__init__()
+         assert config.n_embd % config.n_head == 0
+         # key, query, value projections for all heads, but in a batch
+         self.c_attn = nn.Linear(config.n_embd, 3 * config.n_embd, bias=config.bias)
+         # output projection
+         self.c_proj = nn.Linear(config.n_embd, config.n_embd, bias=config.bias)
+         # regularization
+         self.attn_dropout = nn.Dropout(config.dropout)
+         self.resid_dropout = nn.Dropout(config.dropout)
+         self.n_head = config.n_head
+         self.n_embd = config.n_embd
+         self.dropout = config.dropout
+         # flash attention make GPU go brrrrr but support is only in PyTorch >= 2.0
+         self.flash = hasattr(torch.nn.functional, "scaled_dot_product_attention")
+         if not self.flash:
+             print(
+                 "WARNING: using slow attention. Flash Attention requires PyTorch >= 2.0"
+             )
+         # causal mask to ensure that attention is only applied to the left in the input sequence
+         self.register_buffer(
+             "bias",
+             torch.tril(torch.ones(config.block_size, config.block_size)).view(
+                 1, 1, config.block_size, config.block_size
+             ),
+         )
+
+     def forward(self, x):
+         B, T, C = (
+             x.size()
+         )  # batch size, sequence length, embedding dimensionality (n_embd)
+
+         # calculate query, key, values for all heads in batch and move head forward to be the batch dim
+         q, k, v = self.c_attn(x).split(self.n_embd, dim=2)
+         k = k.view(B, T, self.n_head, C // self.n_head).transpose(
+             1, 2
+         )  # (B, nh, T, hs)
+         q = q.view(B, T, self.n_head, C // self.n_head).transpose(
+             1, 2
+         )  # (B, nh, T, hs)
+         v = v.view(B, T, self.n_head, C // self.n_head).transpose(
+             1, 2
+         )  # (B, nh, T, hs)
+
+         # causal self-attention; Self-attend: (B, nh, T, hs) x (B, nh, hs, T) -> (B, nh, T, T)
+         if self.flash:
+             # efficient attention using Flash Attention CUDA kernels
+             y = torch.nn.functional.scaled_dot_product_attention(
+                 q,
+                 k,
+                 v,
+                 attn_mask=None,
+                 dropout_p=self.dropout if self.training else 0,
+                 is_causal=True,
+             )
+         else:
+             # manual implementation of attention
+             att = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))
+             att = att.masked_fill(self.bias[:, :, :T, :T] == 0, float("-inf"))
+             att = F.softmax(att, dim=-1)
+             att = self.attn_dropout(att)
+             y = att @ v  # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs)
+         y = (
+             y.transpose(1, 2).contiguous().view(B, T, C)
+         )  # re-assemble all head outputs side by side
+
+         # output projection
+         y = self.resid_dropout(self.c_proj(y))
+         return y
+
+
+ class MLP(nn.Module):
+
+     def __init__(self, config):
+         super().__init__()
+         self.c_fc = nn.Linear(config.n_embd, 4 * config.n_embd, bias=config.bias)
+         self.gelu = nn.GELU()
+         self.c_proj = nn.Linear(4 * config.n_embd, config.n_embd, bias=config.bias)
+         self.dropout = nn.Dropout(config.dropout)
+
+     def forward(self, x):
+         x = self.c_fc(x)
+         x = self.gelu(x)
+         x = self.c_proj(x)
+         x = self.dropout(x)
+         return x
+
+
+ class Block(nn.Module):
+
+     def __init__(self, config):
+         super().__init__()
+         self.ln_1 = LayerNorm(config.n_embd, bias=config.bias)
+         self.attn = CausalSelfAttention(config)
+         self.ln_2 = LayerNorm(config.n_embd, bias=config.bias)
+         self.mlp = MLP(config)
+
+     def forward(self, x):
+         x = x + self.attn(self.ln_1(x))
+         x = x + self.mlp(self.ln_2(x))
+         return x
+
+
+ class GPT(nn.Module):
+
+     def __init__(self, config):
+         super().__init__()
+         assert config.vocab_size is not None
+         assert config.block_size is not None
+         self.config = config
+
+         self.transformer = nn.ModuleDict(
+             dict(
+                 wte=nn.Embedding(config.vocab_size, config.n_embd),
+                 wpe=nn.Embedding(config.block_size, config.n_embd),
+                 drop=nn.Dropout(config.dropout),
+                 h=nn.ModuleList([Block(config) for _ in range(config.n_layer)]),
+                 ln_f=LayerNorm(config.n_embd, bias=config.bias),
+             )
+         )
+         self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
+         self.transformer.wte.weight = (
+             self.lm_head.weight
+         )  # https://paperswithcode.com/method/weight-tying
+
+         # init all weights
+         self.apply(self._init_weights)
+         # apply special scaled init to the residual projections, per GPT-2 paper
+         for pn, p in self.named_parameters():
+             if pn.endswith("c_proj.weight"):
+                 torch.nn.init.normal_(
+                     p, mean=0.0, std=0.02 / math.sqrt(2 * config.n_layer)
+                 )
+
+         # report number of parameters
+         print("number of parameters: %.2fM" % (self.get_num_params() / 1e6,))
+
+     def get_num_params(self, non_embedding=True):
+         """
+         Return the number of parameters in the model.
+         For non-embedding count (default), the position embeddings get subtracted.
+         The token embeddings would too, except due to the parameter sharing these
+         params are actually used as weights in the final layer, so we include them.
+         """
+         n_params = sum(p.numel() for p in self.parameters())
+         if non_embedding:
+             n_params -= self.transformer.wpe.weight.numel()
+         return n_params
+
+     def _init_weights(self, module):
+         if isinstance(module, nn.Linear):
+             torch.nn.init.normal_(module.weight, mean=0.0, std=0.02)
+             if module.bias is not None:
+                 torch.nn.init.zeros_(module.bias)
+         elif isinstance(module, nn.Embedding):
+             torch.nn.init.normal_(module.weight, mean=0.0, std=0.02)
+
+     def forward(self, idx, targets=None):
+         device = idx.device
+         b, t = idx.size()
+         assert (
+             t <= self.config.block_size
+         ), f"Cannot forward sequence of length {t}, block size is only {self.config.block_size}"
+         pos = torch.arange(0, t, dtype=torch.long, device=device)  # shape (t)
+
+         # forward the GPT model itself
+         tok_emb = self.transformer.wte(idx)  # token embeddings of shape (b, t, n_embd)
+         pos_emb = self.transformer.wpe(pos)  # position embeddings of shape (t, n_embd)
+         x = self.transformer.drop(tok_emb + pos_emb)
+         for block in self.transformer.h:
+             x = block(x)
+         x = self.transformer.ln_f(x)
+
+         if targets is not None:
+             # if we are given some desired targets also calculate the loss
+             logits = self.lm_head(x)
+             shift_logits = logits[..., :-1, :].contiguous()
+             shift_labels = targets[..., 1:].contiguous()
+             loss = F.cross_entropy(
+                 shift_logits.view(-1, shift_logits.size(-1)),
+                 shift_labels.view(-1),
+                 ignore_index=-1,
+             )
+         else:
+             # inference-time mini-optimization: only forward the lm_head on the very last position
+             logits = self.lm_head(
+                 x[:, [-1], :]
+             )  # note: using list [-1] to preserve the time dim
+             loss = None
+
+         return logits, loss
+
+     def crop_block_size(self, block_size):
+         # model surgery to decrease the block size if necessary
+         # e.g. we may load the GPT2 pretrained model checkpoint (block size 1024)
+         # but want to use a smaller block size for some smaller, simpler model
+         assert block_size <= self.config.block_size
+         self.config.block_size = block_size
+         self.transformer.wpe.weight = nn.Parameter(
+             self.transformer.wpe.weight[:block_size]
+         )
+         for block in self.transformer.h:
+             if hasattr(block.attn, "bias"):
+                 block.attn.bias = block.attn.bias[:, :, :block_size, :block_size]
+
+     @torch.no_grad()
+     def generate(self, idx, max_new_tokens, temperature=1.0, top_k=None):
+         """
+         Take a conditioning sequence of indices idx (LongTensor of shape (b,t)) and complete
+         the sequence max_new_tokens times, feeding the predictions back into the model each time.
+         Most likely you'll want to make sure to be in model.eval() mode of operation for this.
+         """
+         for _ in range(max_new_tokens):
+             # if the sequence context is growing too long we must crop it at block_size
+             idx_cond = (
+                 idx
+                 if idx.size(1) <= self.config.block_size
+                 else idx[:, -self.config.block_size :]
+             )
+             # forward the model to get the logits for the index in the sequence
+             logits, _ = self(idx_cond)
+             # pluck the logits at the final step and scale by desired temperature
+             logits = logits[:, -1, :] / temperature
+             # optionally crop the logits to only the top k options
+             if top_k is not None:
+                 v, _ = torch.topk(logits, min(top_k, logits.size(-1)))
+                 logits[logits < v[:, [-1]]] = -float("Inf")
+             # apply softmax to convert logits to (normalized) probabilities
+             probs = F.softmax(logits, dim=-1)
+             # sample from the distribution
+             idx_next = torch.multinomial(probs, num_samples=1)
+             # append sampled index to the running sequence and continue
+             idx = torch.cat((idx, idx_next), dim=1)
+
+         return idx
mgpt_config.py ADDED
@@ -0,0 +1,26 @@
+ from transformers import PretrainedConfig
+ from typing import List
+
+
+ class MGPTConfig(PretrainedConfig):
+     model_type = "mgpt"
+
+     def __init__(
+         self,
+         block_size: int = 1024,
+         vocab_size: int = 12000,
+         n_layer: int = 12,
+         n_head: int = 8,
+         n_embd: int = 512,
+         dropout: float = 0.1,
+         bias: bool = False,
+         **kwargs,
+     ):
+         self.block_size = block_size
+         self.vocab_size = vocab_size
+         self.n_layer = n_layer
+         self.n_head = n_head
+         self.n_embd = n_embd
+         self.dropout = dropout
+         self.bias = bias
+         super().__init__(**kwargs)
mgpt_modelling.py ADDED
@@ -0,0 +1,14 @@
+ from transformers import PreTrainedModel
+ from .mgpt_config import MGPTConfig
+ from .gpt_model import GPT
+
+ class MusicModel(PreTrainedModel):
+     config_class = MGPTConfig
+
+     def __init__(self, config):
+         super().__init__(config)
+         self.model = GPT(config)
+
+     def forward(self, inputs):
+         # GPT.forward returns (logits, loss); loss is None when no targets are passed.
+         return self.model(inputs)
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1fadba61202fe9624a77d9a8d201479e2b21220196f16c2eaf886a9b3719ae48
+ size 177743631