fffiloni committed on
Commit ad21ea3
1 Parent(s): 3fbf0c1

Upload 3 files

xdecoder/modules/attention.py ADDED
@@ -0,0 +1,489 @@
+ # Code copy from PyTorch, modified by Xueyan Zou
+
+ import warnings
+ from typing import Optional, Tuple
+
+ import torch
+ import torch.nn as nn
+ from torch import Tensor
+ from torch.nn.init import constant_, xavier_normal_, xavier_uniform_
+ from torch.nn.parameter import Parameter
+ from torch.overrides import has_torch_function, handle_torch_function
+ from torch.nn.functional import pad, linear, softmax, dropout
+
+
+ def multi_head_attention_forward(
+     query: Tensor,
+     key: Tensor,
+     value: Tensor,
+     embed_dim_to_check: int,
+     num_heads: int,
+     in_proj_weight: Tensor,
+     in_proj_bias: Tensor,
+     bias_k: Optional[Tensor],
+     bias_v: Optional[Tensor],
+     add_zero_attn: bool,
+     dropout_p: float,
+     out_proj_weight: Tensor,
+     out_proj_bias: Tensor,
+     training: bool = True,
+     key_padding_mask: Optional[Tensor] = None,
+     need_weights: bool = True,
+     attn_mask: Optional[Tensor] = None,
+     use_separate_proj_weight: bool = False,
+     q_proj_weight: Optional[Tensor] = None,
+     k_proj_weight: Optional[Tensor] = None,
+     v_proj_weight: Optional[Tensor] = None,
+     static_k: Optional[Tensor] = None,
+     static_v: Optional[Tensor] = None,
+ ) -> Tuple[Tensor, Optional[Tensor]]:
+     r"""
+     Args:
+         query, key, value: map a query and a set of key-value pairs to an output.
+             See "Attention Is All You Need" for more details.
+         embed_dim_to_check: total dimension of the model.
+         num_heads: parallel attention heads.
+         in_proj_weight, in_proj_bias: input projection weight and bias.
+         bias_k, bias_v: bias of the key and value sequences to be added at dim=0.
+         add_zero_attn: add a new batch of zeros to the key and
+                        value sequences at dim=1.
+         dropout_p: probability of an element to be zeroed.
+         out_proj_weight, out_proj_bias: the output projection weight and bias.
+         training: apply dropout if ``training`` is ``True``.
+         key_padding_mask: if provided, specified padding elements in the key will
+             be ignored by the attention. This is a binary mask. When the value is True,
+             the corresponding value on the attention layer will be filled with -inf.
+         need_weights: output attn_output_weights.
+         attn_mask: 2D or 3D mask that prevents attention to certain positions. A 2D mask will be broadcast for all
+             the batches while a 3D mask allows a different mask to be specified for the entries of each batch.
+         use_separate_proj_weight: the function accepts the proj. weights for query, key,
+             and value in different forms. If false, in_proj_weight will be used, which is
+             a combination of q_proj_weight, k_proj_weight, v_proj_weight.
+         q_proj_weight, k_proj_weight, v_proj_weight, in_proj_bias: input projection weight and bias.
+         static_k, static_v: static key and value used for attention operators.
+
+
+     Shape:
+         Inputs:
+         - query: :math:`(L, N, E)` where L is the target sequence length, N is the batch size, E is
+           the embedding dimension.
+         - key: :math:`(S, N, E)`, where S is the source sequence length, N is the batch size, E is
+           the embedding dimension.
+         - value: :math:`(S, N, E)` where S is the source sequence length, N is the batch size, E is
+           the embedding dimension.
+         - key_padding_mask: :math:`(N, S)` where N is the batch size, S is the source sequence length.
+           If a ByteTensor is provided, the non-zero positions will be ignored while the zero positions
+           will be unchanged. If a BoolTensor is provided, the positions with the
+           value of ``True`` will be ignored while the positions with the value of ``False`` will be unchanged.
+         - attn_mask: 2D mask :math:`(L, S)` where L is the target sequence length, S is the source sequence length.
+           3D mask :math:`(N*num_heads, L, S)` where N is the batch size, L is the target sequence length,
+           S is the source sequence length. attn_mask ensures that position i is allowed to attend the unmasked
+           positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend
+           while the zero positions will be unchanged. If a BoolTensor is provided, positions with ``True``
+           are not allowed to attend while ``False`` values will be unchanged. If a FloatTensor
+           is provided, it will be added to the attention weight.
+         - static_k: :math:`(N*num_heads, S, E/num_heads)`, where S is the source sequence length,
+           N is the batch size, E is the embedding dimension. E/num_heads is the head dimension.
+         - static_v: :math:`(N*num_heads, S, E/num_heads)`, where S is the source sequence length,
+           N is the batch size, E is the embedding dimension. E/num_heads is the head dimension.
+
+         Outputs:
+         - attn_output: :math:`(L, N, E)` where L is the target sequence length, N is the batch size,
+           E is the embedding dimension.
+         - attn_output_weights: :math:`(N, L, S)` where N is the batch size,
+           L is the target sequence length, S is the source sequence length.
+     """
+     tens_ops = (query, key, value, in_proj_weight, in_proj_bias, bias_k, bias_v, out_proj_weight, out_proj_bias)
+     if has_torch_function(tens_ops):
+         return handle_torch_function(
+             multi_head_attention_forward,
+             tens_ops,
+             query,
+             key,
+             value,
+             embed_dim_to_check,
+             num_heads,
+             in_proj_weight,
+             in_proj_bias,
+             bias_k,
+             bias_v,
+             add_zero_attn,
+             dropout_p,
+             out_proj_weight,
+             out_proj_bias,
+             training=training,
+             key_padding_mask=key_padding_mask,
+             need_weights=need_weights,
+             attn_mask=attn_mask,
+             use_separate_proj_weight=use_separate_proj_weight,
+             q_proj_weight=q_proj_weight,
+             k_proj_weight=k_proj_weight,
+             v_proj_weight=v_proj_weight,
+             static_k=static_k,
+             static_v=static_v,
+         )
+     tgt_len, bsz, embed_dim = query.size()
+     assert embed_dim == embed_dim_to_check
+     # allow MHA to have different sizes for the feature dimension
+     assert key.size(0) == value.size(0) and key.size(1) == value.size(1)
+
+     head_dim = embed_dim // num_heads
+     assert head_dim * num_heads == embed_dim, "embed_dim must be divisible by num_heads"
+     scaling = float(head_dim) ** -0.5
+
+     if not use_separate_proj_weight:
+         if (query is key or torch.equal(query, key)) and (key is value or torch.equal(key, value)):
+             # self-attention
+             q, k, v = linear(query, in_proj_weight, in_proj_bias).chunk(3, dim=-1)
+
+         elif key is value or torch.equal(key, value):
+             # encoder-decoder attention
+             # This is inline in_proj function with in_proj_weight and in_proj_bias
+             _b = in_proj_bias
+             _start = 0
+             _end = embed_dim
+             _w = in_proj_weight[_start:_end, :]
+             if _b is not None:
+                 _b = _b[_start:_end]
+             q = linear(query, _w, _b)
+
+             if key is None:
+                 assert value is None
+                 k = None
+                 v = None
+             else:
+
+                 # This is inline in_proj function with in_proj_weight and in_proj_bias
+                 _b = in_proj_bias
+                 _start = embed_dim
+                 _end = None
+                 _w = in_proj_weight[_start:, :]
+                 if _b is not None:
+                     _b = _b[_start:]
+                 k, v = linear(key, _w, _b).chunk(2, dim=-1)
+
+         else:
+             # This is inline in_proj function with in_proj_weight and in_proj_bias
+             _b = in_proj_bias
+             _start = 0
+             _end = embed_dim
+             _w = in_proj_weight[_start:_end, :]
+             if _b is not None:
+                 _b = _b[_start:_end]
+             q = linear(query, _w, _b)
+
+             # This is inline in_proj function with in_proj_weight and in_proj_bias
+             _b = in_proj_bias
+             _start = embed_dim
+             _end = embed_dim * 2
+             _w = in_proj_weight[_start:_end, :]
+             if _b is not None:
+                 _b = _b[_start:_end]
+             k = linear(key, _w, _b)
+
+             # This is inline in_proj function with in_proj_weight and in_proj_bias
+             _b = in_proj_bias
+             _start = embed_dim * 2
+             _end = None
+             _w = in_proj_weight[_start:, :]
+             if _b is not None:
+                 _b = _b[_start:]
+             v = linear(value, _w, _b)
+     else:
+         q_proj_weight_non_opt = torch.jit._unwrap_optional(q_proj_weight)
+         len1, len2 = q_proj_weight_non_opt.size()
+         assert len1 == embed_dim and len2 == query.size(-1)
+
+         k_proj_weight_non_opt = torch.jit._unwrap_optional(k_proj_weight)
+         len1, len2 = k_proj_weight_non_opt.size()
+         assert len1 == embed_dim and len2 == key.size(-1)
+
+         v_proj_weight_non_opt = torch.jit._unwrap_optional(v_proj_weight)
+         len1, len2 = v_proj_weight_non_opt.size()
+         assert len1 == embed_dim and len2 == value.size(-1)
+
+         if in_proj_bias is not None:
+             q = linear(query, q_proj_weight_non_opt, in_proj_bias[0:embed_dim])
+             k = linear(key, k_proj_weight_non_opt, in_proj_bias[embed_dim : (embed_dim * 2)])
+             v = linear(value, v_proj_weight_non_opt, in_proj_bias[(embed_dim * 2) :])
+         else:
+             q = linear(query, q_proj_weight_non_opt, in_proj_bias)
+             k = linear(key, k_proj_weight_non_opt, in_proj_bias)
+             v = linear(value, v_proj_weight_non_opt, in_proj_bias)
+     q = q * scaling
+
+     if attn_mask is not None:
+         assert (
+             attn_mask.dtype == torch.float32
+             or attn_mask.dtype == torch.float64
+             or attn_mask.dtype == torch.float16
+             or attn_mask.dtype == torch.uint8
+             or attn_mask.dtype == torch.bool
+         ), "Only float, byte, and bool types are supported for attn_mask, not {}".format(attn_mask.dtype)
+         if attn_mask.dtype == torch.uint8:
+             warnings.warn("Byte tensor for attn_mask in nn.MultiheadAttention is deprecated. Use bool tensor instead.")
+             attn_mask = attn_mask.to(torch.bool)
+
+         if attn_mask.dim() == 2:
+             attn_mask = attn_mask.unsqueeze(0)
+             if list(attn_mask.size()) != [1, query.size(0), key.size(0)]:
+                 raise RuntimeError("The size of the 2D attn_mask is not correct.")
+         elif attn_mask.dim() == 3:
+             if list(attn_mask.size()) != [bsz * num_heads, query.size(0), key.size(0)]:
+                 raise RuntimeError("The size of the 3D attn_mask is not correct.")
+         else:
+             raise RuntimeError("attn_mask's dimension {} is not supported".format(attn_mask.dim()))
+         # attn_mask's dim is 3 now.
+
+     # convert ByteTensor key_padding_mask to bool
+     if key_padding_mask is not None and key_padding_mask.dtype == torch.uint8:
+         warnings.warn(
+             "Byte tensor for key_padding_mask in nn.MultiheadAttention is deprecated. Use bool tensor instead."
+         )
+         key_padding_mask = key_padding_mask.to(torch.bool)
+
+     if bias_k is not None and bias_v is not None:
+         if static_k is None and static_v is None:
+             k = torch.cat([k, bias_k.repeat(1, bsz, 1)])
+             v = torch.cat([v, bias_v.repeat(1, bsz, 1)])
+             if attn_mask is not None:
+                 attn_mask = pad(attn_mask, (0, 1))
+             if key_padding_mask is not None:
+                 key_padding_mask = pad(key_padding_mask, (0, 1))
+         else:
+             assert static_k is None, "bias cannot be added to static key."
+             assert static_v is None, "bias cannot be added to static value."
+     else:
+         assert bias_k is None
+         assert bias_v is None
+
+     q = q.contiguous().view(tgt_len, bsz * num_heads, head_dim).transpose(0, 1)
+     if k is not None:
+         k = k.contiguous().view(-1, bsz * num_heads, head_dim).transpose(0, 1)
+     if v is not None:
+         v = v.contiguous().view(-1, bsz * num_heads, head_dim).transpose(0, 1)
+
+     if static_k is not None:
+         assert static_k.size(0) == bsz * num_heads
+         assert static_k.size(2) == head_dim
+         k = static_k
+
+     if static_v is not None:
+         assert static_v.size(0) == bsz * num_heads
+         assert static_v.size(2) == head_dim
+         v = static_v
+
+     src_len = k.size(1)
+
+     if key_padding_mask is not None:
+         # assert key_padding_mask.size(0) == bsz
+         assert key_padding_mask.size(1) == src_len
+
+     if add_zero_attn:
+         src_len += 1
+         k = torch.cat([k, torch.zeros((k.size(0), 1) + k.size()[2:], dtype=k.dtype, device=k.device)], dim=1)
+         v = torch.cat([v, torch.zeros((v.size(0), 1) + v.size()[2:], dtype=v.dtype, device=v.device)], dim=1)
+         if attn_mask is not None:
+             attn_mask = pad(attn_mask, (0, 1))
+         if key_padding_mask is not None:
+             key_padding_mask = pad(key_padding_mask, (0, 1))
+
+     attn_output_weights = torch.bmm(q, k.transpose(1, 2))
+     assert list(attn_output_weights.size()) == [bsz * num_heads, tgt_len, src_len]
+
+     if attn_mask is not None:
+         if attn_mask.dtype == torch.bool:
+             attn_output_weights.masked_fill_(attn_mask, float("-inf"))
+         else:
+             attn_output_weights += attn_mask
+
+     if key_padding_mask is not None:
+         attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len)
+         attn_output_weights = attn_output_weights.masked_fill(
+             key_padding_mask.unsqueeze(1),
+             float("-inf"),
+         )
+         attn_output_weights = attn_output_weights.view(bsz * num_heads, tgt_len, src_len)
+
+     attn_output_weights = softmax(attn_output_weights, dim=-1)
+     attn_output_weights = dropout(attn_output_weights, p=dropout_p, training=training)
+
+     attn_output = torch.bmm(attn_output_weights, v)
+     assert list(attn_output.size()) == [bsz * num_heads, tgt_len, head_dim]
+     attn_output = attn_output.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim)
+     attn_output = linear(attn_output, out_proj_weight, out_proj_bias)
+
+     if need_weights:
+         # average attention weights over heads
+         attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len)
+         return attn_output, attn_output_weights.sum(dim=1) / num_heads
+     else:
+         return attn_output, None
+
+
+ # This class exists solely for Transformer; it has an annotation stating
+ # that bias is never None, which appeases TorchScript
+ class _LinearWithBias(nn.Linear):
+     bias: Tensor  # type: ignore
+
+     def __init__(self, in_features: int, out_features: int) -> None:
+         super().__init__(in_features, out_features, bias=True)  # type: ignore
+
+
+ class MultiheadAttention(nn.Module):
+     r"""Allows the model to jointly attend to information
+     from different representation subspaces.
+     See `Attention Is All You Need <https://arxiv.org/abs/1706.03762>`_
+
+     .. math::
+         \text{MultiHead}(Q, K, V) = \text{Concat}(head_1,\dots,head_h)W^O
+
+     where :math:`head_i = \text{Attention}(QW_i^Q, KW_i^K, VW_i^V)`.
+
+     Args:
+         embed_dim: total dimension of the model.
+         num_heads: parallel attention heads.
+         dropout: a Dropout layer on attn_output_weights. Default: 0.0.
+         bias: add bias as module parameter. Default: True.
+         add_bias_kv: add bias to the key and value sequences at dim=0.
+         add_zero_attn: add a new batch of zeros to the key and
+                        value sequences at dim=1.
+         kdim: total number of features in key. Default: None.
+         vdim: total number of features in value. Default: None.
+
+         Note that if :attr:`kdim` and :attr:`vdim` are None, they will be set
+         to :attr:`embed_dim` such that query, key, and value have the same
+         number of features.
+
+     Examples::
+
+         >>> multihead_attn = nn.MultiheadAttention(embed_dim, num_heads)
+         >>> attn_output, attn_output_weights = multihead_attn(query, key, value)
+     """
+     bias_k: Optional[torch.Tensor]
+     bias_v: Optional[torch.Tensor]
+
+     def __init__(self, embed_dim, num_heads, dropout=0., bias=True, add_bias_kv=False, add_zero_attn=False, kdim=None, vdim=None):
+         super(MultiheadAttention, self).__init__()
+         self.embed_dim = embed_dim
+         self.kdim = kdim if kdim is not None else embed_dim
+         self.vdim = vdim if vdim is not None else embed_dim
+         self._qkv_same_embed_dim = self.kdim == embed_dim and self.vdim == embed_dim
+
+         self.num_heads = num_heads
+         self.dropout = dropout
+         self.head_dim = embed_dim // num_heads
+         assert self.head_dim * num_heads == self.embed_dim, "embed_dim must be divisible by num_heads"
+
+         if self._qkv_same_embed_dim is False:
+             self.q_proj_weight = Parameter(torch.Tensor(embed_dim, embed_dim))
+             self.k_proj_weight = Parameter(torch.Tensor(embed_dim, self.kdim))
+             self.v_proj_weight = Parameter(torch.Tensor(embed_dim, self.vdim))
+             self.register_parameter('in_proj_weight', None)
+         else:
+             self.in_proj_weight = Parameter(torch.empty(3 * embed_dim, embed_dim))
+             self.register_parameter('q_proj_weight', None)
+             self.register_parameter('k_proj_weight', None)
+             self.register_parameter('v_proj_weight', None)
+
+         if bias:
+             self.in_proj_bias = Parameter(torch.empty(3 * embed_dim))
+         else:
+             self.register_parameter('in_proj_bias', None)
+         self.out_proj = _LinearWithBias(embed_dim, embed_dim)
+
+         if add_bias_kv:
+             self.bias_k = Parameter(torch.empty(1, 1, embed_dim))
+             self.bias_v = Parameter(torch.empty(1, 1, embed_dim))
+         else:
+             self.bias_k = self.bias_v = None
+
+         self.add_zero_attn = add_zero_attn
+
+         self._reset_parameters()
+
+     def _reset_parameters(self):
+         if self._qkv_same_embed_dim:
+             xavier_uniform_(self.in_proj_weight)
+         else:
+             xavier_uniform_(self.q_proj_weight)
+             xavier_uniform_(self.k_proj_weight)
+             xavier_uniform_(self.v_proj_weight)
+
+         if self.in_proj_bias is not None:
+             constant_(self.in_proj_bias, 0.)
+             constant_(self.out_proj.bias, 0.)
+         if self.bias_k is not None:
+             xavier_normal_(self.bias_k)
+         if self.bias_v is not None:
+             xavier_normal_(self.bias_v)
+
+     def __setstate__(self, state):
+         # Support loading old MultiheadAttention checkpoints generated by v1.1.0
+         if '_qkv_same_embed_dim' not in state:
+             state['_qkv_same_embed_dim'] = True
+
+         super(MultiheadAttention, self).__setstate__(state)
+
+     def forward(self, query: Tensor, key: Tensor, value: Tensor, key_padding_mask: Optional[Tensor] = None,
+                 need_weights: bool = True, attn_mask: Optional[Tensor] = None) -> Tuple[Tensor, Optional[Tensor]]:
+         r"""
+         Args:
+             query, key, value: map a query and a set of key-value pairs to an output.
+                 See "Attention Is All You Need" for more details.
+             key_padding_mask: if provided, specified padding elements in the key will
+                 be ignored by the attention. When given a binary mask and a value is True,
+                 the corresponding value on the attention layer will be ignored. When given
+                 a byte mask and a value is non-zero, the corresponding value on the attention
+                 layer will be ignored.
+             need_weights: output attn_output_weights.
+             attn_mask: 2D or 3D mask that prevents attention to certain positions. A 2D mask will be broadcast for all
+                 the batches while a 3D mask allows a different mask to be specified for the entries of each batch.
+
+         Shapes for inputs:
+             - query: :math:`(L, N, E)` where L is the target sequence length, N is the batch size, E is
+               the embedding dimension.
+             - key: :math:`(S, N, E)`, where S is the source sequence length, N is the batch size, E is
+               the embedding dimension.
+             - value: :math:`(S, N, E)` where S is the source sequence length, N is the batch size, E is
+               the embedding dimension.
+             - key_padding_mask: :math:`(N, S)` where N is the batch size, S is the source sequence length.
+               If a ByteTensor is provided, the non-zero positions will be ignored while the zero
+               positions will be unchanged. If a BoolTensor is provided, the positions with the
+               value of ``True`` will be ignored while the positions with the value of ``False`` will be unchanged.
+             - attn_mask: if a 2D mask: :math:`(L, S)` where L is the target sequence length, S is the
+               source sequence length.
+
+               If a 3D mask: :math:`(N\cdot\text{num\_heads}, L, S)` where N is the batch size, L is the target sequence
+               length, S is the source sequence length. ``attn_mask`` ensures that position i is allowed to attend
+               the unmasked positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend
+               while the zero positions will be unchanged. If a BoolTensor is provided, positions with ``True``
+               are not allowed to attend while ``False`` values will be unchanged. If a FloatTensor
+               is provided, it will be added to the attention weight.
+
+         Shapes for outputs:
+             - attn_output: :math:`(L, N, E)` where L is the target sequence length, N is the batch size,
+               E is the embedding dimension.
+             - attn_output_weights: :math:`(N, L, S)` where N is the batch size,
+               L is the target sequence length, S is the source sequence length.
+         """
+         if not self._qkv_same_embed_dim:
+             return multi_head_attention_forward(
+                 query, key, value, self.embed_dim, self.num_heads,
+                 self.in_proj_weight, self.in_proj_bias,
+                 self.bias_k, self.bias_v, self.add_zero_attn,
+                 self.dropout, self.out_proj.weight, self.out_proj.bias,
+                 training=self.training,
+                 key_padding_mask=key_padding_mask, need_weights=need_weights,
+                 attn_mask=attn_mask, use_separate_proj_weight=True,
+                 q_proj_weight=self.q_proj_weight, k_proj_weight=self.k_proj_weight,
+                 v_proj_weight=self.v_proj_weight)
+         else:
+             return multi_head_attention_forward(
+                 query, key, value, self.embed_dim, self.num_heads,
+                 self.in_proj_weight, self.in_proj_bias,
+                 self.bias_k, self.bias_v, self.add_zero_attn,
+                 self.dropout, self.out_proj.weight, self.out_proj.bias,
+                 training=self.training,
+                 key_padding_mask=key_padding_mask, need_weights=need_weights,
+                 attn_mask=attn_mask)
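A minimal usage sketch for the MultiheadAttention module above, assuming the sequence-first (L, N, E) layout documented in its docstring; the tensor sizes below are illustrative only::

    import torch

    # Illustrative sizes: L=8 query tokens, S=16 key/value tokens, batch N=2, E=256, 8 heads.
    attn = MultiheadAttention(embed_dim=256, num_heads=8, dropout=0.0)
    query = torch.randn(8, 2, 256)   # (L, N, E)
    key = torch.randn(16, 2, 256)    # (S, N, E)
    value = torch.randn(16, 2, 256)  # (S, N, E)

    out, weights = attn(query, key, value)
    # out: (L, N, E) = (8, 2, 256); weights: (N, L, S) = (2, 8, 16), averaged over heads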
xdecoder/modules/position_encoding.py ADDED
@@ -0,0 +1,64 @@
+ # Copyright (c) Facebook, Inc. and its affiliates.
+ ## Modified by Bowen Cheng from: https://github.com/facebookresearch/detr/blob/master/models/position_encoding.py
+ """
+ Various positional encodings for the transformer.
+ """
+ import math
+
+ import torch
+ from torch import nn
+
+
+ class PositionEmbeddingSine(nn.Module):
+     """
+     This is a more standard version of the position embedding, very similar to the one
+     used by the Attention is all you need paper, generalized to work on images.
+     """
+
+     def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None):
+         super().__init__()
+         self.num_pos_feats = num_pos_feats
+         self.temperature = temperature
+         self.normalize = normalize
+         if scale is not None and normalize is False:
+             raise ValueError("normalize should be True if scale is passed")
+         if scale is None:
+             scale = 2 * math.pi
+         self.scale = scale
+
+     def forward(self, x, mask=None):
+         if mask is None:
+             mask = torch.zeros((x.size(0), x.size(2), x.size(3)), device=x.device, dtype=torch.bool)
+         not_mask = ~mask
+         y_embed = not_mask.cumsum(1, dtype=x.dtype)
+         x_embed = not_mask.cumsum(2, dtype=x.dtype)
+         if self.normalize:
+             eps = 1e-6
+             y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale
+             x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale
+
+         dim_t = torch.arange(self.num_pos_feats, dtype=x.dtype, device=x.device)
+         dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats)
+
+         pos_x = x_embed[:, :, :, None] / dim_t
+         pos_y = y_embed[:, :, :, None] / dim_t
+         pos_x = torch.stack(
+             (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4
+         ).flatten(3)
+         pos_y = torch.stack(
+             (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4
+         ).flatten(3)
+         pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)
+         return pos
+
+     def __repr__(self, _repr_indent=4):
+         head = "Positional encoding " + self.__class__.__name__
+         body = [
+             "num_pos_feats: {}".format(self.num_pos_feats),
+             "temperature: {}".format(self.temperature),
+             "normalize: {}".format(self.normalize),
+             "scale: {}".format(self.scale),
+         ]
+         # _repr_indent = 4
+         lines = [head] + [" " * _repr_indent + line for line in body]
+         return "\n".join(lines)
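A short sketch of how PositionEmbeddingSine can be applied to a backbone feature map; the feature shape and num_pos_feats below are illustrative (the returned encoding has 2 * num_pos_feats channels, since the y and x sine/cosine features are concatenated)::

    import torch

    pos_enc = PositionEmbeddingSine(num_pos_feats=128, normalize=True)
    features = torch.randn(2, 256, 32, 32)  # (N, C, H, W), illustrative sizes
    pos = pos_enc(features)                 # mask defaults to "all positions valid"
    # pos: (N, 2 * num_pos_feats, H, W) = (2, 256, 32, 32), same spatial size as the features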
xdecoder/modules/postprocessing.py ADDED
@@ -0,0 +1,122 @@
+ # Copyright (c) Facebook, Inc. and its affiliates.
+ import torch
+ from torch.nn import functional as F
+
+ from detectron2.structures import Instances, ROIMasks
+
+
+ # perhaps should rename to "resize_instance"
+ def detector_postprocess(
+     results: Instances, output_height: int, output_width: int, mask_threshold: float = 0.5
+ ):
+     """
+     Resize the output instances.
+     The input images are often resized when entering an object detector.
+     As a result, we often need the outputs of the detector in a different
+     resolution from its inputs.
+
+     This function will resize the raw outputs of an R-CNN detector
+     to produce outputs according to the desired output resolution.
+
+     Args:
+         results (Instances): the raw outputs from the detector.
+             `results.image_size` contains the input image resolution the detector sees.
+             This object might be modified in-place.
+         output_height, output_width: the desired output resolution.
+
+     Returns:
+         Instances: the resized output from the model, based on the output resolution
+     """
+     if isinstance(output_width, torch.Tensor):
+         # This shape might (but not necessarily) be tensors during tracing.
+         # Converts integer tensors to float temporaries to ensure true
+         # division is performed when computing scale_x and scale_y.
+         output_width_tmp = output_width.float()
+         output_height_tmp = output_height.float()
+         new_size = torch.stack([output_height, output_width])
+     else:
+         new_size = (output_height, output_width)
+         output_width_tmp = output_width
+         output_height_tmp = output_height
+
+     scale_x, scale_y = (
+         output_width_tmp / results.image_size[1],
+         output_height_tmp / results.image_size[0],
+     )
+     results = Instances(new_size, **results.get_fields())
+
+     if results.has("pred_boxes"):
+         output_boxes = results.pred_boxes
+     elif results.has("proposal_boxes"):
+         output_boxes = results.proposal_boxes
+     else:
+         output_boxes = None
+     assert output_boxes is not None, "Predictions must contain boxes!"
+
+     output_boxes.scale(scale_x, scale_y)
+     output_boxes.clip(results.image_size)
+
+     results = results[output_boxes.nonempty()]
+
+     if results.has("pred_masks"):
+         if isinstance(results.pred_masks, ROIMasks):
+             roi_masks = results.pred_masks
+         else:
+             # pred_masks is a tensor of shape (N, 1, M, M)
+             roi_masks = ROIMasks(results.pred_masks[:, 0, :, :])
+         results.pred_masks = roi_masks.to_bitmasks(
+             results.pred_boxes, output_height, output_width, mask_threshold
+         ).tensor  # TODO return ROIMasks/BitMask object in the future
+
+     if results.has("pred_keypoints"):
+         results.pred_keypoints[:, :, 0] *= scale_x
+         results.pred_keypoints[:, :, 1] *= scale_y
+
+     return results
+
+ def bbox_postprocess(result, input_size, img_size, output_height, output_width):
+     """
+     Convert boxes from [xc, yc, w, h] in range [0, 1] to [x1, y1, x2, y2] in range [0, w], [0, h].
+     """
+     if result is None:
+         return None
+
+     scale = torch.tensor([input_size[1], input_size[0], input_size[1], input_size[0]])[None,:].to(result.device)
+     result = result.sigmoid() * scale
+     x1,y1,x2,y2 = result[:,0] - result[:,2]/2, result[:,1] - result[:,3]/2, result[:,0] + result[:,2]/2, result[:,1] + result[:,3]/2
+     h,w = img_size
+
+     x1 = x1.clamp(min=0, max=w)
+     y1 = y1.clamp(min=0, max=h)
+     x2 = x2.clamp(min=0, max=w)
+     y2 = y2.clamp(min=0, max=h)
+
+     box = torch.stack([x1,y1,x2,y2]).permute(1,0)
+     scale = torch.tensor([output_width/w, output_height/h, output_width/w, output_height/h])[None,:].to(result.device)
+     box = box*scale
+     return box
+
+ def sem_seg_postprocess(result, img_size, output_height, output_width):
+     """
+     Return semantic segmentation predictions in the original resolution.
+
+     The input images are often resized when entering the semantic segmentor. Moreover, in some
+     cases, they are also padded inside the segmentor to be divisible by the maximum network stride.
+     As a result, we often need the predictions of the segmentor in a different
+     resolution from its inputs.
+
+     Args:
+         result (Tensor): semantic segmentation prediction logits. A tensor of shape (C, H, W),
+             where C is the number of classes, and H, W are the height and width of the prediction.
+         img_size (tuple): the image size that the segmentor takes as input.
+         output_height, output_width: the desired output resolution.
+
+     Returns:
+         semantic segmentation prediction (Tensor): A tensor of the shape
+         (C, output_height, output_width) that contains per-pixel soft predictions.
+     """
+     result = result[:, : img_size[0], : img_size[1]].expand(1, -1, -1, -1)
+     result = F.interpolate(
+         result, size=(output_height, output_width), mode="bilinear", align_corners=False
+     )[0]
+     return result
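A small sketch of sem_seg_postprocess under assumed sizes: logits predicted on a padded input are cropped to the valid region and bilinearly upsampled to the original image resolution::

    import torch

    # Assumed sizes: 134 classes, 512x768 padded input, 480x768 valid (resized) region,
    # and a 720x1152 original image.
    logits = torch.randn(134, 512, 768)  # (C, H_pad, W_pad)
    sem_seg = sem_seg_postprocess(logits, img_size=(480, 768),
                                  output_height=720, output_width=1152)
    # sem_seg: (C, output_height, output_width) = (134, 720, 1152)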