ccdv committed on
Commit f7812f2
1 Parent(s): 6cddfd1
README.md ADDED
@@ -0,0 +1,109 @@
---
language:
- en
tags:
- summarization
datasets:
- ccdv/mediasum
metrics:
- rouge
model-index:
- name: ccdv/lsg-bart-base-16384-mediasum
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

**This model relies on a custom modeling file; you need to add trust_remote_code=True to load it.**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-16384-mediasum", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-16384-mediasum", trust_remote_code=True)

text = "Replace by what you want."
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, device=0)
generated_text = pipe(
    text,
    truncation=True,
    max_length=64,
    no_repeat_ngram_size=7,
    num_beams=2,
    early_stopping=True
)
```

# ccdv/lsg-bart-base-16384-mediasum

This model is a fine-tuned version of [ccdv/lsg-bart-base-4096-mediasum](https://huggingface.co/ccdv/lsg-bart-base-4096-mediasum) on the roberta_prepended configuration of the [ccdv/mediasum](https://huggingface.co/datasets/ccdv/mediasum) dataset. \
The model was converted to handle sequences up to 16384 tokens long and fine-tuned accordingly for 1 epoch. \
It achieves the following results on the test set:

| Length | Global tokens | Fine-tuning | Block Size | Sparsity | Connections | R1    | R2    | RL    | RLsum |
|:------ |:------------- |:----------- |:---------- |:-------- |:----------- |:----- |:----- |:----- |:----- |
| 16384  | 64            | Full        | 256        | 0        | 768         | 35.31 | 18.35 | 31.81 | 32.47 |
| 16384  | 1             | Full        | 256        | 0        | 768         | 35.21 | 18.20 | 31.73 | 32.37 |
| 16384  | 64            | Global only | 256        | 0        | 768         | 35.22 | 18.08 | 31.54 | 32.21 |
| 16384  | 1             | None        | 256        | 0        | 768         | 35.17 | 18.13 | 31.54 | 32.20 |

Reference model:

| Length | Global tokens | Fine-tuning | Block Size | Sparsity | Connections | R1    | R2    | RL    | RLsum |
|:------ |:------------- |:----------- |:---------- |:-------- |:----------- |:----- |:----- |:----- |:----- |
| 4096   | 1             | -           | 256        | 0        | 768         | 35.16 | 18.13 | 31.54 | 32.20 |

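A minimal sketch of scoring generated summaries with ROUGE using the `evaluate` library follows; this is an assumption for illustration, not necessarily the exact scoring script behind the table above:

```python
import evaluate

# Sketch: compute ROUGE-1/2/L for a handful of generated summaries.
rouge = evaluate.load("rouge")
predictions = ["the generated summary goes here"]   # placeholder
references = ["the reference summary goes here"]    # placeholder
scores = rouge.compute(predictions=predictions, references=references)
print({k: round(v * 100, 2) for k, v in scores.items()})  # same 0-100 scale as the table
```
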
## Model description
The model relies on Local-Sparse-Global attention to handle long sequences:
![attn](attn.png)

The model has about 145 million parameters (6 encoder layers, 6 decoder layers). \
It is warm-started from [ccdv/lsg-bart-base-4096-mediasum](https://huggingface.co/ccdv/lsg-bart-base-4096-mediasum), converted to handle long sequences (encoder only), and fine-tuned.

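The LSG attention settings of this checkpoint live in its config.json (num_global_tokens, block_size, sparse_block_size, sparsity_factor, ...). As a minimal sketch, they can in principle be overridden at load time, assuming the usual transformers behavior of forwarding unused `from_pretrained` keyword arguments to the configuration; the values below are simply this checkpoint's defaults, not tuning recommendations:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Sketch: override the LSG attention settings at load time.
# Parameter names come from this repository's config.json; values shown
# are the checkpoint defaults.
model = AutoModelForSeq2SeqLM.from_pretrained(
    "ccdv/lsg-bart-base-16384-mediasum",
    trust_remote_code=True,
    num_global_tokens=64,   # global tokens prepended to the sequence
    block_size=256,         # local attention block size
    sparse_block_size=0,    # 0 disables sparse attention (as in this checkpoint)
    sparsity_factor=4,      # only relevant when sparse attention is enabled
    pool_with_global=True,
)
tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-16384-mediasum", trust_remote_code=True)
```
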
## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1.0

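For reference, a minimal sketch of how these settings map onto `Seq2SeqTrainingArguments`; the output directory is a hypothetical placeholder, and anything not listed above keeps its library default (the Adam betas and epsilon above already are the defaults):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch mirroring the hyperparameters listed above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="lsg-bart-base-16384-mediasum",  # hypothetical
    learning_rate=8e-5,
    per_device_train_batch_size=8,   # train_batch_size: 8
    gradient_accumulation_steps=4,   # 8 * 4 = 32 total train batch size
    num_train_epochs=1.0,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)
```
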
### Generation hyperparameters

The following hyperparameters were used during generation:
- dataset_name: ccdv/mediasum
- dataset_config_name: roberta_prepended
- eval_batch_size: 8
- eval_samples: 10000
- early_stopping: True
- ignore_pad_token_for_loss: True
- length_penalty: 2.0
- max_length: 128
- min_length: 3
- num_beams: 5
- no_repeat_ngram_size: None
- seed: 123

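Equivalently, a minimal sketch of running generation directly with the beam-search settings above; the input string is a placeholder, and the dataset and batching options are omitted:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-16384-mediasum", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-16384-mediasum", trust_remote_code=True)

long_document = "Replace by the transcript to summarize."  # placeholder input
inputs = tokenizer(long_document, truncation=True, max_length=16384, return_tensors="pt")

summary_ids = model.generate(
    **inputs,
    num_beams=5,
    length_penalty=2.0,
    max_length=128,
    min_length=3,
    early_stopping=True,
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```
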
### Framework versions

- Transformers 4.18.0
- Pytorch 1.10.1+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
all_results.json ADDED
@@ -0,0 +1,8 @@
1
+ {
2
+ "epoch": 1.0,
3
+ "train_loss": 1.5347596183515975,
4
+ "train_runtime": 69651.7637,
5
+ "train_samples": 443596,
6
+ "train_samples_per_second": 6.369,
7
+ "train_steps_per_second": 0.199
8
+ }
config.json ADDED
@@ -0,0 +1,94 @@
1
+ {
2
+ "_name_or_path": "ccdv/lsg-bart-base-16384-mediasum",
3
+ "activation_dropout": 0.1,
4
+ "activation_function": "gelu",
5
+ "adaptive": true,
6
+ "add_bias_logits": false,
7
+ "add_final_layer_norm": false,
8
+ "architectures": [
9
+ "LSGBartForConditionalGeneration"
10
+ ],
11
+ "attention_dropout": 0.1,
12
+ "auto_map": {
13
+ "AutoConfig": "modeling_lsg_bart.LSGBartConfig",
14
+ "AutoModel": "modeling_lsg_bart.LSGBartModel",
15
+ "AutoModelForCausalLM": "modeling_lsg_bart.LSGBartForCausalLM",
16
+ "AutoModelForQuestionAnswering": "modeling_lsg_bart.LSGBartForQuestionAnswering",
17
+ "AutoModelForSeq2SeqLM": "modeling_lsg_bart.LSGBartForConditionalGeneration",
18
+ "AutoModelForSequenceClassification": "modeling_lsg_bart.LSGBartForSequenceClassification"
19
+ },
20
+ "base_model_prefix": "lsg",
21
+ "block_size": 256,
22
+ "bos_token_id": 0,
23
+ "classif_dropout": 0.1,
24
+ "classifier_dropout": 0.0,
25
+ "d_model": 768,
26
+ "decoder_attention_heads": 12,
27
+ "decoder_ffn_dim": 3072,
28
+ "decoder_layerdrop": 0.0,
29
+ "decoder_layers": 6,
30
+ "decoder_start_token_id": 2,
31
+ "dropout": 0.1,
32
+ "early_stopping": true,
33
+ "encoder_attention_heads": 12,
34
+ "encoder_ffn_dim": 3072,
35
+ "encoder_layerdrop": 0.0,
36
+ "encoder_layers": 6,
37
+ "eos_token_id": 2,
38
+ "forced_bos_token_id": 0,
39
+ "forced_eos_token_id": 2,
40
+ "gradient_checkpointing": false,
41
+ "id2label": {
42
+ "0": "LABEL_0",
43
+ "1": "LABEL_1",
44
+ "2": "LABEL_2"
45
+ },
46
+ "init_std": 0.02,
47
+ "is_encoder_decoder": true,
48
+ "label2id": {
49
+ "LABEL_0": 0,
50
+ "LABEL_1": 1,
51
+ "LABEL_2": 2
52
+ },
53
+ "lsh_num_pre_rounds": 1,
54
+ "mask_first_token": false,
55
+ "max_position_embeddings": 16384,
56
+ "model_type": "bart",
57
+ "no_repeat_ngram_size": 3,
58
+ "normalize_before": false,
59
+ "normalize_embedding": true,
60
+ "num_beams": 4,
61
+ "num_global_tokens": 64,
62
+ "num_hidden_layers": 6,
63
+ "pad_token_id": 1,
64
+ "pass_global_tokens_to_decoder": true,
65
+ "pool_with_global": true,
66
+ "scale_embedding": false,
67
+ "sparse_block_size": 0,
68
+ "sparsity_factor": 4,
69
+ "sparsity_type": "none",
70
+ "task_specific_params": {
71
+ "summarization": {
72
+ "length_penalty": 1.0,
73
+ "max_length": 128,
74
+ "min_length": 12,
75
+ "num_beams": 4
76
+ },
77
+ "summarization_cnn": {
78
+ "length_penalty": 2.0,
79
+ "max_length": 142,
80
+ "min_length": 56,
81
+ "num_beams": 4
82
+ },
83
+ "summarization_xsum": {
84
+ "length_penalty": 1.0,
85
+ "max_length": 62,
86
+ "min_length": 11,
87
+ "num_beams": 6
88
+ }
89
+ },
90
+ "torch_dtype": "float32",
91
+ "transformers_version": "4.19.2",
92
+ "use_cache": true,
93
+ "vocab_size": 50265
94
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
modeling_lsg_bart.py ADDED
@@ -0,0 +1,1122 @@
1
+ from logging import warn
2
+ import torch
3
+ from transformers.models.bart.modeling_bart import *
4
+ from transformers.models.bart.modeling_bart import _expand_mask
5
+ import torch.nn as nn
6
+ from torch.nn import BCEWithLogitsLoss
7
+ import sys
8
+
9
+ AUTO_MAP = {
10
+ "AutoModel": "modeling_lsg_bart.LSGBartModel",
11
+ "AutoModelForCausalLM": "modeling_lsg_bart.LSGBartForCausalLM",
12
+ "AutoModelForQuestionAnswering": "modeling_lsg_bart.LSGBartForQuestionAnswering",
13
+ "AutoModelForSequenceClassification": "modeling_lsg_bart.LSGBartForSequenceClassification",
14
+ "AutoModelForSeq2SeqLM": "modeling_lsg_bart.LSGBartForConditionalGeneration"
15
+ }
16
+
17
+ class LSGBartConfig(BartConfig):
18
+ """
19
+ This class overrides :class:`~transformers.BartConfig`. Please check the superclass for the appropriate
20
+ documentation alongside usage examples.
21
+ """
22
+
23
+ base_model_prefix = "lsg"
24
+ model_type = "bart"
25
+ keys_to_ignore_at_inference = ["past_key_values"]
26
+ attribute_map = {"num_attention_heads": "encoder_attention_heads", "hidden_size": "d_model"}
27
+
28
+ def __init__(
29
+ self,
30
+ adaptive=True,
31
+ base_model_prefix="lsg",
32
+ block_size=128,
33
+ lsh_num_pre_rounds=1,
34
+ mask_first_token=False,
35
+ num_global_tokens=1,
36
+ pass_global_tokens_to_decoder=True,
37
+ pool_with_global=True,
38
+ sparse_block_size=128,
39
+ sparsity_factor=2,
40
+ sparsity_type="norm",
41
+ **kwargs
42
+ ):
43
+ """Constructs LSGConfig."""
44
+ super().__init__(**kwargs)
45
+
46
+ self.adaptive = adaptive
47
+ self.auto_map = AUTO_MAP
48
+ self.base_model_prefix = base_model_prefix
49
+ self.block_size = block_size
50
+ self.lsh_num_pre_rounds = lsh_num_pre_rounds
51
+ self.mask_first_token = mask_first_token
52
+ self.num_global_tokens = num_global_tokens
53
+ self.pass_global_tokens_to_decoder = pass_global_tokens_to_decoder
54
+ self.pool_with_global = pool_with_global
55
+ self.sparse_block_size = sparse_block_size
56
+ self.sparsity_factor = sparsity_factor
57
+ self.sparsity_type = sparsity_type
58
+
59
+ if sparsity_type not in [None, "none", "norm", "lsh", "pooling", "stride", "block_stride"]:
60
+ logger.warning(
61
+ "[WARNING CONFIG]: sparsity_mode not in [None, 'none', 'norm', 'lsh', 'pooling', 'stride', 'block_stride'], setting sparsity_type=None, computation will skip sparse attention")
62
+ self.sparsity_type = None
63
+
64
+ if self.sparsity_type in ["stride", "block_stride"]:
65
+ if self.sparsity_factor > self.encoder_attention_heads:
66
+ logger.warning(
67
+ "[WARNING CONFIG]: sparsity_factor > encoder_attention_heads is not recommended for stride/block_stride sparsity"
68
+ )
69
+
70
+ if self.num_global_tokens < 1:
71
+ logger.warning(
72
+ "[WARNING CONFIG]: num_global_tokens < 1 is not compatible, setting num_global_tokens=1"
73
+ )
74
+ self.num_global_tokens = 1
75
+ elif self.num_global_tokens > 512:
76
+ logger.warning(
77
+ "[WARNING CONFIG]: num_global_tokens > 512 is not compatible, setting num_global_tokens=512"
78
+ )
79
+ self.num_global_tokens = 512
80
+
81
+ if self.sparsity_factor > 0:
82
+ assert self.block_size % self.sparsity_factor == 0, "[ERROR CONFIG]: block_size must be divisible by sparsity_factor"
83
+ assert self.block_size//self.sparsity_factor >= 1, "[ERROR CONFIG]: make sure block_size >= sparsity_factor"
84
+
85
+
86
+ class BaseSelfAttention(nn.Module):
87
+
88
+ def __init__(
89
+ self,
90
+ embed_dim,
91
+ num_heads,
92
+ dropout=0.0,
93
+ is_decoder=False,
94
+ bias=True,
95
+ ):
96
+
97
+ super().__init__()
98
+ self.embed_dim = embed_dim
99
+ self.num_heads = num_heads
100
+ self.dropout = dropout
101
+ self.head_dim = embed_dim // num_heads
102
+
103
+ if (self.head_dim * num_heads) != self.embed_dim:
104
+ raise ValueError(
105
+ f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim}"
106
+ f" and `num_heads`: {num_heads})."
107
+ )
108
+ self.scaling = self.head_dim ** -0.5
109
+ self.is_decoder = is_decoder
110
+
111
+ self.k_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
112
+ self.v_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
113
+ self.q_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
114
+ self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
115
+
116
+ def transpose_for_scores(self, x):
117
+ new_x_shape = x.size()[:-1] + (
118
+ self.num_heads,
119
+ self.head_dim,
120
+ )
121
+ x = x.view(*new_x_shape)
122
+ return x.permute(0, 2, 1, 3)
123
+
124
+ def reshape_output(self, context_layer):
125
+ context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
126
+ new_context_layer_shape = context_layer.size()[:-2] + (self.embed_dim,)
127
+ return context_layer.view(*new_context_layer_shape)
128
+
129
+ def project_QKV(self, hidden_states):
130
+
131
+ query_layer = self.transpose_for_scores(self.q_proj(hidden_states))
132
+ key_layer = self.transpose_for_scores(self.k_proj(hidden_states))
133
+ value_layer = self.transpose_for_scores(self.v_proj(hidden_states))
134
+ return query_layer, key_layer, value_layer
135
+
136
+
137
+ class BaseAttentionProduct(nn.Module):
138
+
139
+ def __init__(self, config):
140
+ """
141
+ Compute attention: softmax(Q @ K.T) @ V
142
+ """
143
+ super().__init__()
144
+ self.dropout = nn.Dropout(config.attention_dropout)
145
+
146
+ def forward(self, query_layer, key_layer, value_layer, attention_mask=None):
147
+
148
+ d = query_layer.shape[-1]
149
+
150
+ # Take the dot product between "query" and "key" to get the raw attention scores.
151
+ attention_scores = query_layer @ key_layer.transpose(-1, -2) / math.sqrt(d)
152
+
153
+ del query_layer
154
+ del key_layer
155
+
156
+ if attention_mask is not None:
157
+ # Apply the attention mask (precomputed for all layers in the forward() function)
158
+ attention_scores = attention_scores + attention_mask
159
+ del attention_mask
160
+
161
+ # Normalize the attention scores to probabilities.
162
+ attention_probs = nn.Softmax(dim=-1)(attention_scores)
163
+
164
+ # This is actually dropping out entire tokens to attend to, which might
165
+ # seem a bit unusual, but is taken from the original Transformer paper.
166
+ context_layer = self.dropout(attention_probs) @ value_layer
167
+
168
+ return context_layer
169
+
170
+
171
+ class LSGAttentionProduct(nn.Module):
172
+
173
+ def __init__(self, config, block_size=None, sparse_block_size=None, sparsity_factor=4):
174
+ """
175
+ Compute block or overlapping blocks attention products
176
+ """
177
+ super().__init__()
178
+
179
+ self.block_size = block_size
180
+ self.sparse_block_size = sparse_block_size
181
+ self.sparsity_factor = sparsity_factor
182
+
183
+ if self.block_size is None:
184
+ self.block_size = config.block_size
185
+
186
+ if self.sparse_block_size is None:
187
+ self.sparse_block_size = config.sparse_block_size
188
+
189
+ # Shape of blocks
190
+ self.local_shapes = (self.block_size*3, self.block_size)
191
+ if self.sparse_block_size and self.sparsity_factor > 0:
192
+ self.sparse_shapes = (self.sparse_block_size*3, self.block_size//self.sparsity_factor)
193
+
194
+ self.attention = BaseAttentionProduct(config)
195
+
196
+ def build_lsg_inputs(self, hidden_states, sparse_hidden_states, global_hidden_states, is_attn_mask=False):
197
+
198
+ # Build local tokens
199
+ local_hidden_states = self.reshape_to_local_block(hidden_states, is_attn_mask)
200
+ del hidden_states
201
+
202
+ # Build sparse tokens
203
+ if sparse_hidden_states is not None:
204
+ sparse_hidden_states = self.reshape_to_sparse_block(sparse_hidden_states, is_attn_mask)
205
+
206
+ return self.cat_global_sparse_local_tokens(global_hidden_states, sparse_hidden_states, local_hidden_states)
207
+
208
+ def forward(
209
+ self,
210
+ query_layer,
211
+ key_layer,
212
+ value_layer,
213
+ attention_mask=None,
214
+ sparse_key=None,
215
+ sparse_value=None,
216
+ sparse_mask=None,
217
+ global_key=None,
218
+ global_value=None,
219
+ global_mask=None
220
+ ):
221
+
222
+ # Input batch, heads, length, hidden_size
223
+ n, h, t, d = query_layer.size()
224
+ n_blocks = t // self.block_size
225
+ assert t % self.block_size == 0
226
+
227
+ key_layer = self.build_lsg_inputs(
228
+ key_layer,
229
+ sparse_key,
230
+ global_key
231
+ )
232
+ del sparse_key
233
+ del global_key
234
+
235
+ value_layer = self.build_lsg_inputs(
236
+ value_layer,
237
+ sparse_value,
238
+ global_value
239
+ )
240
+ del sparse_value
241
+ del global_value
242
+
243
+ attention_mask = self.build_lsg_inputs(
244
+ attention_mask,
245
+ sparse_mask,
246
+ global_mask.transpose(-1, -2),
247
+ is_attn_mask=True
248
+ ).transpose(-1, -2)
249
+ del sparse_mask
250
+ del global_mask
251
+
252
+ # expect (..., t, d) shape
253
+ # Compute attention
254
+ context_layer = self.attention(
255
+ query_layer=self.chunk(query_layer, n_blocks),
256
+ key_layer=key_layer,
257
+ value_layer=value_layer,
258
+ attention_mask=attention_mask
259
+ )
260
+
261
+ return context_layer.reshape(n, h, -1, d)
262
+
263
+ def reshape_to_local_block(self, hidden_states, is_attn_mask=False):
264
+
265
+ size, step = self.local_shapes
266
+ s = (size - step) // 2
267
+
268
+ # Pad before block reshaping
269
+ if is_attn_mask:
270
+ pad_value = -10000
271
+ hidden_states = hidden_states.transpose(-1, -2)
272
+ else:
273
+ pad_value = 0
274
+
275
+ hidden_states = torch.nn.functional.pad(
276
+ hidden_states.transpose(-1, -2),
277
+ pad=(s, s),
278
+ value=pad_value
279
+ ).transpose(-1, -2)
280
+
281
+ # Make blocks
282
+ hidden_states = hidden_states.unfold(-2, size=size, step=step).transpose(-1, -2)
283
+
284
+ return hidden_states
285
+
286
+ def reshape_to_sparse_block(self, hidden_states, is_attn_mask=False):
287
+
288
+ size, step = self.sparse_shapes
289
+
290
+ # In case of odd case
291
+ odd_offset = (step % 2)
292
+
293
+ # n, h, t, d*2 + 1
294
+ size = size*2
295
+ s = (size - step) // 2 + odd_offset
296
+
297
+ # Pad before block reshaping
298
+ if is_attn_mask:
299
+ pad_value = -10000
300
+ hidden_states = hidden_states.transpose(-1, -2)
301
+ else:
302
+ pad_value = 0
303
+
304
+ hidden_states = torch.nn.functional.pad(
305
+ hidden_states.transpose(-1, -2),
306
+ pad=(s, s),
307
+ value=pad_value
308
+ ).transpose(-1, -2)
309
+
310
+ # Make blocks
311
+ hidden_states = hidden_states.unfold(-2, size=size, step=step).transpose(-1, -2)
312
+
313
+ # Fix case where block_size == sparsify_factor
314
+ if odd_offset:
315
+ hidden_states = hidden_states[..., :-1, :, :]
316
+
317
+ # Indexes for selection
318
+ u = (size - self.block_size * 3 // self.sparsity_factor) // 2 + odd_offset
319
+ s = self.sparse_block_size
320
+
321
+ u_ = u + odd_offset
322
+ return torch.cat([hidden_states[..., u-s:u, :], hidden_states[..., -u_:-u_+s, :]], dim=-2)
323
+
324
+ def cat_global_sparse_local_tokens(self, x_global, x_sparse=None, x_local=None, dim=-2):
325
+
326
+ n, h, b, t, d = x_local.size()
327
+ x_global = x_global.unsqueeze(-3).expand(-1, -1, b, -1, -1)
328
+ if x_sparse is not None:
329
+ return torch.cat([x_global, x_sparse, x_local], dim=dim)
330
+ return torch.cat([x_global, x_local], dim=dim)
331
+
332
+ def chunk(self, x, n_blocks):
333
+
334
+ t, d = x.size()[-2:]
335
+ return x.reshape(*x.size()[:-2], n_blocks, -1, d)
336
+
337
+
338
+ class LSGBartEncoderAttention(BaseSelfAttention):
339
+ '''
340
+ Compute local attention with overlapping blocks
341
+ Use global attention for tokens with highest norm
342
+ '''
343
+ def __init__(
344
+ self,
345
+ config,
346
+ embed_dim,
347
+ num_heads,
348
+ dropout
349
+ ):
350
+
351
+ super().__init__(embed_dim, num_heads, dropout)
352
+
353
+ self.block_size = config.block_size
354
+ self.sparse_block_size = config.sparse_block_size
355
+ self.num_global_tokens = config.num_global_tokens
356
+ self.sparsity_factor = config.sparsity_factor
357
+
358
+ self.attention = LSGAttentionProduct(
359
+ config,
360
+ block_size=config.block_size,
361
+ sparse_block_size=config.sparse_block_size,
362
+ sparsity_factor=self.sparsity_factor,
363
+ )
364
+
365
+ self.full_attention = BaseAttentionProduct(config)
366
+
367
+ sparse_functions = {
368
+ "norm": self.get_sparse_tokens_with_norm,
369
+ "pooling": self.get_sparse_tokens_with_pooling,
370
+ "lsh": self.get_sparse_tokens_with_lsh,
371
+ "stride": self.get_sparse_tokens_with_stride,
372
+ "block_stride": self.get_sparse_tokens_with_block_stride,
373
+ }
374
+
375
+ self.sparsity_type = config.sparsity_type
376
+ self.get_sparse_elements = sparse_functions.get(self.sparsity_type, lambda x, y, z: (None, None, None))
377
+
378
+ if config.sparsity_type == "lsh":
379
+ self.lsh_num_pre_rounds = config.lsh_num_pre_rounds
380
+
381
+ def get_sparse_tokens_with_norm(self, keys, values, mask):
382
+
383
+ if self.sparsity_factor == 1:
384
+ return keys, values, mask.expand(-1, keys.size()[1], -1, -1)
385
+
386
+ with torch.no_grad():
387
+
388
+ block_size = min(self.block_size, self.sparse_block_size)
389
+ key_norm = keys.detach().norm(dim=-1, keepdim=True)
390
+ key_norm = key_norm * ~mask.transpose(-1, -2).bool()
391
+ key_norm = self.chunk(key_norm, block_size)
392
+
393
+ n, h, b, t, d = key_norm.size()
394
+
395
+ idx = key_norm.argsort(dim=-2)
396
+ del key_norm
397
+ idx += (torch.arange(b, device=keys.device)*t).reshape(1, 1, b, 1, 1)
398
+
399
+ split = (t - block_size // self.sparsity_factor, block_size // self.sparsity_factor)
400
+ sparse_idx = idx.split(split, -2)[-1].reshape(n, h, -1, 1)
401
+
402
+ d = keys.size()[-1]
403
+ keys = keys.gather(dim=-2, index=sparse_idx.expand(-1, -1, -1, d))
404
+ values = values.gather(dim=-2, index=sparse_idx.expand(-1, -1, -1, d))
405
+ mask = mask.expand(-1, h, -1, -1).transpose(-1, -2).gather(dim=-2, index=sparse_idx).transpose(-1, -2)
406
+
407
+ return keys, values, mask
408
+
409
+ def get_sparse_tokens_with_pooling(self, keys, values, mask):
410
+
411
+ if self.sparsity_factor == 1:
412
+ return keys, values, mask.expand(-1, keys.size()[1], -1, -1)
413
+
414
+ keys = self.chunk(keys, self.sparsity_factor)
415
+ values = self.chunk(values, self.sparsity_factor)
416
+
417
+ n, h, b, t, d = keys.size()
418
+ mask = mask.reshape(n, 1, b, 1, t)
419
+ mask = ~mask.transpose(-1, -2).bool()
420
+
421
+ keys = keys * mask
422
+ values = values * mask
423
+
424
+ mask = mask.sum(dim=-2)
425
+ keys = keys.sum(dim=-2) / (mask + 1e-6)
426
+ values = values.sum(dim=-2) / (mask + 1e-6)
427
+
428
+ mask = - (1. - mask.clamp(0, 1)) * 1e4
429
+ return keys.reshape(n, h, -1, d), values.reshape(n, h, -1, d), mask.expand(-1, h, -1, -1).transpose(-1, -2)
430
+
431
+ def get_sparse_tokens_with_stride(self, keys, values, mask):
432
+
433
+ if self.sparsity_factor == 1:
434
+ return keys, values, mask.expand(-1, keys.size()[1], -1, -1)
435
+
436
+ n, h, t, d = keys.size()
437
+ sparse_idx = torch.arange(t // self.sparsity_factor, device=keys.device) * self.sparsity_factor
438
+ sparse_idx = sparse_idx.reshape(1, 1, -1, 1) + (torch.arange(h, device=keys.device) % self.sparsity_factor).reshape(1, h, 1, 1)
439
+ sparse_idx = sparse_idx.expand(n, h, -1, 1)
440
+
441
+ keys = keys.gather(dim=-2, index=sparse_idx.expand(-1, -1, -1, d))
442
+ values = values.gather(dim=-2, index=sparse_idx.expand(-1, -1, -1, d))
443
+ mask = mask.expand(-1, h, -1, -1).transpose(-1, -2).gather(dim=-2, index=sparse_idx).transpose(-1, -2)
444
+
445
+ return keys, values, mask
446
+
447
+ def get_sparse_tokens_with_block_stride(self, keys, values, mask):
448
+
449
+ if self.sparsity_factor == 1:
450
+ return keys, values, mask.expand(-1, keys.size()[1], -1, -1)
451
+
452
+ n, h, t, d = keys.size()
453
+
454
+ t, b = self.block_size, t // self.block_size
455
+ sparse_idx = torch.arange(t // self.sparsity_factor, device=keys.device)
456
+ sparse_idx = sparse_idx.reshape(1, 1, 1, -1, 1) + torch.arange(h, device=keys.device).reshape(1, h, 1, 1, 1) * (t // self.sparsity_factor)
457
+ sparse_idx = (sparse_idx % t)
458
+ sparse_idx = sparse_idx + torch.arange(b, device=keys.device).reshape(1, 1, -1, 1, 1) * t
459
+ sparse_idx = sparse_idx.reshape(1, h, -1, 1).expand(n, h, -1, 1)
460
+
461
+ keys = keys.gather(dim=-2, index=sparse_idx.expand(-1, -1, -1, d))
462
+ values = values.gather(dim=-2, index=sparse_idx.expand(-1, -1, -1, d))
463
+ mask = mask.expand(-1, h, -1, -1).transpose(-1, -2).gather(dim=-2, index=sparse_idx).transpose(-1, -2)
464
+
465
+ return keys, values, mask
466
+
467
+ def get_sparse_tokens_with_lsh(self, keys, values, mask):
468
+
469
+ if self.sparsity_factor == 1:
470
+ return keys, values, mask.expand(-1, keys.size()[1], -1, -1)
471
+
472
+ block_size = min(self.block_size, self.sparse_block_size)
473
+ keys = self.chunk(keys, block_size)
474
+ values = self.chunk(values, block_size)
475
+
476
+ n, h, b, t, d = keys.size()
477
+ mask = mask.reshape(n, 1, b, 1, t)
478
+ mask = ~mask.transpose(-1, -2).bool()
479
+
480
+ keys = keys * mask
481
+ values = values * mask
482
+ mask = mask.expand(-1, h, -1, -1, -1).float()
483
+
484
+ extra_factor = 1
485
+
486
+ for _ in range(self.lsh_num_pre_rounds):
487
+ keys, values, mask = self.lsh_round(keys, values, mask, t*extra_factor)
488
+
489
+ keys, values, mask = self.lsh_round(keys, values, mask, t//self.sparsity_factor)
490
+ keys /= mask + 1e-8
491
+ values /= mask + 1e-8
492
+
493
+ mask = -10000 * (1. - mask.clamp(0, 1))
494
+
495
+ return keys.reshape(n, h, -1, d), values.reshape(n, h, -1, d), mask.transpose(-1, -2).reshape(n, h, 1, -1)
496
+
497
+ def lsh_round(self, keys, values, mask, output_size):
498
+
499
+ with torch.no_grad():
500
+
501
+ n_hashes = output_size // 2
502
+ n, h, b, t, d = keys.size()
503
+ binary_mask = mask.clamp(0, 1)
504
+
505
+ indexes = (torch.nn.functional.normalize(keys, dim=-1) * binary_mask) @ torch.randn(1, h, 1, d, n_hashes, device=keys.device)
506
+ indexes = torch.cat([indexes, -indexes], dim=-1).argmax(dim=-1, keepdim=True)
507
+
508
+ n, h, b, t, d = keys.size()
509
+
510
+ x_ = torch.zeros(n, h, b, output_size, d, device=keys.device)
511
+ mask_ = torch.zeros(n, h, b, output_size, 1, device=keys.device)
512
+ keys = torch.scatter_add(x_, dim=-2, index=indexes.expand(-1, -1, -1, -1, d), src=keys)
513
+ values = torch.scatter_add(x_, dim=-2, index=indexes.expand(-1, -1, -1, -1, d), src=values)
514
+ mask = torch.scatter_add(mask_, dim=-2, index=indexes, src=mask)
515
+
516
+ return keys[..., :output_size, :], values[..., :output_size, :], mask[..., :output_size, :]
517
+
518
+ def forward(
519
+ self,
520
+ hidden_states,
521
+ attention_mask=None,
522
+ layer_head_mask=None,
523
+ output_attentions=False
524
+ ):
525
+
526
+ query_layer, key_layer, value_layer = self.project_QKV(hidden_states)
527
+ outputs = self.not_causal_forward(
528
+ query_layer,
529
+ key_layer,
530
+ value_layer,
531
+ attention_mask=attention_mask[:, :, :1, :],
532
+ head_mask=layer_head_mask,
533
+ output_attentions=output_attentions
534
+ )
535
+
536
+ return self.out_proj(outputs), None, None
537
+
538
+ def not_causal_forward(
539
+ self,
540
+ query_layer,
541
+ key_layer,
542
+ value_layer,
543
+ attention_mask=None,
544
+ head_mask=None,
545
+ output_attentions=False,
546
+ ):
547
+
548
+ n, h, t, d = query_layer.size()
549
+
550
+ # Cat global mask
551
+ attention_mask = torch.nn.functional.pad(attention_mask, (self.num_global_tokens, 0), value=0)
552
+
553
+ # Use normal attention if local attention covers every token
554
+ if t <= 2 * self.block_size + self.num_global_tokens:
555
+ context_layer = self.full_attention(
556
+ query_layer=query_layer,
557
+ key_layer=key_layer,
558
+ value_layer=value_layer,
559
+ attention_mask=attention_mask
560
+ )
561
+
562
+ if head_mask is not None:
563
+ context_layer = context_layer * head_mask[:, :, :1, :1]
564
+ return self.reshape_output(context_layer)
565
+
566
+ # Split input into global tokens and other tokens
567
+ split = (self.num_global_tokens, t - self.num_global_tokens)
568
+ global_query, query_layer = query_layer.split(split, dim=-2)
569
+
570
+ # Get global_attention
571
+ bos = self.full_attention(
572
+ query_layer=global_query,
573
+ key_layer=key_layer,
574
+ value_layer=value_layer,
575
+ attention_mask=attention_mask
576
+ )
577
+
578
+ # Split K Q M on global and non global
579
+ global_key, key_layer = key_layer.split(split, dim=-2)
580
+ global_value, value_layer = value_layer.split(split, dim=-2)
581
+ global_mask, attention_mask = attention_mask.split(split, dim=-1)
582
+
583
+ n, h, t, d = key_layer.size()
584
+
585
+ # Get sparse idx
586
+ sparse_key, sparse_value, sparse_mask = (None, None, None)
587
+
588
+ if self.sparse_block_size and self.sparsity_factor > 0:
589
+ sparse_key, sparse_value, sparse_mask = self.get_sparse_elements(key_layer, value_layer, attention_mask)
590
+
591
+ # Expand masks on heads
592
+ attention_mask = attention_mask.expand(-1, h, -1, -1)
593
+ global_mask = global_mask.expand(-1, h, -1, -1)
594
+
595
+ # Compute dot product attention
596
+ context_layer = self.attention(
597
+ query_layer,
598
+ key_layer,
599
+ value_layer,
600
+ attention_mask,
601
+ sparse_key=sparse_key,
602
+ sparse_value=sparse_value,
603
+ sparse_mask=sparse_mask,
604
+ global_key=global_key,
605
+ global_value=global_value,
606
+ global_mask=global_mask
607
+ )
608
+
609
+ # Merge global and local-sparse tokens
610
+ context_layer = torch.cat([bos, context_layer], dim=-2)
611
+ if head_mask is not None:
612
+ context_layer = context_layer * head_mask[:, :, :1, :1]
613
+ context_layer = self.reshape_output(context_layer)
614
+
615
+ return context_layer
616
+
617
+ def chunk(self, x, chunk_size):
618
+
619
+ n, h, t, d = x.size()
620
+ return x.reshape(n, h, -1, chunk_size, d)
621
+
622
+
623
+ class LSGBartEncoderLayer(BartEncoderLayer):
624
+
625
+ def __init__(self, config):
626
+
627
+ super().__init__(config)
628
+ self.self_attn = LSGBartEncoderAttention(
629
+ config=config,
630
+ embed_dim=self.embed_dim,
631
+ num_heads=config.encoder_attention_heads,
632
+ dropout=config.attention_dropout,
633
+ )
634
+
635
+
636
+ class LSGBartDecoderLayer(BartDecoderLayer):
637
+
638
+ def __init__(self, config):
639
+
640
+ super().__init__(config)
641
+
642
+
643
+ class LSGBartClassificationHead(BartClassificationHead):
644
+ """Head for sentence-level classification tasks."""
645
+
646
+ def __init__(
647
+ self,
648
+ input_dim,
649
+ inner_dim,
650
+ num_classes,
651
+ pooler_dropout,
652
+ ):
653
+
654
+ super().__init__(input_dim, inner_dim, num_classes, pooler_dropout)
655
+
656
+
657
+ class LSGBartPretrainedModel(BartPretrainedModel):
658
+
659
+ config_class = LSGBartConfig
660
+
661
+ def _set_gradient_checkpointing(self, module, value=False):
662
+
663
+ if isinstance(module, (BartDecoder, BartEncoder, LSGBartDecoder, LSGBartEncoder)):
664
+ module.gradient_checkpointing = value
665
+
666
+
667
+ class PretrainedLSGBartModel(LSGBartPretrainedModel):
668
+
669
+ def __init_subclass__(self):
670
+ warnings.warn(
671
+ "The class `PretrainedBartModel` has been deprecated, please use `LSGBartPretrainedModel` instead.",
672
+ FutureWarning,
673
+ )
674
+
675
+
676
+ class LSGBartEncoder(LSGBartPretrainedModel, BartEncoder):
677
+ """
678
+ Transformer encoder consisting of *config.encoder_layers* self attention layers. Each layer is a
679
+ :class:`BartEncoderLayer`.
680
+ Args:
681
+ config: BartConfig
682
+ embed_tokens (nn.Embedding): output embedding
683
+ """
684
+
685
+ def __init__(self, config, embed_tokens=None):
686
+
687
+ super().__init__(config)
688
+ self.dropout = config.dropout
689
+ self.layerdrop = config.encoder_layerdrop
690
+
691
+ embed_dim = config.d_model
692
+ self.padding_idx = config.pad_token_id
693
+ self.max_source_positions = config.max_position_embeddings
694
+ self.embed_scale = math.sqrt(embed_dim) if config.scale_embedding else 1.0
695
+
696
+ if embed_tokens is not None:
697
+ self.embed_tokens = embed_tokens
698
+ else:
699
+ self.embed_tokens = nn.Embedding(config.vocab_size, embed_dim, self.padding_idx)
700
+
701
+ self.embed_positions = BartLearnedPositionalEmbedding(
702
+ config.max_position_embeddings,
703
+ embed_dim,
704
+ )
705
+ self.layers = nn.ModuleList([LSGBartEncoderLayer(config) for _ in range(config.encoder_layers)])
706
+ self.layernorm_embedding = nn.LayerNorm(embed_dim)
707
+
708
+ #
709
+ assert hasattr(config, "num_global_tokens")
710
+ self.num_global_tokens = config.num_global_tokens
711
+ self.pad_idx = config.pad_token_id
712
+
713
+ assert hasattr(config, "block_size") and hasattr(config, "adaptive")
714
+ self.block_size = config.block_size
715
+ self.adaptive = config.adaptive
716
+ self.mask_first_token = config.mask_first_token
717
+ self.pool_with_global = config.pool_with_global
718
+ self.pass_global_tokens_to_decoder = config.pass_global_tokens_to_decoder
719
+
720
+ self.global_embeddings = nn.Embedding(512, embedding_dim=config.d_model)
721
+
722
+ self.gradient_checkpointing = False
723
+
724
+ # Initialize weights and apply final processing
725
+ self.post_init()
726
+
727
+ def forward(self,
728
+ input_ids=None,
729
+ attention_mask=None,
730
+ head_mask=None,
731
+ inputs_embeds=None,
732
+ output_attentions=None,
733
+ output_hidden_states=None,
734
+ return_dict=None
735
+ ):
736
+
737
+
738
+ inputs_ = input_ids if input_ids is not None else inputs_embeds
739
+ n, t = inputs_.size()[:2]
740
+
741
+ if attention_mask is None:
742
+ attention_mask = torch.ones(n, t, device=inputs_.device)
743
+ if self.mask_first_token:
744
+ attention_mask[:, 0] = 0
745
+
746
+ b = self.block_size * 2
747
+ pad = t % self.block_size
748
+
749
+ # Check if t is multiple of block_size and pad
750
+ if self.adaptive and t > b and pad > 0:
751
+ pad_length = self.block_size - pad
752
+ if input_ids is not None:
753
+ input_ids = torch.nn.functional.pad(input_ids, (0, pad_length), value=self.pad_idx)
754
+ else:
755
+ inputs_embeds = torch.nn.functional.pad(inputs_embeds.transpose(-1, -2), (0, pad_length), value=0.).transpose(-1, -2)
756
+ attention_mask = torch.nn.functional.pad(attention_mask, (0, pad_length), value=0)
757
+
758
+ n, t_ = attention_mask.size()
759
+
760
+ encoder_outputs = self.forward_with_adaptive(
761
+ input_ids=input_ids,
762
+ attention_mask=attention_mask,
763
+ head_mask=head_mask,
764
+ inputs_embeds=inputs_embeds,
765
+ output_attentions=output_attentions,
766
+ output_hidden_states=output_hidden_states,
767
+ return_dict=return_dict,
768
+ )
769
+
770
+ context = encoder_outputs[0]
771
+ diff = t - t_
772
+
773
+ if self.pass_global_tokens_to_decoder:
774
+ offset = self.num_global_tokens
775
+ else:
776
+ if self.pool_with_global:
777
+ context[:, self.num_global_tokens] = context[:, 0]
778
+ context = context[..., self.num_global_tokens:, :]
779
+ offset = 0
780
+
781
+ # Adapt sequence to initial shape
782
+ if diff < 0:
783
+ context = context[:, :t + offset]
784
+
785
+ if return_dict:
786
+ encoder_outputs.last_hidden_state = context
787
+ else:
788
+ encoder_outputs = (context, ) + encoder_outputs[1:]
789
+
790
+ return encoder_outputs
791
+
792
+ def forward_with_adaptive(
793
+ self,
794
+ input_ids=None,
795
+ attention_mask=None,
796
+ head_mask=None,
797
+ inputs_embeds=None,
798
+ output_attentions=None,
799
+ output_hidden_states=None,
800
+ return_dict=None,
801
+ ):
802
+
803
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
804
+ output_hidden_states = (
805
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
806
+ )
807
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
808
+
809
+ # retrieve input_ids and inputs_embeds
810
+ if input_ids is not None and inputs_embeds is not None:
811
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
812
+ elif input_ids is not None:
813
+ input_shape = input_ids.size()
814
+ input_ids = input_ids.view(-1, input_shape[-1])
815
+ elif inputs_embeds is not None:
816
+ input_shape = inputs_embeds.size()[:-1]
817
+ else:
818
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
819
+
820
+ if inputs_embeds is None:
821
+ inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
822
+
823
+ embed_pos = self.embed_positions(input_shape)
824
+ hidden_states = inputs_embeds + embed_pos
825
+
826
+ # Add global tokens
827
+ n, t, d = hidden_states.size()
828
+ global_idx = torch.arange(self.num_global_tokens, device=hidden_states.device).reshape(1, -1)
829
+ hidden_states = torch.cat([self.global_embeddings(global_idx).expand(n, -1, -1), hidden_states], dim=-2)
830
+
831
+ hidden_states = self.layernorm_embedding(hidden_states)
832
+ hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
833
+
834
+ # expand attention_mask
835
+ if attention_mask is not None:
836
+ # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
837
+ attention_mask = _expand_mask(attention_mask, inputs_embeds.dtype)
838
+
839
+ encoder_states = () if output_hidden_states else None
840
+ all_attentions = () if output_attentions else None
841
+
842
+ # check if head_mask has a correct number of layers specified if desired
843
+ if head_mask is not None:
844
+ if head_mask.size()[0] != (len(self.layers)):
845
+ raise ValueError(
846
+ f"The head_mask should be specified for {len(self.layers)} layers, but it is for {head_mask.size()[0]}."
847
+ )
848
+
849
+ for idx, encoder_layer in enumerate(self.layers):
850
+ if output_hidden_states:
851
+ encoder_states = encoder_states + (hidden_states,)
852
+ # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description)
853
+ dropout_probability = random.uniform(0, 1)
854
+ if self.training and (dropout_probability < self.layerdrop): # skip the layer
855
+ layer_outputs = (None, None)
856
+ else:
857
+ if self.gradient_checkpointing and self.training:
858
+
859
+ def create_custom_forward(module):
860
+ def custom_forward(*inputs):
861
+ return module(*inputs, output_attentions)
862
+
863
+ return custom_forward
864
+
865
+ layer_outputs = torch.utils.checkpoint.checkpoint(
866
+ create_custom_forward(encoder_layer),
867
+ hidden_states,
868
+ attention_mask,
869
+ (head_mask[idx] if head_mask is not None else None),
870
+ )
871
+ else:
872
+ layer_outputs = encoder_layer(
873
+ hidden_states,
874
+ attention_mask,
875
+ layer_head_mask=(head_mask[idx] if head_mask is not None else None),
876
+ output_attentions=output_attentions,
877
+ )
878
+
879
+ hidden_states = layer_outputs[0]
880
+
881
+ if output_attentions:
882
+ all_attentions = all_attentions + (layer_outputs[1],)
883
+
884
+ if output_hidden_states:
885
+ encoder_states = encoder_states + (hidden_states,)
886
+
887
+ if not return_dict:
888
+ return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None)
889
+ return BaseModelOutput(
890
+ last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions
891
+ )
892
+
893
+
894
+ class LSGBartDecoder(BartDecoder, LSGBartPretrainedModel):
895
+ """
896
+ Transformer decoder consisting of *config.decoder_layers* layers. Each layer is a :class:`LSGBartDecoderLayer`
897
+ Args:
898
+ config: BartConfig
899
+ embed_tokens (nn.Embedding): output embedding
900
+ """
901
+
902
+ def __init__(self, config, embed_tokens=None):
903
+
904
+ LSGBartPretrainedModel.__init__(self, config)
905
+
906
+ self.dropout = config.dropout
907
+ self.layerdrop = config.decoder_layerdrop
908
+ self.padding_idx = config.pad_token_id
909
+ self.max_target_positions = config.max_position_embeddings
910
+ self.embed_scale = math.sqrt(config.d_model) if config.scale_embedding else 1.0
911
+ self.adaptive = config.adaptive
912
+
913
+ if embed_tokens is not None:
914
+ self.embed_tokens = embed_tokens
915
+ else:
916
+ self.embed_tokens = nn.Embedding(config.vocab_size, config.d_model, self.padding_idx)
917
+
918
+ self.embed_positions = BartLearnedPositionalEmbedding(
919
+ config.max_position_embeddings,
920
+ config.d_model,
921
+ )
922
+ self.layers = nn.ModuleList([LSGBartDecoderLayer(config) for _ in range(config.decoder_layers)])
923
+ self.layernorm_embedding = nn.LayerNorm(config.d_model)
924
+
925
+ self.gradient_checkpointing = False
926
+
927
+ # Initialize weights and apply final processing
928
+ self.post_init()
929
+
930
+
931
+ class LSGBartModel(LSGBartPretrainedModel, BartModel):
932
+
933
+ def __init__(self, config):
934
+
935
+ LSGBartPretrainedModel.__init__(self, config)
936
+
937
+ padding_idx, vocab_size = config.pad_token_id, config.vocab_size
938
+ self.shared = nn.Embedding(vocab_size, config.d_model, padding_idx)
939
+
940
+ self.pass_global_tokens_to_decoder = config.pass_global_tokens_to_decoder
941
+ self.num_global_tokens = config.num_global_tokens
942
+
943
+ self.encoder = LSGBartEncoder(config, self.shared)
944
+ self.decoder = LSGBartDecoder(config, self.shared)
945
+
946
+ # Initialize weights and apply final processing
947
+ self.post_init()
948
+
949
+ def forward(
950
+ self,
951
+ input_ids=None,
952
+ attention_mask=None,
953
+ decoder_input_ids=None,
954
+ decoder_attention_mask=None,
955
+ head_mask=None,
956
+ decoder_head_mask=None,
957
+ cross_attn_head_mask=None,
958
+ encoder_outputs=None,
959
+ past_key_values=None,
960
+ inputs_embeds=None,
961
+ decoder_inputs_embeds=None,
962
+ use_cache=None,
963
+ output_attentions=None,
964
+ output_hidden_states=None,
965
+ return_dict=None,
966
+ ):
967
+
968
+ # different to other models, Bart automatically creates decoder_input_ids from
969
+ # input_ids if no decoder_input_ids are provided
970
+ if decoder_input_ids is None and decoder_inputs_embeds is None:
971
+ decoder_input_ids = shift_tokens_right(
972
+ input_ids, self.config.pad_token_id, self.config.decoder_start_token_id
973
+ )
974
+
975
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
976
+ output_hidden_states = (
977
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
978
+ )
979
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
980
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
981
+
982
+ if encoder_outputs is None:
983
+ encoder_outputs = self.encoder(
984
+ input_ids=input_ids,
985
+ attention_mask=attention_mask,
986
+ head_mask=head_mask,
987
+ inputs_embeds=inputs_embeds,
988
+ output_attentions=output_attentions,
989
+ output_hidden_states=output_hidden_states,
990
+ return_dict=return_dict,
991
+ )
992
+ # If the user passed a tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True
993
+ elif return_dict and not isinstance(encoder_outputs, BaseModelOutput):
994
+ encoder_outputs = BaseModelOutput(
995
+ last_hidden_state=encoder_outputs[0],
996
+ hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
997
+ attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
998
+ )
999
+
1000
+ # Pad mask for global tokens
1001
+ if self.pass_global_tokens_to_decoder and attention_mask is not None:
1002
+ attention_mask = torch.nn.functional.pad(attention_mask, pad=(self.num_global_tokens, 0), value=1)
1003
+
1004
+ # decoder outputs consists of (dec_features, past_key_value, dec_hidden, dec_attn)
1005
+ decoder_outputs = self.decoder(
1006
+ input_ids=decoder_input_ids,
1007
+ attention_mask=decoder_attention_mask,
1008
+ encoder_hidden_states=encoder_outputs[0],
1009
+ encoder_attention_mask=attention_mask,
1010
+ head_mask=decoder_head_mask,
1011
+ cross_attn_head_mask=cross_attn_head_mask,
1012
+ past_key_values=past_key_values,
1013
+ inputs_embeds=decoder_inputs_embeds,
1014
+ use_cache=use_cache,
1015
+ output_attentions=output_attentions,
1016
+ output_hidden_states=output_hidden_states,
1017
+ return_dict=return_dict,
1018
+ )
1019
+
1020
+ if not return_dict:
1021
+ return decoder_outputs + encoder_outputs
1022
+
1023
+ return Seq2SeqModelOutput(
1024
+ last_hidden_state=decoder_outputs.last_hidden_state,
1025
+ past_key_values=decoder_outputs.past_key_values,
1026
+ decoder_hidden_states=decoder_outputs.hidden_states,
1027
+ decoder_attentions=decoder_outputs.attentions,
1028
+ cross_attentions=decoder_outputs.cross_attentions,
1029
+ encoder_last_hidden_state=encoder_outputs.last_hidden_state,
1030
+ encoder_hidden_states=encoder_outputs.hidden_states,
1031
+ encoder_attentions=encoder_outputs.attentions,
1032
+ )
1033
+
1034
+
1035
+ class LSGBartForConditionalGeneration(BartForConditionalGeneration, LSGBartPretrainedModel):
1036
+
1037
+ base_model_prefix = "model"
1038
+ _keys_to_ignore_on_load_missing = [r"final_logits_bias", r"lm_head\.weight"]
1039
+
1040
+ def __init__(self, config):
1041
+
1042
+ LSGBartPretrainedModel.__init__(self, config)
1043
+ self.model = LSGBartModel(config)
1044
+ self.register_buffer("final_logits_bias", torch.zeros((1, self.model.shared.num_embeddings)))
1045
+ self.lm_head = nn.Linear(config.d_model, self.model.shared.num_embeddings, bias=False)
1046
+
1047
+ # Initialize weights and apply final processing
1048
+ self.post_init()
1049
+
1050
+
1051
+ class LSGBartForSequenceClassification(BartForSequenceClassification, LSGBartPretrainedModel):
1052
+
1053
+ def __init__(self, config: LSGBartConfig, **kwargs):
1054
+
1055
+ LSGBartPretrainedModel.__init__(self, config, **kwargs)
1056
+ self.model = LSGBartModel(config)
1057
+ self.classification_head = LSGBartClassificationHead(
1058
+ config.d_model,
1059
+ config.d_model,
1060
+ config.num_labels,
1061
+ config.classifier_dropout,
1062
+ )
1063
+ self.model._init_weights(self.classification_head.dense)
1064
+ self.model._init_weights(self.classification_head.out_proj)
1065
+
1066
+
1067
+ class LSGBartForQuestionAnswering(BartForQuestionAnswering, LSGBartPretrainedModel):
1068
+
1069
+ def __init__(self, config: LSGBartConfig):
1070
+
1071
+ LSGBartPretrainedModel.__init__(self, config)
1072
+
1073
+ config.num_labels = 2
1074
+ self.num_labels = config.num_labels
1075
+
1076
+ self.model = LSGBartModel(config)
1077
+ self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels)
1078
+
1079
+ self.model._init_weights(self.qa_outputs)
1080
+
1081
+
1082
+ class LSGBartDecoderWrapper(LSGBartPretrainedModel):
1083
+ """
1084
+ This wrapper class is a helper class to correctly load pretrained checkpoints when the causal language model is
1085
+ used in combination with the :class:`~transformers.EncoderDecoderModel` framework.
1086
+ """
1087
+
1088
+ def __init__(self, config: LSGBartConfig):
1089
+ super().__init__(config)
1090
+ self.decoder = LSGBartDecoder(config)
1091
+
1092
+ def forward(self, *args, **kwargs):
1093
+ return self.decoder(*args, **kwargs)
1094
+
1095
+
1096
+ class LSGBartForCausalLM(BartForCausalLM, LSGBartPretrainedModel):
1097
+
1098
+ def __init__(self, config: LSGBartConfig):
1099
+
1100
+ config = copy.deepcopy(config)
1101
+ config.is_decoder = True
1102
+ config.is_encoder_decoder = False
1103
+ LSGBartPretrainedModel.__init__(self, config)
1104
+ self.model = LSGBartDecoderWrapper(config)
1105
+
1106
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
1107
+
1108
+ # Initialize weights and apply final processing
1109
+ self.post_init()
1110
+
1111
+
1112
+ def str_to_class(classname):
1113
+ return getattr(sys.modules[__name__], classname)
1114
+
1115
+ # Register model in Auto API
1116
+ try:
1117
+ LSGBartConfig.register_for_auto_class()
1118
+ for key, value in AUTO_MAP.items():
1119
+ str_to_class(value.split(".")[-1]).register_for_auto_class(key)
1120
+ except:
1121
+ warn("AutoRegister isn't available, you'll have to manually copy modeling.py after .save_pretrained(...).")
1122
+ warn("Update to transformers >= 4.17.0 to fix.")
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:414b98e7d1f31950bf2737cfa78f72e63340ef5ac0be251272d4d9152e9e7f60
3
+ size 653914167
special_tokens_map.json ADDED
@@ -0,0 +1 @@
1
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
1
+ {"errors": "replace", "bos_token": "<s>", "eos_token": "</s>", "sep_token": "</s>", "cls_token": "<s>", "unk_token": "<unk>", "pad_token": "<pad>", "mask_token": "<mask>", "add_prefix_space": false, "trim_offsets": true, "model_max_length": 16384, "special_tokens_map_file": null, "name_or_path": "/data/ccondevaux/lsg/text-summarization/tmp_final/mediasum/lsg_local_16384", "tokenizer_class": "BartTokenizer"}
train_results.json ADDED
@@ -0,0 +1,8 @@
1
+ {
2
+ "epoch": 1.0,
3
+ "train_loss": 1.5347596183515975,
4
+ "train_runtime": 69651.7637,
5
+ "train_samples": 443596,
6
+ "train_samples_per_second": 6.369,
7
+ "train_steps_per_second": 0.199
8
+ }
trainer_state.json ADDED
@@ -0,0 +1,187 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 0.9999729483584162,
5
+ "global_step": 13862,
6
+ "is_hyper_param_search": false,
7
+ "is_local_process_zero": true,
8
+ "is_world_process_zero": true,
9
+ "log_history": [
10
+ {
11
+ "epoch": 0.04,
12
+ "learning_rate": 2.8839221341023793e-05,
13
+ "loss": 1.4339,
14
+ "step": 500
15
+ },
16
+ {
17
+ "epoch": 0.07,
18
+ "learning_rate": 5.767844268204759e-05,
19
+ "loss": 1.4587,
20
+ "step": 1000
21
+ },
22
+ {
23
+ "epoch": 0.11,
24
+ "learning_rate": 7.927535070140282e-05,
25
+ "loss": 1.5324,
26
+ "step": 1500
27
+ },
28
+ {
29
+ "epoch": 0.14,
30
+ "learning_rate": 7.606893787575151e-05,
31
+ "loss": 1.5695,
32
+ "step": 2000
33
+ },
34
+ {
35
+ "epoch": 0.18,
36
+ "learning_rate": 7.286252505010021e-05,
37
+ "loss": 1.5931,
38
+ "step": 2500
39
+ },
40
+ {
41
+ "epoch": 0.22,
42
+ "learning_rate": 6.96561122244489e-05,
43
+ "loss": 1.5994,
44
+ "step": 3000
45
+ },
46
+ {
47
+ "epoch": 0.25,
48
+ "learning_rate": 6.64496993987976e-05,
49
+ "loss": 1.6028,
50
+ "step": 3500
51
+ },
52
+ {
53
+ "epoch": 0.29,
54
+ "learning_rate": 6.32432865731463e-05,
55
+ "loss": 1.5965,
56
+ "step": 4000
57
+ },
58
+ {
59
+ "epoch": 0.32,
60
+ "learning_rate": 6.0036873747494996e-05,
61
+ "loss": 1.6002,
62
+ "step": 4500
63
+ },
64
+ {
65
+ "epoch": 0.36,
66
+ "learning_rate": 5.683046092184369e-05,
67
+ "loss": 1.6017,
68
+ "step": 5000
69
+ },
70
+ {
71
+ "epoch": 0.4,
72
+ "learning_rate": 5.362404809619239e-05,
73
+ "loss": 1.5735,
74
+ "step": 5500
75
+ },
76
+ {
77
+ "epoch": 0.43,
78
+ "learning_rate": 5.041763527054109e-05,
79
+ "loss": 1.5765,
80
+ "step": 6000
81
+ },
82
+ {
83
+ "epoch": 0.47,
84
+ "learning_rate": 4.7211222444889784e-05,
85
+ "loss": 1.5713,
86
+ "step": 6500
87
+ },
88
+ {
89
+ "epoch": 0.5,
90
+ "learning_rate": 4.400480961923849e-05,
91
+ "loss": 1.5619,
92
+ "step": 7000
93
+ },
94
+ {
95
+ "epoch": 0.54,
96
+ "learning_rate": 4.0798396793587175e-05,
97
+ "loss": 1.5509,
98
+ "step": 7500
99
+ },
100
+ {
101
+ "epoch": 0.58,
102
+ "learning_rate": 3.759198396793588e-05,
103
+ "loss": 1.5421,
104
+ "step": 8000
105
+ },
106
+ {
107
+ "epoch": 0.61,
108
+ "learning_rate": 3.438557114228457e-05,
109
+ "loss": 1.5299,
110
+ "step": 8500
111
+ },
112
+ {
113
+ "epoch": 0.65,
114
+ "learning_rate": 3.117915831663327e-05,
115
+ "loss": 1.5285,
116
+ "step": 9000
117
+ },
118
+ {
119
+ "epoch": 0.69,
120
+ "learning_rate": 2.7972745490981967e-05,
121
+ "loss": 1.5326,
122
+ "step": 9500
123
+ },
124
+ {
125
+ "epoch": 0.72,
126
+ "learning_rate": 2.4766332665330663e-05,
127
+ "loss": 1.5119,
128
+ "step": 10000
129
+ },
130
+ {
131
+ "epoch": 0.76,
132
+ "learning_rate": 2.1559919839679358e-05,
133
+ "loss": 1.5147,
134
+ "step": 10500
135
+ },
136
+ {
137
+ "epoch": 0.79,
138
+ "learning_rate": 1.8353507014028057e-05,
139
+ "loss": 1.4978,
140
+ "step": 11000
141
+ },
142
+ {
143
+ "epoch": 0.83,
144
+ "learning_rate": 1.5147094188376754e-05,
145
+ "loss": 1.4914,
146
+ "step": 11500
147
+ },
148
+ {
149
+ "epoch": 0.87,
150
+ "learning_rate": 1.1940681362725453e-05,
151
+ "loss": 1.4889,
152
+ "step": 12000
153
+ },
154
+ {
155
+ "epoch": 0.9,
156
+ "learning_rate": 8.734268537074148e-06,
157
+ "loss": 1.4894,
158
+ "step": 12500
159
+ },
160
+ {
161
+ "epoch": 0.94,
162
+ "learning_rate": 5.527855711422846e-06,
163
+ "loss": 1.4743,
164
+ "step": 13000
165
+ },
166
+ {
167
+ "epoch": 0.97,
168
+ "learning_rate": 2.321442885771543e-06,
169
+ "loss": 1.4627,
170
+ "step": 13500
171
+ },
172
+ {
173
+ "epoch": 1.0,
174
+ "step": 13862,
175
+ "total_flos": 7.817060132659814e+17,
176
+ "train_loss": 1.5347596183515975,
177
+ "train_runtime": 69651.7637,
178
+ "train_samples_per_second": 6.369,
179
+ "train_steps_per_second": 0.199
180
+ }
181
+ ],
182
+ "max_steps": 13862,
183
+ "num_train_epochs": 1,
184
+ "total_flos": 7.817060132659814e+17,
185
+ "trial_name": null,
186
+ "trial_params": null
187
+ }
training_args.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:23b05dd0e9ec7f982e19087b8e7878149c1ef65c9ba808ba9b25866700cfc6ed
3
+ size 3439
vocab.json ADDED
The diff for this file is too large to render. See raw diff