ccdv committed on
Commit
0faa84c
1 Parent(s): 604fd9b
README.md ADDED
@@ -0,0 +1,117 @@
---
language:
- en
tags:
- summarization
datasets:
- ccdv/mediasum
metrics:
- rouge
model-index:
- name: ccdv/lsg-bart-base-4096-mediasum
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

**This model relies on a custom modeling file; you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096-mediasum", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-4096-mediasum", trust_remote_code=True)

text = "Replace with what you want."
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, device=0)
generated_text = pipe(
    text,
    truncation=True,
    max_length=64,
    no_repeat_ngram_size=7,
    num_beams=2,
    early_stopping=True
)
```

# ccdv/lsg-bart-base-4096-mediasum

This model is a fine-tuned version of [ccdv/lsg-bart-base-4096](https://huggingface.co/ccdv/lsg-bart-base-4096) on the ccdv/mediasum roberta_prepended dataset. \
It achieves the following results on the test set:

| Length | Sparse Type  | Block Size | Sparsity | Connections | R1    | R2    | RL    | RLsum |
|:------ |:------------ |:---------- |:-------- |:----------- |:----- |:----- |:----- |:----- |
| 4096   | Local        | 256        | 0        | 768         | 35.16 | 18.13 | 31.54 | 32.20 |
| 4096   | Local        | 128        | 0        | 384         | 34.16 | 17.61 | 30.75 | 31.41 |
| 4096   | Pooling      | 128        | 4        | 644         | 34.52 | 17.71 | 31.01 | 31.67 |
| 4096   | Stride       | 128        | 4        | 644         | 35.05 | 18.11 | 31.47 | 32.13 |
| 4096   | Block Stride | 128        | 4        | 644         | 34.72 | 17.81 | 31.13 | 31.82 |
| 4096   | Norm         | 128        | 4        | 644         | 34.75 | 17.86 | 31.10 | 31.77 |
| 4096   | LSH          | 128        | 4        | 644         | 34.54 | 17.81 | 31.05 | 31.71 |

With a smaller block size (lower resources):

| Length | Sparse Type  | Block Size | Sparsity | Connections | R1    | R2    | RL    | RLsum |
|:------ |:------------ |:---------- |:-------- |:----------- |:----- |:----- |:----- |:----- |
| 4096   | Local        | 64         | 0        | 192         | 32.55 | 16.66 | 29.36 | 30.00 |
| 4096   | Local        | 32         | 0        | 96          | 30.98 | 15.41 | 27.84 | 28.46 |
| 4096   | Pooling      | 32         | 4        | 160         | 31.84 | 16.02 | 28.68 | 29.30 |
| 4096   | Stride       | 32         | 4        | 160         | 32.67 | 16.68 | 29.47 | 30.10 |
| 4096   | Block Stride | 32         | 4        | 160         | 32.51 | 16.64 | 29.33 | 29.94 |
| 4096   | Norm         | 32         | 4        | 160         | 32.44 | 16.48 | 29.20 | 29.79 |
| 4096   | LSH          | 32         | 4        | 160         | 31.79 | 16.04 | 28.67 | 29.31 |
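
The sparse pattern and block size compared above are ordinary configuration attributes (`block_size`, `sparse_block_size`, `sparsity_type`, `sparsity_factor` in `config.json`), so they can in principle be overridden at load time: keyword arguments that `from_pretrained` does not consume are forwarded to the config when `trust_remote_code=True`. A minimal sketch (the values below are illustrative, not the checkpoint defaults):

```python
from transformers import AutoModelForSeq2SeqLM

# Switch the encoder to a "stride" sparse pattern with 128-token blocks;
# unused kwargs are forwarded to LSGBartConfig.
model = AutoModelForSeq2SeqLM.from_pretrained(
    "ccdv/lsg-bart-base-4096-mediasum",
    trust_remote_code=True,
    block_size=128,           # local block size
    sparse_block_size=128,    # sparse block size (0 disables sparse attention)
    sparsity_type="stride",   # none, norm, pooling, stride, block_stride or lsh
    sparsity_factor=4,        # "Sparsity" column in the tables above
)
```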
## Model description
The model relies on Local-Sparse-Global attention to handle long sequences:
![attn](attn.png)

The model has about 145 million parameters (6 encoder layers, 6 decoder layers). \
The model is warm-started from BART-base, converted to handle long sequences (encoder only) and fine-tuned.

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 8e-05
- train_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 6.0
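
As a rough illustration, these settings map onto `Seq2SeqTrainingArguments` as follows (a minimal sketch assuming a standard `Seq2SeqTrainer` script; the output directory and anything not listed above are hypothetical):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="lsg-bart-base-4096-mediasum",  # hypothetical path
    learning_rate=8e-05,
    per_device_train_batch_size=8,   # x4 accumulation steps = total batch size 32
    gradient_accumulation_steps=4,
    num_train_epochs=6.0,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)
```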
### Generate hyperparameters

The following hyperparameters were used during generation (see the sketch after this list):
- dataset_name: ccdv/mediasum
- dataset_config_name: roberta_prepended
- eval_batch_size: 8
- eval_samples: 10000
- early_stopping: True
- ignore_pad_token_for_loss: True
- length_penalty: 2.0
- max_length: 128
- min_length: 3
- num_beams: 5
- no_repeat_ngram_size: None
- seed: 123
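
With the settings above, a beam-search generation call would look roughly like this (a minimal sketch; `tokenizer`, `model` and `text` are assumed to be loaded as in the usage example at the top of this card):

```python
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)
summary_ids = model.generate(
    **inputs,
    num_beams=5,
    max_length=128,
    min_length=3,
    length_penalty=2.0,
    early_stopping=True,
)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
```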
### Framework versions

- Transformers 4.18.0
- Pytorch 1.10.1+cu102
- Datasets 2.1.0
- Tokenizers 0.11.6
attn.png ADDED
config.json ADDED
@@ -0,0 +1,96 @@
{
  "_name_or_path": "/data/ccondevaux/lsg/text-summarization/tmp_final/mediasum/lsg_local_prepended2",
  "activation_dropout": 0.1,
  "activation_function": "gelu",
  "adaptive": true,
  "add_bias_logits": false,
  "add_final_layer_norm": false,
  "architectures": [
    "LSGBartForConditionalGeneration"
  ],
  "attention_dropout": 0.1,
  "auto_map": {
    "AutoConfig": "modeling_lsg_bart.LSGBartConfig",
    "AutoModel": "modeling_lsg_bart.LSGBartModel",
    "AutoModelForCausalLM": "modeling_lsg_bart.LSGBartForCausalLM",
    "AutoModelForQuestionAnswering": "modeling_lsg_bart.LSGBartForQuestionAnswering",
    "AutoModelForSeq2SeqLM": "modeling_lsg_bart.LSGBartForConditionalGeneration",
    "AutoModelForSequenceClassification": "modeling_lsg_bart.LSGBartForSequenceClassification"
  },
  "base_model_prefix": "lsg",
  "block_size": 256,
  "bos_token_id": 0,
  "classif_dropout": 0.1,
  "classifier_dropout": 0.0,
  "d_model": 768,
  "decoder_attention_heads": 12,
  "decoder_ffn_dim": 3072,
  "decoder_layerdrop": 0.0,
  "decoder_layers": 6,
  "decoder_start_token_id": 2,
  "dropout": 0.1,
  "early_stopping": true,
  "encoder_attention_heads": 12,
  "encoder_ffn_dim": 3072,
  "encoder_layerdrop": 0.0,
  "encoder_layers": 6,
  "eos_token_id": 2,
  "forced_bos_token_id": 0,
  "forced_eos_token_id": 2,
  "gradient_checkpointing": false,
  "id2label": {
    "0": "LABEL_0",
    "1": "LABEL_1",
    "2": "LABEL_2"
  },
  "init_std": 0.02,
  "is_encoder_decoder": true,
  "label2id": {
    "LABEL_0": 0,
    "LABEL_1": 1,
    "LABEL_2": 2
  },
  "length_penalty": 2.0,
  "lsh_num_pre_rounds": 1,
  "max_length": 128,
  "max_position_embeddings": 4096,
  "min_length": 3,
  "model_type": "bart",
  "no_repeat_ngram_size": null,
  "normalize_before": false,
  "normalize_embedding": true,
  "num_beams": 5,
  "num_global_tokens": 1,
  "num_hidden_layers": 6,
  "pad_token_id": 1,
  "pass_global_tokens_to_decoder": true,
  "pool_with_global": true,
  "scale_embedding": false,
  "sparse_block_size": 0,
  "sparsity_factor": 4,
  "sparsity_type": "none",
  "task_specific_params": {
    "summarization": {
      "length_penalty": 1.0,
      "max_length": 128,
      "min_length": 12,
      "num_beams": 4
    },
    "summarization_cnn": {
      "length_penalty": 2.0,
      "max_length": 142,
      "min_length": 56,
      "num_beams": 4
    },
    "summarization_xsum": {
      "length_penalty": 1.0,
      "max_length": 62,
      "min_length": 11,
      "num_beams": 6
    }
  },
  "torch_dtype": "float32",
  "transformers_version": "4.19.2",
  "use_cache": true,
  "vocab_size": 50265
}
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
modeling_lsg_bart.py ADDED
@@ -0,0 +1,1763 @@
1
+ from logging import warn
2
+ import torch
3
+ from transformers.models.bart.modeling_bart import *
4
+ from transformers.models.bart.modeling_bart import _expand_mask
5
+ import torch.nn as nn
6
+ from torch.nn import BCEWithLogitsLoss
7
+ import sys
8
+
9
+ AUTO_MAP = {
10
+ "AutoModel": "modeling_lsg_bart.LSGBartModel",
11
+ "AutoModelForCausalLM": "modeling_lsg_bart.LSGBartForCausalLM",
12
+ "AutoModelForQuestionAnswering": "modeling_lsg_bart.LSGBartForQuestionAnswering",
13
+ "AutoModelForSequenceClassification": "modeling_lsg_bart.LSGBartForSequenceClassification",
14
+ "AutoModelForSeq2SeqLM": "modeling_lsg_bart.LSGBartForConditionalGeneration"
15
+ }
16
+
17
+ class LSGBartConfig(BartConfig):
18
+ """
19
+ This class overrides :class:`~transformers.BartConfig`. Please check the superclass for the appropriate
20
+ documentation alongside usage examples.
21
+ """
22
+
23
+ base_model_prefix = "lsg"
24
+ model_type = "bart"
25
+ keys_to_ignore_at_inference = ["past_key_values"]
26
+ attribute_map = {"num_attention_heads": "encoder_attention_heads", "hidden_size": "d_model"}
27
+
28
+ def __init__(
29
+ self,
30
+ adaptive=True,
31
+ base_model_prefix="lsg",
32
+ block_size=128,
33
+ lsh_num_pre_rounds=1,
34
+ num_global_tokens=1,
35
+ pass_global_tokens_to_decoder=True,
36
+ pool_with_global=True,
37
+ sparse_block_size=128,
38
+ sparsity_factor=2,
39
+ sparsity_type="norm",
40
+ **kwargs
41
+ ):
42
+ """Constructs LSGConfig."""
43
+ super().__init__(**kwargs)
44
+
45
+ self.adaptive = adaptive
46
+ self.auto_map = AUTO_MAP
47
+ self.base_model_prefix = base_model_prefix
48
+ self.block_size = block_size
49
+ self.lsh_num_pre_rounds = lsh_num_pre_rounds
50
+ self.num_global_tokens = num_global_tokens
51
+ self.pass_global_tokens_to_decoder = pass_global_tokens_to_decoder
52
+ self.pool_with_global = pool_with_global
53
+ self.sparse_block_size = sparse_block_size
54
+ self.sparsity_factor = sparsity_factor
55
+ self.sparsity_type = sparsity_type
56
+
57
+ if sparsity_type not in [None, "none", "norm", "lsh", "pooling", "stride", "block_stride"]:
58
+ logger.warning(
59
+ "[WARNING CONFIG]: sparsity_type not in [None, 'none', 'norm', 'lsh', 'pooling', 'stride', 'block_stride'], setting sparsity_type=None, computation will skip sparse attention")
60
+ self.sparsity_type = None
61
+
62
+ if self.sparsity_type in ["stride", "block_stride"]:
63
+ if self.sparsity_factor > self.encoder_attention_heads:
64
+ logger.warning(
65
+ "[WARNING CONFIG]: sparsity_factor > encoder_attention_heads is not recommended for stride/block_stride sparsity"
66
+ )
67
+
68
+ if self.num_global_tokens < 1:
69
+ logger.warning(
70
+ "[WARNING CONFIG]: num_global_tokens < 1 is not compatible, setting num_global_tokens=1"
71
+ )
72
+ self.num_global_tokens = 1
73
+ elif self.num_global_tokens > 512:
74
+ logger.warning(
75
+ "[WARNING CONFIG]: num_global_tokens > 512 is not compatible, setting num_global_tokens=512"
76
+ )
77
+ self.num_global_tokens = 512
78
+
79
+ if self.sparsity_factor > 0:
80
+ assert self.block_size % self.sparsity_factor == 0, "[ERROR CONFIG]: block_size must be divisible by sparsity_factor"
81
+ assert self.block_size//self.sparsity_factor >= 1, "[ERROR CONFIG]: make sure block_size >= sparsity_factor"
82
+
83
+
84
+ def shift_tokens_right(input_ids, pad_token_id, decoder_start_token_id):
85
+ """
86
+ Shift input ids one token to the right.
87
+ """
88
+ shifted_input_ids = input_ids.new_zeros(input_ids.shape)
89
+ shifted_input_ids[:, 1:] = input_ids[:, :-1].clone()
90
+ shifted_input_ids[:, 0] = decoder_start_token_id
91
+
92
+ if pad_token_id is None:
93
+ raise ValueError("self.model.config.pad_token_id has to be defined.")
94
+ # replace possible -100 values in labels by `pad_token_id`
95
+ shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id)
96
+
97
+ return shifted_input_ids
98
+
99
+
100
+ def _make_causal_mask(input_ids_shape, dtype, past_key_values_length=0):
101
+ """
102
+ Make causal mask used for bi-directional self-attention.
103
+ """
104
+ bsz, tgt_len = input_ids_shape
105
+ mask = torch.full((tgt_len, tgt_len), float("-inf"))
106
+ mask_cond = torch.arange(mask.size(-1))
107
+ mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
108
+ mask = mask.to(dtype)
109
+
110
+ if past_key_values_length > 0:
111
+ mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype), mask], dim=-1)
112
+ return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)
113
+
114
+
115
+ def _expand_mask(mask, dtype, tgt_len=None):
116
+ """
117
+ Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.
118
+ """
119
+ bsz, src_len = mask.size()
120
+ tgt_len = tgt_len if tgt_len is not None else src_len
121
+
122
+ expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)
123
+
124
+ inverted_mask = 1.0 - expanded_mask
125
+
126
+ return inverted_mask.masked_fill(inverted_mask.bool(), torch.finfo(dtype).min)
127
+
128
+
129
+ class BaseSelfAttention(nn.Module):
130
+
131
+ def __init__(
132
+ self,
133
+ embed_dim,
134
+ num_heads,
135
+ dropout=0.0,
136
+ is_decoder=False,
137
+ bias=True,
138
+ ):
139
+
140
+ super().__init__()
141
+ self.embed_dim = embed_dim
142
+ self.num_heads = num_heads
143
+ self.dropout = dropout
144
+ self.head_dim = embed_dim // num_heads
145
+
146
+ if (self.head_dim * num_heads) != self.embed_dim:
147
+ raise ValueError(
148
+ f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim}"
149
+ f" and `num_heads`: {num_heads})."
150
+ )
151
+ self.scaling = self.head_dim ** -0.5
152
+ self.is_decoder = is_decoder
153
+
154
+ self.k_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
155
+ self.v_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
156
+ self.q_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
157
+ self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
158
+
159
+ def transpose_for_scores(self, x):
160
+ new_x_shape = x.size()[:-1] + (
161
+ self.num_heads,
162
+ self.head_dim,
163
+ )
164
+ x = x.view(*new_x_shape)
165
+ return x.permute(0, 2, 1, 3)
166
+
167
+ def reshape_output(self, context_layer):
168
+ context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
169
+ new_context_layer_shape = context_layer.size()[:-2] + (self.embed_dim,)
170
+ return context_layer.view(*new_context_layer_shape)
171
+
172
+ def project_QKV(self, hidden_states):
173
+
174
+ query_layer = self.transpose_for_scores(self.q_proj(hidden_states))
175
+ key_layer = self.transpose_for_scores(self.k_proj(hidden_states))
176
+ value_layer = self.transpose_for_scores(self.v_proj(hidden_states))
177
+ return query_layer, key_layer, value_layer
178
+
179
+
180
+ class BaseAttentionProduct(nn.Module):
181
+
182
+ def __init__(self, config):
183
+ """
184
+ Compute attention: softmax(Q @ K.T) @ V
185
+ """
186
+ super().__init__()
187
+ self.dropout = nn.Dropout(config.attention_dropout)
188
+
189
+ def forward(self, query_layer, key_layer, value_layer, attention_mask=None):
190
+
191
+ d = query_layer.shape[-1]
192
+
193
+ # Take the dot product between "query" and "key" to get the raw attention scores.
194
+ attention_scores = query_layer @ key_layer.transpose(-1, -2) / math.sqrt(d)
195
+
196
+ del query_layer
197
+ del key_layer
198
+
199
+ if attention_mask is not None:
200
+ # Apply the attention mask (precomputed for all layers in the model's forward() function)
201
+ attention_scores = attention_scores + attention_mask
202
+ del attention_mask
203
+
204
+ # Normalize the attention scores to probabilities.
205
+ attention_probs = nn.Softmax(dim=-1)(attention_scores)
206
+
207
+ # This is actually dropping out entire tokens to attend to, which might
208
+ # seem a bit unusual, but is taken from the original Transformer paper.
209
+ context_layer = self.dropout(attention_probs) @ value_layer
210
+
211
+ return context_layer
212
+
213
+
214
+ class LSGAttentionProduct(nn.Module):
215
+
216
+ def __init__(self, config, block_size=None, sparse_block_size=None, sparsity_factor=4):
217
+ """
218
+ Compute block or overlapping blocks attention products
219
+ """
220
+ super().__init__()
221
+
222
+ self.block_size = block_size
223
+ self.sparse_block_size = sparse_block_size
224
+ self.sparsity_factor = sparsity_factor
225
+
226
+ if self.block_size is None:
227
+ self.block_size = config.block_size
228
+
229
+ if self.sparse_block_size is None:
230
+ self.sparse_block_size = config.sparse_block_size
231
+
232
+ # Shape of blocks
233
+ self.local_shapes = (self.block_size*3, self.block_size)
234
+ if self.sparse_block_size and self.sparsity_factor > 0:
235
+ self.sparse_shapes = (self.sparse_block_size*3, self.block_size//self.sparsity_factor)
236
+
237
+ self.attention = BaseAttentionProduct(config)
238
+
239
+ def build_lsg_inputs(self, hidden_states, sparse_hidden_states, global_hidden_states, is_attn_mask=False):
240
+
241
+ # Build local tokens
242
+ local_hidden_states = self.reshape_to_local_block(hidden_states, is_attn_mask)
243
+ del hidden_states
244
+
245
+ # Build sparse tokens
246
+ if sparse_hidden_states is not None:
247
+ sparse_hidden_states = self.reshape_to_sparse_block(sparse_hidden_states, is_attn_mask)
248
+
249
+ return self.cat_global_sparse_local_tokens(global_hidden_states, sparse_hidden_states, local_hidden_states)
250
+
251
+ def forward(
252
+ self,
253
+ query_layer,
254
+ key_layer,
255
+ value_layer,
256
+ attention_mask=None,
257
+ sparse_key=None,
258
+ sparse_value=None,
259
+ sparse_mask=None,
260
+ global_key=None,
261
+ global_value=None,
262
+ global_mask=None
263
+ ):
264
+
265
+ # Input batch, heads, length, hidden_size
266
+ n, h, t, d = query_layer.size()
267
+ n_blocks = t // self.block_size
268
+ assert t % self.block_size == 0
269
+
270
+ key_layer = self.build_lsg_inputs(
271
+ key_layer,
272
+ sparse_key,
273
+ global_key
274
+ )
275
+ del sparse_key
276
+ del global_key
277
+
278
+ value_layer = self.build_lsg_inputs(
279
+ value_layer,
280
+ sparse_value,
281
+ global_value
282
+ )
283
+ del sparse_value
284
+ del global_value
285
+
286
+ attention_mask = self.build_lsg_inputs(
287
+ attention_mask,
288
+ sparse_mask,
289
+ global_mask.transpose(-1, -2),
290
+ is_attn_mask=True
291
+ ).transpose(-1, -2)
292
+ del sparse_mask
293
+ del global_mask
294
+
295
+ # expect (..., t, d) shape
296
+ # Compute attention
297
+ context_layer = self.attention(
298
+ query_layer=self.chunk(query_layer, n_blocks),
299
+ key_layer=key_layer,
300
+ value_layer=value_layer,
301
+ attention_mask=attention_mask
302
+ )
303
+
304
+ return context_layer.reshape(n, h, -1, d)
305
+
306
+ def reshape_to_local_block(self, hidden_states, is_attn_mask=False):
307
+
308
+ size, step = self.local_shapes
309
+ s = (size - step) // 2
310
+
311
+ # Pad before block reshaping
312
+ if is_attn_mask:
313
+ pad_value = -10000
314
+ hidden_states = hidden_states.transpose(-1, -2)
315
+ else:
316
+ pad_value = 0
317
+
318
+ hidden_states = torch.nn.functional.pad(
319
+ hidden_states.transpose(-1, -2),
320
+ pad=(s, s),
321
+ value=pad_value
322
+ ).transpose(-1, -2)
323
+
324
+ # Make blocks
325
+ hidden_states = hidden_states.unfold(-2, size=size, step=step).transpose(-1, -2)
326
+
327
+ return hidden_states
328
+
329
+ def reshape_to_sparse_block(self, hidden_states, is_attn_mask=False):
330
+
331
+ size, step = self.sparse_shapes
332
+
333
+ # In case of odd case
334
+ odd_offset = (step % 2)
335
+
336
+ # n, h, t, d*2 + 1
337
+ size = size*2
338
+ s = (size - step) // 2 + odd_offset
339
+
340
+ # Pad before block reshaping
341
+ if is_attn_mask:
342
+ pad_value = -10000
343
+ hidden_states = hidden_states.transpose(-1, -2)
344
+ else:
345
+ pad_value = 0
346
+
347
+ hidden_states = torch.nn.functional.pad(
348
+ hidden_states.transpose(-1, -2),
349
+ pad=(s, s),
350
+ value=pad_value
351
+ ).transpose(-1, -2)
352
+
353
+ # Make blocks
354
+ hidden_states = hidden_states.unfold(-2, size=size, step=step).transpose(-1, -2)
355
+
356
+ # Fix case where block_size == sparsity_factor
357
+ if odd_offset:
358
+ hidden_states = hidden_states[..., :-1, :, :]
359
+
360
+ # Indexes for selection
361
+ u = (size - self.block_size * 3 // self.sparsity_factor) // 2 + odd_offset
362
+ s = self.sparse_block_size
363
+
364
+ u_ = u + odd_offset
365
+ return torch.cat([hidden_states[..., u-s:u, :], hidden_states[..., -u_:-u_+s, :]], dim=-2)
366
+
367
+ def cat_global_sparse_local_tokens(self, x_global, x_sparse=None, x_local=None, dim=-2):
368
+
369
+ n, h, b, t, d = x_local.size()
370
+ x_global = x_global.unsqueeze(-3).expand(-1, -1, b, -1, -1)
371
+ if x_sparse is not None:
372
+ return torch.cat([x_global, x_sparse, x_local], dim=dim)
373
+ return torch.cat([x_global, x_local], dim=dim)
374
+
375
+ def chunk(self, x, n_blocks):
376
+
377
+ t, d = x.size()[-2:]
378
+ return x.reshape(*x.size()[:-2], n_blocks, -1, d)
379
+
380
+
381
+ class LSGBartEncoderAttention(BaseSelfAttention):
382
+ '''
383
+ Compute local attention with overlapping blocks
384
+ Use global attention for tokens with highest norm
385
+ '''
386
+ def __init__(
387
+ self,
388
+ config,
389
+ embed_dim,
390
+ num_heads,
391
+ dropout
392
+ ):
393
+
394
+ super().__init__(embed_dim, num_heads, dropout)
395
+
396
+ self.block_size = config.block_size
397
+ self.sparse_block_size = config.sparse_block_size
398
+ self.num_global_tokens = config.num_global_tokens
399
+ self.sparsity_factor = config.sparsity_factor
400
+
401
+ self.attention = LSGAttentionProduct(
402
+ config,
403
+ block_size=config.block_size,
404
+ sparse_block_size=config.sparse_block_size,
405
+ sparsity_factor=self.sparsity_factor,
406
+ )
407
+
408
+ self.full_attention = BaseAttentionProduct(config)
409
+
410
+ sparse_functions = {
411
+ "norm": self.get_sparse_tokens_with_norm,
412
+ "pooling": self.get_sparse_tokens_with_pooling,
413
+ "lsh": self.get_sparse_tokens_with_lsh,
414
+ "stride": self.get_sparse_tokens_with_stride,
415
+ "block_stride": self.get_sparse_tokens_with_block_stride,
416
+ }
417
+
418
+ self.sparsity_type = config.sparsity_type
419
+ self.get_sparse_elements = sparse_functions.get(self.sparsity_type, lambda x, y, z: (None, None, None))
420
+
421
+ if config.sparsity_type == "lsh":
422
+ self.lsh_num_pre_rounds = config.lsh_num_pre_rounds
423
+
424
+ def get_sparse_tokens_with_norm(self, keys, values, mask):
425
+
426
+ if self.sparsity_factor == 1:
427
+ return keys, values, mask.expand(-1, keys.size()[1], -1, -1)
428
+
429
+ with torch.no_grad():
430
+
431
+ block_size = min(self.block_size, self.sparse_block_size)
432
+ key_norm = keys.detach().norm(dim=-1, keepdim=True)
433
+ key_norm = key_norm * ~mask.transpose(-1, -2).bool()
434
+ key_norm = self.chunk(key_norm, block_size)
435
+
436
+ n, h, b, t, d = key_norm.size()
437
+
438
+ idx = key_norm.argsort(dim=-2)
439
+ del key_norm
440
+ idx += (torch.arange(b, device=keys.device)*t).reshape(1, 1, b, 1, 1)
441
+
442
+ split = (t - block_size // self.sparsity_factor, block_size // self.sparsity_factor)
443
+ sparse_idx = idx.split(split, -2)[-1].reshape(n, h, -1, 1)
444
+
445
+ d = keys.size()[-1]
446
+ keys = keys.gather(dim=-2, index=sparse_idx.expand(-1, -1, -1, d))
447
+ values = values.gather(dim=-2, index=sparse_idx.expand(-1, -1, -1, d))
448
+ mask = mask.expand(-1, h, -1, -1).transpose(-1, -2).gather(dim=-2, index=sparse_idx).transpose(-1, -2)
449
+
450
+ return keys, values, mask
451
+
452
+ def get_sparse_tokens_with_pooling(self, keys, values, mask):
453
+
454
+ if self.sparsity_factor == 1:
455
+ return keys, values, mask.expand(-1, keys.size()[1], -1, -1)
456
+
457
+ keys = self.chunk(keys, self.sparsity_factor)
458
+ values = self.chunk(values, self.sparsity_factor)
459
+
460
+ n, h, b, t, d = keys.size()
461
+ mask = mask.reshape(n, 1, b, 1, t)
462
+ mask = ~mask.transpose(-1, -2).bool()
463
+
464
+ keys = keys * mask
465
+ values = values * mask
466
+
467
+ mask = mask.sum(dim=-2)
468
+ keys = keys.sum(dim=-2) / (mask + 1e-6)
469
+ values = values.sum(dim=-2) / (mask + 1e-6)
470
+
471
+ mask = - (1. - mask.clamp(0, 1)) * 1e4
472
+ return keys.reshape(n, h, -1, d), values.reshape(n, h, -1, d), mask.expand(-1, h, -1, -1).transpose(-1, -2)
473
+
474
+ def get_sparse_tokens_with_stride(self, keys, values, mask):
475
+
476
+ if self.sparsity_factor == 1:
477
+ return keys, values, mask.expand(-1, keys.size()[1], -1, -1)
478
+
479
+ n, h, t, d = keys.size()
480
+ sparse_idx = torch.arange(t // self.sparsity_factor, device=keys.device) * self.sparsity_factor
481
+ sparse_idx = sparse_idx.reshape(1, 1, -1, 1) + (torch.arange(h, device=keys.device) % self.sparsity_factor).reshape(1, h, 1, 1)
482
+ sparse_idx = sparse_idx.expand(n, h, -1, 1)
483
+
484
+ keys = keys.gather(dim=-2, index=sparse_idx.expand(-1, -1, -1, d))
485
+ values = values.gather(dim=-2, index=sparse_idx.expand(-1, -1, -1, d))
486
+ mask = mask.expand(-1, h, -1, -1).transpose(-1, -2).gather(dim=-2, index=sparse_idx).transpose(-1, -2)
487
+
488
+ return keys, values, mask
489
+
490
+ def get_sparse_tokens_with_block_stride(self, keys, values, mask):
491
+
492
+ if self.sparsity_factor == 1:
493
+ return keys, values, mask.expand(-1, keys.size()[1], -1, -1)
494
+
495
+ n, h, t, d = keys.size()
496
+
497
+ t, b = self.block_size, t // self.block_size
498
+ sparse_idx = torch.arange(t // self.sparsity_factor, device=keys.device)
499
+ sparse_idx = sparse_idx.reshape(1, 1, 1, -1, 1) + torch.arange(h, device=keys.device).reshape(1, h, 1, 1, 1) * (t // self.sparsity_factor)
500
+ sparse_idx = (sparse_idx % t)
501
+ sparse_idx = sparse_idx + torch.arange(b, device=keys.device).reshape(1, 1, -1, 1, 1) * t
502
+ sparse_idx = sparse_idx.reshape(1, h, -1, 1).expand(n, h, -1, 1)
503
+
504
+ keys = keys.gather(dim=-2, index=sparse_idx.expand(-1, -1, -1, d))
505
+ values = values.gather(dim=-2, index=sparse_idx.expand(-1, -1, -1, d))
506
+ mask = mask.expand(-1, h, -1, -1).transpose(-1, -2).gather(dim=-2, index=sparse_idx).transpose(-1, -2)
507
+
508
+ return keys, values, mask
509
+
510
+ def get_sparse_tokens_with_lsh(self, keys, values, mask):
511
+
512
+ if self.sparsity_factor == 1:
513
+ return keys, values, mask.expand(-1, keys.size()[1], -1, -1)
514
+
515
+ block_size = min(self.block_size, self.sparse_block_size)
516
+ keys = self.chunk(keys, block_size)
517
+ values = self.chunk(values, block_size)
518
+
519
+ n, h, b, t, d = keys.size()
520
+ mask = mask.reshape(n, 1, b, 1, t)
521
+ mask = ~mask.transpose(-1, -2).bool()
522
+
523
+ keys = keys * mask
524
+ values = values * mask
525
+ mask = mask.expand(-1, h, -1, -1, -1).float()
526
+
527
+ extra_factor = 1
528
+
529
+ for _ in range(self.lsh_num_pre_rounds):
530
+ keys, values, mask = self.lsh_round(keys, values, mask, t*extra_factor)
531
+
532
+ keys, values, mask = self.lsh_round(keys, values, mask, t//self.sparsity_factor)
533
+ keys /= mask + 1e-8
534
+ values /= mask + 1e-8
535
+
536
+ mask = -10000 * (1. - mask.clamp(0, 1))
537
+
538
+ return keys.reshape(n, h, -1, d), values.reshape(n, h, -1, d), mask.transpose(-1, -2).reshape(n, h, 1, -1)
539
+
540
+ def lsh_round(self, keys, values, mask, output_size):
541
+
542
+ with torch.no_grad():
543
+
544
+ n_hashes = output_size // 2
545
+ n, h, b, t, d = keys.size()
546
+ binary_mask = mask.clamp(0, 1)
547
+
548
+ indexes = (torch.nn.functional.normalize(keys, dim=-1) * binary_mask) @ torch.randn(1, h, 1, d, n_hashes, device=keys.device)
549
+ indexes = torch.cat([indexes, -indexes], dim=-1).argmax(dim=-1, keepdim=True)
550
+
551
+ n, h, b, t, d = keys.size()
552
+
553
+ x_ = torch.zeros(n, h, b, output_size, d, device=keys.device)
554
+ mask_ = torch.zeros(n, h, b, output_size, 1, device=keys.device)
555
+ keys = torch.scatter_add(x_, dim=-2, index=indexes.expand(-1, -1, -1, -1, d), src=keys)
556
+ values = torch.scatter_add(x_, dim=-2, index=indexes.expand(-1, -1, -1, -1, d), src=values)
557
+ mask = torch.scatter_add(mask_, dim=-2, index=indexes, src=mask)
558
+
559
+ return keys[..., :output_size, :], values[..., :output_size, :], mask[..., :output_size, :]
560
+
561
+ def forward(
562
+ self,
563
+ hidden_states,
564
+ attention_mask=None,
565
+ layer_head_mask=None,
566
+ output_attentions=False
567
+ ):
568
+
569
+ query_layer, key_layer, value_layer = self.project_QKV(hidden_states)
570
+ outputs = self.not_causal_forward(
571
+ query_layer,
572
+ key_layer,
573
+ value_layer,
574
+ attention_mask=attention_mask[:, :, :1, :],
575
+ head_mask=layer_head_mask,
576
+ output_attentions=output_attentions
577
+ )
578
+
579
+ return self.out_proj(outputs), None, None
580
+
581
+ def not_causal_forward(
582
+ self,
583
+ query_layer,
584
+ key_layer,
585
+ value_layer,
586
+ attention_mask=None,
587
+ head_mask=None,
588
+ output_attentions=False,
589
+ ):
590
+
591
+ n, h, t, d = query_layer.size()
592
+
593
+ # Cat global mask
594
+ attention_mask = torch.nn.functional.pad(attention_mask, (self.num_global_tokens, 0), value=0)
595
+
596
+ # Use normal attention if local attention covers all tokens
597
+ if t <= 2 * self.block_size + self.num_global_tokens:
598
+ context_layer = self.full_attention(
599
+ query_layer=query_layer,
600
+ key_layer=key_layer,
601
+ value_layer=value_layer,
602
+ attention_mask=attention_mask
603
+ )
604
+
605
+ if head_mask is not None:
606
+ context_layer = context_layer * head_mask[:, :, :1, :1]
607
+ return self.reshape_output(context_layer)
608
+
609
+ # Split input into global tokens and other tokens
610
+ split = (self.num_global_tokens, t - self.num_global_tokens)
611
+ global_query, query_layer = query_layer.split(split, dim=-2)
612
+
613
+ # Get global_attention
614
+ bos = self.full_attention(
615
+ query_layer=global_query,
616
+ key_layer=key_layer,
617
+ value_layer=value_layer,
618
+ attention_mask=attention_mask
619
+ )
620
+
621
+ # Split K Q M on global and non global
622
+ global_key, key_layer = key_layer.split(split, dim=-2)
623
+ global_value, value_layer = value_layer.split(split, dim=-2)
624
+ global_mask, attention_mask = attention_mask.split(split, dim=-1)
625
+
626
+ n, h, t, d = key_layer.size()
627
+
628
+ # Get sparse idx
629
+ sparse_key, sparse_value, sparse_mask = (None, None, None)
630
+
631
+ if self.sparse_block_size and self.sparsity_factor > 0:
632
+ sparse_key, sparse_value, sparse_mask = self.get_sparse_elements(key_layer, value_layer, attention_mask)
633
+
634
+ # Expand masks on heads
635
+ attention_mask = attention_mask.expand(-1, h, -1, -1)
636
+ global_mask = global_mask.expand(-1, h, -1, -1)
637
+
638
+ # Compute dot product attention
639
+ context_layer = self.attention(
640
+ query_layer,
641
+ key_layer,
642
+ value_layer,
643
+ attention_mask,
644
+ sparse_key=sparse_key,
645
+ sparse_value=sparse_value,
646
+ sparse_mask=sparse_mask,
647
+ global_key=global_key,
648
+ global_value=global_value,
649
+ global_mask=global_mask
650
+ )
651
+
652
+ # Merge global and local-sparse tokens
653
+ context_layer = torch.cat([bos, context_layer], dim=-2)
654
+ if head_mask is not None:
655
+ context_layer = context_layer * head_mask[:, :, :1, :1]
656
+ context_layer = self.reshape_output(context_layer)
657
+
658
+ return context_layer
659
+
660
+ def chunk(self, x, chunk_size):
661
+
662
+ n, h, t, d = x.size()
663
+ return x.reshape(n, h, -1, chunk_size, d)
664
+
665
+
666
+ class LSGBartDecoderAttention(nn.Module):
667
+
668
+ """Multi-headed attention from 'Attention Is All You Need' paper"""
669
+
670
+ def __init__(
671
+ self,
672
+ embed_dim,
673
+ num_heads,
674
+ dropout=0.0,
675
+ is_decoder=False,
676
+ bias=True,
677
+ ):
678
+
679
+ super().__init__()
680
+ self.embed_dim = embed_dim
681
+ self.num_heads = num_heads
682
+ self.dropout = dropout
683
+ self.head_dim = embed_dim // num_heads
684
+
685
+ if (self.head_dim * num_heads) != self.embed_dim:
686
+ raise ValueError(
687
+ f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim}"
688
+ f" and `num_heads`: {num_heads})."
689
+ )
690
+ self.scaling = self.head_dim ** -0.5
691
+ self.is_decoder = is_decoder
692
+
693
+ self.k_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
694
+ self.v_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
695
+ self.q_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
696
+ self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
697
+
698
+ def _shape(self, tensor, seq_len, bsz):
699
+ return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous()
700
+
701
+ def forward(
702
+ self,
703
+ hidden_states,
704
+ key_value_states=None,
705
+ past_key_value=None,
706
+ attention_mask=None,
707
+ layer_head_mask=None,
708
+ output_attentions=False,
709
+ ):
710
+
711
+ # if key_value_states are provided this layer is used as a cross-attention layer
712
+ # for the decoder
713
+ is_cross_attention = key_value_states is not None
714
+
715
+ bsz, tgt_len, _ = hidden_states.size()
716
+
717
+ # get query proj
718
+ query_states = self.q_proj(hidden_states) * self.scaling
719
+ # get key, value proj
720
+ if is_cross_attention and past_key_value is not None:
721
+ # reuse k,v, cross_attentions
722
+ key_states = past_key_value[0]
723
+ value_states = past_key_value[1]
724
+ elif is_cross_attention:
725
+ # cross_attentions
726
+ key_states = self._shape(self.k_proj(key_value_states), -1, bsz)
727
+ value_states = self._shape(self.v_proj(key_value_states), -1, bsz)
728
+ elif past_key_value is not None:
729
+ # reuse k, v, self_attention
730
+ key_states = self._shape(self.k_proj(hidden_states), -1, bsz)
731
+ value_states = self._shape(self.v_proj(hidden_states), -1, bsz)
732
+ key_states = torch.cat([past_key_value[0], key_states], dim=2)
733
+ value_states = torch.cat([past_key_value[1], value_states], dim=2)
734
+ else:
735
+ # self_attention
736
+ key_states = self._shape(self.k_proj(hidden_states), -1, bsz)
737
+ value_states = self._shape(self.v_proj(hidden_states), -1, bsz)
738
+
739
+ if self.is_decoder:
740
+ # if cross_attention save Tuple(torch.Tensor, torch.Tensor) of all cross attention key/value_states.
741
+ # Further calls to cross_attention layer can then reuse all cross-attention
742
+ # key/value_states (first "if" case)
743
+ # if uni-directional self-attention (decoder) save Tuple(torch.Tensor, torch.Tensor) of
744
+ # all previous decoder key/value_states. Further calls to uni-directional self-attention
745
+ # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case)
746
+ # if encoder bi-directional self-attention `past_key_value` is always `None`
747
+ past_key_value = (key_states, value_states)
748
+
749
+ proj_shape = (bsz * self.num_heads, -1, self.head_dim)
750
+ query_states = self._shape(query_states, tgt_len, bsz).view(*proj_shape)
751
+ key_states = key_states.view(*proj_shape)
752
+ value_states = value_states.view(*proj_shape)
753
+
754
+ src_len = key_states.size(1)
755
+ attn_weights = torch.bmm(query_states, key_states.transpose(1, 2))
756
+
757
+ if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
758
+ raise ValueError(
759
+ f"Attention weights should be of size {(bsz * self.num_heads, tgt_len, src_len)}, but is {attn_weights.size()}"
760
+ )
761
+
762
+ if attention_mask is not None:
763
+ if attention_mask.size() != (bsz, 1, tgt_len, src_len):
764
+ raise ValueError(
765
+ f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}"
766
+ )
767
+ attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask
768
+ attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
769
+
770
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1)
771
+
772
+ if layer_head_mask is not None:
773
+ if layer_head_mask.size() != (self.num_heads,):
774
+ raise ValueError(
775
+ f"Head mask for a single layer should be of size {(self.num_heads,)}, but is {layer_head_mask.size()}"
776
+ )
777
+ attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
778
+ attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
779
+
780
+ if output_attentions:
781
+ # this operation is a bit awkward, but it's required to
782
+ # make sure that attn_weights keeps its gradient.
783
+ # In order to do so, attn_weights have to be reshaped
784
+ # twice and have to be reused in the following
785
+ attn_weights_reshaped = attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
786
+ attn_weights = attn_weights_reshaped.view(bsz * self.num_heads, tgt_len, src_len)
787
+ else:
788
+ attn_weights_reshaped = None
789
+
790
+ attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training)
791
+
792
+ attn_output = torch.bmm(attn_probs, value_states)
793
+
794
+ if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
795
+ raise ValueError(
796
+ f"`attn_output` should be of size {(bsz, self.num_heads, tgt_len, self.head_dim)}, but is {attn_output.size()}"
797
+ )
798
+
799
+ attn_output = attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim)
800
+ attn_output = attn_output.transpose(1, 2)
801
+
802
+ # Use the `embed_dim` from the config (stored in the class) rather than `hidden_state` because `attn_output` can be
803
+ # partitioned across GPUs when using tensor-parallelism.
804
+ attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim)
805
+
806
+ attn_output = self.out_proj(attn_output)
807
+
808
+ return attn_output, attn_weights_reshaped, past_key_value
809
+
810
+
811
+ class LSGBartLearnedPositionalEmbedding(nn.Embedding):
812
+ """
813
+ This module learns positional embeddings up to a fixed maximum size.
814
+ """
815
+
816
+ def __init__(self, num_embeddings, embedding_dim):
817
+ # Bart is set up so that if padding_idx is specified then offset the embedding ids by 2
818
+ # and adjust num_embeddings appropriately. Other models don't have this hack
819
+ self.offset = 2
820
+ super().__init__(num_embeddings + self.offset, embedding_dim)
821
+
822
+ def forward(self, input_ids_shape, past_key_values_length=0):
823
+
824
+ """`input_ids_shape` is expected to be [bsz x seqlen]."""
825
+ bsz, seq_len = input_ids_shape[:2]
826
+ positions = torch.arange(
827
+ past_key_values_length, past_key_values_length + seq_len, dtype=torch.long, device=self.weight.device
828
+ )
829
+ return super().forward(positions + self.offset)
830
+
831
+
832
+ class LSGBartEncoderLayer(nn.Module):
833
+
834
+ def __init__(self, config):
835
+
836
+ super().__init__()
837
+ self.embed_dim = config.d_model
838
+ self.self_attn = LSGBartEncoderAttention(
839
+ config=config,
840
+ embed_dim=self.embed_dim,
841
+ num_heads=config.encoder_attention_heads,
842
+ dropout=config.attention_dropout,
843
+ )
844
+ self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim)
845
+ self.dropout = config.dropout
846
+ self.activation_fn = ACT2FN[config.activation_function]
847
+ self.activation_dropout = config.activation_dropout
848
+ self.fc1 = nn.Linear(self.embed_dim, config.encoder_ffn_dim)
849
+ self.fc2 = nn.Linear(config.encoder_ffn_dim, self.embed_dim)
850
+ self.final_layer_norm = nn.LayerNorm(self.embed_dim)
851
+
852
+ def forward(
853
+ self,
854
+ hidden_states,
855
+ attention_mask,
856
+ layer_head_mask,
857
+ output_attentions=False,
858
+ ):
859
+ """
860
+ Args:
861
+ hidden_states (:obj:`torch.FloatTensor`): input to the layer of shape `(seq_len, batch, embed_dim)`
862
+ attention_mask (:obj:`torch.FloatTensor`): attention mask of size
863
+ `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
864
+ layer_head_mask (:obj:`torch.FloatTensor`): mask for attention heads in a given layer of size
865
+ `(encoder_attention_heads,)`.
866
+ output_attentions (:obj:`bool`, `optional`):
867
+ Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under
868
+ returned tensors for more detail.
869
+ """
870
+ residual = hidden_states
871
+ hidden_states, attn_weights, _ = self.self_attn(
872
+ hidden_states=hidden_states,
873
+ attention_mask=attention_mask,
874
+ layer_head_mask=layer_head_mask,
875
+ output_attentions=output_attentions,
876
+ )
877
+ hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
878
+ hidden_states = residual + hidden_states
879
+ hidden_states = self.self_attn_layer_norm(hidden_states)
880
+
881
+ residual = hidden_states
882
+ hidden_states = self.activation_fn(self.fc1(hidden_states))
883
+ hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training)
884
+ hidden_states = self.fc2(hidden_states)
885
+ hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
886
+ hidden_states = residual + hidden_states
887
+ hidden_states = self.final_layer_norm(hidden_states)
888
+
889
+ if hidden_states.dtype == torch.float16 and (
890
+ torch.isinf(hidden_states).any() or torch.isnan(hidden_states).any()
891
+ ):
892
+ clamp_value = torch.finfo(hidden_states.dtype).max - 1000
893
+ hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
894
+
895
+ outputs = (hidden_states,)
896
+
897
+ if output_attentions:
898
+ outputs += (attn_weights,)
899
+
900
+ return outputs
901
+
902
+
903
+ class LSGBartDecoderLayer(nn.Module):
904
+
905
+ def __init__(self, config):
906
+
907
+ super().__init__()
908
+ self.embed_dim = config.d_model
909
+
910
+ self.self_attn = LSGBartDecoderAttention(
911
+ embed_dim=self.embed_dim,
912
+ num_heads=config.decoder_attention_heads,
913
+ dropout=config.attention_dropout,
914
+ is_decoder=True,
915
+ )
916
+ self.dropout = config.dropout
917
+ self.activation_fn = ACT2FN[config.activation_function]
918
+ self.activation_dropout = config.activation_dropout
919
+
920
+ self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim)
921
+ self.encoder_attn = LSGBartDecoderAttention(
922
+ self.embed_dim,
923
+ config.decoder_attention_heads,
924
+ dropout=config.attention_dropout,
925
+ is_decoder=True,
926
+ )
927
+ self.encoder_attn_layer_norm = nn.LayerNorm(self.embed_dim)
928
+ self.fc1 = nn.Linear(self.embed_dim, config.decoder_ffn_dim)
929
+ self.fc2 = nn.Linear(config.decoder_ffn_dim, self.embed_dim)
930
+ self.final_layer_norm = nn.LayerNorm(self.embed_dim)
931
+
932
+ def forward(
933
+ self,
934
+ hidden_states,
935
+ attention_mask=None,
936
+ encoder_hidden_states=None,
937
+ encoder_attention_mask=None,
938
+ layer_head_mask=None,
939
+ cross_attn_layer_head_mask=None,
940
+ past_key_value=None,
941
+ output_attentions=False,
942
+ use_cache=True,
943
+ ):
944
+ """
945
+ Args:
946
+ hidden_states (:obj:`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
947
+ attention_mask (:obj:`torch.FloatTensor`): attention mask of size
948
+ `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
949
+ encoder_hidden_states (:obj:`torch.FloatTensor`): cross attention input to the layer of shape `(batch, seq_len, embed_dim)`
950
+ encoder_attention_mask (:obj:`torch.FloatTensor`): encoder attention mask of size
951
+ `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
952
+ layer_head_mask (:obj:`torch.FloatTensor`): mask for attention heads in a given layer of size
953
+ `(encoder_attention_heads,)`.
954
+ cross_attn_layer_head_mask (:obj:`torch.FloatTensor`): mask for cross-attention heads in a given layer of
955
+ size `(decoder_attention_heads,)`.
956
+ past_key_value (:obj:`Tuple(torch.FloatTensor)`): cached past key and value projection states
957
+ output_attentions (:obj:`bool`, `optional`):
958
+ Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under
959
+ returned tensors for more detail.
960
+ """
961
+ residual = hidden_states
962
+
963
+ # Self Attention
964
+ # decoder uni-directional self-attention cached key/values tuple is at positions 1,2
965
+ self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None
966
+ # add present self-attn cache to positions 1,2 of present_key_value tuple
967
+
968
+ hidden_states, self_attn_weights, present_key_value = self.self_attn(
969
+ hidden_states=hidden_states,
970
+ past_key_value=self_attn_past_key_value,
971
+ attention_mask=attention_mask,
972
+ layer_head_mask=layer_head_mask,
973
+ output_attentions=output_attentions,
974
+ )
975
+ hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
976
+ hidden_states = residual + hidden_states
977
+ hidden_states = self.self_attn_layer_norm(hidden_states)
978
+
979
+ # Cross-Attention Block
980
+ cross_attn_present_key_value = None
981
+ cross_attn_weights = None
982
+ if encoder_hidden_states is not None:
983
+ residual = hidden_states
984
+
985
+ # cross_attn cached key/values tuple is at positions 3,4 of present_key_value tuple
986
+ cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None
987
+
988
+ hidden_states, cross_attn_weights, cross_attn_present_key_value = self.encoder_attn(
989
+ hidden_states=hidden_states,
990
+ key_value_states=encoder_hidden_states,
991
+ attention_mask=encoder_attention_mask,
992
+ layer_head_mask=cross_attn_layer_head_mask,
993
+ past_key_value=cross_attn_past_key_value,
994
+ output_attentions=output_attentions,
995
+ )
996
+ hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
997
+ hidden_states = residual + hidden_states
998
+ hidden_states = self.encoder_attn_layer_norm(hidden_states)
999
+
1000
+ # add cross-attn to positions 3,4 of present_key_value tuple
1001
+ present_key_value = present_key_value + cross_attn_present_key_value
1002
+
1003
+ # Fully Connected
1004
+ residual = hidden_states
1005
+ hidden_states = self.activation_fn(self.fc1(hidden_states))
1006
+ hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training)
1007
+ hidden_states = self.fc2(hidden_states)
1008
+ hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
1009
+ hidden_states = residual + hidden_states
1010
+ hidden_states = self.final_layer_norm(hidden_states)
1011
+
1012
+ outputs = (hidden_states,)
1013
+
1014
+ if output_attentions:
1015
+ outputs += (self_attn_weights, cross_attn_weights)
1016
+
1017
+ if use_cache:
1018
+ outputs += (present_key_value,)
1019
+
1020
+ return outputs
1021
+
1022
+
1023
+ class LSGBartClassificationHead(nn.Module):
1024
+ """Head for sentence-level classification tasks."""
1025
+
1026
+ def __init__(
1027
+ self,
1028
+ input_dim,
1029
+ inner_dim,
1030
+ num_classes,
1031
+ pooler_dropout,
1032
+ ):
1033
+
1034
+ super().__init__()
1035
+ self.dense = nn.Linear(input_dim, inner_dim)
1036
+ self.dropout = nn.Dropout(p=pooler_dropout)
1037
+ self.out_proj = nn.Linear(inner_dim, num_classes)
1038
+
1039
+ def forward(self, hidden_states):
1040
+
1041
+ hidden_states = self.dropout(hidden_states)
1042
+ hidden_states = self.dense(hidden_states)
1043
+ hidden_states = torch.tanh(hidden_states)
1044
+ hidden_states = self.dropout(hidden_states)
1045
+ hidden_states = self.out_proj(hidden_states)
1046
+ return hidden_states
1047
+
1048
+
1049
+ class LSGBartPretrainedModel(PreTrainedModel):
1050
+
1051
+ config_class = LSGBartConfig
1052
+ base_model_prefix = "model"
1053
+ supports_gradient_checkpointing = True
1054
+ _keys_to_ignore_on_load_unexpected = [r"encoder\.version", r"decoder\.version"]
1055
+
1056
+ def _init_weights(self, module):
1057
+
1058
+ std = self.config.init_std
1059
+ if isinstance(module, nn.Linear):
1060
+ module.weight.data.normal_(mean=0.0, std=std)
1061
+ if module.bias is not None:
1062
+ module.bias.data.zero_()
1063
+ elif isinstance(module, nn.Embedding):
1064
+ module.weight.data.normal_(mean=0.0, std=std)
1065
+ if module.padding_idx is not None:
1066
+ module.weight.data[module.padding_idx].zero_()
1067
+
1068
+ def _set_gradient_checkpointing(self, module, value=False):
1069
+
1070
+ if isinstance(module, (LSGBartDecoder, LSGBartEncoder)):
1071
+ module.gradient_checkpointing = value
1072
+
1073
+ @property
1074
+ def dummy_inputs(self):
1075
+ pad_token = self.config.pad_token_id
1076
+ input_ids = torch.tensor([[0, 6, 10, 4, 2], [0, 8, 12, 2, pad_token]], device=self.device)
1077
+ dummy_inputs = {
1078
+ "attention_mask": input_ids.ne(pad_token),
1079
+ "input_ids": input_ids,
1080
+ }
1081
+ return dummy_inputs
1082
+
1083
+
1084
+ class PretrainedLSGBartModel(LSGBartPretrainedModel):
1085
+
1086
+ def __init_subclass__(self):
1087
+ warnings.warn(
1088
+ "The class `PretrainedLSGBartModel` has been deprecated, please use `LSGBartPretrainedModel` instead.",
1089
+ FutureWarning,
1090
+ )
1091
+
1092
+
1093
+ class LSGBartEncoder(LSGBartPretrainedModel):
1094
+ """
1095
+ Transformer encoder consisting of *config.encoder_layers* self attention layers. Each layer is a
1096
+ :class:`BartEncoderLayer`.
1097
+ Args:
1098
+ config: BartConfig
1099
+ embed_tokens (nn.Embedding): output embedding
1100
+ """
1101
+
1102
+ def __init__(self, config, embed_tokens=None):
1103
+
1104
+ super().__init__(config)
1105
+ self.dropout = config.dropout
1106
+ self.layerdrop = config.encoder_layerdrop
1107
+
1108
+ embed_dim = config.d_model
1109
+ self.padding_idx = config.pad_token_id
1110
+ self.max_source_positions = config.max_position_embeddings
1111
+ self.embed_scale = math.sqrt(embed_dim) if config.scale_embedding else 1.0
1112
+
1113
+ if embed_tokens is not None:
1114
+ self.embed_tokens = embed_tokens
1115
+ else:
1116
+ self.embed_tokens = nn.Embedding(config.vocab_size, embed_dim, self.padding_idx)
1117
+
1118
+ self.embed_positions = LSGBartLearnedPositionalEmbedding(
1119
+ config.max_position_embeddings,
1120
+ embed_dim,
1121
+ )
1122
+ self.layers = nn.ModuleList([LSGBartEncoderLayer(config) for _ in range(config.encoder_layers)])
1123
+ self.layernorm_embedding = nn.LayerNorm(embed_dim)
1124
+
1125
+ #
1126
+ assert hasattr(config, "num_global_tokens")
1127
+ self.num_global_tokens = config.num_global_tokens
1128
+ self.pad_idx = config.pad_token_id
1129
+
1130
+ assert hasattr(config, "block_size") and hasattr(config, "adaptive")
1131
+ self.block_size = config.block_size
1132
+ self.adaptive = config.adaptive
1133
+ self.pool_with_global = config.pool_with_global
1134
+ self.pass_global_tokens_to_decoder = config.pass_global_tokens_to_decoder
1135
+
1136
+ self.global_embeddings = nn.Embedding(512, embedding_dim=config.d_model)
1137
+
1138
+ self.gradient_checkpointing = False
1139
+
1140
+ # Initialize weights and apply final processing
1141
+ self.post_init()
1142
+
1143
+ def get_input_embeddings(self):
1144
+ return self.embed_tokens
1145
+
1146
+ def set_input_embeddings(self, value):
1147
+ self.embed_tokens = value
1148
+
1149
+ def forward(self,
1150
+ input_ids=None,
1151
+ attention_mask=None,
1152
+ head_mask=None,
1153
+ inputs_embeds=None,
1154
+ output_attentions=None,
1155
+ output_hidden_states=None,
1156
+ return_dict=None
1157
+ ):
1158
+
1159
+
1160
+ inputs_ = input_ids if input_ids is not None else inputs_embeds
1161
+ n, t = inputs_.size()[:2]
1162
+
1163
+ if attention_mask is None:
1164
+ attention_mask = torch.ones(n, t, device=inputs_.device)
1165
+
1166
+ b = self.block_size * 2
1167
+ pad = t % self.block_size
1168
+
1169
+ # Check if t is multiple of block_size and pad
1170
+ if t > b and pad > 0:
1171
+ pad_length = self.block_size - pad
1172
+ if input_ids is not None:
1173
+ input_ids = torch.nn.functional.pad(input_ids, (0, pad_length), value=self.pad_idx)
1174
+ else:
1175
+ inputs_embeds = torch.nn.functional.pad(inputs_embeds.transpose(-1, -2), (0, pad_length), value=0.).transpose(-1, -2)
1176
+ attention_mask = torch.nn.functional.pad(attention_mask, (0, pad_length), value=0)
1177
+
1178
+ # else adaptive sequence length
1179
+ elif self.adaptive:
1180
+ # Get last non zero mask index
1181
+ s = int(attention_mask.cumsum(dim=-1).argmax(dim=-1).max()) + 1
1182
+ if s < t and self.block_size is not None:
1183
+ s = max(2, s // self.block_size + 1) * self.block_size if s > b else s
1184
+ if input_ids is not None:
1185
+ input_ids = input_ids[:, :s]
1186
+ else:
1187
+ inputs_embeds = inputs_embeds[:, :s]
1188
+ attention_mask = attention_mask[:, :s]
1189
+
1190
+ n, t_ = attention_mask.size()
1191
+
1192
+ encoder_outputs = self.forward_with_adaptive(
1193
+ input_ids=input_ids,
1194
+ attention_mask=attention_mask,
1195
+ head_mask=head_mask,
1196
+ inputs_embeds=inputs_embeds,
1197
+ output_attentions=output_attentions,
1198
+ output_hidden_states=output_hidden_states,
1199
+ return_dict=return_dict,
1200
+ )
1201
+
1202
+ context = encoder_outputs[0]
1203
+ diff = t - t_
1204
+
1205
+ if self.pass_global_tokens_to_decoder:
1206
+ offset = self.num_global_tokens
1207
+ else:
1208
+ if self.pool_with_global:
1209
+ context[:, self.num_global_tokens] = context[:, 0]
1210
+ context = context[..., self.num_global_tokens:, :]
1211
+ offset = 0
1212
+
1213
+ # Adapt sequence to initial shape
1214
+ if diff > 0:
1215
+ context = torch.nn.functional.pad(context.transpose(-1, -2), pad=(0, diff), value=0).transpose(-1, -2)
1216
+ elif diff < 0:
1217
+ context = context[:, :t + offset]
1218
+
1219
+ if return_dict:
1220
+ encoder_outputs.last_hidden_state = context
1221
+ else:
1222
+ encoder_outputs = (context, ) + encoder_outputs[1:]
1223
+
1224
+ return encoder_outputs
1225
+
1226
+ def forward_with_adaptive(
1227
+ self,
1228
+ input_ids=None,
1229
+ attention_mask=None,
1230
+ head_mask=None,
1231
+ inputs_embeds=None,
1232
+ output_attentions=None,
1233
+ output_hidden_states=None,
1234
+ return_dict=None,
1235
+ ):
1236
+
1237
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
1238
+ output_hidden_states = (
1239
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1240
+ )
1241
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1242
+
1243
+ # retrieve input_ids and inputs_embeds
1244
+ if input_ids is not None and inputs_embeds is not None:
1245
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
1246
+ elif input_ids is not None:
1247
+ input_shape = input_ids.size()
1248
+ input_ids = input_ids.view(-1, input_shape[-1])
1249
+ elif inputs_embeds is not None:
1250
+ input_shape = inputs_embeds.size()[:-1]
1251
+ else:
1252
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
1253
+
1254
+ if inputs_embeds is None:
1255
+ inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
1256
+
1257
+ embed_pos = self.embed_positions(input_shape)
1258
+ hidden_states = inputs_embeds + embed_pos
1259
+
1260
+ # Add global tokens
1261
+ n, t, d = hidden_states.size()
1262
+ global_idx = torch.arange(self.num_global_tokens, device=hidden_states.device).reshape(1, -1)
1263
+ hidden_states = torch.cat([self.global_embeddings(global_idx).expand(n, -1, -1), hidden_states], dim=-2)
1264
+
1265
+ hidden_states = self.layernorm_embedding(hidden_states)
1266
+ hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
1267
+
1268
+ # expand attention_mask
1269
+ if attention_mask is not None:
1270
+ # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
1271
+ attention_mask = _expand_mask(attention_mask, inputs_embeds.dtype)
1272
+
1273
+ encoder_states = () if output_hidden_states else None
1274
+ all_attentions = () if output_attentions else None
1275
+
1276
+ # check if head_mask has a correct number of layers specified if desired
1277
+ if head_mask is not None:
1278
+ if head_mask.size()[0] != (len(self.layers)):
1279
+ raise ValueError(
1280
+ f"The head_mask should be specified for {len(self.layers)} layers, but it is for {head_mask.size()[0]}."
1281
+ )
1282
+
1283
+ for idx, encoder_layer in enumerate(self.layers):
1284
+ if output_hidden_states:
1285
+ encoder_states = encoder_states + (hidden_states,)
1286
+ # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description)
1287
+ dropout_probability = random.uniform(0, 1)
1288
+ if self.training and (dropout_probability < self.layerdrop): # skip the layer
1289
+ layer_outputs = (None, None)
1290
+ else:
1291
+ if self.gradient_checkpointing and self.training:
1292
+
1293
+ def create_custom_forward(module):
1294
+ def custom_forward(*inputs):
1295
+ return module(*inputs, output_attentions)
1296
+
1297
+ return custom_forward
1298
+
1299
+ layer_outputs = torch.utils.checkpoint.checkpoint(
1300
+ create_custom_forward(encoder_layer),
1301
+ hidden_states,
1302
+ attention_mask,
1303
+ (head_mask[idx] if head_mask is not None else None),
1304
+ )
1305
+ else:
1306
+ layer_outputs = encoder_layer(
1307
+ hidden_states,
1308
+ attention_mask,
1309
+ layer_head_mask=(head_mask[idx] if head_mask is not None else None),
1310
+ output_attentions=output_attentions,
1311
+ )
1312
+
1313
+ hidden_states = layer_outputs[0]
1314
+
1315
+ if output_attentions:
1316
+ all_attentions = all_attentions + (layer_outputs[1],)
1317
+
1318
+ if output_hidden_states:
1319
+ encoder_states = encoder_states + (hidden_states,)
1320
+
1321
+ if not return_dict:
1322
+ return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None)
1323
+ return BaseModelOutput(
1324
+ last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions
1325
+ )
1326
+
1327
+
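A minimal standalone sketch of the padding rule applied in `LSGBartEncoder.forward` above: when the input is longer than two blocks and its length is not a multiple of `block_size`, it is right-padded up to the next multiple. The values below (e.g. `block_size=128`, the batch shape) are illustrative only, not taken from the model code.

```python
# Illustrative sketch of the block-size padding rule used in LSGBartEncoder.forward
# (block_size, shapes and pad_idx are assumptions for the example).
import torch

block_size = 128
pad_idx = 1
input_ids = torch.randint(5, 1000, (2, 4000))   # (batch, seq_len)

t = input_ids.size(1)
b = block_size * 2
pad = t % block_size
if t > b and pad > 0:
    pad_length = block_size - pad               # 128 - (4000 % 128) = 96
    input_ids = torch.nn.functional.pad(input_ids, (0, pad_length), value=pad_idx)

print(input_ids.shape)                          # torch.Size([2, 4096])
```

The matching `attention_mask` is padded with zeros in the same way, so the extra positions are ignored by the attention layers.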
1328
+ class LSGBartDecoder(LSGBartPretrainedModel):
1329
+ """
1330
+ Transformer decoder consisting of *config.decoder_layers* layers. Each layer is a :class:`LSGBartDecoderLayer`
1331
+ Args:
1332
+ config: BartConfig
1333
+ embed_tokens (nn.Embedding): output embedding
1334
+ """
1335
+
1336
+ def __init__(self, config, embed_tokens=None):
1337
+
1338
+ super().__init__(config)
1339
+ self.dropout = config.dropout
1340
+ self.layerdrop = config.decoder_layerdrop
1341
+ self.padding_idx = config.pad_token_id
1342
+ self.max_target_positions = config.max_position_embeddings
1343
+ self.embed_scale = math.sqrt(config.d_model) if config.scale_embedding else 1.0
1344
+ self.adaptive = config.adaptive
1345
+
1346
+ if embed_tokens is not None:
1347
+ self.embed_tokens = embed_tokens
1348
+ else:
1349
+ self.embed_tokens = nn.Embedding(config.vocab_size, config.d_model, self.padding_idx)
1350
+
1351
+ self.embed_positions = LSGBartLearnedPositionalEmbedding(
1352
+ config.max_position_embeddings,
1353
+ config.d_model,
1354
+ )
1355
+ self.layers = nn.ModuleList([LSGBartDecoderLayer(config) for _ in range(config.decoder_layers)])
1356
+ self.layernorm_embedding = nn.LayerNorm(config.d_model)
1357
+
1358
+ self.gradient_checkpointing = False
1359
+
1360
+ # Initialize weights and apply final processing
1361
+ self.post_init()
1362
+
1363
+ def get_input_embeddings(self):
1364
+ return self.embed_tokens
1365
+
1366
+ def set_input_embeddings(self, value):
1367
+ self.embed_tokens = value
1368
+
1369
+ def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length):
1370
+ # create causal mask
1371
+ # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
1372
+ combined_attention_mask = None
1373
+ if input_shape[-1] > 1:
1374
+ combined_attention_mask = _make_causal_mask(
1375
+ input_shape, inputs_embeds.dtype, past_key_values_length=past_key_values_length
1376
+ ).to(self.device)
1377
+
1378
+ if attention_mask is not None:
1379
+ # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
1380
+ expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1])
1381
+ combined_attention_mask = (
1382
+ expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask
1383
+ )
1384
+
1385
+ return combined_attention_mask
1386
+
1387
+ def resize_inputs(self, inputs_embeds, attention_mask):
1388
+ pad = 0
1389
+
1390
+ max_len = int(attention_mask.sum(dim=-1).max())
1391
+ pad = attention_mask.size()[-1] - max_len
1392
+ inputs_embeds = inputs_embeds[:, :max_len]
1393
+ attention_mask = attention_mask[..., :max_len]
1394
+ return pad, inputs_embeds, attention_mask
1395
+
1396
+ def forward(
1397
+ self,
1398
+ input_ids=None,
1399
+ attention_mask=None,
1400
+ encoder_hidden_states=None,
1401
+ encoder_attention_mask=None,
1402
+ head_mask=None,
1403
+ cross_attn_head_mask=None,
1404
+ past_key_values=None,
1405
+ inputs_embeds=None,
1406
+ use_cache=None,
1407
+ output_attentions=None,
1408
+ output_hidden_states=None,
1409
+ return_dict=None,
1410
+ ):
1411
+
1412
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
1413
+ output_hidden_states = (
1414
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1415
+ )
1416
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
1417
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1418
+
1419
+ # retrieve input_ids and inputs_embeds
1420
+ if input_ids is not None and inputs_embeds is not None:
1421
+ raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time")
1422
+ elif input_ids is not None:
1423
+ input_shape = input_ids.size()
1424
+ input_ids = input_ids.view(-1, input_shape[-1])
1425
+ elif inputs_embeds is not None:
1426
+ input_shape = inputs_embeds.size()[:-1]
1427
+ else:
1428
+ raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds")
1429
+
1430
+ # past_key_values_length
1431
+ past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0
1432
+
1433
+ if inputs_embeds is None:
1434
+ inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
1435
+
1436
+ # Resize to reduce computation
1437
+ pad = 0
1438
+ if self.adaptive:
1439
+ if attention_mask is not None:
1440
+ pad, inputs_embeds, attention_mask = self.resize_inputs(inputs_embeds, attention_mask)
1441
+ input_shape = inputs_embeds.size()[:-1]
1442
+ if encoder_attention_mask is not None:
1443
+ _, encoder_hidden_states, encoder_attention_mask = self.resize_inputs(encoder_hidden_states, encoder_attention_mask)
1444
+
1445
+ attention_mask = self._prepare_decoder_attention_mask(
1446
+ attention_mask, input_shape, inputs_embeds, past_key_values_length
1447
+ )
1448
+
1449
+ # expand encoder attention mask
1450
+ if encoder_hidden_states is not None and encoder_attention_mask is not None:
1451
+ # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
1452
+ encoder_attention_mask = _expand_mask(encoder_attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1])
1453
+
1454
+ # embed positions
1455
+ positions = self.embed_positions(input_shape, past_key_values_length)
1456
+
1457
+ hidden_states = inputs_embeds + positions
1458
+ hidden_states = self.layernorm_embedding(hidden_states)
1459
+
1460
+ hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
1461
+
1462
+ # decoder layers
1463
+ all_hidden_states = () if output_hidden_states else None
1464
+ all_self_attns = () if output_attentions else None
1465
+ all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None
1466
+ next_decoder_cache = () if use_cache else None
1467
+
1468
+ # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired
1469
+ for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]):
1470
+ if attn_mask is not None:
1471
+ if attn_mask.size()[0] != (len(self.layers)):
1472
+ raise ValueError(
1473
+ "The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for {head_mask.size()[0]}."
1474
+ )
1475
+
1476
+ for idx, decoder_layer in enumerate(self.layers):
1477
+ # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description)
1478
+ if output_hidden_states:
1479
+ all_hidden_states += (hidden_states,)
1480
+ dropout_probability = random.uniform(0, 1)
1481
+ if self.training and (dropout_probability < self.layerdrop):
1482
+ continue
1483
+
1484
+ past_key_value = past_key_values[idx] if past_key_values is not None else None
1485
+
1486
+ if self.gradient_checkpointing and self.training:
1487
+
1488
+ if use_cache:
1489
+ logger.warning(
1490
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
1491
+ )
1492
+ use_cache = False
1493
+
1494
+ def create_custom_forward(module):
1495
+ def custom_forward(*inputs):
1496
+ # None for past_key_value
1497
+ return module(*inputs, output_attentions, use_cache)
1498
+
1499
+ return custom_forward
1500
+
1501
+ layer_outputs = torch.utils.checkpoint.checkpoint(
1502
+ create_custom_forward(decoder_layer),
1503
+ hidden_states,
1504
+ attention_mask,
1505
+ encoder_hidden_states,
1506
+ encoder_attention_mask,
1507
+ head_mask[idx] if head_mask is not None else None,
1508
+ cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None,
1509
+ None,
1510
+ )
1511
+ else:
1512
+
1513
+ layer_outputs = decoder_layer(
1514
+ hidden_states,
1515
+ attention_mask=attention_mask,
1516
+ encoder_hidden_states=encoder_hidden_states,
1517
+ encoder_attention_mask=encoder_attention_mask,
1518
+ layer_head_mask=(head_mask[idx] if head_mask is not None else None),
1519
+ cross_attn_layer_head_mask=(
1520
+ cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None
1521
+ ),
1522
+ past_key_value=past_key_value,
1523
+ output_attentions=output_attentions,
1524
+ use_cache=use_cache,
1525
+ )
1526
+ hidden_states = layer_outputs[0]
1527
+
1528
+ if use_cache:
1529
+ next_decoder_cache += (layer_outputs[3 if output_attentions else 1],)
1530
+
1531
+ if output_attentions:
1532
+ all_self_attns += (layer_outputs[1],)
1533
+
1534
+ if encoder_hidden_states is not None:
1535
+ all_cross_attentions += (layer_outputs[2],)
1536
+
1537
+ # Resize to original shape
1538
+ hidden_states = torch.nn.functional.pad(hidden_states.transpose(-1, -2), pad=(0, pad), value=0).transpose(-1, -2)
1539
+
1540
+ # add hidden states from the last decoder layer
1541
+ if output_hidden_states:
1542
+ all_hidden_states += (hidden_states,)
1543
+
1544
+ next_cache = next_decoder_cache if use_cache else None
1545
+ if not return_dict:
1546
+ return tuple(
1547
+ v
1548
+ for v in [hidden_states, next_cache, all_hidden_states, all_self_attns, all_cross_attentions]
1549
+ if v is not None
1550
+ )
1551
+ return BaseModelOutputWithPastAndCrossAttentions(
1552
+ last_hidden_state=hidden_states,
1553
+ past_key_values=next_cache,
1554
+ hidden_states=all_hidden_states,
1555
+ attentions=all_self_attns,
1556
+ cross_attentions=all_cross_attentions,
1557
+ )
1558
+
1559
+
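A sketch of the adaptive resizing performed by `LSGBartDecoder.resize_inputs` and the re-padding at the end of its `forward`: trailing padding is trimmed before the decoder layers run, then the hidden states are padded back to the original target length. The shapes below are made up for illustration.

```python
# Illustrative sketch of LSGBartDecoder's adaptive trimming and final re-padding
# (batch size, sequence length and d_model are assumptions for the example).
import torch

attention_mask = torch.tensor([[1, 1, 1, 0, 0, 0],
                               [1, 1, 1, 1, 0, 0]])          # (batch, tgt_len)
inputs_embeds = torch.randn(2, 6, 768)                       # (batch, tgt_len, d_model)

max_len = int(attention_mask.sum(dim=-1).max())              # 4
pad = attention_mask.size(-1) - max_len                      # 2
inputs_embeds = inputs_embeds[:, :max_len]                   # (2, 4, 768)
attention_mask = attention_mask[..., :max_len]

# ... decoder layers would run on the shorter sequence ...
hidden_states = inputs_embeds

# Restore the original length so downstream code sees the expected shape.
hidden_states = torch.nn.functional.pad(
    hidden_states.transpose(-1, -2), pad=(0, pad), value=0
).transpose(-1, -2)
print(hidden_states.shape)                                   # torch.Size([2, 6, 768])
```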
1560
+ class LSGBartModel(LSGBartPretrainedModel):
1561
+
1562
+ def __init__(self, config):
1563
+
1564
+ super().__init__(config)
1565
+
1566
+ padding_idx, vocab_size = config.pad_token_id, config.vocab_size
1567
+ self.shared = nn.Embedding(vocab_size, config.d_model, padding_idx)
1568
+ self.pass_global_tokens_to_decoder = config.pass_global_tokens_to_decoder
1569
+ self.num_global_tokens = config.num_global_tokens
1570
+ self.encoder = LSGBartEncoder(config, self.shared)
1571
+ self.decoder = LSGBartDecoder(config, self.shared)
1572
+
1573
+ # Initialize weights and apply final processing
1574
+ self.post_init()
1575
+
1576
+ def get_input_embeddings(self):
1577
+ return self.shared
1578
+
1579
+ def set_input_embeddings(self, value):
1580
+ self.shared = value
1581
+ self.encoder.embed_tokens = self.shared
1582
+ self.decoder.embed_tokens = self.shared
1583
+
1584
+ def get_encoder(self):
1585
+ return self.encoder
1586
+
1587
+ def get_decoder(self):
1588
+ return self.decoder
1589
+
1590
+ def forward(
1591
+ self,
1592
+ input_ids=None,
1593
+ attention_mask=None,
1594
+ decoder_input_ids=None,
1595
+ decoder_attention_mask=None,
1596
+ head_mask=None,
1597
+ decoder_head_mask=None,
1598
+ cross_attn_head_mask=None,
1599
+ encoder_outputs=None,
1600
+ past_key_values=None,
1601
+ inputs_embeds=None,
1602
+ decoder_inputs_embeds=None,
1603
+ use_cache=None,
1604
+ output_attentions=None,
1605
+ output_hidden_states=None,
1606
+ return_dict=None,
1607
+ ):
1608
+
1609
+ # Unlike other models, Bart automatically creates decoder_input_ids from
1610
+ # input_ids if no decoder_input_ids are provided
1611
+ if decoder_input_ids is None and decoder_inputs_embeds is None:
1612
+ decoder_input_ids = shift_tokens_right(
1613
+ input_ids, self.config.pad_token_id, self.config.decoder_start_token_id
1614
+ )
1615
+
1616
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
1617
+ output_hidden_states = (
1618
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1619
+ )
1620
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
1621
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1622
+
1623
+ if encoder_outputs is None:
1624
+ encoder_outputs = self.encoder(
1625
+ input_ids=input_ids,
1626
+ attention_mask=attention_mask,
1627
+ head_mask=head_mask,
1628
+ inputs_embeds=inputs_embeds,
1629
+ output_attentions=output_attentions,
1630
+ output_hidden_states=output_hidden_states,
1631
+ return_dict=return_dict,
1632
+ )
1633
+ # If the user passed a tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True
1634
+ elif return_dict and not isinstance(encoder_outputs, BaseModelOutput):
1635
+ encoder_outputs = BaseModelOutput(
1636
+ last_hidden_state=encoder_outputs[0],
1637
+ hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
1638
+ attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
1639
+ )
1640
+
1641
+ # Pad mask for global tokens
1642
+ if self.pass_global_tokens_to_decoder:
1643
+ attention_mask = torch.nn.functional.pad(attention_mask, pad=(self.num_global_tokens, 0), value=1)
1644
+
1645
+ # decoder outputs consists of (dec_features, past_key_value, dec_hidden, dec_attn)
1646
+ decoder_outputs = self.decoder(
1647
+ input_ids=decoder_input_ids,
1648
+ attention_mask=decoder_attention_mask,
1649
+ encoder_hidden_states=encoder_outputs[0],
1650
+ encoder_attention_mask=attention_mask,
1651
+ head_mask=decoder_head_mask,
1652
+ cross_attn_head_mask=cross_attn_head_mask,
1653
+ past_key_values=past_key_values,
1654
+ inputs_embeds=decoder_inputs_embeds,
1655
+ use_cache=use_cache,
1656
+ output_attentions=output_attentions,
1657
+ output_hidden_states=output_hidden_states,
1658
+ return_dict=return_dict,
1659
+ )
1660
+
1661
+ if not return_dict:
1662
+ return decoder_outputs + encoder_outputs
1663
+
1664
+ return Seq2SeqModelOutput(
1665
+ last_hidden_state=decoder_outputs.last_hidden_state,
1666
+ past_key_values=decoder_outputs.past_key_values,
1667
+ decoder_hidden_states=decoder_outputs.hidden_states,
1668
+ decoder_attentions=decoder_outputs.attentions,
1669
+ cross_attentions=decoder_outputs.cross_attentions,
1670
+ encoder_last_hidden_state=encoder_outputs.last_hidden_state,
1671
+ encoder_hidden_states=encoder_outputs.hidden_states,
1672
+ encoder_attentions=encoder_outputs.attentions,
1673
+ )
1674
+
1675
+
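A sketch of the "Pad mask for global tokens" step in `LSGBartModel.forward`: when `pass_global_tokens_to_decoder` is set, the encoder output keeps `num_global_tokens` extra positions at the front, so the cross-attention mask is extended with ones on the left to cover them. `num_global_tokens=1` is only an assumption for the example.

```python
# Illustrative sketch of extending the encoder attention mask to cover global tokens.
import torch

num_global_tokens = 1                                          # assumption for the example
attention_mask = torch.tensor([[1, 1, 1, 0]])                  # (batch, src_len)
attention_mask = torch.nn.functional.pad(
    attention_mask, pad=(num_global_tokens, 0), value=1
)
print(attention_mask)                                          # tensor([[1, 1, 1, 1, 0]])
```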
1676
+ class LSGBartForConditionalGeneration(BartForConditionalGeneration, LSGBartPretrainedModel):
1677
+
1678
+ base_model_prefix = "model"
1679
+ _keys_to_ignore_on_load_missing = [r"final_logits_bias", r"lm_head\.weight"]
1680
+
1681
+ def __init__(self, config):
1682
+
1683
+ LSGBartPretrainedModel.__init__(self, config)
1684
+ self.model = LSGBartModel(config)
1685
+ self.register_buffer("final_logits_bias", torch.zeros((1, self.model.shared.num_embeddings)))
1686
+ self.lm_head = nn.Linear(config.d_model, self.model.shared.num_embeddings, bias=False)
1687
+
1688
+ # Initialize weights and apply final processing
1689
+ self.post_init()
1690
+
1691
+
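For context, the generation head inherited from `BartForConditionalGeneration` roughly computes logits as a bias-free projection of the decoder output plus the registered `final_logits_bias` buffer; a minimal sketch with illustrative shapes:

```python
# Minimal sketch of the inherited lm_head + final_logits_bias computation
# (d_model, vocab_size and the batch shape are assumptions for the example).
import torch
import torch.nn as nn

d_model, vocab_size = 768, 50265
lm_head = nn.Linear(d_model, vocab_size, bias=False)
final_logits_bias = torch.zeros(1, vocab_size)

decoder_hidden = torch.randn(2, 10, d_model)          # (batch, tgt_len, d_model)
lm_logits = lm_head(decoder_hidden) + final_logits_bias
print(lm_logits.shape)                                # torch.Size([2, 10, 50265])
```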
1692
+ class LSGBartForSequenceClassification(BartForSequenceClassification, LSGBartPretrainedModel):
1693
+
1694
+ def __init__(self, config: LSGBartConfig, **kwargs):
1695
+
1696
+ LSGBartPretrainedModel.__init__(self, config, **kwargs)
1697
+ self.model = LSGBartModel(config)
1698
+ self.classification_head = LSGBartClassificationHead(
1699
+ config.d_model,
1700
+ config.d_model,
1701
+ config.num_labels,
1702
+ config.classifier_dropout,
1703
+ )
1704
+ self.model._init_weights(self.classification_head.dense)
1705
+ self.model._init_weights(self.classification_head.out_proj)
1706
+
1707
+
1708
+ class LSGBartForQuestionAnswering(BartForQuestionAnswering, LSGBartPretrainedModel):
1709
+
1710
+ def __init__(self, config: LSGBartConfig):
1711
+
1712
+ LSGBartPretrainedModel.__init__(self, config)
1713
+
1714
+ config.num_labels = 2
1715
+ self.num_labels = config.num_labels
1716
+
1717
+ self.model = LSGBartModel(config)
1718
+ self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels)
1719
+
1720
+ self.model._init_weights(self.qa_outputs)
1721
+
1722
+
1723
+ class LSGBartDecoderWrapper(LSGBartPretrainedModel):
1724
+ """
1725
+ This wrapper class is a helper class to correctly load pretrained checkpoints when the causal language model is
1726
+ used in combination with the :class:`~transformers.EncoderDecoderModel` framework.
1727
+ """
1728
+
1729
+ def __init__(self, config: LSGBartConfig):
1730
+ super().__init__(config)
1731
+ self.decoder = LSGBartDecoder(config)
1732
+
1733
+ def forward(self, *args, **kwargs):
1734
+ return self.decoder(*args, **kwargs)
1735
+
1736
+
1737
+ class LSGBartForCausalLM(BartForCausalLM, LSGBartPretrainedModel):
1738
+
1739
+ def __init__(self, config: LSGBartConfig):
1740
+
1741
+ config = copy.deepcopy(config)
1742
+ config.is_decoder = True
1743
+ config.is_encoder_decoder = False
1744
+ LSGBartPretrainedModel.__init__(self, config)
1745
+ self.model = LSGBartDecoderWrapper(config)
1746
+
1747
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
1748
+
1749
+ # Initialize weights and apply final processing
1750
+ self.post_init()
1751
+
1752
+
1753
+ def str_to_class(classname):
1754
+ return getattr(sys.modules[__name__], classname)
1755
+
1756
+ # Register model in Auto API
1757
+ try:
1758
+ LSGBartConfig.register_for_auto_class()
1759
+ for key, value in AUTO_MAP.items():
1760
+ str_to_class(value.split(".")[-1]).register_for_auto_class(key)
1761
+ except Exception:
1762
+ warn("AutoRegister isn't available, you'll have to manually copy modeling.py after .save_pretrained(...).")
1763
+ warn("Update to transformers >= 4.17.0 to fix.")
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e5a29df154b326bc71f3c06dfb1edd557edb38f4d221811e96b20b9cc33202f8
3
+ size 578416695
special_tokens_map.json ADDED
@@ -0,0 +1 @@
 
 
1
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
 
 
1
+ {"errors": "replace", "bos_token": "<s>", "eos_token": "</s>", "sep_token": "</s>", "cls_token": "<s>", "unk_token": "<unk>", "pad_token": "<pad>", "mask_token": "<mask>", "add_prefix_space": false, "trim_offsets": true, "model_max_length": 4096, "special_tokens_map_file": null, "name_or_path": "/data/ccondevaux/lsg/text-summarization/tmp_final/mediasum/lsg_local_prepended2", "tokenizer_class": "BartTokenizer"}
vocab.json ADDED
The diff for this file is too large to render. See raw diff