ccdv committed
Commit 59f8fad
1 Parent(s): 0534bd6
README.md ADDED
@@ -0,0 +1,109 @@
---
language:
- en
tags:
- summarization
datasets:
- ccdv/WCEP-10
metrics:
- rouge
model-index:
- name: ccdv/lsg-bart-base-4096-wcep
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

**This model relies on a custom modeling file, so you need to add trust_remote_code=True**\
**See [\#13467](https://github.com/huggingface/transformers/pull/13467)**

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096-wcep", trust_remote_code=True)
model = AutoModelForSeq2SeqLM.from_pretrained("ccdv/lsg-bart-base-4096-wcep", trust_remote_code=True)

text = "Replace with what you want."
pipe = pipeline("text2text-generation", model=model, tokenizer=tokenizer, device=0)
generated_text = pipe(text, truncation=True, max_length=64, no_repeat_ngram_size=7)
```
# ccdv/lsg-bart-base-4096-wcep

This model is a fine-tuned version of [ccdv/lsg-bart-base-4096](https://huggingface.co/ccdv/lsg-bart-base-4096) on the ccdv/WCEP-10 dataset (roberta config). \
It achieves the following results on the test set:

| Length | Sparse Type  | Block Size | Sparsity | Connections | R1    | R2    | RL    | RLsum |
|:------ |:------------ |:---------- |:-------- |:----------- |:----- |:----- |:----- |:----- |
| 4096   | Local        | 256        | 0        | 768         | 46.02 | 24.23 | 37.38 | 38.72 |
| 4096   | Local        | 128        | 0        | 384         | 45.43 | 23.86 | 36.94 | 38.30 |
| 4096   | Pooling      | 128        | 4        | 644         | 45.36 | 23.61 | 36.75 | 38.06 |
| 4096   | Stride       | 128        | 4        | 644         | 45.87 | 24.31 | 37.41 | 38.70 |
| 4096   | Block Stride | 128        | 4        | 644         | 45.78 | 24.16 | 37.20 | 38.48 |
| 4096   | Norm         | 128        | 4        | 644         | 45.34 | 23.39 | 36.47 | 37.78 |
| 4096   | LSH          | 128        | 4        | 644         | 45.15 | 23.53 | 36.74 | 38.02 |

With a smaller block size (lower resources):

| Length | Sparse Type  | Block Size | Sparsity | Connections | R1    | R2    | RL    | RLsum |
|:------ |:------------ |:---------- |:-------- |:----------- |:----- |:----- |:----- |:----- |
| 4096   | Local        | 64         | 0        | 192         | 44.48 | 22.98 | 36.20 | 37.52 |
| 4096   | Local        | 32         | 0        | 96          | 43.60 | 22.17 | 35.61 | 36.66 |
| 4096   | Pooling      | 32         | 4        | 160         | 43.91 | 22.41 | 35.80 | 36.92 |
| 4096   | Stride       | 32         | 4        | 160         | 44.62 | 23.11 | 36.32 | 37.53 |
| 4096   | Block Stride | 32         | 4        | 160         | 44.47 | 23.02 | 36.28 | 37.46 |
| 4096   | Norm         | 32         | 4        | 160         | 44.45 | 23.03 | 36.10 | 37.33 |
| 4096   | LSH          | 32         | 4        | 160         | 43.87 | 22.50 | 35.75 | 36.93 |

## Model description
The model relies on Local-Sparse-Global (LSG) attention to handle long sequences:
![attn](attn.png)

The model has about 145 million parameters (6 encoder layers, 6 decoder layers). \
It is warm-started from BART-base, converted to handle long sequences (encoder only) and fine-tuned.

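The attention layout is controlled by the custom `LSGBartConfig` shipped with this repo (see `config.json` and `modeling_lsg_bart.py` below): `block_size`, `sparse_block_size`, `sparsity_type` and `sparsity_factor` correspond to the columns of the result tables above. A minimal sketch of overriding them at load time, assuming the usual `AutoConfig`-then-`from_pretrained` pattern; note that `block_size` must remain divisible by `sparsity_factor`:

```python
from transformers import AutoConfig, AutoModelForSeq2SeqLM

# Load the custom LSG config (requires trust_remote_code=True) and adjust the
# attention layout before instantiating the model. Attribute names are taken
# from config.json; the values below are only illustrative.
config = AutoConfig.from_pretrained("ccdv/lsg-bart-base-4096-wcep", trust_remote_code=True)
config.block_size = 128          # local attention block size
config.sparse_block_size = 128   # 0 disables sparse attention
config.sparsity_type = "stride"  # e.g. "norm", "pooling", "stride", "lsh"
config.sparsity_factor = 4

model = AutoModelForSeq2SeqLM.from_pretrained(
    "ccdv/lsg-bart-base-4096-wcep", config=config, trust_remote_code=True
)
```
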
## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0

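For reference, a hedged sketch of how these values might be expressed with `Seq2SeqTrainingArguments`; the actual training script is not part of this commit and `output_dir` is a placeholder:

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above; the real
# training script is not included in this repository.
training_args = Seq2SeqTrainingArguments(
    output_dir="lsg-bart-base-4096-wcep",  # placeholder
    learning_rate=8e-5,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,  # effective train batch size: 8 * 4 = 32
    num_train_epochs=10.0,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)
```
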
### Generate hyperparameters

The following hyperparameters were used during generation:
- dataset_name: ccdv/WCEP-10
- dataset_config_name: roberta
- eval_batch_size: 8
- eval_samples: 1022
- early_stopping: True
- ignore_pad_token_for_loss: True
- length_penalty: 2.0
- max_length: 64
- min_length: 0
- num_beams: 5
- no_repeat_ngram_size: None
- seed: 123

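A minimal sketch of applying the main beam-search settings above directly with `generate`, assuming the tokenizer, model and `text` from the usage example at the top (dataset loading, batching and ROUGE scoring are omitted):

```python
# Apply the listed beam-search settings explicitly; inputs are truncated to the
# 4096-token encoder length.
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096).to(model.device)
summary_ids = model.generate(
    **inputs,
    num_beams=5,
    max_length=64,
    min_length=0,
    length_penalty=2.0,
    early_stopping=True,
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```
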
104
+ ### Framework versions
105
+
106
+ - Transformers 4.18.0
107
+ - Pytorch 1.10.1+cu102
108
+ - Datasets 2.1.0
109
+ - Tokenizers 0.11.6
config.json ADDED
@@ -0,0 +1,95 @@
{
  "_name_or_path": "models/ccdv/lsg-bart-base-4096-wcep",
  "activation_dropout": 0.1,
  "activation_function": "gelu",
  "adaptive": true,
  "add_bias_logits": false,
  "add_final_layer_norm": false,
  "architectures": [
    "LSGBartForConditionalGeneration"
  ],
  "attention_dropout": 0.1,
  "auto_map": {
    "AutoConfig": "modeling_lsg_bart.LSGBartConfig",
    "AutoModel": "modeling_lsg_bart.LSGBartModel",
    "AutoModelForCausalLM": "modeling_lsg_bart.LSGBartForCausalLM",
    "AutoModelForQuestionAnswering": "modeling_lsg_bart.LSGBartForQuestionAnswering",
    "AutoModelForSeq2SeqLM": "modeling_lsg_bart.LSGBartForConditionalGeneration",
    "AutoModelForSequenceClassification": "modeling_lsg_bart.LSGBartForSequenceClassification"
  },
  "base_model_prefix": "lsg",
  "block_size": 256,
  "bos_token_id": 0,
  "classif_dropout": 0.1,
  "classifier_dropout": 0.0,
  "d_model": 768,
  "decoder_attention_heads": 12,
  "decoder_ffn_dim": 3072,
  "decoder_layerdrop": 0.0,
  "decoder_layers": 6,
  "decoder_start_token_id": 2,
  "dropout": 0.1,
  "early_stopping": true,
  "encoder_attention_heads": 12,
  "encoder_ffn_dim": 3072,
  "encoder_layerdrop": 0.0,
  "encoder_layers": 6,
  "eos_token_id": 2,
  "forced_bos_token_id": 0,
  "forced_eos_token_id": 2,
  "gradient_checkpointing": false,
  "id2label": {
    "0": "LABEL_0",
    "1": "LABEL_1",
    "2": "LABEL_2"
  },
  "init_std": 0.02,
  "is_encoder_decoder": true,
  "label2id": {
    "LABEL_0": 0,
    "LABEL_1": 1,
    "LABEL_2": 2
  },
  "length_penalty": 2.0,
  "lsh_num_pre_rounds": 1,
  "max_length": 64,
  "max_position_embeddings": 4096,
  "model_type": "bart",
  "no_repeat_ngram_size": null,
  "normalize_before": false,
  "normalize_embedding": true,
  "num_beams": 5,
  "num_global_tokens": 1,
  "num_hidden_layers": 6,
  "pad_token_id": 1,
  "pass_global_tokens_to_decoder": true,
  "pool_with_global": true,
  "scale_embedding": false,
  "sparse_block_size": 0,
  "sparsity_factor": 2,
  "sparsity_type": "norm",
  "task_specific_params": {
    "summarization": {
      "length_penalty": 1.0,
      "max_length": 128,
      "min_length": 12,
      "num_beams": 4
    },
    "summarization_cnn": {
      "length_penalty": 2.0,
      "max_length": 142,
      "min_length": 56,
      "num_beams": 4
    },
    "summarization_xsum": {
      "length_penalty": 1.0,
      "max_length": 62,
      "min_length": 11,
      "num_beams": 6
    }
  },
  "torch_dtype": "float32",
  "transformers_version": "4.18.0",
  "use_cache": true,
  "vocab_size": 50265
}

merges.txt ADDED
The diff for this file is too large to render. See raw diff
modeling_lsg_bart.py ADDED
@@ -0,0 +1,1759 @@
1
+ from logging import warn
2
+ import torch
3
+ from transformers.models.bart.modeling_bart import *
4
+ from transformers.models.bart.modeling_bart import _expand_mask
5
+ import torch.nn as nn
6
+ from torch.nn import BCEWithLogitsLoss
7
+ import sys
8
+
9
+ AUTO_MAP = {
10
+ "AutoModel": "modeling_lsg_bart.LSGBartModel",
11
+ "AutoModelForCausalLM": "modeling_lsg_bart.LSGBartForCausalLM",
12
+ "AutoModelForQuestionAnswering": "modeling_lsg_bart.LSGBartForQuestionAnswering",
13
+ "AutoModelForSequenceClassification": "modeling_lsg_bart.LSGBartForSequenceClassification",
14
+ "AutoModelForSeq2SeqLM": "modeling_lsg_bart.LSGBartForConditionalGeneration"
15
+ }
16
+
17
+ class LSGBartConfig(BartConfig):
18
+ """
19
+ This class overrides :class:`~transformers.RobertaConfig`. Please check the superclass for the appropriate
20
+ documentation alongside usage examples.
21
+ """
22
+
23
+ base_model_prefix = "lsg"
24
+ model_type = "bart"
25
+ keys_to_ignore_at_inference = ["past_key_values"]
26
+ attribute_map = {"num_attention_heads": "encoder_attention_heads", "hidden_size": "d_model"}
27
+
28
+ def __init__(
29
+ self,
30
+ adaptive=True,
31
+ base_model_prefix="lsg",
32
+ block_size=128,
33
+ lsh_num_pre_rounds=1,
34
+ num_global_tokens=1,
35
+ pass_global_tokens_to_decoder=True,
36
+ pool_with_global=True,
37
+ sparse_block_size=128,
38
+ sparsity_factor=2,
39
+ sparsity_type="norm",
40
+ **kwargs
41
+ ):
42
+ """Constructs LSGConfig."""
43
+ super().__init__(**kwargs)
44
+
45
+ self.adaptive = adaptive
46
+ self.auto_map = AUTO_MAP
47
+ self.base_model_prefix = base_model_prefix
48
+ self.block_size = block_size
49
+ self.lsh_num_pre_rounds = lsh_num_pre_rounds
50
+ self.num_global_tokens = num_global_tokens
51
+ self.pass_global_tokens_to_decoder = pass_global_tokens_to_decoder
52
+ self.pool_with_global = pool_with_global
53
+ self.sparse_block_size = sparse_block_size
54
+ self.sparsity_factor = sparsity_factor
55
+ self.sparsity_type = sparsity_type
56
+
57
+ if sparsity_type not in [None, "none", "norm", "lsh", "pooling", "stride"]:
58
+ logger.warning(
59
+ "[WARNING CONFIG]: sparsity_mode not in [None, 'none', 'norm', 'lsh', 'pooling', 'stride'], setting sparsity_type=None, computation will skip sparse attention")
60
+ self.sparsity_type = None
61
+
62
+ if self.sparsity_type == "stride":
63
+ if self.sparsity_factor > self.encoder_attention_heads:
64
+ logger.warning(
65
+ "[WARNING CONFIG]: sparsity_factor > encoder_attention_heads is not recommended for stride sparsity"
66
+ )
67
+
68
+ if self.num_global_tokens < 1:
69
+ logger.warning(
70
+ "[WARNING CONFIG]: num_global_tokens < 1 is not compatible, setting num_global_tokens=1"
71
+ )
72
+ self.num_global_tokens = 1
73
+ elif self.num_global_tokens > 512:
74
+ logger.warning(
75
+ "[WARNING CONFIG]: num_global_tokens > 512 is not compatible, setting num_global_tokens=512"
76
+ )
77
+ self.num_global_tokens = 512
78
+
79
+ if self.sparsity_factor > 0:
80
+ assert self.block_size % self.sparsity_factor == 0, "[ERROR CONFIG]: block_size must be divisible by sparsity_factor"
81
+ assert self.block_size//self.sparsity_factor >= 1, "[ERROR CONFIG]: make sure block_size >= sparsity_factor"
82
+
83
+
84
+ def shift_tokens_right(input_ids, pad_token_id, decoder_start_token_id):
85
+ """
86
+ Shift input ids one token to the right.
87
+ """
88
+ shifted_input_ids = input_ids.new_zeros(input_ids.shape)
89
+ shifted_input_ids[:, 1:] = input_ids[:, :-1].clone()
90
+ shifted_input_ids[:, 0] = decoder_start_token_id
91
+
92
+ if pad_token_id is None:
93
+ raise ValueError("self.model.config.pad_token_id has to be defined.")
94
+ # replace possible -100 values in labels by `pad_token_id`
95
+ shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id)
96
+
97
+ return shifted_input_ids
98
+
99
+
100
+ def _make_causal_mask(input_ids_shape, dtype, past_key_values_length=0):
101
+ """
102
+ Make causal mask used for bi-directional self-attention.
103
+ """
104
+ bsz, tgt_len = input_ids_shape
105
+ mask = torch.full((tgt_len, tgt_len), float("-inf"))
106
+ mask_cond = torch.arange(mask.size(-1))
107
+ mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
108
+ mask = mask.to(dtype)
109
+
110
+ if past_key_values_length > 0:
111
+ mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype), mask], dim=-1)
112
+ return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)
113
+
114
+
115
+ def _expand_mask(mask, dtype, tgt_len=None):
116
+ """
117
+ Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.
118
+ """
119
+ bsz, src_len = mask.size()
120
+ tgt_len = tgt_len if tgt_len is not None else src_len
121
+
122
+ expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)
123
+
124
+ inverted_mask = 1.0 - expanded_mask
125
+
126
+ return inverted_mask.masked_fill(inverted_mask.bool(), torch.finfo(dtype).min)
127
+
128
+
129
+ class BaseSelfAttention(nn.Module):
130
+
131
+ def __init__(
132
+ self,
133
+ embed_dim,
134
+ num_heads,
135
+ dropout=0.0,
136
+ is_decoder=False,
137
+ bias=True,
138
+ ):
139
+
140
+ super().__init__()
141
+ self.embed_dim = embed_dim
142
+ self.num_heads = num_heads
143
+ self.dropout = dropout
144
+ self.head_dim = embed_dim // num_heads
145
+
146
+ if (self.head_dim * num_heads) != self.embed_dim:
147
+ raise ValueError(
148
+ f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim}"
149
+ f" and `num_heads`: {num_heads})."
150
+ )
151
+ self.scaling = self.head_dim ** -0.5
152
+ self.is_decoder = is_decoder
153
+
154
+ self.k_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
155
+ self.v_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
156
+ self.q_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
157
+ self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
158
+
159
+ def transpose_for_scores(self, x):
160
+ new_x_shape = x.size()[:-1] + (
161
+ self.num_heads,
162
+ self.head_dim,
163
+ )
164
+ x = x.view(*new_x_shape)
165
+ return x.permute(0, 2, 1, 3)
166
+
167
+ def reshape_output(self, context_layer):
168
+ context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
169
+ new_context_layer_shape = context_layer.size()[:-2] + (self.embed_dim,)
170
+ return context_layer.view(*new_context_layer_shape)
171
+
172
+ def project_QKV(self, hidden_states):
173
+
174
+ query_layer = self.transpose_for_scores(self.q_proj(hidden_states))
175
+ key_layer = self.transpose_for_scores(self.k_proj(hidden_states))
176
+ value_layer = self.transpose_for_scores(self.v_proj(hidden_states))
177
+ return query_layer, key_layer, value_layer
178
+
179
+
180
+ class BaseAttentionProduct(nn.Module):
181
+
182
+ def __init__(self, config):
183
+ """
184
+ Compute attention: softmax(Q @ K.T) @ V
185
+ """
186
+ super().__init__()
187
+ self.dropout = nn.Dropout(config.attention_dropout)
188
+
189
+ def forward(self, query_layer, key_layer, value_layer, attention_mask=None):
190
+
191
+ d = query_layer.shape[-1]
192
+
193
+ # Take the dot product between "query" and "key" to get the raw attention scores.
194
+ attention_scores = query_layer @ key_layer.transpose(-1, -2) / math.sqrt(d)
195
+
196
+ del query_layer
197
+ del key_layer
198
+
199
+ if attention_mask is not None:
200
+ # Apply the attention mask (precomputed for all layers in the model's forward() function)
201
+ attention_scores = attention_scores + attention_mask
202
+ del attention_mask
203
+
204
+ # Normalize the attention scores to probabilities.
205
+ attention_probs = nn.Softmax(dim=-1)(attention_scores)
206
+
207
+ # This is actually dropping out entire tokens to attend to, which might
208
+ # seem a bit unusual, but is taken from the original Transformer paper.
209
+ context_layer = self.dropout(attention_probs) @ value_layer
210
+
211
+ return context_layer
212
+
213
+
214
+ class LSGAttentionProduct(nn.Module):
215
+
216
+ def __init__(self, config, block_size=None, sparse_block_size=None, sparsity_factor=4):
217
+ """
218
+ Compute block or overlapping blocks attention products
219
+ """
220
+ super().__init__()
221
+
222
+ self.block_size = block_size
223
+ self.sparse_block_size = sparse_block_size
224
+ self.sparsity_factor = sparsity_factor
225
+
226
+ if self.block_size is None:
227
+ self.block_size = config.block_size
228
+
229
+ if self.sparse_block_size is None:
230
+ self.sparse_block_size = config.sparse_block_size
231
+
232
+ # Shape of blocks
233
+ self.local_shapes = (self.block_size*3, self.block_size)
234
+ if self.sparse_block_size and self.sparsity_factor > 0:
235
+ self.sparse_shapes = (self.sparse_block_size*3, self.block_size//self.sparsity_factor)
236
+
237
+ self.attention = BaseAttentionProduct(config)
238
+
239
+ def build_lsg_inputs(self, hidden_states, sparse_hidden_states, global_hidden_states, is_attn_mask=False):
240
+
241
+ # Build local tokens
242
+ local_hidden_states = self.reshape_to_local_block(hidden_states, is_attn_mask)
243
+ del hidden_states
244
+
245
+ # Build sparse tokens
246
+ if sparse_hidden_states is not None:
247
+ sparse_hidden_states = self.reshape_to_sparse_block(sparse_hidden_states, is_attn_mask)
248
+
249
+ return self.cat_global_sparse_local_tokens(global_hidden_states, sparse_hidden_states, local_hidden_states)
250
+
251
+ def forward(
252
+ self,
253
+ query_layer,
254
+ key_layer,
255
+ value_layer,
256
+ attention_mask=None,
257
+ sparse_key=None,
258
+ sparse_value=None,
259
+ sparse_mask=None,
260
+ global_key=None,
261
+ global_value=None,
262
+ global_mask=None
263
+ ):
264
+
265
+ # Input batch, heads, length, hidden_size
266
+ n, h, t, d = query_layer.size()
267
+ n_blocks = t // self.block_size
268
+ assert t % self.block_size == 0
269
+
270
+ key_layer = self.build_lsg_inputs(
271
+ key_layer,
272
+ sparse_key,
273
+ global_key
274
+ )
275
+ del sparse_key
276
+ del global_key
277
+
278
+ value_layer = self.build_lsg_inputs(
279
+ value_layer,
280
+ sparse_value,
281
+ global_value
282
+ )
283
+ del sparse_value
284
+ del global_value
285
+
286
+ attention_mask = self.build_lsg_inputs(
287
+ attention_mask,
288
+ sparse_mask,
289
+ global_mask.transpose(-1, -2),
290
+ is_attn_mask=True
291
+ ).transpose(-1, -2)
292
+ del sparse_mask
293
+ del global_mask
294
+
295
+ # expect (..., t, d) shape
296
+ # Compute attention
297
+ context_layer = self.attention(
298
+ query_layer=self.chunk(query_layer, n_blocks),
299
+ key_layer=key_layer,
300
+ value_layer=value_layer,
301
+ attention_mask=attention_mask
302
+ )
303
+
304
+ return context_layer.reshape(n, h, -1, d)
305
+
306
+ def reshape_to_local_block(self, hidden_states, is_attn_mask=False):
307
+
308
+ size, step = self.local_shapes
309
+ s = (size - step) // 2
310
+
311
+ # Pad before block reshaping
312
+ if is_attn_mask:
313
+ pad_value = -10000
314
+ hidden_states = hidden_states.transpose(-1, -2)
315
+ else:
316
+ pad_value = 0
317
+
318
+ hidden_states = torch.nn.functional.pad(
319
+ hidden_states.transpose(-1, -2),
320
+ pad=(s, s),
321
+ value=pad_value
322
+ ).transpose(-1, -2)
323
+
324
+ # Make blocks
325
+ hidden_states = hidden_states.unfold(-2, size=size, step=step).transpose(-1, -2)
326
+
327
+ return hidden_states
328
+
329
+ def reshape_to_sparse_block(self, hidden_states, is_attn_mask=False):
330
+
331
+ size, step = self.sparse_shapes
332
+
333
+ # In case of odd case
334
+ odd_offset = (step % 2)
335
+
336
+ # n, h, t, d*2 + 1
337
+ size = size*2
338
+ s = (size - step) // 2 + odd_offset
339
+
340
+ # Pad before block reshaping
341
+ if is_attn_mask:
342
+ pad_value = -10000
343
+ hidden_states = hidden_states.transpose(-1, -2)
344
+ else:
345
+ pad_value = 0
346
+
347
+ hidden_states = torch.nn.functional.pad(
348
+ hidden_states.transpose(-1, -2),
349
+ pad=(s, s),
350
+ value=pad_value
351
+ ).transpose(-1, -2)
352
+
353
+ # Make blocks
354
+ hidden_states = hidden_states.unfold(-2, size=size, step=step).transpose(-1, -2)
355
+
356
+ # Fix case where block_size == sparsify_factor
357
+ if odd_offset:
358
+ hidden_states = hidden_states[..., :-1, :, :]
359
+
360
+ # Indexes for selection
361
+ u = (size - self.block_size * 3 // self.sparsity_factor) // 2 + odd_offset
362
+ s = self.sparse_block_size
363
+
364
+ u_ = u + odd_offset
365
+ return torch.cat([hidden_states[..., u-s:u, :], hidden_states[..., -u_:-u_+s, :]], dim=-2)
366
+
367
+ def cat_global_sparse_local_tokens(self, x_global, x_sparse=None, x_local=None, dim=-2):
368
+
369
+ n, h, b, t, d = x_local.size()
370
+ x_global = x_global.unsqueeze(-3).expand(-1, -1, b, -1, -1)
371
+ if x_sparse is not None:
372
+ return torch.cat([x_global, x_sparse, x_local], dim=dim)
373
+ return torch.cat([x_global, x_local], dim=dim)
374
+
375
+ def chunk(self, x, n_blocks):
376
+
377
+ t, d = x.size()[-2:]
378
+ return x.reshape(*x.size()[:-2], n_blocks, -1, d)
379
+
380
+
381
+ class LSGBartEncoderAttention(BaseSelfAttention):
382
+ '''
383
+ Compute local attention with overlapping blocks
384
+ Use global attention for tokens with highest norm
385
+ '''
386
+ def __init__(
387
+ self,
388
+ config,
389
+ embed_dim,
390
+ num_heads,
391
+ dropout
392
+ ):
393
+
394
+ super().__init__(embed_dim, num_heads, dropout)
395
+
396
+ self.block_size = config.block_size
397
+ self.sparse_block_size = config.sparse_block_size
398
+ self.num_global_tokens = config.num_global_tokens
399
+ self.sparsity_factor = config.sparsity_factor
400
+
401
+ self.attention = LSGAttentionProduct(
402
+ config,
403
+ block_size=config.block_size,
404
+ sparse_block_size=config.sparse_block_size,
405
+ sparsity_factor=self.sparsity_factor,
406
+ )
407
+
408
+ self.full_attention = BaseAttentionProduct(config)
409
+
410
+ sparse_functions = {
411
+ "norm": self.get_sparse_tokens_with_norm,
412
+ "pooling": self.get_sparse_tokens_with_pooling,
413
+ "lsh": self.get_sparse_tokens_with_lsh,
414
+ "stride": self.get_sparse_tokens_with_stride,
415
+ }
416
+
417
+ self.sparsity_type = config.sparsity_type
418
+ self.get_sparse_elements = sparse_functions.get(self.sparsity_type, lambda x, y, z: (None, None, None))
419
+
420
+ if config.sparsity_type == "lsh":
421
+ self.lsh_num_pre_rounds = config.lsh_num_pre_rounds
422
+
423
+ def get_sparse_tokens_with_norm(self, keys, values, mask):
424
+
425
+ if self.sparsity_factor == 1:
426
+ return keys, values, mask.expand(-1, keys.size()[1], -1, -1)
427
+
428
+ with torch.no_grad():
429
+
430
+ block_size = min(self.block_size, self.sparse_block_size)
431
+ key_norm = keys.detach().norm(dim=-1, keepdim=True)
432
+ key_norm = key_norm * ~mask.transpose(-1, -2).bool()
433
+ key_norm = self.chunk(key_norm, block_size)
434
+
435
+ n, h, b, t, d = key_norm.size()
436
+
437
+ idx = key_norm.argsort(dim=-2)
438
+ del key_norm
439
+ idx += (torch.arange(b, device=keys.device)*t).reshape(1, 1, b, 1, 1)
440
+
441
+ split = (t - block_size // self.sparsity_factor, block_size // self.sparsity_factor)
442
+ sparse_idx = idx.split(split, -2)[-1].reshape(n, h, -1, 1)
443
+
444
+ d = keys.size()[-1]
445
+ keys = keys.gather(dim=-2, index=sparse_idx.expand(-1, -1, -1, d))
446
+ values = values.gather(dim=-2, index=sparse_idx.expand(-1, -1, -1, d))
447
+ mask = mask.expand(-1, h, -1, -1).transpose(-1, -2).gather(dim=-2, index=sparse_idx).transpose(-1, -2)
448
+
449
+ return keys, values, mask
450
+
451
+ def get_sparse_tokens_with_pooling(self, keys, values, mask):
452
+
453
+ if self.sparsity_factor == 1:
454
+ return keys, values, mask.expand(-1, keys.size()[1], -1, -1)
455
+
456
+ keys = self.chunk(keys, self.sparsity_factor)
457
+ values = self.chunk(values, self.sparsity_factor)
458
+
459
+ n, h, b, t, d = keys.size()
460
+ mask = mask.reshape(n, 1, b, 1, t)
461
+ mask = ~mask.transpose(-1, -2).bool()
462
+
463
+ keys = keys * mask
464
+ values = values * mask
465
+
466
+ mask = mask.sum(dim=-2)
467
+ keys = keys.sum(dim=-2) / (mask + 1e-6)
468
+ values = values.sum(dim=-2) / (mask + 1e-6)
469
+
470
+ mask = - (1. - mask.clamp(0, 1)) * 1e4
471
+ return keys.reshape(n, h, -1, d), values.reshape(n, h, -1, d), mask.expand(-1, h, -1, -1).transpose(-1, -2)
472
+
473
+ def get_sparse_tokens_with_stride(self, keys, values, mask):
474
+
475
+ if self.sparsity_factor == 1:
476
+ return keys, values, mask.expand(-1, keys.size()[1], -1, -1)
477
+
478
+ n, h, t, d = keys.size()
479
+ sparse_idx = torch.arange(t // self.sparsity_factor, device=keys.device) * self.sparsity_factor
480
+ sparse_idx = sparse_idx.reshape(1, 1, -1, 1) + (torch.arange(h, device=keys.device) % self.sparsity_factor).reshape(1, h, 1, 1)
481
+ sparse_idx = sparse_idx.expand(n, h, -1, 1)
482
+
483
+ """
484
+ t, b = self.block_size, t // self.block_size
485
+ sparse_idx = torch.arange(t // self.sparsity_factor, device=keys.device) * self.sparsity_factor
486
+ sparse_idx = sparse_idx.reshape(1, 1, 1, -1, 1) + (torch.arange(h, device=keys.device) % self.sparsity_factor).reshape(1, h, 1, 1, 1)
487
+ sparse_idx = sparse_idx + torch.arange(b, device=keys.device).reshape(1, 1, -1, 1, 1) * t
488
+ sparse_idx = sparse_idx.reshape(1, h, -1, 1).expand(n, h, -1, 1)
489
+
490
+
491
+ t, b = self.block_size, t // self.block_size
492
+ sparse_idx = torch.arange(t // self.sparsity_factor, device=keys.device)
493
+ sparse_idx = sparse_idx.reshape(1, 1, 1, -1, 1) + torch.arange(h, device=keys.device).reshape(1, h, 1, 1, 1) * (t // self.sparsity_factor)
494
+ sparse_idx = (sparse_idx % t)
495
+ #sparse_idx[..., -t//2:, :] = (sparse_idx[..., -t//2:, :] + t//2) % t
496
+ sparse_idx = sparse_idx + torch.arange(b, device=keys.device).reshape(1, 1, -1, 1, 1) * t
497
+ sparse_idx = sparse_idx.reshape(1, h, -1, 1).expand(n, h, -1, 1)
498
+ """
499
+
500
+ keys = keys.gather(dim=-2, index=sparse_idx.expand(-1, -1, -1, d))
501
+ values = values.gather(dim=-2, index=sparse_idx.expand(-1, -1, -1, d))
502
+ mask = mask.expand(-1, h, -1, -1).transpose(-1, -2).gather(dim=-2, index=sparse_idx).transpose(-1, -2)
503
+
504
+ return keys, values, mask
505
+
506
+ def get_sparse_tokens_with_lsh(self, keys, values, mask):
507
+
508
+ if self.sparsity_factor == 1:
509
+ return keys, values, mask.expand(-1, keys.size()[1], -1, -1)
510
+
511
+ block_size = min(self.block_size, self.sparse_block_size)
512
+ keys = self.chunk(keys, block_size)
513
+ values = self.chunk(values, block_size)
514
+
515
+ n, h, b, t, d = keys.size()
516
+ mask = mask.reshape(n, 1, b, 1, t)
517
+ mask = ~mask.transpose(-1, -2).bool()
518
+
519
+ keys = keys * mask
520
+ values = values * mask
521
+ mask = mask.expand(-1, h, -1, -1, -1).float()
522
+
523
+ extra_factor = 1
524
+
525
+ for _ in range(self.lsh_num_pre_rounds):
526
+ keys, values, mask = self.lsh_round(keys, values, mask, t*extra_factor)
527
+
528
+ keys, values, mask = self.lsh_round(keys, values, mask, t//self.sparsity_factor)
529
+ keys /= mask + 1e-8
530
+ values /= mask + 1e-8
531
+
532
+ mask = -10000 * (1. - mask.clamp(0, 1))
533
+
534
+ return keys.reshape(n, h, -1, d), values.reshape(n, h, -1, d), mask.transpose(-1, -2).reshape(n, h, 1, -1)
535
+
536
+ def lsh_round(self, keys, values, mask, output_size):
537
+
538
+ with torch.no_grad():
539
+
540
+ n_hashes = output_size // 2
541
+ n, h, b, t, d = keys.size()
542
+ binary_mask = mask.clamp(0, 1)
543
+
544
+ indexes = (torch.nn.functional.normalize(keys, dim=-1) * binary_mask) @ torch.randn(1, h, 1, d, n_hashes, device=keys.device)
545
+ indexes = torch.cat([indexes, -indexes], dim=-1).argmax(dim=-1, keepdim=True)
546
+
547
+ n, h, b, t, d = keys.size()
548
+
549
+ x_ = torch.zeros(n, h, b, output_size, d, device=keys.device)
550
+ mask_ = torch.zeros(n, h, b, output_size, 1, device=keys.device)
551
+ keys = torch.scatter_add(x_, dim=-2, index=indexes.expand(-1, -1, -1, -1, d), src=keys)
552
+ values = torch.scatter_add(x_, dim=-2, index=indexes.expand(-1, -1, -1, -1, d), src=values)
553
+ mask = torch.scatter_add(mask_, dim=-2, index=indexes, src=mask)
554
+
555
+ return keys[..., :output_size, :], values[..., :output_size, :], mask[..., :output_size, :]
556
+
557
+ def forward(
558
+ self,
559
+ hidden_states,
560
+ attention_mask=None,
561
+ layer_head_mask=None,
562
+ output_attentions=False
563
+ ):
564
+
565
+ query_layer, key_layer, value_layer = self.project_QKV(hidden_states)
566
+ outputs = self.not_causal_forward(
567
+ query_layer,
568
+ key_layer,
569
+ value_layer,
570
+ attention_mask=attention_mask[:, :, :1, :],
571
+ head_mask=layer_head_mask,
572
+ output_attentions=output_attentions
573
+ )
574
+
575
+ return self.out_proj(outputs), None, None
576
+
577
+ def not_causal_forward(
578
+ self,
579
+ query_layer,
580
+ key_layer,
581
+ value_layer,
582
+ attention_mask=None,
583
+ head_mask=None,
584
+ output_attentions=False,
585
+ ):
586
+
587
+ n, h, t, d = query_layer.size()
588
+
589
+ # Cat global mask
590
+ attention_mask = torch.nn.functional.pad(attention_mask, (self.num_global_tokens, 0), value=0)
591
+
592
+ # Use normal attention if local attention covers all tokens
593
+ if t <= 2 * self.block_size + self.num_global_tokens:
594
+ context_layer = self.full_attention(
595
+ query_layer=query_layer,
596
+ key_layer=key_layer,
597
+ value_layer=value_layer,
598
+ attention_mask=attention_mask
599
+ )
600
+
601
+ if head_mask is not None:
602
+ context_layer = context_layer * head_mask[:, :, :1, :1]
603
+ return self.reshape_output(context_layer)
604
+
605
+ # Split input into global tokens and other tokens
606
+ split = (self.num_global_tokens, t - self.num_global_tokens)
607
+ global_query, query_layer = query_layer.split(split, dim=-2)
608
+
609
+ # Get global_attention
610
+ bos = self.full_attention(
611
+ query_layer=global_query,
612
+ key_layer=key_layer,
613
+ value_layer=value_layer,
614
+ attention_mask=attention_mask
615
+ )
616
+
617
+ # Split K Q M on global and non global
618
+ global_key, key_layer = key_layer.split(split, dim=-2)
619
+ global_value, value_layer = value_layer.split(split, dim=-2)
620
+ global_mask, attention_mask = attention_mask.split(split, dim=-1)
621
+
622
+ n, h, t, d = key_layer.size()
623
+
624
+ # Get sparse idx
625
+ sparse_key, sparse_value, sparse_mask = (None, None, None)
626
+
627
+ if self.sparse_block_size and self.sparsity_factor > 0:
628
+ sparse_key, sparse_value, sparse_mask = self.get_sparse_elements(key_layer, value_layer, attention_mask)
629
+
630
+ # Expand masks on heads
631
+ attention_mask = attention_mask.expand(-1, h, -1, -1)
632
+ global_mask = global_mask.expand(-1, h, -1, -1)
633
+
634
+ # Compute dot product attention
635
+ context_layer = self.attention(
636
+ query_layer,
637
+ key_layer,
638
+ value_layer,
639
+ attention_mask,
640
+ sparse_key=sparse_key,
641
+ sparse_value=sparse_value,
642
+ sparse_mask=sparse_mask,
643
+ global_key=global_key,
644
+ global_value=global_value,
645
+ global_mask=global_mask
646
+ )
647
+
648
+ # Merge global and local-sparse tokens
649
+ context_layer = torch.cat([bos, context_layer], dim=-2)
650
+ if head_mask is not None:
651
+ context_layer = context_layer * head_mask[:, :, :1, :1]
652
+ context_layer = self.reshape_output(context_layer)
653
+
654
+ return context_layer
655
+
656
+ def chunk(self, x, chunk_size):
657
+
658
+ n, h, t, d = x.size()
659
+ return x.reshape(n, h, -1, chunk_size, d)
660
+
661
+
662
+ class LSGBartDecoderAttention(nn.Module):
663
+
664
+ """Multi-headed attention from 'Attention Is All You Need' paper"""
665
+
666
+ def __init__(
667
+ self,
668
+ embed_dim,
669
+ num_heads,
670
+ dropout=0.0,
671
+ is_decoder=False,
672
+ bias=True,
673
+ ):
674
+
675
+ super().__init__()
676
+ self.embed_dim = embed_dim
677
+ self.num_heads = num_heads
678
+ self.dropout = dropout
679
+ self.head_dim = embed_dim // num_heads
680
+
681
+ if (self.head_dim * num_heads) != self.embed_dim:
682
+ raise ValueError(
683
+ f"embed_dim must be divisible by num_heads (got `embed_dim`: {self.embed_dim}"
684
+ f" and `num_heads`: {num_heads})."
685
+ )
686
+ self.scaling = self.head_dim ** -0.5
687
+ self.is_decoder = is_decoder
688
+
689
+ self.k_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
690
+ self.v_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
691
+ self.q_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
692
+ self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
693
+
694
+ def _shape(self, tensor, seq_len, bsz):
695
+ return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous()
696
+
697
+ def forward(
698
+ self,
699
+ hidden_states,
700
+ key_value_states=None,
701
+ past_key_value=None,
702
+ attention_mask=None,
703
+ layer_head_mask=None,
704
+ output_attentions=False,
705
+ ):
706
+
707
+ # if key_value_states are provided this layer is used as a cross-attention layer
708
+ # for the decoder
709
+ is_cross_attention = key_value_states is not None
710
+
711
+ bsz, tgt_len, _ = hidden_states.size()
712
+
713
+ # get query proj
714
+ query_states = self.q_proj(hidden_states) * self.scaling
715
+ # get key, value proj
716
+ if is_cross_attention and past_key_value is not None:
717
+ # reuse k,v, cross_attentions
718
+ key_states = past_key_value[0]
719
+ value_states = past_key_value[1]
720
+ elif is_cross_attention:
721
+ # cross_attentions
722
+ key_states = self._shape(self.k_proj(key_value_states), -1, bsz)
723
+ value_states = self._shape(self.v_proj(key_value_states), -1, bsz)
724
+ elif past_key_value is not None:
725
+ # reuse k, v, self_attention
726
+ key_states = self._shape(self.k_proj(hidden_states), -1, bsz)
727
+ value_states = self._shape(self.v_proj(hidden_states), -1, bsz)
728
+ key_states = torch.cat([past_key_value[0], key_states], dim=2)
729
+ value_states = torch.cat([past_key_value[1], value_states], dim=2)
730
+ else:
731
+ # self_attention
732
+ key_states = self._shape(self.k_proj(hidden_states), -1, bsz)
733
+ value_states = self._shape(self.v_proj(hidden_states), -1, bsz)
734
+
735
+ if self.is_decoder:
736
+ # if cross_attention save Tuple(torch.Tensor, torch.Tensor) of all cross attention key/value_states.
737
+ # Further calls to cross_attention layer can then reuse all cross-attention
738
+ # key/value_states (first "if" case)
739
+ # if uni-directional self-attention (decoder) save Tuple(torch.Tensor, torch.Tensor) of
740
+ # all previous decoder key/value_states. Further calls to uni-directional self-attention
741
+ # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case)
742
+ # if encoder bi-directional self-attention `past_key_value` is always `None`
743
+ past_key_value = (key_states, value_states)
744
+
745
+ proj_shape = (bsz * self.num_heads, -1, self.head_dim)
746
+ query_states = self._shape(query_states, tgt_len, bsz).view(*proj_shape)
747
+ key_states = key_states.view(*proj_shape)
748
+ value_states = value_states.view(*proj_shape)
749
+
750
+ src_len = key_states.size(1)
751
+ attn_weights = torch.bmm(query_states, key_states.transpose(1, 2))
752
+
753
+ if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
754
+ raise ValueError(
755
+ f"Attention weights should be of size {(bsz * self.num_heads, tgt_len, src_len)}, but is {attn_weights.size()}"
756
+ )
757
+
758
+ if attention_mask is not None:
759
+ if attention_mask.size() != (bsz, 1, tgt_len, src_len):
760
+ raise ValueError(
761
+ f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}"
762
+ )
763
+ attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask
764
+ attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
765
+
766
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1)
767
+
768
+ if layer_head_mask is not None:
769
+ if layer_head_mask.size() != (self.num_heads,):
770
+ raise ValueError(
771
+ f"Head mask for a single layer should be of size {(self.num_heads,)}, but is {layer_head_mask.size()}"
772
+ )
773
+ attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
774
+ attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
775
+
776
+ if output_attentions:
777
+ # this operation is a bit awkward, but it's required to
778
+ # make sure that attn_weights keeps its gradient.
779
+ # In order to do so, attn_weights have to be reshaped
780
+ # twice and have to be reused in the following
781
+ attn_weights_reshaped = attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
782
+ attn_weights = attn_weights_reshaped.view(bsz * self.num_heads, tgt_len, src_len)
783
+ else:
784
+ attn_weights_reshaped = None
785
+
786
+ attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training)
787
+
788
+ attn_output = torch.bmm(attn_probs, value_states)
789
+
790
+ if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
791
+ raise ValueError(
792
+ f"`attn_output` should be of size {(bsz, self.num_heads, tgt_len, self.head_dim)}, but is {attn_output.size()}"
793
+ )
794
+
795
+ attn_output = attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim)
796
+ attn_output = attn_output.transpose(1, 2)
797
+
798
+ # Use the `embed_dim` from the config (stored in the class) rather than `hidden_state` because `attn_output` can be
799
+ # partitioned across GPUs when using tensor-parallelism.
800
+ attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim)
801
+
802
+ attn_output = self.out_proj(attn_output)
803
+
804
+ return attn_output, attn_weights_reshaped, past_key_value
805
+
806
+
807
+ class LSGBartLearnedPositionalEmbedding(nn.Embedding):
808
+ """
809
+ This module learns positional embeddings up to a fixed maximum size.
810
+ """
811
+
812
+ def __init__(self, num_embeddings, embedding_dim):
813
+ # Bart is set up so that if padding_idx is specified then offset the embedding ids by 2
814
+ # and adjust num_embeddings appropriately. Other models don't have this hack
815
+ self.offset = 2
816
+ super().__init__(num_embeddings + self.offset, embedding_dim)
817
+
818
+ def forward(self, input_ids_shape, past_key_values_length=0):
819
+
820
+ """`input_ids_shape` is expected to be [bsz x seqlen]."""
821
+ bsz, seq_len = input_ids_shape[:2]
822
+ positions = torch.arange(
823
+ past_key_values_length, past_key_values_length + seq_len, dtype=torch.long, device=self.weight.device
824
+ )
825
+ return super().forward(positions + self.offset)
826
+
827
+
828
+ class LSGBartEncoderLayer(nn.Module):
829
+
830
+ def __init__(self, config):
831
+
832
+ super().__init__()
833
+ self.embed_dim = config.d_model
834
+ self.self_attn = LSGBartEncoderAttention(
835
+ config=config,
836
+ embed_dim=self.embed_dim,
837
+ num_heads=config.encoder_attention_heads,
838
+ dropout=config.attention_dropout,
839
+ )
840
+ self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim)
841
+ self.dropout = config.dropout
842
+ self.activation_fn = ACT2FN[config.activation_function]
843
+ self.activation_dropout = config.activation_dropout
844
+ self.fc1 = nn.Linear(self.embed_dim, config.encoder_ffn_dim)
845
+ self.fc2 = nn.Linear(config.encoder_ffn_dim, self.embed_dim)
846
+ self.final_layer_norm = nn.LayerNorm(self.embed_dim)
847
+
848
+ def forward(
849
+ self,
850
+ hidden_states,
851
+ attention_mask,
852
+ layer_head_mask,
853
+ output_attentions=False,
854
+ ):
855
+ """
856
+ Args:
857
+ hidden_states (:obj:`torch.FloatTensor`): input to the layer of shape `(seq_len, batch, embed_dim)`
858
+ attention_mask (:obj:`torch.FloatTensor`): attention mask of size
859
+ `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
860
+ layer_head_mask (:obj:`torch.FloatTensor`): mask for attention heads in a given layer of size
861
+ `(encoder_attention_heads,)`.
862
+ output_attentions (:obj:`bool`, `optional`):
863
+ Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under
864
+ returned tensors for more detail.
865
+ """
866
+ residual = hidden_states
867
+ hidden_states, attn_weights, _ = self.self_attn(
868
+ hidden_states=hidden_states,
869
+ attention_mask=attention_mask,
870
+ layer_head_mask=layer_head_mask,
871
+ output_attentions=output_attentions,
872
+ )
873
+ hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
874
+ hidden_states = residual + hidden_states
875
+ hidden_states = self.self_attn_layer_norm(hidden_states)
876
+
877
+ residual = hidden_states
878
+ hidden_states = self.activation_fn(self.fc1(hidden_states))
879
+ hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training)
880
+ hidden_states = self.fc2(hidden_states)
881
+ hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
882
+ hidden_states = residual + hidden_states
883
+ hidden_states = self.final_layer_norm(hidden_states)
884
+
885
+ if hidden_states.dtype == torch.float16 and (
886
+ torch.isinf(hidden_states).any() or torch.isnan(hidden_states).any()
887
+ ):
888
+ clamp_value = torch.finfo(hidden_states.dtype).max - 1000
889
+ hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
890
+
891
+ outputs = (hidden_states,)
892
+
893
+ if output_attentions:
894
+ outputs += (attn_weights,)
895
+
896
+ return outputs
897
+
898
+
899
+ class LSGBartDecoderLayer(nn.Module):
900
+
901
+ def __init__(self, config):
902
+
903
+ super().__init__()
904
+ self.embed_dim = config.d_model
905
+
906
+ self.self_attn = LSGBartDecoderAttention(
907
+ embed_dim=self.embed_dim,
908
+ num_heads=config.decoder_attention_heads,
909
+ dropout=config.attention_dropout,
910
+ is_decoder=True,
911
+ )
912
+ self.dropout = config.dropout
913
+ self.activation_fn = ACT2FN[config.activation_function]
914
+ self.activation_dropout = config.activation_dropout
915
+
916
+ self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim)
917
+ self.encoder_attn = LSGBartDecoderAttention(
918
+ self.embed_dim,
919
+ config.decoder_attention_heads,
920
+ dropout=config.attention_dropout,
921
+ is_decoder=True,
922
+ )
923
+ self.encoder_attn_layer_norm = nn.LayerNorm(self.embed_dim)
924
+ self.fc1 = nn.Linear(self.embed_dim, config.decoder_ffn_dim)
925
+ self.fc2 = nn.Linear(config.decoder_ffn_dim, self.embed_dim)
926
+ self.final_layer_norm = nn.LayerNorm(self.embed_dim)
927
+
928
+ def forward(
929
+ self,
930
+ hidden_states,
931
+ attention_mask=None,
932
+ encoder_hidden_states=None,
933
+ encoder_attention_mask=None,
934
+ layer_head_mask=None,
935
+ cross_attn_layer_head_mask=None,
936
+ past_key_value=None,
937
+ output_attentions=False,
938
+ use_cache=True,
939
+ ):
940
+ """
941
+ Args:
942
+ hidden_states (:obj:`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
943
+ attention_mask (:obj:`torch.FloatTensor`): attention mask of size
944
+ `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
945
+ encoder_hidden_states (:obj:`torch.FloatTensor`): cross attention input to the layer of shape `(batch, seq_len, embed_dim)`
946
+ encoder_attention_mask (:obj:`torch.FloatTensor`): encoder attention mask of size
947
+ `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
948
+ layer_head_mask (:obj:`torch.FloatTensor`): mask for attention heads in a given layer of size
949
+ `(encoder_attention_heads,)`.
950
+ cross_attn_layer_head_mask (:obj:`torch.FloatTensor`): mask for cross-attention heads in a given layer of
951
+ size `(decoder_attention_heads,)`.
952
+ past_key_value (:obj:`Tuple(torch.FloatTensor)`): cached past key and value projection states
953
+ output_attentions (:obj:`bool`, `optional`):
954
+ Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under
955
+ returned tensors for more detail.
956
+ """
957
+ residual = hidden_states
958
+
959
+ # Self Attention
960
+ # decoder uni-directional self-attention cached key/values tuple is at positions 1,2
961
+ self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None
962
+ # add present self-attn cache to positions 1,2 of present_key_value tuple
963
+
964
+ hidden_states, self_attn_weights, present_key_value = self.self_attn(
965
+ hidden_states=hidden_states,
966
+ past_key_value=self_attn_past_key_value,
967
+ attention_mask=attention_mask,
968
+ layer_head_mask=layer_head_mask,
969
+ output_attentions=output_attentions,
970
+ )
971
+ hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
972
+ hidden_states = residual + hidden_states
973
+ hidden_states = self.self_attn_layer_norm(hidden_states)
974
+
975
+ # Cross-Attention Block
976
+ cross_attn_present_key_value = None
977
+ cross_attn_weights = None
978
+ if encoder_hidden_states is not None:
979
+ residual = hidden_states
980
+
981
+ # cross_attn cached key/values tuple is at positions 3,4 of present_key_value tuple
982
+ cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None
983
+
984
+ hidden_states, cross_attn_weights, cross_attn_present_key_value = self.encoder_attn(
985
+ hidden_states=hidden_states,
986
+ key_value_states=encoder_hidden_states,
987
+ attention_mask=encoder_attention_mask,
988
+ layer_head_mask=cross_attn_layer_head_mask,
989
+ past_key_value=cross_attn_past_key_value,
990
+ output_attentions=output_attentions,
991
+ )
992
+ hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
993
+ hidden_states = residual + hidden_states
994
+ hidden_states = self.encoder_attn_layer_norm(hidden_states)
995
+
996
+ # add cross-attn to positions 3,4 of present_key_value tuple
997
+ present_key_value = present_key_value + cross_attn_present_key_value
998
+
999
+ # Fully Connected
1000
+ residual = hidden_states
1001
+ hidden_states = self.activation_fn(self.fc1(hidden_states))
1002
+ hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training)
1003
+ hidden_states = self.fc2(hidden_states)
1004
+ hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
1005
+ hidden_states = residual + hidden_states
1006
+ hidden_states = self.final_layer_norm(hidden_states)
1007
+
1008
+ outputs = (hidden_states,)
1009
+
1010
+ if output_attentions:
1011
+ outputs += (self_attn_weights, cross_attn_weights)
1012
+
1013
+ if use_cache:
1014
+ outputs += (present_key_value,)
1015
+
1016
+ return outputs
1017
+
1018
+
1019
+ class LSGBartClassificationHead(nn.Module):
1020
+ """Head for sentence-level classification tasks."""
1021
+
1022
+ def __init__(
1023
+ self,
1024
+ input_dim,
1025
+ inner_dim,
1026
+ num_classes,
1027
+ pooler_dropout,
1028
+ ):
1029
+
1030
+ super().__init__()
1031
+ self.dense = nn.Linear(input_dim, inner_dim)
1032
+ self.dropout = nn.Dropout(p=pooler_dropout)
1033
+ self.out_proj = nn.Linear(inner_dim, num_classes)
1034
+
1035
+ def forward(self, hidden_states):
1036
+
1037
+ hidden_states = self.dropout(hidden_states)
1038
+ hidden_states = self.dense(hidden_states)
1039
+ hidden_states = torch.tanh(hidden_states)
1040
+ hidden_states = self.dropout(hidden_states)
1041
+ hidden_states = self.out_proj(hidden_states)
1042
+ return hidden_states
1043
+
1044
+
1045
+ class LSGBartPretrainedModel(PreTrainedModel):
1046
+
1047
+ config_class = LSGBartConfig
1048
+ base_model_prefix = "model"
1049
+ supports_gradient_checkpointing = True
1050
+ _keys_to_ignore_on_load_unexpected = [r"encoder\.version", r"decoder\.version"]
1051
+
1052
+ def _init_weights(self, module):
1053
+
1054
+ std = self.config.init_std
1055
+ if isinstance(module, nn.Linear):
1056
+ module.weight.data.normal_(mean=0.0, std=std)
1057
+ if module.bias is not None:
1058
+ module.bias.data.zero_()
1059
+ elif isinstance(module, nn.Embedding):
1060
+ module.weight.data.normal_(mean=0.0, std=std)
1061
+ if module.padding_idx is not None:
1062
+ module.weight.data[module.padding_idx].zero_()
1063
+
1064
+ def _set_gradient_checkpointing(self, module, value=False):
1065
+
1066
+ if isinstance(module, (LSGBartDecoder, LSGBartEncoder)):
1067
+ module.gradient_checkpointing = value
1068
+
1069
+ @property
1070
+ def dummy_inputs(self):
1071
+ pad_token = self.config.pad_token_id
1072
+ input_ids = torch.tensor([[0, 6, 10, 4, 2], [0, 8, 12, 2, pad_token]], device=self.device)
1073
+ dummy_inputs = {
1074
+ "attention_mask": input_ids.ne(pad_token),
1075
+ "input_ids": input_ids,
1076
+ }
1077
+ return dummy_inputs
1078
+
1079
+
1080
+ class PretrainedLSGBartModel(LSGBartPretrainedModel):
1081
+
1082
+ def __init_subclass__(self):
1083
+ warnings.warn(
1084
+ "The class `PretrainedBartModel` has been depreciated, please use `LSGBartPretrainedModel` instead.",
1085
+ FutureWarning,
1086
+ )
1087
+
1088
+
1089
+ class LSGBartEncoder(LSGBartPretrainedModel):
1090
+ """
1091
+ Transformer encoder consisting of *config.encoder_layers* self attention layers. Each layer is a
1092
+ :class:`BartEncoderLayer`.
1093
+ Args:
1094
+ config: BartConfig
1095
+ embed_tokens (nn.Embedding): output embedding
1096
+ """
1097
+
1098
+ def __init__(self, config, embed_tokens=None):
1099
+
1100
+ super().__init__(config)
1101
+ self.dropout = config.dropout
1102
+ self.layerdrop = config.encoder_layerdrop
1103
+
1104
+ embed_dim = config.d_model
1105
+ self.padding_idx = config.pad_token_id
1106
+ self.max_source_positions = config.max_position_embeddings
1107
+ self.embed_scale = math.sqrt(embed_dim) if config.scale_embedding else 1.0
1108
+
1109
+ if embed_tokens is not None:
1110
+ self.embed_tokens = embed_tokens
1111
+ else:
1112
+ self.embed_tokens = nn.Embedding(config.vocab_size, embed_dim, self.padding_idx)
1113
+
1114
+ self.embed_positions = LSGBartLearnedPositionalEmbedding(
1115
+ config.max_position_embeddings,
1116
+ embed_dim,
1117
+ )
1118
+ self.layers = nn.ModuleList([LSGBartEncoderLayer(config) for _ in range(config.encoder_layers)])
1119
+ self.layernorm_embedding = nn.LayerNorm(embed_dim)
1120
+
1121
+ #
1122
+ assert hasattr(config, "num_global_tokens")
1123
+ self.num_global_tokens = config.num_global_tokens
1124
+ self.pad_idx = config.pad_token_id
1125
+
1126
+ assert hasattr(config, "block_size") and hasattr(config, "adaptive")
1127
+ self.block_size = config.block_size
1128
+ self.adaptive = config.adaptive
1129
+ self.pool_with_global = config.pool_with_global
1130
+ self.pass_global_tokens_to_decoder = config.pass_global_tokens_to_decoder
1131
+
1132
+ self.global_embeddings = nn.Embedding(512, embedding_dim=config.d_model)
1133
+
1134
+ self.gradient_checkpointing = False
1135
+
1136
+ # Initialize weights and apply final processing
1137
+ self.post_init()
1138
+
1139
+ def get_input_embeddings(self):
1140
+ return self.embed_tokens
1141
+
1142
+ def set_input_embeddings(self, value):
1143
+ self.embed_tokens = value
1144
+
1145
+ def forward(self,
1146
+ input_ids=None,
1147
+ attention_mask=None,
1148
+ head_mask=None,
1149
+ inputs_embeds=None,
1150
+ output_attentions=None,
1151
+ output_hidden_states=None,
1152
+ return_dict=None
1153
+ ):
1154
+
1155
+
1156
+ inputs_ = input_ids if input_ids is not None else inputs_embeds
1157
+ n, t = inputs_.size()[:2]
1158
+
1159
+ if attention_mask is None:
1160
+ attention_mask = torch.ones(n, t, device=inputs_.device)
1161
+
1162
+ b = self.block_size * 2
1163
+ pad = t % self.block_size
1164
+
1165
+ # Check if t is multiple of block_size and pad
1166
+ if t > b and pad > 0:
1167
+ pad_length = self.block_size - pad
1168
+ if input_ids is not None:
1169
+ input_ids = torch.nn.functional.pad(input_ids, (0, pad_length), value=self.pad_idx)
1170
+ else:
1171
+ inputs_embeds = torch.nn.functional.pad(inputs_embeds.transpose(-1, -2), (0, pad_length), value=0.).transpose(-1, -2)
1172
+ attention_mask = torch.nn.functional.pad(attention_mask, (0, pad_length), value=0)
1173
+
1174
+ # else adaptive sequence length
1175
+ elif self.adaptive:
1176
+ # Get last non zero mask index
1177
+ s = int(attention_mask.cumsum(dim=-1).argmax(dim=-1).max()) + 1
1178
+ if s < t and self.block_size is not None:
1179
+ s = max(2, s // self.block_size + 1) * self.block_size if s > b else s
1180
+ if input_ids is not None:
1181
+ input_ids = input_ids[:, :s]
1182
+ else:
1183
+ inputs_embeds = inputs_embeds[:, :s]
1184
+ attention_mask = attention_mask[:, :s]
1185
+
1186
+ n, t_ = attention_mask.size()
1187
+
1188
+ encoder_outputs = self.forward_with_adaptive(
1189
+ input_ids=input_ids,
1190
+ attention_mask=attention_mask,
1191
+ head_mask=head_mask,
1192
+ inputs_embeds=inputs_embeds,
1193
+ output_attentions=output_attentions,
1194
+ output_hidden_states=output_hidden_states,
1195
+ return_dict=return_dict,
1196
+ )
1197
+
1198
+ context = encoder_outputs[0]
1199
+ diff = t - t_
1200
+
1201
+ if self.pass_global_tokens_to_decoder:
1202
+ offset = self.num_global_tokens
1203
+ else:
1204
+ if self.pool_with_global:
1205
+ context[:, self.num_global_tokens] = context[:, 0]
1206
+ context = context[..., self.num_global_tokens:, :]
1207
+ offset = 0
1208
+
1209
+ # Adapt sequence to initial shape
1210
+ if diff > 0:
1211
+ context = torch.nn.functional.pad(context.transpose(-1, -2), pad=(0, diff), value=0).transpose(-1, -2)
1212
+ elif diff < 0:
1213
+ context = context[:, :t + offset]
1214
+
1215
+ if return_dict:
1216
+ encoder_outputs.last_hidden_state = context
1217
+ else:
1218
+ encoder_outputs = (context, ) + encoder_outputs[1:]
1219
+
1220
+ return encoder_outputs
1221
+
1222
+     def forward_with_adaptive(
+         self,
+         input_ids=None,
+         attention_mask=None,
+         head_mask=None,
+         inputs_embeds=None,
+         output_attentions=None,
+         output_hidden_states=None,
+         return_dict=None,
+         ):
+
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         )
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         # retrieve input_ids and inputs_embeds
+         if input_ids is not None and inputs_embeds is not None:
+             raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
+         elif input_ids is not None:
+             input_shape = input_ids.size()
+             input_ids = input_ids.view(-1, input_shape[-1])
+         elif inputs_embeds is not None:
+             input_shape = inputs_embeds.size()[:-1]
+         else:
+             raise ValueError("You have to specify either input_ids or inputs_embeds")
+
+         if inputs_embeds is None:
+             inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
+
+         embed_pos = self.embed_positions(input_shape)
+         hidden_states = inputs_embeds + embed_pos
+
+         # Add global tokens
+         n, t, d = hidden_states.size()
+         global_idx = torch.arange(self.num_global_tokens, device=hidden_states.device).reshape(1, -1)
+         hidden_states = torch.cat([self.global_embeddings(global_idx).expand(n, -1, -1), hidden_states], dim=-2)
+
+         hidden_states = self.layernorm_embedding(hidden_states)
+         hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
+
+         # expand attention_mask
+         if attention_mask is not None:
+             # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
+             attention_mask = _expand_mask(attention_mask, inputs_embeds.dtype)
+
+         encoder_states = () if output_hidden_states else None
+         all_attentions = () if output_attentions else None
+
+         # check if head_mask has a correct number of layers specified if desired
+         if head_mask is not None:
+             if head_mask.size()[0] != (len(self.layers)):
+                 raise ValueError(
+                     f"The head_mask should be specified for {len(self.layers)} layers, but it is for {head_mask.size()[0]}."
+                 )
+
+         for idx, encoder_layer in enumerate(self.layers):
+             if output_hidden_states:
+                 encoder_states = encoder_states + (hidden_states,)
+             # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description)
+             dropout_probability = random.uniform(0, 1)
+             if self.training and (dropout_probability < self.layerdrop):  # skip the layer
+                 layer_outputs = (None, None)
+             else:
+                 if self.gradient_checkpointing and self.training:
+
+                     def create_custom_forward(module):
+                         def custom_forward(*inputs):
+                             return module(*inputs, output_attentions)
+
+                         return custom_forward
+
+                     layer_outputs = torch.utils.checkpoint.checkpoint(
+                         create_custom_forward(encoder_layer),
+                         hidden_states,
+                         attention_mask,
+                         (head_mask[idx] if head_mask is not None else None),
+                     )
+                 else:
+                     layer_outputs = encoder_layer(
+                         hidden_states,
+                         attention_mask,
+                         layer_head_mask=(head_mask[idx] if head_mask is not None else None),
+                         output_attentions=output_attentions,
+                     )
+
+                 hidden_states = layer_outputs[0]
+
+             if output_attentions:
+                 all_attentions = all_attentions + (layer_outputs[1],)
+
+         if output_hidden_states:
+             encoder_states = encoder_states + (hidden_states,)
+
+         if not return_dict:
+             return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None)
+         return BaseModelOutput(
+             last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions
+         )
+
+
+ class LSGBartDecoder(LSGBartPretrainedModel):
+     """
+     Transformer decoder consisting of *config.decoder_layers* layers. Each layer is a :class:`LSGBartDecoderLayer`
+     Args:
+         config: BartConfig
+         embed_tokens (nn.Embedding): output embedding
+     """
+
+     def __init__(self, config, embed_tokens=None):
+
+         super().__init__(config)
+         self.dropout = config.dropout
+         self.layerdrop = config.decoder_layerdrop
+         self.padding_idx = config.pad_token_id
+         self.max_target_positions = config.max_position_embeddings
+         self.embed_scale = math.sqrt(config.d_model) if config.scale_embedding else 1.0
+         self.adaptive = config.adaptive
+
+         if embed_tokens is not None:
+             self.embed_tokens = embed_tokens
+         else:
+             self.embed_tokens = nn.Embedding(config.vocab_size, config.d_model, self.padding_idx)
+
+         self.embed_positions = LSGBartLearnedPositionalEmbedding(
+             config.max_position_embeddings,
+             config.d_model,
+         )
+         self.layers = nn.ModuleList([LSGBartDecoderLayer(config) for _ in range(config.decoder_layers)])
+         self.layernorm_embedding = nn.LayerNorm(config.d_model)
+
+         self.gradient_checkpointing = False
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def get_input_embeddings(self):
+         return self.embed_tokens
+
+     def set_input_embeddings(self, value):
+         self.embed_tokens = value
+
+     def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length):
+         # create causal mask
+         # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
+         combined_attention_mask = None
+         if input_shape[-1] > 1:
+             combined_attention_mask = _make_causal_mask(
+                 input_shape, inputs_embeds.dtype, past_key_values_length=past_key_values_length
+             ).to(self.device)
+
+         if attention_mask is not None:
+             # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
+             expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1])
+             combined_attention_mask = (
+                 expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask
+             )
+
+         return combined_attention_mask
+
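+     # Drops trailing padding so the layers only run over the longest sequence in the batch; the removed
+     # length is returned as `pad` so the output can later be padded back to its original shape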
+     def resize_inputs(self, inputs_embeds, attention_mask):
+         pad = 0
+
+         max_len = int(attention_mask.sum(dim=-1).max())
+         pad = attention_mask.size()[-1] - max_len
+         inputs_embeds = inputs_embeds[:, :max_len]
+         attention_mask = attention_mask[..., :max_len]
+         return pad, inputs_embeds, attention_mask
+
+     def forward(
+         self,
+         input_ids=None,
+         attention_mask=None,
+         encoder_hidden_states=None,
+         encoder_attention_mask=None,
+         head_mask=None,
+         cross_attn_head_mask=None,
+         past_key_values=None,
+         inputs_embeds=None,
+         use_cache=None,
+         output_attentions=None,
+         output_hidden_states=None,
+         return_dict=None,
+         ):
+
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         )
+         use_cache = use_cache if use_cache is not None else self.config.use_cache
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         # retrieve input_ids and inputs_embeds
+         if input_ids is not None and inputs_embeds is not None:
+             raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time")
+         elif input_ids is not None:
+             input_shape = input_ids.size()
+             input_ids = input_ids.view(-1, input_shape[-1])
+         elif inputs_embeds is not None:
+             input_shape = inputs_embeds.size()[:-1]
+         else:
+             raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds")
+
+         # past_key_values_length
+         past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0
+
+         if inputs_embeds is None:
+             inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
+
+         # Resize to reduce computation
+         pad = 0
+         if self.adaptive:
+             if attention_mask is not None:
+                 pad, inputs_embeds, attention_mask = self.resize_inputs(inputs_embeds, attention_mask)
+                 input_shape = inputs_embeds.size()[:-1]
+             if encoder_attention_mask is not None:
+                 _, encoder_hidden_states, encoder_attention_mask = self.resize_inputs(encoder_hidden_states, encoder_attention_mask)
+
+         attention_mask = self._prepare_decoder_attention_mask(
+             attention_mask, input_shape, inputs_embeds, past_key_values_length
+         )
+
+         # expand encoder attention mask
+         if encoder_hidden_states is not None and encoder_attention_mask is not None:
+             # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]
+             encoder_attention_mask = _expand_mask(encoder_attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1])
+
+         # embed positions
+         positions = self.embed_positions(input_shape, past_key_values_length)
+
+         hidden_states = inputs_embeds + positions
+         hidden_states = self.layernorm_embedding(hidden_states)
+
+         hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
+
+         # decoder layers
+         all_hidden_states = () if output_hidden_states else None
+         all_self_attns = () if output_attentions else None
+         all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None
+         next_decoder_cache = () if use_cache else None
+
+         # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired
+         for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]):
+             if attn_mask is not None:
+                 if attn_mask.size()[0] != (len(self.layers)):
+                     raise ValueError(
+                         f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for {attn_mask.size()[0]}."
+                     )
+
+         for idx, decoder_layer in enumerate(self.layers):
+             # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description)
+             if output_hidden_states:
+                 all_hidden_states += (hidden_states,)
+             dropout_probability = random.uniform(0, 1)
+             if self.training and (dropout_probability < self.layerdrop):
+                 continue
+
+             past_key_value = past_key_values[idx] if past_key_values is not None else None
+
+             if self.gradient_checkpointing and self.training:
+
+                 if use_cache:
+                     logger.warning(
+                         "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
+                     )
+                     use_cache = False
+
+                 def create_custom_forward(module):
+                     def custom_forward(*inputs):
+                         # None for past_key_value
+                         return module(*inputs, output_attentions, use_cache)
+
+                     return custom_forward
+
+                 layer_outputs = torch.utils.checkpoint.checkpoint(
+                     create_custom_forward(decoder_layer),
+                     hidden_states,
+                     attention_mask,
+                     encoder_hidden_states,
+                     encoder_attention_mask,
+                     head_mask[idx] if head_mask is not None else None,
+                     cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None,
+                     None,
+                 )
+             else:
+
+                 layer_outputs = decoder_layer(
+                     hidden_states,
+                     attention_mask=attention_mask,
+                     encoder_hidden_states=encoder_hidden_states,
+                     encoder_attention_mask=encoder_attention_mask,
+                     layer_head_mask=(head_mask[idx] if head_mask is not None else None),
+                     cross_attn_layer_head_mask=(
+                         cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None
+                     ),
+                     past_key_value=past_key_value,
+                     output_attentions=output_attentions,
+                     use_cache=use_cache,
+                 )
+             hidden_states = layer_outputs[0]
+
+             if use_cache:
+                 next_decoder_cache += (layer_outputs[3 if output_attentions else 1],)
+
+             if output_attentions:
+                 all_self_attns += (layer_outputs[1],)
+
+                 if encoder_hidden_states is not None:
+                     all_cross_attentions += (layer_outputs[2],)
+
+         # Resize to original shape
+         hidden_states = torch.nn.functional.pad(hidden_states.transpose(-1, -2), pad=(0, pad), value=0).transpose(-1, -2)
+
+         # add hidden states from the last decoder layer
+         if output_hidden_states:
+             all_hidden_states += (hidden_states,)
+
+         next_cache = next_decoder_cache if use_cache else None
+         if not return_dict:
+             return tuple(
+                 v
+                 for v in [hidden_states, next_cache, all_hidden_states, all_self_attns, all_cross_attentions]
+                 if v is not None
+             )
+         return BaseModelOutputWithPastAndCrossAttentions(
+             last_hidden_state=hidden_states,
+             past_key_values=next_cache,
+             hidden_states=all_hidden_states,
+             attentions=all_self_attns,
+             cross_attentions=all_cross_attentions,
+         )
+
+
+ class LSGBartModel(LSGBartPretrainedModel):
+
+     def __init__(self, config):
+
+         super().__init__(config)
+
+         padding_idx, vocab_size = config.pad_token_id, config.vocab_size
+         self.shared = nn.Embedding(vocab_size, config.d_model, padding_idx)
+         self.pass_global_tokens_to_decoder = config.pass_global_tokens_to_decoder
+         self.num_global_tokens = config.num_global_tokens
+         self.encoder = LSGBartEncoder(config, self.shared)
+         self.decoder = LSGBartDecoder(config, self.shared)
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def get_input_embeddings(self):
+         return self.shared
+
+     def set_input_embeddings(self, value):
+         self.shared = value
+         self.encoder.embed_tokens = self.shared
+         self.decoder.embed_tokens = self.shared
+
+     def get_encoder(self):
+         return self.encoder
+
+     def get_decoder(self):
+         return self.decoder
+
+     def forward(
+         self,
+         input_ids=None,
+         attention_mask=None,
+         decoder_input_ids=None,
+         decoder_attention_mask=None,
+         head_mask=None,
+         decoder_head_mask=None,
+         cross_attn_head_mask=None,
+         encoder_outputs=None,
+         past_key_values=None,
+         inputs_embeds=None,
+         decoder_inputs_embeds=None,
+         use_cache=None,
+         output_attentions=None,
+         output_hidden_states=None,
+         return_dict=None,
+         ):
+
+         # different to other models, Bart automatically creates decoder_input_ids from
+         # input_ids if no decoder_input_ids are provided
+         if decoder_input_ids is None and decoder_inputs_embeds is None:
+             decoder_input_ids = shift_tokens_right(
+                 input_ids, self.config.pad_token_id, self.config.decoder_start_token_id
+             )
+
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         )
+         use_cache = use_cache if use_cache is not None else self.config.use_cache
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         if encoder_outputs is None:
+             encoder_outputs = self.encoder(
+                 input_ids=input_ids,
+                 attention_mask=attention_mask,
+                 head_mask=head_mask,
+                 inputs_embeds=inputs_embeds,
+                 output_attentions=output_attentions,
+                 output_hidden_states=output_hidden_states,
+                 return_dict=return_dict,
+             )
+         # If the user passed a tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True
+         elif return_dict and not isinstance(encoder_outputs, BaseModelOutput):
+             encoder_outputs = BaseModelOutput(
+                 last_hidden_state=encoder_outputs[0],
+                 hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
+                 attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
+             )
+
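+         # The encoder prepended num_global_tokens states to its output, so the encoder attention mask
+         # must be extended to cover them before it is handed to the decoder's cross-attention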
+         # Pad mask for global tokens
+         if self.pass_global_tokens_to_decoder:
+             attention_mask = torch.nn.functional.pad(attention_mask, pad=(self.num_global_tokens, 0), value=1)
+
+         # decoder outputs consist of (dec_features, past_key_value, dec_hidden, dec_attn)
+         decoder_outputs = self.decoder(
+             input_ids=decoder_input_ids,
+             attention_mask=decoder_attention_mask,
+             encoder_hidden_states=encoder_outputs[0],
+             encoder_attention_mask=attention_mask,
+             head_mask=decoder_head_mask,
+             cross_attn_head_mask=cross_attn_head_mask,
+             past_key_values=past_key_values,
+             inputs_embeds=decoder_inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+
+         if not return_dict:
+             return decoder_outputs + encoder_outputs
+
+         return Seq2SeqModelOutput(
+             last_hidden_state=decoder_outputs.last_hidden_state,
+             past_key_values=decoder_outputs.past_key_values,
+             decoder_hidden_states=decoder_outputs.hidden_states,
+             decoder_attentions=decoder_outputs.attentions,
+             cross_attentions=decoder_outputs.cross_attentions,
+             encoder_last_hidden_state=encoder_outputs.last_hidden_state,
+             encoder_hidden_states=encoder_outputs.hidden_states,
+             encoder_attentions=encoder_outputs.attentions,
+         )
+
+
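+ # Only __init__ is redefined here: generation and the LM head logic are inherited from
+ # BartForConditionalGeneration, with the seq2seq backbone replaced by LSGBartModel.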
+ class LSGBartForConditionalGeneration(BartForConditionalGeneration, LSGBartPretrainedModel):
+
+     base_model_prefix = "model"
+     _keys_to_ignore_on_load_missing = [r"final_logits_bias", r"lm_head\.weight"]
+
+     def __init__(self, config):
+
+         LSGBartPretrainedModel.__init__(self, config)
+         self.model = LSGBartModel(config)
+         self.register_buffer("final_logits_bias", torch.zeros((1, self.model.shared.num_embeddings)))
+         self.lm_head = nn.Linear(config.d_model, self.model.shared.num_embeddings, bias=False)
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+
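+ # The remaining task heads follow the same pattern: the parent BART classes provide the forward pass
+ # and only the backbone is replaced with LSGBartModel.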
+ class LSGBartForSequenceClassification(BartForSequenceClassification, LSGBartPretrainedModel):
+
+     def __init__(self, config: LSGBartConfig, **kwargs):
+
+         LSGBartPretrainedModel.__init__(self, config, **kwargs)
+         self.model = LSGBartModel(config)
+         self.classification_head = LSGBartClassificationHead(
+             config.d_model,
+             config.d_model,
+             config.num_labels,
+             config.classifier_dropout,
+         )
+         self.model._init_weights(self.classification_head.dense)
+         self.model._init_weights(self.classification_head.out_proj)
+
+
+ class LSGBartForQuestionAnswering(BartForQuestionAnswering, LSGBartPretrainedModel):
+
+     def __init__(self, config: LSGBartConfig):
+
+         LSGBartPretrainedModel.__init__(self, config)
+
+         config.num_labels = 2
+         self.num_labels = config.num_labels
+
+         self.model = LSGBartModel(config)
+         self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels)
+
+         self.model._init_weights(self.qa_outputs)
+
+
+ class LSGBartDecoderWrapper(LSGBartPretrainedModel):
+     """
+     This wrapper class is a helper to correctly load pretrained checkpoints when the causal language model is
+     used in combination with the :class:`~transformers.EncoderDecoderModel` framework.
+     """
+
+     def __init__(self, config: LSGBartConfig):
+         super().__init__(config)
+         self.decoder = LSGBartDecoder(config)
+
+     def forward(self, *args, **kwargs):
+         return self.decoder(*args, **kwargs)
+
+
+ class LSGBartForCausalLM(BartForCausalLM, LSGBartPretrainedModel):
+
+     def __init__(self, config: LSGBartConfig):
+
+         config = copy.deepcopy(config)
+         config.is_decoder = True
+         config.is_encoder_decoder = False
+         LSGBartPretrainedModel.__init__(self, config)
+         self.model = LSGBartDecoderWrapper(config)
+
+         self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+
+ def str_to_class(classname):
+     return getattr(sys.modules[__name__], classname)
+
+ # Register model in Auto API
+ try:
+     LSGBartConfig.register_for_auto_class()
+     for key, value in AUTO_MAP.items():
+         str_to_class(value.split(".")[-1]).register_for_auto_class(key)
+ except Exception:
+     warn("AutoRegister isn't available, you'll have to manually copy modeling.py after .save_pretrained(...).")
+     warn("Update to transformers >= 4.17.0 to fix.")
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1bd0c7c9e856bbbf28661b9cbc4f39606ec06811a64e455632cee5c4ff7dcfc4
+ size 578416695
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"errors": "replace", "bos_token": "<s>", "eos_token": "</s>", "sep_token": "</s>", "cls_token": "<s>", "unk_token": "<unk>", "pad_token": "<pad>", "mask_token": "<mask>", "add_prefix_space": false, "trim_offsets": true, "model_max_length": 4096, "special_tokens_map_file": null, "name_or_path": "/data/ccondevaux/lsg/text-summarization/tmp_final/wcep/lsg_local", "tokenizer_class": "BartTokenizer"}
vocab.json ADDED
The diff for this file is too large to render. See raw diff