Commit ea67f32 (1 parent: fb5c341)
maorivgi committed: initial commit

Files changed (3):
  1. README.md (+98, -0)
  2. config.json (+9, -0)
  3. tokenizer_config.json (+5, -0)
README.md CHANGED
@@ -1,3 +1,101 @@
---
license: mit
language: en
---

# BART-SLED (SLiding-Encoder and Decoder, large-sized model)

SLED models use pretrained, short-range encoder-decoder models and apply them to
long-text inputs by splitting the input into multiple overlapping chunks, encoding each chunk independently, and performing fusion-in-decoder.

## Model description

This SLED model is based on the BART model, which is described in its [model card](https://huggingface.co/facebook/bart-large).
BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works
well for comprehension tasks (e.g. text classification, question answering). When used as a BART-SLED model, it can be applied to long-text tasks.
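
To make the chunking behaviour described above concrete, here is a minimal, self-contained sketch of how a long token sequence could be split into overlapping chunks before encoding. This is an illustration only: the function and parameter names (`split_into_chunks`, `chunk_size`, `overlap`) are hypothetical and not part of the `py-sled` API, which performs the splitting and fusion-in-decoder internally.

```python
# Illustrative sketch of SLED's overlapping-chunk idea (py-sled does this internally;
# the function and parameter names here are hypothetical, not the library's API).
from typing import List

def split_into_chunks(input_ids: List[int], chunk_size: int = 256, overlap: int = 128) -> List[List[int]]:
    """Split a long token-id sequence into chunks of `chunk_size` tokens,
    with consecutive chunks overlapping by `overlap` tokens."""
    stride = chunk_size - overlap
    chunks = []
    for start in range(0, max(len(input_ids) - overlap, 1), stride):
        chunks.append(input_ids[start:start + chunk_size])
    return chunks

# Example: a 1,000-token document becomes a handful of overlapping 256-token chunks.
dummy_ids = list(range(1000))
print([len(chunk) for chunk in split_into_chunks(dummy_ids)])
```

The corresponding settings in the released `config.json` shown further below are the `context_size` and `window_fraction` fields.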

## Intended uses & limitations

You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset.

### How to use
To use the model, you first need to install `py-sled` in your environment (or clone the code from the [official repository](https://github.com/Mivg/SLED/blob/main/README.md)):
```
pip install py-sled
```
For more installation instructions, see [here](https://github.com/Mivg/SLED#Installation).

Once installed, SLED is fully compatible with HuggingFace's AutoClasses (AutoTokenizer, AutoConfig, AutoModel
and AutoModelForCausalLM) and can be loaded using the `from_pretrained` methods:
```python
from transformers import AutoModel

import sled  # *** required so that SledModels will be registered for the AutoClasses ***

model = AutoModel.from_pretrained('tau/bart-large-sled')
```

Here is how to use this model in PyTorch:

```python
from sled import SledTokenizer, SledModel

tokenizer = SledTokenizer.from_pretrained('tau/bart-large-sled')
model = SledModel.from_pretrained('tau/bart-large-sled')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
You can also replace SledModel with SledModelForConditionalGeneration for seq2seq generation:
```python
from sled import SledModelForConditionalGeneration

model = SledModelForConditionalGeneration.from_pretrained('tau/bart-large-sled')
```
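As a usage sketch for generation (this assumes the standard HuggingFace `generate`/`batch_decode` API exposed by the underlying BART model; the input text and `max_new_tokens` value are purely illustrative):
```python
from sled import SledTokenizer, SledModelForConditionalGeneration

tokenizer = SledTokenizer.from_pretrained('tau/bart-large-sled')
model = SledModelForConditionalGeneration.from_pretrained('tau/bart-large-sled')

# Encode a (potentially long) document and generate an output sequence.
inputs = tokenizer("Dogs are great for you because ...", return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```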
In case you wish to apply SLED to a task containing a prefix (e.g. a question) that should be given as context to
every chunk, you can also pass the `prefix_length` tensor input (a LongTensor of length equal to the batch size):
```python
import torch
from transformers import AutoTokenizer, AutoModel

import sled  # *** required so that SledModels will be registered for the AutoClasses ***

tokenizer = AutoTokenizer.from_pretrained('tau/bart-large-sled')
model = AutoModel.from_pretrained('tau/bart-large-sled')
document_input_ids = tokenizer("Dogs are great for you.", return_tensors="pt").input_ids
prefix_input_ids = tokenizer("Are dogs good for you?", return_tensors="pt").input_ids
input_ids = torch.cat((prefix_input_ids, document_input_ids), dim=-1)
attention_mask = torch.ones_like(input_ids)
prefix_length = torch.LongTensor([[prefix_input_ids.size(1)]])

outputs = model(input_ids=input_ids, attention_mask=attention_mask, prefix_length=prefix_length)
last_hidden_states = outputs.last_hidden_state
```

### BibTeX entry and citation info

Please cite both the SLED [paper](https://arxiv.org/abs/2208.00748.pdf) and the BART [paper](https://arxiv.org/abs/1910.13461) by Lewis et al.

```bibtex
@inproceedings{Ivgi2022EfficientLU,
  title={Efficient Long-Text Understanding with Short-Text Models},
  author={Maor Ivgi and Uri Shaham and Jonathan Berant},
  year={2022}
}
```

```bibtex
@article{DBLP:journals/corr/abs-1910-13461,
  author     = {Mike Lewis and
                Yinhan Liu and
                Naman Goyal and
                Marjan Ghazvininejad and
                Abdelrahman Mohamed and
                Omer Levy and
                Veselin Stoyanov and
                Luke Zettlemoyer},
  title      = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language
                Generation, Translation, and Comprehension},
  journal    = {CoRR},
  volume     = {abs/1910.13461},
  year       = {2019},
  url        = {http://arxiv.org/abs/1910.13461},
  eprinttype = {arXiv},
  eprint     = {1910.13461},
  timestamp  = {Thu, 31 Oct 2019 14:02:26 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
config.json ADDED
@@ -0,0 +1,9 @@
{
  "model_type": "tau/sled",
  "underlying_config": "facebook/bart-large",
  "context_size": 256,
  "window_fraction": 0.5,
  "prepend_prefix": true,
  "encode_prefix": true,
  "sliding_method": "dynamic"
}
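Since the README above states that SLED is compatible with `AutoConfig` once `sled` is imported, a minimal sketch for loading and inspecting this configuration could look like the following (accessing the extra fields as attributes is an assumption based on how `PretrainedConfig` stores unknown keys):
```python
from transformers import AutoConfig

import sled  # registers the SLED config class with the Auto* classes

config = AutoConfig.from_pretrained('tau/bart-large-sled')
# These values correspond to the fields in config.json above.
print(config.context_size, config.window_fraction, config.sliding_method)
```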
tokenizer_config.json ADDED
@@ -0,0 +1,5 @@
{
  "tokenizer_class": "SledTokenizer",
  "base_tokenizer": "facebook/bart-large",
  "model_max_length": 16384
}
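Similarly, a small sketch for checking that the loaded tokenizer reflects this configuration (the BART base tokenizer with a 16384-token `model_max_length`); this assumes `AutoTokenizer` resolves to `SledTokenizer` after `import sled`, as described in the README above:
```python
from transformers import AutoTokenizer

import sled  # registers SledTokenizer with the Auto* classes

tokenizer = AutoTokenizer.from_pretrained('tau/bart-large-sled')
print(type(tokenizer).__name__)    # expected: SledTokenizer
print(tokenizer.model_max_length)  # expected: 16384
```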