ccdv committed
Commit
3a0e5eb
1 Parent(s): dbb8b42
Files changed (1)
  1. README.md +3 -2
README.md CHANGED
@@ -13,7 +13,8 @@ pipeline_tag: fill-mask
 **This model relies on a custom modeling file; you need to add trust_remote_code=True.**\
 **See [\#13467](https://github.com/huggingface/transformers/pull/13467)**
 
-Conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg).
+LSG ArXiv [paper](https://arxiv.org/abs/2210.15497). \
+Github/conversion script is available at this [link](https://github.com/ccdv-ai/convert_checkpoint_to_lsg).
 
 * [Usage](#usage)
 * [Parameters](#parameters)
@@ -25,7 +26,7 @@ This model is adapted from [BART-base](https://huggingface.co/facebook/bart-base
 
 This model handles long sequences faster and more efficiently than Longformer (LED) or BigBird (Pegasus) from the hub, relying on Local + Sparse + Global attention (LSG).
 
-The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads sequences if needed (adaptive=True in the config). It is nevertheless recommended to truncate inputs with the tokenizer (truncation=True) and optionally pad them to a multiple of the block size (pad_to_multiple_of=...). \
+The model requires sequences whose length is a multiple of the block size. The model is "adaptive" and automatically pads sequences if needed (adaptive=True in the config). It is nevertheless recommended to truncate inputs with the tokenizer (truncation=True) and optionally pad them to a multiple of the block size (pad_to_multiple_of=...).
 
 Implemented in PyTorch.
 
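For reference, loading a model that ships a custom modeling file follows the usual `transformers` pattern. A minimal sketch, assuming a placeholder checkpoint name (any LSG checkpoint from this family loads the same way):

```python
from transformers import AutoModel, AutoTokenizer

# Placeholder checkpoint name, for illustration only.
model_name = "ccdv/lsg-bart-base-4096"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# trust_remote_code=True is required because the model uses a custom modeling file.
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
```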
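The padding recommendation in the second hunk translates into a short tokenizer call. A sketch, assuming the placeholder checkpoint above and a block size of 128 (check the checkpoint's config for the actual value):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ccdv/lsg-bart-base-4096")  # placeholder name

inputs = tokenizer(
    "A very long input document ...",
    truncation=True,         # recommended: truncate to the model's maximum length
    padding=True,            # enable padding so pad_to_multiple_of takes effect
    pad_to_multiple_of=128,  # assumed block size; the model otherwise pads adaptively
    return_tensors="pt",
)
```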