AshtonIsNotHere committed
Commit
c240683
1 Parent(s): a7580aa

Updated README for clarity

Files changed (1)
README.md +1 -1
README.md CHANGED
@@ -12,7 +12,7 @@ datasets:
 ## XLM-R Longformer Model / XLM-Long
 This is an XLM-RoBERTa longformer model that was pre-trained from the XLM-RoBERTa checkpoint using the Longformer [pre-training scheme](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) on the English WikiText-103 corpus.
 
-This model is identical to [markussagen's xlm-r longformer model](https://huggingface.co/markussagen/xlm-roberta-longformer-base-4096), except that the weights have been transferred to a Longformer model so that it can be loaded with `.from_pretrained()`.
+This model is identical to [markussagen's xlm-r longformer model](https://huggingface.co/markussagen/xlm-roberta-longformer-base-4096), except that the weights have been transferred to a Longformer model so that it can be loaded with `AutoModel.from_pretrained()` without the need for external libraries.
 
 ## How to Use
 The model can be used as expected to fine-tune on a downstream task.
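
For reference, a minimal loading sketch of the `AutoModel.from_pretrained()` usage this commit documents. The Hub model id below is a placeholder (the commit does not state this repository's id), and the 4096-token length is assumed from the linked base model's name.

```python
from transformers import AutoModel, AutoTokenizer

# Placeholder Hub id: substitute this repository's actual model id.
model_id = "<org-or-user>/xlm-roberta-longformer-base-4096"

# Because the weights are stored in a Longformer architecture, the
# standard Auto classes resolve the model without external conversion code.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Long inputs (assumed up to 4096 tokens) are encoded as usual.
inputs = tokenizer("A long document ...", return_tensors="pt")
outputs = model(**inputs)
```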