Tags: Feature Extraction · Transformers · PyTorch · English · data2vec-text · exbert · Inference Endpoints
patrickvonplaten committed on
Commit bd0db19 · 1 Parent(s): f0e7dae

Update README.md

Files changed (1): README.md +6 -3
README.md CHANGED

@@ -18,6 +18,12 @@ makes a difference between english and English.
 Disclaimer: The team releasing Data2Vec-Text did not write a model card for this model so this model card has been written by
 the Hugging Face team.
 
+## Pre-Training method
+
+![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/data2vec.png)
+
+For more information, please take a look at the [official paper](https://arxiv.org/abs/2202.03555).
+
 ## Abstract
 
 *While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because
@@ -58,9 +64,6 @@ The RoBERTa model was pretrained on the reunion of five datasets:
 
 Together theses datasets weight 160GB of text.
 
-![model image](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/data2vec.png)
-
-
 ### BibTeX entry and citation info
 
 ```bibtex
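The new "Pre-Training method" section only links out to the paper. As a gloss, here is a minimal sketch of the data2vec objective that paper (arXiv 2202.03555) describes: a student encoder receives a masked input and regresses the average of the top-K layer representations produced by a teacher that sees the unmasked input, while the teacher's weights track the student as an exponential moving average. Everything below (TinyEncoder, MASK_ID, the toy sizes) is an illustrative assumption, not this repository's training code.

```python
import copy
import torch
import torch.nn.functional as F
from torch import nn

class TinyEncoder(nn.Module):
    """Toy transformer encoder that returns the output of every layer."""
    def __init__(self, vocab_size=100, dim=32, num_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
            for _ in range(num_layers)
        )

    def forward(self, tokens):
        h, hidden = self.embed(tokens), []
        for layer in self.layers:
            h = layer(h)
            hidden.append(h)
        return hidden  # list of (batch, seq, dim) tensors, one per layer

MASK_ID = 0                                # hypothetical mask token id
TOP_K, TAU = 2, 0.999                      # layers averaged as targets; EMA decay

student = TinyEncoder()
teacher = copy.deepcopy(student)           # teacher starts as a copy of the student
for p in teacher.parameters():
    p.requires_grad_(False)

tokens = torch.randint(1, 100, (2, 16))    # toy batch of token ids
mask = torch.rand(2, 16) < 0.15            # positions to mask out

# Teacher sees the *unmasked* input; targets are the mean of its top-K layers.
with torch.no_grad():
    target = torch.stack(teacher(tokens)[-TOP_K:]).mean(dim=0)

# Student sees the masked input and regresses the targets at masked positions.
pred = student(tokens.masked_fill(mask, MASK_ID))[-1]
loss = F.smooth_l1_loss(pred[mask], target[mask])
loss.backward()  # step the student with any optimizer here

# The teacher then tracks the student via an exponential moving average.
with torch.no_grad():
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(TAU).add_(p_s, alpha=1 - TAU)
```

The smooth L1 regression loss and the averaged top-K targets follow the paper; the tiny encoder exists only to keep the sketch self-contained and runnable.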
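The tags at the top mark the checkpoint for feature extraction with Transformers and PyTorch. A minimal usage sketch, assuming the facebook/data2vec-text-base checkpoint id and a transformers release recent enough to ship Data2Vec-Text support:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Assumed checkpoint id; substitute the actual repo id for this model card.
checkpoint = "facebook/data2vec-text-base"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

inputs = tokenizer("Hello, world!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per token: shape (batch, sequence_length, hidden_size).
features = outputs.last_hidden_state
print(features.shape)
```

`AutoModel` resolves to the Data2Vec-Text encoder, so `last_hidden_state` yields one contextual vector per token, which is the output a feature-extraction workflow consumes.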