---
license: apache-2.0
tags:
datasets:
- imagenet-1k
---

# Vision Transformer (large-sized model) pre-trained with MAE

Vision Transformer (ViT) model pre-trained using the MAE (Masked Autoencoder) method. It was introduced in the paper [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377) by Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick and first released in [this repository](https://github.com/facebookresearch/mae).

Disclaimer: The team releasing MAE did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

The Vision Transformer (ViT) is a transformer encoder model (BERT-like). Images are presented to the model as a sequence of fixed-size patches (for this model, a 224x224 image split into 16x16 patches yields a sequence of 196 patches).

During pre-training, a large portion (75%) of the image patches is randomly masked out. First, the encoder encodes only the visible (unmasked) patches. Next, a learnable (shared) mask token is added at the positions of the masked patches. The decoder takes the encoded visible patches and the mask tokens as input and reconstructs raw pixel values at the masked positions.
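
To make the masking step concrete, here is a minimal sketch of MAE's per-sample random masking (shuffle patch indices by uniform noise and keep the 25% with the lowest noise), following the approach described in the paper. `random_masking` is an illustrative helper written for this card, not part of the `transformers` API:

```python
import torch

def random_masking(patches, mask_ratio=0.75):
    """Shuffle patches by uniform noise and keep the first (1 - mask_ratio) fraction."""
    batch, num_patches, dim = patches.shape
    len_keep = int(num_patches * (1 - mask_ratio))

    noise = torch.rand(batch, num_patches)           # one noise value per patch
    ids_shuffle = torch.argsort(noise, dim=1)        # ascending: low noise = keep
    ids_restore = torch.argsort(ids_shuffle, dim=1)  # inverse permutation

    # Keep the first len_keep patches of the shuffled sequence
    ids_keep = ids_shuffle[:, :len_keep]
    patches_kept = torch.gather(patches, 1, ids_keep.unsqueeze(-1).repeat(1, 1, dim))

    # Binary mask in the original patch order: 0 = kept, 1 = masked
    mask = torch.ones(batch, num_patches)
    mask[:, :len_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)
    return patches_kept, mask, ids_restore
```

Only the kept patches are passed through the encoder, which is what makes MAE pre-training efficient at high masking ratios; `ids_restore` lets the decoder put mask tokens back at the original masked positions.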

By pre-training, the model learns an inner representation of images that can then be used to extract features useful for downstream tasks: for instance, if you have a dataset of labeled images, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder.
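
As a sketch of that fine-tuning setup, assuming (as the `transformers` documentation suggests) that the pre-trained MAE encoder weights can be loaded directly into `ViTForImageClassification`; the pixel decoder weights are discarded and the classification head is newly initialized:

```python
from transformers import ViTForImageClassification

# Hypothetical 10-class setup: the MAE decoder is dropped and a randomly
# initialized linear head is placed on top of the pre-trained encoder,
# so the model must be fine-tuned on labeled data before use.
model = ViTForImageClassification.from_pretrained(
    "facebook/vit-mae-large",
    num_labels=10,
)
```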

## Intended uses & limitations

You can use the raw model for masked image modeling (reconstructing masked patches), but it is mostly intended to be fine-tuned on a downstream task such as image classification. See the [model hub](https://huggingface.co/models?search=facebook/vit-mae) to look for fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model:

```python
from transformers import AutoFeatureExtractor, ViTMAEForPreTraining
from PIL import Image
import requests

# Load an example image from the COCO validation set
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/vit-mae-large')
model = ViTMAEForPreTraining.from_pretrained('facebook/vit-mae-large')

# Preprocess the image into fixed-size patches and run a forward pass;
# the model randomly masks 75% of the patches and reconstructs them
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
loss = outputs.loss                # reconstruction loss on the masked patches
mask = outputs.mask                # binary mask (0 = kept, 1 = masked)
ids_restore = outputs.ids_restore  # indices to undo the patch shuffling
```
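
To turn the per-patch predictions back into an image-shaped tensor, a minimal sketch assuming the `unpatchify` helper available on `ViTMAEForPreTraining` in recent `transformers` versions; note that, depending on the checkpoint's `norm_pix_loss` setting, the predictions may live in a per-patch normalized pixel space rather than raw pixel values:

```python
import torch

# Map the per-patch pixel predictions back to (batch, channels, height, width)
with torch.no_grad():
    reconstruction = model.unpatchify(outputs.logits)

print(reconstruction.shape)  # torch.Size([1, 3, 224, 224])
```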

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2111-06377,
  author     = {Kaiming He and
                Xinlei Chen and
                Saining Xie and
                Yanghao Li and
                Piotr Doll{\'{a}}r and
                Ross B. Girshick},
  title      = {Masked Autoencoders Are Scalable Vision Learners},
  journal    = {CoRR},
  volume     = {abs/2111.06377},
  year       = {2021},
  url        = {https://arxiv.org/abs/2111.06377},
  eprinttype = {arXiv},
  eprint     = {2111.06377},
  timestamp  = {Tue, 16 Nov 2021 12:12:31 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2111-06377.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```