---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---

# Swin Transformer v2 (base-sized model)

Swin Transformer v2 model pre-trained on ImageNet-1k at resolution 256x256. It was introduced in the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer).

Disclaimer: The team releasing Swin Transformer v2 did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers, and its computational complexity is linear in the input image size because self-attention is computed only within each local window (shown in red). It can therefore serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computational complexity in the input image size because self-attention is computed globally.
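
The hierarchy is easy to observe directly. Below is a minimal sketch (not part of the original card) that runs the backbone via `Swinv2Model` with `output_hidden_states=True` and prints the shape of each stage's hidden states; the token count shrinks as patches are merged while the channel width grows:

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, Swinv2Model

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-base-patch4-window16-256")
model = Swinv2Model.from_pretrained("microsoft/swinv2-base-patch4-window16-256")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# one (batch, tokens, channels) shape per stage: patch merging reduces the
# number of tokens while the channel width grows, i.e. the hierarchical
# feature maps described above
for i, hidden in enumerate(outputs.hidden_states):
    print(f"stage {i}: {tuple(hidden.shape)}")
```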

Swin Transformer v2 adds three main improvements: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) a log-spaced continuous position bias method to effectively transfer models pre-trained on low-resolution images to downstream tasks with high-resolution inputs; 3) a self-supervised pre-training method, SimMIM, to reduce the need for vast amounts of labeled images.
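
To make the first two changes concrete, here is a hedged sketch (a simplification, not the reference implementation): single-head scaled cosine attention with a learnable temperature `tau`, and the log-spaced coordinate transform that feeds the continuous position bias. The position-bias MLP and the bias term itself are omitted.

```python
import torch
import torch.nn.functional as F

def scaled_cosine_attention(q, k, v, tau):
    # cosine similarity between queries and keys instead of a dot product,
    # divided by a learnable temperature tau (the paper clamps tau >= 0.01);
    # the relative position bias term is omitted in this sketch
    sim = F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)
    return torch.softmax(sim / tau, dim=-1) @ v

def log_spaced_coords(delta):
    # log-spaced relative coordinates, sign(d) * log(1 + |d|), which keep the
    # extrapolation range small when transferring to larger window sizes
    return torch.sign(delta) * torch.log1p(delta.abs())
```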

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/swin_transformer_architecture.png)

[Source](https://paperswithcode.com/method/swin-transformer)

## Intended uses & limitations

You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swinv2) to look for fine-tuned versions on a task that interests you.
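
You can also search programmatically; a small convenience sketch (assuming the `huggingface_hub` client, not part of the original card):

```python
from huggingface_hub import list_models

# list a few image-classification checkpoints whose name matches "swinv2"
for m in list_models(search="swinv2", task="image-classification", limit=5):
    print(m.id)
```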

### How to use

Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import requests

# load a sample image from the COCO 2017 validation set
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-base-patch4-window16-256")
model = AutoModelForImageClassification.from_pretrained("microsoft/swinv2-base-patch4-window16-256")

# preprocess the image and run a forward pass
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
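
If you only need the predicted labels, a hedged alternative (my sketch, not from the original card) is the `image-classification` pipeline, which handles the pre- and post-processing and can return the top-k classes:

```python
from transformers import pipeline

# the pipeline accepts a URL or local path and returns the top-k classes
classifier = pipeline("image-classification", model="microsoft/swinv2-base-patch4-window16-256")
for pred in classifier("http://images.cocodataset.org/val2017/000000039769.jpg", top_k=5):
    print(f"{pred['label']}: {pred['score']:.3f}")
```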

For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swinv2.html).

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2111-09883,
  author     = {Ze Liu and
                Han Hu and
                Yutong Lin and
                Zhuliang Yao and
                Zhenda Xie and
                Yixuan Wei and
                Jia Ning and
                Yue Cao and
                Zheng Zhang and
                Li Dong and
                Furu Wei and
                Baining Guo},
  title      = {Swin Transformer {V2:} Scaling Up Capacity and Resolution},
  journal    = {CoRR},
  volume     = {abs/2111.09883},
  year       = {2021},
  url        = {https://arxiv.org/abs/2111.09883},
  eprinttype = {arXiv},
  eprint     = {2111.09883},
  timestamp  = {Thu, 02 Dec 2021 15:54:22 +0100},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2111-09883.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```