---
datasets:
- imagenet-1k
language:
- en
library_name: transformers
license: cc-by-nc-4.0
pipeline_tag: image-classification
---
# Hiera Model (Tiny, fine-tuned on IN1K)


**Hiera** is a _hierarchical_ vision transformer that is fast, powerful, and, above all, _simple_. It was introduced in the paper [Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles](https://arxiv.org/abs/2306.00989/) and outperforms the state-of-the-art across a wide array of image and video tasks _while being much faster_. 

<p align="center">
  <img src="https://github.com/facebookresearch/hiera/raw/main/examples/img/inference_speed.png" width="75%">
</p>

## How does it work?
![A diagram of Hiera's architecture.](https://github.com/facebookresearch/hiera/raw/main/examples/img/hiera_arch.png)

Vision transformers like [ViT](https://arxiv.org/abs/2010.11929) use the same spatial resolution and number of features throughout the whole network. But this is inefficient: the early layers don't need that many features, and the later layers don't need that much spatial resolution. Prior hierarchical models like [ResNet](https://arxiv.org/abs/1512.03385) accounted for this by using fewer features at the start and less spatial resolution at the end.
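To make the resolution/width trade-off concrete, here is a tiny, self-contained sketch of a four-stage hierarchical plan in the style of Hiera-Tiny (4x4 patch embedding, 2x2 pooling between stages, channel width doubling per stage). The specific numbers (96 starting channels, four stages) follow the paper's Hiera-T configuration and are intended as an illustration rather than values read from this checkpoint:

```python
# Illustrative only: a nominal Hiera-Tiny-style stage plan (4x4 patch embedding,
# 2x2 pooling between stages, channel width doubling per stage). The exact
# numbers are an assumption based on the paper, not read from the checkpoint.
image_size = 224
patch_stride = 4          # initial patchify: 224 -> 56 tokens per side
base_channels = 96        # embedding dimension of the first stage
num_stages = 4

side = image_size // patch_stride
channels = base_channels
for stage in range(num_stages):
    print(f"stage {stage}: {side}x{side} tokens, {channels} channels")
    side //= 2            # spatial resolution halves between stages
    channels *= 2         # feature width doubles between stages
```

Early stages see a large token grid with few channels; later stages see a small grid with many channels, which is exactly the inefficiency-fixing structure described above.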

Several domain-specific vision transformers that employ this hierarchical design have been introduced, such as [Swin](https://arxiv.org/abs/2103.14030) or [MViT](https://arxiv.org/abs/2104.11227). But in the pursuit of state-of-the-art results using fully supervised training on ImageNet-1K, these models have become more and more complicated as they add specialized modules to make up for the spatial biases that ViTs lack. While these changes produce effective models with attractive FLOP counts, under the hood the added complexity makes these models _slower_ overall.

We show that a lot of this bulk is actually _unnecessary_. Instead of manually adding spatial biases through architectural changes, we opt to _teach_ the model these biases. By training with [MAE](https://arxiv.org/abs/2111.06377), we can simplify or remove _all_ of these bulky modules in existing transformers and _increase accuracy_ in the process. The result is Hiera, an extremely efficient and simple architecture that outperforms the state-of-the-art in several image and video recognition tasks.
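For completeness, the `transformers` port also exposes this MAE-style pretraining objective through `HieraForPreTraining`. The following is a minimal sketch of a masked-image-modeling forward pass; the MAE checkpoint name `facebook/hiera-tiny-224-mae-hf` is an assumption about what is published on the Hub, and note that this card's checkpoint is the classification fine-tune, not the MAE-pretrained weights:

```python
# A minimal sketch of the MAE-style pretraining objective, assuming a companion
# MAE checkpoint ("facebook/hiera-tiny-224-mae-hf") is available on the Hub.
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, HieraForPreTraining

mae_id = "facebook/hiera-tiny-224-mae-hf"  # assumed checkpoint name
processor = AutoImageProcessor.from_pretrained(mae_id)
model = HieraForPreTraining.from_pretrained(mae_id)

image_url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The model masks a random subset of mask units and reconstructs their pixels;
# `loss` is the reconstruction error on the masked regions.
print(outputs.loss)
```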

## Intended uses & limitations

Hiera can be used for image classification, feature extraction, or masked image modeling. This specific checkpoint is intended for **image classification**.

### How to use

```python
import requests

import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "facebook/hiera-tiny-224-in1k-hf"
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the preprocessor and the fine-tuned classification model
image_processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id).to(device)

# Fetch an example image (two cats on a couch, from COCO val2017)
image_url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)

# Preprocess, run inference, and map the top logit to its ImageNet-1K label
inputs = image_processor(images=image, return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs)

predicted_id = outputs.logits.argmax(dim=-1).item()
predicted_class = model.config.id2label[predicted_id]  # 'tabby, tabby cat'
```
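The same checkpoint can also serve as a plain feature extractor by loading the head-less backbone with `AutoModel` (the classification head is dropped, so `transformers` may warn about unused weights). A minimal sketch:

```python
# Feature-extraction sketch: load the backbone without the classification head
# and read out the final-stage token features.
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

model_id = "facebook/hiera-tiny-224-in1k-hf"
image_processor = AutoImageProcessor.from_pretrained(model_id)
backbone = AutoModel.from_pretrained(model_id)  # loads HieraModel, no classifier

image_url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)

inputs = image_processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = backbone(**inputs)

features = outputs.last_hidden_state  # (batch, tokens, hidden_dim)
print(features.shape)
```

`last_hidden_state` holds the token features of the final stage; depending on the installed `transformers` version, a pooled per-image vector may also be exposed as `pooler_output`.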


### BibTeX entry and citation info
If you use Hiera or this code in your work, please cite:
```
@article{ryali2023hiera,
  title={Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles},
  author={Ryali, Chaitanya and Hu, Yuan-Ting and Bolya, Daniel and Wei, Chen and Fan, Haoqi and Huang, Po-Yao and Aggarwal, Vaibhav and Chowdhury, Arkabandhu and Poursaeed, Omid and Hoffman, Judy and Malik, Jitendra and Li, Yanghao and Feichtenhofer, Christoph},
  journal={ICML},
  year={2023}
}
```