---
license: apache-2.0
tags:
- vision
datasets:
- imagenet-21k
---

# ImageGPT (small-sized model) 

ImageGPT (iGPT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 32x32. It was introduced in the paper [Generative Pretraining from Pixels](https://cdn.openai.com/papers/Generative_Pretraining_from_Pixels_V2.pdf) by Chen et al. and first released in [this repository](https://github.com/openai/image-gpt). See also the official [blog post](https://openai.com/blog/image-gpt/).


## Model description

ImageGPT (iGPT) is a GPT-like transformer decoder model pretrained on a large collection of images in a self-supervised fashion, namely ImageNet-21k, at a resolution of 32x32 pixels. 

The training objective is simply to predict the next pixel value, given the previous ones.

Through pre-training, the model learns an inner representation of images that can then be used to:
- extract features useful for downstream tasks: for example, one can use ImageGPT to produce fixed image features and train a linear classifier (such as a scikit-learn logistic regression model or an SVM) on top of them. This is also referred to as "linear probing"; a sketch follows this list.
- perform (un)conditional image generation. 
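
A minimal sketch of the linear-probing idea, using random stand-ins for the pooled ImageGPT features (in a real probe, the features would come from the feature-extraction example below; all names and shapes here are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-ins for pooled ImageGPT features and labels;
# in practice, features come from averaging the model's hidden states per image.
rng = np.random.default_rng(0)
train_features = rng.normal(size=(100, 512))  # (num_images, hidden_size)
train_labels = rng.integers(0, 2, size=100)   # binary labels, illustration only

# The "linear probe": a simple linear classifier on top of frozen features
probe = LogisticRegression(max_iter=1000)
probe.fit(train_features, train_labels)
print(probe.score(train_features, train_labels))
```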

## Intended uses & limitations

You can use the raw model for feature extraction or for (un)conditional image generation. 

### How to use

Here is how to use this model as a feature extractor:

```python
from transformers import AutoFeatureExtractor
from onnxruntime import InferenceSession
from datasets import load_dataset

# load image
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

# load model
feature_extractor = AutoFeatureExtractor.from_pretrained("openai/imagegpt-small")
session = InferenceSession("model/model.onnx")

# ONNX Runtime expects NumPy arrays as input
inputs = feature_extractor(image, return_tensors="np")
outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
```
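
The first element of `outputs` is the last hidden state, with one vector per pixel token. One simple way (an assumption, not part of the original example) to obtain a single fixed-size feature vector per image is to average over the sequence dimension; note that the paper reports that the best linear-probe features often come from layers in the middle of the network, not the last one.

```python
import numpy as np

# outputs[0] has shape (batch_size, sequence_length, hidden_size);
# mean-pooling over the sequence yields one feature vector per image
features = np.asarray(outputs[0]).mean(axis=1)
```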
Alternatively, you can use the model with a classification head, which returns logits:
```python
from transformers import AutoFeatureExtractor
from onnxruntime import InferenceSession
from datasets import load_dataset

# load image
dataset = load_dataset("huggingface/cats-image")
image = dataset["test"]["image"][0]

# load model
feature_extractor = AutoFeatureExtractor.from_pretrained("openai/imagegpt-small")
session = InferenceSession("model/model_classification.onnx")

# ONNX Runtime expects NumPy arrays as input
inputs = feature_extractor(image, return_tensors="np")
outputs = session.run(output_names=["logits"], input_feed=dict(inputs))
```
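
The logits can then be turned into a predicted class index, assuming one logit per class (a minimal follow-up to the example above):

```python
import numpy as np

logits = outputs[0]  # shape (batch_size, num_labels)
predicted_class = int(np.argmax(logits, axis=-1)[0])
```

For (un)conditional image generation, one would typically use the PyTorch model from `transformers` rather than an ONNX session. A hedged sketch of unconditional sampling (sampling parameters are illustrative):

```python
import torch
from transformers import ImageGPTForCausalImageModeling

model = ImageGPTForCausalImageModeling.from_pretrained("openai/imagegpt-small")

# unconditional generation starts from the special SOS token (the last vocab id)
context = torch.full((1, 1), model.config.vocab_size - 1)
output = model.generate(
    input_ids=context,
    max_length=model.config.n_positions + 1,
    do_sample=True,
    top_k=40,
)
# each generated id is a color-cluster index; mapping ids back through the
# 512 cluster centroids recovers a 32x32 RGB image
```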
## Original implementation

Follow [this link](https://huggingface.co/openai/imagegpt-small) to see the original implementation.

## Training data

The ImageGPT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21,843 classes. 

## Training procedure

### Preprocessing

Images are first resized/rescaled to the same resolution (32x32) and normalized across the RGB channels. Next, color clustering is performed: every pixel is mapped to the nearest of 512 possible cluster values. This way, one ends up with a sequence of 32x32 = 1024 cluster indices rather than 32x32x3 = 3072 raw channel values, which would be prohibitively long for Transformer-based models. A sketch of this step follows.
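
A sketch of the quantization step (the `clusters` array stands in for the 512 RGB centroids shipped with the feature extractor; names and shapes are illustrative, not the library's API):

```python
import numpy as np

def color_quantize(pixels: np.ndarray, clusters: np.ndarray) -> np.ndarray:
    """Map each normalized RGB pixel to the index of its nearest centroid.

    pixels:   (1024, 3) array, the flattened 32x32 image
    clusters: (512, 3) array of centroid colors
    returns:  (1024,) array of cluster indices in [0, 512)
    """
    # squared Euclidean distance from every pixel to every centroid
    distances = ((pixels[:, None, :] - clusters[None, :, :]) ** 2).sum(axis=-1)
    return distances.argmin(axis=-1)
```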

### Pretraining

Training details can be found in section 3.4 of v2 of the paper.

## Evaluation results

For evaluation results on several image classification benchmarks, we refer to the original paper.

### BibTeX entry and citation info

```bibtex
@InProceedings{pmlr-v119-chen20s,
  title     = {Generative Pretraining From Pixels},
  author    = {Chen, Mark and Radford, Alec and Child, Rewon and Wu, Jeffrey and Jun, Heewoo and Luan, David and Sutskever, Ilya},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {1691--1703},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/chen20s/chen20s.pdf},
  url       = {https://proceedings.mlr.press/v119/chen20s.html}
}
```

```bibtex
@inproceedings{deng2009imagenet,
  title        = {ImageNet: A large-scale hierarchical image database},
  author       = {Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
  booktitle    = {2009 IEEE Conference on Computer Vision and Pattern Recognition},
  pages        = {248--255},
  year         = {2009},
  organization = {IEEE}
}
```