---
license: other
license_name: apple-sample-code-license
license_link: LICENSE
---
A CLIP (Contrastive Language-Image Pre-training) model trained on DFN-5B.
Data Filtering Networks (DFNs) are small networks used to automatically filter large pools of uncurated data.
This model was trained on 5B images that were filtered from a pool of 43B uncurated image-text pairs
(12.8B image-text pairs from CommonPool-12.8B + 30B additional public image-text pairs).

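As a rough illustration of how a filtering network is applied, the sketch below uses a pretrained CLIP model to score each image-text pair by cosine similarity and keep only the top-scoring fraction of the pool. The checkpoint name, the toy pool, and the quantile threshold are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

# Assumption: any pretrained CLIP checkpoint can stand in as the filtering
# network; this smaller DFN-2B model is chosen here only for illustration.
dfn, preprocess = create_model_from_pretrained('hf-hub:apple/DFN2B-CLIP-ViT-B-16')
tokenizer = get_tokenizer('ViT-B-16')
dfn.eval()

def dfn_scores(pil_images, captions):
    """Cosine similarity of each image-text pair under the filtering network."""
    pixels = torch.stack([preprocess(img) for img in pil_images])
    tokens = tokenizer(captions)
    with torch.no_grad():
        img = F.normalize(dfn.encode_image(pixels), dim=-1)
        txt = F.normalize(dfn.encode_text(tokens), dim=-1)
    return (img * txt).sum(dim=-1)

# Toy pool: a well-captioned pair vs. filename-style caption noise.
image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
scores = dfn_scores([image, image], ['a plate of beignets', 'IMG_1234.jpg'])

# A real pipeline keeps only the top-scoring fraction of the pool, e.g. via a
# quantile threshold (roughly the top 12% would mirror the 5B-of-43B ratio above).
keep = scores >= scores.quantile(0.5)
```
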
This model has been converted to PyTorch from the original JAX checkpoints from Axlearn (https://github.com/apple/axlearn).
These weights are directly usable in OpenCLIP (image + text).

## Model Details

- **Model Type:** Contrastive Image-Text, Zero-Shot Image Classification.
- **Dataset:** DFN-5B
- **Papers:**
  - Data Filtering Networks: https://arxiv.org/abs/2309.17425
- **Samples Seen:** 39B (224 x 224) + 5B (384 x 384)

## Model Metrics

| dataset | metric |
|:-----------------------|---------:|
| ImageNet 1k | 0.84218 |
| Caltech-101 | 0.954479 |
| CIFAR-10 | 0.9879 |
| CIFAR-100 | 0.9041 |
| CLEVR Counts | 0.362467 |
| CLEVR Distance | 0.206067 |
| Country211 | 0.37673 |
| Describable Textures | 0.71383 |
| EuroSAT | 0.608333 |
| FGVC Aircraft | 0.719938 |
| Food-101 | 0.963129 |
| GTSRB | 0.679018 |
| ImageNet Sketch | 0.73338 |
| ImageNet v2 | 0.7837 |
| ImageNet-A | 0.7992 |
| ImageNet-O | 0.3785 |
| ImageNet-R | 0.937633 |
| KITTI Vehicle Distance | 0.38256 |
| MNIST | 0.8372 |
| ObjectNet <sup>1</sup> | 0.796867 |
| Oxford Flowers-102 | 0.896834 |
| Oxford-IIIT Pet | 0.966841 |
| Pascal VOC 2007 | 0.826255 |
| PatchCamelyon | 0.695953 |
| Rendered SST2 | 0.566722 |
| RESISC45 | 0.755079 |
| Stanford Cars | 0.959955 |
| STL-10 | 0.991125 |
| SUN397 | 0.772799 |
| SVHN | 0.671251 |
| Flickr | 0.8808 |
| MSCOCO | 0.636889 |
| WinoGAViL | 0.571813 |
| iWildCam | 0.224911 |
| Camelyon17 | 0.711536 |
| FMoW | 0.209024 |
| Dollar Street | 0.71729 |
| GeoDE | 0.935699 |
| **Average** | **0.709421** |

<sup>1</sup> Center-crop pre-processing used for ObjectNet (squashing results in lower accuracy of 0.737).
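
The scores above are zero-shot results from a standard CLIP benchmark suite. The sketch below shows, under simplified assumptions (CIFAR-10 as a small stand-in dataset and a single "a photo of a {class}" prompt rather than full prompt ensembles), how a zero-shot top-1 number of this kind can be computed; it is not the exact evaluation pipeline behind the table.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision.datasets import CIFAR10
from open_clip import create_model_from_pretrained, get_tokenizer

model, preprocess = create_model_from_pretrained('hf-hub:apple/DFN5B-CLIP-ViT-H-14-384')
tokenizer = get_tokenizer('ViT-H-14')
model.eval()

# CIFAR-10 as a stand-in; the table covers many more datasets.
dataset = CIFAR10(root='./data', train=False, download=True, transform=preprocess)
loader = DataLoader(dataset, batch_size=32)

# Build a zero-shot classifier from the class names with one simple prompt.
prompts = [f"a photo of a {c}" for c in dataset.classes]
with torch.no_grad():
    classifier = F.normalize(model.encode_text(tokenizer(prompts)), dim=-1)

correct = total = 0
with torch.no_grad():
    for images, labels in loader:
        feats = F.normalize(model.encode_image(images), dim=-1)
        preds = (feats @ classifier.T).argmax(dim=-1)
        correct += (preds == labels).sum().item()
        total += labels.numel()

print(f"zero-shot top-1 accuracy: {correct / total:.4f}")
```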

## Model Usage

### With OpenCLIP
```python
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

# Load the model and its matching preprocessing transform from the Hub.
model, preprocess = create_model_from_pretrained('hf-hub:apple/DFN5B-CLIP-ViT-H-14-384')
tokenizer = get_tokenizer('ViT-H-14')

image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)

labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list, context_length=model.context_length)

with torch.no_grad(), torch.cuda.amp.autocast():
    # Encode and L2-normalize image and text features.
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # Per-label probabilities from scaled, biased similarities.
    text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias)

zipped_list = list(zip(labels_list, [round(p.item(), 3) for p in text_probs[0]]))
print("Label probabilities: ", zipped_list)
```
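
Note: depending on the installed OpenCLIP version, the loaded model may expose no `logit_bias` (the sigmoid-with-bias readout comes from SigLIP-style models). If that is the case in your environment, the standard CLIP softmax readout below is a reasonable fallback; treat it as a sketch, not the card's official usage.

```python
# Fallback readout (assumption: only needed when model.logit_bias is unset).
if getattr(model, 'logit_bias', None) is None:
    text_probs = (image_features @ text_features.T * model.logit_scale.exp()).softmax(dim=-1)
```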

## Citation

```bibtex
@article{fang2023data,
  title={Data Filtering Networks},
  author={Fang, Alex and Jose, Albin Madappally and Jain, Amit and Schmidt, Ludwig and Toshev, Alexander and Shankar, Vaishaal},
  journal={arXiv preprint arXiv:2309.17425},
  year={2023}
}
```