geolocal committed on
Commit
8fb3273
1 Parent(s): df930b1

Upload 3 files

Files changed (3)
  1. README.md +193 -0
  2. nagasaki.jpg +0 -0
  3. sanfrancisco.jpeg +0 -0
README.md ADDED
@@ -0,0 +1,193 @@
+ ---
+ license: cc-by-nc-4.0
+ language:
+ - en
+ pipeline_tag: zero-shot-image-classification
+ widget:
+ - src: https://huggingface.co/lhaas/StreetCLIP/resolve/main/nagasaki.jpg
+   candidate_labels: China, South Korea, Japan, Philippines, Taiwan, Vietnam, Cambodia
+   example_title: Countries
+ - src: https://huggingface.co/lhaas/StreetCLIP/resolve/main/sanfrancisco.jpeg
+   candidate_labels: San Jose, San Diego, Los Angeles, Las Vegas, San Francisco, Seattle
+   example_title: Cities
+ - src: https://huggingface.co/lhaas/StreetCLIP/resolve/main/australia.jpeg
+   candidate_labels: tropical climate, dry climate, temperate climate, continental climate, polar climate
+   example_title: Climate
+ library_name: transformers
+ tags:
+ - geolocalization
+ - geolocation
+ - geographic
+ - street
+ - climate
+ - clip
+ - urban
+ - rural
+ ---
+ # Model Card for StreetCLIP
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+ # Model Details
+
+ ## Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+ - **Developed by:** Authors not disclosed
+ - **Model type:** [CLIP](https://openai.com/blog/clip/)
+ - **Language:** English
+ - **License:** Creative Commons Attribution Non-Commercial 4.0 (CC BY-NC 4.0)
+ - **Finetuned from model:** [openai/clip-vit-large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336)
+
+ ## Model Sources
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Paper:** Pre-print available soon.
+ - **Demo:** Currently in development.
+
+ # Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ## Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ## Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ## Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ # Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ## Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ ```python
+ from PIL import Image
+ import requests
+
+ from transformers import CLIPProcessor, CLIPModel
+
+ model = CLIPModel.from_pretrained("lhaas/StreetCLIP")
+ processor = CLIPProcessor.from_pretrained("lhaas/StreetCLIP")
+
+ url = "https://huggingface.co/lhaas/StreetCLIP/resolve/main/sanfrancisco.jpeg"
+ image = Image.open(requests.get(url, stream=True).raw)
+
+ choices = ["San Jose", "San Diego", "Los Angeles", "Las Vegas", "San Francisco"]
+ inputs = processor(text=choices, images=image, return_tensors="pt", padding=True)
+
+ outputs = model(**inputs)
+ logits_per_image = outputs.logits_per_image  # image-text similarity scores, one per candidate label
+ probs = logits_per_image.softmax(dim=1)  # softmax over the labels gives per-label probabilities
+ ```
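+
+ To read a prediction off `probs`, a short continuation like the following can be used (a minimal sketch reusing the `choices` and `probs` variables defined above):
+
+ ```python
+ # Pair each candidate city with its probability and rank from most to least likely.
+ ranked = sorted(zip(choices, probs[0].tolist()), key=lambda pair: pair[1], reverse=True)
+ for label, prob in ranked:
+     print(f"{label}: {prob:.3f}")  # the example image is expected to rank "San Francisco" first
+ ```
+
+ Alternatively, since the card declares the `zero-shot-image-classification` pipeline tag, the same zero-shot setup can be run through the `transformers` pipeline API, mirroring what the hosted inference widget does (assuming the checkpoint loads through the pipeline, as expected for a CLIP-style model):
+
+ ```python
+ from transformers import pipeline
+
+ # Zero-shot image classification pipeline backed by the CLIP checkpoint.
+ classifier = pipeline("zero-shot-image-classification", model="lhaas/StreetCLIP")
+ results = classifier(
+     "https://huggingface.co/lhaas/StreetCLIP/resolve/main/sanfrancisco.jpeg",
+     candidate_labels=["San Jose", "San Diego", "Los Angeles", "Las Vegas", "San Francisco"],
+ )
+ print(results[0])  # results are sorted by score; the first entry is the top label
+ ```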
+
+ # Training Details
+
+ ## Training Data
+
+ <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ## Training Procedure [optional]
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ ### Preprocessing
+
+ [More Information Needed]
+
+ ### Speeds, Sizes, Times
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ # Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ## Testing Data, Factors & Metrics
+
+ ### Testing Data
+
+ <!-- This should link to a Data Card if possible. -->
+
+ [More Information Needed]
+
+ ### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ ### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ## Results
+
+ [More Information Needed]
+
+ ### Summary
+
+
+
+ # Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ # Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** 4 NVIDIA A100 GPUs
+ - **Hours used:** 12
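+
+ As a rough illustration of the arithmetic behind such an estimate, the sketch below multiplies GPU count, per-GPU power, and training time to get energy, then applies a grid carbon-intensity factor. It assumes all four GPUs ran for the full 12 hours; the ~400 W per-A100 draw and the 0.4 kg CO2eq/kWh grid intensity are illustrative assumptions, not reported values:
+
+ ```python
+ # Back-of-the-envelope energy/emissions estimate in the spirit of the ML Impact calculator.
+ num_gpus = 4            # NVIDIA A100 GPUs (from this card)
+ hours = 12              # hours used (from this card)
+ power_kw = 0.4          # assumed average draw per A100; actual draw not reported
+ grid_kg_per_kwh = 0.4   # assumed grid carbon intensity; varies strongly by region
+
+ energy_kwh = num_gpus * power_kw * hours
+ emissions_kg = energy_kwh * grid_kg_per_kwh
+ print(f"~{energy_kwh:.1f} kWh, ~{emissions_kg:.1f} kg CO2eq")  # ~19.2 kWh, ~7.7 kg CO2eq
+ ```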
+
+ # Example Image Attribution
+
+ [More Information Needed]
+
+ # Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
nagasaki.jpg ADDED
sanfrancisco.jpeg ADDED