---
license: mit
library_name: transformers.js
---
https://github.com/VicenteVivan/geo-clip with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@xenova/transformers) using:
```bash
npm i @xenova/transformers
```
**Example:** Perform worldwide image geolocalization.
```js
import { AutoModel, AutoProcessor, RawImage, Tensor, dot, softmax } from '@xenova/transformers';

// Load vision and location models
const model_id = 'Xenova/geoclip-large-patch14';
const vision_model = await AutoModel.from_pretrained(model_id, {
    model_file_name: 'vision_model',
});
const location_model = await AutoModel.from_pretrained(model_id, {
    model_file_name: 'location_model',
    quantized: false,
});

// Load image processor
const processor = await AutoProcessor.from_pretrained('openai/clip-vit-large-patch14');

// Read and preprocess image
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/moraine-lake.png';
const image = await RawImage.fromURL(url);
const vision_inputs = await processor(image);

// Compute image embeddings
const { image_embeds } = await vision_model(vision_inputs);
const norm_image_embeds = image_embeds.normalize().data;

// Define a list of candidate GPS coordinates
// https://github.com/VicenteVivan/geo-clip/blob/main/geoclip/model/gps_gallery/coordinates_100K.csv
const coordinate_data = 'https://huggingface.co/Xenova/geoclip-large-patch14/resolve/main/gps_gallery/coordinates_100K.json';
const gps_data = await (await fetch(coordinate_data)).json();

// Compute location embeddings in batches and compare to the image embedding
const coordinate_batch_size = 512;
const exp_logit_scale = Math.exp(3.681034803390503); // Used for scaling logits
const scores = [];
for (let i = 0; i < gps_data.length; i += coordinate_batch_size) {
    const chunk = gps_data.slice(i, i + coordinate_batch_size);
    const { location_embeds } = await location_model({
        location: new Tensor('float32', chunk.flat(), [chunk.length, 2])
    });
    const norm_location_embeds = location_embeds.normalize().tolist();
    for (const embed of norm_location_embeds) {
        const score = exp_logit_scale * dot(norm_image_embeds, embed);
        scores.push(score);
    }
}

// Get top predictions
const top_k = 50;
const results = softmax(scores)
    .map((x, i) => [x, i])
    .sort((a, b) => b[0] - a[0])
    .slice(0, top_k)
    .map(([score, index]) => ({ index, gps: gps_data[index], score }));

// Display the top 5 predictions
console.log('=======================');
console.log('Top 5 GPS Predictions 📍');
console.log('=======================');
for (let i = 0; i < 5; ++i) {
    console.log(results[i]);
}
console.log('=======================');
```
Outputs:
```
=======================
Top 5 GPS Predictions 📍
=======================
{
  index: 75129,
  gps: [ 51.327447, -116.183509 ],
  score: 0.06895832493631944
}
{
  index: 5158,
  gps: [ 51.326401, -116.18263 ],
  score: 0.06843383108770337
}
{
  index: 77752,
  gps: [ 51.328198, -116.180934 ],
  score: 0.06652924543010541
}
{
  index: 32529,
  gps: [ 51.327809, -116.180334 ],
  score: 0.065981075526145
}
{
  index: 461,
  gps: [ 51.322353, -116.18557 ],
  score: 0.06476605375767822
}
=======================
```
As seen on the map, this is indeed the location of [Moraine Lake](https://en.wikipedia.org/wiki/Moraine_Lake) in Canada.
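The scoring loop in the example boils down to a simple retrieval computation: normalize the embeddings, take a dot product scaled by `exp(logit_scale)`, and apply a softmax over all candidates. Here is a minimal, self-contained sketch of that math in plain JavaScript (the helper functions and toy embeddings are illustrative, not part of the Transformers.js API):

```javascript
// Normalize a vector to unit length
const normalize = (v) => {
  const n = Math.hypot(...v);
  return v.map((x) => x / n);
};

// Dot product of two equal-length vectors
const dotProduct = (a, b) => a.reduce((sum, x, i) => sum + x * b[i], 0);

// Numerically stable softmax
const softmax = (xs) => {
  const max = Math.max(...xs);
  const exps = xs.map((x) => Math.exp(x - max));
  const total = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / total);
};

// Toy image embedding and two candidate location embeddings
const imageEmbed = normalize([0.3, 0.4, 0.5]);
const locationEmbeds = [
  normalize([0.3, 0.4, 0.5]),   // identical direction → high similarity
  normalize([-0.5, 0.1, 0.2]),  // near-orthogonal → low similarity
];

// Same logit scale as in the example above
const expLogitScale = Math.exp(3.681034803390503);
const scores = locationEmbeds.map((loc) => expLogitScale * dotProduct(imageEmbed, loc));
const probs = softmax(scores);
console.log(probs); // the matching candidate receives nearly all the probability mass
```

Because the logit scale sharpens the distribution, even small differences in cosine similarity translate into large differences in the softmax probabilities.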
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/ThtHHkOmXEll2tyGV85GY.png)
---
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
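As a rough sketch of such a conversion (the model id and output path below are placeholders, and the `exporters` extra is an assumption about your setup):

```shell
# Install 🤗 Optimum with ONNX export support
pip install "optimum[exporters]"

# Export a model to ONNX, placing the weights in an `onnx` subfolder
# (replace `your-org/your-model` with the actual model id)
optimum-cli export onnx --model your-org/your-model ./your-model/onnx/
```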