
[GeoCLIP](https://github.com/VicenteVivan/geo-clip) with ONNX weights, to be compatible with Transformers.js.

## Usage (Transformers.js)

If you haven't already, you can install the Transformers.js JavaScript library from NPM using:

```bash
npm i @xenova/transformers
```

**Example:** Perform worldwide image geolocalization.

```js
import { AutoModel, AutoProcessor, RawImage, Tensor, dot, softmax } from '@xenova/transformers';

// Load vision and location models
const model_id = 'Xenova/geoclip-large-patch14';
const vision_model = await AutoModel.from_pretrained(model_id, {
    model_file_name: 'vision_model',
});
const location_model = await AutoModel.from_pretrained(model_id, {
    model_file_name: 'location_model',
    quantized: false,
});

// Load image processor
const processor = await AutoProcessor.from_pretrained('openai/clip-vit-large-patch14');

// Read and preprocess image
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/moraine-lake.png';
const image = await RawImage.fromURL(url);
const vision_inputs = await processor(image);

// Compute image embeddings
const { image_embeds } = await vision_model(vision_inputs);
const norm_image_embeds = image_embeds.normalize().data;

// Define a list of candidate GPS coordinates
// https://github.com/VicenteVivan/geo-clip/blob/main/geoclip/model/gps_gallery/coordinates_100K.csv
const coordinate_data = 'https://huggingface.co/Xenova/geoclip-large-patch14/resolve/main/gps_gallery/coordinates_100K.json';
const gps_data = await (await fetch(coordinate_data)).json();

// Compute location embeddings and compare to image embeddings
const coordinate_batch_size = 512;
const exp_logit_scale = Math.exp(3.681034803390503); // Used for scaling logits
const scores = [];
for (let i = 0; i < gps_data.length; i += coordinate_batch_size) {
    const chunk = gps_data.slice(i, i + coordinate_batch_size);

    const { location_embeds } = await location_model({
        location: new Tensor('float32', chunk.flat(), [chunk.length, 2])
    });

    const norm_location_embeds = location_embeds.normalize().tolist();
    for (const embed of norm_location_embeds) {
        const score = exp_logit_scale * dot(norm_image_embeds, embed);
        scores.push(score);
    }
}

// Get top predictions
const top_k = 50;
const results = softmax(scores)
    .map((x, i) => [x, i])
    .sort((a, b) => b[0] - a[0])
    .slice(0, top_k)
    .map(([score, index]) => ({ index, gps: gps_data[index], score }));

console.log('=======================');
console.log('Top 5 GPS Predictions 📍');
console.log('=======================');
for (let i = 0; i < 5; ++i) {
    console.log(results[i]);
}
console.log('=======================');
```

Outputs:
```
=======================
Top 5 GPS Predictions 📍
=======================
{ index: 75129, gps: [ 51.327447, -116.183509 ], score: 0.06895832493631944 }
{ index: 5158, gps: [ 51.326401, -116.18263 ], score: 0.06843383108770337 }
{ index: 77752, gps: [ 51.328198, -116.180934 ], score: 0.06652924543010541 }
{ index: 32529, gps: [ 51.327809, -116.180334 ], score: 0.065981075526145 }
{ index: 461, gps: [ 51.322353, -116.18557 ], score: 0.06476605375767822 }
=======================
```
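Since the top candidates cluster tightly around one spot, a simple way to collapse them into a single point estimate is a score-weighted average of the candidate coordinates. The following is a sketch of that heuristic (it is not part of the original GeoCLIP repo, and works best when the top predictions are geographically close; `results` refers to the array computed above):

```js
// Score-weighted average of the top-k GPS predictions.
// Assumes results = [{ gps: [lat, lon], score }, ...] as computed above.
function weightedAverageGPS(results) {
    const totalScore = results.reduce((sum, r) => sum + r.score, 0);
    let lat = 0, lon = 0;
    for (const { gps: [la, lo], score } of results) {
        lat += la * score / totalScore;
        lon += lo * score / totalScore;
    }
    return [lat, lon];
}

// Example with the top-5 predictions shown above:
const top5 = [
    { gps: [51.327447, -116.183509], score: 0.0690 },
    { gps: [51.326401, -116.18263],  score: 0.0684 },
    { gps: [51.328198, -116.180934], score: 0.0665 },
    { gps: [51.327809, -116.180334], score: 0.0660 },
    { gps: [51.322353, -116.18557],  score: 0.0648 },
];
console.log(weightedAverageGPS(top5)); // ≈ [51.3265, -116.1826]
```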


As seen on the map, this is indeed the location of [Moraine Lake](https://en.wikipedia.org/wiki/Moraine_Lake) in Canada.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/ThtHHkOmXEll2tyGV85GY.png)
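To sanity-check the result numerically rather than visually, you can compute the great-circle distance between the top prediction and the lake's known coordinates (roughly 51.3217° N, 116.186° W; treat these reference values as approximate). A small standalone sketch using the haversine formula:

```js
// Great-circle distance between two [lat, lon] points, in kilometres (haversine formula).
function haversineKm([lat1, lon1], [lat2, lon2]) {
    const toRad = (deg) => deg * Math.PI / 180;
    const R = 6371; // mean Earth radius in km
    const dLat = toRad(lat2 - lat1);
    const dLon = toRad(lon2 - lon1);
    const a = Math.sin(dLat / 2) ** 2
        + Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
    return 2 * R * Math.asin(Math.sqrt(a));
}

const topPrediction = [51.327447, -116.183509]; // top GeoCLIP result from above
const moraineLake   = [51.3217, -116.186];      // approximate true location
console.log(haversineKm(topPrediction, moraineLake).toFixed(2), 'km'); // well under 1 km
```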

---

Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).