
LAJ CNN Image-to-GPS Model Iteration 1

This project features a convolutional neural network (CNN) for predicting GPS coordinates (latitude and longitude) from image inputs. Below you'll find details on loading the model, running inference, and the architecture of the network.


1. Loading the Model

To load the model, follow the commands in sampleRun_v4.ipynb (reproduced in Section 5 below).
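For a quick start, the loading step boils down to the snippet below (mirroring the sample run in Section 5). It assumes the CustomGPSModel class from Section 4 has already been defined in your session, since the checkpoint is a fully pickled model:

from huggingface_hub import hf_hub_download
import torch

repo_id = "LAJ-519-Image-Project/CustomGPSModel_EfficientNetV2_Run3"
filename = "efficientnet_gps_regressor_complete_changed_betas_v2.pth"

model_path = hf_hub_download(repo_id=repo_id, filename=filename)
model = torch.load(model_path, map_location="cpu")  # on PyTorch >= 2.6 you may also need weights_only=False
model.eval()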

2. Running the Model

To perform inference with the model, normalize the latitudes and longitudes using the means and standard deviations listed in Section 3, then run code similar to the evaluation snippet below:

# Evaluate on Test Set
# (val_loader here is the DataLoader over the held-out test images)
model.eval()
all_preds, all_actuals = [], []
with torch.no_grad():
    for images, gps_coords in val_loader:
        images, gps_coords = images.to(device), gps_coords.to(device)
        outputs = model(images)
        all_preds.append(outputs.cpu())
        all_actuals.append(gps_coords.cpu())
all_preds = torch.cat(all_preds).numpy()
all_actuals = torch.cat(all_actuals).numpy()

# Denormalize Predictions
all_preds_denorm = all_preds * np.array([lat_std, lon_std]) + np.array([lat_mean, lon_mean])
all_actuals_denorm = all_actuals * np.array([lat_std, lon_std]) + np.array([lat_mean, lon_mean])

# Compute Error Metrics
mae = mean_absolute_error(all_actuals_denorm, all_preds_denorm)
# Note: newer scikit-learn deprecates squared=False; use root_mean_squared_error there instead
rmse = mean_squared_error(all_actuals_denorm, all_preds_denorm, squared=False)
print(f"Test Set Mean Absolute Error: {mae}")
print(f"Test Set Root Mean Squared Error: {rmse}")

3. Latitude and Longitude Means and Standard Deviations

The following values represent the means and standard deviations of the latitude and longitude used in this model:

  • Latitude Mean: 39.95173729922173
  • Latitude Standard Deviation: 0.0006877829213952256
  • Longitude Mean: -75.19138804851796
  • Longitude Standard Deviation: 0.0006182574854250925

These values are used to normalize and denormalize the latitude and longitude predictions during inference, as in the sketch below.
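
Written out as code (the same arithmetic as the denormalization step in Section 2; the helper names here are illustrative):

lat_mean, lat_std = 39.95173729922173, 0.0006877829213952256
lon_mean, lon_std = -75.19138804851796, 0.0006182574854250925

# Targets are standardized before training; predictions are mapped back afterwards
def normalize_coords(lat, lon):
    return (lat - lat_mean) / lat_std, (lon - lon_mean) / lon_std

def denormalize_coords(lat_n, lon_n):
    return lat_n * lat_std + lat_mean, lon_n * lon_std + lon_mean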

4. CNN Architecture

Finally, here is the architecture of the CNN we used:


# Model Definition
class CustomGPSModel(nn.Module):
    def __init__(self):
        super(CustomGPSModel, self).__init__()

        # Load EfficientNetV2-S with pretrained weights
        self.efficientnet = efficientnet_v2_s(pretrained=True)

        # Modify the final layer for regression (predicting latitude and longitude)
        num_features = self.efficientnet.classifier[1].in_features
        self.efficientnet.classifier[1] = nn.Linear(num_features, 2)  # Output layer has 2 outputs for latitude & longitude

        # Don't freeze earlier layers
        for param in self.efficientnet.features.parameters():
            param.requires_grad = True

    def forward(self, x):
        return self.efficientnet(x)  # Forward pass through EfficientNet
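
To use the architecture directly (for example, with a state_dict checkpoint instead of the fully pickled model shipped in this repo), a minimal sketch looks like the following; the checkpoint.pth filename is hypothetical:

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = CustomGPSModel().to(device)

# Hypothetical: only needed if you have a state_dict checkpoint rather than a pickled model
# model.load_state_dict(torch.load("checkpoint.pth", map_location=device))
model.eval()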

5. Sample Run Code (how to install and run everything)

!pip install datasets
!pip install huggingface_hub
!pip install requests
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision.models import efficientnet_v2_s
from torch.optim.lr_scheduler import CosineAnnealingLR
from torchvision import transforms
from torch.utils.data import DataLoader, Dataset
from torchvision.transforms import functional as F
from PIL import Image
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error
from huggingface_hub import PyTorchModelHubMixin
import os


# Model Definition
class CustomGPSModel(nn.Module):
    def __init__(self):
        super(CustomGPSModel, self).__init__()

        # Load EfficientNetV2-S with pretrained weights
        self.efficientnet = efficientnet_v2_s(pretrained=True)

        # Modify the final layer for regression (predicting latitude and longitude)
        num_features = self.efficientnet.classifier[1].in_features
        self.efficientnet.classifier[1] = nn.Linear(num_features, 2)  # Output layer has 2 outputs for latitude & longitude

        # Don't freeze earlier layers
        for param in self.efficientnet.features.parameters():
            param.requires_grad = True

    def forward(self, x):
        return self.efficientnet(x)  # Forward pass through EfficientNet

from huggingface_hub import hf_hub_download
import torch
path_name = "efficientnet_gps_regressor_complete_changed_betas_v2.pth"
repo_name = "CustomGPSModel_EfficientNetV2_Run3"
organization_name = "LAJ-519-Image-Project"

# Specify the repository and the filename of the model you want to load
repo_id = f"{organization_name}/{repo_name}"
filename = f"{path_name}"

model_path = hf_hub_download(repo_id=repo_id, filename=filename)

# Load the full pickled model with torch (the CustomGPSModel class above must be defined);
# on CPU-only machines pass map_location="cpu", and on PyTorch >= 2.6 you may need weights_only=False
model_test = torch.load(model_path)
model_test.eval()
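
Once loaded, the model can be moved to a device and sanity-checked with a dummy batch before running the evaluation loop from Section 2. The 224x224 input size below is an assumption; use the resolution from sampleRun_v4.ipynb:

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model_test = model_test.to(device)

dummy = torch.randn(1, 3, 224, 224).to(device)   # assumed input resolution
with torch.no_grad():
    print(model_test(dummy))                      # two values: normalized latitude and longitude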