Model card for REVA-QCAV

A DEtection TRansformer (DETR) model with a ResNet-50 backbone (facebook/detr-resnet-50), fine-tuned on a custom photogrammetry calibration sphere dataset.

Model Usage

Object Detection (using transformers)
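
For a quick check, the checkpoint can also be driven through the high-level object-detection pipeline, which handles pre- and post-processing internally. This is a minimal sketch that reuses the examples/chevaux.jpg image bundled in the repository:

from transformers import pipeline
from huggingface_hub import hf_hub_download

# load the fine-tuned checkpoint in an object-detection pipeline
detector = pipeline("object-detection", model="1aurent/REVA-QCAV")

# download the example image and run detection on it
img_path = hf_hub_download(repo_id="1aurent/REVA-QCAV", filename="examples/chevaux.jpg")
for detection in detector(img_path, threshold=0.9):
  print(detection["label"], detection["score"], detection["box"])

The lower-level API below exposes the image processor and model directly, which makes it easier to inspect raw outputs and tune the post-processing.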

from transformers import AutoImageProcessor, AutoModelForObjectDetection
from huggingface_hub import hf_hub_download
from PIL import Image
import torch

# download example image
img_path = hf_hub_download(repo_id="1aurent/REVA-QCAV", filename="examples/chevaux.jpg")
img = Image.open(img_path)

# transform image using image_processor
image_processor = AutoImageProcessor.from_pretrained("1aurent/REVA-QCAV")
data = image_processor(img, return_tensors="pt")

# get outputs from the model
model = AutoModelForObjectDetection.from_pretrained("1aurent/REVA-QCAV")
with torch.no_grad():
  output = model(**data)

# use image_processor post-processing to get absolute (x_min, y_min, x_max, y_max) boxes
target_sizes = torch.tensor([img.height, img.width]).unsqueeze(0)  # one (height, width) pair per image
output_processed = image_processor.post_process_object_detection(output, threshold=0.9, target_sizes=target_sizes)
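
post_process_object_detection returns one dictionary per input image, holding scores, labels, and boxes. A minimal sketch for reading the detections, continuing from the snippet above and assuming class names come from model.config.id2label:

# one result dictionary per input image; here there is a single image
detections = output_processed[0]
for score, label, box in zip(detections["scores"], detections["labels"], detections["boxes"]):
  # box is (x_min, y_min, x_max, y_max) in absolute pixel coordinates
  print(f"{model.config.id2label[label.item()]}: {score:.3f} at {box.tolist()}")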

Object Detection (using onnxruntime)

from transformers.models.detr.modeling_detr import DetrObjectDetectionOutput
from transformers import AutoImageProcessor
from huggingface_hub import hf_hub_download
import onnxruntime as ort
from PIL import Image
import torch

# download onnx and start inference session
onnx_path = hf_hub_download(repo_id="1aurent/REVA-QCAV", filename="model.onnx")
session = ort.InferenceSession(onnx_path)

# download example image
img_path = hf_hub_download(repo_id="1aurent/REVA-QCAV", filename="examples/chevaux.jpg")
img = Image.open(img_path)

# transform image using image_processor (.data returns a plain dict of numpy arrays for onnxruntime)
image_processor = AutoImageProcessor.from_pretrained("1aurent/REVA-QCAV")
data = image_processor(img, return_tensors="np").data

# get logits and bbox predictions using onnx session
logits, pred_boxes = session.run(
  output_names=["logits", "pred_boxes"],
  input_feed=data,
)

# wrap outputs inside DetrObjectDetectionOutput
output = DetrObjectDetectionOutput(
  logits=torch.tensor(logits),
  pred_boxes=torch.tensor(pred_boxes),
)

# use image_processor post-processing to get absolute (x_min, y_min, x_max, y_max) boxes
target_sizes = torch.tensor([img.height, img.width]).unsqueeze(0)  # one (height, width) pair per image
output_processed = image_processor.post_process_object_detection(output, threshold=0.9, target_sizes=target_sizes)
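
To inspect the ONNX detections visually, the boxes can be drawn back onto the example image with Pillow. A minimal sketch continuing from the snippet above; the output filename is an arbitrary choice:

from PIL import ImageDraw

# draw every detection kept by the 0.9 threshold onto the original image
draw = ImageDraw.Draw(img)
for score, box in zip(output_processed[0]["scores"], output_processed[0]["boxes"]):
  x_min, y_min, x_max, y_max = box.tolist()
  draw.rectangle((x_min, y_min, x_max, y_max), outline="red", width=3)
  draw.text((x_min, y_min), f"{score:.2f}", fill="red")
img.save("chevaux_detections.png")  # arbitrary output path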

Citation

@article{reva-qcav,
  author  = {Laurent Fainsin and Jean Mélou and Lilian Calvet and Antoine Laurent and Axel Carlier and Jean-Denis Durou},
  title   = {Neural sphere detection in images for lighting calibration},
  journal = {QCAV},
  year    = {2023},
  url     = {https://hal.science/hal-04160733}
}