---
title: horizon-metrics
tags:
- evaluate
- metric
description: 'Computes slope and midpoint errors for horizon prediction models.'
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
emoji: π
---
# SEA-AI/horizon-metrics
This Hugging Face metric uses `seametrics.horizon.HorizonMetrics` under the hood to calculate the slope and midpoint errors between predicted and ground-truth horizon lines.
## How to Use
To use horizon-metrics, first install the dependencies with the pip command below. Then import the `evaluate` library, load the SEA-AI/horizon-metrics metric, and make sure both the ground truth and the prediction points are correctly formatted before computing the result. The returned metrics summarise how well your model predicts the horizon.
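For both ground truth and predictions, the metric expects one horizon line per frame, given as the two endpoints of that line. In the Basic Usage example below the coordinates are normalized, with the endpoints at x = 0 and x = 1. A minimal sketch of the expected shape (the coordinate values here are made up for illustration):

```python
# One entry per frame: [[x_left, y_left], [x_right, y_right]],
# matching the format used in the Basic Usage example below.
ground_truth_points = [
    [[0.0, 0.54], [1.0, 0.49]],  # frame 0
    [[0.0, 0.53], [1.0, 0.49]],  # frame 1
]
prediction_points = [
    [[0.0, 0.52], [1.0, 0.47]],  # frame 0
    [[0.0, 0.52], [1.0, 0.48]],  # frame 1
]
```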
## Getting Started
To get started with horizon-metrics, make sure you have the necessary dependencies installed. This metric relies on the `evaluate` and `seametrics` libraries.
### Installation
```bash
pip install evaluate git+https://github.com/SEA-AI/seametrics@develop
```
### Basic Usage
This is how you can quickly evaluate your horizon prediction models using SEA-AI/horizon-metrics:
```python
import evaluate
import fiftyone as fo
from fiftyone import ViewField as F

# Option A: use artificial data for testing
ground_truth_points = [[[0.0, 0.5384765625], [1.0, 0.4931640625]],
                       [[0.0, 0.53796875], [1.0, 0.4928515625]],
                       [[0.0, 0.5374609375], [1.0, 0.4925390625]],
                       [[0.0, 0.536953125], [1.0, 0.4922265625]],
                       [[0.0, 0.5364453125], [1.0, 0.4919140625]]]

prediction_points = [[[0.0, 0.5428930956049597], [1.0, 0.4642497615378973]],
                     [[0.0, 0.5428930956049597], [1.0, 0.4642497615378973]],
                     [[0.0, 0.523573113510805], [1.0, 0.47642688648919496]],
                     [[0.0, 0.5200016849393765], [1.0, 0.4728554579177664]],
                     [[0.0, 0.523573113510805], [1.0, 0.47642688648919496]]]

# Option B: load the points from a FiftyOne dataset instead
sequence = "Sentry_2023_02_Portugal_2023_01_24_19_15_17"
dataset_name = "SENTRY_VIDEOS_DATASET_QA"
sequence_view = fo.load_dataset(dataset_name).match(F("sequence") == sequence)
sequence_view = sequence_view.select_group_slices("thermal_wide")

# Get the ground truth points
polylines_gt = sequence_view.values("frames.ground_truth_pl")
ground_truth_points = [
    line["polylines"][0]["points"][0] for line in polylines_gt[0]
    if line is not None
]

# Get the predicted points
polylines_pred = sequence_view.values(
    "frames.ahoy-IR-b2-whales__XAVIER-AGX-JP46_pl")
prediction_points = [
    line["polylines"][0]["points"][0] for line in polylines_pred[0]
    if line is not None
]

# Load the metric and feed it predictions and references
module = evaluate.load("SEA-AI/horizon-metrics")
module.add(predictions=prediction_points, references=ground_truth_points)
module.compute()
```
This will output the evaluation metrics for your horizon prediction model:
```python
{
    'average_slope_error': 0.014823194839790999,
    'average_midpoint_error': 0.014285714285714301,
    'stddev_slope_error': 0.01519178791378349,
    'stddev_midpoint_error': 0.0022661781575342445,
    'max_slope_error': 0.033526146567062376,
    'max_midpoint_error': 0.018161272321428612,
    'num_slope_error_jumps': 1,
    'num_midpoint_error_jumps': 1
}
```
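`compute()` returns the metrics as a plain Python dictionary, so individual values can be read out directly. A small usage sketch using the key names from the output above:

```python
results = module.compute()

# Access individual metrics from the returned dictionary
print(f"average slope error:    {results['average_slope_error']:.4f}")
print(f"average midpoint error: {results['average_midpoint_error']:.4f}")
```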
## Output Values
SEA-AI/horizon-metrics provides the following performance metrics for horizon prediction (a simplified sketch of how they relate to the horizon endpoints follows this list):
- average_slope_error: Measures the average difference in slope between the predicted and ground truth horizon.
- average_midpoint_error: Calculates the average difference in midpoint position between the predicted and ground truth horizon.
- stddev_slope_error: Indicates the variability of errors in slope between the predicted and ground truth horizon.
- stddev_midpoint_error: Quantifies the variability of errors in midpoint position between the predicted and ground truth horizon.
- max_slope_error: Represents the maximum difference in slope between the predicted and ground truth horizon.
- max_midpoint_error: Indicates the maximum difference in midpoint position between the predicted and ground truth horizon.
- num_slope_error_jumps: Counts the number of "jumps" in the slope error, i.e. frames where the absolute difference between the slope errors of successive frames exceeds a specified threshold.
- num_midpoint_error_jumps: Counts the number of "jumps" in the midpoint error, i.e. frames where the absolute difference between the midpoint errors of successive frames exceeds a specified threshold.
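As a rough illustration of what these quantities measure, here is a simplified sketch of slope and midpoint errors computed from line endpoints. This is not the `seametrics.horizon.HorizonMetrics` implementation; the helper names and the jump threshold below are assumptions made purely for illustration.

```python
import numpy as np

JUMP_THRESHOLD = 0.1  # assumed threshold; seametrics may use a different value


def slope_and_midpoint(points):
    """Slope and midpoint height of a horizon line given its two endpoints."""
    (x1, y1), (x2, y2) = points
    return (y2 - y1) / (x2 - x1), (y1 + y2) / 2


def horizon_errors(predictions, references):
    """Per-frame absolute slope/midpoint errors, aggregated like the output above."""
    slope_err, mid_err = [], []
    for pred, ref in zip(predictions, references):
        s_p, m_p = slope_and_midpoint(pred)
        s_r, m_r = slope_and_midpoint(ref)
        slope_err.append(abs(s_p - s_r))
        mid_err.append(abs(m_p - m_r))
    slope_err, mid_err = np.asarray(slope_err), np.asarray(mid_err)
    return {
        "average_slope_error": slope_err.mean(),
        "average_midpoint_error": mid_err.mean(),
        "stddev_slope_error": slope_err.std(),
        "stddev_midpoint_error": mid_err.std(),
        "max_slope_error": slope_err.max(),
        "max_midpoint_error": mid_err.max(),
        # a "jump" is a frame-to-frame change in error larger than the threshold
        "num_slope_error_jumps": int((np.abs(np.diff(slope_err)) > JUMP_THRESHOLD).sum()),
        "num_midpoint_error_jumps": int((np.abs(np.diff(mid_err)) > JUMP_THRESHOLD).sum()),
    }
```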
## Further References
Explore the [seametrics GitHub repository](https://github.com/SEA-AI/seametrics) for more details on the underlying library.
## Contribution
Your contributions are welcome! If you'd like to improve SEA-AI/horizon-metrics or add new features, please feel free to fork the repository, make your changes, and submit a pull request.