---
license: agpl-3.0
library_name: ultralytics
tags:
  - satellite imagery
  - marine traffic
  - sentinel-2
  - yolov8
model-index:
  - name: mayrajeo/marine-vessel-detection-yolov8
    results:
      - task:
          type: object-detection
        metrics:
          - type: precision
            value: 0.801
            name: mAP@0.5(box)
---

# Model Card for YOLOv8 models for detecting marine vessels from RGB Sentinel-2 images

TBA

## Model Details

### Model Description

TBA

- **Developed by:** Janne Mäyrä
- **Model type:** Object Detection
- **Finetuned from model:** YOLOv8 pretrained models

### Model Sources

## Uses

### Direct Use

The models are trained to detect marine vessels from 320x320 pixel patches of Sentinel-2 RGB images with 10 m resolution. The models will also detect targets outside of water areas, but those detections can be eliminated using external datasets.
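
As an illustration, the sketch below tiles a larger Sentinel-2 RGB array into non-overlapping 320x320 patches and collects detections in full-image pixel coordinates. The weight path used here is an assumption, not necessarily the layout of this repository.

```python
# Minimal tiled-inference sketch; the weight path below is an assumption.
import numpy as np
from ultralytics import YOLO

PATCH = 320  # models are trained on 320x320 pixel patches at 10 m resolution

model = YOLO("yolov8n/weights/best.pt")  # hypothetical path to one of the trained models

def predict_tiled(image: np.ndarray, conf: float = 0.25):
    """Run the detector over non-overlapping 320x320 tiles of an HxWx3 RGB array
    and return (x1, y1, x2, y2, score) boxes in full-image pixel coordinates."""
    detections = []
    height, width, _ = image.shape
    for y0 in range(0, height - PATCH + 1, PATCH):
        for x0 in range(0, width - PATCH + 1, PATCH):
            # Ultralytics treats numpy input like an OpenCV image (BGR), so flip channels.
            tile = np.ascontiguousarray(image[y0:y0 + PATCH, x0:x0 + PATCH, ::-1])
            result = model.predict(tile, imgsz=PATCH, conf=conf, verbose=False)[0]
            for box, score in zip(result.boxes.xyxy.tolist(), result.boxes.conf.tolist()):
                x1, y1, x2, y2 = box
                detections.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0, float(score)))
    return detections
```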

### Out-of-Scope Use

These models are not suitable for any purpose other than detecting potential marine vessels from satellite imagery.

## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

The repository at [https://github.com/mayrajeo/ship-detection](https://github.com/mayrajeo/ship-detection) provides examples of how to use the models.
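
If the weights are hosted in this repository, running a single patch through a model could look roughly like the sketch below; the `filename` argument is a hypothetical path, so check the repository file listing for the actual one.

```python
# Minimal usage sketch; the weight filename is an assumption.
from huggingface_hub import hf_hub_download
from ultralytics import YOLO

weights = hf_hub_download(
    repo_id="mayrajeo/marine-vessel-detection-yolov8",
    filename="yolov8m/weights/best.pt",  # hypothetical path inside the repo
)
model = YOLO(weights)

# Run inference on a 320x320 RGB patch saved as an ordinary image file.
results = model.predict("patch.png", imgsz=320, conf=0.25)
for box, score in zip(results[0].boxes.xyxy.tolist(), results[0].boxes.conf.tolist()):
    print([round(v, 1) for v in box], round(float(score), 3))
```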

## Training Details

### Training Data

The models are trained using the following Sentinel-2 mosaics and manually annotated marine vessel data. Archipelago sea 2 and Kvarken were used as test data. The other three locations were sliced into 320x320 pixel patches. These patches were then spatially split into five equally sized folds so that each fold contained data from all timesteps and locations, and all patch locations that contained an annotated vessel in any timestep were included in the folds. In total, this dataset contained 3264 320x320 pixel image patches, of which 1974 contained annotated vessels and 1290 were background patches.
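
The exact split is not reproduced here, but a location-grouped five-fold split with similar properties could be sketched as follows; the patch index below is synthetic and only for illustration.

```python
# Illustrative location-grouped 5-fold split; not the exact procedure used for these models.
import pandas as pd
from sklearn.model_selection import GroupKFold

# Synthetic patch index: one row per (patch location, acquisition date) chip.
patch_ids = [f"r{r:03d}_c{c:03d}" for r in range(4) for c in range(5)]
dates = ["2022-06-19", "2022-07-21", "2022-08-13"]
patches = pd.DataFrame(
    [(p, d) for p in patch_ids for d in dates], columns=["patch_id", "date"]
)

# Group by patch location so every timestep of a location lands in the same fold.
gkf = GroupKFold(n_splits=5)
patches["fold"] = -1
for fold, (_, val_idx) in enumerate(gkf.split(patches, groups=patches["patch_id"])):
    patches.loc[patches.index[val_idx], "fold"] = fold
print(patches.groupby("fold").size())
```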

Training and validation data:

| Location          | Date       | Number of annotations | Annotated patches | Background patches |
|-------------------|------------|-----------------------|-------------------|--------------------|
| Archipelago sea 1 | 2022-06-19 | 519                   | 271               | 269                |
| Archipelago sea 1 | 2022-07-21 | 1518                  | 387               | 153                |
| Archipelago sea 1 | 2022-08-13 | 1368                  | 402               | 138                |
| Gulf of Finland   | 2022-06-06 | 275                   | 138               | 241                |
| Gulf of Finland   | 2022-06-26 | 1190                  | 269               | 110                |
| Gulf of Finland   | 2022-07-21 | 971                   | 260               | 119                |
| Bothnian Bay      | 2022-06-27 | 122                   | 81                | 88                 |
| Bothnian Bay      | 2022-07-12 | 162                   | 98                | 71                 |
| Bothnian Bay      | 2022-08-28 | 98                    | 68                | 101                |

### Training Hyperparameters

Training configurations can be found in each model directory in the file `args.yaml`.
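
For reference, a training run with the Ultralytics API could be launched roughly as below; the dataset config name and the hyperparameter values shown are placeholders, and the real settings are those recorded in `args.yaml`.

```python
# Hypothetical training invocation; see each model's args.yaml for the real settings.
from ultralytics import YOLO

model = YOLO("yolov8m.pt")  # start from a COCO-pretrained checkpoint
model.train(
    data="marine_vessels.yaml",  # placeholder dataset config
    imgsz=320,                   # patch size used for these models
    epochs=100,                  # placeholder value
    batch=16,                    # placeholder value
)
```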

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

Test data consists of six Sentinel-2 mosaics:

| Location          | Date       | Number of annotations |
|-------------------|------------|-----------------------|
| Archipelago sea 2 | 2021-07-14 | 433                   |
| Archipelago sea 2 | 2022-06-24 | 413                   |
| Archipelago sea 2 | 2022-08-13 | 391                   |
| Kvarken           | 2022-06-17 | 79                    |
| Kvarken           | 2022-07-12 | 167                   |
| Kvarken           | 2022-08-26 | 81                    |

#### Factors

Before evaluating, the predictions for the test set are cleaned using the following steps (a sketch of parts of this filtering is shown after the list):

1. All predictions whose centroid points are not located on water are discarded. The water mask contains the layers jarvi (lakes), meri (sea), and virtavesialue (rivers as polygon geometry) from the Topographical Database by the National Land Survey of Finland. Unfortunately, this also discards all predictions not within the Finnish borders.
2. All predictions whose centroid points are located on water rock areas are discarded. The mask is the layer vesikivikko (water rock areas) from the Topographical Database.
3. All predictions that contain an above-water rock within the bounding box are discarded. The mask contains classes 38511, 38512, and 38513 from the layer vesikivi in the Topographical Database.
4. All predictions that contain a lighthouse or a sector light within the bounding box are discarded. Lighthouses and sector lights come from Väylävirasto data, with ty_njr class ids 1, 2, 3, 4, 5, and 8.
5. All predictions that overlap wind turbines, found in the Topographical Database layer tuulivoimalat, are discarded.
6. All predictions that are obviously too large are discarded. A prediction is defined to be "too large" if either of its edges is longer than 750 meters.
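
As a rough illustration of steps 1, 3 and 6, the sketch below filters predictions with geopandas; the file names, layer paths and example geometries are assumptions and need to be adapted to the actual mask datasets.

```python
# Hedged sketch of parts of the filtering (steps 1, 3 and 6); paths, layer names
# and example geometries are assumptions.
import geopandas as gpd
from shapely.geometry import box

# Predicted bounding boxes in the same CRS as the mask layers (e.g. EPSG:3067).
preds = gpd.GeoDataFrame(
    {"score": [0.91, 0.55]},
    geometry=[
        box(240000, 6710000, 240120, 6710100),
        box(251000, 6720000, 251900, 6720800),
    ],
    crs="EPSG:3067",
)

water = gpd.read_file("topodb.gpkg", layer="meri")      # sea polygons (plus jarvi, virtavesialue)
rocks = gpd.read_file("topodb.gpkg", layer="vesikivi")  # above-water rocks

# Step 1: keep only predictions whose centroid lies on water.
centroids = preds.copy()
centroids["geometry"] = preds.geometry.centroid
on_water = gpd.sjoin(centroids, water, predicate="within", how="inner").index
preds = preds.loc[preds.index.isin(on_water)]

# Step 3: drop predictions whose bounding box touches an above-water rock.
has_rock = gpd.sjoin(preds, rocks, predicate="intersects", how="inner").index
preds = preds.loc[~preds.index.isin(has_rock)]

# Step 6: drop boxes with an edge longer than 750 m (75 pixels at 10 m resolution).
bounds = preds.geometry.bounds
preds = preds[((bounds.maxx - bounds.minx) <= 750) & ((bounds.maxy - bounds.miny) <= 750)]
```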

#### Metrics

Precision and recall at an IoU threshold of 0.5, mAP50, and mAP.
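
For reference, counting a prediction as a true positive when it matches an annotated vessel with IoU of at least 0.5:

$$
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN}
$$

mAP50 is the average precision at the 0.5 IoU threshold, and mAP here presumably denotes the Ultralytics-reported average over IoU thresholds 0.50 to 0.95.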

## Results

5-fold cross-validation results:

| Model   | Precision (max) | Precision (mean) | Precision (min) | Recall (max) | Recall (mean) | Recall (min) | mAP (max) | mAP (mean) | mAP (min) | mAP50 (max) | mAP50 (mean) | mAP50 (min) |
|---------|-----------------|------------------|-----------------|--------------|---------------|--------------|-----------|------------|-----------|-------------|--------------|-------------|
| yolov8n | 0.85001         | 0.840136         | 0.82782         | 0.82951      | 0.804012      | 0.78738      | 0.38816   | 0.380828   | 0.37637   | 0.84525     | 0.833424     | 0.81883     |
| yolov8s | 0.86717         | 0.854216         | 0.84347         | 0.84939      | 0.84065       | 0.83222      | 0.41098   | 0.406258   | 0.40374   | 0.86933     | 0.861404     | 0.84934     |
| yolov8m | 0.86108         | 0.853192         | 0.84191         | 0.87385      | 0.846722      | 0.83         | 0.41739   | 0.410742   | 0.40496   | 0.87772     | 0.862594     | 0.84602     |
| yolov8l | 0.86911         | 0.863254         | 0.85604         | 0.86468      | 0.841572      | 0.82725      | 0.41694   | 0.411712   | 0.40505   | 0.88288     | 0.867134     | 0.85743     |
| yolov8x | 0.86411         | 0.856008         | 0.85045         | 0.86086      | 0.845044      | 0.83029      | 0.42065   | 0.411532   | 0.40231   | 0.87069     | 0.863538     | 0.85316     |

Models with the best test set performance for each model architecture:

| Model   | Fold | Precision | Recall   | mAP50 | mAP   |
|---------|------|-----------|----------|-------|-------|
| yolov8n | 1    | 0.7639    | 0.833611 | 0.766 | 0.290 |
| yolov8s | 4    | 0.776751  | 0.845133 | 0.784 | 0.304 |
| yolov8m | 4    | 0.790741  | 0.857435 | 0.801 | 0.324 |
| yolov8l | 1    | 0.772199  | 0.851569 | 0.797 | 0.326 |
| yolov8x | 1    | 0.780507  | 0.838783 | 0.788 | 0.319 |

## Compute Infrastructure

### Hardware

NVIDIA V100 GPU with 32 GB of memory, hosted on computing nodes of the Puhti supercomputer by CSC - IT Center for Science, Finland.

### Software

Models were trained with the ultralytics library as Slurm batch jobs on Puhti.

## Citation

BibTeX:

TBA

APA:

TBA

## Model Card Contact

TBA