---
language:
- en
base_model:
- Ultralytics/YOLO11
tags:
- yolo
- yolo11
- yolo11n
- urchin
- sea
- marine-detection
pipeline_tag: object-detection
---
# YOLO11n Sea Urchin Detector
## Model Details / Overview
This model detects sea urchins using the YOLO11 architecture. It was trained on open datasets to identify and locate urchins under varied underwater conditions.
- Medium-sized variant: https://huggingface.co/akridge/yolo11m-sea-urchin-detector
- **Model Architecture**: YOLO11n
- **Task**: Object Detection (Urchin Detection)
- **Footage Type**: Underwater Footage
- **Classes**: 1 (urchin)
## Yolo11n Test Results (50 epochs)
![results](./results.jpg)
## Evaluation of YOLOv11n & YOLOv11m Performance (100 epochs)
![results](./results_h.png)
## Model Weights
The model's weights can be found [here](./yolo11n_urchin_trained.pt) and are also available in several formats:
- **[PyTorch (best.pt)](./train/weights/best.pt)**: Standard format for PyTorch-based applications.
- **[Latest PyTorch Checkpoint (last.pt)](./train/weights/last.pt)**: The latest checkpoint from training.
- **[ONNX (best.onnx)](./train/weights/best.onnx)**: For ONNX Runtime.
- **[TorchScript (best.torchscript)](./train/weights/best.torchscript)**: Serialized format for deployment outside Python.
- **[NCNN](./train/weights/best_ncnn_model/model.ncnn.bin)**: Efficient for mobile platforms and embedded systems.
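As a hedged sketch of how the exported formats above could be used: the `ultralytics` `YOLO` class infers the format from the file extension, so the ONNX or TorchScript exports load the same way as the PyTorch checkpoint (the image path below is a placeholder).

```python
from ultralytics import YOLO

# Format is auto-detected from the extension, so exported
# weights load the same way as the PyTorch checkpoint.
model = YOLO("train/weights/best.onnx")  # ONNX export

# Run detection; the image path is a placeholder
results = model("path/to/underwater_image.jpg")
```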
# Intended Use
- Real-time detections on underwater footage
- Post-processed video/imagery for detecting sea urchins in underwater environments
# Factors
### Model Performance
- Multi-source Dataset: Trained on datasets that include urchin images from various angles.
- Model Architecture (YOLO11n): Lightweight and optimized for real-time urchin detection in underwater footage.
- Training Data: The dataset is split into 70% training, 20% validation, and 10% test data.
- Training Parameters: Configured with 50 epochs, an initial learning rate of 0.001, and a 640x640 input size.
## Datasets
The training data was collected, parsed, and organized from the following open sources:
1. **[Orange-OpenSource Marine-Detect](https://github.com/Orange-OpenSource/marine-detect)**
2. **[Roboflow - Sakana Urchins CJLib](https://universe.roboflow.com/sakana/urchins-cjlib)**
- **Roboflow Details**:
- **Workspace**: sakana
- **Project**: urchins-cjlib
- **Version**: 1
- **License**: CC BY 4.0
- **URL**: [https://universe.roboflow.com/sakana/urchins-cjlib/dataset/1](https://universe.roboflow.com/sakana/urchins-cjlib/dataset/1)
### Dataset Composition:
- **Training Images**: 1169
- **Validation Images**: 334
- **Test Images**: 168
- **Train/Val/Test Split Ratio**: 7:2:1
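A quick sanity check confirms that the image counts above are consistent with the stated 7:2:1 split:

```python
# Image counts from the dataset composition above
splits = {"train": 1169, "val": 334, "test": 168}
total = sum(splits.values())  # 1671 images in total

for name, count in splits.items():
    # Fractions come out to roughly 70% / 20% / 10%,
    # matching the 7:2:1 ratio
    print(f"{name}: {count / total:.1%}")
```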
## Metrics
Key metrics from the model evaluation on the validation set are shown in the plots below.
## Training Validation Results
### Training and Validation Losses
![Training and Validation Losses](./train/results.png)
### Confusion Matrix
![Confusion Matrix](./train/confusion_matrix.png)
### Precision-Recall Curve
![Precision-Recall Curve](./train/PR_curve.png)
### F1 Score Curve
![F1 Score Curve](./train/F1_curve.png)
## Training Configuration
- **Model Weights File**: `yolo11n_urchin_trained.pt`
- **Number of Epochs**: 50
- **Learning Rate**: 0.001
- **Batch Size**: 32
- **Image Size**: 640x640
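The configuration above could be reproduced with the `ultralytics` training API along these lines; this is a sketch, and `urchin.yaml` is a placeholder name for the dataset configuration file, which is not included in the source.

```python
from ultralytics import YOLO

# Start from the YOLO11n base weights
model = YOLO("yolo11n.pt")

# Hyperparameters mirror the training configuration above;
# "urchin.yaml" is a placeholder for the dataset config file
model.train(
    data="urchin.yaml",
    epochs=50,
    lr0=0.001,   # initial learning rate
    batch=32,
    imgsz=640,
)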
## Deployment
### How to Use the Model
To use the trained model with the `ultralytics` package:
```python
from ultralytics import YOLO

# Load the trained weights
model = YOLO("yolo11n_urchin_trained.pt")

# Run detection on an image (path is a placeholder)
results = model("path/to/underwater_image.jpg")
results[0].show()  # display predicted bounding boxes
```
## Limitations
The model was trained on a mix of open source images. It may not generalize well to other environments or non-marine scenarios. Additionally, environmental variations, occlusions, or poor lighting may affect performance.
## Additional Notes:
- **Dataset Sources**: Two datasets were combined to improve model robustness, allowing the model to adapt to varying lighting and water conditions.
- **Ethical Considerations**: Detection results should be validated before use in critical applications. The model's performance in new environments may vary, and it may be biased if certain types of sea urchins were underrepresented in the training datasets.
#### Disclaimer
This repository is a scientific product and is not official communication of the National Oceanic and Atmospheric Administration, or the United States Department of Commerce. All NOAA project content is provided on an ‘as is’ basis and the user assumes responsibility for its use. Any claims against the Department of Commerce or Department of Commerce bureaus stemming from the use of this project will be governed by all applicable Federal law. Any reference to specific commercial products, processes, or services by service mark, trademark, manufacturer, or otherwise, does not constitute or imply their endorsement, recommendation or favoring by the Department of Commerce. The Department of Commerce seal and logo, or the seal and logo of a DOC bureau, shall not be used in any manner to imply endorsement of any commercial product or activity by DOC or the United States Government.