---
language:
  - en
size_categories:
  - 1K<n<10K
task_categories:
  - image-segmentation
tags:
  - vegetation
  - segmentation
DOI:
  - 10.1038/s41597-023-02098-y
license:
  - CC-BY
dataset_info:
  features:
    - name: image
      dtype: image
    - name: mask
      dtype: image
    - name: System
      dtype: string
    - name: Orientation
      dtype: string
    - name: latitude
      dtype: float64
    - name: longitude
      dtype: float64
    - name: date
      dtype: string
    - name: LocAcc
      dtype: int64
    - name: Species
      dtype: string
    - name: Owner
      dtype: string
    - name: Dataset-Name
      dtype: string
    - name: TVT-split1
      dtype: string
    - name: TVT-split2
      dtype: string
    - name: TVT-split3
      dtype: string
    - name: TVT-split4
      dtype: string
    - name: TVT-split5
      dtype: string
  splits:
    - name: train
      num_bytes: 1896819757.9
      num_examples: 3775
  download_size: 1940313757
  dataset_size: 1896819757.9
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# VegAnn Dataset

Vegetation Annotation of a large multi-crop RGB Dataset acquired under diverse conditions for image semantic segmentation

## Keypoints ⏳

- VegAnn contains 3775 images
- Images are 512 × 512 pixels
- In the corresponding binary masks, 0 denotes soil and crop residues (background) and 255 denotes vegetation (foreground)
- The dataset includes images of 26+ crop species, which are not evenly represented
- VegAnn was compiled from a variety of outdoor images captured with different acquisition systems and configurations
- For more details about VegAnn, including labeling rules and potential uses, see https://doi.org/10.1038/s41597-023-02098-y
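As a minimal illustration of the mask encoding described above, the 0/255 pixel values can be mapped to 0/1 targets for training a binary segmentation model. This is a sketch using a tiny hypothetical mask, not actual VegAnn data:

```python
import numpy as np

# Hypothetical mask values as they appear in a VegAnn patch:
# 0 = background (soil and crop residues), 255 = vegetation.
mask = np.array([[0, 255, 255],
                 [0,   0, 255]], dtype=np.uint8)

# Map {0, 255} to {0, 1} so the mask can be used directly
# as a binary segmentation target.
target = (mask == 255).astype(np.uint8)

print(target.tolist())  # [[0, 1, 1], [0, 0, 1]]
```

Thresholding on equality with 255 (rather than `mask > 0`) makes the intent explicit and is robust if intermediate values ever appear due to resampling.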

## Dataset Description 📚

VegAnn, short for Vegetation Annotation, is a meticulously curated collection of 3,775 multi-crop RGB images aimed at enhancing research in crop vegetation segmentation. These images span various phenological stages and were captured using diverse systems and platforms under a wide range of illumination conditions. By aggregating sub-datasets from different projects and institutions, VegAnn represents a broad spectrum of measurement conditions, crop species, and development stages.

## Languages 🌐

The annotations and documentation are primarily in English.

## Dataset Structure 🏗

### Data Instances 📸

A VegAnn data instance consists of a 512x512 pixel RGB image patch derived from larger raw images. These patches are designed to provide sufficient detail for distinguishing between vegetation and background, crucial for applications in semantic segmentation and other forms of computer vision analysis in agricultural contexts.


### Data Fields 📋

- `Name`: Unique identifier for each image patch.
- `System`: The imaging system used to acquire the photo (e.g., handheld camera, DHP, UAV).
- `Orientation`: The camera's orientation during image capture (e.g., nadir, 45 degrees).
- `latitude` and `longitude`: Geographic coordinates where the image was taken.
- `date`: Date of image acquisition.
- `LocAcc`: Location accuracy flag (1 for high accuracy, 0 for low or uncertain accuracy).
- `Species`: The crop species featured in the image (e.g., wheat, maize, soybean).
- `Owner`: The institution or entity that provided the image (e.g., Arvalis, INRAe).
- `Dataset-Name`: The sub-dataset or project from which the image originates (e.g., Phenomobile, Easypcc).
- `TVT-split1` to `TVT-split5`: Train/validation/test split assignments, supporting five different experimental configurations.
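The `TVT-split*` fields can be used to select the patches belonging to one partition of a given split. The sketch below assumes hypothetical row contents and partition labels ("Training", "Validation", "Test"); the actual label strings should be checked against the dataset itself:

```python
# Hypothetical metadata rows; field names follow the list above,
# but the values shown here are invented for illustration.
rows = [
    {"Name": "patch_001", "Species": "Wheat",   "TVT-split1": "Training"},
    {"Name": "patch_002", "Species": "Maize",   "TVT-split1": "Validation"},
    {"Name": "patch_003", "Species": "Soybean", "TVT-split1": "Test"},
]

def subset(rows, split_field, value):
    """Return the names of patches assigned to one partition of a split."""
    return [r["Name"] for r in rows if r[split_field] == value]

train = subset(rows, "TVT-split1", "Training")
print(train)  # ['patch_001']
```

Because five independent split columns are provided, the same helper can be pointed at `TVT-split2` through `TVT-split5` to reproduce different experimental setups without reshuffling the data.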

### Data Splits 📊

The dataset is structured into multiple splits (as indicated by TVT-split fields) to support different training, validation, and testing scenarios in machine learning workflows.

## Dataset Creation 🛠

### Curation Rationale 🤔

The VegAnn dataset was developed to address the gap in available datasets for training convolutional neural networks (CNNs) for the task of semantic segmentation in real-world agricultural environments. By incorporating images from a wide array of conditions and stages of crop development, VegAnn aims to enhance the performance of segmentation algorithms, promote benchmarking, and foster research on large-scale crop vegetation segmentation.

### Source Data 🌱

#### Initial Data Collection and Normalization

Images within VegAnn were sourced from various sub-datasets contributed by different institutions, each under specific acquisition configurations. These were then standardized into 512x512 pixel patches to maintain consistency across the dataset.
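The standardization into fixed-size patches can be pictured as tiling each raw image. The paper describes the actual patch-extraction procedure; the sketch below is only a simplified, non-overlapping tiling that keeps tiles fitting entirely inside the image:

```python
def tile(width, height, patch=512):
    """Yield the top-left corners of non-overlapping patch x patch tiles
    that fit entirely inside a width x height image. A simplified sketch;
    it is not the exact procedure used to build VegAnn."""
    for y in range(0, height - patch + 1, patch):
        for x in range(0, width - patch + 1, patch):
            yield x, y

# A hypothetical 1200 x 1024 raw image yields a 2 x 2 grid of full tiles;
# the right and bottom margins that cannot hold a full tile are dropped.
corners = list(tile(1200, 1024))
print(corners)  # [(0, 0), (512, 0), (0, 512), (512, 512)]
```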

#### Who are the source data providers?

The data was provided by a collaboration of institutions including Arvalis, INRAe, The University of Tokyo, University of Queensland, NEON, and EOLAB, among others.


### Annotations 📝

#### Annotation process

Annotations for the dataset were focused on distinguishing between vegetation and background within the images. The process ensured that the images offered sufficient spatial resolution to allow for accurate visual segmentation.

#### Who are the annotators?

The annotations were performed by a team comprising researchers and domain experts from the contributing institutions.

## Considerations for Using the Data 🤓

### Social Impact of Dataset 🌍

The VegAnn dataset is expected to significantly impact agricultural research and commercial applications by enhancing the accuracy of crop monitoring, disease detection, and yield estimation through improved vegetation segmentation techniques.

### Discussion of Biases 🧐

Given the diverse sources of the images, there may be inherent biases toward certain crop types, geographic locations, and imaging conditions. Users should account for these imbalances when training models or interpreting results.

## Licensing Information 📄

Please refer to the specific licensing agreements of the contributing institutions or contact the dataset providers for more information on usage rights and restrictions.

## Citation Information 📚

If you use the VegAnn dataset in your research, please cite the following:

```bibtex
@article{madec_vegann_2023,
  title = {{VegAnn}, {Vegetation} {Annotation} of multi-crop {RGB} images acquired under diverse conditions for segmentation},
  volume = {10},
  issn = {2052-4463},
  url = {https://doi.org/10.1038/s41597-023-02098-y},
  doi = {10.1038/s41597-023-02098-y},
  abstract = {Applying deep learning to images of cropping systems provides new knowledge and insights in research and commercial applications. Semantic segmentation or pixel-wise classification, of RGB images acquired at the ground level, into vegetation and background is a critical step in the estimation of several canopy traits. Current state of the art methodologies based on convolutional neural networks (CNNs) are trained on datasets acquired under controlled or indoor environments. These models are unable to generalize to real-world images and hence need to be fine-tuned using new labelled datasets. This motivated the creation of the VegAnn - Vegetation Annotation - dataset, a collection of 3775 multi-crop RGB images acquired for different phenological stages using different systems and platforms in diverse illumination conditions. We anticipate that VegAnn will help improving segmentation algorithm performances, facilitate benchmarking and promote large-scale crop vegetation segmentation research.},
  number = {1},
  journal = {Scientific Data},
  author = {Madec, Simon and Irfan, Kamran and Velumani, Kaaviya and Baret, Frederic and David, Etienne and Daubige, Gaetan and Samatan, Lucas Bernigaud and Serouart, Mario and Smith, Daniel and James, Chrisbin and Camacho, Fernando and Guo, Wei and De Solan, Benoit and Chapman, Scott C. and Weiss, Marie},
  month = may,
  year = {2023},
  pages = {302},
}
```

## Additional Information