# AgroMind
A comprehensive agricultural remote sensing benchmark covering four task dimensions: Spatial Perception, Object Understanding, Scene Understanding, and Scene Reasoning, with a total of 13 task types, ranging from crop identification and health monitoring to environmental analysis.
## Links
- GitHub Pages: https://rssysu.github.io/AgroMind/
- Paper (arXiv): https://arxiv.org/abs/2505.12207
- Dataset: https://huggingface.co/datasets/AgroMind/AgroMind
- Code: https://github.com/rssysu/AgroMind
## Structure
Please download the datasets one by one and place them in the same directory:
```
./
├── Agriculture
├── Fruit
├── Leaf_diseases
├── Oil_palm_trees
├── Pest
├── Rural
├── Trees
├── corn
├── crop
├── CropHarvest
└── QA
```
## Key Features
### Multidimensional Evaluation
- Spatial Perception
- Object Understanding
- Scene Understanding
- Scene Reasoning
### Technical Specifications
- 13 specialized agricultural tasks
- Multimodal data support
## Dataset / Benchmarks
Each JSON file (in `QA.zip`) contains questions of the same level-3 task type, with items structured as follows:
```jsonc
{
  "image_path": "path/to/image",   // Image file path
  "type_id": question_format_type, // Question response format
  "item_id": "id",                 // Question ID within this file (numbering starts at 1)
  "level1_id": "main_category",    // Top-level task dimension
  "level2_id": "sub_category",     // Task subtype
  "level3_id": "specific_task",    // Detailed task type
  "question": "query_text",        // Natural-language question
  "options": ["A", ...],           // Answer choices (when applicable)
  "answer": "correct_response"     // Ground-truth answer
}
```
Download the Hugging Face dataset into the `./images` directory of this GitHub project and unzip it. You can then iterate over the items to obtain image paths and their corresponding questions for model evaluation.
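As a minimal sketch of that workflow, the snippet below loads every question item from an unzipped QA directory and builds a plain-text prompt for one item. It assumes each JSON file holds a list of items with the schema shown above; the directory path and the prompt format are illustrative, not part of the benchmark.

```python
import json
from pathlib import Path


def load_qa_items(qa_dir):
    """Load all question items from the unzipped QA directory.

    Assumes each *.json file contains a JSON list of items sharing
    the same level-3 task type (schema as documented above).
    """
    items = []
    for json_file in sorted(Path(qa_dir).glob("*.json")):
        with open(json_file, encoding="utf-8") as f:
            items.extend(json.load(f))
    return items


def format_prompt(item):
    """Build a plain-text prompt from one item.

    Appends the answer choices when the item is multiple-choice.
    """
    prompt = item["question"]
    if item.get("options"):
        prompt += "\nOptions: " + " ".join(item["options"])
    return prompt
```

From there, `item["image_path"]` gives the image to pass to the model alongside the prompt, and `item["answer"]` is the ground truth for scoring.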
## Cite
```bibtex
@misc{li2025largemultimodalmodelsunderstand,
  title={Can Large Multimodal Models Understand Agricultural Scenes? Benchmarking with AgroMind},
  author={Qingmei Li and Yang Zhang and Zurong Mai and Yuhang Chen and Shuohong Lou and Henglian Huang and Jiarui Zhang and Zhiwei Zhang and Yibin Wen and Weijia Li and Haohuan Fu and Jianxi Huang and Juepeng Zheng},
  year={2025},
  eprint={2505.12207},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2505.12207},
}
```