lisawen0707 committed
Commit 7f2ce96
0 Parent(s)
Files changed (2)
  1. README.md +40 -0
  2. soybean_dataset.py +119 -0
README.md ADDED
@@ -0,0 +1,40 @@
# Dataset Card for Mechanized Soybean Harvest Quality Image Dataset

This dataset contains images captured during the mechanized harvesting of soybeans, intended to support the development of machine vision and deep learning models for quality analysis. For each picture it provides the original image, the corresponding segmentation mask, and a label indicating whether the picture belongs to the training, validation, or test split.

## Dataset Description

The dataset comprises 40 original images of harvested soybeans, augmented to 800 images through transformations such as scaling, rotation, flipping, filtering, and noise addition. The images were captured on October 9, 2018, at the soybean experimental field of the Liangfeng Grain and Cotton Planting Professional Cooperative in Liangshan, Shandong, China. The collection is intended for developing online detection models for soybean quality during mechanized harvesting.
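
The exact augmentation settings are not specified in this card. As a rough illustration, transformations of the kinds listed above could be produced with Pillow and NumPy along the following lines (the parameters and file name are placeholder assumptions, not the authors' settings; geometric transforms would also need to be applied to the matching segmentation mask):

```python
from PIL import Image, ImageFilter, ImageOps
import numpy as np

def augment(img, rng):
    """Return a few illustrative variants of one image (all parameters are assumptions)."""
    variants = [
        img.resize((img.width // 2, img.height // 2)),    # scaling
        img.rotate(15, expand=True),                      # rotating
        ImageOps.mirror(img),                             # flipping (horizontal)
        img.filter(ImageFilter.GaussianBlur(radius=1)),   # filtering
    ]
    arr = np.asarray(img, dtype=np.float32)
    noisy = np.clip(arr + rng.normal(0.0, 10.0, arr.shape), 0, 255)  # noise addition
    variants.append(Image.fromarray(noisy.astype(np.uint8)))
    return variants

# Placeholder file name; expand one original image into several augmented copies.
original = Image.open("JPEGImages/example.jpg").convert("RGB")
augmented = augment(original, np.random.default_rng(0))
```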

## Dataset Sources

The images were obtained using an industrial camera during the mechanized harvesting process and subsequently annotated by experts in the field.

## Uses

The dataset is designed for:

- Developing online detection models for soybean quality.
- Analyzing soybean mechanization processes.
- Training deep learning algorithms for image classification and feature extraction.

## Out-of-Scope Use

The dataset should not be employed for non-agricultural applications or outside the context of soybean quality detection during mechanization.

## Original Dataset Structure

The dataset is structured into three main folders:

- `JPEGImages`: contains 800 JPG images of soybeans.
- `SegmentationClass`: contains PNG images with annotations.
- `ImageSets`: contains TXT records for data partitioning.

## Data Collection and Processing

The main goal is to combine all the files into a single dataset with the following columns (see the loading sketch after this list):

- `unique_id`: str. Unique identifier for each picture.
- `sets`: str, categorical. The partition the picture belongs to (`train`/`valid`/`test`).
- `original_image`: path to the original JPG image of the soybeans.
- `segmentation_image`: path to the corresponding annotated PNG image.
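
A minimal sketch of this combination step, assuming a VOC-style layout in which each TXT file under `ImageSets` lists one image ID per line (the split file names and extensions are assumptions):

```python
from pathlib import Path

def build_records(root):
    """Join ImageSets, JPEGImages and SegmentationClass into flat rows
    with the columns described above (layout details are assumptions)."""
    root = Path(root)
    records = []
    for split_file in sorted((root / "ImageSets").glob("*.txt")):
        split = split_file.stem  # e.g. "train", "valid" or "test"
        for unique_id in split_file.read_text().split():
            records.append({
                "unique_id": unique_id,
                "sets": split,
                "original_image": str(root / "JPEGImages" / f"{unique_id}.jpg"),
                "segmentation_image": str(root / "SegmentationClass" / f"{unique_id}.png"),
            })
    return records
```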

## Curation Rationale

The creation of this dataset was motivated by the need for a standardized dataset that reflects the real conditions of mechanized soybean harvesting, for use in quality detection research.

## Annotation Process

Field experts annotated the dataset, manually labeling the different components of the soybean images with polygonal annotations.

## Bias, Risks, and Limitations

The dataset is limited to a specific soybean variety and harvesting environment, which may affect its generalizability. Future expansions are planned to include more diversity.

## Recommendations

Users should follow ethical guidelines for handling data and consider the dataset's limitations when interpreting results from their models.

## Dataset Card Authors

Man Chen, Chengqian Jin, Youliang Ni, Tengxiang Yang, and Jinshan Xu contributed to the dataset preparation and curation.

## Citation

Chen, M., Jin, C., Ni, Y., Yang, T., & Xu, J. (2024). A dataset of the quality of soybean harvested by mechanization for deep-learning-based monitoring and analysis. Data in Brief, 52, 109833. https://doi.org/10.1016/j.dib.2023.109833

## Acknowledgements

This research received partial funding from several grants from the National Natural Science Foundation of China, the National Key Research and Development Program of China, and the Natural Science Foundation of Jiangsu.
soybean_dataset.py ADDED
@@ -0,0 +1,119 @@
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# TODO: Address all TODOs and remove all explanatory comments
"""TODO: Add a description here."""


import csv
import json
import os
from typing import List
import datasets
import logging

# TODO: Add BibTeX citation
# Find for instance the citation on arxiv or on the dataset repo/website
_CITATION = """\
@InProceedings{huggingface:dataset,
title = {A great new dataset},
author={huggingface, Inc.
},
year={2020}
}
"""

# TODO: Add description of the dataset here
# You can copy an official description
_DESCRIPTION = """\
This new dataset is designed to solve this great NLP task and is crafted with a lot of care.
"""

# TODO: Add a link to an official homepage for the dataset here
_HOMEPAGE = ""

# TODO: Add the licence for the dataset here if you can find it
_LICENSE = ""

# TODO: Add link to the official dataset URLs here
# The HuggingFace Datasets library doesn't host the datasets but only points to the original files.
# This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
_URL = "https://rajpurkar.github.io/SQuAD-explorer/dataset/"
_URLS = {
    "train": _URL + "train-v1.1.json",
    "dev": _URL + "dev-v1.1.json",
}


# TODO: Name of the dataset usually matches the script name with CamelCase instead of snake_case
class SquadDataset(datasets.GeneratorBasedBuilder):
    """TODO: Short description of my dataset."""

    _URLS = _URLS
    VERSION = datasets.Version("1.1.0")

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "title": datasets.Value("string"),
                    "context": datasets.Value("string"),
                    "question": datasets.Value("string"),
                    "answers": datasets.features.Sequence(
                        {"text": datasets.Value("string"), "answer_start": datasets.Value("int32")}
                    ),
                }
            ),
            # No default supervised_keys (as we have to pass both question
            # and context as input).
            supervised_keys=None,
            homepage="https://rajpurkar.github.io/SQuAD-explorer/",
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
        urls_to_download = self._URLS
        downloaded_files = dl_manager.download_and_extract(urls_to_download)

        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_files["dev"]}),
        ]

    def _generate_examples(self, filepath):
        """This function returns the examples in the raw (text) form."""
        logging.info("generating examples from = %s", filepath)
        with open(filepath) as f:
            squad = json.load(f)
            for article in squad["data"]:
                title = article.get("title", "").strip()
                for paragraph in article["paragraphs"]:
                    context = paragraph["context"].strip()
                    for qa in paragraph["qas"]:
                        question = qa["question"].strip()
                        id_ = qa["id"]

                        answer_starts = [answer["answer_start"] for answer in qa["answers"]]
                        answers = [answer["text"].strip() for answer in qa["answers"]]

                        # Features currently used are "context", "question", and "answers".
                        # Others are extracted here for the ease of future expansions.
                        yield id_, {
                            "title": title,
                            "context": context,
                            "question": question,
                            "id": id_,
                            "answers": {"answer_start": answer_starts, "text": answers},
                        }
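
As committed, this builder is still the unmodified SQuAD template rather than a soybean-specific loader, so building it downloads the SQuAD v1.1 files referenced in `_URLS`. A quick local smoke test could look like the following (the relative path is a placeholder, and loading from a local script requires a `datasets` release that still supports script-based builders):

```python
import datasets

# Placeholder path; requires a datasets version with script-based builder support.
ds = datasets.load_dataset("./soybean_dataset.py", split="train")
print(ds[0])
```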