corentinm7 committed
Commit c9fe814
1 Parent(s): f95083b

Upload 2 files

Files changed (2):
  1. README.md +23 -147
  2. SDH_16k.py +133 -0
README.md CHANGED
@@ -1,149 +1,25 @@
 ---
-annotations_creators:
-- expert-generated
-language: []
-language_creators:
-- expert-generated
-license:
-- agpl-3.0
-multilinguality: []
-pretty_name: SDH staining muscle fiber histology images used to train MyoQuant model.
-size_categories:
-- 10K<n<100K
-source_datasets:
-- original
-tags:
-- myology
-- biology
-- histology
-- muscle
-- cells
-- fibers
-- myopathy
-- SDH
-- myoquant
-task_categories:
-- image-classification
+dataset_info:
+  features:
+  - name: image
+    dtype: image
+  - name: label
+    dtype:
+      class_label:
+        names:
+          0: control
+          1: sick
+  config_name: SDH_16k
+  splits:
+  - name: test
+    num_bytes: 683067
+    num_examples: 3358
+  - name: train
+    num_bytes: 2466024
+    num_examples: 12085
+  - name: validation
+    num_bytes: 281243
+    num_examples: 1344
+  download_size: 2257836789
+  dataset_size: 3430334
 ---
-# Dataset Card for MyoQuant SDH Data
-
-## Table of Contents
-- [Table of Contents](#table-of-contents)
-- [Dataset Description](#dataset-description)
-  - [Dataset Summary](#dataset-summary)
-  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-  - [Languages](#languages)
-- [Dataset Structure](#dataset-structure)
-  - [Data Instances](#data-instances)
-  - [Data Fields](#data-fields)
-  - [Data Splits](#data-splits)
-- [Dataset Creation](#dataset-creation)
-  - [Curation Rationale](#curation-rationale)
-  - [Source Data](#source-data)
-  - [Annotations](#annotations)
-  - [Personal and Sensitive Information](#personal-and-sensitive-information)
-- [Considerations for Using the Data](#considerations-for-using-the-data)
-  - [Social Impact of Dataset](#social-impact-of-dataset)
-  - [Discussion of Biases](#discussion-of-biases)
-  - [Other Known Limitations](#other-known-limitations)
-- [Additional Information](#additional-information)
-  - [Dataset Curators](#dataset-curators)
-  - [Licensing Information](#licensing-information)
-  - [Citation Information](#citation-information)
-  - [Contributions](#contributions)
-
-## Dataset Description
-
-- **Homepage:**
-- **Repository:**
-- **Paper:**
-- **Leaderboard:**
-- **Point of Contact:**
-
-### Dataset Summary
-
-[More Information Needed]
-
-### Supported Tasks and Leaderboards
-
-[More Information Needed]
-
-### Languages
-
-[More Information Needed]
-
-## Dataset Structure
-
-### Data Instances
-
-[More Information Needed]
-
-### Data Fields
-
-[More Information Needed]
-
-### Data Splits
-
-[More Information Needed]
-
-## Dataset Creation
-
-### Curation Rationale
-
-[More Information Needed]
-
-### Source Data
-
-#### Initial Data Collection and Normalization
-
-[More Information Needed]
-
-#### Who are the source language producers?
-
-[More Information Needed]
-
-### Annotations
-
-#### Annotation process
-
-[More Information Needed]
-
-#### Who are the annotators?
-
-[More Information Needed]
-
-### Personal and Sensitive Information
-
-[More Information Needed]
-
-## Considerations for Using the Data
-
-### Social Impact of Dataset
-
-[More Information Needed]
-
-### Discussion of Biases
-
-[More Information Needed]
-
-### Other Known Limitations
-
-[More Information Needed]
-
-## Additional Information
-
-### Dataset Curators
-
-[More Information Needed]
-
-### Licensing Information
-
-[More Information Needed]
-
-### Citation Information
-
-[More Information Needed]
-
-### Contributions
-
-Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
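
The `class_label` block in the front matter above fixes the integer encoding of the two classes: id 0 is `control`, id 1 is `sick`. A minimal sketch of that mapping in plain Python (the `datasets` library builds equivalent lookups from the same declaration):

```python
# Integer ids as declared in the front matter's class_label.names block.
names = {0: "control", 1: "sick"}

# Forward (id -> name) and reverse (name -> id) lookups.
int2str = dict(names)
str2int = {name: idx for idx, name in names.items()}

print(int2str[1])          # name stored for label id 1 -> "sick"
print(str2int["control"])  # id stored for the "control" class -> 0
```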
SDH_16k.py ADDED
@@ -0,0 +1,133 @@
+# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+"""MyoQuant-SDH-Data: The MyoQuant SDH Model Data."""
+
+
+import json
+import os
+
+import datasets
+
+
+_CITATION = """\
+@InProceedings{Meyer,
+title = {MyoQuant SDH Data},
+author={Corentin Meyer},
+year={2022}
+}
+"""
+
+_NAMES = ["control", "sick"]
+
+_DESCRIPTION = """\
+This dataset is used to train the SDH model of MyoQuant to detect and quantify anomalies in the mitochondrial distribution in SDH-stained muscle fibers from patients with myopathy disorders.
+"""
+
+_HOMEPAGE = "https://huggingface.co/datasets/corentinm7/MyoQuant-SDH-Data"
+
+_LICENSE = "agpl-3.0"
+
+_URLS = {
+    "SDH_16k": "https://huggingface.co/datasets/corentinm7/MyoQuant-SDH-Data/resolve/main/SDH_16k/SDH_16k.zip"
+}
+_METADATA_URL = {
+    "SDH_16k_metadata": "https://huggingface.co/datasets/corentinm7/MyoQuant-SDH-Data/resolve/main/SDH_16k/metadata.jsonl"
+}
+
+
+class SDH_16k(datasets.GeneratorBasedBuilder):
+    """Builder for the SDH_16k images used to train the MyoQuant SDH model."""
+
+    VERSION = datasets.Version("1.0.0")
+
+    # This dataset ships a single configuration, SDH_16k.
+    # A default configuration is not mandatory; use one if it makes sense.
+    DEFAULT_CONFIG_NAME = "SDH_16k"
+
+    def _info(self):
+        return datasets.DatasetInfo(
+            description=_DESCRIPTION,
+            features=datasets.Features(
+                {
+                    "image": datasets.Image(),
+                    "label": datasets.ClassLabel(num_classes=2, names=_NAMES),
+                }
+            ),
+            supervised_keys=("image", "label"),
+            homepage=_HOMEPAGE,
+            citation=_CITATION,
+            license=_LICENSE,
+            task_templates=[
+                datasets.ImageClassification(image_column="image", label_column="label")
+            ],
+        )
+
+    def _split_generators(self, dl_manager):
+        archive_path = dl_manager.download(_URLS)
+        split_metadata_path = dl_manager.download(_METADATA_URL)
+        # Group the JSONL metadata entries (one JSON object per line) by their "split" field.
+        files_metadata = {}
+        with open(split_metadata_path["SDH_16k_metadata"], encoding="utf-8") as f:
+            for line in f.read().splitlines():
+                file_json_metadata = json.loads(line)
+                files_metadata.setdefault(file_json_metadata["split"], []).append(
+                    file_json_metadata
+                )
+        downloaded_files = dl_manager.download_and_extract(archive_path)
+        return [
+            datasets.SplitGenerator(
+                name=datasets.Split.TRAIN,
+                gen_kwargs={
+                    "download_path": downloaded_files["SDH_16k"],
+                    "metadata": files_metadata["train"],
+                },
+            ),
+            datasets.SplitGenerator(
+                name=datasets.Split.VALIDATION,
+                gen_kwargs={
+                    "download_path": downloaded_files["SDH_16k"],
+                    "metadata": files_metadata["validation"],
+                },
+            ),
+            datasets.SplitGenerator(
+                name=datasets.Split.TEST,
+                gen_kwargs={
+                    "download_path": downloaded_files["SDH_16k"],
+                    "metadata": files_metadata["test"],
+                },
+            ),
+        ]
+
+    def _generate_examples(self, download_path, metadata):
+        """Generate images and labels for splits."""
+        for single_metadata in metadata:
+            # Images live at <extracted archive>/<split>/<label>/<file_name>.
+            img_path = os.path.join(
+                download_path,
+                single_metadata["split"],
+                single_metadata["label"],
+                single_metadata["file_name"],
+            )
+            with open(img_path, "rb") as image_file:
+                image_bytes = image_file.read()
+            yield single_metadata["file_name"], {
+                "image": {"path": img_path, "bytes": image_bytes},
+                "label": single_metadata["label"],
+            }
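
The `_split_generators` / `_generate_examples` pair above hinges on `metadata.jsonl`: one JSON object per line carrying `split`, `label`, and `file_name`. A small self-contained sketch of that grouping and path reconstruction, using made-up example lines as stand-ins for the real metadata file downloaded by the script:

```python
import json
import os

# Hypothetical stand-ins for lines of metadata.jsonl.
jsonl_lines = [
    '{"split": "train", "label": "control", "file_name": "a.tif"}',
    '{"split": "train", "label": "sick", "file_name": "b.tif"}',
    '{"split": "test", "label": "control", "file_name": "c.tif"}',
]

# Group entries by split, as _split_generators does with setdefault.
files_metadata = {}
for line in jsonl_lines:
    entry = json.loads(line)
    files_metadata.setdefault(entry["split"], []).append(entry)

# Rebuild the on-disk image path the way _generate_examples does:
# <extracted archive>/<split>/<label>/<file_name>.
download_path = "SDH_16k"  # placeholder for the extracted archive directory
first = files_metadata["train"][0]
img_path = os.path.join(
    download_path, first["split"], first["label"], first["file_name"]
)
print(img_path)  # e.g. SDH_16k/train/control/a.tif on POSIX systems
```

Each split generator then receives only its own slice of the metadata, so a split never touches another split's images.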