GoodBaiBai88 committed on
Commit 3f72d00
1 Parent(s): 56f7deb

Update README.md

Files changed (1):
  1. README.md +80 -37
README.md CHANGED
@@ -1,33 +1,53 @@
  ---
  license: apache-2.0
  tags:
- - medical
  - 3D medical segmentation
  size_categories:
  - 1K<n<10K
  ---

  ## Dataset Description
  Large-scale General 3D Medical Image Segmentation Dataset (M3D-Seg)

  ### Dataset Introduction
- 3D medical segmentation is one of the main challenges in medical image analysis.
- Currently, due to privacy and cost limitations, there is a lack of large-scale publicly available 3D medical images and annotations.
- To address this, we have collected 25 publicly available 3D CT segmentation datasets,
- including CHAOS, HaN-Seg, AMOS22, AbdomenCT-1k, KiTS23, KiPA22, KiTS19, BTCV, Pancreas-CT, 3D-IRCADB, FLARE22, TotalSegmentator,
- CT-ORG, WORD, VerSe19, VerSe20, SLIVER07, QUBIQ, MSD-Colon, MSD-HepaticVessel, MSD-Liver, MSD-lung, MSD-pancreas, MSD-spleen,
- LUNA16. These datasets are uniformly encoded from 0000-0024, totaling 5,772 3D images and 149,196 3D mask annotations.
- Each mask corresponds to semantic labels represented in text.
- Within each folder, there are two sub-folders, ct and gt, storing data and annotations respectively, and utilizing json files for splitting.
- ‘dataset_info.txt’ describes the textual representation of each dataset label.
- As a universal segmentation dataset, more public and private datasets can be unified in the same format,
- thus building a large-scale 3D medical universal segmentation dataset.
-

  ### Supported Tasks
- As data can be represented in the form of image-mask-text, where masks can be converted to box coordinates through bounding boxes,
- the dataset supports tasks such as: 3D segmentation: semantic segmentation, textual hint segmentation, inference segmentation, etc.
- 3D localization: visual grounding, referring expression comprehension, referring expression generation.

  ## Dataset Format and Structure

@@ -35,48 +55,71 @@ the dataset supports tasks such as: 3D segmentation: semantic segmentation, text
  <pre>
  M3D_Seg/
      0000/
-         ct/
-             case_00000.npy
-             ......
-         gt/
-             case_00000.(3, 512, 512, 611).npz
-             ......
      0000.json
      0001/
      ......
  </pre>

  ### Dataset Download

  #### Clone with HTTP
  ```bash
- git clone
  ```
  #### Manual Download
- Download all files from the dataset file manually, which can be done using batch download tools.
- Note: Since the 0024 dataset is large, its compressed files are split into 00, 01, 02 three files.
- Please merge and decompress them after downloading.
- As the foreground in mask files is often sparse, to save storage space, we use sparse matrices for storage, saved as npz files,
- with the file name containing the mask shape, please refer to ‘data_load_demo.py’ for data reading.

  ### Dataset Loading Method
- #### 1. If downloading this dataset directly, ‘data_process.py’ is not required for processing, skip directly to step 2
- Raw data downloaded from the original data must be processed through ‘data_process.py’ and unified into the M3D-Seg dataset.
- Please note that due to preprocessing, there are differences between the data provided by this dataset and its original nii.gz files.
- Please refer to ‘data_process.py’ for processing methods.

  #### 2. Build Dataset
- We provide sample code for three tasks' Datasets, including semantic segmentation, hint segmentation, and inference segmentation.

  ```python

  ```

- ### Data Splitting
- Each file is split into ‘train, validation/test’ using json files, for ease of training and testing models.

  ### Dataset Sources

@@ -89,7 +132,7 @@ Each file is split into ‘train, validation/test’ using json files, for ease
  | 0004 |KiTS23| https://kits-challenge.org/kits23/|
  | 0005 |KiPA22| https://kipa22.grand-challenge.org/|
  | 0006 |KiTS19| https://kits19.grand-challenge.org/|
- | 0007 |BTCV| https://www.synapse.org/\#!Synapse:syn3193805/wiki/217752|
  | 0008 |Pancreas-CT| https://wiki.cancerimagingarchive.net/display/public/pancreas-ct|
  | 0009 | 3D-IRCADB | https://www.kaggle.com/datasets/nguyenhoainam27/3dircadb |
  | 0010 |FLARE22| https://flare22.grand-challenge.org/|
@@ -111,7 +154,7 @@ Each file is split into ‘train, validation/test’ using json files, for ease
  ## Dataset Copyright Information

- All datasets involved in this dataset are publicly available datasets. For detailed copyright information, please refer to the corresponding dataset links.

  ## Citation
  If you use this dataset, please cite the following works:

  ---
  license: apache-2.0
  tags:
+ - multi-modal
  - 3D medical segmentation
  size_categories:
  - 1K<n<10K
  ---

+ ![Data_visualization](M3D_Seg.jpg)
+
  ## Dataset Description
  Large-scale General 3D Medical Image Segmentation Dataset (M3D-Seg)

  ### Dataset Introduction
+ 3D medical image segmentation poses a significant challenge in medical image analysis.
+ Currently, due to privacy and cost constraints, publicly available large-scale 3D medical images
+ and their annotations are scarce. To address this, we have collected 25 publicly available
+ 3D CT segmentation datasets, including CHAOS, HaN-Seg, AMOS22, AbdomenCT-1k, KiTS23, KiPA22,
+ KiTS19, BTCV, Pancreas-CT, 3D-IRCADB, FLARE22, TotalSegmentator, CT-ORG, WORD, VerSe19, VerSe20,
+ SLIVER07, QUBIQ, MSD-Colon, MSD-HepaticVessel, MSD-Liver, MSD-lung, MSD-pancreas, MSD-spleen, LUNA16.
+ These datasets are uniformly encoded from 0000 to 0024, totaling 5,772 3D images and 149,196 3D mask annotations.
+ The semantic label corresponding to each mask is represented in text.
+ Each sub-dataset folder contains multiple data folders (holding the image and mask files),
+ and each sub-dataset uses its own JSON file to define the data split.
+
+ - **data_load_demo.py**: Provides example code for reading images and masks from the dataset.
+ - **data_process.py**: Shows how raw `nii.gz` (or other format) data are converted into the more efficient `npy` format, preprocessed,
+ and saved in a unified layout. This dataset has already been preprocessed, so there is no need to run data_process.py again.
+ If adding new datasets, please follow the same processing approach.
+ - **dataset_info.json & dataset_info.txt**: Contain the name of each dataset and its label texts.
+ - **term_dictionary.json**: Provides multiple definitions or descriptions for each semantic label in the dataset,
+ generated by `ChatGPT` for each term. Researchers can convert category IDs in the dataset to label texts
+ using dataset_info.txt, and further convert them into text descriptions using term_dictionary.json as text inputs for segmentation models,
+ enabling tasks such as segmentation from text prompts and referring segmentation (see the sketch after this list).
+
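+ As an illustration of that ID-to-label-to-description pipeline, here is a minimal sketch.
+ The exact key layout of dataset_info.json and term_dictionary.json is an assumption here; adjust it to the actual files:
+
+ ```python
+ import json
+ import random
+
+ with open("dataset_info.json") as f:
+     dataset_info = json.load(f)        # assumed: {"0000": {"name": ..., "labels": {"1": "liver", ...}}, ...}
+ with open("term_dictionary.json") as f:
+     term_dictionary = json.load(f)     # assumed: {"liver": ["description 1", ...], ...}
+
+ label_text = dataset_info["0000"]["labels"]["1"]     # category ID -> label text
+ prompt = random.choice(term_dictionary[label_text])  # label text -> one ChatGPT-generated description
+ print(prompt)                                        # usable as a text prompt for a segmentation model
+ ```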
+ This dataset supports not only traditional semantic segmentation tasks but also text-based segmentation tasks.
+ For detailed methods, please refer to [SegVol](https://github.com/BAAI-DCAI/SegVol) and [M3D](https://github.com/BAAI-DCAI/M3D).
+ As a general segmentation dataset, we provide a convenient, unified, and structured organization
+ that allows more public and private datasets to be integrated in the same format,
+ thereby constructing a larger-scale general 3D medical image segmentation dataset.

  ### Supported Tasks
+ This dataset not only supports traditional image-mask semantic segmentation tasks
+ but also represents data in the form of image-mask-text, where masks can be converted into box coordinates
+ through bounding boxes (see the sketch after this list). On this basis, the dataset can support a range of
+ segmentation and localization tasks, as follows:
+
+ - **3D Segmentation**: Semantic segmentation, text-based segmentation, referring segmentation, reasoning segmentation, etc.
+ - **3D Localization**: Visual grounding/referring expression comprehension, referring expression generation.
 
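+ The mask-to-box conversion needs nothing beyond NumPy; a minimal sketch for a binary 3D mask:
+
+ ```python
+ import numpy as np
+
+ def mask_to_box(mask):
+     """Axis-aligned bounding box of a binary 3D mask: (min, max) index per axis."""
+     coords = np.nonzero(mask)
+     return [(int(c.min()), int(c.max()) + 1) for c in coords]
+
+ demo = np.zeros((8, 8, 8), dtype=bool)
+ demo[2:5, 1:4, 3:6] = True
+ print(mask_to_box(demo))  # [(2, 5), (1, 4), (3, 6)]
+ ```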
  ## Dataset Format and Structure

  <pre>
  M3D_Seg/
      0000/
+         1/
+             image.npy
+             mask_(1, 512, 512, 96).npz
+         2/
+         ......
      0000.json
      0001/
      ......
  </pre>

  ### Dataset Download
+ The total dataset size is approximately 224 GB.
+
  #### Clone with HTTP
  ```bash
+ git clone https://huggingface.co/datasets/GoodBaiBai88/M3D-Seg
+ ```
+
+ #### SDK Download
+ ```python
+ from datasets import load_dataset
+ dataset = load_dataset("GoodBaiBai88/M3D-Seg")
  ```
  #### Manual Download
+ Manually download all files from the dataset repository. It is recommended to use batch download tools for efficient downloading.
+ Please note the following:
+
+ - **Downloading in Parts and Merging**: Since dataset 0024 has a large volume,
+ the original compressed file has been split into two parts: `0024_1` and `0024_2`.
+ Make sure to download both files and unzip them in the same directory to ensure data integrity.
+
+ - **Masks with Sparse Matrices**: To save storage space,
+ foreground information in masks is stored in sparse matrix format and saved with the extension `.npz`.
+ The name of each mask file includes its shape information for identification and loading.
+
+ - **Data Load Demo**: The script data_load_demo.py is the reference for correctly
+ reading the sparse matrix format of masks and other related data; please refer to it for the
+ specific loading procedure and required dependencies. A simplified sketch follows this list.
+
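+ A minimal loading sketch. It assumes each mask was flattened and saved with `scipy.sparse.save_npz`,
+ with the original 4-D shape embedded in the filename; data_load_demo.py remains the authoritative version:
+
+ ```python
+ import ast
+ import re
+ import numpy as np
+ from scipy import sparse
+
+ def load_case(image_path, mask_path):
+     """Load one case: dense image (.npy) plus sparse-encoded mask (.npz)."""
+     image = np.load(image_path)  # e.g. 0000/1/image.npy
+     # Recover the mask shape from a name like mask_(1, 512, 512, 96).npz
+     shape = ast.literal_eval(re.search(r"\(.*\)", mask_path).group(0))
+     mask = sparse.load_npz(mask_path).toarray().reshape(shape)  # densify, restore 4-D shape
+     return image, mask
+ ```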

  ### Dataset Loading Method
+ #### 1. Direct Usage of Preprocessed Data
+ If you have already downloaded the preprocessed dataset, no additional data processing is required;
+ you can jump directly to step 2 to build and load the dataset.
+ Please note that the contents of this dataset have been transformed and renumbered by data_process.py
+ and therefore differ from the original `nii.gz` files. For the specific preprocessing steps,
+ refer to data_process.py. If adding new datasets or modifying existing ones,
+ please follow data_process.py for preprocessing and uniform formatting; a simplified sketch of the conversion follows.
+
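+ For orientation only, a sketch of the kind of conversion data_process.py performs. The use of `nibabel`,
+ the min-max normalization, and the output naming are illustrative assumptions; data_process.py defines the actual transforms:
+
+ ```python
+ import numpy as np
+ import nibabel as nib
+ from scipy import sparse
+
+ def convert_case(ct_nii, gt_nii, out_dir):
+     """Convert one nii.gz image/label pair into the npy/npz layout (illustrative)."""
+     image = nib.load(ct_nii).get_fdata().astype(np.float32)
+     label = nib.load(gt_nii).get_fdata().astype(np.uint8)
+     image = (image - image.min()) / max(image.max() - image.min(), 1e-8)  # example normalization
+     mask = np.stack([label == c for c in range(1, int(label.max()) + 1)])  # one channel per class
+     np.save(f"{out_dir}/image.npy", image)
+     # Flatten to 2-D, store as a sparse matrix; keep the 4-D shape in the filename.
+     sparse.save_npz(f"{out_dir}/mask_{mask.shape}.npz",
+                     sparse.csr_matrix(mask.reshape(mask.shape[0], -1)))
+ ```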

  #### 2. Build Dataset
+ To facilitate model training and evaluation with this dataset, we provide example code for the Dataset class.
+ Wrap the dataset in your project according to the following example:
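+ The code block in the repository README is left as a placeholder, so the following is a minimal PyTorch
+ sketch for plain semantic segmentation. The JSON key layout and the use of `torch` are assumptions here;
+ the official task-specific Dataset classes are provided in the [M3D](https://github.com/BAAI-DCAI/M3D) repository:
+
+ ```python
+ import ast
+ import json
+ import os
+ import re
+ import numpy as np
+ from scipy import sparse
+ from torch.utils.data import Dataset
+
+ def load_sparse_mask(path):
+     """Densify a mask stored as a flattened scipy sparse matrix (shape in filename)."""
+     shape = ast.literal_eval(re.search(r"\(.*\)", path).group(0))
+     return sparse.load_npz(path).toarray().reshape(shape)
+
+ class M3DSegDataset(Dataset):
+     """Minimal semantic-segmentation Dataset over one M3D-Seg sub-dataset (illustrative)."""
+     def __init__(self, root, json_file, split="train"):
+         # Assumed JSON layout: {"train": [{"image": ..., "label": ...}, ...], "test": [...]}
+         with open(json_file) as f:
+             self.items = json.load(f)[split]
+         self.root = root
+
+     def __len__(self):
+         return len(self.items)
+
+     def __getitem__(self, idx):
+         item = self.items[idx]
+         image = np.load(os.path.join(self.root, item["image"]))
+         mask = load_sparse_mask(os.path.join(self.root, item["label"]))
+         return image, mask
+
+ # Example: train split of sub-dataset 0000
+ # ds = M3DSegDataset("M3D_Seg", "M3D_Seg/0000/0000.json", split="train")
+ ```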

+ ### Data Splitting
+ Each sub-dataset folder is split into `train` and `test` parts through its JSON file
+ (e.g. `0000/0000.json`), facilitating model training and testing.

  ### Dataset Sources

  | 0004 |KiTS23| https://kits-challenge.org/kits23/|
  | 0005 |KiPA22| https://kipa22.grand-challenge.org/|
  | 0006 |KiTS19| https://kits19.grand-challenge.org/|
+ | 0007 |BTCV| https://www.synapse.org/#!Synapse:syn3193805/wiki/217753|
  | 0008 |Pancreas-CT| https://wiki.cancerimagingarchive.net/display/public/pancreas-ct|
  | 0009 | 3D-IRCADB | https://www.kaggle.com/datasets/nguyenhoainam27/3dircadb |
  | 0010 |FLARE22| https://flare22.grand-challenge.org/|
 
  ## Dataset Copyright Information

+ All sub-datasets included in this dataset are publicly available. For detailed copyright information, please refer to the corresponding dataset links.

  ## Citation
  If you use this dataset, please cite the following works: