GoodBaiBai88 committed: Update README.md
README.md CHANGED
---
license: apache-2.0
tags:
- image-text pair
- image-captioning
- 3D medical images
- medical reports
size_categories:
- 100K<n<1M
---
### Dataset Introduction
Medical institutions, such as hospitals, store vast amounts of multi-modal data,
including medical images and diagnostic reports.
However, due to the sensitivity and privacy concerns associated with patient data,
publicly releasing these multimodal datasets poses challenges.
To overcome these limitations, we collected medical images and reports from the publicly
accessible professional medical website [Radiopaedia](https://radiopaedia.org/).
Specifically, each patient case in our dataset consists of multiple 3D images and corresponding reports,
which experts on the Radiopaedia platform have meticulously reviewed.
Given the critical role of 3D CT in medical image analysis, particularly in the diagnosis,
localization, and measurement of systemic lesions, we focused on 3D CT data and
built the largest-scale 3D medical image-text pair dataset to date, M3D-Cap, comprising 120K image-text pairs.

The dataset is divided into two main folders, ct_case and ct_quizze.
The ct_quizze folder is intended for medical exams and exhibits higher quality.
Each folder contains subfolders for images and texts.
The image folders contain multiple 2D slices of the 3D images,
and the text files provide English report descriptions corresponding to the 3D images,
including anomaly types, lesion locations, etc.

- **M3D_Cap.json**: Provides the dataset split.
- **data_examples**: Provides examples of 24 sets of 3D images and text data.
- **M3D_Cap**: Provides the complete dataset; please download this folder.
- **m3d_cap_data_prepare.py**: Provides data preprocessing code, including image normalization,
stacking 3D images from 2D slices, image cropping, and effective text extraction.

Based on the image-text pairs in the M3D-Cap dataset, we created the M3D-VQA (Visual Question Answering) dataset.
Please refer to the [link](https://www.modelscope.cn/datasets/GoodBaiBai88/M3D-VQA).

### Supported Tasks
M3D-Cap supports multimodal tasks in 3D medical scenarios such as image-text retrieval,
report generation, and image generation.

## Dataset Format and Structure

### Data Format
<pre>
M3D_Cap/
    ct_case/
        000006/
            Axial_non_contrast/
            ...
</pre>
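As a quick illustration of this layout, the sketch below synthesizes a miniature copy of the tree in a temporary directory and pairs each image sub-folder with its case-level report. The report filename `text.txt` and the slice name `0.jpeg` are assumptions for illustration; this README does not specify them.

```python
import tempfile
from pathlib import Path

# Miniature mimic of the layout shown above; "text.txt" and "0.jpeg"
# are hypothetical names, not confirmed by this README.
root = Path(tempfile.mkdtemp()) / "M3D_Cap"
case = root / "ct_case" / "000006"
(case / "Axial_non_contrast").mkdir(parents=True)
(case / "Axial_non_contrast" / "0.jpeg").touch()
(case / "text.txt").write_text("Example report describing the study.")

# Pair every image sub-folder with the case-level report text.
pairs = []
for case_dir in sorted((root / "ct_case").iterdir()):
    report = (case_dir / "text.txt").read_text()
    for image_dir in sorted(p for p in case_dir.iterdir() if p.is_dir()):
        pairs.append((image_dir.name, report))

print(pairs)
```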

### Dataset Download

The total size of the dataset is approximately 978G.
Please note that the contents of the data_examples folder are only examples and do not need to be downloaded.
The complete dataset is located in the M3D_Cap folder.

#### Clone with HTTP
```bash
git clone https://huggingface.co/datasets/GoodBaiBai88/M3D-Cap
```

#### SDK Download
```python
from datasets import load_dataset
dataset = load_dataset("GoodBaiBai88/M3D-Cap")
```

#### Manual Download
Manually download all files from the dataset; we recommend using a batch download tool.

### Dataset Loading Method
#### 1. Preprocessing
Preprocess the dataset with m3d_cap_data_prepare.py, which: stacks the 2D slices
in each folder into a 3D image named after the image folder (retaining plane and
phase information) and saved as an `npy` file; normalizes and crops the images;
and filters and extracts high-quality descriptions from the text reports.

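A minimal sketch of the stacking, normalization, and cropping steps, using synthetic slices and an illustrative crop size (the real logic, including the plane/phase naming, lives in m3d_cap_data_prepare.py):

```python
import os
import tempfile

import numpy as np

# Synthetic stand-ins for the 2D slices read from one image folder.
slices = [np.random.rand(256, 256).astype(np.float32) for _ in range(32)]

# Stack the slices into one 3D volume: (depth, height, width).
volume = np.stack(slices, axis=0)

# Min-max normalize intensities to [0, 1].
volume = (volume - volume.min()) / (volume.max() - volume.min() + 1e-8)

# Center-crop in-plane to an illustrative target size.
target = 224
h0 = (volume.shape[1] - target) // 2
w0 = (volume.shape[2] - target) // 2
volume = volume[:, h0:h0 + target, w0:w0 + target]

# Save under the image folder's name, as the preprocessing step describes.
out_path = os.path.join(tempfile.mkdtemp(), "Axial_non_contrast.npy")
np.save(out_path, volume)
print(volume.shape)  # (32, 224, 224)
```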
#### 2. Build Dataset
We provide examples for building the Dataset:

```python
class CapDataset(Dataset):
    ...
```

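The class body is elided above; as a hedged, dependency-free sketch of the pattern such a dataset typically follows (hypothetical file names, and omitting the `torch.utils.data.Dataset` base class so the snippet stays self-contained), each `__getitem__` returns one (image, text) pair:

```python
import os
import tempfile

import numpy as np

class MiniCapDataset:
    """Minimal caption-dataset sketch: one (image, text) pair per entry.

    Not the real CapDataset -- just an illustration of the interface.
    """
    def __init__(self, root, entries):
        self.root = root
        self.entries = entries  # list of {"image": path, "text": path}

    def __len__(self):
        return len(self.entries)

    def __getitem__(self, idx):
        entry = self.entries[idx]
        image = np.load(os.path.join(self.root, entry["image"]))
        with open(os.path.join(self.root, entry["text"])) as f:
            text = f.read()
        return {"image": image, "text": text}

# Tiny synthetic example (hypothetical file names).
root = tempfile.mkdtemp()
np.save(os.path.join(root, "vol.npy"), np.zeros((4, 8, 8), dtype=np.float32))
with open(os.path.join(root, "report.txt"), "w") as f:
    f.write("No acute abnormality.")

ds = MiniCapDataset(root, [{"image": "vol.npy", "text": "report.txt"}])
sample = ds[0]
print(len(ds), sample["image"].shape, sample["text"])
```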
### Data Splitting
The entire dataset is split via a JSON file into
`train, validation, test100, test500, test1k, test`, where the test subset contains 2k samples.
Considering testing costs, we provide test sets with different sample sizes,
including 100, 500, 1k, and 2k samples.

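A sketch of consuming such a split file. The key-per-split layout of M3D_Cap.json, and the sample entry inside it, are assumptions for illustration:

```python
import json
import os
import tempfile

# Assumed layout: one key per split, each mapping to a list of samples
# (the entry shown is hypothetical, not taken from the real file).
splits_on_disk = {
    "train": [{"image": "ct_case/000006/Axial_non_contrast.npy",
               "text": "ct_case/000006/text.txt"}],
    "validation": [],
    "test100": [], "test500": [], "test1k": [], "test": [],
}
path = os.path.join(tempfile.mkdtemp(), "M3D_Cap.json")
with open(path, "w") as f:
    json.dump(splits_on_disk, f)

# Load the JSON split file and pick one subset.
with open(path) as f:
    splits = json.load(f)
train_entries = splits["train"]
print(sorted(splits), len(train_entries))
```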

## Dataset Copyright Information