---
size_categories:
- 1K<N<10K
source_datasets:
- original
task_categories:
- image-segmentation
task_ids:
- instance-segmentation
pretty_name: XAMI-dataset
tags:
- COCO format
- Astronomy
- XMM-Newton
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/valid-*
dataset_info:
  features:
  - name: observation id
    dtype: string
  - name: segmentation
    dtype: image
  - name: bbox
    dtype: image
  - name: label
    dtype: string
  - name: area
    dtype: string
  - name: image shape
    dtype: string
  splits:
  - name: train
    num_bytes: 66654394.0
    num_examples: 105
  - name: validation
    num_bytes: 74471782.0
    num_examples: 126
  download_size: 141102679
  dataset_size: 141126176.0
---
# XAMI-dataset
The **Git** repository for this dataset can be found **[here](https://github.com/ESA-Datalabs/XAMI-dataset)**.
The XAMI dataset contains 1000 annotated images of observations from diverse sky regions of the XMM-Newton Optical Monitor (XMM-OM) image catalog. An additional 50 images without annotations are included to help reduce the number of false positives and false negatives that may be caused by complex objects (e.g., large galaxies, clusters, nebulae).
### Artefacts
A particularity of the dataset, compared to everyday images, is the locations where artefacts usually appear.
<img src="https://huggingface.co/datasets/iulia-elisa/XAMI-dataset/resolve/main/plots/artefact_distributions.png" alt="Examples of an image with multiple artefacts." />
Here are some examples of common artefacts in the dataset:
<img src="https://huggingface.co/datasets/iulia-elisa/XAMI-dataset/resolve/main/plots/artefacts_examples.png" alt="Examples of common artefacts in the OM observations." width="400"/>
# Annotation platforms
The images have been annotated using the following platforms:
- [Zooniverse](https://www.zooniverse.org/projects/ori-j/ai-for-artefacts-in-sky-images), where the resulting annotations are not externally visible.
- [Roboflow](https://universe.roboflow.com/iuliaelisa/xmm_om_artefacts_512/), which allows for more interactive and visual annotation tools.
# The dataset format
The dataset is split into train and validation sets and contains annotated artefacts in COCO format for instance segmentation. We use a multilabel stratified k-fold split (**k=4**) to balance class distributions across splits. We work with a single split version (out of the 4) but also provide the means to work with all 4 versions.
Please check [Dataset Structure](Datasets-Structure.md) for a more detailed structure of our dataset in COCO-IS and YOLOv8-Seg format.
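For illustration, below is a minimal sketch of how such a multilabel stratified k-fold split (k=4) can be generated with the `iterative-stratification` package. The package choice, the toy label matrix, and the variable names are assumptions for the example, not the exact script used to produce the released splits.
```python
import numpy as np
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold

# Toy multilabel indicator matrix: one row per image, one column per artefact class
rng = np.random.default_rng(0)
X = np.arange(20).reshape(-1, 1)       # image indices (placeholder features)
y = rng.integers(0, 2, size=(20, 5))   # presence/absence of 5 artefact classes per image

# k=4 folds, stratified so that class distributions stay balanced across splits
mskf = MultilabelStratifiedKFold(n_splits=4, shuffle=True, random_state=0)
for fold, (train_idx, valid_idx) in enumerate(mskf.split(X, y)):
    print(f"fold {fold}: {len(train_idx)} train / {len(valid_idx)} validation images")
```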
# Downloading the dataset
### *(Option 1)* Downloading the dataset **archive** from HuggingFace
- using a Python script:
```python
import os
import json
import zipfile

from huggingface_hub import hf_hub_download

dataset_name = 'xami_dataset'  # the dataset name on Hugging Face
images_dir = '.'               # the output directory for the dataset images

# Download the dataset archive from the Hugging Face Hub
hf_hub_download(
    repo_id="iulia-elisa/XAMI-dataset",  # the Hugging Face repo ID
    repo_type='dataset',
    filename=dataset_name + '.zip',
    local_dir=images_dir,
)

# Unzip the archive
with zipfile.ZipFile(os.path.join(images_dir, dataset_name + '.zip')) as zf:
    zf.extractall(images_dir)

# Read the train JSON annotations file
annotations_path = os.path.join(images_dir, dataset_name, 'train', '_annotations.coco.json')
with open(annotations_path) as f:
    data_in = json.load(f)

# Inspect the first image entry
data_in['images'][0]
```
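Continuing from the snippet above, the COCO-format annotations can also be parsed with `pycocotools` (one common choice, not a requirement of the dataset) to inspect the artefact categories and convert a segmentation into a binary mask:
```python
from pycocotools.coco import COCO

# Load the COCO-format instance segmentation annotations for the train split
coco = COCO(annotations_path)

# Artefact categories defined in the annotation file
categories = coco.loadCats(coco.getCatIds())
print([c['name'] for c in categories])

# Take the first image and its annotations
img_id = coco.getImgIds()[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))

# Convert the first annotation's segmentation into a binary mask (H x W numpy array)
mask = coco.annToMask(anns[0])
print(mask.shape, mask.sum())
```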
or
- using a CLI command:
```bash
huggingface-cli download iulia-elisa/XAMI-dataset xami_dataset.zip --repo-type dataset --local-dir '/path/to/local/dataset/dir'
```
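Since the dataset card declares train and validation configs, the splits should also load directly with the `datasets` library. This is a sketch assuming the default config, with field names taken from the feature list in the YAML header:
```python
from datasets import load_dataset

# Load both splits declared in the dataset card (train / validation)
ds = load_dataset("iulia-elisa/XAMI-dataset")

print(ds)  # DatasetDict with 'train' and 'validation' splits
sample = ds["train"][0]
print(sample["observation id"], sample["label"], sample["image shape"])
sample["segmentation"]  # PIL image of the segmentation mask
```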
### *(Option 2)* Cloning the repository for more visualization tools
```bash
# Github
git clone https://github.com/ESA-Datalabs/XAMI-dataset.git
cd XAMI-dataset
```
<!--
# Dataset Split with SKF (Optional)
- The method below splits the dataset using the pre-generated splits stored in CSV files. This step is useful when training on multiple dataset split versions to gain a more generalised view on metrics.
```python
import pandas as pd

import utils  # helper functions from the XAMI-dataset repository

# Run the multilabel SKF split with the standard k=4 pre-generated fold files
csv_files = ['mskf_0.csv', 'mskf_1.csv', 'mskf_2.csv', 'mskf_3.csv']

for idx, csv_file in enumerate(csv_files):
    mskf = pd.read_csv(csv_file)
    utils.create_directories_and_copy_files(images_dir, data_in, mskf, idx)
``` -->
## Licence
...