## Dataset Summary
This dataset is based on the BraTS2023 dataset and is intended for the multi-domain image-to-image translation task. It contains the 5 middle slices from each NIfTI volume of the BraTS2023 dataset, normalized to the range (-1, 1). All images are stored as `.npy` files and can be loaded with `np.load(FILEPATH).astype(np.float32)`. We provide a training set and a test set containing 6255 and 1095 files per domain, respectively. There are 4 domains, and the files are named accordingly.
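For example, a single slice can be loaded as follows. This is a minimal sketch: the file path is a placeholder for wherever the downloaded files live.

```python
import numpy as np

# Placeholder path: substitute the actual location of a downloaded slice file.
FILEPATH = "path/to/some_slice.npy"

# Slices are stored as arrays normalized to (-1, 1).
image = np.load(FILEPATH).astype(np.float32)
print(image.shape, image.min(), image.max())
```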
It is highly recommended to create a separate validation set from the training set for your applications. We use PyTorch for this, with the following code:
```python
import torch

seed = 97
train_dataset, val_dataset = torch.utils.data.random_split(
    dataset, lengths=(0.9, 0.1), generator=torch.Generator().manual_seed(seed)
)  # `dataset` is the training dataset instance.
```
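Here, `dataset` can be any `torch.utils.data.Dataset` over the training files. A minimal sketch of such a wrapper is shown below; the class name, directory layout, and path are assumptions for illustration, not part of this dataset's official loading code.

```python
import glob
import os

import numpy as np
import torch
from torch.utils.data import Dataset


class BraTSSliceDataset(Dataset):
    """Loads the pre-extracted .npy slices for a single domain.

    The directory layout used here is an assumption; adapt `root`
    and the glob pattern to however you store the files.
    """

    def __init__(self, root):
        self.files = sorted(glob.glob(os.path.join(root, "*.npy")))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        # Slices are already normalized to (-1, 1); add a channel dimension.
        image = np.load(self.files[idx]).astype(np.float32)
        return torch.from_numpy(image).unsqueeze(0)


dataset = BraTSSliceDataset("path/to/train/domain_folder")  # hypothetical path
```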
This dataset is part of a paper that is currently under peer review. We hope it is useful to the community.