---
license: cc-by-nc-sa-4.0
language:
- en
tags:
- action
- segmentation
size_categories:
- 100K<n<1M
---
# CathAction Dataset
CathAction is a large-scale dataset designed to advance catheterization understanding. It comprises annotated frames for catheterization action understanding and collision detection, along with ground-truth masks for catheter and guidewire segmentation.
Please fill out the [download form](https://airvlab.github.io/cathaction/docs/download/) and agree to our license prior to downloading the dataset.
# Dataset Structure
## 1. Catheterization Action Understanding
The CathAction dataset provides annotated frames for catheterization action understanding tasks such as catheterization anticipation and action recognition.
There are five action classes: *advance catheter*, *retract catheter*, *advance guidewire*, *retract guidewire*, and *rotate*.
The dataset is organized into the following folders and files:
- **video_frames/**: Contains extracted video frames for each video.
- **feature_extractions/**: Contains pre-extracted RGB features, extracted using [this code](https://github.com/yjxiong/tsn-pytorch).
- **training.csv**: Ground-truth CSV file for the training split.
- **validation.csv**: Ground-truth CSV file for the validation split.
### Annotation File Structure
The annotation files (`training.csv` and `validation.csv`) contain four columns, with the following structure:
| Column Name | Type | Example | Description |
|---------------------|------------------|--------------|-------------------------------------------------------------------------------------------------|
| `video_id` | string | `video_1` | ID of the video where the action segment is located. |
| `start_frame` | int | `430` | Start frame of the action. |
| `stop_frame` | int | `643` | End frame of the action. |
| `all_action_classes`| list of int(s) | `[1]` | List of numeric IDs for all detected action classes in the segment. |
The frames and pre-extracted RGB features are located in the `video_frames` and `feature_extractions` folders, respectively, and can be generated using [this code](https://github.com/yjxiong/tsn-pytorch).
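As a minimal sketch of how these annotation files could be loaded, assuming the CSVs carry a header row matching the column names above and that `all_action_classes` is serialized as a bracketed list string (e.g. `[1]`); both assumptions should be checked against the actual files:

```python
import ast

import pandas as pd

# Load the action-understanding annotations.
# Assumption: header row with the column names documented above, and
# `all_action_classes` stored as a bracketed list string such as "[1]".
annotations = pd.read_csv("training.csv")

# Parse the list column from its string form into Python lists of ints.
annotations["all_action_classes"] = annotations["all_action_classes"].apply(
    ast.literal_eval
)

# Basic sanity checks: segment lengths and the set of action class IDs.
annotations["num_frames"] = annotations["stop_frame"] - annotations["start_frame"] + 1
print(annotations[["video_id", "start_frame", "stop_frame", "num_frames"]].head())
print(
    "Action classes present:",
    sorted({c for row in annotations["all_action_classes"] for c in row}),
)
```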
### Usage
1. **Catheterization Action Recognition and Anticipation Models**: Use the `start_frame` and `stop_frame` values, along with the ground truth `all_action_classes` in the CSV file, to train models that recognize action segments and anticipate future catheter actions.
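To assemble the input for such models, the frames of each annotated segment can be gathered from `video_frames/`. The sketch below assumes frames for each video live in `video_frames/<video_id>/` and follow a zero-padded filename pattern; the pattern `frame_{:07d}.jpg` is hypothetical and should be adjusted to the actual extracted frames:

```python
from pathlib import Path

import pandas as pd

VIDEO_FRAMES_DIR = Path("video_frames")  # root folder of extracted frames


def segment_frame_paths(row, pattern="frame_{:07d}.jpg"):
    """Return the frame paths for one annotated action segment.

    Assumption: frames for each video live in video_frames/<video_id>/ and are
    named with a zero-padded frame index (hypothetical pattern, adjust as needed).
    """
    video_dir = VIDEO_FRAMES_DIR / row["video_id"]
    return [
        video_dir / pattern.format(frame_idx)
        for frame_idx in range(row["start_frame"], row["stop_frame"] + 1)
    ]


annotations = pd.read_csv("training.csv")
first_segment = annotations.iloc[0]
frames = segment_frame_paths(first_segment)
print(f"{first_segment['video_id']}: {len(frames)} frames, e.g. {frames[0]}")
```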
## 2. Collision Detection
The CathAction dataset is designed for the collision detection task, which involves identifying whether the tip of the catheter or guidewire comes into contact with the blood vessel wall.
The dataset is organized as follows:
- **images/**: Contains images related to collision and normal events.
- **labels/**: Contains an annotation file for each image with bounding boxes and object classes (collision or normal) for the corresponding image.
- **train_phantom.txt**: A text file listing paths to training images and labels for the "phantom" data source in the collision detection task.
- **valid_animal.txt**: A text file listing paths to validation images and labels for the "animal" data source.
- **valid_phantom.txt**: A text file listing paths to validation images and labels for the "phantom" data source.
Each `.txt` file contains a list of image and label paths for its respective category and split (train/validation), enabling easy access and organization for model training and evaluation.
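A minimal sketch of how the split files could be consumed, assuming each non-empty line of a `.txt` file is an image path and that the matching annotation lives under `labels/` with the same filename stem and a `.txt` extension (this pairing convention is an assumption):

```python
from pathlib import Path


def load_split(split_file):
    """Read a split list (e.g. train_phantom.txt) into (image, label) path pairs.

    Assumption: each non-empty line is an image path whose annotation file
    shares the same filename stem under labels/ with a .txt extension.
    """
    pairs = []
    for raw_line in Path(split_file).read_text().splitlines():
        line = raw_line.strip()
        if not line:
            continue
        image_path = Path(line)
        label_path = Path("labels") / (image_path.stem + ".txt")
        pairs.append((image_path, label_path))
    return pairs


splits = {
    "train_phantom": load_split("train_phantom.txt"),
    "valid_phantom": load_split("valid_phantom.txt"),
    "valid_animal": load_split("valid_animal.txt"),
}
for name, pairs in splits.items():
    print(f"{name}: {len(pairs)} samples")
```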
### Usage
1. **Training**: Use `train_phantom.txt` to load training data for the phantom data source.
2. **Validation**: Use `valid_animal.txt` and `valid_phantom.txt` for validating model performance on different data sources, specifically focusing on the 'animal' and 'phantom' data.
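For training or validating a detector on these splits, the per-image label files need to be parsed. The sketch below assumes a YOLO-style layout (one `class x_center y_center width height` line per object, with normalized coordinates); this format is an assumption and should be verified against the actual label files:

```python
from pathlib import Path


def parse_label_file(label_path):
    """Parse one annotation file into (class_id, box) tuples.

    Assumption: YOLO-style lines "class x_center y_center width height" with
    coordinates normalized to [0, 1]; class IDs cover collision and normal.
    """
    objects = []
    for line in Path(label_path).read_text().splitlines():
        parts = line.split()
        if len(parts) != 5:
            continue  # skip empty or malformed lines
        class_id = int(parts[0])
        box = tuple(float(v) for v in parts[1:])  # (x_center, y_center, w, h)
        objects.append((class_id, box))
    return objects


# Example: inspect the first validation sample from the "animal" data source.
first_image = Path("valid_animal.txt").read_text().splitlines()[0].strip()
first_label = Path("labels") / (Path(first_image).stem + ".txt")
print(first_image, parse_label_file(first_label))
```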
For more information, please visit our [webpage](https://airvlab.github.io/cathaction/).
For inquiries or assistance, please contact the authors at [this link](https://airvlab.github.io/cathaction/).
Best regards,
Authors