VISEM-Tracking-graphs - Hugging Face Repository

This Hugging Face repository contains pre-generated graphs for the sperm video dataset VISEM-Tracking (https://huggingface.co/papers/2212.02842). The graphs represent spatial and temporal relationships between sperm cells in a video: spatial edges connect sperm cells within the same frame, while temporal edges connect sperm cells across different frames.

The graphs have been generated with five spatial threshold values: 0.1, 0.2, 0.3, 0.4, and 0.5. The spatial threshold defines the maximum distance at which two nodes are connected by an edge in the graph. The repository contains a separate directory for each spatial threshold.
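
As a rough illustration of what the spatial threshold means, the sketch below connects two detections in the same frame whenever their distance is at or below the threshold. The node ids, coordinates, and attribute names here are hypothetical and only for illustration; the actual graph construction is in the GitHub repository linked below.

import itertools
import math

import networkx as nx

# Hypothetical detections in one frame, given as (node_id, x, y) with
# normalized coordinates. The schema is an assumption, not the dataset's exact format.
detections = [("sperm_0", 0.10, 0.20), ("sperm_1", 0.15, 0.25), ("sperm_2", 0.90, 0.80)]
spatial_threshold = 0.1

frame_graph = nx.Graph()
for node_id, x, y in detections:
    frame_graph.add_node(node_id, x=x, y=y)

# Add a spatial edge whenever two detections are within the threshold distance.
for (id_a, xa, ya), (id_b, xb, yb) in itertools.combinations(detections, 2):
    if math.dist((xa, ya), (xb, yb)) <= spatial_threshold:
        frame_graph.add_edge(id_a, id_b, edge_type="spatial")

print(frame_graph.edges(data=True))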

The source code used to generate the graphs can be found here: https://github.com/vlbthambawita/visem-tracking-graphs

Repository Structure

The repository is structured as follows:

  • spatial_threshold_0.1
  • spatial_threshold_0.2
  • spatial_threshold_0.3
  • spatial_threshold_0.4
  • spatial_threshold_0.5

Inside each spatial_threshold_X directory, you will find:

  • frame_graphs: A directory containing individual frame graphs as GraphML files.
  • video_graph.graphml: A GraphML file containing the complete video graph.
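
The files above can be fetched directly from the Hugging Face Hub. A minimal sketch using the huggingface_hub library, assuming this repository id and the directory names listed above:

from huggingface_hub import snapshot_download

# Download only the graphs generated with spatial threshold 0.1.
# The repo id and directory pattern are taken from this card; adjust as needed.
local_dir = snapshot_download(
    repo_id="SimulaMet-HOST/visem-tracking-graphs",
    repo_type="dataset",
    allow_patterns=["spatial_threshold_0.1/*"],
)
print(local_dir)

snapshot_download returns the local path of the downloaded snapshot; individual files can also be fetched with hf_hub_download.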

Usage

To use the graphs in this repository, you need to:

  1. Download the desired graph files (frame graphs or video graph) for the spatial threshold of your choice.
  2. Load the graphs using a graph library such as NetworkX in Python:
import networkx as nx

# Load a frame graph
frame_graph = nx.read_graphml('path/to/frame_graph_X.graphml')

# Load the video graph
video_graph = nx.read_graphml('path/to/video_graph.graphml')
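
Once loaded, these are ordinary NetworkX graphs, so standard NetworkX calls can be used to inspect them. A small sketch, continuing from the snippet above (the attribute keys that get printed depend on what the GraphML files actually store):

# Basic statistics of the loaded video graph
print(video_graph.number_of_nodes(), "nodes")
print(video_graph.number_of_edges(), "edges")

# Inspect a few nodes and edges together with their attributes
for node, attrs in list(video_graph.nodes(data=True))[:5]:
    print(node, attrs)

for u, v, attrs in list(video_graph.edges(data=True))[:5]:
    print(u, v, attrs)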

If you use this data, please cite the paper: https://www.nature.com/articles/s41597-023-02173-4
