

Fast-UMI: A Scalable and Hardware-Independent Universal Manipulation Interface

Welcome to the official repository of FastUMI!


Project Page | Hugging Face Dataset | PDF (Early Version) | PDF (TBA)


Physical prototypes of the Fast-UMI system



🔥 News

  • [2024-12] We released Data Collection Code and Dataset.

🏠 How to Collect Data

The full data collection pipeline, including instructions and code, is available on our GitHub repository.

📦 How to Use the Dataset

Because Hugging Face limits individual files to 50 GB, the dataset has been split into smaller parts. After downloading, merge the parts to reconstruct the original archive.

📚 Dataset Structure

Purpose: Each HDF5 file corresponds to a single episode and encapsulates both observational data and actions. Below is the hierarchical structure of the HDF5 file:

episode_<idx>.hdf5
β”œβ”€β”€ observations/
β”‚   β”œβ”€β”€ images/
β”‚   β”‚   └── <camera_name_1> (Dataset)
β”‚   └── qpos (Dataset)
β”œβ”€β”€ action (Dataset)
└── attributes/
    └── sim = False

Attributes: sim

  • Type: Boolean
  • Value: False
  • Description: Indicates whether the data was recorded in simulation (True) or real-world (False).

Groups and Datasets:

  • observations/images/
    • Description: Stores image data from the camera.
    • Datasets:
      • front
        • Type: Dataset containing image arrays.
        • Shape: (num_frames, height=1920, width=1080, channels=3)
        • Data Type: uint8
        • Compression: gzip, compression level 4.
  • observations/qpos
    • Type: Dataset
    • Shape: (num_timesteps, 7)
    • Description: Stores position and orientation data for each timestep.
    • Columns: [Pos X, Pos Y, Pos Z, Q_X, Q_Y, Q_Z, Q_W]
  • action (top level)
    • Type: Dataset
    • Shape: (num_timesteps, 7)
    • Description: Stores the action for each timestep. In the data collection script, actions mirror the qpos data.
    • Columns: [Pos X, Pos Y, Pos Z, Q_X, Q_Y, Q_Z, Q_W]

📂 A. Splitting Data

The data is split to ensure each part remains below the 50GB limit. The splitting process divides large .tar.gz files into smaller chunks.

Splitting Overview:

  • Method: Use file splitting tools or commands to divide large files into manageable parts.
  • Example Tool: split command in Unix-based systems.

Example Command:

split -b 8G FastUMI_Data.tar.gz FastUMI_Data.tar.gz.part-

This command splits FastUMI_Data.tar.gz into 8GB parts with filenames starting with FastUMI_Data.tar.gz.part-.
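
For systems without the Unix split command, the same operation can be sketched in pure Python. This is an assumption-laden equivalent, not part of the release tooling: it streams the input in 1 MB buffers and uses numeric suffixes (part-000, part-001, ...) rather than split's default alphabetic ones; both sort correctly for a later cat merge.

```python
# Sketch: split a large file into fixed-size parts without Unix split.
# Streams in small buffers so an 8 GB chunk never sits in memory at once.
import os


def split_file(path, chunk_size, buffer_size=1024 * 1024):
    """Split path into path.part-000, path.part-001, ... of chunk_size bytes."""
    parts = []
    index = 0
    with open(path, "rb") as infile:
        while True:
            written = 0
            part = f"{path}.part-{index:03d}"
            with open(part, "wb") as outfile:
                while written < chunk_size:
                    block = infile.read(min(buffer_size, chunk_size - written))
                    if not block:
                        break
                    outfile.write(block)
                    written += len(block)
            if written == 0:
                os.remove(part)  # no data left; drop the empty part
                break
            parts.append(part)
            index += 1
    return parts
```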

💡 B. Merging Data

After downloading the split files, users need to merge them to reconstruct the original dataset.

Merging Instructions:

  1. Navigate to the Download Directory:

    cd path_to_downloaded_files
    
  2. Merge Files Using cat:

    Use the cat command to concatenate the split parts. Replace filename.tar.gz.part-001, filename.tar.gz.part-002, etc., with your actual file names.

    cat filename.tar.gz.part-* > filename.tar.gz
    

    Example:

    cat FastUMI_Data.tar.gz.part-* > FastUMI_Data.tar.gz
    
  3. Alternatively, Use the Provided Python Script to Automate Merging:

    Save the following script as merge_files.py:

    import os
    import glob
    
    def merge_files(part_pattern, output_file):
        """
        Merges split file parts into a single file.
        
        :param part_pattern: Pattern matching the split file parts, e.g., "filename.tar.gz.part-*"
        :param output_file: Name of the output merged file, e.g., "filename.tar.gz"
        """
        parts = sorted(glob.glob(part_pattern))
        if not parts:
            raise FileNotFoundError(f"No parts found for pattern: {part_pattern}")
        
        with open(output_file, 'wb') as outfile:
            for part in parts:
                print(f"Merging {part} into {output_file}")
                with open(part, 'rb') as infile:
                    while True:
                        chunk = infile.read(1024 * 1024)  # 1MB
                        if not chunk:
                            break
                        outfile.write(chunk)
        print(f"Merge completed: {output_file}")
    
    if __name__ == "__main__":
        import argparse
    
        parser = argparse.ArgumentParser(description="Merge split file parts into a single file.")
        parser.add_argument('--pattern', type=str, required=True, help='Pattern of split file parts, e.g., "filename.tar.gz.part-*"')
        parser.add_argument('--output', type=str, required=True, help='Name of the output merged file, e.g., "filename.tar.gz"')
        
        args = parser.parse_args()
        
        merge_files(args.pattern, args.output)
    

    Usage:

    1. Run the Merging Script:

      python merge_files.py --pattern "filename.tar.gz.part-*" --output "filename.tar.gz"
      

      Replace filename.tar.gz.part-* and filename.tar.gz with your actual file name pattern and desired output file name.

    2. Example:

      python merge_files.py --pattern "FastUMI_Data.tar.gz.part-*" --output "FastUMI_Data.tar.gz"
      
  4. Verify the Merged File:

Ensure that the merged file's size equals the sum of the sizes of the downloaded parts. You can use the ls -lh command to check file sizes.

    ls -lh FastUMI_Data.tar.gz
    
  5. Extract the Dataset:

    Once merged, extract the dataset using the tar command:

    tar -xzvf FastUMI_Data.tar.gz
    
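The size check in step 4 can also be automated. A minimal sketch, assuming the part files are still on disk next to the merged archive; file names below are examples:

```python
# Sketch: verify that the merged archive's size equals the sum of the part
# sizes before extracting. A mismatch suggests an incomplete merge or download.
import glob
import os


def verify_merge(part_pattern, merged_file):
    """Return True if merged_file is exactly as large as all parts combined."""
    parts_total = sum(os.path.getsize(p) for p in glob.glob(part_pattern))
    return parts_total == os.path.getsize(merged_file)


# Example: verify_merge("FastUMI_Data.tar.gz.part-*", "FastUMI_Data.tar.gz")
```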

🔧 Usage

After merging and extracting the dataset, you can use it to train and evaluate robotic manipulation models. Detailed methodologies and application examples are available on the Project Page and in the Early Version PDF.

License

This project is licensed under the MIT License.

Contact

For questions or feedback, please reach out to the Yding Team or visit our website.
