---
datasets:
  - nvidia/PhysicalAI-Robotics-mindmap-Franka-Cube-Stacking
  - nvidia/PhysicalAI-Robotics-mindmap-Franka-Mug-in-Drawer
  - nvidia/PhysicalAI-Robotics-mindmap-GR1-Drill-in-Box
  - nvidia/PhysicalAI-Robotics-mindmap-GR1-Stick-in-Bin
---

Model Overview

Description:

mindmap is a 3D diffusion policy that generates robot trajectories based on a semantic 3D reconstruction of the environment, equipping robots with spatial memory.

Trained models are available on Hugging Face: PhysicalAI-Robotics-mindmap-Checkpoints

License/Terms of Use

NVIDIA Open Model License Agreement

Deployment Geography:

Global

Use Case

The trained mindmap policies allow for quick evaluation of the mindmap concept on selected simulated robotic manipulation tasks.

  • Researchers, Academics, Open-Source Community: AI-driven robotics research and algorithm development.
  • Developers: Integrate and customize AI for various robotic applications.
  • Startups & Companies: Accelerate robotics development and reduce training costs.

Reference(s):

3D Diffuser Actor

Model Architecture:

Architecture Type: Denoising Diffusion Probabilistic Model

Network Architecture:

mindmap is a Denoising Diffusion Probabilistic Model that samples robot trajectories conditioned on sensor observations and a 3D reconstruction of the environment. Images are first passed through a Vision Foundation Model and then back-projected into a pointcloud using the depth image. In parallel, a reconstruction of the scene is built that accumulates metric-semantic information from past observations. The two 3D data sources, the instantaneous visual observation and the reconstruction, are passed to a transformer that iteratively denoises robot trajectories.
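The back-projection step mentioned above is standard pinhole-camera geometry. The following is a minimal sketch, assuming a pinhole model with known intrinsics (fx, fy, cx, cy); it is an illustration, not mindmap's actual implementation:

```python
# Pinhole back-projection: depth image -> camera-frame pointcloud.
# Illustrative sketch only; the intrinsics (fx, fy, cx, cy) are assumed inputs.
import torch

def backproject(depth: torch.Tensor, fx: float, fy: float, cx: float, cy: float) -> torch.Tensor:
    """Lift a [H, W] depth image to a [H*W, 3] pointcloud in the camera frame."""
    h, w = depth.shape
    v, u = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return torch.stack([x, y, z], dim=-1).reshape(-1, 3)
```

Each per-pixel feature from the Vision Foundation Model can then be attached to its corresponding 3D point, yielding the featurized pointcloud that is fed to the transformer.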

This model was developed based on: 3D Diffuser Actor

Number of model parameters: ∼3M trainable, plus ∼100M frozen in the image encoder
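To make the conditioning and iterative denoising concrete, here is a heavily simplified sketch. None of the module names, sizes, or the naive denoising update below come from the mindmap codebase; they are assumptions for illustration only:

```python
# Minimal sketch of the conditional denoising flow described above.
# NOT the mindmap implementation; all names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

FEATURE_DIM = 768           # matches the RADIO feature dimension noted below
TRAJ_LEN, ACT_DIM = 16, 8   # hypothetical prediction horizon / action size

class TrajectoryDenoiser(nn.Module):
    """Transformer that predicts the noise on a trajectory, conditioned on
    featurized 3D points (instantaneous observation + reconstruction)."""
    def __init__(self):
        super().__init__()
        self.point_proj = nn.Linear(3 + FEATURE_DIM, 256)
        self.act_proj = nn.Linear(ACT_DIM, 256)
        self.blocks = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True),
            num_layers=4)
        self.head = nn.Linear(256, ACT_DIM)

    def forward(self, noisy_traj, points, features):
        ctx = self.point_proj(torch.cat([points, features], dim=-1))
        tokens = torch.cat([self.act_proj(noisy_traj), ctx], dim=1)
        out = self.blocks(tokens)[:, :TRAJ_LEN]   # keep the trajectory tokens
        return self.head(out)                     # predicted noise

@torch.no_grad()
def sample_trajectory(model, points, features, steps=50):
    """Iteratively denoise a random trajectory (crude update, for illustration)."""
    traj = torch.randn(1, TRAJ_LEN, ACT_DIM)
    for _ in range(steps):
        eps = model(traj, points, features)
        traj = traj - eps / steps
    return traj
```

A proper DDPM would use the full noise schedule and a timestep embedding; the single-line update here is only meant to show how the 3D conditioning enters the sampler.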

Input:

Input Type(s):

  • RGB: Image frames
  • Geometry: Depth frames converted to 3D pointclouds
  • State: Robot proprioception
  • Reconstruction: Metric-semantic reconstruction represented as a featurized pointcloud

Input Format(s):

  • RGB: float32 in the range [0, 1]
  • Geometry: float32 in world coordinates
  • State: float32 in world coordinates
  • Reconstruction (represented as feature pointcloud):
    • Points: float32 in world coordinates
    • Features: float32

Input Parameters:

  • RGB: [NUM_CAMERAS, 3, HEIGHT, WIDTH] - 512x512 resolution for the provided checkpoints
  • Geometry: [NUM_CAMERAS, 3, HEIGHT, WIDTH] - 512x512 resolution for the provided checkpoints
  • State: [HISTORY_LENGTH, NUM_GRIPPERS, 8] - consisting of end-effector translation, rotation (quaternion, wxyz) and closedness
  • Reconstruction (represented as feature pointcloud):
    • Points: [NUM_POINTS, 3] - NUM_POINTS is 2048 for the provided checkpoints
    • Features: [NUM_POINTS, FEATURE_DIM] - FEATURE_DIM is 768 for the RADIO_V25_B feature extractor used for the provided checkpoints
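For concreteness, the shapes above can be assembled as dummy tensors as follows. NUM_CAMERAS, HISTORY_LENGTH, and NUM_GRIPPERS are assumed example values; the resolution, point count, and feature dimension come from the table above:

```python
import torch

NUM_CAMERAS, HEIGHT, WIDTH = 2, 512, 512   # camera count assumed; 512x512 per the checkpoints
HISTORY_LENGTH, NUM_GRIPPERS = 1, 1        # assumed values for a single-arm setup
NUM_POINTS, FEATURE_DIM = 2048, 768        # per the provided checkpoints (RADIO_V25_B)

rgb = torch.rand(NUM_CAMERAS, 3, HEIGHT, WIDTH)        # float32 in [0, 1]
geometry = torch.randn(NUM_CAMERAS, 3, HEIGHT, WIDTH)  # per-pixel 3D points, world frame
state = torch.zeros(HISTORY_LENGTH, NUM_GRIPPERS, 8)   # [x, y, z, qw, qx, qy, qz, closedness]
state[..., 3] = 1.0                                    # identity quaternion (wxyz)
recon_points = torch.randn(NUM_POINTS, 3)              # reconstruction points, world frame
recon_features = torch.randn(NUM_POINTS, FEATURE_DIM)  # per-point semantic features
```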

Output:

Output Type(s): Robot actions

Output Format: float32

Output Parameters:

  • Gripper: [PREDICTION_HORIZON, NUM_GRIPPERS, 8] - consisting of end-effector translation, rotation (quaternion, wxyz) and closedness
  • Head Yaw: [PREDICTION_HORIZON, 1] - only for humanoid embodiments
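As a sketch of how such an action chunk might be unpacked downstream (PREDICTION_HORIZON and the closedness convention are assumptions; the 8-dim field order follows the description above):

```python
import torch

PREDICTION_HORIZON, NUM_GRIPPERS = 16, 2       # assumed values; two grippers for a humanoid
gripper = torch.zeros(PREDICTION_HORIZON, NUM_GRIPPERS, 8)
head_yaw = torch.zeros(PREDICTION_HORIZON, 1)  # humanoid embodiments only

translation = gripper[..., 0:3]   # end-effector position, world frame
quat_wxyz = gripper[..., 3:7]     # orientation as a wxyz quaternion
closedness = gripper[..., 7]      # gripper closedness (assumed: 0 = open, 1 = closed)
```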

Software Integration:

Runtime Engine(s): PyTorch

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Ampere
  • NVIDIA Blackwell
  • NVIDIA Jetson
  • NVIDIA Hopper
  • NVIDIA Lovelace
  • NVIDIA Pascal
  • NVIDIA Turing
  • NVIDIA Volta

Preferred/Supported Operating System(s):

  • Linux

Model Version(s):

This is the initial version of the model, version 1.0.0.

Training, Testing, and Evaluation Datasets:

Datasets:

The models were trained on 100 (GR1) and 130 (Franka) demonstrations. The evaluation set consisted of 20 distinct demonstrations. Closed-loop testing was performed on 100 demonstrations held out from the training set.

Inference:

Engine: PyTorch

Test Hardware: NVIDIA L40S (Linux)

Model Limitations:

This model is not tested or intended for use in mission-critical applications that require functional safety. Use of the model in such applications is at the user's own risk and sole responsibility, including taking the necessary steps to add needed guardrails or safety mechanisms.

  • Risk: The policy is only effective in the exact simulation environment it was trained on.
    • Mitigation: Retrain the model on new simulation environments.
  • Risk: The policy was never tested on a physical robot and likely only works in simulation.
    • Mitigation: Expand training, testing, and validation on physical robot platforms.

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety & Security, and Privacy Subcards.

Please report security vulnerabilities or NVIDIA AI Concerns here.

Bias

Field Response
Participation considerations from adversely impacted groups protected classes in model design and testing: Not Applicable
Bias Metric (If Measured): Not Applicable
(For GPAI Models) Which characteristic (feature) show(s) the greatest difference in performance?: Not Applicable
(For GPAI Models): Which feature(s) have the worst performance overall? Not Applicable
Measures taken to mitigate against unwanted bias: Not Applicable
(For GPAI Models): If using internal data, description of methods implemented in data acquisition or processing, if any, to address the prevalence of identifiable biases in the training, testing, and validation data: Not Applicable
(For GPAI Models): Tools used to assess statistical imbalances and highlight patterns that may introduce bias into AI models: Not Applicable

Explainability

Field Response
Intended Task/Domain: Robotic Manipulation
Model Type: Denoising Diffusion Probabilistic Model
Intended Users: Roboticists and researchers in academia and industry who are interested in robot manipulation research
Output: Actions consisting of end-effector poses, gripper states and head orientation.
(For GPAI Models): Tools used to evaluate datasets to identify synthetic data and ensure data authenticity: Not Applicable
Describe how the model works: mindmap is a Denoising Diffusion Probabilistic Model that samples robot trajectories conditioned on sensor observations and a 3D reconstruction of the environment.
Name the adversely impacted groups this has been tested to deliver comparable outcomes regardless of: Not Applicable
Technical Limitations & Mitigation: The policy is only effective in the exact simulation environment it was trained on; it was never tested on a physical robot and likely only works in simulation.
Verified to have met prescribed NVIDIA quality standards: Yes
Performance Metrics: Closed loop success rate on simulated robotic manipulation tasks.
Potential Known Risks: The model might be susceptible to rendering changes on the simulation tasks it was trained on.
Licensing: NVIDIA Open Model License Agreement

Safety and Security

Field Response
Model Application Field(s): Robotics
Describe the life-critical impact (if present): Not Applicable
(For GPAI Models): Description of methods implemented in data acquisition or processing, if any, to address other types of potentially harmful data in the training, testing, and validation data: Not GPAI
(For GPAI Models): Description of any methods implemented in data acquisition or processing, if any, to address illegal or harmful content in the training data, including, but not limited to, child sexual abuse material (CSAM) and non-consensual intimate imagery (NCII): Not GPAI
Use Case Restrictions: Abide by NVIDIA Open Model License Agreement
Model and dataset restrictions: The principle of least privilege (PoLP) is applied, limiting access for dataset generation and model development. Dataset access restrictions are enforced during training, and dataset license constraints are adhered to.

Privacy

Field Response
Generatable or reverse engineerable personal data? No
Personal data used to create this model? No
Was consent obtained for any personal data used? Not Applicable
(For GPAI Models): A description of any methods implemented in data acquisition or processing, if any, to address the prevalence of personal data in the training data, where relevant and applicable: Not Applicable
How often is dataset reviewed? Before Release
Is there provenance for all datasets used in training? Yes
Does data labeling (annotation, metadata) comply with privacy laws? Yes
Is data compliant with data subject requests for data correction or removal, if such a request was made? Yes
Applicable Privacy Policy: NVIDIA Privacy Policy