---
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: file_name
    dtype: string
  - name: source_file
    dtype: string
  - name: question
    dtype: string
  - name: question_type
    dtype: string
  - name: question_id
    dtype: int32
  - name: answer
    dtype: string
  - name: answer_choices
    list: string
  - name: correct_choice_idx
    dtype: int32
  - name: image
    dtype: image
  - name: video
    dtype: video
  - name: media_type
    dtype: string
  splits:
  - name: test
    num_bytes: 1132139207159
    num_examples: 98326
  download_size: 1123845484193
  dataset_size: 1132139207159
license: mit
task_categories:
- visual-question-answering
language:
- en
size_categories:
- 10K<n<100K
---
# OpenSeeSimE-Fluid: Engineering Simulation Visual Question Answering Benchmark

## Dataset Summary

OpenSeeSimE-Fluid is a large-scale benchmark dataset for evaluating vision-language models on computational fluid dynamics (CFD) simulation interpretation tasks. It contains approximately 98,000 question-answer pairs across parametrically varied fluid simulations covering turbulent flow, heat transfer, and complex flow patterns.
## Purpose

While vision-language models (VLMs) have shown promise in general visual reasoning, their effectiveness at interpreting specialized engineering simulations remains unknown. This benchmark enables:

- Statistically robust evaluation of VLM performance on CFD visualizations
- Assessment across multiple reasoning capabilities (captioning, reasoning, grounding, relationship understanding)
- Evaluation using different question formats (binary classification, multiple-choice, spatial grounding)
## Dataset Composition

### Statistics

- **Total instances**: ~98,000 question-answer pairs
- **Simulation types**: 5 fluid models (Bent Pipe, Converging Nozzle, Mixing Pipe, Heat Sink, Heat Exchanger)
- **Parametric variations**: 1,024 unique instances per base model (4^5 parameter combinations)
- **Question categories**: Captioning, Reasoning, Grounding, Relationship Understanding
- **Question types**: Binary, Multiple-choice, Spatial grounding
- **Media formats**: static images (1920×1440 PNG) and videos (originally extracted at 200 frames, 40 fps, 5 seconds; some longer videos capture extended fluid flow development)
### Simulation Parameters

Each base model varies across 5 parameters with 4 values each:

- **Bent Pipe**: Bend Angle, Turn Radius, Pipe Diameter, Fluid Viscosity, Fluid Velocity
- **Converging Nozzle**: Pipe Diameter, Front Chamfer Length, Back Chamfer Length, Inner Fillet Radius, Fluid Velocity
- **Mixing Pipe**: Pipe 1 Diameter, Pipe 2 Diameter, Fillet Radius, Fluid 1 Velocity, Fluid 2 Velocity
- **Heat Sink**: Fin Thickness, Sink Length, Fin Spacing, Fin Number, Fluid Velocity
- **Heat Exchanger**: Tube Diameter, Fin Diameter, Fin Thickness, Fin Spacing, Fluid Velocity
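The 4^5 = 1,024 variations per base model amount to a Cartesian product over the five parameters. A minimal sketch of how such a grid can be enumerated (the parameter names and numeric ranges below are illustrative placeholders, not the actual values used to generate the dataset):

```python
import itertools

import numpy as np

# Illustrative parameter ranges for the Bent Pipe model; the real
# simulation ranges are not published in this card.
parameters = {
    "bend_angle_deg":    np.linspace(30.0, 120.0, 4),
    "turn_radius_mm":    np.linspace(50.0, 200.0, 4),
    "pipe_diameter_mm":  np.linspace(20.0, 80.0, 4),
    "viscosity_pa_s":    np.linspace(1e-3, 1e-1, 4),
    "inlet_velocity_ms": np.linspace(0.5, 5.0, 4),
}

# Cartesian product of 5 parameters x 4 linearly spaced values each
# -> 4**5 = 1024 simulation cases per base model.
cases = [dict(zip(parameters, combo))
         for combo in itertools.product(*parameters.values())]

print(len(cases))  # 1024
```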
### Question Distribution

- **Binary Classification**: 40% (yes/no questions about dead zones, symmetry, flow direction, etc.)
- **Multiple-Choice**: 30% (4-option questions about flow regime, axis of symmetry, magnitude ranges, etc.)
- **Spatial Grounding**: 30% (location-based questions with labeled regions A/B/C/D)
## Data Collection Process

### Simulation Generation

- Base models sourced from Ansys Fluent tutorial files
- Parametric automation via PyFluent and PyGeometry interfaces
- Systematic variation across 5 parameters with 4 linearly spaced values
- All simulations solved with validated turbulence models and convergence criteria
### Ground Truth Extraction

Automated extraction eliminates human annotation costs and ensures consistency:

- **Statistical Analysis**: Direct queries on velocity, pressure, and temperature fields
- **Distribution Analysis**: Dead zone detection via velocity magnitude thresholds (1% of maximum)
- **Physics-Based Classification**: Mach number calculations for flow regime classification
- **Spatial Localization**: Color-based region generation with computer vision algorithms

All ground truth is derived from numerical simulation results rather than visual interpretation.
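Two of the extraction steps above can be sketched directly. This is an illustrative reimplementation, not the dataset's actual extraction code: the dead-zone check uses the 1%-of-maximum threshold stated in this card, while the Mach-number regime boundaries (0.3 / 0.8 / 1.2) are conventional textbook values and may differ from the protocol actually used.

```python
import numpy as np


def dead_zone_fraction(velocity_magnitude: np.ndarray,
                       threshold: float = 0.01) -> float:
    """Fraction of cells whose velocity magnitude falls below
    `threshold` times the field maximum (1% by default)."""
    cutoff = threshold * velocity_magnitude.max()
    return float(np.mean(velocity_magnitude < cutoff))


def classify_flow_regime(velocity: float,
                         speed_of_sound: float = 343.0) -> str:
    """Label a flow regime from the Mach number. The regime
    boundaries here are textbook conventions (assumption)."""
    mach = velocity / speed_of_sound
    if mach < 0.3:
        return "incompressible"
    if mach < 0.8:
        return "subsonic"
    if mach < 1.2:
        return "transonic"
    return "supersonic"


# Toy velocity field: max is 10 m/s, so the dead-zone cutoff is 0.1 m/s.
field = np.array([0.0, 0.05, 2.0, 8.0, 10.0])
print(dead_zone_fraction(field))   # 0.4 (2 of 5 cells below cutoff)
print(classify_flow_regime(50.0))  # incompressible (Mach ~0.15)
```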
## Preprocessing and Data Format

### Image Processing

- **Resolution**: 1920×1440 pixels
- **Format**: PNG with lossless compression
- **Viewing orientations**: standardized front, back, left, right, top, bottom, and isometric views
- **Color mapping**: consistent rainbow gradients (red = maximum, blue = minimum)
- **Visualization types**: contour plots (pressure, velocity magnitude) and vector plots (velocity vectors, streamlines)
### Video Processing

- 200 frames at 40 fps (5 seconds duration); some exceptions apply for longer fluid flow development
- Pathlines showing the steady-state flow solution
- H.264 compression at 1920×1440 resolution
### Data Fields

```python
{
    'file_name': str,            # Unique identifier
    'source_file': str,          # Base simulation model
    'question': str,             # Question text
    'question_type': str,        # 'Binary', 'Multiple Choice', or 'Spatial'
    'question_id': int,          # Question identifier (1-20)
    'answer': str,               # Ground truth answer
    'answer_choices': List[str], # Available options
    'correct_choice_idx': int,   # Index of correct answer
    'image': Image,              # PIL Image object (1920×1440)
    'video': Video,              # Video frames
    'media_type': str,           # 'image' or 'video'
}
```
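A lightweight sanity check on the text fields of a record can catch schema mismatches before evaluation. This helper and the example record are illustrative (the field values are invented, and this is not an official validation script for the dataset):

```python
# Expected Python types for the non-media fields of a record.
EXPECTED_SCHEMA = {
    "file_name": str, "source_file": str, "question": str,
    "question_type": str, "question_id": int, "answer": str,
    "answer_choices": list, "correct_choice_idx": int,
    "media_type": str,
}


def check_record(record: dict) -> bool:
    """Return True if the text fields match the schema, the question
    type is one of the three documented values, and the correct-answer
    index points inside the choice list."""
    for field, expected_type in EXPECTED_SCHEMA.items():
        if not isinstance(record.get(field), expected_type):
            return False
    if record["question_type"] not in ("Binary", "Multiple Choice", "Spatial"):
        return False
    return 0 <= record["correct_choice_idx"] < len(record["answer_choices"])


# Hand-built example record (values are illustrative only).
example = {
    "file_name": "bent_pipe_0001_iso.png",
    "source_file": "bent_pipe",
    "question": "Is the flow symmetric about the vertical axis?",
    "question_type": "Binary",
    "question_id": 3,
    "answer": "No",
    "answer_choices": ["Yes", "No"],
    "correct_choice_idx": 1,
    "media_type": "image",
}
print(check_record(example))  # True
```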
### Labels

All labels are automatically generated from simulation numerical results:

- **Binary questions**: "Yes" or "No"
- **Multiple-choice**: Single letter (A/B/C/D) or descriptive option
- **Spatial grounding**: Region label (A/B/C/D) corresponding to labeled visualization locations
## Dataset Splits

- **Test split only**: ~98K instances
- No train/validation splits are provided (this is an evaluation benchmark, not a training resource)
- Representative sampling across all simulation types and question categories
## Intended Use

### Primary Use Cases

- Benchmark evaluation of vision-language models on CFD simulation interpretation
- Capability assessment across visual reasoning dimensions (captioning, spatial grounding, relationship understanding)
- Transfer-learning analysis from general-domain to specialized technical visual reasoning
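Since the card defines a `correct_choice_idx` per record, per-capability results can be aggregated with a simple scorer. The helper below is a hypothetical sketch, not an official evaluation script; `predictions` is assumed to map each record's `file_name` to the model's predicted choice index:

```python
from collections import defaultdict


def accuracy_by_type(records, predictions):
    """Aggregate accuracy per question_type.

    records:     iterables of dicts with at least 'file_name',
                 'question_type', and 'correct_choice_idx' fields
    predictions: dict mapping file_name -> predicted choice index
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for rec in records:
        qtype = rec["question_type"]
        totals[qtype] += 1
        if predictions.get(rec["file_name"]) == rec["correct_choice_idx"]:
            hits[qtype] += 1
    return {t: hits[t] / totals[t] for t in totals}


# Toy records and predictions (illustrative values only).
records = [
    {"file_name": "a", "question_type": "Binary", "correct_choice_idx": 1},
    {"file_name": "b", "question_type": "Binary", "correct_choice_idx": 0},
    {"file_name": "c", "question_type": "Spatial", "correct_choice_idx": 2},
]
preds = {"a": 1, "b": 1, "c": 2}
print(accuracy_by_type(records, preds))  # {'Binary': 0.5, 'Spatial': 1.0}
```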
### Out-of-Scope Use

- Real-time engineering decision-making without expert validation
- Safety-critical applications without human oversight
- Generalization to simulation types beyond fluid dynamics
## Limitations

### Technical Limitations

- **Objective tasks only**: Excludes subjective engineering judgments requiring domain expertise
- **Single physics domain**: Fluid dynamics only (see OpenSeeSimE-Structural for structural mechanics)
- **Ansys-specific**: Visualizations generated using Ansys Fluent rendering conventions
- **Steady-state focus**: Videos show pathlines of steady-state solutions, not transient phenomena
- **2D visualizations**: All inputs are 2D projections of 3D simulations (or 2D cross-sectional planes)
### Known Biases

- **Color scheme dependency**: Questions exploit default color gradient conventions
- **Geometry bias**: Selected simulation types may not represent the full diversity of CFD applications
- **Flow regime bias**: Limited supersonic cases due to parameter range constraints
- **View orientation bias**: Standardized camera positions may not capture all critical flow features
## Ethical Considerations

### Responsible Use

- Models evaluated on this benchmark should **not** be deployed for safety-critical engineering decisions without expert validation
- Automated interpretation should augment, not replace, human engineering expertise
- Users should verify that benchmark performance translates to their specific simulation contexts
### Data Privacy

- The simulations contain no proprietary or confidential engineering data
- No personal information was collected
- Publicly available tutorial files were used as base models
### Environmental Impact

- Dataset generation required significant CPU resources with parallel processing
- CFD simulations are computationally intensive (hours per case on multi-core workstations)
- Consider the environmental cost of large-scale model evaluation on this benchmark
## License

MIT License: free for academic and commercial use with attribution.
## Citation

If you use this dataset, please cite:

```bibtex
@article{ezemba2025opensesime,
  title={OpenSeeSimE: A Large-Scale Benchmark to Assess Vision-Language Model Question Answering Capabilities in Engineering Simulations},
  author={Ezemba, Jessica and Pohl, Jason and Tucker, Conrad and McComb, Christopher},
  year={2025}
}
```
## AI Usage Disclosure

### Dataset Generation

- **Simulation automation**: Python scripts with the Ansys PyFluent interface
- **Ground truth extraction**: Automated computational protocols (no AI involvement)
- **Quality validation**: Expert oversight of automated extraction procedures
- No generative AI was used in dataset creation, labeling, or curation
### Visualization Generation

- Ansys Fluent rendering engine (deterministic, physics-based)
- Standardized color mapping and camera controls
- No AI-based image generation or enhancement
## Contact

- **Authors**: Jessica Ezemba (jezemba@andrew.cmu.edu), Jason Pohl, Conrad Tucker, Christopher McComb
- **Institution**: Department of Mechanical Engineering, Carnegie Mellon University
## Acknowledgments

- Ansys for providing simulation software and tutorial files
- Carnegie Mellon University for computational resources
- Reviewers and domain experts who validated the automated extraction protocols
**Version**: 1.0
**Last Updated**: December 2025
**Status**: Complete and stable