---
license: mit
language:
- en
pretty_name: open-mm-rl
size_categories:
- n<1K
tags:
- chemistry
- physics
- math
- biology
- science
- RL
task_categories:
- question-answering
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: conversation_id
dtype: string
- name: domain
dtype: string
- name: subDomain
dtype: string
- name: author_id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: format
dtype: string
- name: images
list: image
splits:
- name: train
num_bytes: 15515561
num_examples: 40
download_size: 15508201
dataset_size: 15515561
---
Dataset Summary
Open-MM-RL is a multimodal STEM reasoning dataset covering Physics, Mathematics, Biology, and Chemistry. It is designed for problems that require models to interpret visual information and combine it with step-by-step analytical reasoning.
Compared with existing multimodal reasoning benchmarks, Open-MM-RL broadens the evaluation setting beyond standard single-image question answering by including multi-panel and multi-image tasks that require integrating information across more complex visual contexts. Real-life problems are rarely confined to a single image; the necessary information is often fragmented across multiple related images, requiring scientists to reason across them to find a solution.
The dataset includes three multimodal input formats:
- Single-image problems: one image paired with one question.
- Multi-panel problems: a composite or panel-based visual paired with one question.
- Multi-image problems: multiple separate images paired with one question.
These formats increase task complexity by requiring models to reason not only from text, but also across visual layouts, multiple views, and distributed evidence.
Across all formats, problems are constructed to be self-contained, unambiguous, reasoning-intensive, and verifiable, making the dataset useful both as an evaluation benchmark and as a training resource for reasoning-focused models.
A key distinguishing feature of this dataset is its focus on PhD-level STEM problem solving across all three multimodal formats. This makes it possible to assess both advanced subject-matter reasoning and a model's ability to synthesize information across increasingly complex visual inputs.
Unlike scientific figure benchmarks that rely significantly on captions, examples in this dataset are designed to be answered directly from the provided image or images together with the question.
Supported Tasks and Applications
This dataset is intended for settings where reliable answer checking matters. In particular, it is well suited for:
- Outcome-supervised training
- Reinforcement learning for reasoning
- Reward modeling
- Automatic evaluation of multimodal reasoning systems
- Benchmarking frontier model performance on verifiable STEM tasks
Because each example has a deterministic target answer, the dataset supports training and evaluation pipelines that depend on objective correctness rather than subjective preference judgments.
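As a minimal sketch of such a pipeline, assuming the repository id from the citation below and the field names in the schema above (which may vary by release version), the snippet loads the train split with the Hugging Face datasets library and grades an arbitrary predict_fn by normalized exact match; predict_fn is a placeholder for whatever model is being evaluated.

```python
from datasets import load_dataset

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so formatting differences do not count as errors."""
    return " ".join(text.strip().lower().split())

def evaluate(predict_fn, repo_id: str = "TuringEnterprises/Open-MM-RL") -> float:
    """Return normalized-exact-match accuracy of predict_fn over the train split."""
    ds = load_dataset(repo_id, split="train")
    correct = 0
    for example in ds:
        # predict_fn is a stand-in: it receives the question text and the list of
        # images and must return a final-answer string.
        prediction = predict_fn(example["question"], example["images"])
        correct += normalize(prediction) == normalize(example["answer"])
    return correct / len(ds)
```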
Why This Dataset Is Useful
The dataset is designed to occupy a practical middle ground: difficult enough to expose reasoning failures, but structured enough that correctness can be measured automatically. This makes it useful both for benchmarking current models and for training future multimodal reasoning systems.
Its coverage of single-image, multi-panel, and multi-image inputs also makes it possible to study how reasoning performance changes as visual evidence becomes more distributed and structurally complex.
Task Format
The task is to produce a final answer to a self-contained STEM question grounded in the provided visual input.
Each problem consists of:
- A question
- One or more associated images
- A deterministic ground-truth answer
The dataset is focused on answer generation for verifiable STEM reasoning, rather than caption generation, retrieval, or free-form scientific description.
Dataset Structure
Each example typically contains the following components:
| Field | Description |
|---|---|
| question | The text of the STEM reasoning problem. |
| files | The visual input associated with the problem. This may be a single image, a multi-panel image, or multiple separate images. |
| format | The multimodal format label, such as single_image, multi_panel, or multi_image. |
| domain | The scientific domain, such as Physics, Mathematics, Biology, or Chemistry. |
| subDomain | The more specific subdomain within the parent domain. |
| answer | The deterministic ground-truth final answer. |
Exact field names may vary by release version.
Example Instance
{
  "question": "Given the visual input, determine the final value of the requested quantity.",
  "files": ["image_001.png"],
  "format": "single_image",
  "domain": "Physics",
  "subDomain": "High-energy particle physics",
  "answer": "42"
}
For multi-image examples, the files field may contain multiple image paths, as in the illustrative instance below (field values and file names are hypothetical):
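{
  "question": "Using the information in both images, determine the requested quantity.",
  "files": ["image_002a.png", "image_002b.png"],
  "format": "multi_image",
  "domain": "Chemistry",
  "subDomain": "Physical chemistry",
  "answer": "3.2"
}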
Subject Coverage
The dataset spans multiple STEM disciplines:
- Physics
- Mathematics
- Biology
- Chemistry
This cross-domain coverage supports evaluation of both domain-specific reasoning and generalization across scientific problem types. The problems are designed to emphasize analytical reasoning, quantitative problem solving, symbolic manipulation, and integration of visual evidence.
Difficulty Profile
The tasks are designed to reflect advanced STEM reasoning at or near the PhD level. They are intended to require more than surface-level perception or direct extraction from the image, often involving multi-step derivations, symbolic manipulation, quantitative analysis, and synthesis of information across complex visual inputs.
The dataset aims for a learning-efficient regime in which:
- The problems are not so easy that performance saturates.
- The success rate is not so low that all learning signals disappear.
- Difficulty varies across examples and multimodal formats.
- Stronger models can still make measurable progress.
The inclusion of single-image, multi-panel, and multi-image questions creates a richer spread of difficulty and enables more targeted analysis of model strengths and weaknesses.
Problem and Answer Design
Each example is written so that the final response is deterministic and programmatically checkable. The focus is on tasks where evaluation depends on the correctness of the answer rather than subjective judgment.
Typical answer formats include:
- Numerical values
- Symbolic expressions
- Simplified algebraic forms
- Short text
- Identities or derived equations
- Canonical LaTeX outputs
Because the answers are deterministic, the dataset is especially appropriate for workflows that need stable reward signals or automatic grading at scale.
Verifiability and Automatic Evaluation
A core design principle of this dataset is objective verifiability.
Each problem is constructed so that:
- The final answer is deterministic.
- Correctness can be evaluated programmatically.
- No subjective interpretation is required.
- There is a clear separation between reasoning process and final outcome.
Depending on the task, answers can be evaluated using:
- Normalized exact match
- Symbolic equivalence checks
- Numerical tolerance thresholds
- Unit-aware validation, where applicable
This makes the dataset well suited for reproducible benchmarking and scalable automated evaluation.
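As one illustration of how these checks might be combined, the sketch below tries a normalized exact match first, then a numeric comparison with a relative tolerance, and finally a symbolic-equivalence test using SymPy. The helper name, tolerance, and fallback order are illustrative choices and are not shipped with the dataset; unit-aware validation is omitted for brevity.

```python
import math
import sympy

def is_correct(prediction: str, target: str, rel_tol: float = 1e-3) -> bool:
    """Heuristic verifier: exact match, then numeric tolerance, then symbolic equivalence."""
    pred, gold = prediction.strip().lower(), target.strip().lower()

    # 1. Normalized exact match covers short-text and canonical-form answers.
    if " ".join(pred.split()) == " ".join(gold.split()):
        return True

    # 2. Numeric answers: compare within a relative tolerance.
    try:
        return math.isclose(float(pred), float(gold), rel_tol=rel_tol)
    except ValueError:
        pass

    # 3. Symbolic answers: check that the difference simplifies to zero.
    try:
        return sympy.simplify(sympy.sympify(pred) - sympy.sympify(gold)) == 0
    except (sympy.SympifyError, TypeError):
        return False
```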
Data Creation and Quality Control
All problems are developed and reviewed with an emphasis on scientific correctness and benchmark reliability. Tasks undergo two rounds of expert review by PhD-level domain specialists.
Review criteria include:
- Correctness of the prompt
- Correctness of the target answer
- Clarity of the reasoning path implied by the problem
- Absence of ambiguity in interpretation
- Originality and resistance to trivial lookup
- Identification of cases where models fail because of reasoning errors rather than annotation issues
This process is intended to ensure that dataset difficulty comes from the task itself, not from noisy labeling or underspecified questions.
Relevance for Reinforcement Learning
The dataset is particularly useful for reasoning-oriented reinforcement learning because each example supports an objective reward signal.
A simple setup is:
- Input: question and associated image(s)
- Model output: final predicted answer
- Reward: computed from agreement with the ground truth
Possible reward schemes include (a minimal sketch follows this list):
- Full credit for exact or equivalent answers
- No credit for incorrect answers
- Optional partial credit for numerically close or symbolically related outputs
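A minimal sketch of such a reward function is shown below. It assumes the illustrative is_correct verifier from the Verifiability section (any checker with the same signature would do); the looser numeric tolerance and the 0.2 partial-credit weight are arbitrary choices, not part of the dataset.

```python
import math

def reward(prediction: str, target: str, partial_credit: float = 0.2) -> float:
    """Map a model's final answer to a scalar reward for outcome-supervised or RL training."""
    # Full credit for an exact or equivalent final answer.
    if is_correct(prediction, target):
        return 1.0
    # Optional partial credit for numerically close answers.
    try:
        if math.isclose(float(prediction), float(target), rel_tol=1e-2):
            return partial_credit
    except ValueError:
        pass
    # No credit otherwise.
    return 0.0
```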
This structure supports training approaches where progress depends on measurable correctness rather than preference judgments. It is therefore a natural fit for:
- Policy optimization
- Reward-guided fine-tuning
- Outcome-supervised learning
- Iterative self-improvement pipelines
Intended Uses
This dataset is intended for:
- Benchmarking multimodal STEM reasoning systems
- Evaluating reasoning performance under verifiable answer supervision
- Reinforcement learning and outcome-supervised training
- Reward modeling and automated grading research
- Studying failure modes across single-image, multi-panel, and multi-image settings
Out-of-Scope Uses
This dataset is not designed for:
- Open-ended caption generation
- Subjective evaluation of scientific writing quality
- Conversational tutoring or pedagogical dialogue assessment
- Retrieval-based figure understanding using captions or external metadata
- Broad real-world safety judgments or non-verifiable open-ended reasoning
Because the dataset emphasizes deterministic final answers, it is less informative for tasks that require subjective interpretation or unconstrained explanation quality.
Limitations
Open-MM-RL is intentionally focused on verifiable STEM reasoning. As a result:
- It may not measure open-ended explanatory quality.
- It may not capture all aspects of scientific communication.
- It may not evaluate tutoring ability or interactive reasoning.
- It is not intended as a complete measure of general scientific intelligence.
- Automatic grading may require task-specific normalization for symbolic, numeric, or unit-bearing answers.
The dataset is best interpreted as a benchmark for final-answer correctness under multimodal STEM reasoning constraints.
Ethical Considerations
The dataset is designed for scientific reasoning and model evaluation. It does not intentionally contain personal data, demographic labels, or sensitive personal information.
Users should avoid applying the dataset outside its intended scope, especially for real-world scientific, medical, safety-critical, or educational decisions without additional expert validation.
Planned Extensions
Future versions of the dataset may introduce structured hinting or nudge-based augmentations for especially difficult problems.
The motivation is straightforward: in online reinforcement learning, examples with near-zero success rates often produce little or no useful learning signal. In such cases, lightweight guidance can help convert otherwise unsolved samples into learnable ones without revealing the full solution.
Possible future additions include:
- High-level conceptual hints
- Difficulty-controlled nudges
- Conditional hinting for zero-pass examples
- Augmented rollouts for frontier-level tasks
The goal of these extensions is to preserve the dataset's verifiability while making it more useful for studying how models learn from extremely difficult reasoning problems.
Citation
If you use Open-MM-RL, please cite the dataset as follows:
@dataset{turing_2026_open_mm_rl,
title = {Open-MM-RL: A Multimodal STEM Reasoning Dataset},
author = {
Shukla, Chinmayee and
Patil, Saurabh and
Han, Kihwan and
Tao, Charlotte and
Tager, Tristan and
Ukarde, Tejas Mohan and
Bertollo, Amanda Gollo and
Pande, Seetesh and
Verma, Divya and
Ramakrishnan, Pooja and
Kumari, Surbhi and
Seth, Harshita and
Nazim, Muhammad and
Zia, Muhammad Danish and
Gupta, Rashi and
K S, Tharangini and
Yadav, Yogesh and
Okayim, Paul and
Jangra, Mandeep and
Jhakad, Pooja and
Panda, Biswajit and
Jain, Priya
},
year = {2026},
publisher = {Hugging Face},
url = {https://huggingface.co/datasets/TuringEnterprises/Open-MM-RL/}
}