# ProofWalaDataset
The ProofWalaDataset is a multilingual dataset of formal theorem proving traces collected from multiple interactive theorem prover (ITP) ecosystems. It provides a structured view of proof steps, goals, hypotheses, and theorem names from diverse mathematical and program verification libraries.
This dataset is intended for researchers and practitioners working on:
- Automated theorem proving
- Formal code generation
- Machine learning for logic
- Proof step prediction
- Multi-language transfer in formal systems
 
## Dataset Structure
The dataset is organized into the following ITP families: `lean/`, `coq/`, `GeoCoq/`, `math-comp/`, and `multilingual/` (a cross-formal-language hybrid).
Each family includes the standard splits `train/`, `test/`, and `eval/`, each containing multiple JSON files.
Each JSON file contains a top-level key, `"training_data"`, whose value is a list of proof records.
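As a minimal sketch (the file path below is hypothetical; actual file names vary by family and split), a single split file can be inspected with the standard `json` module:

```python
import json

# Hypothetical path: actual file names differ per family and split.
with open("coq/train/example_split.json", encoding="utf-8") as f:
    data = json.load(f)

records = data["training_data"]      # list of proof records
first = records[0]
print(first["theorem_name"], first["project_id"])
print(first["proof_steps"])          # tactics applied in this record
```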
### Each record contains
| Field | Description |
|---|---|
| `proof_id` | Unique identifier for the proof trace |
| `goal_description` | Optional natural-language description of the proof |
| `start_goals` | List of starting goals (each with a goal and hypotheses) |
| `end_goals` | Final goals after applying the proof steps |
| `proof_steps` | List of applied proof tactics (`inv`, `rewrite`, etc.) |
| `simplified_goals` | Simplified representations of goals (if any) |
| `all_useful_defns_theorems` | Set of useful definitions or theorems (from static analysis) |
| `addition_state_info` | Optional additional metadata about the proof context |
| `file_path` | Source file where the proof appears |
| `project_id` | The ITP project or repository path (e.g., CompCert) |
| `theorem_name` | Name of the theorem being proved |
For convenience, structured fields such as `start_goals[*].goal`, `start_goals[*].hypotheses`, `end_goals[*].goal`, and `end_goals[*].hypotheses` are exposed directly through the Croissant metadata.
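The split files can also be loaded with the Hugging Face `datasets` JSON builder. The sketch below assumes the repository has been downloaded locally and that only proof files match the glob patterns (auxiliary metadata files use a different schema and should be excluded); the paths are illustrative.

```python
from datasets import load_dataset

# Illustrative local paths: adjust to wherever the dataset was downloaded.
# Keep auxiliary metadata files (which use a different schema) out of the glob,
# otherwise Arrow will complain about mismatched schemas.
ds = load_dataset(
    "json",
    data_files={
        "train": "ProofWalaDataset/coq/train/*.json",
        "test": "ProofWalaDataset/coq/test/*.json",
    },
    field="training_data",  # records live under the top-level "training_data" key
)

print(ds)
print(ds["train"][0]["theorem_name"])
```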
## Use Cases
- Pretraining and finetuning LLMs for formal verification
- Evaluating proof search strategies
- Building cross-language proof translators
- Fine-grained proof tactic prediction (see the sketch below)
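A minimal sketch of the last use case, assuming a simple pairing scheme (not an official preprocessing recipe): each record's starting goal states are joined into a prompt string and paired with each proof step as the prediction target.

```python
def to_tactic_pairs(record):
    """Turn one proof record into (goal-state text, next-tactic) pairs.

    Illustrative pairing only: the hypotheses and goal of every starting
    goal are joined into a single state string, and that state is paired
    with each proof step of the record in order.
    """
    state_lines = []
    for goal in record.get("start_goals", []):
        state_lines.extend(goal.get("hypotheses", []))
        state_lines.append("|- " + goal["goal"])
    state = "\n".join(state_lines)
    return [(state, step) for step in record.get("proof_steps", [])]

# Usage with a record parsed as shown above:
# pairs = to_tactic_pairs(records[0])
```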
 
## Format
- Data format: JSON
- Schema described via Croissant metadata (`croissant.json`)
- Fully validated using `mlcroissant`
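A minimal validation/loading sketch with `mlcroissant` follows; the metadata path and record-set name here are assumptions, so check `croissant.json` for the actual record-set identifiers.

```python
import mlcroissant as mlc

# Assumed path to the Croissant metadata and record-set name;
# see croissant.json for the real record-set identifiers.
dataset = mlc.Dataset(jsonld="croissant.json")

for i, record in enumerate(dataset.records(record_set="default")):
    print(record)
    if i >= 2:  # peek at a few records only
        break
```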
 