Update dataset card for MultiOOP: A Multi-Language Object-Oriented Programming Benchmark
#2, opened by nielsr (HF Staff)

README.md CHANGED
---
language:
- en
license: mit
size_categories:
- 1K<n<10K
task_categories:
- text-generation
pretty_name: MultiOOP Benchmark
tags:
- code
- dataset
- object-oriented-programming
- code-generation
- benchmark
- multi-language
- python
- php
- cpp
- csharp
- java
- javascript
---

# MultiOOP: A Multi-Language Object-Oriented Programming Benchmark for Large Language Models

## Dataset Description

- **Repository:** [GitHub Repository](https://github.com/alphadl/OOP-eval)
- **Paper:** [A Multi-Language Object-Oriented Programming Benchmark for Large Language Models](https://huggingface.co/papers/2509.26111)

### Dataset Summary

MultiOOP is a multi-language object-oriented programming benchmark designed to establish fair and robust evaluation of intelligent code generation by large language models (LLMs). It addresses major language imbalances in existing benchmarks by covering six popular programming languages: Python, PHP, C++, C#, Java, and JavaScript. The benchmark features 267 tasks per language, totaling 1602 unique tasks, and extends an existing single-language OOP benchmark to a multilingual setting. MultiOOP includes an automated framework for augmenting test cases and introduces the `pass@o` metric to specifically quantify LLMs' understanding of core object-oriented programming concepts. Tasks span three difficulty levels: Simple-level OOP, Moderate-level OOP, and Difficult-level OOP.

### Supported Tasks and Leaderboards

The dataset supports object-oriented code generation and its evaluation for large language models (LLMs). It is designed to assess an LLM's ability to understand and generate code that encapsulates core OOP concepts across multiple programming languages. Evaluation is typically performed using metrics such as `pass@k` and the specialized `pass@o` for object-oriented understanding.
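For reference, the sketch below shows the standard unbiased `pass@k` estimator from the code-generation literature; treating it as MultiOOP's exact formulation is an assumption, and the authoritative definition of `pass@o` is given in the paper.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: n samples generated, c of them passed all tests.

    Returns the probability that at least one of k samples drawn
    uniformly without replacement from the n is correct.
    """
    if n - c < k:
        return 1.0  # every size-k subset contains a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical numbers: 10 generations for a task, 4 passed.
print(round(pass_at_k(n=10, c=4, k=1), 3))  # 0.4
```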

### Languages

The MultiOOP benchmark problems are available in six popular programming languages:

- Python
- PHP
- C++
- C#
- Java
- JavaScript

The natural language descriptions for the tasks, including comments and docstrings, are in English.

## Dataset Structure
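A minimal loading sketch; the `"oop"` dataset id is carried over from the original card's snippet and may differ from the hosted repository id:

```python
from datasets import load_dataset

# "oop" follows the original card; substitute the hosted repo id if needed.
dataset = load_dataset("oop")
print(dataset)
```

This prints the structure below: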
```
DatasetDict({
    test: Dataset({
        features: ['task_id', 'question', 'canonical_solution', 'test_list', 'test_function', 'entry_point', 'test_matching', 'test_match_function'],
        num_rows: 1602  # 267 tasks * 6 languages
    })
})
```

### Data Instances

#### Example for MultiOOP benchmark (Python)

```
{
    'task_id': 'OOP/0',
    'question': 'First, write a **WDS** class using the Python language. Then, within the WDS class, create a public function called **without_duplicates** to implement finding the length of the longest substring in a given string **s** that does not contain any duplicate characters.',
    'test_function': 'def test_run(content1):\
        return WDS().without_duplicates(content1)',
    'test_list': [
        'assert candidate("abcabcbb")==3',
        'assert candidate("bbbbb")==1',
        'assert candidate("pwwkew")==3'],
    'entry_point': 'test_run',
    'test_matching': 'assert candidate([["class WDS", "def without_duplicates"]]) == True',
    'test_match_function': 'def matching_function(content):\
        def run_match(text):\
            for task in text:\
                if task not in str_content:\
                    return False\
            return True\
        len_cont = len(content)\
        if len_cont==1 and run_match(content[0]) == True:\
            return True\
        elif (len_cont==2 and run_match(content[0]) == True) or (len_cont==2 and run_match(content[1]) == True):\
            return True\
        else:\
            return False'
}
```
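To make the fields concrete, here is a minimal sketch of how a harness could run the functional tests for one Python sample; the `run_task` helper is illustrative, not the official evaluator.

```python
def run_task(sample: dict, model_solution: str) -> bool:
    """Illustrative only: execute a generated solution against test_list."""
    env: dict = {}
    exec(model_solution, env)            # defines the class, e.g. WDS
    exec(sample['test_function'], env)   # defines the driver, e.g. test_run
    candidate = env[sample['entry_point']]
    try:
        for assertion in sample['test_list']:
            exec(assertion, {'candidate': candidate})
        return True
    except Exception:
        return False
```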

### Data Fields

- `task_id`: Identifier for the data sample (e.g., 'Python/OOP/0', 'Java/OOP/0').
- `question`: Natural language description of the programming task.
- `canonical_solution`: The ground-truth solution to the programming task.
- `test_function`: The driver function used to run the tests against the generated code.
- `test_list`: A list of assertions used to verify the functional correctness of the solution.
- `entry_point`: The entry-point function for test execution.
- `test_matching`: An assertion that verifies adherence to core OOP concepts (e.g., correct class and method definitions).
- `test_match_function`: The function used to perform the conceptual matching test.
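The last two fields drive the OOP-concept check behind `pass@o`. A hedged sketch of that step, assuming the harness exposes the generated source as the `str_content` global that the stored matching function reads (the `run_matching` helper is illustrative):

```python
def run_matching(sample: dict, model_solution: str) -> bool:
    """Illustrative only: check that required class/method signatures appear."""
    env = {'str_content': model_solution}   # the stored code reads this global
    exec(sample['test_match_function'], env)
    candidate = env['matching_function']
    try:
        exec(sample['test_matching'], {'candidate': candidate})
        return True
    except Exception:
        return False
```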

### Data Splits

The MultiOOP dataset has a single test split with 1602 samples in total: 267 distinct tasks for each of the six supported programming languages.
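If the `task_id` prefix encodes the language as the field description suggests (e.g., 'Python/OOP/0'), the split can be sanity-checked as below; the prefix convention is an assumption, since the instance above shows a bare 'OOP/0' id.

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("oop")  # id carried over from the original card
# Hypothetical: tally tasks per language via the task_id prefix.
counts = Counter(tid.split("/")[0] for tid in ds["test"]["task_id"])
print(counts)  # expected: 267 per language
```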

## Dataset Creation

For details on the dataset's creation methodology, task design, the translator used to extend the single-language benchmark, and the definition of the `pass@o` metric, please refer to the original paper: [A Multi-Language Object-Oriented Programming Benchmark for Large Language Models](https://huggingface.co/papers/2509.26111).

### Citation Information

```bibtex
@article{wang2025multioop,
  title={A Multi-Language Object-Oriented Programming Benchmark for Large Language Models},
  author={Wang, Shuai and Ding, Liang and Shen, Li and Luo, Yong and Du, Bo and Tao, Dacheng},
  journal={arXiv preprint arXiv:2509.26111},
  year={2025},
  url={https://huggingface.co/papers/2509.26111}
}
```

### Contributions

Thanks to [@lvwerra](https://github.com/lvwerra) for adding this dataset.