---
license: mit
---

# DEVAI dataset

<p align="center" width="100%">
<img src="dataset_stats.png" align="center" width="70%"/>
</p>

**DEVAI** is a benchmark of 55 realistic AI development tasks. It comes with extensive manual annotations, including a total of 365 hierarchical user requirements.
This dataset provides rich reinforcement signals for better automated AI software development.

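If you want to inspect the tasks programmatically, below is a minimal sketch using the Hugging Face `datasets` library; the repository id and split name are placeholders and may differ from the actual release.

```python
# Minimal sketch: load and inspect DEVAI with the `datasets` library.
# NOTE: the repository id and split name are placeholders; check this
# dataset page for the actual values before running.
from datasets import load_dataset

devai = load_dataset("metauto-ai/DEVAI", split="test")  # hypothetical repo id / split
print(devai)      # overall size and columns
print(devai[0])   # the first task and its annotated requirements
```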
Here is an example of one of our tasks.
<p align="center" width="100%">
<img src="task51.png" align="center" width="60%"/>
</p>

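To make the structure of the annotations concrete, here is a purely illustrative sketch of a task with hierarchical requirements; the field names and contents are hypothetical placeholders, not the actual DEVAI schema.

```python
# Purely illustrative: field names and contents are hypothetical placeholders,
# not the actual DEVAI schema.
task = {
    "query": "Build an image classifier and report its test accuracy.",
    "requirements": [
        {"id": 0, "criterion": "Load the specified dataset.", "dependencies": []},
        {"id": 1, "criterion": "Train a classifier on it.", "dependencies": [0]},
        {"id": 2, "criterion": "Save the test accuracy to a file.", "dependencies": [1]},
    ],
}

# The requirements are hierarchical: each one may depend on earlier ones,
# so it can only be judged satisfied once its dependencies are met.
for req in task["requirements"]:
    deps = ", ".join(f"R{d}" for d in req["dependencies"]) or "none"
    print(f"R{req['id']}: {req['criterion']} (depends on: {deps})")
```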
We apply three state-of-the-art automated software development systems to DEVAI, namely MetaGPT, GPT-Pilot, and OpenHands. The table below shows preliminary statistics.
<p align="center" width="100%">
<img src="developer_stats.png" align="center" width="60%"/>
</p>

We perform a manual evaluation to judge whether each requirement is satisfied by the solutions produced by these systems.
<p align="center" width="100%">
<img src="human_evaluation.png" align="center" width="60%"/>
</p>

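As a reference for how such judgments can be aggregated, here is a small sketch that computes a per-system requirement satisfaction rate; the per-requirement verdicts below are made-up placeholders, not results from our evaluation.

```python
# Sketch: aggregate per-requirement judgments into a satisfaction rate.
# The boolean verdicts below are made-up placeholders, not real results.
judgments = {
    "MetaGPT":   [True, False, True, True],
    "GPT-Pilot": [True, True, False, True],
    "OpenHands": [False, True, True, True],
}

for system, verdicts in judgments.items():
    rate = sum(verdicts) / len(verdicts)
    print(f"{system}: {rate:.1%} of requirements satisfied")
```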
An automated evaluation program that can potentially replace manual evaluation is available in our [GitHub release](https://github.com/metauto-ai/Devai).
Find more details in our [paper]().