walidsobhie-code committed
Commit 20a06fb · 1 Parent(s): b5998ff

feat: add evaluation datasets (HumanEval 50, MBPP 100, Tool scenarios 50)
evaluation/README.md ADDED
# Stack 2.9 Evaluation Datasets

## Files

| File | Count | Description |
|------|-------|-------------|
| `humaneval_50.jsonl` | 50 | HumanEval subset with difficulty ratings |
| `mbpp_100.jsonl` | 100 | MBPP-style programming problems |
| `tool_scenarios_50.jsonl` | 50 | Multi-step tool calling scenarios |

## Format

### HumanEval
```json
{"task_id": "humaneval_1", "difficulty": "medium", "prompt": "def solution(x):\n", "test": "assert solution(5) == 5"}
```

### MBPP
```json
{"task_id": "mbpp_1", "difficulty": "easy", "prompt": "def task(arr):\n", "test": "assert task([1,2,3]) == 6"}
```

### Tool Scenarios
```json
{"task_id": "tool_scenario_1", "difficulty": "hard", "prompt": "Task: Read file and count errors", "tools_needed": ["FileRead", "Grep"], "expected_steps": 3}
```
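For the HumanEval- and MBPP-style records, the `prompt` and `test` fields compose directly into an executable check. A minimal sketch using the HumanEval example above; the completion string stands in for a hypothetical model output, and the pass/fail harness shown here is illustrative, not the repo's own:

```python
import json

# One HumanEval-style record, as shown above.
line = ('{"task_id": "humaneval_1", "difficulty": "medium", '
        '"prompt": "def solution(x):\\n", "test": "assert solution(5) == 5"}')
record = json.loads(line)

# Append a candidate completion (hypothetical model output) to the prompt,
# then run the record's test against the resulting program.
completion = "    return x\n"
program = record["prompt"] + completion + "\n" + record["test"]

namespace = {}
exec(program, namespace)  # raises AssertionError if the candidate fails the test
print(record["task_id"], "passed")
```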

## Usage

```python
from evaluate_model import load_benchmark

benchmarks = load_benchmark("evaluation/humaneval_50.jsonl")
# Run evaluation...
```
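`load_benchmark` comes from this repo's `evaluate_model` module. Since the files are plain JSONL (one JSON object per line), they can also be read with only the standard library; a minimal stand-in, with the name `load_jsonl` chosen for illustration:

```python
import json

def load_jsonl(path):
    """Read a JSONL file: one JSON object per non-empty line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# records = load_jsonl("evaluation/humaneval_50.jsonl")
```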
evaluation/humaneval_50.jsonl ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:a49313a480bdf837da3b5021f7dfdb4ab29780e5ec73cd1ed1acb2d2d1724baa
size 6794
evaluation/mbpp_100.jsonl ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:25a221d5d56f26a47ddc9c65dce47d1861aa2ee40e959463e2ac1b6c1d67e10b
size 13886
evaluation/tool_scenarios_50.jsonl ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:846d42c214cbfd1392a6a86e27503214f38377aaf86d36178f2fa6152c213d48
size 7285