README.md CHANGED
@@ -1,42 +1,97 @@
  ---
- license: apache-2.0
  configs:
- - config_name: default
-   data_files:
-   - split: test
-     path: data/*.json
  ---

- ## Dataset
- The following dataset contains logs of the GitHub action for a failed workflow of some commits,
- followed by the commit that passes the workflow successfully. A full list of the datapoints' content is given below.

- ## Task
- The intended task for this dataset is to fix the repo to pass GitHub actions workflow.
- Note that the dataset does not contain repo snapshot.
- During benchmark, the method clones the necessary repo on the user's local machine.
- The user's model should correct the files of the repo, and benchmark pushes repo to GitHub, returning the result of the workflow run aggregated by all datapoints.

- ## List of items of the datapoint:

  **TODO** Add http links to failed commit
  **TODO** Add file list os changed files

- **id**: unique id of the dp
- **language**: the main language of the repo
- **repo_name**: original repo name
- **repo_owner**: original repo owner
- **head_branch**: name of the original branch that the commit was pushed at
- **contributor**: username of the contributor that committed changes
- **difficulty**: the difficulty of the problem (accessor-based)
- **sha_fail**: sha of the failed commit
- **sha_success**: sha of the successful commit
- **diff:** the content of diff file between failed and successful commits
- **logs**: list of dicts [{"log": log, "step_name": step_name}]:
-   - log: logs of the failed job, particular step
-   - step_name: name of the failed step of the job
- **workflow**: content of the workflow file that has been used to run jobs
- **workflow_filename**: workflow filename that has been used to run jobs
- **workflow_name**: name of the workflow that was run
- **workflow_path**: path to the workflow file that was run
 
  ---
  configs:
+ - config_name: python
+   data_files:
+   - split: test
+     path: data/python/*.json
  ---

+ # 🏟️ Long Code Arena (CI Fixing)
 
 
+ > 🛠️ CI Fixing: given logs of a failed GitHub Actions workflow and the corresponding repository snapshot, fix the
+ > repository contents in order to make the workflow pass.

+ This is the benchmark for the **CI Fixing** task, part of the
+ 🏟️ [**Long Code Arena** benchmark](https://huggingface.co/spaces/JetBrains-Research/long-code-arena).
 
 
 
+ ## How-to
+
+ 1. List all the available configs
+    via [`datasets.get_dataset_config_names`](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.get_dataset_config_names)
+    and choose an appropriate one.
+
+    Current configs: `python`
+
+ 2. Load the data
+    via [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.load_dataset):
+
+    ```python
+    from datasets import load_dataset
+
+    configuration = "TODO"  # select a configuration
+    dataset = load_dataset("JetBrains-Research/lca-ci-fix", configuration, split="test")
+    ```
+
+ Note that all the data is placed in the `test` split.
+ ## Dataset Structure
+
+ This dataset contains logs of the failed GitHub Actions workflows for some commits,
+ followed by the commits that pass the workflows successfully.
+
+ Note that, unlike many other 🏟 Long Code Arena datasets, this dataset doesn't contain repositories.
+
+ * Our [CI Fixing benchmark](todo) (🚧 todo) clones the necessary repos to the user's local machine. The user should run
+   their model to fix the failing CI workflows, and the benchmark will push the commits to GitHub, returning the results
+   of the workflow runs for all the datapoints.
+
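The bullet above describes a clone, fix, and push loop. Below is a minimal sketch of that flow, with every benchmark-specific side effect (cloning, running the user's model, reading the CI status) injected as a stand-in callable; none of these names come from the real benchmark API:

```python
def evaluate(datapoints, clone, fix, push_and_get_status):
    """Sketch of the CI-fixing loop: clone at the failing commit, let the
    model edit files, push, and collect the workflow conclusion per datapoint."""
    results = {}
    for dp in datapoints:
        workdir = clone(dp["repo_owner"], dp["repo_name"], dp["sha_fail"])
        fix(workdir, dp["logs"])  # the user's model edits files here
        results[dp["id"]] = push_and_get_status(workdir)
    return results


# Stub side effects, just to show the shape of the loop.
demo = [{"id": 18, "repo_owner": "scrapy", "repo_name": "scrapy",
         "sha_fail": "0f71221", "logs": []}]
outcome = evaluate(
    demo,
    clone=lambda owner, name, sha: f"/tmp/{owner}-{name}-{sha}",
    fix=lambda workdir, logs: None,
    push_and_get_status=lambda workdir: "success",
)
print(outcome)  # {18: 'success'}
```

In the real benchmark the push triggers an actual GitHub Actions run, so the status is only available after the workflow completes.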
+ ### Datapoint Schema
+
+ **TODO** Add http links to failed commit
+ **TODO** Add file list of changed files
+
+ Each example has the following fields:
+
+ | Field               | Description                                                                                                                  |
+ |---------------------|------------------------------------------------------------------------------------------------------------------------------|
+ | `contributor`       | Username of the contributor that committed the changes                                                                        |
+ | `difficulty`        | Difficulty of the problem (assessor-based)                                                                                    |
+ | `diff`              | Contents of the diff between the failed and the successful commits                                                            |
+ | `head_branch`       | Name of the original branch that the commit was pushed at                                                                     |
+ | `id`                | Unique ID of the datapoint                                                                                                    |
+ | `language`          | Main language of the repo                                                                                                     |
+ | `logs`              | List of dicts with keys `log` (logs of the failed job, particular step) and `step_name` (name of the failed step of the job)  |
+ | `repo_name`         | Name of the original repo (second part of the `owner/name` on GitHub)                                                         |
+ | `repo_owner`        | Owner of the original repo (first part of the `owner/name` on GitHub)                                                         |
+ | `sha_fail`          | SHA of the failed commit                                                                                                      |
+ | `sha_success`       | SHA of the successful commit                                                                                                  |
+ | `workflow`          | Contents of the workflow file                                                                                                 |
+ | `workflow_filename` | The name of the workflow file (without directories)                                                                           |
+ | `workflow_name`     | The name of the workflow                                                                                                      |
+ | `workflow_path`     | The full path to the workflow file                                                                                            |
+
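Since each datapoint is a flat record, the fields above can be consumed as ordinary dict entries. A small sketch over a hypothetical, shortened datapoint (all values illustrative):

```python
# Hypothetical, shortened datapoint following the schema above.
datapoint = {
    "id": 18,
    "language": "Python",
    "difficulty": "1",
    "repo_owner": "scrapy",
    "repo_name": "scrapy",
    "logs": [
        {"log": "##[group]Run pip install -U tox\n<...>",
         "step_name": "checks (3.12, pylint)/4_Run check.txt"},
    ],
}

# `logs` is a list of {"log", "step_name"} dicts, one entry per failed step.
failed_steps = [entry["step_name"] for entry in datapoint["logs"]]
# `repo_owner` and `repo_name` combine into the usual GitHub `owner/name`.
repo = f"{datapoint['repo_owner']}/{datapoint['repo_name']}"
```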
+ ### Datapoint Example
+
  **TODO** Add http links to failed commit
  **TODO** Add file list of changed files
+
+ ```python
+ {'contributor': 'Gallaecio',
+  'diff': 'diff --git a/scrapy/crawler.py b/scrapy/crawler.py\n<...>',
+  'difficulty': '1',
+  'head_branch': 'component-getters',
+  'id': 18,
+  'language': 'Python',
+  'logs': [{'log': '##[group]Run pip install -U tox\n<...>',
+            'step_name': 'checks (3.12, pylint)/4_Run check.txt'}],
+  'repo_name': 'scrapy',
+  'repo_owner': 'scrapy',
+  'sha_fail': '0f71221cf9875ed8ef3400e1008408e79b6691e6',
+  'sha_success': 'c1ba9ccdf916b89d875628ba143dc5c9f6977430',
+  'workflow': 'name: Checks\non: [push, pull_request]\n\n<...>',
+  'workflow_filename': 'checks.yml',
+  'workflow_name': 'Checks',
+  'workflow_path': '.github/workflows/checks.yml'}
+ ```
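The `diff` field stores a standard unified diff, so a list of changed files (one of the TODO items above) can be derived from its `diff --git` headers. A minimal sketch, assuming the headers follow the usual `a/... b/...` form:

```python
import re

def changed_files(diff_text):
    """Collect the post-change (b/) paths from 'diff --git a/... b/...' headers."""
    return re.findall(r"^diff --git a/\S+ b/(\S+)$", diff_text, flags=re.MULTILINE)

diff = (
    "diff --git a/scrapy/crawler.py b/scrapy/crawler.py\n"
    "--- a/scrapy/crawler.py\n"
    "+++ b/scrapy/crawler.py\n"
)
print(changed_files(diff))  # ['scrapy/crawler.py']
```

Paths containing spaces or git's quoting would need a more careful parser; this covers the common case.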
data/{0828c8d.json → python/0828c8d.json} RENAMED
File without changes
data/{0f71221.json → python/0f71221.json} RENAMED
File without changes
data/{102f918.json → python/102f918.json} RENAMED
File without changes
data/{2a104bf.json → python/2a104bf.json} RENAMED
File without changes
data/{2c06ffa.json → python/2c06ffa.json} RENAMED
File without changes
data/{2e41e78.json → python/2e41e78.json} RENAMED
File without changes
data/{434321a.json → python/434321a.json} RENAMED
File without changes
data/{43dd59c.json → python/43dd59c.json} RENAMED
File without changes
data/{72cd8be.json → python/72cd8be.json} RENAMED
File without changes
data/{79f4668.json → python/79f4668.json} RENAMED
File without changes
data/{cc2ad92.json → python/cc2ad92.json} RENAMED
File without changes
data/{cdfe3ca.json → python/cdfe3ca.json} RENAMED
File without changes
data/{d2e06b5.json → python/d2e06b5.json} RENAMED
File without changes
data/{db6550a.json → python/db6550a.json} RENAMED
File without changes
data/{eaba357.json → python/eaba357.json} RENAMED
File without changes
data/{f9f4b05.json → python/f9f4b05.json} RENAMED
File without changes