|
from typing import Optional |
|
|
|
TASKS_PRETTY = { |
|
"library_based_code_generation": "Library-based code generation", |
|
"ci_builds_repair": "CI builds repair", |
|
"project_code_completion": "Project-level code completion", |
|
"commit_message_generation": "Commit message generation", |
|
"bug_localization": "Bug localization", |
|
"module_summarization": "Module Summarization", |
|
} |
|
TASKS_PRETTY_REVERSE = {value: key for key, value in TASKS_PRETTY.items()} |
|
|
|
TASKS_DESCRIPTIONS = { |
|
"library_based_code_generation": """# Library-Based Code Generation\n |
|
|
|
Our Library-Based Code Generation benchmark 🤗 [JetBrains-Research/lca-library-based-code-generation](https://huggingface.co/datasets/JetBrains-Research/lca-library-based-code-generation) includes 150 manually curated instructions asking a model to generate Python code using a particular library. The samples come from 62 Python repositories. All the samples in the dataset are based on reference example programs written by the authors of the respective libraries.
|
|
|
For evaluation, we use two metrics:
|
* `API Recall`: the share of library-specific API calls used in the reference program that appear in the generated code (a rough illustrative sketch follows this list),
|
* `ChrF`: textual similarity between the generated code and the reference program. |
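
A rough illustration of how an `API Recall`-style measure can be computed (the official implementation in our baselines repository additionally restricts the comparison to calls from the target library; the helper functions below are hypothetical):

```python
import ast

def call_names(code: str) -> set:
    # Collect the names of all function/method calls appearing in a piece of Python code.
    names = set()
    for node in ast.walk(ast.parse(code)):
        if isinstance(node, ast.Call):
            func = node.func
            if isinstance(func, ast.Attribute):
                names.add(func.attr)
            elif isinstance(func, ast.Name):
                names.add(func.id)
    return names

def api_recall(reference_code: str, generated_code: str) -> float:
    # Share of call names from the reference program that also appear in the generated code.
    reference_calls = call_names(reference_code)
    if not reference_calls:
        return 1.0
    return len(reference_calls & call_names(generated_code)) / len(reference_calls)
```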
|
|
|
For further details on the dataset and the baselines from 🏟️ Long Code Arena Team, refer to the `library_based_code_generation` folder in [our baselines repository](https://github.com/JetBrains-Research/lca-baselines) or to our preprint (TODO).
|
""", |
|
|
|
"ci_builds_repair": """# CI Builds Repair\n |
|
|
|
Our CI Builds Repair benchmark 🤗 [JetBrains-Research/lca-ci-builds-repair](https://huggingface.co/datasets/JetBrains-Research/lca-ci-builds-repair) includes 77 data points.
|
|
|
We use the Pass@1 metric for evaluation.
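
A minimal sketch of this reading of Pass@1 (assuming one generated fix per data point, so the score is simply the share of data points whose CI workflow passes after the fix is applied):

```python
def pass_at_1(ci_passed):
    # ci_passed: list of booleans, one per data point, True if the repaired workflow passes.
    return sum(ci_passed) / len(ci_passed)
```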
|
|
|
For further details on the dataset and the baselines from 🏟️ Long Code Arena Team, refer to the `ci-builds-repair` folder in [our baselines repository](https://github.com/JetBrains-Research/lca-baselines) or to our preprint.
|
""", |
|
|
|
"project_code_completion": """# Project-Level Code Completion\n |
|
|
|
Our Project-Level Code Completion benchmark 🤗 [JetBrains-Research/lca-project-level-code-completion](https://huggingface.co/datasets/JetBrains-Research/lca-project-level-code-completion) includes four datasets:
|
* `small-context`: 144 data points, |
|
* `medium-context`: 224 data points, |
|
* `large-context`: 270 data points, |
|
* `huge-context`: 296 data points. |
|
|
|
We use the standard Exact Match (EM) metric for one-line code completion.
|
We evaluate Exact Match for different line categories (a toy per-category sketch follows the list):
|
* *infile* – functions and classes are from the completion file;

* *inproject* – functions and files are from the repository snapshot;

* *committed* – functions and classes are from the files added in the same commit as the completion file;

* *common* – functions and classes with common names, e.g., `main`, `get`, etc.;

* *non-informative* – short/long lines, import/print lines, or comment lines;

* *random* – lines that don't fit into any of the previous categories.
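
A toy sketch of per-category Exact Match (the record field names `category`, `prediction`, and `reference` are assumptions for illustration, not the dataset's actual schema):

```python
from collections import defaultdict

def exact_match_by_category(records):
    # records: iterable of dicts with hypothetical keys "category", "prediction", "reference".
    hits, totals = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record["category"]] += 1
        hits[record["category"]] += int(record["prediction"].strip() == record["reference"].strip())
    return {category: hits[category] / totals[category] for category in totals}
```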
|
|
|
For further details on the dataset and the baselines from 🏟️ Long Code Arena Team, refer to the `code_completion` folder in [our baselines repository](https://github.com/JetBrains-Research/lca-baselines) or to our preprint (TODO).
|
""", |
|
|
|
"commit_message_generation": """# Commit Message Generation\n |
|
|
|
Our Commit Message Generation benchmark 🤗 [JetBrains-Research/lca-commit-message-generation](https://huggingface.co/datasets/JetBrains-Research/lca-commit-message-generation) includes 163 manually curated commits from Python projects.
|
|
|
We use the following metrics for evaluation (a usage sketch follows the list):
|
* [BLEU](https://huggingface.co/spaces/evaluate-metric/sacrebleu) |
|
* [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge) |
|
* [ChrF](https://huggingface.co/spaces/evaluate-metric/chrf) |
|
* [BERTScore](https://huggingface.co/spaces/evaluate-metric/bertscore) |
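
A minimal sketch of scoring one prediction with these metrics via the Hugging Face `evaluate` library (the example strings are made up, and this is not the official evaluation script):

```python
import evaluate

predictions = ["Fix off-by-one error in pagination"]
references = ["Fix off-by-one bug in the page iterator"]

bleu = evaluate.load("sacrebleu").compute(predictions=predictions, references=[references])
rouge = evaluate.load("rouge").compute(predictions=predictions, references=references)
chrf = evaluate.load("chrf").compute(predictions=predictions, references=[references])
bertscore = evaluate.load("bertscore").compute(predictions=predictions, references=references, lang="en")
```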
|
|
|
For further details on the dataset and the baselines from 🏟️ Long Code Arena Team, refer to the `commit_message_generation` folder in [our baselines repository](https://github.com/JetBrains-Research/lca-baselines) or to our preprint (TODO).
|
|
|
**Note.** The leaderboard is sorted by the ROUGE-1 metric by default.
|
""", |
|
|
|
"bug_localization": """# Bug Localization\n |
|
|
|
Our Bug Localization benchmark 🤗 [JetBrains-Research/lca-bug-localization](https://huggingface.co/datasets/JetBrains-Research/lca-bug-localization) includes 7,479 bug issue descriptions with information about the pull requests that fix them, collected from Python, Java, and Kotlin projects.
|
|
|
Moreover, 150 data points from the test split were manually verified and can be used to evaluate bug localization approaches.
|
We use information retrieval metrics such as R@k, P@k, and F1-score for evaluation, taking k equal to 1 and 2.
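
A minimal sketch of file-level P@k and R@k, assuming the relevant files are those changed by the fixing pull request (function and argument names are illustrative):

```python
def precision_at_k(retrieved_files, relevant_files, k):
    # Share of the top-k retrieved files that are actually changed by the fixing pull request.
    top_k = retrieved_files[:k]
    return sum(path in relevant_files for path in top_k) / k

def recall_at_k(retrieved_files, relevant_files, k):
    # Share of the relevant files that appear among the top-k retrieved files.
    top_k = retrieved_files[:k]
    return sum(path in relevant_files for path in top_k) / len(relevant_files)
```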
|
""", |
|
|
|
"module_summarization": """# Module Summarization\n |
|
Our Module Summarization benchmark 🤗 [JetBrains-Research/lca-module-summarization](https://huggingface.co/datasets/JetBrains-Research/lca-module-summarization) includes 216 manually curated text files containing documentation of open-source, permissively licensed Python projects.
|
|
|
We use a new metric for evaluation:

* `CompScore`: a metric proposed specifically for this task; more details are available in [our baselines repository](https://github.com/JetBrains-Research/lca-baselines/blob/main/module_summarization/README.md).
|
|
|
For further details on the dataset and the baselines from 🏟️ Long Code Arena Team, refer to the `module_summarization` folder in [our baselines repository](https://github.com/JetBrains-Research/lca-baselines/blob/main/module_summarization/).
|
""", |
|
} |
|
|
|
|
|
def get_submission_text_files_for_task(task_pretty: Optional[str]) -> str: |
|
if not task_pretty: |
|
return "Please, select a specific task to see more detailed instructions regarding submitting files." |
|
|
|
task_id = TASKS_PRETTY_REVERSE[task_pretty] |
|
|
|
if task_id == "commit_message_generation": |
|
return f"""**{task_pretty} Instructions:**\n\n* Please, attach files in [JSONLines format](https://jsonlines.org/). For an example, check the predictions provided by ποΈ Long Code Arena Team in π€ [JetBrains-Research/lca-results](https://huggingface.co/datasets/JetBrains-Research/lca-results/tree/main/commit_message_generation/predictions). Make sure to include `"prediction"` and `"reference"` fields for each example, the rest are optional.""" |
|
|
|
return f"**{task_pretty} Instructions:**\n\n* π§ There are no instructions for the current task yet." |
|
|