# long-code-arena/src/tasks.py
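# Internal task identifiers mapped to the human-readable names displayed in the UI.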
TASKS_PRETTY = {
"commit_message_generation": "Commit Message Generation",
"bug_localization": "Bug Localization on Issue",
"module_to_text": "Module-to-Text",
"library_usage": "Library Usage Examples Generation",
"project_code_completion": "Project-level Code Completion",
"bug_localization_build_logs": "Bug Localization on Build Logs",
}
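# Inverse lookup: human-readable display name -> internal task identifier.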
TASKS_PRETTY_REVERSE = {value: key for key, value in TASKS_PRETTY.items()}
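# Markdown descriptions shown for each task, keyed by the pretty display name.
# (All entries except Commit Message Generation are still placeholders.)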
TASKS_DESCRIPTIONS = {
"Commit Message Generation": """# Commit Message Generation\n
Our Commit Message Generation benchmark 🤗 [JetBrains-Research/lca-cmg](https://huggingface.co/datasets/JetBrains-Research/lca-cmg) includes 163 manually curated commits from Python projects.
We use the following metrics for evaluation:
* [BLEU](https://huggingface.co/spaces/evaluate-metric/sacrebleu)
* [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge)
* [ChrF](https://huggingface.co/spaces/evaluate-metric/chrf)
* [BERTScore](https://huggingface.co/spaces/evaluate-metric/bertscore)
For further details on the dataset and the baselines from the 🏟️ Long Code Arena team, refer to the `commit_message_generation` folder in [our baselines repository](https://github.com/JetBrains-Research/lca-baselines) or to our preprint (TODO).
""",
"Bug Localization on Issue": "cool description for Bug Localization on Issue task",
"Module-to-Text": "cool description for Module-to-Text task",
"Library Usage Examples Generation": "cool description for Library Usage Examples Generation task",
"Project-level Code Completion": "cool description for Project-level Code Completion task",
"Bug Localization on Build Logs": "cool description for Bug Localization on Build Logs task",
}
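

# --- Illustrative usage (a minimal sketch, not part of the original module) ---
# Shows how the three mappings above fit together, and how the CMG metrics
# linked in the description could be computed with the Hugging Face `evaluate`
# library. The sample prediction/reference pair below is made up for illustration.
if __name__ == "__main__":
    # Round-trip between an internal key and its display name.
    pretty = TASKS_PRETTY["commit_message_generation"]
    assert TASKS_PRETTY_REVERSE[pretty] == "commit_message_generation"
    print(TASKS_DESCRIPTIONS[pretty])

    # Metric computation sketch; assumes `evaluate` and the metrics' own
    # dependencies (sacrebleu, rouge_score, bert_score) are installed.
    import evaluate

    predictions = ["Fix typo in README"]
    references = ["Fix a typo in the README file"]

    # sacrebleu and chrf expect one list of references per prediction.
    print(evaluate.load("sacrebleu").compute(predictions=predictions, references=[[r] for r in references]))
    print(evaluate.load("rouge").compute(predictions=predictions, references=references))
    print(evaluate.load("chrf").compute(predictions=predictions, references=[[r] for r in references]))
    print(evaluate.load("bertscore").compute(predictions=predictions, references=references, lang="en"))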