---
annotations_creators:
- expert-generated
language:
- code
language_creators:
- found
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: codequeries
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- code
- code question answering
- code semantic parsing
- codeqa
task_categories:
- question-answering
task_ids:
- extractive-qa
---
# Dataset Card for Codequeries
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [How to use](#how-to-use)
  - [Data Splits and Data Fields](#data-splits-and-data-fields)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [Codequeries](https://huggingface.co/datasets/thepurpleowl/codequeries)
- **Repository:** [Code repo](https://github.com/adityakanade/natural-cubert/)
- **Paper:**
### Dataset Summary
CodeQueries enables exploration of extractive question-answering methodology over code by providing semantic natural language queries as questions and code spans as answers or supporting facts. Given a query, finding the answer/supporting-fact spans in a code context involves analyzing complex concepts and long chains of reasoning. The dataset is provided with five separate settings; details on the settings can be found in the [paper]().
### Supported Tasks and Leaderboards
Query comprehension for code, extractive question answering for code.
### Languages
The dataset contains code contexts from `python` files.
## Dataset Structure
### How to use
The dataset can be used directly with the Hugging Face `datasets` library. You can load and iterate through the dataset for any of the proposed settings with the following code:
```python
import datasets

# instead of `twostep`, the other settings are <ideal/file_ideal/prefix>.
ds = datasets.load_dataset("thepurpleowl/codequeries", "twostep", split=datasets.Split.TEST)
print(next(iter(ds)))

# OUTPUT:
{'query_name': 'Unused import',
 'code_file_path': 'rcbops/glance-buildpackage/glance/tests/unit/test_db.py',
 'context_block': {'content': '# vim: tabstop=4 shiftwidth=4 softtabstop=4\n\n# Copyright 2010-2011 OpenStack, LLC\ ...',
  'metadata': 'root',
  'header': "['module', '___EOS___']",
  'index': 0},
 'answer_spans': [{'span': 'from glance.common import context',
   'start_line': 19,
   'start_column': 0,
   'end_line': 19,
   'end_column': 33}
  ],
 'supporting_fact_spans': [],
 'example_type': 1,
 'single_hop': False,
 'subtokenized_input_sequence': ['[CLS]_', 'Un', 'used_', 'import_', '[SEP]_', 'module_', '\\u\\u\\uEOS\\u\\u\\u_', '#', ' ', 'vim', ':', ...],
 'label_sequence': [4, 4, 4, 4, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, ...],
 'relevance_label': 1
}
```
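Each `twostep` example carries a `relevance_label` for its context block, so a natural first step in a two-step workflow is to keep only blocks marked relevant before predicting spans. The snippet below is a minimal illustrative sketch of that filtering step; the field name comes from the example above, but the filtering strategy itself is not prescribed by the dataset:

```python
import datasets

# Load the `twostep` test split, as above.
ds = datasets.load_dataset("thepurpleowl/codequeries", "twostep", split=datasets.Split.TEST)

# Keep only the blocks marked relevant to their query (relevance_label == 1);
# span prediction would then run over this filtered subset.
relevant = ds.filter(lambda ex: ex["relevance_label"] == 1)
print(f"{len(relevant)} of {len(ds)} blocks are marked relevant")
```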
### Data Splits and Data Fields
Detailed information on the data splits for the proposed settings can be found in the paper.
In general, data splits in all proposed settings have examples with the following fields (a short usage sketch follows the list) -
```
- query_name (query name to uniquely identify the query)
- code_file_path (relative source file path w.r.t. the ETH Py150 corpus)
- context_blocks (code blocks as context, with metadata) [the `prefix` setting doesn't have this field and `twostep` has `context_block` instead]
- answer_spans (answer spans with metadata)
- supporting_fact_spans (supporting-fact spans with metadata)
- example_type (example type: 1 (positive) or 0 (negative))
- single_hop (True or False - query type)
- subtokenized_input_sequence (example subtokens) [the `prefix` setting has the corresponding token ids]
- label_sequence (example subtoken labels)
- relevance_label (relevance label of a block: 0 (not relevant) or 1 (relevant)) [only the `twostep` setting has this field]
```
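As an illustration of how these fields compose, the sketch below tallies positive and negative examples per query and counts how many loaded examples are single-hop. This aggregation is just one way to slice the data, not something the dataset prescribes:

```python
from collections import Counter

import datasets

ds = datasets.load_dataset("thepurpleowl/codequeries", "twostep", split=datasets.Split.TEST)

# Tally positive (example_type == 1) vs negative (example_type == 0)
# examples for each query, and count single-hop examples overall.
per_query = Counter()
single_hop = 0
for ex in ds:
    kind = "positive" if ex["example_type"] == 1 else "negative"
    per_query[(ex["query_name"], kind)] += 1
    single_hop += int(ex["single_hop"])

print(f"{single_hop} of {len(ds)} examples are single-hop")
for (name, kind), n in sorted(per_query.items())[:5]:
    print(f"{name} [{kind}]: {n} examples")
```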
## Dataset Creation
The dataset is created by using the [ETH Py150 Open corpus](https://github.com/google-research-datasets/eth_py150_open) as the source of code contexts. CodeQL was used to obtain the natural language queries and the corresponding answer/supporting-fact spans in the ETH Py150 Open corpus files.
## Additional Information
### Licensing Information
The Codequeries dataset is licensed under the [Apache-2.0](https://opensource.org/licenses/Apache-2.0) license.
### Citation Information
[More Information Needed]