Datasets: thepurpleowl/codequeries
Tasks: Question Answering
Modalities: Text
Sub-tasks: extractive-qa
Languages: code
Size: 100K - 1M
License: apache-2.0

thepurpleowl committed ba0e440 (parent: efdb8a6): Update README.md

README.md CHANGED
@@ -6,7 +6,7 @@ language:
 language_creators:
 - found
 license:
--
+- apache-2.0
 multilinguality:
 - monolingual
 pretty_name: codequeries
@@ -15,10 +15,9 @@ size_categories:
 source_datasets:
 - original
 tags:
-- code
+- neural modeling of code
 - code question answering
-- code semantic
-- codeqa
+- code semantic understanding
 task_categories:
 - question-answering
 task_ids:
@@ -44,17 +43,16 @@ task_ids:
 ## Dataset Description
 
 - **Homepage:** [Codequeires](https://huggingface.co/datasets/thepurpleowl/codequeries)
-- **Repository:** [Code repo](
+- **Repository:** [Code repo](point to the right repo that we will be open sourcing)
 - **Paper:**
 
 ### Dataset Summary
 
-CodeQueries
-by providing semantic natural language queries as question and code spans as answer or supporting fact. Given a query, finding the answer/supporting fact spans in code context involves analysis complex concepts and long chains of reasoning. The dataset is provided with five separate settings; details on the setting can be found in the [paper]().
+CodeQueries is a dataset to evaluate the ability of neural networks to answer semantic queries over code. Given a query and code, a model is expected to identify answer and supporting-fact spans in the code for the query. This is extractive question-answering over code, for questions with a large scope (entire files) and complexity including both single- and multi-hop reasoning. [paper]().
 
 ### Supported Tasks and Leaderboards
 
-
+Extractive question answering for code, semantic understanding of code.
 
 ### Languages
 
@@ -62,12 +60,12 @@ The dataset contains code context from `python` files.
 
 ## Dataset Structure
 
-### How to
-The dataset can directly used with huggingface datasets. You can load and iterate through the dataset for the proposed five settings with the following two lines of code:
+### How to Use
+The dataset can be directly used with the huggingface datasets package. You can load and iterate through the dataset for the proposed five settings with the following two lines of code:
 ```python
 import datasets
 
-#
+# in addition to `twostep`, the other supported settings are <ideal/file_ideal/prefix>.
 ds = datasets.load_dataset("thepurpleowl/codequeries", "twostep", split=datasets.Split.TEST)
 print(next(iter(ds)))
 
@@ -96,7 +94,7 @@ print(next(iter(ds)))
 ### Data Splits and Data Fields
 Detailed information on the data splits for proposed settings can be found in the paper.
 
-In general, data splits in
+In general, data splits in all the proposed settings have examples with the following fields -
 ```
 - query_name (query name to uniquely identify the query)
 - code_file_path (relative source file path w.r.t. ETH Py150 corpus)
@@ -113,13 +111,13 @@ In general, data splits in all prpoposed settings have examples in following fields -
 
 ## Dataset Creation
 
-The dataset is created by using [ETH Py150 Open corpus](https://github.com/google-research-datasets/eth_py150_open) as source for code contexts. To get
+The dataset is created by using [ETH Py150 Open corpus](https://github.com/google-research-datasets/eth_py150_open) as source for code contexts. To get semantic queries and corresponding answer/supporting-fact spans in ETH Py150 Open corpus files, CodeQL was used.
 
 
 ## Additional Information
 ### Licensing Information
 
-
+The CodeQueries dataset is licensed under the [Apache-2.0](https://opensource.org/licenses/Apache-2.0) License.
 
 ### Citation Information
 
README.md after this commit (changed sections):

language_creators:
- found
license:
- apache-2.0
multilinguality:
- monolingual
pretty_name: codequeries

source_datasets:
- original
tags:
- neural modeling of code
- code question answering
- code semantic understanding
task_categories:
- question-answering
task_ids:
## Dataset Description

- **Homepage:** [CodeQueries](https://huggingface.co/datasets/thepurpleowl/codequeries)
- **Repository:** [Code repo](point to the right repo that we will be open sourcing)
- **Paper:**

### Dataset Summary

CodeQueries is a dataset to evaluate the ability of neural networks to answer semantic queries over code. Given a query and code, a model is expected to identify the answer and supporting-fact spans in the code for that query. This is extractive question answering over code, for questions with a large scope (entire files) and a complexity that requires both single- and multi-hop reasoning. Details are in the [paper]().

### Supported Tasks and Leaderboards

Extractive question answering for code; semantic understanding of code.

### Languages

The dataset contains code context from `python` files.
## Dataset Structure

### How to Use
The dataset can be used directly with the Hugging Face `datasets` library. You can load and iterate through the dataset for the proposed five settings with the following two lines of code:
```python
import datasets

# In addition to `twostep`, the other supported settings are <ideal/file_ideal/prefix>.
ds = datasets.load_dataset("thepurpleowl/codequeries", "twostep", split=datasets.Split.TEST)
print(next(iter(ds)))
```
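The example printed above is a dataset record; its exact schema is described in the paper. As a rough illustration of the span-as-answer idea only (the 1-indexed line-span representation and the `span_text` helper below are assumptions made for this sketch, not the dataset's actual encoding), an answer span could be materialized from a code context like this:

```python
# Illustration only: materializing an answer span from its code context.
# The (start_line, end_line) representation is an assumption for this
# sketch; the dataset's real span encoding is documented in the paper.

def span_text(code: str, start_line: int, end_line: int) -> str:
    """Return the source lines covered by a 1-indexed, inclusive line span."""
    lines = code.splitlines()
    return "\n".join(lines[start_line - 1:end_line])

context = "import os\n\ndef f():\n    return os.getcwd()\n"
print(span_text(context, 3, 4))  # the span covering the function definition
```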
### Data Splits and Data Fields
Detailed information on the data splits for the proposed settings can be found in the paper.

In general, data splits in all the proposed settings have examples with the following fields -
```
- query_name (query name to uniquely identify the query)
- code_file_path (relative source file path w.r.t. ETH Py150 corpus)
```
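Once examples are loaded, the documented fields can be used to organize them. This sketch groups hand-built records by `query_name`; only the `query_name` and `code_file_path` field names come from the list above, and all literal values are invented for illustration:

```python
# Sketch: grouping CodeQueries-style records by query name.
# Field names follow the README's field list; the values are made up.
from collections import defaultdict

def group_by_query(examples):
    """Map each query name to the code files it annotates."""
    grouped = defaultdict(list)
    for ex in examples:
        grouped[ex["query_name"]].append(ex["code_file_path"])
    return dict(grouped)

examples = [
    {"query_name": "Unused import", "code_file_path": "pkg/a.py"},
    {"query_name": "Unused import", "code_file_path": "pkg/b.py"},
    {"query_name": "Flask debug", "code_file_path": "app/main.py"},
]
print(group_by_query(examples))
```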
## Dataset Creation

The dataset is created by using the [ETH Py150 Open corpus](https://github.com/google-research-datasets/eth_py150_open) as the source of code contexts. To get the semantic queries and corresponding answer/supporting-fact spans for ETH Py150 Open corpus files, CodeQL was used.


## Additional Information
### Licensing Information

The CodeQueries dataset is licensed under the [Apache-2.0](https://opensource.org/licenses/Apache-2.0) License.

### Citation Information