sps committed
Commit 442aae3
1 parent: 94a6d2b

Update README

Files changed (1)
  1. README.md +0 -109
README.md CHANGED
@@ -27,115 +27,6 @@ task_ids:

  # Dataset Card for Codequeries

- ## Table of Contents
- - [Table of Contents](#table-of-contents)
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [Codequeries](https://huggingface.co/datasets/thepurpleowl/codequeries)
- - **Repository:** [Code repo](https://github.com/adityakanade/natural-cubert/)
- - **Leaderboard:** [Code repo](https://github.com/adityakanade/natural-cubert/)
- - **Paper:**
-
- ### Dataset Summary
-
- CodeQueries enables exploring an extractive question-answering methodology over code: semantic natural-language queries serve as questions, and code spans serve as answers or supporting facts. Given a query, finding the answer/supporting-fact spans in a code context involves analyzing complex concepts and long chains of reasoning. The dataset is provided in five separate settings; details on the settings can be found in the [paper]().
-
- ### Supported Tasks and Leaderboards
-
- Query comprehension for code and extractive question answering for code. Refer to the [paper]().
-
- ### Languages
-
- The dataset contains code contexts from `python` files.
-
- ## Dataset Structure
-
- ### How to use
- The dataset can be used directly with Hugging Face `datasets`. You can load and iterate through the dataset for any of the five proposed settings with the following lines of code:
- ```python
- from datasets import load_dataset
-
- # Note: only the `ideal` setting has a train split; the other settings
- # provide only a test split (see Data Splits below).
- ds = load_dataset("thepurpleowl/codequeries", "<ideal/prefix/sliding_window/file_ideal/twostep>", split="train")
- print(next(iter(ds)))
- # OUTPUT: a dict with the fields described under "Data Splits and Data Fields" below,
- # e.g. {'query_name': ..., 'code_file_path': ..., 'answer_spans': ..., ...}
- ```
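-
- For the settings that ship only a test split (see the table under Data Splits), a minimal usage sketch, assuming the `example_type` field described below, might look like:
-
- ```python
- from datasets import load_dataset
-
- # Load the test split of the `twostep` setting.
- test_ds = load_dataset("thepurpleowl/codequeries", "twostep", split="test")
-
- # Illustrative only: keep the positive examples (example_type == 1).
- positives = test_ds.filter(lambda ex: ex["example_type"] == 1)
- print(len(positives), "positive examples out of", len(test_ds))
- ```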
-
- ### Data Splits and Data Fields
- Detailed information on the data splits for the proposed settings can be found in the paper.
-
- In general, the data splits in all proposed settings have examples with the following fields -
- ```
- - query_name (query name to uniquely identify the query)
- - code_file_path (relative source file path w.r.t. ETH Py150 corpus)
- - context_blocks (code blocks as context with metadata) [`prefix` setting doesn't have this field]
- - answer_spans (answer spans with metadata)
- - supporting_fact_spans (supporting-fact spans with metadata)
- - example_type (1 (positive) or 0 (negative) example type)
- - single_hop (True or False - for query type)
- - subtokenized_input_sequence (example subtokens) [`prefix` setting has the corresponding token ids]
- - label_sequence (example subtoken labels)
- - relevance_label (0 (not relevant) or 1 (relevant) - relevance label of a block)
- ```
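-
- As an illustration of these fields, here is a minimal inspection sketch; it uses only the field names listed above and assumes nothing about the span metadata layout beyond it being printable:
-
- ```python
- from datasets import load_dataset
-
- ds = load_dataset("thepurpleowl/codequeries", "ideal", split="validation")
-
- # Print the query, example polarity, hop type, and raw span metadata
- # for a single example.
- ex = next(iter(ds))
- print(ex["query_name"], ex["example_type"], ex["single_hop"])
- print(ex["answer_spans"])
- print(ex["supporting_fact_spans"])
- ```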
-
- ### Data Splits
-
- |              |train |validation |test |
- |--------------|:----:|:---------:|:---:|
- |ideal         | 9427 | 3270      | 3245|
- |prefix        | -    | -         | 3245|
- |sliding_window| -    | -         | 3245|
- |file_ideal    | -    | -         | 3245|
- |twostep       | -    | -         | 3245|
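-
- A quick way to confirm these counts is to loop over every setting (a sketch; the setting names double as the dataset's config names):
-
- ```python
- from datasets import load_dataset
-
- # Every setting has a 3245-example test split; only `ideal` also has
- # train and validation splits.
- for setting in ["ideal", "prefix", "sliding_window", "file_ideal", "twostep"]:
-     test = load_dataset("thepurpleowl/codequeries", setting, split="test")
-     print(setting, len(test))
- ```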
-
- ## Dataset Creation
-
- The dataset was created using the [ETH Py150 Open corpus](https://github.com/google-research-datasets/eth_py150_open) as the source of code contexts. To obtain the natural-language queries and the corresponding answer/supporting-fact spans in the ETH Py150 Open corpus files, CodeQL was used.
-
- ### Licensing Information
-
- The Codequeries dataset is licensed under the [Apache-2.0](https://opensource.org/licenses/Apache-2.0) License.
-
- ### Citation Information
-
- [More Information Needed]
-
- ### Contributions
-
- Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
- # Dataset Card for Codequeries
-
  ## Table of Contents
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)