arXiv:2406.14497v1 [cs.SE] 20 Jun 2024

# CODERAG-BENCH: Can Retrieval Augment Code Generation?

Zora Zhiruo Wang♠∗ Akari Asai♢∗ Xinyan Velocity Yu♡ Frank F. Xu♠ Yiqing Xie♠ Graham Neubig♠ Daniel Fried♠

♠Carnegie Mellon University ♢University of Washington ♡University of Southern California

https://code-rag-bench.github.io/

# Abstract

While language models (LMs) have proven remarkably adept at generating code, many programs are challenging for LMs to generate using their parametric knowledge alone. Providing external contexts such as library documentation can facilitate generating accurate and functional code. Despite the success of retrieval-augmented generation (RAG) in various text-oriented tasks, its potential for improving code generation remains under-explored. In this work, we conduct a systematic, large-scale analysis by asking: in what scenarios can retrieval benefit code generation models? and what challenges remain? We first curate a comprehensive evaluation benchmark, CODERAG-BENCH, encompassing three categories of code generation tasks, including basic programming, open-domain, and repository-level problems. We aggregate documents from five sources for models to retrieve contexts: competition solutions, online tutorials, library documentation, StackOverflow posts, and GitHub repositories. We examine top-performing models on CODERAG-BENCH by providing contexts retrieved from one or multiple sources. While notable gains are made in final code generation by retrieving high-quality contexts across various settings, our analysis reveals room for improvement: current retrievers still struggle to fetch useful contexts, especially with limited lexical overlap, and generators fail to improve with limited context lengths or limited abilities to integrate additional contexts. We hope CODERAG-BENCH serves as an effective testbed to encourage further development of advanced code-oriented RAG methods.

# 1 Introduction

The task of generating program code from natural language (NL) descriptions has rapidly advanced with language models (LMs) [5, 20, 19, 34]. While more advanced code generation models are constantly emerging [23, 43, 10], most of these models employ an NL-to-code generation paradigm without the ability to integrate additional context. However, it is often challenging to directly generate programs without additional information in many complex coding scenarios, e.g., when using unfamiliar libraries that models cannot easily memorize [47, 15]. Further, relying solely on parametric knowledge learned during training makes it harder to adapt generation to new distributions at test time [2]. For example, models are unable to stay up-to-date with continuously evolving public libraries [47], or with private code bases that are not included in the pre-training data [46, 15]. Retrieval-augmented generation (RAG) [18, 11] retrieves and incorporates relevant documents at inference time. RAG reduces the need to include all knowledge within model parameters [2], leading to accuracy improvements in various scenarios [13], even without additional training [31, 25]. It also allows for flexible knowledge updates by swapping datastores, i.e., the large-scale data used during inference.

∗Equal contribution.

Preprint. Under review.

Figure 1: Overview of CODERAG-BENCH. The benchmark covers 8 coding tasks (basic programming ×3, open-domain ×2, repository-level ×2, code retrieval ×1) and 5 document sources for retrieval (programming solutions, library documentation, online tutorials, StackOverflow posts, GitHub repositories), which are indexed into datastores for retrieval models; evaluation includes retrieval evaluation against canonical docs and execution-based end-to-end evaluation.

Nevertheless, prior work often focuses on general-domain, text-oriented generation tasks [30] using a general datastore such as Wikipedia [2]. While several works explore ways to incorporate library documents [47, 37] or files within a repository [46, 15], retrieval-augmented approaches to other types of coding problems and diverse retrieval sources remain largely under-explored.

We propose a new evaluation benchmark, CODERAG-BENCH, to fill this gap and facilitate research on an alternative paradigm: retrieval-augmented code generation (RACG; §2). CODERAG-BENCH (depicted in Figure 1) integrates eight programming tasks of four categories: basic programming, open-domain coding, repository-level, and code retrieval problems. To analyze the effectiveness of diverse code-related datastores, we also collect a range of retrieval documents from five sources: programming solutions, tutorials from online platforms, Python library documentation, StackOverflow (SO) posts, and GitHub files. Further, for each problem, we manually annotate ground-truth documents from their corresponding sources as a reference for RACG. In summary, CODERAG-BENCH gathers 9k coding problems and 25M retrieval documents, enabling experiments in various setups, and provides reproducible and reliable retrieval and end-to-end execution-based evaluations for RACG.

Based on this benchmark, we conduct holistic evaluations in retrieval, generation, and RACG scenarios (§3). Although code generation models can benefit from ground-truth documents in multiple scenarios, current retrieval models struggle with selecting accurate documents, especially for open-domain tasks. Meanwhile, many code generation models see little gain, due to their limited context capacity for consuming retrieved documents or their limited ability to perform RACG effectively.

Beyond canonical retrieval (i.e., from the ground-truth source), we also explore RACG with open retrieval, i.e., retrieving documents from various sources with different chunking strategies (§4). We find that each type of coding task can benefit from functionally relevant snippets from certain sources, and that chunking documents into 200–800 tokens often gives the best results. We hope CODERAG-BENCH can serve as a testbed for future work exploring, analyzing, and improving RACG systems.

# 2 CODERAG-BENCH

The curation methodology of CODERAG-BENCH (Figure 1) is motivated by three factors: (i) Diverse tasks: code generation involves versatile tasks that operate on different levels (line, function, repository) and in various domains (closed, open). (ii) Rigorous and reproducible evaluation: we provide high-quality annotations of ground-truth documents to enable retrieval evaluation, and execution-based evaluation for all code generation tasks to rigorously measure functional correctness. (iii) Unified interface: while current datasets utilize heterogeneous pipelines, our codebase provides a unified interface for retrieval, augmented generation, and evaluation.

In this section, we introduce the creation process of CODERAG-BENCH: programming problem integration (§2.1), retrieval source collection (§2.2), canonical document annotation (§2.3), and the evaluation pipeline (§2.4). Examples with canonical documents are available in §B.

**Table 1: Overview of the datasets in CODERAG-BENCH. CSN stands for CodeSearchNet.**

|Type|Dataset|# Examples|Ground-Truth Docs|Evaluation|
|---|---|---|---|---|
|Basic programming|HumanEval|164|program solutions|execution|
| |MBPP|500|program solutions|execution|
| |LiveCodeBench|400|-|execution|
|Open-domain|DS-1000|1000|library docs|execution|
| |ODEX|945|library docs, StackOverflow|execution|
|Repository-level|RepoEval (function)|373|GitHub repository|execution|
| |SWE-bench-Lite|300|GitHub repository|execution|
|Code retrieval|CodeSearchNet-Py|22177|CSN functions|NDCG@10|

# 2.1 Programming Problems

We categorize existing Python-based coding datasets into four types: code retrieval, basic programming, open-domain problems, and repository-level problems. To ensure dataset diversity, we choose and unify multiple frequently adopted datasets for each category, as listed in Table 1.

**Basic programming problems.** This category includes interview-style problems that mostly require Python built-in operations and pose algorithmic challenges. We select the two most widely used datasets, HumanEval and MBPP, which ask the model to complete a function from an NL problem description. However, due to limited public knowledge about model training data, it is unclear whether models suffer from data contamination on HumanEval and MBPP. Hence, we also include LiveCodeBench, whose problems are collected from coding websites after the training cutoff of the LMs we consider, to decrease the risk of contamination.

**Open-domain problems.** Open-domain coding problems require Python libraries beyond the standard libraries used in basic programming problems. We adopt the DS-1000 and ODEX datasets, which cover data-science and general open-domain coding problems. DS-1000 collects data science problems whose programs use seven common data-related libraries such as pandas and numpy. ODEX covers problems using a broader range of 79 libraries, such as web requests with requests and database operations with sqlalchemy.

**Repository-level coding problems.** Beyond function-level coding, some problems require editing files in the context of an entire GitHub repository. We thus adopt RepoEval and SWE-bench for repository-level code generation and issue-solving tasks. We integrate all three splits of RepoEval but only report its function split, as it is the only split supporting execution-based evaluation; the other two splits (API and line completion) are evaluated with lexical metrics, which have been shown to be ineffective at signifying functional correctness. Notably, our codebase is the first to provide reproducible execution evaluation on RepoEval. SWE-bench focuses on resolving GitHub issues by asking models to edit multiple files so that the required test cases pass. However, due to reproducibility issues with the full dataset, we use SWE-bench-Lite, a 300-problem subset whose results can be reproduced, together with a packaged Docker container that provides a pre-populated evaluation environment.

**Code retrieval problems.** In addition to retrieval for augmenting generation, we adopt the Python split of CodeSearchNet (CSN) as a code retrieval task. CSN asks for the correct implementation of an NL query to be retrieved from a pool of functions collected from GitHub repositories. Instead of monitoring how generation changes with various retrieval results, CSN directly measures retrieval quality.

In this work we focus on Python-based tasks, since Python is the most widely used programming language for benchmarking code generation; we leave extensions to other programming languages for future work.

Links referenced in the text:

- https://github.com/princeton-nlp/SWE-bench/issues
- https://www.swebench.com/lite.html
- https://github.com/OpenDevin/OpenDevin/tree/main/evaluation/swe_bench#opendevin-swe-bench-docker-image

# 2.2 Retrieval Sources

We collect retrieval documents from five resources commonly used by program developers, listed in Table 2. CODERAG-BENCH supports two retrieval setups: canonical retrieval, which retrieves documents only from the canonical datastore (§2.3), and open retrieval, which retrieves documents from any datastore.

**Programming solutions.** We create one document from each basic programming problem that has a canonical solution (i.e., HumanEval and MBPP), following VoyageAI [40], by concatenating its NL problem and program solution.

**Table 2: Statistics of the retrieval sources (number of documents and average length in tokens).**

|Resource|Size|Length|
|---|---|---|
|Programming solutions|1.1k|194.6|
|Online tutorials|79.4k|1502.5|
|Library documentation|34k|953.4|
|StackOverflow posts|23.5M|689.2|
|GitHub files|1.7M|5135.4|

**Online tutorials.** We collect tutorials from multiple websites, including GeeksforGeeks, W3Schools, tutorialspoint, and Towards Data Science, via the raw HTML pages in ClueWeb22 [29], a large-scale crawled web corpus. Each page contains code snippets and their text explanations, covering topics from basic programming techniques to advanced library usage.

**Library documentation.** We collect the official documentation provided by devdocs.io for all Python libraries, following [47]. These can be especially useful for open-domain and repository-level problems that use library functions to realize complex functionality.

**StackOverflow posts.** StackOverflow (SO) is among the most frequently visited sites for developers. We collect all SO posts from the stackexchange split of RedPajama-1T [7]. We treat each post as a retrievable document containing a question description, code responses, and textual explanations.

**GitHub repositories.** Lastly, we collect high-quality repositories from GitHub using the github split of RedPajama-1T [7], as developers often refer to popular repositories when writing their own programs. Following this practical paradigm, we enable LMs to retrieve files from other GitHub repositories as contexts when writing the current program. A sketch of how documents from these heterogeneous sources can be normalized into one retrieval corpus is shown below.

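The following is a minimal sketch (not the released preprocessing code) of how documents from the five sources might be normalized into a single JSONL datastore; the field names (`text`, `source`, `meta`) and helper functions are illustrative assumptions.

```python
import json

def make_solution_doc(problem_nl: str, solution_code: str) -> dict:
    """Concatenate an NL problem and its program solution into one document,
    as done for the programming-solutions source."""
    return {"source": "programming_solutions",
            "text": problem_nl.strip() + "\n" + solution_code.strip()}

def make_generic_doc(source: str, text: str, **meta) -> dict:
    """Wrap any other resource (tutorial page, docs entry, SO post, GitHub file)
    as a retrievable document with optional metadata."""
    return {"source": source, "text": text, "meta": meta}

# Write a unified JSONL datastore that any retriever can index.
docs = [
    make_solution_doc("Check if any two numbers are closer than a threshold.",
                      "def has_close_elements(nums, t): ..."),
    make_generic_doc("library_documentation",
                     "pandas.DataFrame.reset_index: Reset the index ...",
                     library="pandas"),
]
with open("datastore.jsonl", "w") as f:
    for d in docs:
        f.write(json.dumps(d) + "\n")
```
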
# 2.3 Canonical Document Annotation

To enable reliable retrieval evaluation and to estimate the upper bound of a RACG system with a perfect retriever, it is crucial that all examples include canonical documents: the document(s) containing the supporting contexts needed to solve the programming problem. However, because RACG is under-explored, most existing datasets do not provide these canonical documents. We therefore annotate canonical documents from the corresponding retrieval pool for each dataset, as listed in Table 1.

**Basic programming problems.** The canonical document for each example in HumanEval and MBPP is the document we created in §2.2 in the programming-solutions pool. Since LiveCodeBench does not provide solutions to its problems, we do not annotate canonical documents for it.

**Open-domain problems.** Since open-domain problems require libraries, we annotate the canonical library documentation for DS-1000 and ODEX examples. We first automatically parse out the library functions used in each program and find their corresponding documentation entries. Then, we manually verify the functions and remove incorrect ones. This yields an average of 1.4 and 1.2 documentation entries for DS-1000 and ODEX, respectively. The sketch below illustrates the parsing step.

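The automatic parsing step can be approximated with Python's ast module; this sketch (an illustration, not the exact annotation script) extracts the dotted names of called library functions so they can be matched against documentation entries before manual verification.

```python
import ast

def called_library_functions(program: str) -> set[str]:
    """Return dotted names of functions called in `program`, e.g. 'np.argsort',
    which can then be resolved against library documentation entries."""
    tree = ast.parse(program)
    calls = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            parts = []
            func = node.func
            while isinstance(func, ast.Attribute):   # unwrap a.b.c(...)
                parts.append(func.attr)
                func = func.value
            if isinstance(func, ast.Name):
                parts.append(func.id)
                calls.add(".".join(reversed(parts)))
    return calls

print(called_library_functions("import numpy as np\nidx = np.argsort(x)[::-1]"))
# {'np.argsort'}
```
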
**Repository-level problems.** We adopt canonical code from the original datasets as our canonical documents: 20-line code snippets of the missing functions in RepoEval, and the ground-truth edited files in SWE-bench. We obtain these from the completed local repositories released with the original datasets.

# 2.4 Evaluation Metrics

For retrieval, we evaluate NDCG, Precision, and Recall [39], and use NDCG@10 (as a percentage) as our primary metric, following prior work [12]. We evaluate retrieval only in the canonical setup. For code generation, we adopt the pass@k metric [5] to measure the execution correctness of generated programs. We evaluate final RAG performance in both the canonical and open retrieval setups.

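For concreteness, the following is a small sketch of both metrics: the standard unbiased pass@k estimator from [5] and a binary-relevance NDCG@10. It is an illustration, not the benchmark's evaluation harness.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k from [5]: 1 - C(n-c, k) / C(n, k), the probability that
    at least one of k samples (out of n generations, c of them correct) passes."""
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

def ndcg_at_10(ranked_ids: list[str], gold_ids: set[str]) -> float:
    """NDCG@10 with binary relevance: gain 1 for each canonical document."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, d in enumerate(ranked_ids[:10]) if d in gold_ids)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(min(len(gold_ids), 10)))
    return dcg / idcg if idcg > 0 else 0.0

# With one generation per problem (our setup), pass@1 is the single-sample pass rate.
print(pass_at_k(n=1, c=1, k=1), pass_at_k(n=1, c=0, k=1))   # 1.0 0.0
print(round(ndcg_at_10(["d3", "d7", "d1"], {"d1"}), 3))      # 0.5
```
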
7 Tutorial websites: https://geeksforgeeks.org; https://www.w3schools.com/; https://www.tutorialspoint.com/; https://towardsdatascience.com

**Table 3: Retrieval performance (NDCG@10). Columns are grouped by canonical source following Table 1: problem solutions (HumanEval, MBPP), CSN functions (CodeSearchNet-Py), library docs (DS-1000, ODEX), and in-repository files (RepoEval, SWE-bench-Lite). Numbers in parentheses are embedding dimensions; "-" marks settings that were not evaluated.**

|Method|HumanEval|MBPP|CSN|DS-1000|ODEX|RepoEval|SWE-bench-Lite|
|---|---|---|---|---|---|---|---|
|BM25|100.0|98.6|89.1|5.2|6.7|93.2|43.0|
|GIST-base (768)|98.0|98.0|89.9|12.0|12.1|81.2|46.8|
|GIST-large (1024)|100.0|98.9|89.6|13.6|28.0|82.9|47.8|
|BGE-base (768)|99.7|98.0|90.0|10.8|22.0|77.5|44.9|
|BGE-large (1024)|98.0|99.0|90.6|8.9|11.5|80.4|40.1|
|SFR-Mistral (4096)|100.0|99.0|-|19.3|37.1|83.8|62.7|
|Voyage-code (1536)|100.0|99.0|-|33.1|26.6|94.3|29.1|
|OpenAI-03 (1536)|100.0|98.9|-|18.2|16.5|93.0|43.3|

# 3 Canonical RACG: Experiments and Results

We conduct baseline experiments with multiple top-performing retrieval and generation models on CODERAG-BENCH (§3.1) using canonical data sources. We report results for document retrieval (§3.2), direct NL-to-code generation (§3.3), and end-to-end RACG using retrieved context (§3.4).

# 3.1 Experimental Setup

**Retrieval baselines.** We adopt top-performing retrievers from three categories: sparse retrievers, dense retrievers with open checkpoints, and proprietary APIs. Concretely, we use BM25 [33] to represent sparse retrievers, which are often robust under domain shift [39]. For dense retrievers, we use models of varying sizes, namely BGE-base/large [44], GIST-base/large [35], and SFR-Embedding-Mistral [26], which are among the top entries on the MTEB leaderboard [28]. We use two proprietary retrieval APIs, voyage-code-2 [40] and openai-text-embedding-small-03, which are the best options with reasonable cost in our preliminary study. We also explore reranking with BGE-reranker-base [44], which reranks the top-100 documents retrieved by the OpenAI embedding before they are fed into the generation models.

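As an illustration of this reranking step (a sketch, not the exact experimental pipeline; the checkpoint name is the public BAAI/bge-reranker-base model), a cross-encoder can rescore the first-stage candidates:

```python
from sentence_transformers import CrossEncoder

# A cross-encoder reranker scores (query, document) pairs jointly.
reranker = CrossEncoder("BAAI/bge-reranker-base", max_length=512)

def rerank(query: str, docs: list[str], top_k: int = 5) -> list[str]:
    """Rerank first-stage candidates (e.g., the top-100) and keep the best top_k."""
    scores = reranker.predict([(query, d) for d in docs])
    order = sorted(range(len(docs)), key=lambda i: scores[i], reverse=True)
    return [docs[i] for i in order[:top_k]]

candidates = ["def add(a, b): return a + b", "import sys  # write to stderr"]
print(rerank("Print a log message to standard error.", candidates, top_k=1))
```
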
**Generation baselines.** We adopt both code-specific LMs and strong general text-oriented LMs. For code-specific LMs, we use StarCoder2 [23], CodeGemma [38], CodeLlama [34], and DeepSeekCoder [10] in various sizes. For general text LMs, we include three top-performing model families: Llama3 [27], Command-R [6], which is specially optimized for RAG, and the proprietary GPT models gpt-3.5-turbo-0125 and gpt-4. We use the instruct version of each generation model when available, since they often perform better than the base versions.

**Experimental setup and hyper-parameters.** For retrieval, we implement the BM25 retriever using pyserini [21] with parameters k1 = 1.2 and b = 0.75, and use sentence-transformers [32] for all dense models with open checkpoints. For code generation, we use temperature t = 0.2 and top_p = 0.95 and sample one response per problem. We prepend the top-5 retrieved documents to the original problem, and do not include other unnecessary contexts such as few-shot demonstrations. A sketch of this retrieve-then-generate setup is shown below.

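The following sketch is illustrative only; the dense-retriever and generator checkpoints shown are stand-ins rather than the exact experimental code. It shows the basic dense-retrieval-then-prepend setup using sentence-transformers for retrieval and a Hugging Face causal LM for generation.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from transformers import pipeline

corpus = ["numpy.argsort: Returns the indices that would sort an array. ...",
          "pandas.DataFrame.reset_index: Reset the index of the DataFrame. ..."]

# 1) Dense retrieval: embed the corpus once, then embed each query.
retriever = SentenceTransformer("BAAI/bge-base-en-v1.5")  # stand-in checkpoint
doc_emb = retriever.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 5) -> list[str]:
    q_emb = retriever.encode([query], normalize_embeddings=True)
    scores = (q_emb @ doc_emb.T)[0]          # cosine similarity on normalized vectors
    return [corpus[i] for i in np.argsort(-scores)[:k]]

# 2) Generation: prepend the retrieved documents to the problem statement.
generator = pipeline("text-generation", model="bigcode/starcoder2-7b")  # stand-in
problem = "Sort a numpy array in descending order and return the indices."
prompt = "\n\n".join(retrieve(problem)) + "\n\n# Task: " + problem + "\n"
print(generator(prompt, max_new_tokens=128, do_sample=True,
                temperature=0.2, top_p=0.95)[0]["generated_text"])
```
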
# 3.2 Retrieval Results

Table 3 shows the performance of diverse retrieval models across the benchmark tasks.

**Comparison of lexical and neural retrievers.** BM25 has been widely used as the primary retrieval model in recent RACG work [47, 15], yet comprehensive comparisons against diverse retrieval systems are often missing. While prior studies indicate that neural retrieval systems often underperform BM25 baselines in out-of-domain scenarios [39], our analysis on CODERAG-BENCH reveals that dense embedding models frequently surpass BM25. We hypothesize that this is because many competitive retrieval models are trained on diverse tasks across various domains, including code data [1, 36], enhancing their robustness in code retrieval setups.

**Do larger retrieval models perform better?** Among dense retrieval models, increasing model size often leads to better retrieval performance, similar to the trends observed in LMs [4]. In particular, GIST-large (340M) consistently outperforms GIST-base (110M), and SFR-Mistral (7B) achieves the best performance among all open sparse and dense models on all tasks, surpassing the OpenAI embedding on several tasks. However, it is important to note that this model also has the largest embedding dimension (4,096), likely exceeding those of the proprietary retrieval systems, which expose 1,536 dimensions.

**Efficiency.** While larger retrieval models often outperform smaller ones, they also introduce significant costs. We analyze efficiency, focusing on:

1. Encoding latency: latency to encode documents offline
2. Search latency: latency to encode queries/documents and calculate their similarities
3. Model storage requirements
4. Index storage requirements

We conduct the efficiency analysis on sampled CodeSearchNet Python data; see experimental details in §C. As shown in Table 4, BM25 indexing and searching take only seconds to finish.

**Table 4: Efficiency analysis for document retrieval.**

|Method|Encoding latency|Search latency|Model storage|Index storage|
|---|---|---|---|---|
|BM25|0.15ms|0.02ms|-|141MB|
|GIST-base|3.7ms|9.7ms|440MB|307MB|
|GIST-large|13ms|18ms|1300MB|409MB|
|SFR-Mistral|316ms|113ms|14220MB|1638MB|
|Voyage-code|22ms|40ms|-|1172MB|
|OpenAI-03|31ms|47ms|-|1172MB|

# 3.3 Generation with and without Canonical Documents

We first evaluate possible lower and upper bounds on RACG performance by testing generation:

1. without any retrieval, and
2. with ground-truth documents.

We report both results in Table 5. Compared to base generation without contexts, incorporating canonical contexts improves performance in most setups, and substantially so on basic programming problems.

**Table 5: Code generation pass@1 (i) without additional contexts (before the slash) and (ii) with ground-truth documents (after the slash). LiveCodeBench (LCB) has no canonical documents, so only no-context results are shown.**

|Method|HumanEval|MBPP|LCB|DS-1000|ODEX|RepoEval (function)|SWE-bench (Lite)|
|---|---|---|---|---|---|---|---|
|StarCoder2-7B|31.7 / 94.5|10.4 / 34.8|1.5|29.2 / 30.0|14.6 / 17.5|26.5 / 42.0|0.0 / 0.7|
|CodeGemma-7B|49.4 / 77.4|48.0 / 52.2|21.5|20.1 / 19.8|18.9 / 18.2|24.7 / 32.2|0.0 / 0.3|
|CodeLlama-7B|34.8 / 87.2|23.8 / 42.8|13.5|21.8 / 26.1|35.8 / 41.0|24.1 / 38.3|0.3 / 0.0|
|CodeLlama-34B|42.7 / 84.8|51.2 / 88.0|5.8|34.7 / 37.0|34.9 / 38.0|29.8 / 42.6|0.0 / 0.0|
|DeepSeekCoder-7B|70.1 / 87.8|60.8 / 63.6|30.5|41.4 / 43.2|39.2 / 41.7|28.2 / 43.7|0.7 / 0.0|
|DeepSeekCoder-33B|78.0 / 95.7|61.0 / 92.2|33.8|40.2 / 40.1|28.0 / 28.9|32.4 / 45.3|0.3 / 0.7|
|Llama3-8B|57.9 / 65.2|35.6 / 52.8|2.8|28.9 / 31.1|37.4 / 33.7|26.0 / 43.2|0.7 / 0.3|
|Command-R|43.3 / 51.2|37.2 / 37.8|10.0|25.8 / 28.5|35.5 / 36.0|23.9 / 37.0|0.0 / 0.3|
|GPT-3.5-turbo|72.6 / 91.5|70.8 / 72.6|35.3|43.7 / 42.9|41.7 / 40.3|23.9 / 39.1|0.7 / 2.7|
|GPT-4|75.6 / 92.6|79.4 / 81.4|43.8|52.7 / 51.2|44.6 / 44.2|32.4 / 46.1|2.3 / 2.3|

On open-domain problems, most code-specific LMs see increases of up to 5.2 points, signifying that most models can effectively consume indirectly helpful documentation. Among the general LMs, Command-R appears to be the only model benefiting from contexts on both datasets, consistent with its superior RAG ability. However, the strongest GPT model does not gain from contexts, likely due to its prior familiarity with these libraries: well-trained models are known to memorize popular facts and to benefit little from retrieval about them [25, 16].

On repository-level problems, all models improve by 7.5–17.2 points from canonical snippets in RepoEval. SWE-bench-Lite, however, is much harder, with even the strongest GPT model achieving only 2.7% in the canonical setting. We conjecture that the difficulty comes from both complex multi-file editing and long contexts that stress the limits of most models. We leave the integration of better inference-time strategies [45] with RAG to future work.

Due to API costs, we randomly sample 10k queries and 100k documents from the CodeSearchNet Python split for the efficiency analysis. For API models, we use a batch size of 64 for encoding.

# 3.4 Retrieval-Augmented Code Generation

We now experiment with top-performing retrieval and generation models in the full RACG setting, which requires both retrieving documents and generating code conditioned on them. We select the best retrieval models of each type: BM25, GIST-large, and the Voyage and OpenAI embeddings. For generation, we select (i) StarCoder2-7B, a weaker model that benefits the most from contexts; (ii) DeepSeekCoder-7B, one of the strongest open code LMs; and (iii) GPT-3.5-turbo, a top proprietary model.11 For each dataset, we retrieve the most relevant contexts from its canonical source as marked in Table 1,12 and retrieve programming solutions for LiveCodeBench. Table 6 shows the results.

**Table 6: RACG pass@1 results with contexts retrieved by different retrievers (HumanEval, MBPP, LCB: general; DS-1000, ODEX: open-domain; RepoEval, SWE-bench-Lite: repository-level).**

|Method|HumanEval|MBPP|LCB|DS-1000|ODEX|RepoEval|SWE-bench-Lite|
|---|---|---|---|---|---|---|---|
|*w/ StarCoder2-7B*| | | | | | | |
|None|31.7|2.4|1.5|29.2|14.6|26.5|0.0|
|BM25|43.9|51.8|1.0|36.7|14.1|36.7|0.0|
|GIST-large|38.7|50.4|0.5|35.9|17.3|40.8|0.3|
|Voyage, code|39.0|52.6|0.3|36.0|15.3|45.8|0.3|
|OpenAI, small|39.0|52.6|1.5|35.5|15.9|51.2|0.0|
|OpenAI, rerank|34.8|53.4|0.5|33.4|14.1|53.9|0.3|
|Gold|94.5|34.8|-|30.0|17.5|42.0|0.7|
|*w/ DeepseekCoder-7B-instruct*| | | | | | | |
|None|70.1|60.8|30.5|41.4|39.2|28.2|0.7|
|BM25|68.9|60.0|31.8|36.6|37.8|37.3|0.0|
|GIST-large|66.3|56.6|33.8|35.9|34.9|44.5|0.3|
|Voyage, code|66.5|56.4|31.8|35.9|39.4|46.6|0.3|
|OpenAI, small|68.9|58.6|32.0|35.5|37.1|55.2|0.3|
|OpenAI, rerank|53.0|60.6|31.5|36.5|37.1|55.5|0.3|
|Gold|87.8|63.6|-|43.2|41.7|48.1|0.0|
|*w/ GPT-3.5-turbo*| | | | | | | |
|None|72.6|70.8|35.3|43.7|41.7|23.9|0.7|
|BM25|73.2|72.4|35.5|36.9|41.0|30.8|1.0|
|GIST-large|73.2|68.2|34.8|36.7|36.2|38.3|0.3|
|Voyage, code|75.0|66.8|34.5|37.4|41.0|43.2|0.7|
|OpenAI, small|73.8|68.4|35.8|36.9|40.3|48.0|0.3|
|OpenAI, rerank|64.0|72.6|33.5|37.4|40.5|49.6|0.3|
|Gold|91.5|72.6|-|42.9|40.3|39.1|2.7|

**Basic programming problems.** Most retrieved contexts help StarCoder2 generations. On MBPP, RACG even outperforms the canonical setup by 15.6–17.8 points. However, RACG does not improve DeepSeekCoder generations; we observe that this is due to over-complicated and repetitive generations when additional contexts are given. This may indicate that DeepSeekCoder is not robust to extra contexts and hence produces undesired behaviors when receiving different inputs. In comparison, GPT-3.5-turbo effectively improves with added contexts, showing a better ability to leverage augmented contexts.

**Open-domain problems.** StarCoder2 substantially benefits from retrieved library documentation on both datasets, while DeepSeekCoder only improves on ODEX, and GPT-3.5 on neither. We hypothesize that the less familiar a model is with the domain, the more it benefits from retrieved documents. Meanwhile, poor retrieval results can also impair the effectiveness of RACG.

11 We use deepseek-coder-7b/gpt-3.5-turbo instead of deepseek-coder-33b/gpt-4 due to resource limitations.

12 For HumanEval and MBPP, we exclude the canonical document for each query and retrieve the top-5 documents.

**Repository-level problems.** All models benefit from retrieved code snippets on RepoEval, and RACG with OpenAI embeddings can even surpass the canonical setup. While some retrieved files do not include the solution to the problem (as the canonical documents do), they may contain function definitions or usage examples that benefit the final generation, suggesting that the OpenAI embedding represents the repository well and can thus retrieve implicitly supporting contexts. However, SWE-bench-Lite remains too complex, and no RACG setup achieves a non-trivial result.

# 3.5 How Many Documents to Augment?

Different models have different context length limits and context utilization abilities. Therefore, we study how model performance varies when providing different numbers of documents in the context. We experiment with one representative dataset for each task category: HumanEval, since it is the most commonly used dataset; ODEX, for its broad domain coverage; and RepoEval, for its solvable difficulty. We compare RACG performance when providing the top-1, 2, 5, and 10 documents.

Figure 2: Comparing RACG performance with various numbers of documents (panels: HumanEval, ODEX, RepoEval; curves: StarCoder2 and DeepseekCoder; x-axis: top-k).

As shown in Figure 2, including five documents yields the best results in most settings, except for StarCoder2 on RepoEval, which performs best with 8 documents. Despite the drastic difference in the context limits of StarCoder2 (16k) and DeepseekCoder (4k), the sweet spot is consistently around the top-5 documents. While adding a few documents may include helpful contexts, adding more low-ranked documents may introduce noise and deteriorate generation due to the imperfections of retrieval systems [41].

# 4 RACG with Open Retrieval

Beyond retrieving documents from the canonical source, we explore RACG with open retrieval from all sources (§2.2). We experiment with three category-representative datasets (HumanEval, ODEX, and RepoEval), as in §3.5. We also experiment with mixed retrieval documents from all sources, where we aggregate the top-1 documents from each of the five sources as additional contexts.

# Can RACG Benefit Weaker Models?

We use the three top-performing retrievers and the StarCoder2 generation model, as in §3.4, to examine whether RACG helps weaker code LMs.

**Table 7: Comparing five retrieval sources on HumanEval with StarCoder2. The no-retrieval baseline is 31.7; Program is the canonical source; All mixes the top-1 documents from all five sources.**

|Method|Program|Tutorial|Docs|SO|GitHub|All|
|---|---|---|---|---|---|---|
|BM25|97.6|27.4|29.3|32.9|30.5|97.6|
|GIST-large|67.1|34.8|26.7|32.3|32.9|69.1|
|OpenAI|97.6|29.3|24.4|36.0|31.1|97.6|

13 For all experiments in this section, we only include the first 500 tokens of each retrieved document, which we show to be optimal on average in the ablation studies in §4, and which satisfies the context limits of all models.

**General programming: HumanEval.** Retrieved program solutions improve HumanEval generation by a large margin (Table 7), and some retrieved tutorials are about the same programming problem as the HumanEval example, with code and detailed textual explanations, hence could hint at or disclose the answer. Other retrieval sources do not often contain relevant contexts and thus do not improve generation. Surprisingly, generation with mixed documents performs as well as using the gold documents, suggesting that the model can discern and integrate the most useful content from a mixture of texts.

**Table 8: Comparing five retrieval sources on ODEX with StarCoder2. The no-retrieval baseline is 14.6.**

|Method|Program|Tutorial|Docs|SO|GitHub|All|
|---|---|---|---|---|---|---|
|BM25|18.2|13.4|14.1|11.6|15.9|16.2|
|GIST-large|14.6|15.7|17.3|11.4|15.5|17.1|
|OpenAI|18.7|14.1|15.9|10.9|16.9|15.3|

**Open-domain: ODEX.** Programming solutions are the most helpful source, bringing gains of 3.8–4.3 points; GitHub files also improve results by 0.9–2.3 points. Though the retrieved solutions/files are only sometimes functionally relevant to the ODEX examples, they can demonstrate the correct usage of libraries, such as regex from solutions and requests from GitHub files, thus guiding the generation to be more functionally correct.

Similar to HumanEval, GIST-large is particularly good at retrieving tutorials, while BM25 and the OpenAI embeddings find higher-quality program solutions, indicating their respective domain advantages.

**Table 9: Comparing retrieval sources on RepoEval with StarCoder2 (Local: snippets retrieved from the local repository; Open: documents from the open sources; L+O: local and open contexts combined). The no-retrieval baseline is 26.5.**

|Method|Local|Program|Tutorial|Docs|SO|GitHub|Open|L+O|
|---|---|---|---|---|---|---|---|---|
|BM25|36.7|23.6|25.2|23.9|23.6|25.5|23.6|31.4|
|GIST-large|40.8|24.1|23.3|21.7|24.7|24.4|24.1|41.8|
|OpenAI|51.2|23.9|24.1|24.1|23.1|22.8|24.9|50.9|

**Repository-level: RepoEval.** Open retrieval sources are less useful than code snippets retrieved from the local repository. As the RepoEval task is code completion, it is crucial to understand the local code context, which cannot be obtained from external sources. When using both local and open-source contexts (L+O), models surpass the no-retrieval baseline, yet are still only comparable with Local, suggesting that more effort is needed to build systems that benefit from both sources.

**Exploring optimal chunking strategies.** Including multiple documents may exceed model context limits and hence impair RACG. Therefore, we explore various chunking strategies to better include retrieved contexts. Compared to the no-chunking baseline, we study (i) post-retrieval chunking, which takes the first N tokens of each document; (ii) post-retrieval chunking with reranking, which uses BGE-reranker-base (§3.1) to find the most relevant N-token chunk of each document; and (iii) pre-retrieval chunking, which chunks documents beforehand and retrieves N-token pieces directly.

For (i), we compare using the first N tokens for N from 200 to 1500. As shown in Figure 3, most sources are best represented by their first 800 tokens, while SO posts perform best with the first 200 tokens. We then perform (ii) reranking within this optimal range of 200–800 tokens, yet find that it greatly degrades the results, showing the limited utility of current rerankers. Lastly, Table 10 shows that (iii) pre-retrieval chunking achieves the highest scores on almost all document sources. A sketch contrasting these strategies is given below.

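The following sketch contrasts post-retrieval truncation (i) with pre-retrieval chunking (iii); it is an illustration under assumed tokenizer and chunk-size choices, not the benchmark's implementation.

```python
from transformers import AutoTokenizer

# Any subword tokenizer works for counting tokens; GPT-2's is a common stand-in.
tok = AutoTokenizer.from_pretrained("gpt2")

def truncate(doc: str, n_tokens: int = 800) -> str:
    """(i) Post-retrieval chunking: keep only the first N tokens of a retrieved doc."""
    ids = tok(doc, truncation=True, max_length=n_tokens)["input_ids"]
    return tok.decode(ids)

def pre_chunk(doc: str, n_tokens: int = 800) -> list[str]:
    """(iii) Pre-retrieval chunking: split a doc into N-token pieces that are
    indexed and retrieved directly."""
    ids = tok(doc)["input_ids"]
    return [tok.decode(ids[i:i + n_tokens]) for i in range(0, len(ids), n_tokens)]

long_doc = "pandas.DataFrame.groupby: Group DataFrame using a mapper. " * 200
print(len(pre_chunk(long_doc)), "chunks;",
      len(tok(truncate(long_doc))["input_ids"]), "tokens kept after truncation")
```
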
# Does RACG Help Stronger Models?

We have shown that RACG with open retrieval improves a relatively weak model, StarCoder2. To see whether this improvement generalizes to stronger models, we experiment with a series of top-performing proprietary models: GPT-4o, Claude-3-haiku/sonnet, and Gemini-1.5-flash/pro.

14 We do not chunk programming solutions since they are typically short (fewer than 200 tokens on average, as shown in Table 2).

**Basic programming: HumanEval.** RACG consistently improves the performance of GPT-4o and Claude-3-sonnet when leveraging all sources of documents. However, for weaker models such as Claude-3-haiku and Gemini-1.5-flash, RACG only helps when aggregating multiple sources, yet falls short when grounding on a single source (even the canonical solution source). Interestingly, the stronger Claude-3-sonnet performs worse than the weaker Claude-3-haiku without retrieval, but benefits from all retrieval sources and outperforms haiku with documents from the canonical programming source, suggesting a potentially better RAG ability. While the stronger Claude effectively benefits from additional contexts, the stronger Gemini-1.5-pro behaves similarly to its weaker counterpart and cannot do RACG effectively with non-canonical sources.

|Method|Baseline|Program|Tutorial|Docs|SO|GitHub|All|
|---|---|---|---|---|---|---|---|
|GPT-4o|75.6|94.5|90.2|90.9|91.5|84.8|95.1|
|Claude-3-haiku|74.4|77.4|77.4|71.3|67.7|73.2|82.9|
|Claude-3-sonnet|65.9|78.7|66.5|68.9|70.7|73.8|80.5|
|Gemini-1.5-flash|72.0|91.5|75.0|70.1|68.9|68.9|95.1|
|Gemini-1.5-pro|82.9|95.7|79.9|77.4|79.9|80.5|86.6|

**Open domain: ODEX.** All models experience limited improvement from leveraging library documentation for the ODEX task, with the only exception that GPT-4o improves by 4.6 points when incorporating programming solutions into the context. As results degrade in most cases, we conduct a manual analysis to examine where most models fail. We find that most models tend to copy functions from the context, sometimes even overwriting the function being queried, thus failing the test cases specific to the queried function. Further, possibly affected by the plethora of programs in context, models tend to generate over-complicated programs which often do not pass the test cases. In general, most models can be easily distracted or disturbed by additional contexts [41] and fail to conduct the designated code generation task, indicating much room for improvement in RACG.

|Method|Baseline|Program|Tutorial|Docs|SO|GitHub|All|
|---|---|---|---|---|---|---|---|
|GPT-4o|44.6|49.2|44.2|47.6|40.3|39.4|39.6|
|Claude-3-haiku|48.5|42.6|39.2|44.6|33.7|40.5|35.1|
|Claude-3-sonnet|41.0|37.6|35.3|38.0|34.2|42.4|38.0|
|Gemini-1.5-flash|50.6|48.3|46.7|46.2|41.9|44.9|43.1|
|Gemini-1.5-pro|57.2|54.4|45.6|51.0|46.5|39.6|46.0|

**Repository level: RepoEval.** While GPT-4o can solve the RepoEval task with a reasonable success rate, all Claude models are challenged by the task and achieve less than 10% pass@1 in most scenarios. We find that the Claude models mostly respond with explanations of the incomplete input code instead of the to-be-completed code, even with proper instructions, possibly due to properties of their (unknown) training data. Gemini-1.5-flash also barely solves the task and often generates textual explanations; however, its stronger pro variant obtains improvements of about 10–25 points, demonstrating stronger repository-level code completion abilities.

|Method|Baseline|Local|Program|Tutorial|Docs|SO|GitHub|All|L+E|
|---|---|---|---|---|---|---|---|---|---|
|GPT-4o|32.4|62.2|35.4|28.7|27.8|29.0|28.2|30.3|54.2|
|Claude-3-haiku|9.1|0.5|0.5|0.5|0.5|0.5|0.2|0.2|0.5|
|Claude-3-sonnet|0.5|0.5|0.5|0.5|0.5|0.5|0.5|0.5|0.5|
|Gemini-1.5-flash|1.3|16.9|4.0|2.1|3.2|2.1|3.2|2.7|11.8|
|Gemini-1.5-pro|10.5|39.1|15.1|13.4|15.8|15.3|11.8|12.3|33.0|

# 5 Related Work

**Code generation.** Neural code generation has long been an important task [24], and increasingly strong code LMs have been created [34, 19, 10, 38] to solve various tasks [5, 17, 15]. However, most LMs generate code solely based on the NL problem and their parametric knowledge, without using external programming sources (e.g., tutorials) or a RAG approach. To fill this gap and allow for a systematic study of RACG, we orchestrate various datasets and retrieval sources to benchmark and analyze RACG systems.

**Retrieval-augmented generation (RAG).** RAG has been widely used for knowledge-intensive tasks [18, 11]. While previous studies often train retrieval and generation components from scratch or sequentially [13], recent work has demonstrated the effectiveness of retrieval-augmented approaches on top of off-the-shelf powerful LMs [31, 25]. However, most prior work focuses on text-centric tasks using general-domain corpora such as Wikipedia [2]. While several prior works leverage programming context retrieved from repositories [8, 45] or documentation [47], to our knowledge there are no prior studies analyzing the effectiveness of RACG across different coding tasks and knowledge sources. In text-centric tasks, unified benchmarks such as BEIR [39] and KILT [30] aggregate several text retrieval and generation tasks, facilitating rapid progress in this area [28]; yet we currently lack a comparable large-scale benchmark or analysis for RACG. To provide a systematic analysis of coding tasks with various retrieval sources, we propose a unified benchmark and codebase to enable versatile analysis of RACG.

# 6 Conclusion

In this work, we propose CODERAG-BENCH, a benchmark for retrieval-augmented code generation with various coding tasks and retrieval sources. Through experiments with top-performing retrieval and generation models, we show that retrieving external documents can greatly benefit code generation. However, current retrieval models struggle to find genuinely helpful documents, and generation models have limited context capacity and RAG abilities, both of which lead to suboptimal RACG results. We hope CODERAG-BENCH can serve as a solid testbed to advance future endeavors in this direction.

**Acknowledgment.** We thank Shuyan Zhou and Xinran Zhao for helpful discussions in the early stage of this project, and Saujas Vaduguru, Jing Yu Koh, Alex Xie, and Andy Liu for providing valuable feedback on the draft. Zora Zhiruo Wang is supported by the Carnegie Mellon University Presidential Fellowship. Yiqing Xie is supported by NSF grant DSES 2222762.

# References

[1] A. Asai, T. Schick, P. Lewis, X. Chen, G. Izacard, S. Riedel, H. Hajishirzi, and W.-t. Yih. Task-aware retrieval with instructions. In Findings of the Association for Computational Linguistics: ACL 2023, pages 3650–3675, Toronto, Canada, July 2023. doi: 10.18653/v1/2023.findings-acl.225. URL https://aclanthology.org/2023.findings-acl.225.

[2] A. Asai, Z. Zhong, D. Chen, P. W. Koh, L. Zettlemoyer, H. Hajishirzi, and W.-t. Yih. Reliable, adaptable, and attributable language models with retrieval. arXiv preprint arXiv:2403.03187, 2024.

[3] J. Austin, A. Odena, M. Nye, M. Bosma, H. Michalewski, D. Dohan, E. Jiang, C. Cai, M. Terry, Q. Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.

[4] T. B. Brown, B. Mann, N. Ryder, et al. Language models are few-shot learners. ArXiv, abs/2005.14165, 2020. URL https://api.semanticscholar.org/CorpusID:218971783.

[5] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.

[6] CohereAI. Command R, 2024. URL https://docs.cohere.com/docs/command-r.

[7] Together Computer. RedPajama: An open source recipe to reproduce LLaMA training dataset, 2023. URL https://github.com/togethercomputer/RedPajama-Data.

[8] Y. Ding, Z. Wang, W. U. Ahmad, H. Ding, M. Tan, N. Jain, M. K. Ramanathan, R. Nallapati, P. Bhatia, D. Roth, and B. Xiang. CrossCodeEval: A diverse and multilingual benchmark for cross-file code completion. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2023. URL https://openreview.net/forum?id=wgDcbBMSfh.

[9] T. Gebru, J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach, H. D. Iii, and K. Crawford. Datasheets for datasets. Communications of the ACM, 64(12):86–92, 2021.

[10] D. Guo, Q. Zhu, D. Yang, Z. Xie, K. Dong, W. Zhang, G. Chen, X. Bi, Y. Wu, Y. Li, et al. DeepSeek-Coder: When the large language model meets programming – the rise of code intelligence. arXiv preprint arXiv:2401.14196, 2024.

[11] K. Guu, K. Lee, Z. Tung, P. Pasupat, and M. Chang. Retrieval augmented language model pre-training. In International Conference on Machine Learning, pages 3929–3938. PMLR, 2020.

[12] G. Izacard, M. Caron, L. Hosseini, S. Riedel, P. Bojanowski, A. Joulin, and E. Grave. Unsupervised dense information retrieval with contrastive learning. Transactions on Machine Learning Research, 2022. URL https://openreview.net/forum?id=jKN1pXi7b0.

[13] G. Izacard, P. Lewis, M. Lomeli, L. Hosseini, F. Petroni, T. Schick, J. A. Yu, A. Joulin, S. Riedel, and E. Grave. Few-shot learning with retrieval augmented language models. ArXiv, abs/2208.03299, 2022. URL https://api.semanticscholar.org/CorpusID:251371732.

[14] N. Jain, K. Han, A. Gu, W.-D. Li, F. Yan, T. Zhang, S. Wang, A. Solar-Lezama, K. Sen, and I. Stoica. LiveCodeBench: Holistic and contamination free evaluation of large language models for code. arXiv preprint arXiv:2403.07974, 2024.

[15] C. E. Jimenez, J. Yang, A. Wettig, S. Yao, K. Pei, O. Press, and K. R. Narasimhan. SWE-bench: Can language models resolve real-world GitHub issues? In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=VTF8yNQM66.

[16] N. Kandpal, H. Deng, A. Roberts, E. Wallace, and C. Raffel. Large language models struggle to learn long-tail knowledge. In International Conference on Machine Learning, pages 15696–15707. PMLR, 2023.

[17] Y. Lai, C. Li, Y. Wang, T. Zhang, R. Zhong, L. Zettlemoyer, W.-t. Yih, D. Fried, S. Wang, and T. Yu. DS-1000: A natural and reliable benchmark for data science code generation. In Proceedings of the 40th International Conference on Machine Learning, ICML'23. JMLR.org, 2023.

[18] P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih, T. Rocktäschel, S. Riedel, and D. Kiela. Retrieval-augmented generation for knowledge-intensive NLP tasks. In Advances in Neural Information Processing Systems, volume 33, pages 9459–9474. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/6b493230205f780e1bc26945df7481e5-Paper.pdf.

[19] R. Li, L. B. Allal, Y. Zi, et al. StarCoder: May the source be with you! Transactions on Machine Learning Research, 2023. Reproducibility Certification. URL https://openreview.net/forum?id=KoFOg41haE.

[20] Y. Li, D. Choi, J. Chung, et al. Competition-level code generation with AlphaCode. Science, 378(6624):1092–1097, 2022. doi: 10.1126/science.abq1158. URL https://www.science.org/doi/abs/10.1126/science.abq1158.

[21] J. Lin, X. Ma, S.-C. Lin, J.-H. Yang, R. Pradeep, and R. Nogueira. Pyserini: A Python toolkit for reproducible information retrieval research with sparse and dense representations. In Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021), 2021. URL https://dl.acm.org/doi/10.1145/3404835.3463238.

[22] J. Liu, C. S. Xia, Y. Wang, and L. Zhang. Is your code generated by ChatGPT really correct? Rigorous evaluation of large language models for code generation. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=1qvx610Cu7.

[23] A. Lozhkov, R. Li, L. B. Allal, F. Cassano, J. Lamy-Poirier, N. Tazi, A. Tang, D. Pykhtar, J. Liu, Y. Wei, et al. StarCoder 2 and The Stack v2: The next generation. arXiv preprint arXiv:2402.19173, 2024.

[24] S. Lu, D. Guo, S. Ren, et al. CodeXGLUE: A machine learning benchmark dataset for code understanding and generation. CoRR, abs/2102.04664, 2021.

[25] A. Mallen, A. Asai, V. Zhong, R. Das, D. Khashabi, and H. Hajishirzi. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9802–9822, Toronto, Canada, July 2023. doi: 10.18653/v1/2023.acl-long.546. URL https://aclanthology.org/2023.acl-long.546.

[26] R. Meng, Y. Liu, S. R. Joty, C. Xiong, Y. Zhou, and S. Yavuz. SFR-Embedding-Mistral: Enhance text retrieval with transfer learning, 2024. URL https://blog.salesforceairesearch.com/sfr-embedded-mistral/.

[27] Meta. Introducing Meta Llama 3: The most capable openly available LLM to date, 2024. URL https://ai.meta.com/blog/meta-llama-3/.

[28] N. Muennighoff, N. Tazi, L. Magne, and N. Reimers. MTEB: Massive text embedding benchmark. arXiv preprint arXiv:2210.07316, 2022.

[29] A. Overwijk, C. Xiong, X. Liu, C. VandenBerg, and J. Callan. ClueWeb22: 10 billion web documents with visual and semantic information. arXiv preprint arXiv:2211.15848, 2022.

[30] F. Petroni, A. Piktus, A. Fan, P. Lewis, M. Yazdani, N. De Cao, J. Thorne, Y. Jernite, V. Karpukhin, J. Maillard, et al. KILT: A benchmark for knowledge intensive language tasks. arXiv preprint arXiv:2009.02252, 2020.

[31] O. Ram, Y. Levine, I. Dalmedigos, D. Muhlgay, A. Shashua, K. Leyton-Brown, and Y. Shoham. In-context retrieval-augmented language models. Transactions of the Association for Computational Linguistics, 11:1316–1331, 2023. doi: 10.1162/tacl_a_00605. URL https://aclanthology.org/2023.tacl-1.75.

[32] N. Reimers and I. Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, 2019. URL https://arxiv.org/abs/1908.10084.

[33] S. E. Robertson and H. Zaragoza. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval, 3:333–389, 2009. URL https://api.semanticscholar.org/CorpusID:207178704.

[34] B. Roziere, J. Gehring, F. Gloeckle, S. Sootla, I. Gat, X. E. Tan, Y. Adi, J. Liu, T. Remez, J. Rapin, et al. Code Llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023.

[35] A. V. Solatorio. GISTEmbed: Guided in-sample selection of training negatives for text embedding fine-tuning, 2024. URL https://arxiv.org/abs/2402.16829.

[36] H. Su, W. Shi, J. Kasai, Y. Wang, Y. Hu, M. Ostendorf, W.-t. Yih, N. A. Smith, L. Zettlemoyer, and T. Yu. One embedder, any task: Instruction-finetuned text embeddings. In Findings of the Association for Computational Linguistics: ACL 2023, pages 1102–1121, Toronto, Canada, July 2023. doi: 10.18653/v1/2023.findings-acl.71. URL https://aclanthology.org/2023.findings-acl.71.

[37] H. Su, S. Jiang, Y. Lai, H. Wu, B. Shi, C. Liu, Q. Liu, and T. Yu. ARKS: Active retrieval in knowledge soup for code generation. arXiv preprint arXiv:2402.12317, 2024.

[38] CodeGemma Team. CodeGemma: Open code models based on Gemma, 2024. URL https://storage.googleapis.com/deepmind-media/gemma/codegemma_report.pdf.

[39] N. Thakur, N. Reimers, A. Rücklé, A. Srivastava, and I. Gurevych. BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021. URL https://openreview.net/forum?id=wCu6T5xFjeJ.

[40] VoyageAI. voyage-code-2: Elevate your code retrieval, 2024. URL https://blog.voyageai.com/2024/01/23/voyage-code-2-elevate-your-code-retrieval/.

[41] Z. Wang, J. Araki, Z. Jiang, M. R. Parvez, and G. Neubig. Learning to filter context for retrieval-augmented generation. arXiv preprint arXiv:2311.08377, 2023.

[42] Z. Wang, S. Zhou, D. Fried, and G. Neubig. Execution-based evaluation for open-domain code generation. In Findings of the Association for Computational Linguistics: EMNLP 2023, 2023. doi: 10.18653/v1/2023.findings-emnlp.89. URL https://aclanthology.org/2023.findings-emnlp.89.

[43] Y. Wei, Z. Wang, J. Liu, Y. Ding, and L. Zhang. Magicoder: Source code is all you need. arXiv preprint arXiv:2312.02120, 2023.

[44] S. Xiao, Z. Liu, P. Zhang, and N. Muennighoff. C-Pack: Packaged resources to advance general Chinese embedding. arXiv preprint arXiv:2309.07597, 2023.

[45] J. Yang, C. E. Jimenez, A. Wettig, K. Lieret, S. Yao, K. Narasimhan, and O. Press. SWE-agent: Agent-computer interfaces enable automated software engineering. arXiv preprint arXiv:2405.15793, 2024.

[46] F. Zhang, B. Chen, Y. Zhang, J. Keung, J. Liu, D. Zan, Y. Mao, J.-G. Lou, and W. Chen. RepoCoder: Repository-level code completion through iterative retrieval and generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023. URL https://aclanthology.org/2023.emnlp-main.151.

[47] S. Zhou, U. Alon, F. F. Xu, Z. Jiang, and G. Neubig. DocPrompting: Generating code by retrieving the docs. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=ZTCxT2t2Ru.

# Appendix: Datasheets for Datasets

# A.1 Access to CODERAG-BENCH

We provide access to view and download all datasets, including our additional ground-truth document annotations, as well as all documents from the five retrieval sources, at https://huggingface.co/code-rag-bench. For each dataset or retrieval source, the corresponding Croissant metadata can be found via the Croissant tag button on the dataset's page. All code generation datasets we build upon are permissively licensed. There is no noticeable chance that our data contains personally identifiable or offensive content. The codebase for our retrieval-augmented code generation framework can be found at https://github.com/code-rag-bench/code-rag-bench. Overall, all necessary datasets, code, and evaluation procedures are accessible and documented on our main website, https://code-rag-bench.github.io/.

446
+ Author Statement The authors state that they bear all responsibility in case of violation of rights of the original datasets and retrieval sources. We confirm that the data is released under the CC-BY-SA 4.0 license. The authors plan to host the dataset and codebase with the above sources on Huggingface and GitHub, and will continue to provide the necessary maintenance to both.
447
+
448
+ # A.2 Dataset Documentation and Intended Uses
449
+
450
We provide detailed dataset documentation and explanations of the intended uses, following the datasheets for datasets [9] framework.
451
+
452
+ # A.3 Motivation
453
+
454
+ For what purpose was the dataset created? We create CODERAG-BENCH to provide a unified benchmark for retrieval-augmented code generation, encompassing various code generation tasks and retrieval sources, to facilitate research in this direction.
455
+
456
Who created the dataset and on behalf of which entity? Student researchers at Carnegie Mellon University, the University of Washington, and the University of Southern California created this dataset.
457
+
458
Who funded the creation of the dataset? Supervisors of this project, who are also professors at Carnegie Mellon University, funded the creation of this dataset.
459
+
460
+ # A.4 Composition
461
+
462
+ What do the instances that comprise the dataset represent? The dataset represents (i) different programming tasks that reflect the job of software developers, and (ii) various reference sources for solving or guiding software programming.
463
+
464
+ How many instances are there in total? Our dataset comprises 9k programming problems and 160k retrieval documents in total.
465
+
466
Does the dataset contain all possible instances or is it a sample of instances from a larger set? For the code generation datasets, CODERAG-BENCH contains all possible instances. For the retrieval sources, CODERAG-BENCH contains a high-quality subset of documents.
467
+
468
What data does each instance consist of? Each example in the code generation tasks consists of the problem statement, a reference solution, executable test cases, and other necessary metadata specific to individual tasks. Each example in the retrieval documents contains the textual content and other optional metadata specific to individual sources. All fields in both types are represented as text.
469
+
470
Is there a label or target associated with each instance? Each example in the code generation tasks is associated with canonical test cases, which serve as labels: a model-generated program must be executed and pass all test cases for its correctness to be verified. Examples in the retrieval documents do not have labels because they are collected to augment contexts through retrieval, not for end evaluation.
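To make the role of these test-case labels concrete, the following minimal sketch (our illustration, not the benchmark's actual evaluation harness) checks a generated program by executing it against assert-style test cases:

```python
# Minimal sketch of execution-based checking; not the benchmark's actual harness.
# A candidate program counts as correct only if it runs and passes every test case.
def passes_tests(candidate_code: str, test_code: str) -> bool:
    env: dict = {}
    try:
        exec(candidate_code, env)  # define the generated function(s)
        exec(test_code, env)       # assert-style tests raise AssertionError on failure
        return True
    except Exception:
        return False

generated = "def add(a, b):\n    return a + b\n"
tests = "assert add(1, 2) == 3\nassert add(-1, 1) == 0\n"
print(passes_tests(generated, tests))  # True
```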
471
+
472
+ Is any information missing from individual instances? No, we did not remove any information collected throughout the process.
473
+ ---
474
+ # Are relationships between individual instances made explicit?
475
+
476
Yes. We mark the examples that originate from each dataset or retrieval source by putting them into different dataset splits.
477
+
478
+ # Are there recommended data splits?
479
+
480
CODERAG-BENCH is built for evaluation purposes and therefore contains only a test split; we do not explicitly name the split 'test' since no 'train' or 'validation' sets exist.
481
+
482
+ # Are there any errors, sources of noise, or redundancies in the dataset?
483
+
484
For code generation tasks, we build upon existing high-quality datasets, and the authors manually assessed each dataset without noticing any errors among randomly sampled instances. For retrieval sources, we apply several layers of cleaning to the scraped texts, but they may not be perfectly standardized and some noise is possible.
485
+
486
+ # Is the dataset self-contained, or does it link to or otherwise rely on external resources?
487
+
488
Our dataset is self-contained and does not rely on external resources.
489
+
490
+ # Does the dataset contain data that might be considered confidential?
491
+
492
+ No, we collect documents from permissively licensed sources.
493
+
494
+ # Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?
495
+
496
No, we collect programming data, which by default should not involve offensive language.
497
+
498
+ # Does the dataset identify any subpopulations?
499
+
500
+ No, our dataset does not include metadata that are specifically related to any subpopulations (e.g., age, gender).
501
+
502
+ # Is it possible to identify individuals, either directly or indirectly, from the dataset?
503
+
504
+ No, our dataset is unlikely to contain user-specific information such as name or other personally identifiable data.
505
+
506
+ # Does the dataset contain data that might be considered sensitive in any way?
507
+
508
+ No.
509
+
510
# A.4.1 Collection Process
511
+
512
+ How was the data associated with each instance acquired?
513
+
514
+ The data is derived from existing datasets and online resources.
515
+
516
+ What mechanisms or procedures were used to collect the data?
517
+
518
+ We first automatically collect retrieval sources to construct a large-scale document pool. We then iteratively conduct manual verification and content refinement to ensure the data quality.
519
+
520
+ If the dataset is a sample from a larger set, what was the sampling strategy?
521
+
522
Only the StackOverflow posts and GitHub repositories are sampled; both are drawn randomly from the full set.
523
+
524
+ Who was involved in the data collection process and how were they compensated?
525
+
526
Graduate student researchers who authored this work were involved in the data collection process; they were compensated with authorship of this paper.
527
+
528
+ Over what timeframe was the data collected?
529
+
530
+ March 2024 to May 2024.
531
+
532
+ Were any ethical review processes conducted?
533
+
534
No. Our benchmark does not involve human annotation and is constructed mostly automatically, and the collected data are mainly about programming and raise no ethical concerns, so we did not find it necessary to conduct an ethical review.
535
+
536
# A.4.2 Preprocessing/cleaning/labeling
537
+
538
+ Was any preprocessing/cleaning/labeling of the data done?
539
+
540
+ For code generation datasets, we perform manual labeling of the ground-truth documents. For retrieval documents, we perform necessary cleaning to ensure the quality and clarity of these documents.
541
+
542
+ Was the "raw" data saved in addition to the preprocessed/cleaned/labeled data?
543
+
544
+ Yes, the raw data is accessible through its original sources, which we individually referenced in the main paper.
545
+
546
+ Is the software that was used to preprocess/clean/label the data available?
547
+
548
+ Yes, the software is provided by our codebase.
549
+ ---
550
+ # A.4.3 Uses
551
+
552
+ |Has the dataset been used for any tasks already?|No.|
553
+ |---|---|
554
|What (other) tasks could the dataset be used for?|In addition to the code generation and retrieval-augmented code generation tasks that we experiment with in this work, our dataset could potentially be extended to other programming tasks or programming-document retrieval tasks.|
555
+ |Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?|Our dataset is centered around code generation tasks, but this paradigm could be further extended to other programming-related tasks.|
556
+ |Are there tasks for which the dataset should not be used?|No.|
557
+
558
+ # A.4.4 Distribution
559
+
560
+ |Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created?|No.|
561
+ |---|---|
562
+ |How will the dataset be distributed?|By Huggingface and GitHub, see the URLs in §A.1.|
563
+ |When will the dataset be distributed?|The dataset will be distributed on Jun 6th, 2024.|
564
+ |Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)?|The dataset will be distributed under the Apache 2.0 license.|
565
+ |Have any third parties imposed IP-based or other restrictions on the data associated with the instances?|No.|
566
+ |Do any export controls or other regulatory restrictions apply to the dataset or to individual instances?|No.|
567
+
568
+ # A.4.5 Maintenance
569
+
570
+ |Who will be supporting/hosting/maintaining the dataset?|The authors of this work will be supporting/hosting/maintaining the dataset. All of the URLs are available at §A.1.|
571
+ |---|---|
572
+ |Is there an erratum?|No.|
573
+ |Will the dataset be updated?|No, at least no plans to do so at the submission time.|
574
+ |If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances?|No, the dataset is not related to people.|
575
+ |Will older versions of the dataset continue to be supported/hosted/maintained?|We are not planning to update the dataset and will continue to host the current version.|
576
|If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so?|Yes, we will make our datasets and codebase publicly available; anyone in the community is welcome to contribute or leave comments.|
577
+
578
+ # B Example Illustrations
579
+
580
+ # B.1 Example with Canonical Documents
581
+
582
To present our canonical document annotation (§2.3) more concretely, we illustrate examples with their annotated canonical documents. Figure 4 shows general-programming examples, one from HumanEval and one from MBPP. Figure 5 shows two open-domain coding examples with canonical library documentation, from DS-1000 and ODEX respectively.
583
+
584
+ # B.2 RACG with Helpful and Distracting Documents
585
+
586
Beyond the numerical results reported in the experiment sections, here we provide concrete examples that (i) benefit from RACG when relevant documents are retrieved, and (ii) are distracted by irrelevant retrieved documents, resulting in degraded performance.
587
+ ---
588
HumanEval example (left). The problem is the signature and docstring of `truncate_number`, and the annotated canonical document is its reference solution:

    def truncate_number(number: float) -> float:
        """ Given a positive floating point number, it can
        be decomposed into and integer part (largest
        integer smaller than given number) and decimals
        (leftover part always smaller than 1).

        Return the decimal part of the number.
        >>> truncate_number(3.5)
        0.5
        """
        return number % 1.0

MBPP example (right). The problem is "Write a python function to remove first and last occurrence of a given character from the string.", and the annotated canonical document is the reference solution:

    # Write a python function to remove first and last
    # occurrence of a given character from the string.
    def remove_Occ(s, ch):
        for i in range(len(s)):
            if (s[i] == ch):
                s = s[0 : i] + s[i + 1:]
                break
        for i in range(len(s) - 1, -1, -1):
            if (s[i] == ch):
                s = s[0 : i] + s[i + 1:]
                break
        return s

Figure 4: HumanEval (left) and MBPP (right) examples with annotated canonical solutions.
661
DS-1000 example (left). The annotated canonical documents are pandas library documentation entries:

    # pandas.reference.api.pandas.dataframe.groupby
    pandas.DataFrame.groupby
    DataFrame.groupby(by=None, axis=0, level=None, as_index=True, sort=True,
                      group_keys=True, squeeze=NoDefault.no_default,
                      observed=False, dropna=True)
    ...
    # pandas.reference.api.pandas.dataframe.squeeze
    pandas.DataFrame.squeeze
    DataFrame.squeeze(axis=None)
    Squeeze 1 dimensional axis objects into scalars. Series or DataFrames with
    a single element are squeezed to a scalar.

The DS-1000 problem asks "What is best way to achieve this? closest I got was with the zip function but haven't managed to make it work for more then one level (two columns)." given the code context:

    import pandas as pd
    df = pd.DataFrame({'name': ['A', 'A', 'B', 'C', 'B', 'A'],
                       'v1': ['A1', 'A2', 'B1', 'C1', 'B2', 'A2'],
                       'v2': ['A11', 'A12', 'B12', 'C11', 'B21', 'A21'],
                       'v3': [1, 2, 3, 4, 5, 6]})
    result = ...  # put solution in this variable
    # BEGIN SOLUTION

ODEX example (right). The annotated canonical document is the Python library documentation for `socket.socket.send`:

    # python.library.socket#socket.socket.send
    socket.send(bytes[, flags])
    Send data to the socket. The socket must be connected to a remote socket.
    The optional flags argument has the same meaning as for recv() above.
    Returns the number of bytes sent. Applications are responsible for checking
    that all data has been sent; if only some of the data was transmitted, the
    application needs to attempt delivery of the remaining data. For further
    information on this topic, consult the Socket Programming HOWTO.
    Changed in version 3.5: If the system call is interrupted and the signal
    handler does not raise an exception, the method now retries the system call
    instead of raising an InterruptedError exception (see PEP 475 for the
    rationale).

Figure 5: DS-1000 (left) and ODEX (right) examples with annotated canonical library documentation.
687
+
688
# C Additional Details about Retrieval Efficiency
689
+
690
For open-access models, we use the same single A100 GPU with 80GB memory, with a batch size of 64 for GIST-base and GIST-large, and a batch size of 8 for SFR-Mistral. For proprietary models, we estimate their efficiency using a batch size of 64. We then average the per-batch time over queries and documents. For Voyage-code, we apply a “dynamic-batching” technique that ensures the total number of tokens in a batch does not exceed the token limit. For both open and proprietary models, we define search efficiency as the time it takes to embed an individual query plus the time to calculate similarities. Note that both can be optimized by encoding all documents and all queries up front and then taking the dot product.
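As a minimal illustration of this measurement (a sketch under assumed placeholders, not our exact benchmarking code; the model name, corpus, and query are illustrative), the snippet below embeds a query with an open embedding model and scores pre-computed document embeddings with a single dot product:

```python
import time
import numpy as np
from sentence_transformers import SentenceTransformer

# Placeholder corpus and query; in the benchmark these come from the retrieval pool.
documents = ["def add(a, b): return a + b",
             "pandas.DataFrame.groupby(...): group DataFrame rows by one or more keys."]
query = "How do I group a pandas DataFrame by a column?"

model = SentenceTransformer("avsolatorio/GIST-large-Embedding-v0")  # example open model

# Document embeddings are computed once, in batches (batch size 64 in our runs).
doc_embs = model.encode(documents, batch_size=64, normalize_embeddings=True)

# Search time = time to embed the individual query + time to calculate similarities.
start = time.perf_counter()
query_emb = model.encode([query], normalize_embeddings=True)[0]
scores = doc_embs @ query_emb            # dot product over normalized embeddings
top = np.argsort(-scores)[:5]
elapsed = time.perf_counter() - start
print(f"search time: {elapsed:.4f}s, best document index: {top[0]}")
```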
691
+ ---
692
Helpful context (left). The question asks to split the string 'Words, words, words.' using the regex '(\W+)'; the reference solution is `re.split('(\W+)', 'Words, words, words.')`. The retrieved (helpful) context is:

    # Write a function to put spaces between words starting with capital
    # letters in a given string by using regex.
    import re

    def capital_words_spaces(str1):
        return re.sub(r"(\w)([A-Z])", r"\1 \2", str1)

With this context, the model prediction is correct:

    import re

    def f_2195340():
        """split string 'Words, words, words.' using a regex '(\W+)'"""
        text = 'Words, words, words.'
        return re.split(r'(\W+)', text)

Distracting context (right). The question asks to find the intersection data between series `s1` and series `s2`; the reference solution is `pd.Series(list(set(s1).intersection(set(s2))))`. The retrieved (distracting) context is the documentation of `skimage.segmentation.join_segmentations(s1, s2)`, which returns the join of two input segmentations (two voxels are in the same segment if and only if they are in the same segment in both S1 and S2), along with unrelated `Intersection(expr1, expr2, **extra)` and `set_extremes(e1, e2)` API entries. Misled by this context, the model prediction is incorrect:

    from skimage.segmentation import join_segmentations

    def f_18079563(s1, s2):
        """Find intersection data between series `s1` and series `s2`"""
        return join_segmentations(s1, s2)

Figure 6: RACG helps with relevant contexts (left) and hurts with distracting contexts (right).
757
+
758
The actual runtime for API models varies across organizations, which have different rate limits, and with the batch size. For this experiment, we set the maximum context length to match the maximum length of the original models. This notably increases the encoding latency of SFR-Mistral, which has a longer maximum context window than the smaller embedding models.
759
+
760
# D Result Reproduction
761
+
762
In Table 5 in §3, we are able to reproduce most results reported in the original papers, with only minor variance. Here we explain the differences in implementation and the (potential) reasons behind these small performance differences.
763
+
764
# Our approach
765
+
766
To keep the comparison fair, we use the same prompt for each dataset when evaluating all models. We also use zero-shot prompts without any additional instructions, i.e., we input only the original problem description of each example, to avoid unknown effects on model performance from different instructions and/or in-context examples.
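As an illustration of this setup (a minimal sketch, not our exact evaluation code; the model name is just one example of the models we evaluate), zero-shot generation passes only the problem description to the model and decodes greedily:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigcode/starcoder2-7b"  # illustrative; any evaluated model can be substituted
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Zero-shot prompt: only the original problem description, no instructions or few-shot examples.
problem = 'def truncate_number(number: float) -> float:\n    """Return the decimal part of the number."""\n'
inputs = tok(problem, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=False)  # greedy decoding
completion = tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(problem + completion)
```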
767
+
768
Given this setup, we next describe the differences between our prompts and those used by the original works, and how these differences may affect the results.
769
+
770
# StarCoder2
771
+
772
The StarCoder2 technical report [23] reports results on the HumanEval, MBPP, and DS-1000 datasets. On HumanEval, our reproduced result (31.7) is slightly lower than their number (35.4), possibly because the original paper additionally includes the test cases in the prompt, whereas in our basic NL-to-code setup no test cases are provided; this additional information may explain their higher results.
773
+
774
On MBPP, they adopt a subset of the dataset, i.e., the 399 out of 427 examples that have additional test cases populated by Liu et al. [22]. In contrast, we evaluate on the entire dataset, which likely causes the variance in results.
775
+
776
On DS-1000, the original paper samples 40 generations and reports the pass@1 rate, while we generate only one program with greedy decoding. This difference in decoding strategy may cause slight variance in the results.
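For reference, pass@1 from multiple samples is typically computed with the standard unbiased pass@k estimator, whereas under greedy decoding it reduces to the fraction of problems whose single generation passes all tests. A minimal sketch (our illustration, not the original papers' code):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate given n samples per problem, of which c pass all tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Sampling-based evaluation: e.g., n = 40 generations per problem; average pass_at_k(40, c, 1).
print(pass_at_k(40, 10, 1))                    # 0.25
# Greedy decoding (our setup): n = 1, so pass@1 is 1.0 if the single program passes, else 0.0.
print(pass_at_k(1, 1, 1), pass_at_k(1, 0, 1))  # 1.0 0.0
```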
777
+
778
# CodeGemma
779
+
780
The CodeGemma technical report [38] reports results on the HumanEval and MBPP datasets, but does not provide any details about the instructions, few-shot examples, or other parts of the prompt that they use. We were able to roughly reproduce their reported results, with pass@1 scores 3-5 points lower.
781
+
782
# CodeLlama
783
+
784
The CodeLlama technical report [34] reports results on the HumanEval and MBPP datasets. We were able to perfectly reproduce their results on HumanEval under the zero-shot setting. However, for the MBPP experiments they use 3-shot prompting, which could explain why our zero-shot results are 4 points lower in pass@1.
785
+ ---
786
+ # DeepSeekCoder
787
+
788
The DeepSeekCoder technical report [10] reports results on HumanEval and MBPP for the 7B-instruct-v1.5 and 33B-instruct models; the report additionally includes DS-1000 results for the 33B-instruct model. We could reproduce the original results on HumanEval and DS-1000, but obtained slightly worse results on MBPP because they used few-shot prompting, which is expected to outperform our zero-shot setup.
789
+
790
+ # Llama3
791
+
792
Since no technical report is available yet, we rely on the official blog post [15], which reports results on HumanEval without describing the prompt construction or the inference process. Our reproduced results are about 4 points lower than theirs.
793
+
794
15 https://ai.meta.com/blog/meta-llama-3/