# CODERAG-BENCH: Can Retrieval Augment Code Generation?

Zora Zhiruo Wang♠∗ Akari Asai♢∗ Xinyan Velocity Yu♡ Frank F. Xu♠ Yiqing Xie♠ Graham Neubig♠ Daniel Fried♠

♠Carnegie Mellon University ♢University of Washington ♡University of Southern California

https://code-rag-bench.github.io/

# Abstract

While language models (LMs) have proven remarkably adept at generating code, many programs are challenging for LMs to generate using their parametric knowledge alone. Providing external contexts such as library documentation can facilitate generating accurate and functional code. Despite the success of retrieval-augmented generation (RAG) in various text-oriented tasks, its potential for improving code generation remains under-explored. In this work, we conduct a systematic, large-scale analysis by asking: in what scenarios can retrieval benefit code generation models? and what challenges remain? We first curate a comprehensive evaluation benchmark, CODERAG-BENCH, encompassing three categories of code generation tasks, including basic programming, open-domain, and repository-level problems. We aggregate documents from five sources for models to retrieve contexts: competition solutions, online tutorials, library documentation, StackOverflow posts, and GitHub repositories. We examine top-performing models on CODERAG-BENCH by providing contexts retrieved from one or multiple sources. While notable gains are made in final code generation by retrieving high-quality contexts across various settings, our analysis reveals room for improvement—current retrievers still struggle to fetch useful contexts, especially when there is limited lexical overlap, and generators fail to improve with limited context lengths or limited abilities to integrate additional contexts. We hope CODERAG-BENCH serves as an effective testbed to encourage further development of advanced code-oriented RAG methods.

# 1 Introduction

The task of generating program code from natural language (NL) descriptions has rapidly advanced with language models (LMs) [5, 20, 19, 34]. While more advanced code generation models are constantly emerging [23, 43, 10], most of these models employ an NL-to-code generation paradigm without the ability to integrate additional context. However, it is often challenging to directly generate programs without additional information in many complex coding scenarios, e.g., when using unfamiliar libraries that models cannot easily memorize [47, 15]. Further, solely relying on parametric knowledge learned during training also makes it harder to adapt generation to new distributions during testing [2]. For example, models are unable to stay up-to-date with continuously-evolving public libraries [47], or private code bases that are not included in the pre-training data [46, 15].

Retrieval-augmented generation (RAG) [18, 11] retrieves and incorporates relevant documents at inference time. RAG reduces the need to include all knowledge within model parameters [2], leading to accuracy improvements in various scenarios [13], even without additional training [31, 25].

∗Equal contribution.

Preprint. Under review.
It also allows for flexible knowledge updates by swapping datastores—the large-scale data used during inference. Nevertheless, prior work often focuses on general-domain text-oriented generation tasks [30] using a general datastore such as Wikipedia [2]. While several works explore ways to incorporate library documents [47, 37] or files within a repository [46, 15], retrieval-augmented approaches for other types of coding problems and diverse retrieval sources remain largely under-explored.

Figure 1: Overview of CODERAG-BENCH. The benchmark covers eight coding tasks for code RAG (basic programming ×3, open-domain ×2, repository-level ×2, code retrieval ×1) and five document sources for retrieval (programming solutions, online tutorials, library documentation, StackOverflow posts, and GitHub repositories), with a shared datastore and retriever interface, evaluation against canonical documents, and execution-based end-to-end evaluation.

We propose a new evaluation benchmark, CODERAG-BENCH, to fill the gap and facilitate research on an alternative paradigm—retrieval-augmented code generation (RACG; §2). CODERAG-BENCH (depicted in Figure 1) integrates eight programming tasks of four categories: basic programming, open-domain coding, repository-level, and code retrieval problems. To analyze the effectiveness of diverse code-related datastores, we also collect a range of retrieval documents from five sources: programming solutions, tutorials from online platforms, Python library documentation, StackOverflow (SO) posts, and GitHub files. Further, for each problem, we manually annotate ground-truth documents from their corresponding sources as a reference for RACG. In summary, CODERAG-BENCH gathers 9k coding problems and 25M retrieval documents, empowering experiments with various setups, and provides reproducible and reliable retrieval and end-to-end execution-based evaluations for RACG.

Based on this benchmark, we conduct holistic evaluations in retrieval, generation, and RACG scenarios (§3). Although code generation models can benefit from ground-truth documents in multiple scenarios, current retrieval models struggle with selecting accurate documents, especially for open-domain tasks. Meanwhile, many code generation models see little gain due to their limited context capacity to consume retrieved documents, or their limited ability to do RACG effectively. Beyond canonical retrieval (i.e., from the ground-truth source), we also explore RACG with open retrieval, i.e., retrieving documents from various sources with different chunking strategies (§4). We find that each type of coding task can benefit from functionally relevant snippets from certain sources, and chunking documents to 200–800 tokens often gives the best results. We hope CODERAG-BENCH can serve as a testbed for future work exploring, analyzing, and improving RACG systems.
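To make the retrieve-then-generate paradigm studied in this paper concrete, the sketch below retrieves documents from a toy pool with lexical matching and prepends the top-ranked ones to the coding problem before invoking a code LM. It is an illustration only: the corpus, the `rank_bm25` dependency, and the prompt template are our own assumptions, not the benchmark's exact implementation (our experiments use pyserini and the generators described in §3.1).

```python
# Minimal retrieve-then-generate sketch (illustrative only; not the CODERAG-BENCH codebase).
# Assumes the third-party `rank_bm25` package; the toy corpus and prompt template are made up.
from rank_bm25 import BM25Okapi

# Toy retrieval pool: in CODERAG-BENCH this would be one of the five document sources.
corpus = [
    "pandas.DataFrame.groupby: group DataFrame rows using a mapper or by a Series of columns.",
    "socket.send(bytes): send data to a connected remote socket and return the number of bytes sent.",
    "Write a function that removes the first and last occurrence of a character from a string.",
]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

problem = "Group a pandas DataFrame by the 'name' column and aggregate the other columns into lists."
top_docs = bm25.get_top_n(problem.lower().split(), corpus, n=2)

# Prepend retrieved contexts to the problem, analogous to prepending top-5 documents in our experiments.
prompt = "\n".join(f"# Context: {doc}" for doc in top_docs) + f"\n# Task: {problem}\n"
print(prompt)  # Feed `prompt` to any code LM to complete the program.
```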
# 2 The CODERAG-BENCH Benchmark

The curation of CODERAG-BENCH (Figure 1) is motivated by the following three factors: (i) Diverse tasks: Code generation involves versatile tasks that operate on different levels (line, function, repository) and various domains (closed, open). (ii) Rigorous and reproducible evaluation: We provide high-quality annotation of ground-truth documents to enable retrieval evaluation, and execution-based evaluation for all code generation tasks to rigorously measure functional correctness. (iii) Unified interface: While current datasets utilize heterogeneous pipelines, our codebase provides a unified interface for retrieval, augmented generation, and evaluation.

In this section, we introduce the creation process of CODERAG-BENCH: programming problem integration (§2.1), retrieval source collection (§2.2), canonical document annotation (§2.3), and the evaluation pipeline (§2.4). Examples with canonical documents are available in §B.

# Table 1: Overview of the datasets in CODERAG-BENCH. CSN stands for CodeSearchNet.

|Type|Dataset|# Examples|Ground-Truth Docs|Evaluation|
|---|---|---|---|---|
|Basic programming|HumanEval|164|program solutions|execution|
|Basic programming|MBPP|500|program solutions|execution|
|Basic programming|LiveCodeBench|400|-|execution|
|Open-domain|DS-1000|1000|library docs|execution|
|Open-domain|ODEX|945|library docs, StackOverflow|execution|
|Repository-level|RepoEval (function)|373|GitHub repository|execution|
|Repository-level|SWE-bench-Lite|300|GitHub repository|execution|
|Code retrieval|CodeSearchNet-Py|22177|CSN functions|NDCG@10|

# 2.1 Programming Problems

We categorize existing Python-based coding datasets into four types: code retrieval, basic programming, open-domain, and repository-level problems. To ensure the diversity of datasets, we choose and unify multiple frequently adopted datasets for each category, as listed in Table 1.

# Basic programming problems

This category includes interview-style problems that mostly require Python built-in operations and pose algorithmic challenges. We select the two most widely used datasets, HumanEval and MBPP, which ask the model to complete a function from an NL problem description. However, due to limited public knowledge about model training data, it is unclear whether models suffer from data contamination on HumanEval and MBPP. Hence, we also include LiveCodeBench, whose problems are collected from coding websites after the training cutoff of the LMs we consider, to decrease the risk of contamination.

# Open-domain problems

Open-domain coding problems require Python libraries beyond the standard libraries used in basic programming problems. We adopt the DS-1000 and ODEX datasets, which cover data-science and general open-domain coding problems. DS-1000 collects data science problems with programs using seven common data-related libraries such as pandas and numpy. ODEX covers problems using a broader range of 79 libraries, such as web requests with requests and database operations with sqlalchemy.

# Repository-level coding problems

Beyond function-level coding, some problems require editing files in the context of an entire GitHub repository. We thus adopt RepoEval and SWE-bench for repository-level code generation and issue-solving tasks. We integrate all three splits of RepoEval but only report its function split, as it is the only split supporting execution-based evaluation. (The two other splits, API and line completion, are evaluated by lexical measures that have been shown to be ineffective in signifying functional correctness.) Notably, our codebase is the first to provide reproducible execution evaluation on RepoEval.
SWE-bench focuses on resolving GitHub issues by asking models to edit multiple files in a repository so that it passes the required test cases. However, due to reproducibility issues with the full dataset, we use SWE-bench-Lite, a 300-problem subset whose results can be reproduced, together with a packaged Docker container providing the pre-populated evaluation environment.

# Code retrieval problems

In addition to retrieval for augmenting generation, we adopt the Python split of CodeSearchNet (CSN) as a code retrieval task. CSN searches for the correct implementation of an NL query from a pool of functions collected from GitHub repositories. Instead of monitoring how generation changes with various retrieval results, CSN directly measures retrieval quality.

In this work we focus on Python-related tasks because Python is the most widely used programming language for benchmarking code generation. We leave extensions to other programming languages for future work.

Links referenced in the text:

- https://github.com/princeton-nlp/SWE-bench/issues
- https://www.swebench.com/lite.html
- https://github.com/OpenDevin/OpenDevin/tree/main/evaluation/swe_bench#opendevin-swe-bench-docker-image

# 2.2 Retrieval Sources

We collect retrieval documents from five commonly used resources for program developers, listed in Table 2. CODERAG-BENCH supports two retrieval setups: canonical retrieval, which retrieves documents from only the canonical datastore (§2.3), and open retrieval, which retrieves documents from any datastore.

# Table 2: Statistics of the retrieval sources: number of documents and average document length (in tokens).

|Resource|Size|Avg. Length|
|---|---|---|
|Programming solutions|1.1k|194.6|
|Online tutorials|79.4k|1502.5|
|Library documentation|34k|953.4|
|StackOverflow posts|23.5M|689.2|
|GitHub files|1.7M|5135.4|

Programming solutions: Following VoyageAI [40], we create one document from each basic programming problem that has a canonical solution (i.e., HumanEval and MBPP) by concatenating its NL problem and program solution.

Online tutorials: We collect tutorials from multiple websites including GeeksforGeeks, W3Schools, tutorialspoint, and Towards Data Science, via the raw HTML pages obtained from ClueWeb22 [29], a large-scale crawled web corpus. Each page contains code snippets and their text explanations, covering topics from basic programming techniques to advanced library usage. (Tutorial sources: https://geeksforgeeks.org, https://www.w3schools.com/, https://www.tutorialspoint.com/, and https://towardsdatascience.com.)

Library documentation: We collect the official documentation provided by devdocs.io for all Python libraries, following [47]. These documents can be especially useful for open-domain and repository-level problems that rely on library functions to realize complex functionality.

StackOverflow posts: StackOverflow (SO) is among the most frequently visited sites for developers. We collect all SO posts from the RedPajama-1T [7] stackexchange split. We treat each post as a retrievable document containing a question description, code responses, and textual explanations.

GitHub repositories: Lastly, we collect high-quality repositories from GitHub using the github split of RedPajama-1T [7], as developers often refer to popular repositories when writing their own programs. Following this practical paradigm, we enable LMs to retrieve files from other GitHub repositories as contexts for writing the current program.
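As an illustration of how such a datastore can be assembled, the sketch below builds a solution document by concatenating an NL problem with its canonical program (as we do for HumanEval and MBPP) and splits longer files into fixed-size token chunks. The tokenizer choice and chunk size here (tiktoken, 800 tokens) are illustrative assumptions rather than the exact preprocessing used in our pipeline.

```python
# Sketch of datastore construction (illustrative; chunk size and tokenizer are assumptions).
import tiktoken  # OpenAI's BPE tokenizer library, used here only to count tokens

enc = tiktoken.get_encoding("cl100k_base")

def solution_document(nl_problem: str, program: str) -> str:
    """Concatenate an NL problem and its canonical solution into one retrievable document."""
    return f"{nl_problem.strip()}\n{program.strip()}"

def chunk_document(text: str, max_tokens: int = 800) -> list[str]:
    """Split a long document (e.g., a GitHub file) into fixed-size token chunks."""
    tokens = enc.encode(text)
    return [
        enc.decode(tokens[i : i + max_tokens])
        for i in range(0, len(tokens), max_tokens)
    ]

doc = solution_document(
    "Write a python function to remove first and last occurrence of a given character from the string.",
    "def remove_Occ(s, ch): ...",
)
chunks = chunk_document(doc)  # short documents yield a single chunk
```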
# 2.3 Canonical Document Annotation

To enable reliable retrieval evaluation and to estimate the upper bound of a RACG system with a perfect retriever, it is crucial that all examples include canonical documents: the document(s) containing the supporting contexts needed to solve the programming problem. However, because RACG is under-explored, most existing datasets do not provide these canonical documents. Therefore, we annotate the canonical documents from their corresponding retrieval pool, as listed in Table 1.

Basic programming problems: The canonical document for each example in HumanEval and MBPP is the document we created in §2.2 in the programming solutions pool. Since LiveCodeBench does not provide solutions to its problems, we do not annotate canonical documents for it.

Open-domain problems: Since open-domain problems require libraries, we annotate the canonical library documentation for DS-1000 and ODEX examples. We first automatically parse out the library functions used in each program and find their corresponding documentation entries. Then, we manually verify the functions and remove incorrect ones. This yields an average of 1.4 and 1.2 entries for DS-1000 and ODEX, respectively.

Repository-level problems: We adopt canonical code from the original datasets as our canonical documents: 20-line code snippets of the missing functions in RepoEval, and the ground-truth edited files in SWE-bench. We obtain these from the completed local repositories released with the original datasets.

# 2.4 Evaluation Metrics

For retrieval, we evaluate NDCG, Precision, and Recall [39] and use the NDCG@10 percentage as our primary metric, following prior work [12]. We only evaluate the canonical retrieval setup. For code generation, we adopt the pass@k metric [5] to measure the execution correctness of programs. We evaluate the final RAG performance in both the canonical and open retrieval setups.

# Table 3: Retrieval performance (NDCG@10) on CODERAG-BENCH datasets. Canonical targets are problem solutions (HumanEval, MBPP), CSN functions (CSN), library documentation (DS-1000, ODEX), and in-repository files (RepoEval, SWE-bench-Lite). Embedding dimensions are shown in parentheses; "-" marks settings that are not evaluated.

|Method|HumanEval|MBPP|CSN|DS-1000|ODEX|RepoEval|SWE-bench-Lite|
|---|---|---|---|---|---|---|---|
|BM25|100.0|98.6|89.1|5.2|6.7|93.2|43.0|
|GIST-base (768)|98.0|98.0|89.9|12.0|12.1|81.2|46.8|
|GIST-large (1024)|100.0|98.9|89.6|13.6|28.0|82.9|47.8|
|BGE-base (768)|99.7|98.0|90.0|10.8|22.0|77.5|44.9|
|BGE-large (1024)|98.0|99.0|90.6|8.9|11.5|80.4|40.1|
|SFR-Mistral (4096)|100.0|99.0|-|19.3|37.1|83.8|62.7|
|Voyage-code (1536)|100.0|99.0|-|33.1|26.6|94.3|29.1|
|OpenAI-03 (1536)|100.0|98.9|-|18.2|16.5|93.0|43.3|

# 3 Canonical RACG: Experiments and Results

We conduct baseline experiments with multiple top-performing retrieval and generation models on CODERAG-BENCH (§3.1) using canonical data sources. We report results of document retrieval (§3.2), direct NL-to-code generation (§3.3), and end-to-end RACG using retrieved context (§3.4).

# 3.1 Experimental Setup

**Retrieval baselines** We adopt top-performing retrievers from three categories: sparse retrievers, dense retrievers with open checkpoints, and proprietary APIs. Concretely, we use BM25 [33] to represent sparse retrievers, which are often robust to domain shifts [39]. We use dense retrievers of varying sizes, namely BGE-base/large [44], GIST-base/large [35], and SFR-Embedding-Mistral [26], which are among the top of the MTEB leaderboard [28].
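To illustrate how such open dense retrievers are applied to our datastores, the sketch below embeds a query and a few candidate documents with the sentence-transformers library (which we use for all open checkpoints) and ranks candidates by similarity. The specific checkpoint and toy corpus are illustrative assumptions rather than our exact indexing code.

```python
# Minimal dense-retrieval sketch with an open embedding model (illustrative only).
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-base-en-v1.5")  # one of the open checkpoints we evaluate

docs = [
    "pandas.DataFrame.groupby: group DataFrame rows using a mapper or by a Series of columns.",
    "socket.send(bytes): send data to a connected remote socket.",
]
query = "How do I group a pandas DataFrame by a column?"

doc_emb = model.encode(docs, normalize_embeddings=True)
query_emb = model.encode([query], normalize_embeddings=True)

scores = (query_emb @ doc_emb.T)[0]      # cosine similarity via normalized dot product
ranked = sorted(zip(scores, docs), reverse=True)
print(ranked[0][1])                      # the top-ranked document feeds the generator's context
```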
We use two proprietary retrieval APIs, voyage-code-2 [40] and OpenAI text-embedding-3-small, which are the best options with reasonable cost in our preliminary study. We also explore reranking with BGE-reranker-base [44], which reranks the top-100 OpenAI-retrieved documents before feeding them into the generation models.

**Generation baselines** We adopt both code-specific LMs and strong general text-oriented LMs. For code-specific LMs, we use StarCoder2 [23], CodeGemma [38], CodeLlama [34], and DeepSeekCoder [10] in various sizes. For general text LMs, we include three top-performing models: Llama3 [27], Command-R [6], which is specially optimized for RAG, and the proprietary GPT models gpt-3.5-turbo-0125 and gpt-4. We use the instruct version of all generation models if available, since they often perform better than the base versions.

**Experimental setup and hyper-parameters** For retrieval, we implement BM25 retrievers using pyserini [21] with parameters k1 = 1.2 and b = 0.75, and use sentence-transformers [32] for all dense models with open checkpoints. For code generation, we use temperature t = 0.2, top_p = 0.95, and sample one response for each generation. We prepend the top-5 retrieved documents to the original problems, and do not include other unnecessary contexts such as few-shot demonstrations.

# 3.2 Retrieval Results

Table 3 shows the performance of diverse retrieval models across CODERAG-BENCH tasks.

**Comparison of lexical and neural retrievers** BM25 has been widely used as a primary retrieval model in recent RACG work [47, 15], yet comprehensive comparisons against diverse retrieval systems are often lacking. While prior studies indicate that neural retrieval systems often underperform BM25 baselines in out-of-domain scenarios [39], our analysis on CODERAG-BENCH reveals that dense embedding models frequently surpass BM25. We hypothesize that this is because many competitive retrieval models are trained on diverse tasks across various domains, including code data [1, 36], enhancing their robustness in code retrieval setups.

# Do Larger Retrieval Models Perform Better?

Among dense retrieval models, increasing model size often leads to better retrieval performance, similar to the trends observed in LMs [4]. In particular, GIST-large (340M) consistently outperforms GIST-base (110M), and SFR-Mistral (7B) achieves the best performance among all open sparse and dense models on all tasks, surpassing the OpenAI embedding on several tasks. However, it is important to note that this model also has the largest embedding dimension (4,096), exceeding the 1,536 dimensions of the proprietary retrieval systems.

# Efficiency

While larger retrieval models often outperform smaller ones, they also introduce significant costs. We analyze efficiency, focusing on:

1. Encoding latency: latency to encode documents offline
2. Search latency: latency to encode queries and calculate their similarities with documents
3. Model storage requirements
4. Index storage requirements

We conduct the efficiency analysis on sampled CodeSearchNet Python data: due to the costs, we randomly sample 10k queries and 100k documents, and use a batch size of 64 for encoding with API models; see experimental details in §C. As shown in Table 4, BM25 indexing and searching take only seconds to finish.
**Table 4: Efficiency analysis for document retrieval.**

|Method|Encoding|Search|Model|Index|
|---|---|---|---|---|
|BM25|0.15ms|0.02ms|-|141MB|
|GIST-base|3.7ms|9.7ms|440MB|307MB|
|GIST-large|13ms|18ms|1300MB|409MB|
|SFR-Mistral|316ms|113ms|14220MB|1638MB|
|Voyage-code|22ms|40ms|-|1172MB|
|OpenAI-03|31ms|47ms|-|1172MB|

# 3.3 Generation with and without Canonical Documents

We first estimate possible lower and upper bounds on RACG performance by testing generation:

1. without any retrieval, and
2. with ground-truth documents.

We report both results in Table 5. Compared to base generation without contexts, incorporating canonical contexts improves performance in most setups, and substantially so on basic programming problems.

**Table 5: Code generation pass@1 (i) without additional contexts (before the slash), and (ii) with ground-truth documents (after the slash).**

|Method|HumanEval|MBPP|LCB|DS-1000|ODEX|RepoEval (function)|SWE-bench (Lite)|
|---|---|---|---|---|---|---|---|
|StarCoder2-7B|31.7 / 94.5|10.4 / 34.8|1.5|29.2 / 30.0|14.6 / 17.5|26.5 / 42.0|0.0 / 0.7|
|CodeGemma-7B|49.4 / 77.4|48.0 / 52.2|21.5|20.1 / 19.8|18.9 / 18.2|24.7 / 32.2|0.0 / 0.3|
|CodeLlama-7B|34.8 / 87.2|23.8 / 42.8|13.5|21.8 / 26.1|35.8 / 41.0|24.1 / 38.3|0.3 / 0.0|
|CodeLlama-34B|42.7 / 84.8|51.2 / 88.0|5.8|34.7 / 37.0|34.9 / 38.0|29.8 / 42.6|0.0 / 0.0|
|DeepSeekCoder-7B|70.1 / 87.8|60.8 / 63.6|30.5|41.4 / 43.2|39.2 / 41.7|28.2 / 43.7|0.7 / 0.0|
|DeepSeekCoder-33B|78.0 / 95.7|61.0 / 92.2|33.8|40.2 / 40.1|28.0 / 28.9|32.4 / 45.3|0.3 / 0.7|
|Llama3-8B|57.9 / 65.2|35.6 / 52.8|2.8|28.9 / 31.1|37.4 / 33.7|26.0 / 43.2|0.7 / 0.3|
|Command-R|43.3 / 51.2|37.2 / 37.8|10.0|25.8 / 28.5|35.5 / 36.0|23.9 / 37.0|0.0 / 0.3|
|GPT-3.5-turbo|72.6 / 91.5|70.8 / 72.6|35.3|43.7 / 42.9|41.7 / 40.3|23.9 / 39.1|0.7 / 2.7|
|GPT-4|75.6 / 92.6|79.4 / 81.4|43.8|52.7 / 51.2|44.6 / 44.2|32.4 / 46.1|2.3 / 2.3|

On open-domain problems, most code-specific LMs see increases of up to 5.2 points, signifying that most models can effectively consume indirectly helpful documentation. Among the general LMs, Command-R appears to be the only model benefiting from contexts on both datasets, consistent with its superior RAG ability. However, the strongest GPT model does not gain from contexts, likely due to its prior familiarity with these libraries, as well-trained models are known to memorize popular facts and benefit little from retrieval for them [25, 16].

On repository-level problems, all models improve by 7.5–17.2 points with canonical snippets on RepoEval. SWE-bench-Lite, however, is much harder, with even the strongest GPT model achieving only 2.7% in the canonical setting. We conjecture that the difficulty comes from both complex multi-file editing and long contexts that stress the limits of most models. We leave the endeavor of integrating better inference-time strategies [45] with RAG to future work.

# 3.4 Retrieval-Augmented Code Generation

We now experiment with top-performing retrieval and generation models in the full RACG setting, which requires both retrieving documents and generating code conditioned on those documents. We select the best retrieval models of each type: BM25, GIST-large, and the OpenAI and Voyage embeddings.
For generation, we select (i) StarCoder2-7B, a weaker model that benefits the most from contexts; (ii) DeepSeekCoder-7B, one of the strongest open code LMs; and (iii) GPT-3.5-turbo, a top proprietary model.11 For each dataset, we retrieve the most relevant contexts from its canonical source marked in Table 1,12 and retrieve programming solutions for LiveCodeBench. Table 6 shows the results.

**Table 6: RACG pass@1 with top-5 documents retrieved by different retrievers from each dataset's canonical source.**

|Method|General| | |Open-Domain| |Repo-Level| |
|---|---|---|---|---|---|---|---|
| |HumanEval|MBPP|LCB|DS-1000|ODEX|RepoEval|SWE-bench-Lite|
|w/ StarCoder2-7B| | | | | | | |
|None|31.7|2.4|1.5|29.2|14.6|26.5|0.0|
|BM25|43.9|51.8|1.0|36.7|14.1|36.7|0.0|
|GIST-large|38.7|50.4|0.5|35.9|17.3|40.8|0.3|
|Voyage, code|39.0|52.6|0.3|36.0|15.3|45.8|0.3|
|OpenAI, small|39.0|52.6|1.5|35.5|15.9|51.2|0.0|
|OpenAI, rerank|34.8|53.4|0.5|33.4|14.1|53.9|0.3|
|Gold|94.5|34.8|-|30.0|17.5|42.0|0.7|
|w/ DeepseekCoder-7B-instruct| | | | | | | |
|None|70.1|60.8|30.5|41.4|39.2|28.2|0.7|
|BM25|68.9|60.0|31.8|36.6|37.8|37.3|0.0|
|GIST-large|66.3|56.6|33.8|35.9|34.9|44.5|0.3|
|Voyage, code|66.5|56.4|31.8|35.9|39.4|46.6|0.3|
|OpenAI, small|68.9|58.6|32.0|35.5|37.1|55.2|0.3|
|OpenAI, rerank|53.0|60.6|31.5|36.5|37.1|55.5|0.3|
|Gold|87.8|63.6|-|43.2|41.7|48.1|0.0|
|w/ GPT-3.5-turbo| | | | | | | |
|None|72.6|70.8|35.3|43.7|41.7|23.9|0.7|
|BM25|73.2|72.4|35.5|36.9|41.0|30.8|1.0|
|GIST-large|73.2|68.2|34.8|36.7|36.2|38.3|0.3|
|Voyage, code|75.0|66.8|34.5|37.4|41.0|43.2|0.7|
|OpenAI, small|73.8|68.4|35.8|36.9|40.3|48.0|0.3|
|OpenAI, rerank|64.0|72.6|33.5|37.4|40.5|49.6|0.3|
|Gold|91.5|72.6|-|42.9|40.3|39.1|2.7|

Basic programming problems: Most retrieved contexts help StarCoder2 generations. On MBPP, RACG even outperforms the canonical (gold) setup by 15.6–17.8 points. However, RACG does not improve DeepSeekCoder generations, which we observe is due to over-complicated and ungrammatical, repetitive generations when additional contexts are provided. This may indicate that DeepSeekCoder is not robust to extra contexts, and hence produces undesired behaviors when receiving different inputs. In comparison, GPT-3.5-turbo effectively improves with added contexts, showing its better ability to leverage augmented contexts.

Open-domain problems: StarCoder2 substantially benefits from retrieved library documentation on both datasets, while DeepSeekCoder only improves on ODEX, and GPT-3.5 on neither. We hypothesize that the less familiar a model is with the domain, the more it benefits from retrieved documents. Meanwhile, poor retrieval results can also impair the effectiveness of RACG.

11 We use deepseek-coder-7b/gpt-3.5-turbo instead of deepseek-coder-33b/gpt-4 due to resource limitations.

12 For HumanEval and MBPP, we exclude the canonical document for each query and retrieve the top 5 documents.

Repository-level problems: All models benefit from retrieved code snippets on RepoEval, and RACG with OpenAI embeddings can often surpass the canonical setup. While some retrieved files do not include the solution to the problem (as the canonical documents do), they may contain function definitions or usage examples that benefit the final generation, suggesting that the OpenAI embeddings understand the repository well and are thus able to retrieve implicitly supporting contexts. However, SWE-bench-Lite is too complex, and no RACG setup achieves a non-trivial result.
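The "OpenAI, rerank" rows in Table 6 rerank the top-100 first-stage candidates with BGE-reranker-base before passing the top documents to the generator (§3.1). Below is a minimal sketch of this reranking step, assuming the sentence-transformers CrossEncoder interface and a toy candidate list drawn from our document sources; it is an illustration, not our exact pipeline.

```python
# Sketch of cross-encoder reranking over retrieved candidates (illustrative only).
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("BAAI/bge-reranker-base")

query = "Print a log message to standard error."
candidates = [  # in practice, the top-100 documents returned by the first-stage retriever
    "def print_log(text, *colors): sys.stderr.write(sprint(text, *colors))",
    "pandas.DataFrame.squeeze: squeeze 1 dimensional axis objects into scalars.",
    "socket.send(bytes): send data to a connected remote socket.",
]

scores = reranker.predict([(query, doc) for doc in candidates])  # one relevance score per pair
top_docs = [doc for _, doc in sorted(zip(scores, candidates), reverse=True)][:5]
```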
# 3.5 How Many Documents to Augment?

Different models have varied context length limits and context utilization abilities. Therefore, we study how model performance varies when providing different numbers of documents in the context. We experiment with one representative dataset for each task category: HumanEval, since it is the most commonly used dataset; ODEX, for its broad domain coverage; and RepoEval, for its solvable difficulty. We compare RACG performance when providing the top-1, 2, 5, and 10 documents.

Figure 2: Comparing RACG performance of StarCoder2 and DeepseekCoder with various numbers of documents (top-k) on HumanEval, ODEX, and RepoEval.

As shown in Figure 2, including five documents yields the best results in most settings, except for StarCoder2 on RepoEval, which does best with 8 documents. Despite the drastic difference in the context limits of StarCoder2 (16k) and DeepseekCoder (4k), the sweet spot is consistently the top-5 documents. While adding a few documents may include helpful contexts, adding more low-ranked documents may introduce noise and deteriorate generation due to the imperfections of retrieval systems [41].

# 4 RACG with Open Retrieval

Besides retrieving documents from the canonical source, we explore RACG with open retrieval from all sources (§2.2). We experiment with three category-representative datasets (HumanEval, ODEX, and RepoEval), as in §3.5. We also experiment with mixed retrieval documents from all sources, where we aggregate the top-1 documents from all five sources as additional contexts.

# Can RACG Benefit Weaker Models?

We use the three top-performing retrievers and the StarCoder2 generation model, as in §3.4, to examine whether RACG helps weaker code LMs. For all experiments in this section, we only include the first 500 tokens of each retrieved document, which we show to be optimal on average in the ablation studies later in this section, and which satisfies the context limits of all models.

General programming: HumanEval

|Method|Program|Tutorial|Docs|SO|GitHub|All|
|---|---|---|---|---|---|---|
|BM25|97.6|27.4|29.3|32.9|30.5|97.6|
|GIST-large|67.1|34.8|26.7|32.3|32.9|69.1|
|OpenAI|97.6|29.3|24.4|36.0|31.1|97.6|

Table 7: Comparing five retrieval sources on HumanEval with StarCoder2. The no-retrieval baseline is 31.7, and the Program column corresponds to the canonical source.

Some retrieved tutorials are about the same programming problem as the HumanEval example, with code and detailed textual explanations, and hence could hint at or disclose the answer. Other retrieval sources do not often contain relevant contexts and thus do not improve generation. Surprisingly, generation with mixed documents performs as well as using the gold documents, suggesting that the model can discern and integrate the most useful content from a mixture of texts.

Open-domain: ODEX

|Method|Program|Tutorial|Docs|SO|GitHub|All|
|---|---|---|---|---|---|---|
|BM25|18.2|13.4|14.1|11.6|15.9|16.2|
|GIST-large|14.6|15.7|17.3|11.4|15.5|17.1|
|OpenAI|18.7|14.1|15.9|10.9|16.9|15.3|

Table 8: Comparing five retrieval sources on ODEX with StarCoder2.

Program solutions are the most helpful source, bringing gains of 3.8–4.3 points; GitHub files also improve results by 0.9–2.3 points.
Though the retrieved solutions/files are only sometimes functionally relevant to the ODEX examples, they can demonstrate the correct usage of libraries, such as regex from solutions and requests from GitHub files, thus guiding the generation to be more functionally correct. Similar to HumanEval, GIST-large is particularly good at retrieving tutorials, while BM25 and OpenAI embeddings find higher-quality program solutions, indicating their respective domain advantages.

Repository-level: RepoEval

|Method|Local|Program|Tutorial|Docs|SO|GitHub|Open|L+O|
|---|---|---|---|---|---|---|---|---|
|BM25|36.7|23.6|25.2|23.9|23.6|25.5|23.6|31.4|
|GIST-large|40.8|24.1|23.3|21.7|24.7|24.4|24.1|41.8|
|OpenAI|51.2|23.9|24.1|24.1|23.1|22.8|24.9|50.9|

Table 9: Comparing retrieval sources on RepoEval with StarCoder2; Local denotes code snippets retrieved from the local repository, Open denotes open retrieval over external sources, and L+O combines both.

All external sources are less useful than code snippets retrieved from the local repository. As the RepoEval task is code completion, it is crucial to understand the local code context, which cannot be obtained from external sources. When using both local and open-source contexts (L+O), models surpass the no-retrieval baseline, yet are still only comparable to Local, suggesting that more effort is needed to build systems that benefit from both sources.

# Exploring Optimal Chunking Strategies

Including multiple documents may exceed model context limits and hence impair RACG. Therefore, we explore various chunking strategies to better include retrieved contexts. Compared to the no-chunking baseline, we study (i) post-retrieval chunking, which takes the first N tokens of each document; (ii) post-retrieval chunking with reranking, which uses BGE-reranker-base (§3.1) to find the most relevant N-token chunk from each document; and (iii) pre-retrieval chunking, which chunks documents beforehand and retrieves N-token pieces directly.14 For (i), we compare using the first N tokens for N from 200 to 1500. As shown in Figure 3, most sources are best represented by their first 800 tokens; SO posts perform best with the first 200 tokens. We then perform (ii) reranking within this optimal range of 200–800 tokens, yet find that it greatly degrades the results, showing the limited utility of current rerankers. Lastly, Table 10 shows that (iii) pre-retrieval chunking achieves the highest scores on almost all document sources.

# Does RACG Help Stronger Models?

We have shown that RACG with open retrieval improves a relatively weaker model, StarCoder2. To see if this improvement generalizes to stronger models, we experiment with a series of top-performing proprietary models: GPT-4o, Claude-3-haiku/sonnet, and Gemini-1.5-flash/pro.

14 We do not chunk programming solutions since they are typically short (on average <200 tokens, as in Table 2).

# Basic programming: HumanEval

RACG can consistently improve the performance of GPT-4o and Claude-3-sonnet when leveraging all sources of documents. However, for weaker models such as Claude-3-haiku and Gemini-1.5-flash, RACG only helps when aggregating multiple sources yet falls short when grounding on one source (even the canonical solution source). Interestingly, the stronger Claude-3-sonnet performs worse than the weaker Claude-3-haiku without retrieval, but benefits from all retrieval sources and outperforms haiku with documents from the canonical programming-solution source, suggesting potentially better RAG ability. While the stronger Claude effectively benefits from additional contexts, the stronger Gemini-1.5-pro behaves similarly to its weaker counterpart and cannot do RACG effectively with non-canonical sources.
|Method|Baseline|Program|Tutorial|Docs|SO|GitHub|All|
|---|---|---|---|---|---|---|---|
|GPT-4o|75.6|94.5|90.2|90.9|91.5|84.8|95.1|
|Claude-3-haiku|74.4|77.4|77.4|71.3|67.7|73.2|82.9|
|Claude-3-sonnet|65.9|78.7|66.5|68.9|70.7|73.8|80.5|
|Gemini-1.5-flash|72.0|91.5|75.0|70.1|68.9|68.9|95.1|
|Gemini-1.5-pro|82.9|95.7|79.9|77.4|79.9|80.5|86.6|

# Open domain: ODEX

All models experience limited improvements when leveraging library documentation to complete the ODEX tasks, with the only exception that GPT-4o improves by 4.6 points when incorporating programming solutions into the context. As results degrade in most cases, we conduct a manual analysis to examine where most models fail. We find that most models tend to copy functions from the context, sometimes even overwriting the function being queried, thus failing all the test cases specific to the queried function. Further, possibly affected by the plethora of programs in context, models tend to generate over-complicated programs which often do not pass the test cases. In general, most models can be easily distracted or disturbed by additional contexts [41] and fail to conduct the designated code generation task, indicating much room for improvement for RACG.

|Method|Baseline|Program|Tutorial|Docs|SO|GitHub|All|
|---|---|---|---|---|---|---|---|
|GPT-4o|44.6|49.2|44.2|47.6|40.3|39.4|39.6|
|Claude-3-haiku|48.5|42.6|39.2|44.6|33.7|40.5|35.1|
|Claude-3-sonnet|41.0|37.6|35.3|38.0|34.2|42.4|38.0|
|Gemini-1.5-flash|50.6|48.3|46.7|46.2|41.9|44.9|43.1|
|Gemini-1.5-pro|57.2|54.4|45.6|51.0|46.5|39.6|46.0|

# Repository level: RepoEval

While GPT-4o can solve the RepoEval task with a reasonable success rate, all Claude models are challenged by the task and achieve less than 10% pass@1 in most scenarios. We find that the Claude models mostly respond with explanations of the incomplete input code instead of the to-be-completed code, even with proper instructions, possibly caused by some properties of their unknown training data. Gemini-1.5-flash also barely solves the task and often generates textual explanations; however, its stronger pro variant obtains roughly 10–25 point improvements, demonstrating stronger repository-level code completion abilities.

|Method|Baseline|Local|Program|Tutorial|Docs|SO|GitHub|All|L+E|
|---|---|---|---|---|---|---|---|---|---|
|GPT-4o|32.4|62.2|35.4|28.7|27.8|29.0|28.2|30.3|54.2|
|Claude-3-haiku|9.1|0.5|0.5|0.5|0.5|0.5|0.2|0.2|0.5|
|Claude-3-sonnet|0.5|0.5|0.5|0.5|0.5|0.5|0.5|0.5|0.5|
|Gemini-1.5-flash|1.3|16.9|4.0|2.1|3.2|2.1|3.2|2.7|11.8|
|Gemini-1.5-pro|10.5|39.1|15.1|13.4|15.8|15.3|11.8|12.3|33.0|

# 5 Related Work

**Code generation** Neural code generation has long been an important task [24], and increasingly strong code LMs have been created [34, 19, 10, 38] to solve various tasks [5, 17, 15]. However, most LMs generate code solely based on NL problems and model parametric knowledge, without using external programming sources (e.g., tutorials) or a RAG approach. To fill this gap and allow a systematic study of RACG, we orchestrate various datasets and retrieval sources to benchmark and analyze RACG systems.

**Retrieval-augmented generation (RAG)** RAG has been widely used for knowledge-intensive tasks [18, 11]. While previous studies often train retrieval and generation components from scratch or sequentially [13], recent work has demonstrated the effectiveness of retrieval-augmented approaches on top of off-the-shelf powerful LMs [31, 25].
However, most prior work focuses on text-centric tasks using general-domain corpora such as Wikipedia [2]. While several prior works leverage programming context retrieved from repositories [8, 45] or documentation [47], to our knowledge there are no prior studies analyzing the effectiveness of RACG across different coding tasks and knowledge sources. In text-centric tasks, unified benchmarks such as BEIR [39] and KILT [30] have been proposed to aggregate several text retrieval and generation tasks and facilitate rapid progress in this area [28]. Yet, we currently lack a large-scale benchmark or analysis for RACG. To provide a systematic analysis of coding tasks with various retrieval sources, we propose a unified benchmark and codebase to enable versatile analysis of RACG.

# 6 Conclusion

In this work, we propose CODERAG-BENCH, a benchmark for retrieval-augmented code generation with various coding tasks and retrieval sources. Through our experiments with top-performing retrieval and generation models, we show that retrieving external documents can greatly benefit code generation. However, current retrieval models struggle to accurately find helpful documents, and generation models have limited context capacity and RAG abilities, both leading to suboptimal RACG results. We hope CODERAG-BENCH can serve as a solid testbed to advance future endeavors in this direction.

# Acknowledgment

We thank Shuyan Zhou and Xinran Zhao for helpful discussions in the early stage of this project, and Saujas Vaduguru, Jing Yu Koh, Alex Xie, and Andy Liu for providing valuable feedback on the draft. Zora Zhiruo Wang is supported by the Carnegie Mellon University Presidential Fellowship. Yiqing Xie is supported by NSF grant DSES 2222762.

# References

[1] A. Asai, T. Schick, P. Lewis, X. Chen, G. Izacard, S. Riedel, H. Hajishirzi, and W.-t. Yih. Task-aware retrieval with instructions. In A. Rogers, J. Boyd-Graber, and N. Okazaki, editors, Findings of the Association for Computational Linguistics: ACL 2023, pages 3650–3675, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-acl.225. URL https://aclanthology.org/2023.findings-acl.225.

[2] A. Asai, Z. Zhong, D. Chen, P. W. Koh, L. Zettlemoyer, H. Hajishirzi, and W.-t. Yih. Reliable, adaptable, and attributable language models with retrieval. arXiv preprint arXiv:2403.03187, 2024.

[3] J. Austin, A. Odena, M. Nye, M. Bosma, H. Michalewski, D. Dohan, E. Jiang, C. Cai, M. Terry, Q. Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.

[4] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners. ArXiv, abs/2005.14165, 2020. URL https://api.semanticscholar.org/CorpusID:218971783.

[5] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.

[6] CohereAI. Command R. 2024. URL https://docs.cohere.com/docs/command-r.

[7] Together Computer. RedPajama: An open source recipe to reproduce LLaMA training dataset, 2023. URL https://github.com/togethercomputer/RedPajama-Data.

[8] Y. Ding, Z. Wang, W. U. Ahmad, H. Ding, M. Tan, N. Jain, M. K. Ramanathan, R. Nallapati, P. Bhatia, D. Roth, and B. Xiang. CrossCodeEval: A diverse and multilingual benchmark for cross-file code completion. In Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2023. URL https://openreview.net/forum?id=wgDcbBMSfh.
[9] T. Gebru, J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach, H. D. Iii, and K. Crawford. Datasheets for datasets. Communications of the ACM, 64(12):86–92, 2021.

[10] D. Guo, Q. Zhu, D. Yang, Z. Xie, K. Dong, W. Zhang, G. Chen, X. Bi, Y. Wu, Y. Li, et al. DeepSeek-Coder: When the large language model meets programming – the rise of code intelligence. arXiv preprint arXiv:2401.14196, 2024.

[11] K. Guu, K. Lee, Z. Tung, P. Pasupat, and M. Chang. Retrieval augmented language model pre-training. In International Conference on Machine Learning, pages 3929–3938. PMLR, 2020.

[12] G. Izacard, M. Caron, L. Hosseini, S. Riedel, P. Bojanowski, A. Joulin, and E. Grave. Unsupervised dense information retrieval with contrastive learning. Transactions on Machine Learning Research, 2022. ISSN 2835-8856. URL https://openreview.net/forum?id=jKN1pXi7b0.

[13] G. Izacard, P. Lewis, M. Lomeli, L. Hosseini, F. Petroni, T. Schick, J. A. Yu, A. Joulin, S. Riedel, and E. Grave. Few-shot learning with retrieval augmented language models. ArXiv, abs/2208.03299, 2022. URL https://api.semanticscholar.org/CorpusID:251371732.

[14] N. Jain, K. Han, A. Gu, W.-D. Li, F. Yan, T. Zhang, S. Wang, A. Solar-Lezama, K. Sen, and I. Stoica. LiveCodeBench: Holistic and contamination free evaluation of large language models for code. arXiv preprint arXiv:2403.07974, 2024.

[15] C. E. Jimenez, J. Yang, A. Wettig, S. Yao, K. Pei, O. Press, and K. R. Narasimhan. SWE-bench: Can language models resolve real-world github issues? In The Twelfth International Conference on Learning Representations, 2024. URL https://openreview.net/forum?id=VTF8yNQM66.

[16] N. Kandpal, H. Deng, A. Roberts, E. Wallace, and C. Raffel. Large language models struggle to learn long-tail knowledge. In International Conference on Machine Learning, pages 15696–15707. PMLR, 2023.

[17] Y. Lai, C. Li, Y. Wang, T. Zhang, R. Zhong, L. Zettlemoyer, W.-t. Yih, D. Fried, S. Wang, and T. Yu. DS-1000: A natural and reliable benchmark for data science code generation. In Proceedings of the 40th International Conference on Machine Learning, ICML'23. JMLR.org, 2023.

[18] P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih, T. Rocktäschel, S. Riedel, and D. Kiela. Retrieval-augmented generation for knowledge-intensive nlp tasks. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 9459–9474. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/6b493230205f780e1bc26945df7481e5-Paper.pdf.

[19] R. Li, L. B. Allal, Y. Zi, N. Muennighoff, D. Kocetkov, C. Mou, M. Marone, C. Akiki, J. Li, J. Chim, Q. Liu, E. Zheltonozhskii, T. Y. Zhuo, T. Wang, O. Dehaene, J. Lamy-Poirier, J. Monteiro, N. Gontier, M.-H. Yee, L. K. Umapathi, J. Zhu, B. Lipkin, M. Oblokulov, Z. Wang, R. Murthy, J. T. Stillerman, S. S. Patel, D. Abulkhanov, M. Zocca, M. Dey, Z. Zhang, U. Bhattacharyya, W. Yu, S. Luccioni, P. Villegas, F. Zhdanov, T. Lee, N. Timor, J. Ding, C. S. Schlesinger, H. Schoelkopf, J. Ebert, T. Dao, M. Mishra, A. Gu, C. J. Anderson, B. Dolan-Gavitt, D. Contractor, S. Reddy, D. Fried, D. Bahdanau, Y. Jernite, C. M. Ferrandis, S. Hughes, T. Wolf, A. Guha, L. V. Werra, and H. de Vries. StarCoder: may the source be with you! Transactions on Machine Learning Research, 2023. ISSN 2835-8856. URL https://openreview.net/forum?id=KoFOg41haE. Reproducibility Certification.
[20] Y. Li, D. Choi, J. Chung, N. Kushman, J. Schrittwieser, R. Leblond, T. Eccles, J. Keeling, F. Gimeno, A. D. Lago, T. Hubert, P. Choy, C. de Masson d'Autume, I. Babuschkin, X. Chen, P.-S. Huang, J. Welbl, S. Gowal, A. Cherepanov, J. Molloy, D. J. Mankowitz, E. S. Robson, P. Kohli, N. de Freitas, K. Kavukcuoglu, and O. Vinyals. Competition-level code generation with alphacode. Science, 378(6624):1092–1097, 2022. doi: 10.1126/science.abq1158. URL https://www.science.org/doi/abs/10.1126/science.abq1158.

[21] J. Lin, X. Ma, S.-C. Lin, J.-H. Yang, R. Pradeep, and R. Nogueira. Pyserini: A Python toolkit for reproducible information retrieval research with sparse and dense representations. In Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021), 2021. URL https://dl.acm.org/doi/10.1145/3404835.3463238.

[22] J. Liu, C. S. Xia, Y. Wang, and L. Zhang. Is your code generated by chatGPT really correct? Rigorous evaluation of large language models for code generation. In Thirty-seventh Conference on Neural Information Processing Systems, 2023. URL https://openreview.net/forum?id=1qvx610Cu7.

[23] A. Lozhkov, R. Li, L. B. Allal, F. Cassano, J. Lamy-Poirier, N. Tazi, A. Tang, D. Pykhtar, J. Liu, Y. Wei, et al. StarCoder 2 and The Stack v2: The next generation. arXiv preprint arXiv:2402.19173, 2024.

[24] S. Lu, D. Guo, S. Ren, J. Huang, A. Svyatkovskiy, A. Blanco, C. B. Clement, D. Drain, D. Jiang, D. Tang, G. Li, L. Zhou, L. Shou, L. Zhou, M. Tufano, M. Gong, M. Zhou, N. Duan, N. Sundaresan, S. K. Deng, S. Fu, and S. Liu. CodeXGLUE: A machine learning benchmark dataset for code understanding and generation. CoRR, abs/2102.04664, 2021.

[25] A. Mallen, A. Asai, V. Zhong, R. Das, D. Khashabi, and H. Hajishirzi. When not to trust language models: Investigating effectiveness of parametric and non-parametric memories. In A. Rogers, J. Boyd-Graber, and N. Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 9802–9822, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.acl-long.546. URL https://aclanthology.org/2023.acl-long.546.

[26] R. Meng, Y. Liu, S. R. Joty, C. Xiong, Y. Zhou, and S. Yavuz. SFR-Embedding-Mistral: Enhance text retrieval with transfer learning, 2024. URL https://blog.salesforceairesearch.com/sfr-embedded-mistral/.

[27] Meta. Introducing Meta Llama 3: The most capable openly available LLM to date. 2024. URL https://ai.meta.com/blog/meta-llama-3/.

[28] N. Muennighoff, N. Tazi, L. Magne, and N. Reimers. MTEB: Massive text embedding benchmark. arXiv preprint arXiv:2210.07316, 2022.

[29] A. Overwijk, C. Xiong, X. Liu, C. VandenBerg, and J. Callan. ClueWeb22: 10 billion web documents with visual and semantic information. arXiv preprint arXiv:2211.15848, 2022.

[30] F. Petroni, A. Piktus, A. Fan, P. Lewis, M. Yazdani, N. De Cao, J. Thorne, Y. Jernite, V. Karpukhin, J. Maillard, et al. KILT: A benchmark for knowledge intensive language tasks. arXiv preprint arXiv:2009.02252, 2020.

[31] O. Ram, Y. Levine, I. Dalmedigos, D. Muhlgay, A. Shashua, K. Leyton-Brown, and Y. Shoham. In-context retrieval-augmented language models. Transactions of the Association for Computational Linguistics, 11:1316–1331, 2023. doi: 10.1162/tacl_a_00605. URL https://aclanthology.org/2023.tacl-1.75.
[32] N. Reimers and I. Gurevych. Sentence-BERT: Sentence embeddings using siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, November 2019. URL https://arxiv.org/abs/1908.10084.

[33] S. E. Robertson and H. Zaragoza. The probabilistic relevance framework: BM25 and beyond. Found. Trends Inf. Retr., 3:333–389, 2009. URL https://api.semanticscholar.org/CorpusID:207178704.

[34] B. Roziere, J. Gehring, F. Gloeckle, S. Sootla, I. Gat, X. E. Tan, Y. Adi, J. Liu, T. Remez, J. Rapin, et al. Code Llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023.

[35] A. V. Solatorio. GISTEmbed: Guided in-sample selection of training negatives for text embedding fine-tuning, 2024. URL https://arxiv.org/abs/2402.16829.

[36] H. Su, W. Shi, J. Kasai, Y. Wang, Y. Hu, M. Ostendorf, W.-t. Yih, N. A. Smith, L. Zettlemoyer, and T. Yu. One embedder, any task: Instruction-finetuned text embeddings. In A. Rogers, J. Boyd-Graber, and N. Okazaki, editors, Findings of the Association for Computational Linguistics: ACL 2023, pages 1102–1121, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-acl.71. URL https://aclanthology.org/2023.findings-acl.71.

[37] H. Su, S. Jiang, Y. Lai, H. Wu, B. Shi, C. Liu, Q. Liu, and T. Yu. ARKS: Active retrieval in knowledge soup for code generation. arXiv preprint arXiv:2402.12317, 2024.

[38] C. Team. CodeGemma: Open code models based on Gemma. 2024. URL https://storage.googleapis.com/deepmind-media/gemma/codegemma_report.pdf.

[39] N. Thakur, N. Reimers, A. Rücklé, A. Srivastava, and I. Gurevych. BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021. URL https://openreview.net/forum?id=wCu6T5xFjeJ.

[40] VoyageAI. voyage-code-2: Elevate your code retrieval. 2024. URL https://blog.voyageai.com/2024/01/23/voyage-code-2-elevate-your-code-retrieval/.

[41] Z. Wang, J. Araki, Z. Jiang, M. R. Parvez, and G. Neubig. Learning to filter context for retrieval-augmented generation. arXiv preprint arXiv:2311.08377, 2023.

[42] Z. Wang, S. Zhou, D. Fried, and G. Neubig. Execution-based evaluation for open-domain code generation. In Findings of the Association for Computational Linguistics: EMNLP 2023. Association for Computational Linguistics, 2023. doi: 10.18653/v1/2023.findings-emnlp.89. URL https://aclanthology.org/2023.findings-emnlp.89.

[43] Y. Wei, Z. Wang, J. Liu, Y. Ding, and L. Zhang. Magicoder: Source code is all you need. arXiv preprint arXiv:2312.02120, 2023.

[44] S. Xiao, Z. Liu, P. Zhang, and N. Muennighoff. C-Pack: Packaged resources to advance general Chinese embedding. arXiv, 2023. URL https://arxiv.org/abs/2309.07597.

[45] J. Yang, C. E. Jimenez, A. Wettig, K. Lieret, S. Yao, K. Narasimhan, and O. Press. SWE-agent: Agent-computer interfaces enable automated software engineering. arXiv preprint arXiv:2405.15793, 2024.

[46] F. Zhang, B. Chen, Y. Zhang, J. Keung, J. Liu, D. Zan, Y. Mao, J.-G. Lou, and W. Chen. RepoCoder: Repository-level code completion through iterative retrieval and generation. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2023. URL https://aclanthology.org/2023.emnlp-main.151.
[47] S. Zhou, U. Alon, F. F. Xu, Z. Jiang, and G. Neubig. DocPrompting: Generating code by retrieving the docs. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=ZTCxT2t2Ru.

# Appendix: Datasheets for Datasets

# A.1 Access to CODERAG-BENCH

We provide access to view and download all datasets with our additional ground-truth document annotations, as well as all documents from the five retrieval sources, at https://huggingface.co/code-rag-bench. For each dataset or retrieval source, the corresponding Croissant metadata can be found via the Croissant tag button on the dataset's page. All code generation datasets we build upon are permissively licensed. There is no noticeable chance that our data contains personally identifiable or offensive content. The codebase for our retrieval-augmented code generation framework can be found at https://github.com/code-rag-bench/code-rag-bench. Overall, all necessary datasets, code, and evaluation procedures are accessible and documented on our main website https://code-rag-bench.github.io/.

Author Statement: The authors state that they bear all responsibility in case of violation of rights of the original datasets and retrieval sources. We confirm that the data is released under the CC-BY-SA 4.0 license. The authors plan to host the dataset and codebase with the above sources on Huggingface and GitHub, and will continue to provide the necessary maintenance for both.

# A.2 Dataset Documentation and Intended Uses

We provide detailed dataset documentation and explanations of the intended uses, using the Datasheets for Datasets [9] framework.

# A.3 Motivation

For what purpose was the dataset created? We created CODERAG-BENCH to provide a unified benchmark for retrieval-augmented code generation, encompassing various code generation tasks and retrieval sources, to facilitate research in this direction.

Who created the dataset and on behalf of which entity? Student researchers at Carnegie Mellon University, the University of Washington, and the University of Southern California created this dataset.

Who funded the creation of the dataset? The supervisors of this project, who are professors at Carnegie Mellon University, funded the creation of this dataset.

# A.4 Composition

What do the instances that comprise the dataset represent? The dataset represents (i) different programming tasks that reflect the job of software developers, and (ii) various reference sources for solving or guiding software programming.

How many instances are there in total? Our dataset comprises 9k programming problems and 160k retrieval documents in total.

Does the dataset contain all possible instances or is it a sample of instances from a larger set? For code generation datasets, CODERAG-BENCH contains all possible instances. For retrieval sources, CODERAG-BENCH contains a high-quality subset of documents.

What data does each instance consist of? Each example in the code generation tasks consists of the problem statement, reference solution, executable test cases, and other necessary metadata specific to individual tasks. Each example in the retrieval documents contains the textual content and other optional metadata specific to individual sources. All fields in both types are represented as text.

Is there a label or target associated with each instance?
Each example in the code generation tasks is associated with canonical test cases, which serve as labels: model-generated programs need to be executed and pass all test cases to verify their correctness. Examples in the retrieval documents do not have a label because they are collected to augment contexts through retrieval, not for end evaluation purposes.

Is any information missing from individual instances? No, we did not remove any information collected throughout the process.

# Are relationships between individual instances made explicit?

Yes. We mark the examples that originate from each dataset or retrieval source by putting them into different dataset splits.

# Are there recommended data splits?

CODERAG-BENCH is built for evaluation purposes and only contains a test split; we do not explicitly designate it as 'test' since 'train' and 'validation' sets do not exist.

# Are there any errors, sources of noise, or redundancies in the dataset?

For code generation tasks, we build upon existing high-quality datasets, and the authors conducted manual assessments of each dataset. We did not notice any errors among randomly sampled instances. For retrieval sources, we apply several layers of cleaning to the scraped texts, but they may not be perfectly standardized and some noise is possible.

# Is the dataset self-contained, or does it link to or otherwise rely on external resources?

Our dataset is self-contained and does not rely on external resources.

# Does the dataset contain data that might be considered confidential?

No, we collect documents from permissively licensed sources.

# Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?

No, we collect programming data, which by default should not involve offensive language.

# Does the dataset identify any subpopulations?

No, our dataset does not include metadata that is specifically related to any subpopulation (e.g., age, gender).

# Is it possible to identify individuals, either directly or indirectly, from the dataset?

No, our dataset is unlikely to contain user-specific information such as names or other personally identifiable data.

# Does the dataset contain data that might be considered sensitive in any way?

No.

# Collection Process

How was the data associated with each instance acquired? The data is derived from existing datasets and online resources.

What mechanisms or procedures were used to collect the data? We first automatically collect retrieval sources to construct a large-scale document pool. We then iteratively conduct manual verification and content refinement to ensure data quality.

If the dataset is a sample from a larger set, what was the sampling strategy? Only the StackOverflow posts and GitHub repositories are sampled, randomly, from the full set.

Who was involved in the data collection process and how were they compensated? Graduate student researchers who authored this work were involved in the data collection process. The students were compensated with authorship of this paper.

Over what timeframe was the data collected? March 2024 to May 2024.

Were any ethical review processes conducted? No. Because our benchmark does not involve human annotation and is mostly automatic, and the collected data are mainly about programming and raise no particular ethical concerns, we did not find it necessary to conduct ethical reviews.

# Preprocessing/cleaning/labeling

Was any preprocessing/cleaning/labeling of the data done?
For the code generation datasets, we perform manual labeling of the ground-truth documents. For the retrieval documents, we perform the necessary cleaning to ensure their quality and clarity.

Was the "raw" data saved in addition to the preprocessed/cleaned/labeled data? Yes, the raw data is accessible through its original sources, which we individually reference in the main paper.

Is the software that was used to preprocess/clean/label the data available? Yes, the software is provided in our codebase.

---

# A.4.3 Uses

Has the dataset been used for any tasks already? No.

What (other) tasks could the dataset be used for? In addition to the code generation and retrieval-augmented code generation tasks that we experiment with in this work, our dataset could potentially be extended to other programming tasks or programming-document retrieval tasks.

Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? Our dataset is centered around code generation tasks, but this paradigm could be further extended to other programming-related tasks.

Are there tasks for which the dataset should not be used? No.

# A.4.4 Distribution

Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? No.

How will the dataset be distributed? Via Huggingface and GitHub; see the URLs in §A.1.

When will the dataset be distributed? The dataset will be distributed on Jun 6th, 2024.

Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? The dataset will be distributed under the Apache 2.0 license.

Have any third parties imposed IP-based or other restrictions on the data associated with the instances? No.

Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? No.

# A.4.5 Maintenance

Who will be supporting/hosting/maintaining the dataset? The authors of this work will be supporting/hosting/maintaining the dataset. All of the URLs are available in §A.1.

Is there an erratum? No.

Will the dataset be updated? No; at least there are no plans to do so at submission time.

If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances? No, the dataset is not related to people.

Will older versions of the dataset continue to be supported/hosted/maintained? We are not planning to update the dataset and will continue to host the current version.

If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? Yes, we will make our datasets and codebase publicly available; anyone in the community is welcome to contribute or leave comments.

# B Example Illustrations

# B.1 Example with Canonical Documents

To present our canonical document annotation (§2.3) more concretely, we illustrate examples with their annotated canonical documents. Figure 4 shows two general-programming examples, one from HumanEval and one from MBPP, respectively. Figure 5 shows two open-domain coding examples with canonical library documentation, from DS-1000 and ODEX, respectively.
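These canonical-document annotations also serve as references for retrieval evaluation. As a minimal sketch of how they can be used (the field names and helper below are illustrative assumptions, not the benchmark's actual data schema or API), a retriever can be scored by checking whether the annotated canonical documents appear among its top-k results:

```python
def recall_at_k(retrieved_ids: list[str], canonical_ids: set[str], k: int = 10) -> float:
    """Fraction of annotated canonical documents found in the top-k retrieved results."""
    if not canonical_ids:
        return 0.0
    top_k = set(retrieved_ids[:k])
    return len(canonical_ids & top_k) / len(canonical_ids)

# Hypothetical example: an annotated problem and a ranked retrieval result.
example = {
    "task_id": "HumanEval/2",                        # illustrative identifier
    "canonical_doc_ids": {"humaneval-solution-2"},   # hypothetical annotation field
}
ranked_doc_ids = ["tutorial-981", "humaneval-solution-2", "so-post-1234"]
print(recall_at_k(ranked_doc_ids, example["canonical_doc_ids"], k=10))  # 1.0
```

NDCG- or precision-style metrics can be computed analogously by treating the annotated canonical documents as the relevant set.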
# B.2 RACG with Helpful and Distracting Documents

Beyond the aggregate numbers reported in the experiment sections, here we provide concrete examples that (i) benefit from RACG when relevant documents are retrieved, and (ii) are distracted by irrelevant retrieved documents, resulting in degraded performance.

---

Figure 4: HumanEval (left) and MBPP (right) examples with annotated canonical solutions.
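To make the helpful-versus-distracting contrast in §B.2 concrete, the sketch below shows one simple way retrieved documents might be prepended to a problem before generation; the prompt format and function names here are illustrative assumptions rather than the exact setup used in our experiments. With a relevant description of re.split in context, the model is steered toward the correct API, whereas an unrelated document (e.g., image-segmentation utilities) can pull the completion toward the wrong library.

```python
def build_racg_prompt(problem: str, docs: list[str], max_docs: int = 5) -> str:
    """Prepend retrieved documents to the problem statement (hypothetical prompt format)."""
    context = "\n\n".join(
        f"# Retrieved document {i + 1}:\n{doc}" for i, doc in enumerate(docs[:max_docs])
    )
    return f"{context}\n\n# Problem:\n{problem}\n\n# Solution:\n"

# Hypothetical usage on a problem from Figure 6.
problem = "Split string 'Words, words, words.' using a regex '(\\W+)'"
helpful_doc = "re.split(pattern, string): split string by the occurrences of pattern."
prompt = build_racg_prompt(problem, [helpful_doc])
# completion = code_lm.generate(prompt)  # stand-in for any code generation model
```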
Figure 5: DS-1000 (left) and ODEX (right) examples with annotated canonical library documentation.

# C Additional Details about Retrieval Efficiency

For open-access models, we use the same single A100 GPU with 80GB of memory, with a batch size of 64 for GIST base and large, and 8 for SFR-Mistral. For proprietary models, we estimate their efficiency using a batch size of 64. We then average the per-batch time over queries and documents. For Voyage-code, we apply a "dynamic batching" technique that ensures the total number of tokens in a batch does not exceed the token limit. For both open and proprietary models, we define search efficiency as the time it takes to embed an individual query plus the time to calculate similarities. Note that the time for both can be optimized by tokenizing all documents and all queries in advance and then taking the dot product.

---

Figure 6: RACG helps with relevant contexts (left) and hurts with distracting contexts (right).

The actual runtime for API models varies across organizations with different rate limits and batch sizes. For this experiment, we set the maximum context length to match the maximum length of the original models. This notably increases the encoding latency of SFR-Mistral, which has a longer maximum context window than the smaller embedding models.

# Result Reproduction

In Table 5 in §3, we are able to reproduce most results reported in the original papers, with minor variances. Here we explain the differences in implementation and the (potential) reasons for these small performance variances.

Our approach. To keep the comparison fair, we use the same prompt for each dataset when evaluating all models. We use zero-shot prompts without any additional instructions, i.e., we only input the original problem description of each example, to avoid unknown effects on model performance from different instructions and/or in-context examples. Given this setup, we next describe how the prompts used by the original works differ from ours and how these differences may affect the results.

StarCoder2. The StarCoder2 technical report [23] reports results on the HumanEval, MBPP, and DS-1000 datasets.
On HumanEval, our reproduced result (31.7) is slightly lower than their reported number (35.4), possibly because the original paper additionally includes the test cases in the prompt, whereas our basic NL-to-code setup provides no test cases; this extra information may explain their higher results. On MBPP, they adopt a subset of the dataset, i.e., the 399 out of 427 examples that have additional test cases populated by Liu et al. [22], whereas we evaluate on the entire dataset, which likely causes the variance in results. On DS-1000, the original paper samples 40 generations per problem and reports the pass@1 rate, while we generate a single program with greedy decoding (a sketch of the sampling-based pass@k estimator is given at the end of this section). This difference in decoding strategy may cause slight variance in the results.

CodeGemma. The CodeGemma technical report [38] reports results on the HumanEval and MBPP datasets, but does not provide details about the instructions, few-shot examples, or other parts of the prompt used. We were able to roughly reproduce their reported results, with pass@1 about 3-5 points lower.

CodeLlama. The CodeLlama technical report [34] reports results on the HumanEval and MBPP datasets. We were able to exactly reproduce their results on HumanEval under the zero-shot setting. However, for MBPP they use 3-shot prompting, which could explain why our zero-shot results are 4 points lower in pass@1.

---

DeepSeekCoder. The DeepSeekCoder technical report [10] reports results on HumanEval and MBPP for the 7B-instruct-v1.5 and 33B-instruct models, and additionally reports DS-1000 results for the 33B-instruct model. We could reproduce the original results on HumanEval and DS-1000, but obtained slightly worse results on MBPP because they use few-shot prompting, which is expected to outperform our zero-shot setup.

Llama3. Since no technical report is available yet, the official blog post (https://ai.meta.com/blog/meta-llama-3/) reports results on HumanEval, without any description of the prompt construction or inference process. Our reproduced results are about 4 points lower than their reported results.
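For reference, the sampling-based pass@1 numbers discussed above are typically computed with the commonly used unbiased pass@k estimator shown below (a minimal sketch; the exact evaluation scripts of each technical report may differ). Greedy decoding corresponds to the degenerate case n = 1, where pass@1 simply indicates whether the single generated program passes all tests.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples, drawn without
    replacement from n generations of which c are correct, passes the tests."""
    if n - c < k:
        return 1.0
    return 1.0 - math.prod((n - c - i) / (n - i) for i in range(k))

# E.g., sampling 40 generations with 12 correct vs. a single greedy generation that passes.
print(pass_at_k(n=40, c=12, k=1))  # 0.30
print(pass_at_k(n=1, c=1, k=1))    # 1.0
```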