---
language: en

datasets:
- 37 popular Python code repositories
- See princeton-nlp/SWE-bench train split
- See the `make_datasets` documentation on SWE-bench's [GitHub](https://github.com/princeton-nlp/SWE-bench/tree/main/inference/make_datasets) for details on formatting input.

---

# SWE-Llama

The SWE-Llama models are variants of the [CodeLlama](https://arxiv.org/abs/2308.12950) model, fine-tuned on software engineering tasks extracted from real-world GitHub issues and pull requests. They were introduced and evaluated on the SWE-bench benchmark in this [paper](https://arxiv.org/abs/2310.06770).

## Model Details

- **Architecture:** Transformer, based on the [CodeLlama](https://arxiv.org/abs/2308.12950) architecture
- **Parameters:** 7 billion for SWE-Llama-7b, 13 billion for SWE-Llama-13b
- **Objective:** Generating patches that resolve GitHub issues, conditioned on the issue description and code context
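
Since the model is conditioned on an issue description plus code context, a prompt must pack both into a single input string. The sketch below illustrates the general shape only; the tag names and file markers are hypothetical stand-ins, and the authoritative templates live in the `make_datasets` scripts linked above.

```python
# Hypothetical sketch of assembling an issue-plus-code-context prompt.
# The exact template used by SWE-bench is defined in its make_datasets
# scripts; this only illustrates the general shape of the input.

def build_prompt(issue_text: str, code_files: dict) -> str:
    """Concatenate an issue description with code context into one prompt."""
    context = "\n".join(
        f"[start of {path}]\n{src}[end of {path}]"
        for path, src in code_files.items()
    )
    return (
        "You will be provided with a GitHub issue and relevant code.\n"
        f"<issue>\n{issue_text}\n</issue>\n"
        f"<code>\n{context}\n</code>\n"
        "Generate a patch that resolves the issue.\n"
    )

prompt = build_prompt(
    "TypeError when calling foo() with no arguments",
    {"pkg/foo.py": "def foo(x):\n    return x + 1\n"},
)
```

At inference time the model's completion is then interpreted as a patch against the provided files.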

## Training Data

SWE-Llama was fine-tuned on 19,000 issues and pull requests collected from 37 popular Python code repositories on GitHub, disjoint from those used in SWE-bench.
23
+
24
+ ## Training Procedure
25
+
26
+ - Fine-tuned only the attention matrices using LoRA method
27
+ - Trained for 4 epochs with a batch size of 32
28
+ - Selected best checkpoint based on validation perplexity
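
Restricting LoRA to the attention matrices keeps the trainable parameter count tiny relative to the full model. The back-of-the-envelope calculation below uses the hidden size and layer count of the 7b CodeLlama configuration; the LoRA rank is an assumed illustrative value, not necessarily the one used for SWE-Llama.

```python
# Back-of-the-envelope sketch of why LoRA on the attention matrices is cheap.
# Hidden size 4096 and 32 layers match the 7b CodeLlama configuration;
# the rank of 16 is an assumption for illustration only.

def lora_params(hidden_size: int, rank: int, num_layers: int,
                matrices_per_layer: int = 4) -> int:
    """Trainable parameters when each square target matrix W (d x d) is
    augmented as W + B @ A, with A of shape (r, d) and B of shape (d, r)."""
    per_matrix = 2 * rank * hidden_size
    return per_matrix * matrices_per_layer * num_layers

full_model = 7_000_000_000  # rough total parameter count of the 7b model
trainable = lora_params(hidden_size=4096, rank=16, num_layers=32)
print(trainable)            # 16777216 trainable parameters
print(trainable / full_model)  # well under 1% of the full model
```

This is why the fine-tuning run fits in far less memory than full-parameter training would.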
29
+
30
+ ## Evaluation Results
31
+
32
+ When evaluated on the SWE-bench benchmark:
33
+
34
+ - SWE-Llama-7b achieved 3.0% issue resolution rate using oracle context retrieval
35
+ - SWE-Llama-13b achieved 4.0% issue resolution rate using oracle context retrieval
36
+
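
For clarity on the metric: an instance counts as resolved when the generated patch makes the repository's relevant tests pass, and the resolution rate is the percentage of benchmark instances resolved. A minimal sketch (with made-up pass/fail values):

```python
# Minimal sketch of the "% resolved" metric: the fraction of benchmark
# instances whose generated patch makes the repository's tests pass.
# The boolean flags below are made up for illustration.

def resolution_rate(resolved_flags: list) -> float:
    """Percentage of benchmark instances resolved."""
    if not resolved_flags:
        return 0.0
    return 100.0 * sum(resolved_flags) / len(resolved_flags)

flags = [True] * 4 + [False] * 96  # e.g. 4 resolved out of 100 instances
print(resolution_rate(flags))      # 4.0
```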

## BibTeX Entry

```tex
@misc{jimenez2023swebench,
      title={SWE-bench: Can Language Models Resolve Real-World GitHub Issues?},
      author={Carlos E. Jimenez and John Yang and Alexander Wettig and Shunyu Yao and Kexin Pei and Ofir Press and Karthik Narasimhan},
      year={2023},
      eprint={2310.06770},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```