Update README.md
README.md CHANGED
@@ -67,7 +67,20 @@ another model outlines a plan, similar to Reflexion prompting,
 2) Lazy: Informal instructions resemble typical user queries
 for LLMs in code generation.
 
-For more information and results see [our paper](https://
+For more information and results see [our paper](https://arxiv.org/abs/2312.12450).
+
+## Citation
+If you use our work, please cite our paper as such:
+```
+@misc{cassano2023edit,
+  title={Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions},
+  author={Federico Cassano and Luisa Li and Akul Sethi and Noah Shinn and Abby Brennan-Jones and Anton Lozhkov and Carolyn Jane Anderson and Arjun Guha},
+  year={2023},
+  eprint={2312.12450},
+  archivePrefix={arXiv},
+  primaryClass={cs.SE}
+}
+```
 
 ## How To Evaluate
 All the code for evaluating the benchmark can be found in our [GitHub repository](https://github.com/nuprl/CanItEdit).