Tasks: Text2Text Generation
Modalities: Text
Languages: code
Size: 1K - 10K
Tags: code-generation
Update README.md
README.md CHANGED

@@ -23,7 +23,6 @@ task_ids:
 
 ## Dataset Description
 
-- **Homepage:** https://github.com/openai/human-eval-infilling
 - **Repository:** https://github.com/openai/human-eval-infilling
 - **Paper:** https://arxiv.org/pdf/2207.14255
 
@@ -53,7 +52,8 @@ The single-line, multi-line, random span infilling and its light version have 10
 
 ## Citation
 
-
+```
+@article{bavarian2022efficient,
 title={Efficient Training of Language Models to Fill in the Middle},
 author={Bavarian, Mohammad and Jun, Heewoo and Tezak, Nikolas and Schulman, John and McLeavey, Christine and Tworek, Jerry and Chen, Mark},
 journal={arXiv preprint arXiv:2207.14255},
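The dataset targets fill-in-the-middle code infilling as described in the cited paper (arXiv:2207.14255). As a minimal sketch of the task format, the snippet below assembles a prefix-suffix-middle (PSM) style prompt from a prefix and suffix; the sentinel strings and the helper name are illustrative assumptions, not the actual special tokens of any particular tokenizer.

```python
# Sketch of the PSM (prefix-suffix-middle) prompt layout from the FIM paper.
# The sentinel strings below are placeholders for illustration only.
PRE, SUF, MID = "<PRE>", "<SUF>", "<MID>"

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble an infilling prompt: the model is expected to generate
    the missing middle span after the <MID> sentinel."""
    return f"{PRE}{prefix}{SUF}{suffix}{MID}"

# Example: ask a model to infill the body of a small function.
prompt = build_fim_prompt("def add(a, b):\n    return ", "\n")
```

In an evaluation loop, each benchmark task would supply the prefix and suffix, and the model's completion after the final sentinel would be spliced between them before running the task's tests.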