Commit a578f82 (parent: d4af0b2) by juyongjiang: Update README.md

README.md (updated section):
This way, we gain the 19K high-quality instruction data of code generation.
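The records presumably follow the Alpaca-style instruction format (instruction / input / output) used by Code Alpaca. As a rough illustration only (the file path and exact field names below are assumptions, not taken from this README), a record could be inspected like this:

```
# Sketch: peek at one record of the 19K code-generation instruction data.
# "data/codeup_19k.json" and the instruction/input/output fields are assumed
# from the Alpaca-style format; check the CodeUp repo for the real file layout.
import json

with open("data/codeup_19k.json") as f:   # placeholder path
    data = json.load(f)

print(len(data))                  # expect roughly 19K records
sample = data[0]
print(sample["instruction"])      # natural-language task description
print(sample.get("input", ""))    # optional extra context
print(sample["output"])           # reference code solution
```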
## Training & Inference

Detailed instructions can be found at [https://github.com/juyongjiang/CodeUp](https://github.com/juyongjiang/CodeUp).
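For a quick sense of what inference looks like, here is a minimal sketch that loads a Llama-2 base model and attaches a CodeUp LoRA adapter with Hugging Face `transformers` and `peft`. The adapter path and the Alpaca-style prompt template are assumptions made for illustration; the repository above documents the actual checkpoints, prompt format, and training scripts.

```
# Minimal LoRA-inference sketch (not the official CodeUp script).
# Assumptions: the adapter path is a placeholder for trained CodeUp LoRA weights,
# and the prompt follows a generic Alpaca-style template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "meta-llama/Llama-2-7b-hf"      # gated base model; request access first
adapter_path  = "path/to/codeup-lora-adapter"   # placeholder for the LoRA weights

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_path)  # attach LoRA without merging
model.eval()

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that reverses a string.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```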
## Citation

If you use the data or code in this repo, please cite the repo.

```
@misc{codeup,
  author       = {Juyong Jiang and Sunghun Kim},
  title        = {CodeUp: A Multilingual Code Generation Llama2 Model with Parameter-Efficient Instruction-Tuning},
  year         = {2023},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{https://github.com/juyongjiang/CodeUp}},
}
```
Naturally, you should also cite the original LLaMA V1 [1] and V2 [2] papers, the Self-Instruct paper [3], the LoRA paper [4], the [Stanford Alpaca repo](https://github.com/tatsu-lab/stanford_alpaca), the [Alpaca-LoRA repo](https://github.com/tloen/alpaca-lora), the [Code Alpaca repo](https://github.com/sahil280114/codealpaca), and [PEFT](https://github.com/huggingface/peft).