Update README.md
README.md CHANGED
````diff
@@ -43,7 +43,7 @@ We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness)
 
 `garage-bAInd/Platypus2-70B` trained using STEM and logic based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
 
-Please see our [paper](https://
+Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.
 
 ### Training Procedure
 
````
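The hunk context above cites EleutherAI's Language Model Evaluation Harness as the benchmark runner. As a hedged illustration only (none of this appears in the commit), scoring the model through the harness's Python API might look roughly like the sketch below; the backend name, task list, and `simple_evaluate` call are assumptions tied to recent harness versions and may differ in older ones.

```python
# Rough sketch, not from this repo: assumes lm-eval >= 0.4 and hardware
# able to host a 70B checkpoint (the dtype choice below is illustrative).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=garage-bAInd/Platypus2-70B,dtype=bfloat16",
    tasks=["arc_challenge", "hellaswag"],  # example tasks, not the full leaderboard set
    batch_size="auto",
)
print(results["results"])  # per-task metric dictionary
```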
````diff
@@ -90,21 +90,29 @@ Llama 2 and fine-tuned variants are a new technology that carries risks with use
 Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
 
 ### Citations
-
+```bibtex
+@article{platypus2023,
+    title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
+    author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
+    booktitle={arXiv preprint arxiv:2308.07317},
+    year={2023}
+}
+```
 ```bibtex
 @misc{touvron2023llama,
     title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
-    author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov
-    year={2023},
+    author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov year={2023},
     eprint={2307.09288},
     archivePrefix={arXiv},
 }
 ```
 ```bibtex
-@
-
-
-
-
+@inproceedings{
+    hu2022lora,
+    title={Lo{RA}: Low-Rank Adaptation of Large Language Models},
+    author={Edward J Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
+    booktitle={International Conference on Learning Representations},
+    year={2022},
+    url={https://openreview.net/forum?id=nZeVKeeFYf9}
 }
 ```
````
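The commit also fills in the previously empty third citation with the LoRA paper, which suggests the Platypus refinement was done with low-rank adapters. For readers unfamiliar with the technique, here is a minimal, hypothetical sketch of attaching LoRA adapters to a Llama 2 base model with the `peft` library; the rank, scaling, and `target_modules` values are illustrative placeholders, not the authors' training settings.

```python
# Illustrative sketch only: hyperparameters are placeholders, not the
# configuration used to train Platypus2-70B.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-70b-hf")

config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the adapter weights require gradients
```

The design point LoRA exploits is that fine-tuning updates can be well approximated by low-rank matrices, so only a small fraction of the model's parameters needs gradients, which is what makes refinement of a 70B model comparatively quick and cheap.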