Update README.md
README.md (changed)
@@ -44,7 +44,7 @@ We use state-of-the-art [Language Model Evaluation Harness](https://github.com/E
 
 `garage-bAInd/Platypus2-70B` was trained using the STEM and logic-based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
 
-Please see our [paper](https://
+Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.
 
 ### Training Procedure
 
@@ -91,19 +91,29 @@ Llama 2 and fine-tuned variants are a new technology that carries risks with use
 Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
 
 ### Citations
-
+```bibtex
+@article{platypus2023,
+    title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
+    author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
+    journal={arXiv preprint arXiv:2308.07317},
+    year={2023}
+}
+```
 ```bibtex
 @misc{touvron2023llama,
     title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
-    author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov
-
+    author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and others},
+    year={2023},
+    eprint={2307.09288},
+    archivePrefix={arXiv},
 }
 ```
 ```bibtex
-@
-
-
-
-
+@inproceedings{hu2022lora,
+    title={Lo{RA}: Low-Rank Adaptation of Large Language Models},
+    author={Edward J Hu and Yelong Shen and Phillip Wallis and Zeyuan Allen-Zhu and Yuanzhi Li and Shean Wang and Lu Wang and Weizhu Chen},
+    booktitle={International Conference on Learning Representations},
+    year={2022},
+    url={https://openreview.net/forum?id=nZeVKeeFYf9}
+}
 ```
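For orientation, the model and dataset named in the diff above can be loaded with the standard Hugging Face libraries. This is a minimal sketch, not part of the commit: the repo IDs come from the README text, while the dtype and device settings are illustrative assumptions.

```python
# Minimal sketch -- illustrative only, not part of the commit above.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# The STEM/logic instruction data the README says the model was trained on.
open_platypus = load_dataset("garage-bAInd/Open-Platypus", split="train")
print(open_platypus[0].keys())

# The fine-tuned model itself. At 70B parameters it will not fit on a single
# consumer GPU; device_map="auto" (via accelerate) shards it across devices.
tokenizer = AutoTokenizer.from_pretrained("garage-bAInd/Platypus2-70B")
model = AutoModelForCausalLM.from_pretrained(
    "garage-bAInd/Platypus2-70B",
    torch_dtype="auto",
    device_map="auto",
)
```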
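The LoRA entry is presumably cited because the Platypus models are LoRA fine-tunes of Llama 2 (the linked paper describes the procedure). For context, attaching LoRA adapters to a causal LM with the `peft` library looks roughly like the sketch below; the rank, alpha, and target modules are placeholder values, not the paper's settings, and a 7B base model is used only to keep the example small.

```python
# Illustrative LoRA setup sketch -- hyperparameters are placeholders,
# not the settings used to train Platypus2-70B.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```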