---
license: apache-2.0
language:
- en
library_name: transformers
---

# OpenLLaMA-3B-Chat: Chat Model on top of Open Reproduction of LLaMA
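
The card sets `library_name: transformers` but does not yet include a usage snippet. Below is a minimal sketch; note that the repository id (`openlm-research/open_llama_3b_chat`) and the instruction-style prompt template are assumptions for illustration, not details confirmed by this card, so check the actual model repository before use:

```python
# Minimal usage sketch. The repository id and prompt template below are
# assumptions, not specified by this model card.
REPO_ID = "openlm-research/open_llama_3b_chat"  # hypothetical repository id


def build_prompt(user_message: str) -> str:
    """Format a single-turn chat prompt (assumed instruction-style template)."""
    return f"### Human: {user_message}\n### Assistant:"


def generate_reply(user_message: str, max_new_tokens: int = 128) -> str:
    """Load the model with transformers and generate a reply."""
    # Deferred import so the prompt helper is usable without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
    model = AutoModelForCausalLM.from_pretrained(REPO_ID)
    inputs = tokenizer(build_prompt(user_message), return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Keeping the prompt formatting in its own helper makes it easy to swap in the template the model was actually fine-tuned with.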

## Reference

If you find OpenLLaMA useful in your research or applications, please cite using the following BibTeX:

```
@software{Yao_FMEngine_Library_for_2023,
  author = {Yao, Xiaozhe},
  doi = {10.5281/zenodo.8314779},
  month = sep,
  title = {{FMEngine: Library for Training Foundation Models}},
  url = {https://github.com/eth-easl/fmengine},
  version = {0.0.1},
  year = {2023}
}
@software{openlm2023openllama,
  author = {Geng, Xinyang and Liu, Hao},
  title = {OpenLLaMA: An Open Reproduction of LLaMA},
  month = May,
  year = 2023,
  url = {https://github.com/openlm-research/open_llama}
}
```

```
@software{together2023redpajama,
  author = {Together Computer},
  title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
  month = April,
  year = 2023,
  url = {https://github.com/togethercomputer/RedPajama-Data}
}
```

```
@article{touvron2023llama,
  title={Llama: Open and efficient foundation language models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```

## Limitations and Bias

As with all language models, OpenLLaMA-3B-Chat may generate incorrect or biased content. It's important to keep this in mind when using the model.