---
license: apache-2.0
---

# OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA

In this repo, we release a permissively licensed, open-source instruction-following model based on [OpenLLaMA](https://github.com/openlm-research/open_llama). This release provides a public preview of the 7B OpenAlpaca model, built on [the previewed version of OpenLLaMA](https://huggingface.co/openlm-research/open_llama_7b_700bt_preview), a 7B base model trained on 700 billion tokens. We provide the PyTorch weights of OpenAlpaca. Stay tuned for our forthcoming updates!

**[Project Page]** [https://github.com/yxuansu/OpenAlpaca](https://github.com/yxuansu/OpenAlpaca)

# Dataset and Training

We train our model on the [dolly 15k dataset](https://huggingface.co/datasets/databricks/databricks-dolly-15k) released by Databricks. The training configuration is provided in the table below. Training runs on 8 x A100 (40G) GPUs and takes around 30 minutes.

|Hyperparameter|Value|
|:-------------:|:-------------:|
|**Batch Size**|64|
|**Learning rate**|2e-5|
|**Epochs**|3|
|**Max length**|1024|
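
To make the data format concrete, here is a minimal sketch of how the dolly-15k records can be rendered into the instruction template used in this repo. This is not the released training script: it assumes the standard `datasets` library and the databricks-dolly-15k fields `instruction`, `context`, and `response`, and the `### Input:` variant for context-bearing examples is borrowed from the Stanford Alpaca convention.

```python
# Hedged sketch: render dolly-15k records into Alpaca-style prompts.
# Assumptions: dolly-15k fields are 'instruction', 'context', 'response';
# the "### Input:" template for context-bearing examples follows Stanford
# Alpaca and may differ from the authors' actual training format.
from datasets import load_dataset

dataset = load_dataset('databricks/databricks-dolly-15k', split='train')

def build_prompt(example):
    if example['context']:
        source = (f"### Instruction:\n{example['instruction']}\n\n"
                  f"### Input:\n{example['context']}\n\n### Response:")
    else:
        source = f"### Instruction:\n{example['instruction']}\n\n### Response:"
    return {'source': source, 'target': example['response']}

formatted = dataset.map(build_prompt)
print(formatted[0]['source'])
```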

# Example Usage

Below is an example of how to use OpenAlpaca.
```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# the previewed version of OpenAlpaca
model_path = r'openllmplayground/openalpaca_7b_700bt_preview'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(model_path).cuda()

# same prompt as provided in https://crfm.stanford.edu/2023/03/13/alpaca.html
instruction = r'What is an alpaca? How is it different from a llama?'
'''
instruction = r'Write an e-mail to congratulate new Stanford admits and mention that you are excited about meeting all of them in person.'
instruction = r'What is the capital of Tanzania?'
instruction = r'Write a well-thought out abstract for a machine learning paper that proves that 42 is the optimal seed for training neural networks.'
'''

# format the instruction into the Alpaca-style prompt (f-string, so the
# instruction and newlines are substituted)
prompt_no_input = f'### Instruction:\n{instruction}\n\n### Response:'
tokens = tokenizer.encode(prompt_no_input)
bos_token_id, eos_token_id = 1, 2  # see https://github.com/openlm-research/open_llama#preview-weights-release-and-usage
tokens = [bos_token_id] + tokens + [eos_token_id] + [bos_token_id]
tokens = torch.LongTensor(tokens[-1024:]).unsqueeze(0).cuda()
instance = {'input_ids': tokens,
            'top_k': 50,
            'top_p': 0.9,
            'generate_len': 128}

length = len(tokens[0])
with torch.no_grad():
    rest = model.generate(
        input_ids=tokens,
        max_length=length + instance['generate_len'],
        use_cache=True,
        do_sample=True,
        top_p=instance['top_p'],
        top_k=instance['top_k']
    )

output = rest[0][length:]
string = tokenizer.decode(output, skip_special_tokens=False)
string = string.replace('<s>', '').replace('</s>', '').strip()
print(f'[!] Generation results: {string}')
```
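
For convenience, the snippet above can be wrapped into a small helper so that several instructions can be run against the same loaded model. This is a hedged sketch rather than part of the OpenAlpaca codebase: the `answer` function is hypothetical and simply reuses the `model`, `tokenizer`, prompt template, and bos/eos ids from the example above.

```python
# Hypothetical convenience wrapper (not part of the OpenAlpaca codebase);
# assumes `model` and `tokenizer` are already loaded as in the snippet above.
def answer(instruction, generate_len=128, top_k=50, top_p=0.9):
    prompt = f'### Instruction:\n{instruction}\n\n### Response:'
    bos_token_id, eos_token_id = 1, 2
    tokens = [bos_token_id] + tokenizer.encode(prompt) + [eos_token_id] + [bos_token_id]
    tokens = torch.LongTensor(tokens[-1024:]).unsqueeze(0).cuda()
    with torch.no_grad():
        out = model.generate(
            input_ids=tokens,
            max_length=tokens.shape[1] + generate_len,
            use_cache=True,
            do_sample=True,
            top_p=top_p,
            top_k=top_k
        )
    text = tokenizer.decode(out[0][tokens.shape[1]:], skip_special_tokens=False)
    return text.replace('<s>', '').replace('</s>', '').strip()

print(answer('What is the capital of Tanzania?'))
```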

# License and Usage

OpenAlpaca is permissively licensed under the Apache 2.0 license and can be used freely for academic/commercial purposes.

# Contact

We would love to get feedback from the community. If you have any questions, please open an issue or contact us.

OpenAlpaca is developed by [Yixuan Su](https://yxuansu.github.io/)<sup>\*</sup>, [Tian Lan](https://github.com/gmftbyGMFTBY)<sup>\*</sup>, and [Deng Cai](https://jcyk.github.io/) (the first two authors<sup>\*</sup> contributed equally).

# Reference

If you find OpenAlpaca useful in your research or applications, please cite it using the following BibTeX:

```
@misc{openalpaca,
  author = {Yixuan Su and Tian Lan and Deng Cai},
  title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}},
}
```

```
@software{openlm2023openllama,
  author = {Xinyang Geng and Hao Liu},
  title = {OpenLLaMA: An Open Reproduction of LLaMA},
  month = {May},
  year = {2023},
  url = {https://github.com/openlm-research/open_llama}
}
```

```
@misc{alpaca,
  author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto},
  title = {Stanford Alpaca: An Instruction-following LLaMA model},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```

```
@article{touvron2023llama,
  title = {LLaMA: Open and Efficient Foundation Language Models},
  author = {Hugo Touvron and Thibaut Lavril and Gautier Izacard and Xavier Martinet and Marie{-}Anne Lachaux and Timoth{\'{e}}e Lacroix and Baptiste Rozi{\`{e}}re and Naman Goyal and Eric Hambro and Faisal Azhar and Aur{\'{e}}lien Rodriguez and Armand Joulin and Edouard Grave and Guillaume Lample},
  journal = {arXiv preprint arXiv:2302.13971},
  year = {2023}
}
```