Commit b02da34 by Pankaj Mathur (1 parent: d3e5979)

Update README.md

Files changed (1): README.md (+142, -0)

README.md:

---
license: mit
language:
- en
library_name: adapter-transformers
---

# Wizardlm Alpaca Dolly Orca Open_LLaMa_13b

An Open_LLaMA-13B model trained on custom explain-tuned datasets, created using instructions and inputs from the WizardLM, Alpaca, and Dolly-V2 datasets and applying the dataset construction approaches of the [Orca Research Paper](https://arxiv.org/abs/2306.02707).

# Dataset

We trained the [OpenLLaMA-3B model](https://github.com/openlm-research/open_llama) on a custom explain-tuned [Alpaca dataset](https://crfm.stanford.edu/2023/03/13/alpaca.html) (~52K examples) created using approaches from the [Orca Research Paper](https://arxiv.org/abs/2306.02707).

We leverage all 15 system instructions provided in the [Orca Research Paper](https://arxiv.org/abs/2306.02707) to generate the custom Alpaca dataset, in contrast to the vanilla instruction-tuning approach used by the original [Alpaca research paper](https://crfm.stanford.edu/2023/03/13/alpaca.html).

This helps the student model, [alpaca_orca_open_llama_3b](https://huggingface.co/psmathur/alpaca_orca_open_llama_3b), learn the ***thought*** process of the teacher model, ChatGPT (gpt-3.5-turbo-0301).

Please see the example usage below for how the **System** prompt is added before each *instruction*.
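
For illustration, here is a minimal sketch of that prompt layout, reusing the system message and example values from the usage code further down; it simply shows the System section prepended before the User instruction and optional Input:

```python
# Minimal sketch of the prompt layout used by this model (taken from the
# usage example below); the System message comes before the User instruction,
# and the Input section is included only when extra context exists.
system = 'You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.'
instruction = 'Use the given data to calculate the median.'
input_text = '[5,2,3,4,1]'

prompt = f"### System:\n{system}\n\n#\n\n### User:\n{instruction}\n\n### Input:\n{input_text}\n\n### Response:\n"
print(prompt)
```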

# Training

The training configuration is provided in the table below.

Training ran on 4x A600 (50 GB) GPUs, took around 20 hours, and cost $66 on [Lambda Labs](https://lambdalabs.com).

We used DeepSpeed with ZeRO-3 for parallel GPU training, writing our own fine-tuning scripts and leveraging some of the model training code provided by the excellent [OpenAlpaca repo](https://github.com/yxuansu/OpenAlpaca). A minimal configuration sketch is shown after the table below.

Here are some of the parameters used during training:

|Parameter|Value|
|:-------------:|:-------------:|
|*batch_size*|16|
|*train_micro_batch_size_per_gpu*|2|
|*gradient_accumulation_steps*|2|
|*Learning rate*|2e-5|
|*Max length*|1024|
|*Epochs*|3|
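
As a rough sketch only (not the authors' actual training script), a DeepSpeed ZeRO-3 setup matching the table above could look like the following; the base checkpoint name, the fp16 setting, and the AdamW optimizer choice are assumptions not stated in this card:

```python
import deepspeed
from transformers import LlamaForCausalLM

# Assumed base checkpoint (the Dataset section mentions OpenLLaMA-3B);
# swap for the 7B/13B variants as needed.
base_model = 'openlm-research/open_llama_3b'

# ZeRO-3 config mirroring the table: 2 (micro batch) x 2 (grad accum) x 4 GPUs = 16.
ds_config = {
    "train_batch_size": 16,
    "train_micro_batch_size_per_gpu": 2,
    "gradient_accumulation_steps": 2,
    "zero_optimization": {"stage": 3},          # shard params, grads, and optimizer states
    "fp16": {"enabled": True},                  # assumption; not stated in the card
    "optimizer": {"type": "AdamW", "params": {"lr": 2e-5}},
}

model = LlamaForCausalLM.from_pretrained(base_model)

# Wrap the model in a DeepSpeed engine; engine.backward()/engine.step()
# then replace loss.backward()/optimizer.step() in the training loop.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```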

# Example Usage

Below is an example of how to use [alpaca_orca_open_llama_3b](https://huggingface.co/psmathur/alpaca_orca_open_llama_3b).

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# change model_path between 3b, 7b or 13b
model_path = 'psmathur/alpaca_orca_open_llama_3b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto',
)


# generate text function
def generate_text(system, instruction, input=None):
    # The System message is always placed before the User instruction;
    # the Input section is included only when extra context is provided.
    if input:
        prompt = f"### System:\n{system}\n\n#\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
    else:
        prompt = f"### System:\n{system}\n\n#\n\n### User:\n{instruction}\n\n### Response:\n"

    # Tokenize the prompt and move it to the GPU.
    tokens = tokenizer.encode(prompt)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to('cuda')

    # Sampling settings and the maximum number of new tokens to generate.
    instance = {'input_ids': tokens, 'top_p': 1.0, 'temperature': 0.7, 'generate_len': 1024}

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance['generate_len'],
            use_cache=True,
            do_sample=True,
            top_p=instance['top_p'],
            temperature=instance['temperature'],
        )
    # Decode only the newly generated tokens, skipping the prompt.
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    print(f'[!] Response: {string}')


# same prompt as provided by Orca Research Paper
system = 'You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.'
instruction = 'Use the given data to calculate the median.'
input = '[5,2,3,4,1]'
generate_text(system, instruction, input)
```
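
If there is no extra context, the `input` argument can simply be omitted and the prompt skips its `### Input:` section (the `else` branch above); the instruction below is only an illustrative placeholder:

```python
# Reuses the model, tokenizer, and generate_text() defined above.
system = 'You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.'
instruction = 'List the first five prime numbers and explain how you found them.'
generate_text(system, instruction)
```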

**P.S. I am #opentowork and open to #collaboration; if you can help, please reach out to me at psmathur.public@gmail.com**

Next Goals:
1) Try more data, such as Dolly V2, WizardLM, and others (we are open to suggestions)
2) Try bigger OpenLLaMA models, 7B and 13B
3) Try better GPUs for training; we couldn't get 8x A100 (40GB), which I guess are in hot demand right now
4) Provide more options for a text generation UI (maybe https://github.com/oobabooga/text-generation-webui)
5) Provide a 4-bit GGML/GPTQ quantized model (maybe [TheBloke](https://huggingface.co/TheBloke) can help here)

Reference:

If you found [alpaca_orca_open_llama_3b](https://huggingface.co/psmathur/alpaca_orca_open_llama_3b) useful in your research or applications, please kindly cite it using the following BibTeX:

```
@misc{alpaca_orca_open_llama_3b,
  author = {Pankaj Mathur},
  title = {alpaca_orca_open_llama_3b: A custom explain tuned Alpaca Model Based On OpenLLaMA},
  year = {2023},
  publisher = {GitHub, HuggingFace},
  journal = {GitHub repository, HuggingFace repository},
  howpublished = {\url{https://github.com/pankajarm/alpaca_orca_open_llama_3b}, \url{https://huggingface.co/psmathur/alpaca_orca_open_llama_3b}},
}
```
```
@software{openlm2023openllama,
  author = {Xinyang Geng and Hao Liu},
  title = {OpenLLaMA: An Open Reproduction of LLaMA},
  month = {May},
  year = {2023},
  url = {https://github.com/openlm-research/open_llama}
}
```
```
@misc{openalpaca,
  author = {Yixuan Su and Tian Lan and Deng Cai},
  title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}},
}
```
```
@misc{alpaca,
  author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto},
  title = {Stanford Alpaca: An Instruction-following LLaMA model},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```