---
language: en
thumbnail: >-
  https://raw.githubusercontent.com/SpeedStar1O1/discord-bots/main/VergilpluseGPT2.png?token=GHSAT0AAAAAACC53HUTBR6T2QVILOHJ275QZD5AL4A
tags:
- gpt2
- dialogue
- response generation
- transformers
- pytorch
- conversational
- text-generation
license: mit
datasets:
- allenai/soda
- allenai/prosocial-dialog
- vicgalle/alpaca-gpt4
metrics:
- accuracy
---

Note: As of this writing, this model is still under development.

## VergilGPT2

VergilGPT2 is a conversational model built on the GPT2 architecture and fine-tuned on the allenai/soda dialogue dataset using Google Colaboratory. It is designed as an interactive chatbot that responds to user queries and carries on multi-turn conversations.

The allenai/soda dataset forms the backbone of VergilGPT2's training. It provides roughly 1.19 million training examples, 149,000 test examples, and 146,000 validation examples of conversational dialogue, totalling about 856 MB, which gives the model a broad and diverse range of conversational scenarios to learn from.

By combining the GPT2 architecture with the conversational context in allenai/soda, VergilGPT2 aims to generate responses that are fluent, coherent, and relevant, capturing many of the patterns of everyday human conversation. It is intended for applications such as virtual assistants, dialogue systems, and interactive chatbot experiences.

Please note that, like all language models, VergilGPT2 generates responses based on patterns in its training data. It may occasionally produce inaccurate or nonsensical outputs, so its responses should be interpreted and verified in context.

## Installation

Make sure to install the required dependencies by running the following commands:

```python
!pip install torch
!pip install datasets
!pip install transformers==4.29.2
!pip install tokenizers==0.13.3
!pip install toml==0.10.2
!pip install accelerate
```
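
If you are working in Google Colaboratory, a quick check confirms that the pinned versions installed and that a GPU runtime is active:

```python
import torch
import transformers

print(transformers.__version__)   # expected: 4.29.2, matching the pin above
print(torch.cuda.is_available())  # True when a GPU runtime is selected
```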

If you plan to use QLoRA (4-bit quantization with LoRA adapters), also install the development versions of these libraries:

```python
!pip install -q -U bitsandbytes
!pip install -q -U git+https://github.com/huggingface/transformers.git
!pip install -q -U git+https://github.com/huggingface/peft.git
!pip install -q -U git+https://github.com/huggingface/accelerate.git
```

## Training Example

To train a model on a dataset, start by loading the dataset with the datasets library:

```python
from datasets import load_dataset

dataset = load_dataset("allenai/soda")
```

In this example, we load the allenai/soda conversational dataset.
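
As a quick sanity check, you can inspect the splits and look at one conversation. The `dialogue` field name matches the preprocessing code used later in this README:

```python
print(dataset)  # DatasetDict with train / validation / test splits

example = dataset["train"][0]
print(example["dialogue"][:2])  # first two turns of one conversation
```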

## Loading the Model

To load the original GPT2 model for training, you can use the following example:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```
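
GPT2's tokenizer does not define a padding token. The chunk-based pipeline below does not strictly need one, but if you batch prompts of different lengths, a common convention (an optional addition, not part of the original setup) is to reuse the end-of-sequence token:

```python
# Optional: reuse the EOS token as padding for batched tokenization/generation.
tokenizer.pad_token = tokenizer.eos_token
```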

To load GPT2 in 4-bit with NF4 quantization, nested (double) quantization, and a bfloat16 compute dtype for memory-efficient loading, use the following example:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "gpt2"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load weights in 4-bit precision
    bnb_4bit_use_double_quant=True,         # nested quantization for extra memory savings
    bnb_4bit_quant_type="nf4",              # NF4 quantization type
    bnb_4bit_compute_dtype=torch.bfloat16   # compute in bfloat16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model_4bit = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")
```
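
A quick way to see what 4-bit loading saves (assuming both the full-precision `model` and `model_4bit` from the blocks above are loaded in the same session) is to compare memory footprints:

```python
# get_memory_footprint() returns the model's memory usage in bytes.
print(f"full-precision gpt2: {model.get_memory_footprint() / 1024**2:.1f} MB")
print(f"4-bit gpt2:          {model_4bit.get_memory_footprint() / 1024**2:.1f} MB")
```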

To load the GPT2 model and preprocess the allenai/soda dataset for it, follow this example:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer, GPT2Config
from transformers import TextDataset, DataCollatorForLanguageModeling
from transformers import Trainer, TrainingArguments
from sklearn.model_selection import train_test_split
from datasets import load_dataset
from accelerate import Accelerator

# Define the model and tokenizer
model_name = "gpt2"
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)

# Preprocess the dataset: build a "USER: ... ASSISTANT: ..." prompt from the
# last two turns of each conversation, and keep the middle turns as outputs.
def preprocess_dataset(example):
    inputs = f"USER: {example['dialogue'][-2]} \nASSISTANT: {example['dialogue'][-1]}"
    outputs = example['dialogue'][1:-1]
    return {'inputs': inputs, 'outputs': outputs}

# Load and preprocess the dataset
dataset = load_dataset("allenai/soda")
dataset = dataset.map(preprocess_dataset)
```
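
After the `map` call, each example carries a new `inputs` string in the `USER:`/`ASSISTANT:` format. As a quick check, you can print one and tokenize it into model-ready tensors:

```python
sample = dataset["train"][0]
print(sample["inputs"])  # "USER: ... \nASSISTANT: ..."

# Tokenize one preprocessed prompt into tensors the model can consume.
encoded = tokenizer(sample["inputs"], return_tensors="pt")
print(encoded["input_ids"].shape)
```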

## Supported Models

The following model architectures are reported to support 4-bit (QLoRA) loading:

```python
[
    'bigbird_pegasus', 'blip_2', 'bloom', 'bridgetower', 'codegen', 'deit', 'esm',
    'gpt2', 'gpt_bigcode', 'gpt_neo', 'gpt_neox', 'gpt_neox_japanese', 'gptj', 'gptsan_japanese',
    'lilt', 'llama', 'longformer', 'longt5', 'luke', 'm2m_100', 'mbart', 'mega', 'mt5', 'nllb_moe',
    'open_llama', 'opt', 'owlvit', 'plbart', 'roberta', 'roberta_prelayernorm', 'rwkv', 'switch_transformers',
    't5', 'vilt', 'vit', 'vit_hybrid', 'whisper', 'xglm', 'xlm_roberta'
]
```

## Loading & Training VergilGPT2

To load the VergilGPT2 model for training, you can use the following example:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "VergilGPT2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```
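
Once loaded, the model can be queried like any causal language model. The sketch below uses the `USER:`/`ASSISTANT:` prompt format from the preprocessing code in this README; the sampling settings are illustrative, not tuned:

```python
prompt = "USER: Hi, how are you today?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt")

output_ids = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,  # GPT2-style models define no pad token
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```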

To load VergilGPT2 in 4-bit with NF4 quantization, nested (double) quantization, and a bfloat16 compute dtype for memory-efficient loading, use the following example:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "VergilGPT2"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load weights in 4-bit precision
    bnb_4bit_use_double_quant=True,         # nested quantization for extra memory savings
    bnb_4bit_quant_type="nf4",              # NF4 quantization type
    bnb_4bit_compute_dtype=torch.bfloat16   # compute in bfloat16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model_4bit = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")
```

To load the VergilGPT2 model and preprocess the allenai/soda dataset for it, follow this example:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from transformers import TextDataset, DataCollatorForLanguageModeling
from transformers import Trainer, TrainingArguments
from datasets import load_dataset
from accelerate import Accelerator

# Define the model and tokenizer
model_name = "VergilGPT2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Preprocess the dataset: build a "USER: ... ASSISTANT: ..." prompt from the
# last two turns of each conversation, and keep the middle turns as outputs.
def preprocess_dataset(example):
    inputs = f"USER: {example['dialogue'][-2]} \nASSISTANT: {example['dialogue'][-1]}"
    outputs = example['dialogue'][1:-1]
    return {'inputs': inputs, 'outputs': outputs}

# Load and preprocess the dataset
dataset = load_dataset("allenai/soda")
dataset = dataset.map(preprocess_dataset)

# Split the training data into training and validation sets
# (using the datasets library's built-in splitter so the results stay Dataset
# objects and column access like train_dataset['inputs'] keeps working).
split = dataset["train"].train_test_split(test_size=0.1, shuffle=True)
train_dataset, val_dataset = split["train"], split["test"]
```

It is worth noting that VergilGPT2 has already been trained on the allenai/soda dataset, so for real training runs be sure to substitute your own conversational data.

## Text Files

You can also write the preprocessed text to plain text files, which makes it easy to resume training later and create checkpoints:

```python
# Extract the 'inputs' column from train_dataset and val_dataset
train_texts = train_dataset['inputs']
val_texts = val_dataset['inputs']

# Write train_texts to a text file
train_file = "train_texts.txt"
with open(train_file, 'w', encoding='utf-8') as f:
    for text in train_texts:
        f.write(text + '\n')

# Write val_texts to a text file
val_file = "val_texts.txt"
with open(val_file, 'w', encoding='utf-8') as f:
    for text in val_texts:
        f.write(text + '\n')
```
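
The Trainer in the next section expects tokenized datasets named `train_text_dataset` and `val_text_dataset`, which this README does not construct explicitly. One way to build them from the text files above is `TextDataset` (still available in transformers 4.29, though deprecated in favour of the datasets library); the `block_size` value here is an assumption:

```python
from transformers import TextDataset

# Chunk the raw text files into fixed-length token blocks for language modeling.
train_text_dataset = TextDataset(tokenizer=tokenizer, file_path=train_file, block_size=128)
val_text_dataset = TextDataset(tokenizer=tokenizer, file_path=val_file, block_size=128)
```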

## Training Arguments

You can use the following training arguments to fine-tune the model on the tokenized `train_text_dataset` and `val_text_dataset`:

```python
# Directory where checkpoints and the final model are written (choose your own path)
output_dir = "./vergilgpt2-finetuned"

# Define the training arguments
training_args = TrainingArguments(
    output_dir=output_dir,
    overwrite_output_dir=True,
    num_train_epochs=3,
    per_device_train_batch_size=4,
    save_steps=500,
    save_total_limit=2,
    learning_rate=2e-5,
    prediction_loss_only=True,
)

# Create the data collator (mlm=False means causal language modeling)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

# Create the Accelerator instance
accelerator = Accelerator()

# Create the Trainer instance
trainer = Trainer(
    model=model.to(accelerator.device),
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_text_dataset,
    eval_dataset=val_text_dataset,
)

# Fine-tune the model (Trainer handles device placement and distributed setup itself)
trainer.train()

# Save the fine-tuned model
trainer.save_model(output_dir)
```
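
After training, `trainer.save_model` writes the model weights and config to `output_dir`. Saving the tokenizer alongside them (an extra step, not in the original snippet) makes the checkpoint self-contained and easy to reload:

```python
# Save the tokenizer next to the model so the checkpoint can be reloaded as a unit.
tokenizer.save_pretrained(output_dir)

from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained(output_dir)
model = AutoModelForCausalLM.from_pretrained(output_dir)
```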