---
language: en
thumbnail: >-
https://raw.githubusercontent.com/SpeedStar1O1/discord-bots/main/VergilpluseGPT2.png?token=GHSAT0AAAAAACC53HUTBR6T2QVILOHJ275QZD5AL4A
tags:
- gpt2
- dialogue
- response generation
- transformers
- pytorch
- conversational
- text-generation
license: mit
datasets:
- allenai/soda
- allenai/prosocial-dialog
- vicgalle/alpaca-gpt4
metrics:
- accuracy
---
Note: As of this writing, this model is still under development.
## VergilGPT2
VergilGPT2 is a conversational model built on the gpt2 architecture and fine-tuned on the allenai/soda dialogue dataset using Google Colaboratory. It is designed as an interactive chatbot that can respond to user queries and carry on multi-turn conversations.

The allenai/soda dataset forms the backbone of VergilGPT2's training, providing a large corpus of conversational dialogue: roughly 1.19 million training examples, 149,000 test examples, and 146,000 validation examples, about 856 MB of data in total. This breadth exposes the model to a diverse range of conversational scenarios.

Training on this corpus helps VergilGPT2 generate responses that are fluent, coherent, and relevant, and lets it capture many of the patterns of natural human conversation. It is suited to applications such as virtual assistants, dialogue systems, and interactive chatbot experiences.

Please note that, like all language models, VergilGPT2 generates responses from patterns in its training data. It may occasionally produce inaccurate or nonsensical outputs, so its responses should be interpreted and verified in context.
## Installation
Make sure to install the required dependencies by running the following commands:
```python
!pip install torch
!pip install datasets
!pip install transformers==4.29.2
!pip install tokenizers==0.13.3
!pip install toml==0.10.2
!pip install accelerate
```
If you plan to use QLoRA for 4-bit fine-tuning, also install the following libraries:
```python
!pip install -q -U bitsandbytes
!pip install -q -U git+https://github.com/huggingface/transformers.git
!pip install -q -U git+https://github.com/huggingface/peft.git
!pip install -q -U git+https://github.com/huggingface/accelerate.git
```
## Loading the Dataset
To load a dataset for training, you can use the following example:
```python
from datasets import load_dataset
dataset = load_dataset("allenai/soda")
```
In this example, we load the allenai/soda conversational dataset.
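Before preprocessing, it can help to inspect a single example. The `dialogue` field (a list of utterances) is the one used by the preprocessing code later in this card:
```python
# Peek at the first training example to see its structure.
sample = dataset["train"][0]
print(sample.keys())       # available fields; 'dialogue' holds the utterances
print(sample["dialogue"])  # list of strings, one per conversational turn
```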
## Loading the Model
To load the original GPT2 model for training, you can use the following example:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```
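As a quick sanity check after loading, you can generate a short continuation. This is a minimal sketch; the sampling parameters are placeholders to adjust:
```python
prompt = "Hello, how are you?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
# GPT2 has no pad token, so reuse the EOS token id to silence the warning.
output_ids = model.generate(
    input_ids,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```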
To load the original GPT2 model in 4-bit, using NF4 quantization with a bfloat16 compute dtype and nested (double) quantization for memory efficiency, use the following example:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
model_id = "gpt2"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model_4bit = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")
```
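To confirm the memory savings, you can check the quantized model's footprint; `get_memory_footprint()` reports the model's size in bytes:
```python
# Report the quantized model's in-memory size.
print(f"4-bit model footprint: {model_4bit.get_memory_footprint() / 1e6:.1f} MB")
```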
To load the GPT2 model and preprocess the allenai/soda dataset, follow this example:
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
from datasets import load_dataset
# Define the model and tokenizer
model_name = "gpt2"
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
# Preprocess the dataset
def preprocess_dataset(example):
    # Use the last two turns as a USER prompt / ASSISTANT response pair
    inputs = f"USER: {example['dialogue'][-2]} \nASSISTANT: {example['dialogue'][-1]}"
    outputs = example['dialogue'][1:-1]
    return {'inputs': inputs, 'outputs': outputs}
# Load and preprocess the dataset
dataset = load_dataset("allenai/soda")
dataset = dataset.map(preprocess_dataset)
```
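After mapping, each example carries the formatted prompt/response pair, which you can verify directly:
```python
# Inspect a preprocessed example; 'inputs' holds the formatted pair.
example = dataset["train"][0]
print(example["inputs"])  # e.g. "USER: ... \nASSISTANT: ..."
```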
## Supported Models
The following model architectures support 4-bit loading with QLoRA:
```python
[
'bigbird_pegasus', 'blip_2', 'bloom', 'bridgetower', 'codegen', 'deit', 'esm',
'gpt2', 'gpt_bigcode', 'gpt_neo', 'gpt_neox', 'gpt_neox_japanese', 'gptj', 'gptsan_japanese',
'lilt', 'llama', 'longformer', 'longt5', 'luke', 'm2m_100', 'mbart', 'mega', 'mt5', 'nllb_moe',
'open_llama', 'opt', 'owlvit', 'plbart', 'roberta', 'roberta_prelayernorm', 'rwkv', 'switch_transformers',
't5', 'vilt', 'vit', 'vit_hybrid', 'whisper', 'xglm', 'xlm_roberta'
]
```
## Loading & Training VergilGPT2
To load the VergilGPT2 model for training, you can use the following example:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "VergilGPT2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```
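Because the training examples are formatted as `USER: ... \nASSISTANT: ...` (see the preprocessing code below), prompting in the same format should give the best results. A minimal sketch, with sampling parameters as placeholders:
```python
prompt = "USER: What's your favorite hobby? \nASSISTANT:"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output_ids = model.generate(
    input_ids,
    max_new_tokens=60,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens (the assistant's reply).
reply = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```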
To load the VergilGPT2 model in 4-bit, using NF4 quantization with a bfloat16 compute dtype and nested (double) quantization for memory efficiency, use the following example:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
model_id = "VergilGPT2"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model_4bit = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")
```
To load the VergilGPT2 model and prepare the allenai/soda dataset for training, follow this example:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import TextDataset, DataCollatorForLanguageModeling
from transformers import Trainer, TrainingArguments
from sklearn.model_selection import train_test_split
from datasets import load_dataset
# Define the model and tokenizer
model_name = "VergilGPT2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Preprocess the dataset
def preprocess_dataset(example):
    # Use the last two turns as a USER prompt / ASSISTANT response pair
    inputs = f"USER: {example['dialogue'][-2]} \nASSISTANT: {example['dialogue'][-1]}"
    outputs = example['dialogue'][1:-1]
    return {'inputs': inputs, 'outputs': outputs}
# Load and preprocess the dataset
dataset = load_dataset("allenai/soda")
dataset = dataset.map(preprocess_dataset)
# Split the dataset into training and validation sets
train_dataset, val_dataset = train_test_split(dataset['train'], test_size=0.1, shuffle=True)
```
Note that VergilGPT2 has already been trained on the allenai/soda dataset, so for actual fine-tuning you should train it on different conversational data.
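For example, you could swap in allenai/prosocial-dialog, one of the other datasets listed in this card's metadata. This is a sketch: that dataset exposes context/response pairs rather than a `dialogue` list, so the preprocessing function must be adapted accordingly:
```python
from datasets import load_dataset

# Load an alternative conversational dataset from this card's metadata.
dataset = load_dataset("allenai/prosocial-dialog")

# Its schema differs from allenai/soda, so adapt the preprocessing.
def preprocess_dataset(example):
    inputs = f"USER: {example['context']} \nASSISTANT: {example['response']}"
    return {'inputs': inputs}

dataset = dataset.map(preprocess_dataset)
```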
## Text Files
You can have your code write the datasets out as text files, so you can create checkpoints and resume training later:
```python
# Extract the 'text' column from the train_dataset and val_dataset
train_texts = train_dataset['inputs']
val_texts = val_dataset['inputs']
# Write train_texts to a text file
train_file = "train_texts.txt"
with open(train_file, 'w', encoding='utf-8') as f:
    for text in train_texts:
        f.write(text + '\n')
# Write val_texts to a text file
val_file = "val_texts.txt"
with open(val_file, 'w', encoding='utf-8') as f:
    for text in val_texts:
        f.write(text + '\n')
```
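The Trainer in the next section expects `train_text_dataset` and `val_text_dataset`. One way to build them from the files written above is with `TextDataset` (a legacy transformers utility that may emit a deprecation warning, but works with the pinned 4.29.2); the `block_size` of 128 is a placeholder to tune:
```python
from transformers import TextDataset

# Tokenize the text files into fixed-length training chunks.
train_text_dataset = TextDataset(
    tokenizer=tokenizer,
    file_path=train_file,
    block_size=128,  # placeholder; adjust to your GPU memory and data
)
val_text_dataset = TextDataset(
    tokenizer=tokenizer,
    file_path=val_file,
    block_size=128,
)
```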
## Training Arguments
You can use the following training arguments to fine-tune the model.
```python
# Choose where checkpoints and the final model are written
# (placeholder path; change as needed)
output_dir = "./vergilgpt2-finetuned"

# Define the training arguments
training_args = TrainingArguments(
    output_dir=output_dir,
    overwrite_output_dir=True,
    num_train_epochs=3,
    per_device_train_batch_size=4,
    save_steps=500,
    save_total_limit=2,
    learning_rate=2e-5,
    prediction_loss_only=True,
)
# Create the data collator
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
# Create the Trainer instance (the Trainer handles device placement and
# acceleration itself, so a separate Accelerator is not needed here)
trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_text_dataset,
    eval_dataset=val_text_dataset,
)

# Fine-tune the model
trainer.train()

# Save the fine-tuned model and tokenizer
trainer.save_model(output_dir)
tokenizer.save_pretrained(output_dir)
```
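Once training finishes, you can reload the fine-tuned model and tokenizer from the output directory for inference:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Reload the fine-tuned checkpoint saved above.
tokenizer = AutoTokenizer.from_pretrained(output_dir)
model = AutoModelForCausalLM.from_pretrained(output_dir)
```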