---
language: en
thumbnail: >-
  https://raw.githubusercontent.com/SpeedStar1O1/discord-bots/main/VergilpluseGPT2.png?token=GHSAT0AAAAAACC53HUTBR6T2QVILOHJ275QZD5AL4A
tags:
- gpt2
- dialogue
- response generation
- transformers
- pytorch
- conversational
- text-generation
license: mit
datasets:
- allenai/soda
- allenai/prosocial-dialog
- vicgalle/alpaca-gpt4
metrics:
- accuracy
---

Note: As of this writing, this model is still under development.

## VergilGPT2

VergilGPT2 is a conversational model built on the GPT-2 architecture and fine-tuned on the allenai/soda dialogue dataset using Google Colaboratory. It is designed as an interactive chatbot that responds to user queries and carries on multi-turn conversations.

The allenai/soda dataset forms the backbone of VergilGPT2's training. It contains roughly 1.19 million training examples, 149,000 test examples, and 146,000 validation examples, about 856 MB of dialogue in total, giving the model a broad and diverse range of conversational scenarios to learn from.

By combining the GPT-2 architecture with this large conversational corpus, VergilGPT2 generates responses that aim for fluency, coherence, and relevance, and it captures many of the patterns of everyday human conversation. It is intended for applications such as virtual assistants, dialogue systems, and interactive chatbot experiences.

Please note that, like all language models, VergilGPT2 generates responses from patterns in its training data and may occasionally produce inaccurate or nonsensical outputs. Its responses should be interpreted and verified in context.

Harness VergilGPT2 to build dynamic, engaging conversations that push the boundaries of interactive AI experiences.

## Installation

Make sure to install the required dependencies by running the following commands:

```python
!pip install torch
!pip install datasets
!pip install transformers==4.29.2
!pip install tokenizers==0.13.3
!pip install toml==0.10.2
!pip install accelerate
```

If you plan to use QLoRA (4-bit quantized fine-tuning), also install the following libraries:

```python
!pip install -q -U bitsandbytes
!pip install -q -U git+https://github.com/huggingface/transformers.git
!pip install -q -U git+https://github.com/huggingface/peft.git
!pip install -q -U git+https://github.com/huggingface/accelerate.git
```

## Training Example

To prepare for training, first load the dataset:

```python
from datasets import load_dataset

dataset = load_dataset("allenai/soda")
```

In this example, we load the allenai/soda conversational dataset.
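Before preprocessing, it can help to take a quick look at what was loaded. The check below is a sketch; the `dialogue` and `speakers` field names follow the published allenai/soda schema:

```python
from datasets import load_dataset

dataset = load_dataset("allenai/soda")

# Show the available splits and how many examples each contains
print(dataset)

# Each example stores a conversation as parallel lists of speakers and utterances
example = dataset["train"][0]
for speaker, utterance in zip(example["speakers"], example["dialogue"]):
    print(f"{speaker}: {utterance}")
```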
## Loading the Model

To load the original GPT-2 model for training, you can use the following example:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```

To load GPT-2 in 4-bit, using NF4 quantization with a bfloat16 compute dtype and nested (double) quantization for memory efficiency, use the following example:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "gpt2"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model_4bit = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")
```

To load the GPT-2 model together with the allenai/soda dataset, follow this example:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer, GPT2Config
from transformers import TextDataset, DataCollatorForLanguageModeling
from transformers import Trainer, TrainingArguments
from sklearn.model_selection import train_test_split
from datasets import load_dataset
from accelerate import Accelerator

# Define the model and tokenizer
model_name = "gpt2"
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)

# Preprocess the dataset: pair the last two utterances of each dialogue
# as a USER/ASSISTANT exchange and keep the middle turns as context
def preprocess_dataset(example):
    inputs = f"USER: {example['dialogue'][-2]} \nASSISTANT: {example['dialogue'][-1]}"
    outputs = example['dialogue'][1:-1]
    return {'inputs': inputs, 'outputs': outputs}

# Load and preprocess the dataset
dataset = load_dataset("allenai/soda")
dataset = dataset.map(preprocess_dataset)
```

## Supported Models

The following model architectures currently support 4-bit loading with QLoRA:

```python
[
    'bigbird_pegasus', 'blip_2', 'bloom', 'bridgetower', 'codegen', 'deit',
    'esm', 'gpt2', 'gpt_bigcode', 'gpt_neo', 'gpt_neox', 'gpt_neox_japanese',
    'gptj', 'gptsan_japanese', 'lilt', 'llama', 'longformer', 'longt5',
    'luke', 'm2m_100', 'mbart', 'mega', 'mt5', 'nllb_moe', 'open_llama',
    'opt', 'owlvit', 'plbart', 'roberta', 'roberta_prelayernorm', 'rwkv',
    'switch_transformers', 't5', 'vilt', 'vit', 'vit_hybrid', 'whisper',
    'xglm', 'xlm_roberta'
]
```
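Since QLoRA keeps the quantized base model frozen and trains only small adapter matrices, you can combine the 4-bit loading shown above with a LoRA adapter from the `peft` library. The sketch below is illustrative and is not the configuration used to train VergilGPT2; the rank, alpha, dropout, and the `c_attn` target module are assumptions chosen for GPT-2, and it requires a CUDA GPU with `bitsandbytes` installed:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "gpt2"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")

# Cast layer norms and embeddings for stable k-bit training
model = prepare_model_for_kbit_training(model)

# Attach a LoRA adapter; "c_attn" is GPT-2's fused attention projection.
# r, alpha, and dropout are illustrative defaults, not tuned values.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["c_attn"],
    fan_in_fan_out=True,  # GPT-2 uses Conv1D layers with transposed weights
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```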
## Loading & Training VergilGPT2

To load the original VergilGPT2 model for training, you can use the following example:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "VergilGPT2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```

To load VergilGPT2 in 4-bit, using NF4 quantization with a bfloat16 compute dtype and nested (double) quantization for memory efficiency, use the following example:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "VergilGPT2"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model_4bit = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map="auto")
```

To load the VergilGPT2 model together with the allenai/soda dataset, follow this example:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from transformers import TextDataset, DataCollatorForLanguageModeling
from transformers import Trainer, TrainingArguments
from sklearn.model_selection import train_test_split
from datasets import load_dataset
from accelerate import Accelerator

# Define the model and tokenizer
model_name = "VergilGPT2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Preprocess the dataset: pair the last two utterances of each dialogue
# as a USER/ASSISTANT exchange and keep the middle turns as context
def preprocess_dataset(example):
    inputs = f"USER: {example['dialogue'][-2]} \nASSISTANT: {example['dialogue'][-1]}"
    outputs = example['dialogue'][1:-1]
    return {'inputs': inputs, 'outputs': outputs}

# Load and preprocess the dataset
dataset = load_dataset("allenai/soda")
dataset = dataset.map(preprocess_dataset)

# Split the training split into training and validation sets
train_dataset, val_dataset = train_test_split(dataset['train'], test_size=0.1, shuffle=True)
```

Note that VergilGPT2 has already been trained on the allenai/soda dataset, so for your own fine-tuning be sure to substitute your own conversational data.

## Text Files

You can write the preprocessed dialogue out to text files so that you can resume training later and create checkpoints:

```python
# Extract the 'inputs' column from train_dataset and val_dataset
train_texts = train_dataset['inputs']
val_texts = val_dataset['inputs']

# Write train_texts to a text file
train_file = "train_texts.txt"
with open(train_file, 'w', encoding='utf-8') as f:
    for text in train_texts:
        f.write(text + '\n')

# Write val_texts to a text file
val_file = "val_texts.txt"
with open(val_file, 'w', encoding='utf-8') as f:
    for text in val_texts:
        f.write(text + '\n')
```

## Training Arguments

You can use the following training arguments to fine-tune your model:

```python
# Build token datasets from the text files written above (block_size is illustrative)
output_dir = "./results"  # placeholder directory for checkpoints and the final model
train_text_dataset = TextDataset(tokenizer=tokenizer, file_path=train_file, block_size=128)
val_text_dataset = TextDataset(tokenizer=tokenizer, file_path=val_file, block_size=128)

# Define the training arguments
training_args = TrainingArguments(
    output_dir=output_dir,
    overwrite_output_dir=True,
    num_train_epochs=3,
    per_device_train_batch_size=4,
    save_steps=500,
    save_total_limit=2,
    learning_rate=2e-5,
    prediction_loss_only=True,
)

# Create the data collator (causal language modeling, so no masked-LM objective)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

# Create the Accelerator instance (used here for device placement;
# Trainer manages its own distributed setup)
accelerator = Accelerator()

# Create the Trainer instance
trainer = Trainer(
    model=model.to(accelerator.device),
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_text_dataset,
    eval_dataset=val_text_dataset,
)

# Fine-tune the model
trainer.train()

# Save the fine-tuned model
trainer.save_model(output_dir)
```
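Once training finishes, you can chat with the saved model. The sketch below is a minimal example rather than an official inference recipe: it assumes the model was saved to the placeholder `output_dir` used above, reuses the base GPT-2 tokenizer (since `trainer.save_model` does not save the tokenizer), and follows the USER/ASSISTANT prompt format from the preprocessing step with illustrative sampling settings.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

output_dir = "./results"  # placeholder path from the training example above
tokenizer = AutoTokenizer.from_pretrained("gpt2")  # base tokenizer; not saved by trainer.save_model
model = AutoModelForCausalLM.from_pretrained(output_dir)

# Build a prompt in the same USER/ASSISTANT format used during preprocessing
prompt = "USER: How was your weekend? \nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampling settings are illustrative; tune temperature and top_p for your use case
output_ids = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    top_p=0.9,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)

# The decoded output includes the prompt followed by the generated reply
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```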