Note: As of this writing, this model is still under development.

VergilGPT2

VergilGPT2 is a conversational model built on the well-known gpt2 architecture. It was trained on multiple datasets, including allenai/soda, allenai/prosocial-dialog, vicgalle/alpaca-gpt4, conv_ai, conv_ai_2, and conv_ai_3, using Google Colaboratory. This broad training allows VergilGPT2 to serve as an interactive chatbot, delivering coherent responses and engaging in meaningful conversations.

The incorporation of diverse datasets enriches VergilGPT2's capabilities. Among them, the allenai/soda dataset serves as the foundation, offering an extensive corpus of conversational dialogue. With roughly 1.19 million training examples, 149,000 test examples, and 146,000 validation examples, it provides a robust framework for fostering natural and coherent interactions. At about 856 MB, the dataset covers a wide range of conversational scenarios, ensuring comprehensive training.

Driven by the cutting-edge gpt2 model architecture and the rich context provided by multiple datasets, VergilGPT2 generates responses that exhibit fluency, coherence, and relevance. Its training on extensive conversational data enables it to capture the intricacies of human interaction, facilitating engaging and interactive experiences.

VergilGPT2 stands as a testament to the advancements in conversational AI, embodying the fusion of cutting-edge technology, meticulous training, and the diverse knowledge contained within the multiple datasets. This remarkable model holds immense potential for various applications, such as virtual assistants, dialogue systems, and interactive chatbot experiences.

While VergilGPT2 showcases impressive conversational capabilities, it is important to note that, like all language models, its responses are generated based on patterns and examples from the training data. As a result, occasional inaccuracies or nonsensical outputs may occur. Therefore, it is advisable to interpret and verify its responses in context.

Engaging with Vergil

If you're eager to have a conversation with VergilGPT2, you can use the following code snippet. Feel free to experiment with the temperature, top_k, and top_p parameters to customize response generation; note that sampling must be enabled (do_sample=True) for these parameters to take effect.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load pre-trained model and tokenizer
access_token = "REPLACE_WITH_ACCESS_TOKEN"
model_id = "Starcodium/Vergil_GPT-2"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_auth_token=access_token)
model = AutoModelForCausalLM.from_pretrained(model_id, use_auth_token=access_token)

tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = model.config.eos_token_id

# Get user input and generate responses
while True:
    # Get user input
    prompt = input("\nEnter your prompt (or 'exit' to quit): ")
    if prompt.lower() == 'exit':
        break

    prompt_template = f"""A chat between a curious user and an artificial intelligence assistant named 'Vergil'. Vergil gives helpful, detailed, and polite answers to the user's questions.

    USER: {prompt}
    VERGIL:
    """

    print("\n\nGenerating")

    # Tokenize the prompt (assumes a CUDA-capable GPU is available)
    input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
    # do_sample=True is required for temperature/top_k/top_p to take effect
    output = model.generate(inputs=input_ids, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, max_new_tokens=512)
    response = tokenizer.decode(output[0]).replace(prompt_template, "").replace("<s> ", "").replace("</s>", "").split("VERGIL: ")[-1].strip()  # Only keep the model's response

    # Print only the model's response, without the conversation history
    print(response)

This code snippet allows you to engage in conversations with VergilGPT2. Simply enter your input text, and VergilGPT2 will generate a response based on the provided context.

Please note that the code assumes you have access to the Starcodium/Vergil_GPT-2 model and its associated tokenizer. Ensure you have the required authentication token (access_token) to access them.
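If you have not authenticated with the Hugging Face Hub yet, you can do so with the huggingface_hub library; a minimal sketch (the token string is a placeholder you must replace):

from huggingface_hub import login

# Log in once per session; replace the placeholder with your own token
login(token="REPLACE_WITH_ACCESS_TOKEN")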

Installation

Make sure to install the required dependencies by running the following commands. (Note: these were run in Google Colaboratory; if you are installing on your local PC, remove the leading '!'.)

!pip install torch
!pip install datasets
!pip install transformers==4.29.2
!pip install tokenizers==0.13.3
!pip install toml==0.10.2
!pip install accelerate
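
As a quick sanity check (not part of the original setup), you can confirm that the pinned versions installed correctly and that a GPU is visible:

import torch
import transformers
import tokenizers

print(transformers.__version__)   # expected: 4.29.2
print(tokenizers.__version__)     # expected: 0.13.3
print(torch.cuda.is_available())  # should be True on a GPU runtime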

Training Example

To load a dataset for training, you can use the following example:

from datasets import load_dataset

dataset = load_dataset("allenai/soda")

In this example, we load the allenai/soda conversational dataset.
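
To get a feel for the data before preprocessing, you can inspect the splits and one record; the 'dialogue' field (used in the preprocessing code below) holds each conversation as a list of utterances:

print(dataset)                          # summary of train/test/validation splits
print(dataset['train'][0]['dialogue'])  # one conversation as a list of utterances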

Loading the Model

To load the original GPT2 model for training, you can use the following example:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
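
As a quick check that the base model loaded correctly, you can generate a short continuation (greedy decoding, just for illustration):

inputs = tokenizer("Hello, my name is", return_tensors="pt")
# pad_token_id is set explicitly because gpt2 has no pad token by default
output = model.generate(**inputs, max_new_tokens=20, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))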

To load the GPT2 model with the allenai/soda dataset, follow this example:

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer
from transformers import TextDataset, DataCollatorForLanguageModeling
from transformers import Trainer, TrainingArguments
from sklearn.model_selection import train_test_split
from datasets import load_dataset
from accelerate import Accelerator

# Define the model and tokenizer
model_name = "gpt2"
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)

# Preprocess the dataset: format the final two turns of each conversation
# (second-to-last as USER, last as ASSISTANT) into a single input string,
# and keep the intermediate turns as outputs
def preprocess_dataset(example):
    inputs = f"USER: {example['dialogue'][-2]} \nASSISTANT: {example['dialogue'][-1]}"
    outputs = example['dialogue'][1:-1]
    return {'inputs': inputs, 'outputs': outputs}

# Load and preprocess the dataset
dataset = load_dataset("allenai/soda")
dataset = dataset.map(preprocess_dataset)

Loading & Training VergilGPT2

To load the VergilGPT2 model for training, you can use the following example:

from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Starcodium/Vergil_GPT-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

To load the VergilGPT2 model with the allenai/soda dataset, follow this example:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers import TextDataset, DataCollatorForLanguageModeling
from transformers import Trainer, TrainingArguments
from sklearn.model_selection import train_test_split
from datasets import load_dataset
from accelerate import Accelerator

# Define the model and tokenizer
model_name = "Starcodium/Vergil_GPT-2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Preprocess the dataset
def preprocess_dataset(example):
    inputs = f"USER: {example['dialogue'][-2]} \nASSISTANT: {example['dialogue'][-1]}"
    outputs = example['dialogue'][1:-1]
    return {'inputs': inputs, 'outputs': outputs}

# Load and preprocess the dataset
dataset = load_dataset("allenai/soda")
dataset = dataset.map(preprocess_dataset)

# Split the dataset into training and validation sets
train_dataset, val_dataset = train_test_split(dataset['train'], test_size=0.1, shuffle=True)
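
Alternatively, the datasets library has a built-in splitter that returns proper Dataset objects; an equivalent sketch (the seed value is an arbitrary choice):

# Split with the datasets library instead of sklearn
split = dataset['train'].train_test_split(test_size=0.1, shuffle=True, seed=42)
train_dataset, val_dataset = split['train'], split['test']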

It is worth noting that VergilGPT2 has already been trained on the allenai/soda dataset, so for actual training be sure to use different conversational data.

Text Files

You can have your code write the processed text to files so that you can continue training later and create checkpoints:

# Extract the 'text' column from the train_dataset and val_dataset
train_texts = train_dataset['inputs']
val_texts = val_dataset['inputs']

# Write train_texts to a text file
train_file = "train_texts.txt"
with open(train_file, 'w', encoding='utf-8') as f:
    for text in train_texts:
        f.write(text + '\n')

# Write val_texts to a text file
val_file = "val_texts.txt"
with open(val_file, 'w', encoding='utf-8') as f:
    for text in val_texts:
        f.write(text + '\n')
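
The Trainer below expects tokenized datasets named train_text_dataset and val_text_dataset. A minimal sketch that builds them from the files above using transformers' TextDataset (available in the pinned 4.29.2, though deprecated in later releases; block_size=128 is an assumed value):

# Build language-modeling datasets from the text files written above
# (block_size=128 is an assumed choice; tune it to your context length)
train_text_dataset = TextDataset(tokenizer=tokenizer, file_path=train_file, block_size=128)
val_text_dataset = TextDataset(tokenizer=tokenizer, file_path=val_file, block_size=128)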

Training Arguments

You can use the following training arguments to fine-tune your model.

# Define the training arguments (output_dir is a placeholder path; change it as needed)
output_dir = "vergil_gpt2_finetuned"
training_args = TrainingArguments(
    output_dir=output_dir,
    overwrite_output_dir=True,
    num_train_epochs=3,
    per_device_train_batch_size=4,
    save_steps=500,
    save_total_limit=2,
    learning_rate=2e-5,
    prediction_loss_only=True,
)

# Create the data collator
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

# Create the Accelerator instance
accelerator = Accelerator()

# Create the Trainer instance
trainer = Trainer(
    model=model.to(accelerator.device),
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_text_dataset,
    eval_dataset=val_text_dataset,
)

# Fine-tune the model
# (Note: Trainer manages device placement itself; accelerator.prepare on a
# Trainer object returns it unchanged, so this call is effectively a no-op)
trainer = accelerator.prepare(trainer)
trainer.train()

# Save the fine-tuned model
trainer.save_model(output_dir)
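
Once training finishes, you can reload the fine-tuned weights from output_dir for inference. A minimal sketch (note that the Trainer above was not given the tokenizer, so it is saved separately here):

# Save the tokenizer alongside the model so both can be reloaded together
tokenizer.save_pretrained(output_dir)

# Reload the fine-tuned model for inference
model = AutoModelForCausalLM.from_pretrained(output_dir)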