
WizardCoder-Guanaco-15B-V1.1 Model Card

WizardCoder-Guanaco-15B-V1.1 is a language model that combines the strengths of the WizardCoder base model with finetuning on the openassistant-guanaco dataset. The openassistant-guanaco dataset was trimmed so that each input/output pair falls within two standard deviations of the mean token count, and all non-English data was removed to reduce training size requirements.
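For illustration only, a trimming step like the one described might look as follows. The actual preprocessing script is not published; the function name and the pairs format here are assumptions, not the author's pipeline:

import statistics
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LoupGarou/WizardCoder-Guanaco-15B-V1.1")

def trim_to_two_sigma(pairs):
    # pairs: list of {"input": str, "output": str} dicts (assumed format).
    lengths = [len(tokenizer.encode(p["input"] + p["output"])) for p in pairs]
    mean, std = statistics.mean(lengths), statistics.pstdev(lengths)
    low, high = mean - 2 * std, mean + 2 * std
    # Keep only pairs whose combined token count lies within two standard deviations.
    return [p for p, n in zip(pairs, lengths) if low <= n <= high]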

Version 1.1 brings notable enhancements, built on a revised version of the openassistant-guanaco dataset used previously. In this revision, every answer was replaced with a response generated by GPT-4.

The dataset was also expanded by approximately 50%, with a particular focus on high school and abstract algebra, using the combined capabilities of GPT-4 and GPT-3.5-Turbo to generate the new material. An initial evaluation on algebraic tasks over 12 epochs showed promising results from this enriched dataset. Further refinements are in the pipeline, aiming to improve dataset quality and reduce the number of epochs required to achieve comparable results.

To curtail memory consumption during training, the dataset was restricted to English-language questions and answers. Consequently, the model's performance on language translation may not be up to par. The focus remains on improving the model's proficiency and efficiency within its defined scope.

Intended Use

This model is designed for a wide array of text generation tasks that require understanding and generating English text, such as answering questions, writing essays, and summarizing documents. Given the specific data processing and finetuning applied, it should be particularly effective for English-language question answering. Because all non-English data was removed before finetuning, translation is not an intended use.

Limitations

Despite the model's capabilities, users should be aware of its limitations. Its knowledge extends only to its training cutoff, so it has no awareness of events after that point. It can sometimes produce incorrect or nonsensical responses, as it does not understand text the way humans do. It should be used as a tool to assist in generating text, not as a sole source of truth.

How to use

Here is an example of how to use this model (loading with load_in_4bit=True requires the bitsandbytes and accelerate packages):

from transformers import AutoModelForCausalLM, AutoTokenizer
import time
import torch

class Chatbot:
    def __init__(self, model_name):
        # Decoder-only models need left padding for correct generation.
        self.tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side='left')
        # load_in_4bit quantizes the weights with bitsandbytes to cut memory use.
        self.model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True, torch_dtype=torch.bfloat16)
        # Fall back to the EOS token when the tokenizer defines no pad token.
        if self.tokenizer.pad_token_id is None:
            self.tokenizer.pad_token_id = self.tokenizer.eos_token_id

    def get_response(self, prompt):
        # Tokenize and left-pad the prompt; truncation keeps it within max_length.
        inputs = self.tokenizer.encode_plus(prompt, return_tensors="pt", padding='max_length',
                                            truncation=True, max_length=100)
        # Move inputs to the GPU when the quantized model was placed there.
        if next(self.model.parameters()).is_cuda:
            inputs = {name: tensor.to('cuda') for name, tensor in inputs.items()}
        start_time = time.time()
        tokens = self.model.generate(input_ids=inputs['input_ids'],
                                     attention_mask=inputs['attention_mask'],
                                     pad_token_id=self.tokenizer.pad_token_id,
                                     max_new_tokens=400)
        end_time = time.time()
        # Slice off the prompt tokens so only newly generated text is decoded.
        output_tokens = tokens[0][inputs['input_ids'].shape[-1]:]
        output = self.tokenizer.decode(output_tokens, skip_special_tokens=True)
        time_taken = end_time - start_time
        return output, time_taken

def main():
    chatbot = Chatbot("LoupGarou/WizardCoder-Guanaco-15B-V1.1")
    while True:
        user_input = input("Enter your prompt (or 'quit' to exit): ")
        if user_input.lower() == 'quit':
            break
        output, time_taken = chatbot.get_response(user_input)
        print("\033[33m" + output + "\033[0m")
        print("Time taken to process: ", time_taken, "seconds")
    print("Exited the program.")

if __name__ == "__main__":
    main()
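
WizardCoder models are commonly prompted with an Alpaca-style instruction template. Whether this particular finetune expects exactly that template is an assumption, but wrapping raw user input as sketched below often improves responses:

def build_prompt(instruction):
    # Alpaca-style template commonly used with WizardCoder models;
    # that this finetune expects it is an assumption, not a documented fact.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:"
    )

# Example: output, time_taken = chatbot.get_response(build_prompt("Reverse a string in Python."))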

Training Procedure

The base WizardCoder model was fine-tuned on a modified version of the openassistant-guanaco dataset. Every answer in that dataset was replaced with a response generated by GPT-4, and the dataset was then expanded by approximately 50%, emphasizing high school and abstract algebra questions, with answers generated by a mix of GPT-4 and GPT-3.5-Turbo.

The dataset was also standardized so that each example fell within two standard deviations of the mean token count, ensuring consistency in data handling. The order of the questions was randomized to mitigate potential ordering biases during training.

To optimize memory usage during training, the dataset was streamlined to include only English-language content; all non-English data was removed from the fine-tuning set. This limits the model's performance on translation tasks, but it significantly boosts efficiency and effectiveness on English-language questions and answers.
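
A minimal sketch of that cleanup, assuming the langdetect package and the same hypothetical pairs format as the earlier sketch (the actual script is not published):

import random
from langdetect import detect

def keep_english_and_shuffle(pairs, seed=42):
    english = []
    for p in pairs:
        try:
            # Keep a pair only when both sides are detected as English.
            if detect(p["input"]) == "en" and detect(p["output"]) == "en":
                english.append(p)
        except Exception:
            continue  # Skip pairs the detector cannot classify.
    random.Random(seed).shuffle(english)  # Randomize example order, as described above.
    return english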

Acknowledgements

This model, WizardCoder-Guanaco-15B-V1.1, simply builds on the efforts of two great teams, evaluating the performance of a model that combines the strengths of the WizardCoder base model and the openassistant-guanaco dataset.

A sincere appreciation goes out to the developers and the community involved in the creation and refinement of these models. Their commitment to providing open-source tools and datasets has been instrumental in making this project a reality.

Moreover, a special note of thanks to the Hugging Face team, whose transformative library has not only streamlined the process of model creation and adaptation, but also democratized the access to state-of-the-art machine learning technologies. Their impact on the development of this project cannot be overstated.
