---
datasets:
- b-mc2/sql-create-context
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- text-2-sql
- text-generation
---

# Model Description

This model is a fine-tune of the Llama-2 7B model on a text-to-SQL dataset, using the Alpaca prompt format described by Meta. The dataset is "b-mc2/sql-create-context", available on Hugging Face. We used QLoRA together with the bitsandbytes, Accelerate, and Transformers libraries to apply parameter-efficient fine-tuning (PEFT). The base model is the pre-trained Llama-2 7B chat model published as 'NousResearch/Llama-2-7b-chat-hf'.

# Inference

```python
# Install dependencies first (e.g., in a notebook):
# pip install transformers accelerate xformers bitsandbytes

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("ekshat/Llama-2-7b-chat-finetune-for-text2sql")

# Load the model in 4-bit precision to reduce GPU memory usage
model = AutoModelForCausalLM.from_pretrained(
    "ekshat/Llama-2-7b-chat-finetune-for-text2sql",
    load_in_4bit=True,
)

context = "CREATE TABLE head (name VARCHAR, born_state VARCHAR, age VARCHAR)"
question = "List the name, born state and age of the heads of departments ordered by age."

# Alpaca-style prompt template used for fine-tuning
prompt = f"""Below is a context that describes a SQL query, paired with a question that provides further information. Write an answer that appropriately completes the request.

### Context:
{context}

### Question:
{question}

### Answer:"""

pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=200)
result = pipe(prompt)
print(result[0]['generated_text'])
```
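
The prompt construction and answer extraction above can be wrapped in small helpers. This is an illustrative sketch, not part of the model's API: the names `build_prompt` and `extract_sql` are my own, and `extract_sql` simply takes everything after the final `### Answer:` marker, which works because the text-generation pipeline returns the prompt followed by the completion.

```python
def build_prompt(context: str, question: str) -> str:
    """Format a (context, question) pair with the Alpaca-style template."""
    return f"""Below is a context that describes a SQL query, paired with a question that provides further information. Write an answer that appropriately completes the request.

### Context:
{context}

### Question:
{question}

### Answer:"""


def extract_sql(generated_text: str) -> str:
    """Return only the text after the final '### Answer:' marker."""
    return generated_text.split("### Answer:")[-1].strip()
```

With these helpers, inference becomes `result = pipe(build_prompt(context, question))` followed by `sql = extract_sql(result[0]['generated_text'])`.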