---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- b-mc2/sql-create-context
metrics:
- accuracy
language:
- en
model-index:
- name: llama3-8b-instruct-text-to-sql
  results: []
---

# llama3-8b-instruct-text-to-sql

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the [b-mc2/sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context) dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3

### Training results

- accuracy: 79.90

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.0+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1

### Training notebook

The full training notebook is available on [my GitHub](https://github.com/bofen97/llama3-8b-instruct-text-to-sql/blob/main/llama3-8b-instruct-text-to-sql.ipynb).

### Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "tyfeng1997/llama3-8b-instruct-text-to-sql"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The system prompt carries the table SCHEMA; the user turn carries the question.
messages = [
    {"role": "system", "content": "You are a text to SQL query translator. Users will ask you questions in English and you will generate a SQL query based on the provided SCHEMA.\nSCHEMA:\nCREATE TABLE match_season (College VARCHAR, POSITION VARCHAR)"},
    {"role": "user", "content": "Which college have both players with position midfielder and players with position defender?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Stop on either the model's EOS token or Llama 3's end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0]
print(tokenizer.decode(response, skip_special_tokens=True))
# system
# You are a text to SQL query translator. Users will ask you questions in English and you will generate a SQL query based on the provided SCHEMA.
# SCHEMA:
# CREATE TABLE match_season (College VARCHAR, POSITION VARCHAR)
# user
# Which college have both players with position midfielder and players with position defender?
# assistant
# SELECT College FROM match_season WHERE POSITION = "Midfielder" INTERSECT SELECT College FROM match_season WHERE POSITION = "Defender"
```
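
### Loading with PEFT

Since the card lists `library_name: peft` and PEFT 0.10.0 among the framework versions, the repository may ship LoRA adapter weights rather than a fully merged model. Below is a minimal sketch, assuming the repo hosts a PEFT adapter, of loading it with `AutoPeftModelForCausalLM`, which fetches the base model and attaches the adapter automatically; if the repo already contains merged weights, the `AutoModelForCausalLM` path above is sufficient.

```python
# Alternative loading path, assuming this repo hosts only the LoRA adapter
# (hypothetical: skip this if the uploaded weights are already merged).
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer
import torch

model_id = "tyfeng1997/llama3-8b-instruct-text-to-sql"

tokenizer = AutoTokenizer.from_pretrained(model_id)

# Downloads the base model referenced in the adapter config and applies the adapter.
model = AutoPeftModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Optionally fold the adapter into the base weights for faster inference,
# at the cost of losing the ability to detach the adapter afterwards.
model = model.merge_and_unload()
```

After loading, generation works exactly as in the Usage example above.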