
Model Card for Akil15/Gemma_SQL_v.0.1

This is a Supervised Fine-Tuned (SFT) Gemma-2B model for SQL generation tasks, trained without flash-attention or other methods/libraries for speeding up inference. Fine-tuning was done with the LoRA (Low-Rank Adaptation) method.
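The LoRA hyperparameters are not listed in this card; the snippet below is only a minimal sketch of how such an adapter is typically configured with PEFT, and the rank, alpha, dropout, and target modules are assumptions rather than the values used for this model.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Assumed hyperparameters for illustration only; the actual values used for this model are not documented.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # common attention projections in Gemma
    task_type="CAUSAL_LM",
)

base_model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()  # only the low-rank adapter weights are trainable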

Model Details

Model Description

This is an SFT fine-tuned Gemma-2B model for SQL generation. Prompts are provided to the model in the format below (example):

Question: What is the average number of cows per farm with more than 100 acres of land?
Context: CREATE TABLE farm (Cows INTEGER, Acres INTEGER)
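A small helper like the one below reproduces that prompt layout; build_prompt is an illustrative name and not part of the released code.

def build_prompt(question: str, context: str) -> str:
    # Mirrors the format shown above: the natural-language question followed by the table schema.
    return f"Question: {question} Context: {context}"

prompt = build_prompt(
    "What is the average number of cows per farm with more than 100 acres of land?",
    "CREATE TABLE farm (Cows INTEGER, Acres INTEGER)",
)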

This prompt formatting was applied to the dataset to improve training loss over time during training, as well as to reduce basic inference latency.
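As a sketch of how that formatting could be applied, the snippet below maps a formatting function over a Hugging Face dataset; the dataset path and column names are placeholders, since the training set is not named in this card, and appending the target SQL as the completion is an assumption about the SFT setup.

from datasets import load_dataset

# Placeholder dataset path and column names -- substitute the actual SQL dataset used for training.
dataset = load_dataset("placeholder/sql-dataset", split="train")

def format_example(example):
    example["text"] = (
        f"Question: {example['question']} "
        f"Context: {example['context']} "
        f"Answer: {example['answer']}"  # assumed column holding the reference SQL query
    )
    return example

dataset = dataset.map(format_example)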

  • Finetuned from model: google/gemma-2b

Inference Code:

Import the required libraries, then define the device map and model identifiers:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

device_map = {"": 0}
model_id = "google/gemma-2b"
new_model = "Akil15/Gemma_SQL_v.0.1"

Reload the base model in FP16 and merge it with the LoRA weights:

base_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map=device_map,
)

model = PeftModel.from_pretrained(base_model, new_model)
model = model.merge_and_unload()

Reload the tokenizer and set its padding behaviour:

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

text = input()
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
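As a quick check, the interactive input() call can be replaced with the example prompt from the description above (the generated SQL is not reproduced here, since it depends on decoding settings):

text = (
    "Question: What is the average number of cows per farm with more than 100 acres of land? "
    "Context: CREATE TABLE farm (Cows INTEGER, Acres INTEGER)"
)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))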

Framework versions

  • PEFT 0.9.0
