---
license: other
library_name: transformers
tags:
  - trl
  - sft
  - generated_from_trainer
base_model: google/gemma-2b
model-index:
  - name: gemma-2b_text_to_sql
    results: []
inference:
  parameters:
    do_sample: false
    max_length: 200
widget:
  - text: >-
      CREATE TABLE stadium (
          stadium_id number,
          location text,
          name text,
          capacity number
      )


      -- Using valid SQLite, answer the following questions for the tables
      provided above.


      -- how many stadiums in total?


      SELECT
    example_title: Number stadiums
  - text: >-
      CREATE TABLE work_orders ( ID NUMBER, CREATED_AT TEXT, COST FLOAT,
      INVOICE_AMOUNT FLOAT, IS_DUE BOOLEAN, IS_OPEN BOOLEAN, IS_OVERDUE BOOLEAN,
      COUNTRY_NAME TEXT )


      -- Using valid SQLite, answer the following questions for the tables
      provided above.


      -- how many work orders are open?


      SELECT
    example_title: Open work orders
  - text: >-
      CREATE TABLE stadium ( stadium_id number, location text, name text,
      capacity number, highest number, lowest number, average number )


      CREATE TABLE singer ( singer_id number, name text, country text, song_name
      text, song_release_year text, age number, is_male others )


      CREATE TABLE concert ( concert_id number, concert_name text, theme text,
      stadium_id text, year text )


      CREATE TABLE singer_in_concert ( concert_id number, singer_id text )


      -- Using valid SQLite, answer the following questions for the tables
      provided above.


      -- What is the maximum, the average, and the minimum capacity of stadiums?


      SELECT
    example_title: Stadium capacity
pipeline_tag: text-generation
---

# text_to_sql

This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
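Usage is not documented on this card, so the following is a minimal inference sketch based on the widget examples and settings above (greedy decoding, `max_length` 200). The repo id `singhjagpreet/text_to_sql` is assumed; and since PEFT 0.8.2 is listed under framework versions, the repository may hold only a LoRA adapter, in which case `peft.AutoPeftModelForCausalLM.from_pretrained` would be used instead of `AutoModelForCausalLM`.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "singhjagpreet/text_to_sql"  # assumed repo id for this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Prompt format mirrors the widget examples: schema, an instruction comment,
# the question as a comment, then a dangling SELECT for the model to complete.
prompt = """CREATE TABLE stadium (
    stadium_id number,
    location text,
    name text,
    capacity number
)

-- Using valid SQLite, answer the following questions for the tables provided above.

-- how many stadiums in total?

SELECT"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_length=200, do_sample=False)  # widget settings
print(tokenizer.decode(output[0], skip_special_tokens=True))
```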

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 25
- mixed_precision_training: Native AMP
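These values map directly onto `transformers.TrainingArguments`, and the `trl`/`sft` tags suggest training went through `trl`'s `SFTTrainer`. Below is a hedged reconstruction: only the `TrainingArguments` values come from this card, while the dataset, its text field, the sequence length, and the LoRA config are placeholder assumptions.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

# Placeholder: the card only says "an unknown dataset".
train_dataset = load_dataset("json", data_files="train.json", split="train")

args = TrainingArguments(
    output_dir="gemma-2b_text_to_sql",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,   # 4 x 4 = total train batch size of 16
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=25,
    fp16=True,                       # "Native AMP"; could equally be bf16
)

trainer = SFTTrainer(
    model="google/gemma-2b",
    args=args,
    train_dataset=train_dataset,
    dataset_text_field="text",       # assumed field name
    max_seq_length=512,              # assumed
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # assumed LoRA settings
)
trainer.train()
```

Adam with betas=(0.9, 0.999) and epsilon=1e-08 are the `TrainingArguments` defaults, so they need no explicit arguments.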

### Training results

### Framework versions

- PEFT 0.8.2
- Transformers 4.38.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2