# How to run the evaluation code
## Data Preparation and Organization
The evaluation code assumes the directory structure below. To use a different structure, change the corresponding `X_DIR` variables in `src/utils`.
```
root_folder
├── src
│   └── eval codes (the directory this README is in)
└── data (DATA_DIR)
    ├── results
    │   └── experiment_results.pkl
    ├── bird_data (SQLITE_DB_DIR)
    │   ├── db_name
    │   │   └── db_name.sqlite
    │   └── ...
    ├── dp-bench (BENCH_DIR)
    │   ├── v7_dp_bench_with_topics.json
    │   └── v7_hard_dp_bench_with_topics.json
    └── bird_elt_dp
        └── bird_db_schema (SCHEMA_DIR)
            ├── bird_train_tables.json
            └── bird_dev_tables.json
```
The SQLite DBs should be downloaded from https://bird-bench.github.io/ and organized into a single directory, such that each database's `.sqlite` file sits at a path like `bird_data/address/address.sqlite`.
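A quick way to confirm the DBs are organized as expected is to check that every subdirectory of `SQLITE_DB_DIR` contains a `.sqlite` file named after itself. The helper below is a minimal sketch (not part of the evaluation code); the demo runs it on a throwaway temp directory rather than your real `bird_data`.

```python
from pathlib import Path
import tempfile

def check_sqlite_layout(db_root: Path) -> list[str]:
    """Return names of database directories missing their <name>.sqlite file."""
    missing = []
    for db_dir in sorted(p for p in db_root.iterdir() if p.is_dir()):
        if not (db_dir / f"{db_dir.name}.sqlite").exists():
            missing.append(db_dir.name)
    return missing

# Demo on a throwaway tree mimicking the expected layout.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp) / "bird_data"
    (root / "address").mkdir(parents=True)
    (root / "address" / "address.sqlite").touch()
    (root / "books").mkdir()  # deliberately missing books.sqlite
    print(check_sqlite_layout(root))  # -> ['books']
```

Point `check_sqlite_layout` at your real `SQLITE_DB_DIR` path to verify the download before running evaluations.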
## Environment setup
Create a virtual environment and install the required packages listed in `requirements.txt`.
## Running evaluations
Basic evaluation code is found in `src/dp_bench_evaluations.py`.

`calc_results` takes a file with data product predictions as input and calculates various metrics against the ground-truth data product data. `eval_text2sql` evaluates the accuracy of text2sql experiments.
Example usage:

```python
import json
import pickle

# BENCH_DIR, SCHEMA_DIR, and DATA_DIR are defined in src/utils;
# calc_results and eval_text2sql come from src/dp_bench_evaluations.py.

with open((BENCH_DIR / 'v7_dp_bench_with_topics.json').resolve(), 'r') as f:
    dpr_data = json.load(f)
with open((SCHEMA_DIR / 'bird_train_tables.json').resolve(), 'r') as f:
    raw_schema_data = json.load(f)
with open((SCHEMA_DIR / 'bird_dev_tables.json').resolve(), 'r') as f:
    raw_schema_data |= json.load(f)

dp_result_file = 'dpbench_baseline_gptoss_120b.pkl'
with open((DATA_DIR / 'results' / dp_result_file).resolve(), 'rb') as f:
    experiment_res = pickle.load(f)
print(f"Results for {dp_result_file}")
calc_results(experiment_res, dpr_data, schema_data=raw_schema_data,
             evaluate_derived_columns_soft_match=True)

text2sql_result_file = 'llama4_gptoss_120b_text2sql.json'
with open((DATA_DIR / 'results' / text2sql_result_file).resolve(), 'r') as f:
    text2sql_experiment_res = json.load(f)
print(f"Results for {text2sql_result_file}")
eval_text2sql(text2sql_experiment_res)
```
Experiment results are expected to be formatted as follows. For data product generation results, we currently expect a dictionary shaped like:
```python
{
    'address': [
        {
            'DPR': 'Collect data that will allow queries on geographic and demographic characteristics, including population, households, and economic information, as well as details about employment, congressional representatives and postal services, allowing for insights into various aspects of residential areas and their surroundings.',
            'PREDICTION': {
                'zip_data': ['zip_code', 'city', 'state', ...],
                'area_code': [...],
                ...,
                'DERIVED_COLUMNS': [
                    {
                        'name': 'population_density',
                        'SQL': 'SELECT zip_code, (CAST(population_2020 AS REAL) / NULLIF(land_area, 0)) AS population_density FROM zip_data;',
                        'source_columns': {
                            'zip_data': ['zip_code', 'population_2020', 'land_area']
                        }
                    },
                    ...
                ]
            }
        },
        {
            'DPR': ...,
            ...
        }
    ],
    'db_name': [...],
    ...
}
```
Each database name is a key in the dict, and its value is a list of predicted data products, one per DPR in our benchmark. The `PREDICTION` contents are a dictionary whose keys are the tables to include in the data product and whose values are the lists of columns to include. Any derived columns produced as part of the prediction are included under `DERIVED_COLUMNS`.
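To make the shape concrete, here is a minimal sketch that traverses a results dict of this form and counts predicted tables and derived columns per database. The `example_results` value is fabricated for illustration; it is not benchmark data.

```python
example_results = {
    'address': [
        {
            'DPR': 'Collect data on residential areas.',  # fabricated example DPR
            'PREDICTION': {
                'zip_data': ['zip_code', 'city', 'state'],
                'DERIVED_COLUMNS': [
                    {
                        'name': 'population_density',
                        'SQL': 'SELECT zip_code, population_2020 / land_area AS population_density FROM zip_data;',
                        'source_columns': {'zip_data': ['zip_code', 'population_2020', 'land_area']},
                    }
                ],
            },
        }
    ]
}

def summarize(results):
    """Count predicted tables and derived columns per database."""
    summary = {}
    for db, preds in results.items():
        tables = derived = 0
        for pred in preds:
            p = pred['PREDICTION']
            derived += len(p.get('DERIVED_COLUMNS', []))
            # Every key except DERIVED_COLUMNS names a predicted table.
            tables += sum(1 for k in p if k != 'DERIVED_COLUMNS')
        summary[db] = {'tables': tables, 'derived_columns': derived}
    return summary

print(summarize(example_results))  # -> {'address': {'tables': 1, 'derived_columns': 1}}
```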
For Text2SQL results, we expect a list of dictionaries formatted like:

```python
[
    {
        'database': 'address',
        'question': 'Which residential area in Arecibo county has the highest average house value? Please give its zip_code.',
        'GT': "SELECT T1.zip_code FROM zip_data AS T1 INNER JOIN country AS T2 ON T1.zip_code = T2.zip_code WHERE T2.county = 'ARECIBO' ORDER BY T1.avg_house_value DESC LIMIT 1",
        'pred': "SELECT T1.zip_code FROM zip_data AS T1 INNER JOIN country AS T2 ON T1.zip_code = T2.zip_code WHERE T2.county = 'ARECIBO' ORDER BY T1.avg_house_value DESC LIMIT 1;"
    },
    ...
]
```
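`eval_text2sql` is the authoritative scorer (and presumably compares query results against the SQLite DBs). As a quick sanity check on a results file before running it, a naive normalized exact-match rate can be sketched; this is an assumption-laden stand-in, not the benchmark metric, since semantically equivalent queries with different surface forms will be counted as wrong.

```python
def naive_exact_match(results):
    """Fraction of entries whose normalized pred SQL string equals the GT SQL.

    Crude sanity check only: normalization is just lowercasing, whitespace
    collapsing, and stripping a trailing semicolon.
    """
    def norm(sql):
        return ' '.join(sql.strip().rstrip(';').split()).lower()
    hits = sum(norm(r['pred']) == norm(r['GT']) for r in results)
    return hits / len(results) if results else 0.0

# Fabricated demo entry in the expected format.
demo = [{
    'database': 'address',
    'question': 'Which residential area has the highest average house value?',
    'GT': 'SELECT zip_code FROM zip_data ORDER BY avg_house_value DESC LIMIT 1',
    'pred': 'select zip_code from zip_data order by avg_house_value desc limit 1;',
}]
print(naive_exact_match(demo))  # -> 1.0
```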