{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# BigQuery Essentials for Teradata Users\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "In this lab you will take an existing 2TB+ [TPC-DS benchmark dataset](http://www.tpc.org/tpc_documents_current_versions/pdf/tpc-ds_v2.10.0.pdf) and learn common day-to-day activities you'll perform in BigQuery. \n",
    "\n",
    "### What you'll do\n",
    "\n",
    "In this lab, you will learn how to:\n",
    "\n",
    "- Use BigQuery to access and query the TPC-DS benchmark dataset\n",
    "- Understand common differences between Teradata and BigQuery\n",
    "- Run pre-defined queries to establish baseline performance benchmarks\n",
    "\n",
    "\n",
    "### BigQuery\n",
    "\n",
    "[BigQuery](https://cloud.google.com/bigquery/) is Google's fully managed, NoOps, low-cost analytics database. With BigQuery you can query terabytes upon terabytes of data without managing infrastructure, freeing you to focus on analyzing data to find meaningful insights.\n",
    "\n",
    "## TPC-DS Background\n",
    "To benchmark the performance of a data warehouse, we first need tables and data to run queries against. The TPC, a public organization, provides large benchmarking datasets explicitly for this purpose; its benchmarks exist to give industry users relevant, objective performance data.\n",
    "\n",
    "The TPC-DS dataset we will be using comprises __25 tables__ and __99 queries__ that simulate common data analysis tasks. View the full documentation [here](http://www.tpc.org/tpc_documents_current_versions/pdf/tpc-ds_v2.11.0.pdf).\n",
    "\n",
    "## Exploring TPC-DS in BigQuery\n",
    "\n",
    "The TPC-DS tables have been loaded into BigQuery for you to explore. We have limited the size to 2TB to keep this lab's runtime reasonable, but the dataset itself can be expanded as needed.\n",
    "\n",
    "Note: The TPC Benchmark and TPC-DS are trademarks of the Transaction Processing Performance Council (http://www.tpc.org). The Cloud DW benchmark is derived from the TPC-DS Benchmark and as such is not comparable to published TPC-DS results."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Google Cloud and BigQuery organization"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, a note on resource hierarchy. At the lowest level, resources are the fundamental components that make up all Google Cloud services. Examples of resources include Compute Engine Virtual Machines (VMs), Pub/Sub topics, Cloud Storage buckets, App Engine instances, and BigQuery datasets. All these lower level resources can only be parented by projects, which represent the first grouping mechanism of the Google Cloud resource hierarchy.\n",
    "\n",
    "You may have noticed you had a project name in the upper left of the console when you opened this notebook:\n",
    "\n",
    "\n",
    "<img src=\"img/project.png\">\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can also run a local `gcloud` command to see which project and account are currently set:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "\n",
    "gcloud config list"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Google Cloud resources are organized hierarchically. Starting from the bottom of the hierarchy, projects are the first level, and they contain other resources. All resources except for organizations have exactly one parent. The Organization is the top of the hierarchy and does not have a parent.\n",
    "\n",
    "Folders are an additional grouping mechanism on top of projects.\n",
    "\n",
    "<img src=\"img/cloud-folders-hierarchy.png\">"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For a BigQuery user, this is helpful to know because access management (IAM) policies and organizational policies are largely imposed at the project, folder, or organization level. Also, BigQuery \"reservations\", or chunks of allocated BigQuery compute (but not storage), are currently assigned at the project or folder level."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### BigQuery Datasets"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>A dataset is contained within a specific <a href=\"https://cloud.google.com/bigquery/docs/projects\">project</a>. Datasets\n",
    "  are top-level containers that are used to organize and control access to your\n",
    "  <a href=\"https://cloud.google.com/bigquery/docs/tables\">tables</a> and <a href=\"https://cloud.google.com/bigquery/docs/views\">views</a>. A table\n",
    "  or view must belong to a dataset, so you need to create at least one dataset before\n",
    "  <a href=\"https://cloud.google.com/bigquery/docs/loading-data\">loading data into BigQuery</a>.</p>\n",
    "  \n",
    "  BigQuery datasets are subject to the following limitations:\n",
    "\n",
    "* You can set the geographic location at creation time only. After a dataset has\n",
    "  been created, the location becomes immutable and can't be changed by using the\n",
    "  Console, using the `bq` tool, or calling the `patch` or\n",
    "  `update` API methods.\n",
    "* All tables that are referenced in a query must be stored in datasets in the\n",
    "  same location.\n",
    "* When [you copy a table](https://cloud.google.com/bigquery/docs/managing-tables#copy-table), the\n",
    "  datasets that contain the source table and destination table must reside in\n",
    "  the same location.\n",
    "* Dataset names must be unique for each project."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "How many datasets are in your current project? Run the following to find out:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!bq ls"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For this lab, you will be accessing data stored in _another_ project, in this case a publicly accessible sample project `qwiklabs-resources`. See how many datasets exist in this project:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!bq ls --project_id qwiklabs-resources"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And let's look at the tables and views in one of these datasets:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!bq ls --project_id qwiklabs-resources tpcds_2t_baseline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "But how are we able to access other data? And won't querying that data create work in that user's cluster? Not at all! Because BigQuery completely separates the compute and storage layers so they can scale independently, we can easily query data in public datasets or datasets from other teams (so long as we have permissions) without incurring compute costs for them, _and without slowing their queries down, even if we're accessing the same data_.\n",
    "\n",
    "To explain why, let's dive a little deeper into BigQuery's architecture."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## BigQuery Architecture\n",
    "\n",
    "BigQuery’s serverless architecture decouples storage and compute and allows them to scale independently on demand. This structure offers both immense flexibility and cost controls for customers because they don’t need to keep their expensive compute resources up and running all the time. This is very different from traditional node-based cloud data warehouse solutions or on-premise massively parallel processing (MPP) systems. This approach also allows customers of any size to bring their data into the data warehouse and start analyzing their data using Standard SQL without worrying about database operations and system engineering.\n",
    "\n",
    "<img src=\"img/bq_explained_2.jpg\">"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Under the hood, BigQuery employs a vast set of multi-tenant services driven by low-level Google infrastructure technologies like [Dremel, Colossus, Jupiter and Borg](https://cloud.google.com/blog/products/gcp/bigquery-under-the-hood).\n",
    "\n",
    "<img src=\"img/bq_explained_3.jpg\">"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__Compute is Dremel, a large multi-tenant cluster that executes SQL queries.__\n",
    "\n",
    "Dremel turns SQL queries into distributed, scaled-out execution plans. The nodes of these execution plans are called slots and do the heavy lifting of reading data from storage and any necessary computation. \n",
    "\n",
    "Dremel dynamically apportions slots to queries on an as-needed basis, maintaining fairness for concurrent queries from multiple users. A single user can get thousands of slots to run their queries. These slots are assigned just-in-time to your query, and the moment that unit of work is done it gets assigned new work, potentially for someone else's query. This is how BigQuery is able to execute so quickly at low cost. You don't have to over-provision resources like you would with statically sized clusters."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__Storage is Colossus, Google’s global storage system.__\n",
    "\n",
    "BigQuery leverages the [columnar storage format](https://cloud.google.com/blog/products/gcp/inside-capacitor-bigquerys-next-generation-columnar-storage-format) and compression algorithm to store data in Colossus, optimized for reading large amounts of structured data. This is the same technology powering Google Cloud's blob storage service, [GCS](https://cloud.google.com/storage).\n",
    "\n",
    "Colossus also handles replication, recovery (when disks crash) and distributed management (so there is no single point of failure). Colossus allows BigQuery users to scale to dozens of petabytes of data stored seamlessly, without paying the penalty of attaching much more expensive compute resources as in traditional data warehouses."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__Compute and storage talk to each other through the petabit Jupiter network.__\n",
    "\n",
    "In between storage and compute is ‘shuffle’, which takes advantage of Google’s Jupiter network to move data extremely rapidly from one place to another."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__BigQuery is orchestrated via Borg, Google’s precursor to Kubernetes.__\n",
    "\n",
    "The mixers and slots are all run by Borg, which allocates hardware resources. Essentially, a single BigQuery 'cluster' can run on thousands of physical machines at once _and_ be securely shared between users, delivering massive compute power just in time to those who need it."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### What does this mean for you?\n",
    "\n",
    "Working with BigQuery is different. Some concepts that are __important__:\n",
    "* Compute and storage are separate, and storage is CHEAP - making copies of data will not consume space on compute nodes as in previous systems. It is also easy to set a TTL on temporary datasets and tables so they are garbage-collected automatically.\n",
    "* The 'workers' in BigQuery are called slots. These are scheduled fairly amongst all the users and queries within a project. Sometimes your query is bound by the amount of parallelism BigQuery can achieve; sometimes it is bound by the number of slots available to your organization - in which case getting more slots will speed it up.\n",
    "* While your organization may have a reservation of slots, meaning a guaranteed amount of compute power available to teams, your organization doesn't have its own BigQuery cluster, per se. It is running in a much larger installation of BigQuery, shared securely amongst other customers. This means you can increase or decrease the number of slots your organization has reserved at a moment's notice with [Flex Slots](https://cloud.google.com/blog/products/data-analytics/introducing-bigquery-flex-slots)."
   ]
  },
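  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a small sketch of the TTL point above (assuming a scratch dataset named `scratch` already exists in your project), you can set a default table expiration so temporary tables are garbage-collected automatically:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Set a 1-hour default TTL on new tables in a hypothetical 'scratch' dataset\n",
    "!bq update --default_table_expiration 3600 scratch"
   ]
  },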
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Exploring the TPC-DS Schema with SQL\n",
    "\n",
    "Question: \n",
    "- How many tables are in the dataset?\n",
    "- What is the name of the largest table (in GB)? How many rows does it have?\n",
    "- Note the `FROM` clause - which identifier is the project, which is the datasets, and which is the table or view?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "SELECT \n",
    "  dataset_id,\n",
    "  table_id,\n",
    "  -- Convert bytes to GB.\n",
    "  ROUND(size_bytes/pow(10,9),2) as size_gb,\n",
    "  -- Convert UNIX EPOCH to a timestamp.\n",
    "  TIMESTAMP_MILLIS(creation_time) AS creation_time,\n",
    "  TIMESTAMP_MILLIS(last_modified_time) as last_modified_time,\n",
    "  row_count,\n",
    "  CASE \n",
    "    WHEN type = 1 THEN 'table'\n",
    "    WHEN type = 2 THEN 'view'\n",
    "  ELSE NULL\n",
    "  END AS type\n",
    "FROM\n",
    "  `qwiklabs-resources.tpcds_2t_baseline.__TABLES__`\n",
    "ORDER BY size_gb DESC"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The core tables in the data warehouse are derived from 5 separate core operational systems (each with many tables):\n",
    "\n",
    "![tpc-ds-components.png](img/tpc-ds-components.png)\n",
    "\n",
    "These systems are driven by the core functions of our retail business. As you can see, our store accepts sales online (web), by mail order (catalog), and in store. The business must keep track of inventory and can offer promotional discounts on items sold."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Exploring all available columns of data\n",
    "\n",
    "Question:\n",
    "- How many columns of data are in the entire dataset (all tables)?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "SELECT * FROM \n",
    " `qwiklabs-resources.tpcds_2t_baseline.INFORMATION_SCHEMA.COLUMNS`"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Question:\n",
    "- Are any of the columns of data in this baseline dataset partitioned or clustered? (This will be covered in another lab)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "SELECT * FROM \n",
    " `qwiklabs-resources.tpcds_2t_baseline.INFORMATION_SCHEMA.COLUMNS`\n",
    "WHERE \n",
    "  is_partitioning_column = 'YES' OR clustering_ordinal_position IS NOT NULL"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Question\n",
    "- How many columns of data does each table have (sorted from most to least)?\n",
    "- Which table has the most columns of data?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "SELECT \n",
    "  COUNT(column_name) AS column_count, \n",
    "  table_name \n",
    "FROM \n",
    " `qwiklabs-resources.tpcds_2t_baseline.INFORMATION_SCHEMA.COLUMNS`\n",
    "GROUP BY table_name\n",
    "ORDER BY column_count DESC, table_name"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Previewing sample rows of data values\n",
    "\n",
    "Click on the `catalog_sales` table name for the `tpcds_2t_baseline` dataset under `qwiklabs-resources`\n",
    "\n",
    "Question\n",
    "- How many rows are in the table?\n",
    "- How large is the table in TB?\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!bq show qwiklabs-resources:tpcds_2t_baseline.catalog_sales"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "Question:\n",
    "- `Preview` the data and find the Catalog Sales Extended Sales Price `cs_ext_sales_price` field (which is calculated based on product quantity * sales price)\n",
    "- Are there any missing data values for Catalog Sales Quantity (`cs_quantity`)? \n",
    "- Are there any missing values for cs_ext_ship_cost? For what type of product could this be expected? (Digital products)\n",
    "\n",
    "We are using the `bq head` command line tool to avoid the full table scan that a `SELECT * LIMIT 15` would incur."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!bq head -n 15 --selected_fields \"cs_order_number,cs_quantity,cs_ext_sales_price,cs_ext_ship_cost\"  qwiklabs-resources:tpcds_2t_baseline.catalog_sales "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Create an example sales report\n",
    "\n",
    "__TODO(you):__ Write a query that shows key sales stats for each item sold from the Catalog and execute it here:\n",
    "- total orders\n",
    "- total unit quantity\n",
    "- total revenue\n",
    "- total profit\n",
    "- sorted by total orders highest to lowest, limit 10"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery --verbose\n",
    "--Query should fail\n",
    "\n",
    "SELECT\n",
    "  \n",
    "FROM\n",
    "  `qwiklabs-resources.tpcds_2t_baseline.catalog_sales`\n",
    "\n",
    "LIMIT\n",
    "  10"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery --verbose\n",
    "--Query should succeed\n",
    "\n",
    "SELECT\n",
    "  cs_item_sk,\n",
    "  COUNT(cs_order_number) AS total_orders,\n",
    "  SUM(cs_quantity) AS total_quantity,\n",
    "  SUM(cs_ext_sales_price) AS total_revenue,\n",
    "  SUM(cs_net_profit) AS total_profit\n",
    "FROM\n",
    "  `qwiklabs-resources.tpcds_2t_baseline.catalog_sales`\n",
    "GROUP BY\n",
    "  cs_item_sk\n",
    "ORDER BY\n",
    "  total_orders DESC\n",
    "LIMIT\n",
    "  10"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A note on our data: The TPC-DS benchmark allows data warehouse practitioners to generate any volume of data programmatically. Since the rows of data are system generated, they may not make the most sense in a business context (like why are we selling our top product at such a huge loss!).\n",
    "\n",
    "The good news is that to benchmark our performance we care most about the volume of rows and columns to run our benchmark against. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Analyzing query performance\n",
    "\n",
    "You can use the [INFORMATION_SCHEMA](https://cloud.google.com/bigquery/docs/information-schema-intro) to inspect your query performance. A lot of this data is also presented in the UI under __Execution Details__.\n",
    "\n",
    "Refer to the query below (which should be similar to your results) and answer the following questions.\n",
    "\n",
    "Question\n",
    "- How long did it take the query to run? 14s\n",
    "- How much data in GB was processed? 150GB\n",
    "- How much slot time was consumed? 1hr 7min"
   ]
  },
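  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, using the example figures above (~14 s runtime and 1 hr 7 min of slot time), we can estimate the average number of slots the job consumed:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Average slots ~= total slot time / job duration (example figures from above)\n",
    "slot_seconds = 1 * 3600 + 7 * 60  # 1 hr 7 min of slot time\n",
    "job_seconds = 14                  # wall-clock job duration\n",
    "round(slot_seconds / job_seconds)"
   ]
  },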
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "SELECT\n",
    "  project_id,\n",
    "  job_id,\n",
    "  query,\n",
    "  cache_hit,\n",
    "  reservation_id,\n",
    "  EXTRACT(DATE FROM creation_time) AS creation_date,\n",
    "  creation_time,\n",
    "  end_time,\n",
    "  TIMESTAMP_DIFF(end_time, start_time, SECOND) AS job_duration_seconds,\n",
    "  job_type,\n",
    "  user_email,\n",
    "  state,\n",
    "  error_result,\n",
    "  total_bytes_processed,\n",
    "  total_slot_ms / 1000 / 60 AS slot_minutes,\n",
    "  -- Average slot utilization per job is calculated by dividing\n",
    "  -- total_slot_ms by the millisecond duration of the job\n",
    "  total_slot_ms / (TIMESTAMP_DIFF(end_time, start_time, MILLISECOND)) AS avg_slots\n",
    "FROM\n",
    "  `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT\n",
    "ORDER BY\n",
    "  creation_time DESC\n",
    "LIMIT 15;"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!bq ls -j -a -n 15"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Side note: Slot Time\n",
    "\n",
    "We know the query took about 14 seconds to run, so what does the 1 hr 7 min slot time metric mean?\n",
    "\n",
    "Inside of the BigQuery service are lots of virtual machines that massively process your data and query logic in parallel. These workers, or \"slots\", work together to process a single query job really quickly. For accounts with on-demand pricing, you can have up to 2,000 slots.\n",
    "\n",
    "So say we had 30 minutes of slot time or 1800 seconds. If the query took 20 seconds in total to run, \n",
    "but it was 1800 seconds worth of work, how many workers at minimum worked on it? \n",
    "1800/20 = 90\n",
    "\n",
    "And that's assuming each worker instantly had all the data it needed (no shuffling of data between workers) and was at full capacity for all 20 seconds!\n",
    "\n",
    "In reality, workers have a variety of tasks (waiting for data, reading it, performing computations, and writing data)\n",
    "and also need to compare notes with each other on what work was already done on the job. The good news for you is\n",
    "that you don't need to worry about optimizing these workers or the underlying data to run perfectly in parallel. That's why BigQuery is a managed service -- there's an entire team dedicated to hardware and data storage optimization.\n",
    "\n",
    "The \"avg_slots\" metric indicates the average number of slots utilized by your query at any given time. Portions of the query plan often have different amounts of parallelism and thus can benefit (or not) from more slots. For example, in a basic READ+FILTER+AGGREGATE query, reading data from a large table may require 1,000 slots for the `INPUT` phase since each slot reads a file, but if much of the data is immediately filtered out, the next stage may need only a few slots (or even one) to aggregate. Certain portions of your queries, such as `JOIN`s and `SORT`s, can become bottlenecks for parallelism. BigQuery can execute many of these in parallel, and optimizing such queries is a more advanced topic. At this point, it's important to know slot time and, conceptually, what a slot is.\n",
    "\n",
    "In case you were wondering, the worker limit for your project is 2,000 slots at once. In a production setting, this will vary depending on whether your organization is using \"flat-rate\" or \"on-demand\" pricing. If you're on \"flat-rate\", the number of slots will depend on your organization's reservation, how that reservation is apportioned to different folders, projects, and teams, and how busy each slice of the reservation is at any given moment."
   ]
  },
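  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The arithmetic above can be reproduced directly in a cell:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimum workers implied by slot time, assuming perfect parallelism\n",
    "# and no shuffling between workers\n",
    "slot_seconds = 30 * 60   # 30 minutes of slot time\n",
    "wall_seconds = 20        # total query duration\n",
    "slot_seconds / wall_seconds"
   ]
  },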
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Running a performance benchmark\n",
    "\n",
    "To performance benchmark our data warehouse in BigQuery, we need more than a single SQL report. The good news is that the TPC-DS dataset ships with __99 standard benchmark queries__ that we can run while logging the performance outcomes.\n",
    "\n",
    "In this lab, we are making no adjustments to the existing data warehouse tables (no partitioning, no clustering, no nesting) so we can establish a performance benchmark to beat in future labs."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Viewing the 99 pre-made SQL queries\n",
    "\n",
    "We have a long SQL file with 99 standard queries against this dataset stored in our /sql/ directory.\n",
    "\n",
    "Let's view the first 50 lines of those baseline queries to get familiar with how we will be performance benchmarking our dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!head --lines=50 'sql/example_baseline_queries.sql'"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Running the first benchmark test\n",
    "Now let's run the first query against our dataset and note the execution time. Tip: You can use the [--verbose flag](https://googleapis.dev/python/bigquery/latest/magics.html) in %%bigquery magics to return the job and completion time. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery --verbose\n",
    "# start query 1 in stream 0 using template query96.tpl\n",
    "select  count(*) \n",
    "from `qwiklabs-resources.tpcds_2t_baseline.store_sales` as store_sales\n",
    "    ,`qwiklabs-resources.tpcds_2t_baseline.household_demographics` as household_demographics \n",
    "    ,`qwiklabs-resources.tpcds_2t_baseline.time_dim` as time_dim, \n",
    "    `qwiklabs-resources.tpcds_2t_baseline.store` as store\n",
    "where ss_sold_time_sk = time_dim.t_time_sk   \n",
    "    and ss_hdemo_sk = household_demographics.hd_demo_sk \n",
    "    and ss_store_sk = s_store_sk\n",
    "    and time_dim.t_hour = 8\n",
    "    and time_dim.t_minute >= 30\n",
    "    and household_demographics.hd_dep_count = 5\n",
    "    and store.s_store_name = 'ese'\n",
    "order by count(*)\n",
    "limit 100;"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It should execute in just a few seconds. __Then try running it again__ and see if you get the same performance. BigQuery will automatically [cache the results](https://cloud.google.com/bigquery/docs/cached-results) from the first time you ran the query and then serve those same results to you when you run the query again. We can confirm this by analyzing the query job statistics."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Viewing BigQuery job statistics\n",
    "\n",
    "Let's list our five most recent query jobs run on BigQuery using the `bq` [command line interface](https://cloud.google.com/bigquery/docs/managing-jobs#viewing_information_about_jobs). Then we will get even more detail on our most recent job with the `bq show` command."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!bq ls -j -a -n 5"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__Be sure to replace the job id with your own most recent job.__"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!bq show --format=prettyjson -j fae46669-5e96-4744-9d2c-2b1b95fa21e7"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Looking at the job statistics, we can see our most recent query hit the cache:\n",
    "- `cacheHit: true`, and therefore\n",
    "- `totalBytesProcessed: 0`.\n",
    "\n",
    "While this is great in normal BigQuery use (you aren't charged for queries that hit the cache), it ruins our performance test. Caching is super useful, but we want to disable it for testing purposes."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Disabling Cache and Dry Running Queries\n",
    "As of the time this lab was created, you can't pass a flag to the `%%bigquery` IPython magic to disable cache or to quickly see the amount of data processed, so we will use the traditional `bq` [command line interface in bash](https://cloud.google.com/bigquery/docs/reference/bq-cli-reference#bq_query).\n",
    "\n",
    "First we will do a `dry run` of the query without processing any data just to see how many bytes of data would be processed. Then we will remove that flag and ensure `nouse_cache` is set to avoid hitting cache as well."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash \n",
    "bq query \\\n",
    "--dry_run \\\n",
    "--nouse_cache \\\n",
    "--use_legacy_sql=false \\\n",
    "\"\"\"\\\n",
    "select  count(*) \n",
    "from \\`qwiklabs-resources.tpcds_2t_baseline.store_sales\\` as store_sales\n",
    "    ,\\`qwiklabs-resources.tpcds_2t_baseline.household_demographics\\` as household_demographics  \n",
    "    ,\\`qwiklabs-resources.tpcds_2t_baseline.time_dim\\` as time_dim, \\`qwiklabs-resources.tpcds_2t_baseline.store\\` as store\n",
    "where ss_sold_time_sk = time_dim.t_time_sk   \n",
    "    and ss_hdemo_sk = household_demographics.hd_demo_sk \n",
    "    and ss_store_sk = s_store_sk\n",
    "    and time_dim.t_hour = 8\n",
    "    and time_dim.t_minute >= 30\n",
    "    and household_demographics.hd_dep_count = 5\n",
    "    and store.s_store_name = 'ese'\n",
    "order by count(*)\n",
    "limit 100;\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Convert bytes to GB\n",
    "132086388641 / 1e+9"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "132 GB will be processed. At the time of writing, [BigQuery pricing](https://cloud.google.com/bigquery/pricing) is \\\\$5 per 1 TB (or 1000 GB) of data after the first free 1 TB each month. Assuming we've exhausted our 1 TB free this month, this would be \\\\$0.66 to run.\n",
    "\n",
    "Now let's run it and ensure we're not pulling from cache so we get an accurate time-to-completion benchmark."
   ]
  },
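  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That back-of-the-envelope cost can be checked in a cell (assuming on-demand pricing of \\$5 per TB after the free tier):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# On-demand cost estimate: bytes processed at $5 per TB\n",
    "bytes_processed = 132086388641\n",
    "round(bytes_processed / 1e12 * 5, 2)"
   ]
  },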
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash \n",
    "bq query \\\n",
    "--nouse_cache \\\n",
    "--use_legacy_sql=false \\\n",
    "\"\"\"\\\n",
    "select  count(*) \n",
    "from \\`qwiklabs-resources.tpcds_2t_baseline.store_sales\\` as store_sales\n",
    "    ,\\`qwiklabs-resources.tpcds_2t_baseline.household_demographics\\` as household_demographics  \n",
    "    ,\\`qwiklabs-resources.tpcds_2t_baseline.time_dim\\` as time_dim, \\`qwiklabs-resources.tpcds_2t_baseline.store\\` as store\n",
    "where ss_sold_time_sk = time_dim.t_time_sk   \n",
    "    and ss_hdemo_sk = household_demographics.hd_demo_sk \n",
    "    and ss_store_sk = s_store_sk\n",
    "    and time_dim.t_hour = 8\n",
    "    and time_dim.t_minute >= 30\n",
    "    and household_demographics.hd_dep_count = 5\n",
    "    and store.s_store_name = 'ese'\n",
    "order by count(*)\n",
    "limit 100;\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you're an experienced BigQuery user, you likely have seen these same metrics in the Web UI as well as highlighted in the red box below:\n",
    "\n",
    "![img/bq-ui-results.png](img/bq-ui-results.png)\n",
    "\n",
    "It's a matter of preference whether you do your work in the Web UI or the command line -- each has its advantages.\n",
    "\n",
    "One major advantage of using the `bq` command line interface is the ability to create a script that will run the remaining 98 benchmark queries for us and log the results. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Copy the qwiklabs-resources dataset into your own GCP project\n",
    "\n",
    "We will use the new [BigQuery Transfer Service](https://cloud.google.com/bigquery/docs/copying-datasets) to quickly copy our large dataset from the `qwiklabs-resources` GCP project into your own so you can perform the benchmarking. \n",
    "\n",
    "### Create a new baseline dataset in your project"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "\n",
    "export PROJECT_ID=$(gcloud config list --format 'value(core.project)')\n",
    "export BENCHMARK_DATASET_NAME=tpcds_2t_baseline # Name of the dataset you want to create\n",
    "\n",
    "## Create a BigQuery dataset for tpcds_2t_flat_part_clust if it doesn't exist\n",
    "datasetexists=$(bq ls -d | grep -w $BENCHMARK_DATASET_NAME)\n",
    "\n",
    "if [ -n \"$datasetexists\" ]; then\n",
    "    echo -e \"BigQuery dataset $BENCHMARK_DATASET_NAME already exists, let's not recreate it.\"\n",
    "\n",
    "else\n",
    "    echo \"Creating BigQuery dataset titled: $BENCHMARK_DATASET_NAME\"\n",
    "    \n",
    "    bq --location=US mk --dataset \\\n",
    "        --description 'Benchmark Dataset' \\\n",
    "        $PROJECT:$BENCHMARK_DATASET_NAME\n",
    "\n",
    "fi"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Inspect your project and datasets\n",
    "!bq ls \n",
    "!bq ls tpcds_2t_baseline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here we will use the `bq cp` command to copy tables over. If you need to periodically refresh data, the BQ Transfer service or scheduled queries are good tools as well."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "\n",
    "# Should take about 30 seconds, starts a bunch of asynchronous copy jobs\n",
    "\n",
    "\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.call_center tpcds_2t_baseline.call_center\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.catalog_page tpcds_2t_baseline.catalog_page\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.catalog_returns tpcds_2t_baseline.catalog_returns\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.catalog_sales tpcds_2t_baseline.catalog_sales\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.customer tpcds_2t_baseline.customer\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.customer_address tpcds_2t_baseline.customer_address\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.customer_demographics tpcds_2t_baseline.customer_demographics\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.date_dim tpcds_2t_baseline.date_dim\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.dbgen_version tpcds_2t_baseline.dbgen_version\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.household_demographics tpcds_2t_baseline.household_demographics\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.income_band tpcds_2t_baseline.income_band\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.inventory tpcds_2t_baseline.inventory\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.item tpcds_2t_baseline.item\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.perf tpcds_2t_baseline.perf\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.promotion tpcds_2t_baseline.promotion\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.reason tpcds_2t_baseline.reason\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.ship_mode tpcds_2t_baseline.ship_mode\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.store tpcds_2t_baseline.store\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.store_returns tpcds_2t_baseline.store_returns\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.store_sales tpcds_2t_baseline.store_sales\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.time_dim tpcds_2t_baseline.time_dim\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.warehouse tpcds_2t_baseline.warehouse\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.web_page tpcds_2t_baseline.web_page\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.web_returns tpcds_2t_baseline.web_returns\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.web_sales tpcds_2t_baseline.web_sales\n",
    "bq cp -nosync qwiklabs-resources:tpcds_2t_baseline.web_site tpcds_2t_baseline.web_site\n",
    "    "
   ]
  },
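  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The copy commands above all follow one pattern, so you can also generate them programmatically rather than typing 26 nearly identical lines. A minimal sketch (the table list below is abbreviated from the commands above):\n",
    "\n",
    "```python\n",
    "tables = ['call_center', 'catalog_page', 'store_sales', 'time_dim']  # abbreviated list\n",
    "\n",
    "def copy_command(table, src='qwiklabs-resources:tpcds_2t_baseline', dst='tpcds_2t_baseline'):\n",
    "    # Build the asynchronous bq cp command for one table\n",
    "    return f'bq cp -nosync {src}.{table} {dst}.{table}'\n",
    "\n",
    "for t in tables:\n",
    "    print(copy_command(t))\n",
    "```"
   ]
  },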
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Inspect the tables now in your project."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!bq ls tpcds_2t_baseline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Verify you now have the baseline data in your project\n",
    "\n",
    "Run the below query and confirm you see data. Note that if you omit the `project-id` ahead of the dataset name in the `FROM` clause, BigQuery will assume your default project."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "SELECT COUNT(*) AS store_transaction_count\n",
    "FROM tpcds_2t_baseline.store_sales"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Setup an automated test\n",
    "\n",
    "Running each of the 99 queries manually via the Console UI would be a tedious effort. We'll show you how you can run all 99 programmatically and automatically log the output (time and GB processed) to a log file for analysis. \n",
    "\n",
    "Below is a shell script that:\n",
    "1. Accepts a BigQuery dataset to benchmark\n",
    "2. Accepts a list of semi-colon separated queries to run\n",
    "3. Loops through each query and calls the `bq` query command\n",
    "4. Records the execution time into a separate BigQuery performance table `perf`\n",
    "\n",
    "Execute the below statement and follow along with the results as you benchmark a few example queries (don't worry, we've already ran the full 99 recently so you won't have to).\n",
    "\n",
    "__After executing, wait 1-2 minutes for the benchmark test to complete__\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "# runs the SQL queries from the TPCDS benchmark \n",
    "\n",
    "# Pull the current Google Cloud Platform project name\n",
    "\n",
    "BQ_DATASET=\"tpcds_2t_baseline\" # let's start by benchmarking our baseline dataset \n",
    "QUERY_FILE_PATH=\"./sql/example_baseline_queries.sql\" # the full test is on 99_baseline_queries but that will take 80+ mins to run\n",
    "IFS=\";\"\n",
    "\n",
    "# create perf table to keep track of run times for all 99 queries\n",
    "printf \"\\033[32;1m Housekeeping tasks... \\033[0m\\n\\n\";\n",
    "printf \"Creating a reporting table perf to track how fast each query runs...\";\n",
    "perf_table_ddl=\"CREATE TABLE IF NOT EXISTS $BQ_DATASET.perf(performance_test_num int64, query_num int64, elapsed_time_sec int64, ran_on int64)\"\n",
    "bq rm -f $BQ_DATASET.perf\n",
    "bq query --nouse_legacy_sql $perf_table_ddl \n",
    "\n",
    "start=$(date +%s)\n",
    "index=0\n",
    "for select_stmt in $(<$QUERY_FILE_PATH)　\n",
    "do \n",
    "  # run the test until you hit a line with the string 'END OF BENCHMARK' in the file\n",
    "  if [[ \"$select_stmt\" == *'END OF BENCHMARK'* ]]; then\n",
    "    break\n",
    "  fi\n",
    "\n",
    "  printf \"\\n\\033[32;1m Let's benchmark this query... \\033[0m\\n\";\n",
    "  printf \"$select_stmt\";\n",
    "  \n",
    "  SECONDS=0;\n",
    "  bq query --use_cache=false --nouse_legacy_sql $select_stmt # critical to turn cache off for this test\n",
    "  duration=$SECONDS\n",
    "\n",
    "  # get current timestamp in milliseconds  \n",
    "  ran_on=$(date +%s)\n",
    "\n",
    "  index=$((index+1))\n",
    "\n",
    "  printf \"\\n\\033[32;1m Here's how long it took... \\033[0m\\n\\n\";\n",
    "  echo \"Query $index ran in $(($duration / 60)) minutes and $(($duration % 60)) seconds.\"\n",
    "\n",
    "  printf \"\\n\\033[32;1m Writing to our benchmark table... \\033[0m\\n\\n\";\n",
    "  insert_stmt=\"insert into $BQ_DATASET.perf(performance_test_num, query_num, elapsed_time_sec, ran_on) values($start, $index, $duration, $ran_on)\"\n",
    "  printf \"$insert_stmt\"\n",
    "  bq query --nouse_legacy_sql $insert_stmt\n",
    "done\n",
    "\n",
    "end=$(date +%s)\n",
    "\n",
    "printf \"Benchmark test complete\"\n"
   ]
  },
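  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The shell loop above relies on `IFS=\";\"` to split the query file on semicolons and stops once it reaches the `END OF BENCHMARK` marker. Here is the same parsing logic sketched in Python, with a small inline sample standing in for the query file:\n",
    "\n",
    "```python\n",
    "def parse_benchmark_queries(text):\n",
    "    # Split on semicolons, stop at the END OF BENCHMARK marker,\n",
    "    # and drop empty fragments (mirrors the IFS=';' shell loop above)\n",
    "    queries = []\n",
    "    for stmt in text.split(';'):\n",
    "        if 'END OF BENCHMARK' in stmt:\n",
    "            break\n",
    "        if stmt.strip():\n",
    "            queries.append(stmt.strip())\n",
    "    return queries\n",
    "\n",
    "sample = 'select 1;select 2;-- END OF BENCHMARK;select 3'\n",
    "parse_benchmark_queries(sample)  # ['select 1', 'select 2']\n",
    "```"
   ]
  },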
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Viewing the benchmark results\n",
    "\n",
    "As part of the benchmark test, we stored the processing time of each query into a new `perf` BigQuery table. We can query that table and get some performance stats for our test. \n",
    "\n",
    "First are each of the tests we ran:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "SELECT * FROM tpcds_2t_baseline.perf\n",
    "WHERE \n",
    " # Let's only pull the results from our most recent test\n",
    " performance_test_num = (SELECT MAX(performance_test_num) FROM tpcds_2t_baseline.perf)\n",
    "ORDER BY ran_on"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And finally, the overall statistics for the entire test:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "SELECT\n",
    "  TIMESTAMP_SECONDS(MAX(performance_test_num)) AS test_date,\n",
    "  MAX(performance_test_num) AS latest_performance_test_num,\n",
    "  COUNT(DISTINCT query_num) AS count_queries_benchmarked,\n",
    "  SUM(elapsed_time_sec) AS total_time_sec,\n",
    "  MIN(elapsed_time_sec) AS fastest_query_time_sec,\n",
    "  MAX(elapsed_time_sec) AS slowest_query_time_sec\n",
    "FROM\n",
    "  tpcds_2t_baseline.perf\n",
    "WHERE\n",
    "  performance_test_num = (SELECT MAX(performance_test_num) FROM tpcds_2t_baseline.perf)"
   ]
  },
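  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that `performance_test_num` is the Unix epoch timestamp captured by `$(date +%s)` at the start of the test run, which is why the query above can convert it to a readable date with `TIMESTAMP_SECONDS`. The equivalent conversion in Python:\n",
    "\n",
    "```python\n",
    "from datetime import datetime, timezone\n",
    "\n",
    "def test_num_to_timestamp(performance_test_num):\n",
    "    # performance_test_num holds the epoch seconds recorded by $(date +%s)\n",
    "    return datetime.fromtimestamp(performance_test_num, tz=timezone.utc)\n",
    "\n",
    "test_num_to_timestamp(1609459200)  # 2021-01-01 00:00:00 UTC\n",
    "```"
   ]
  },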
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Congratulations!\n",
    "\n",
    "And there you have it! You successfully ran a performance benchmark test against your data warehouse.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Copyright 2021 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "environment": {
   "name": "tf2-gpu.2-1.m49",
   "type": "gcloud",
   "uri": "gcr.io/deeplearning-platform-release/tf2-gpu.2-1:m49"
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
