{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Teradata to BigQuery SQL Translation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Introduction"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Both BigQuery and Teradata Database conform to the [ANSI/ISO SQL:2011](https://wikipedia.org/wiki/SQL:2011) standard. In addition, Teradata has created some extensions to the SQL standard to enable Teradata-specific functionalities.\n",
    "\n",
    "In contrast, BigQuery does not support these proprietary extensions. Therefore, some of your queries might need to be refactored during migration from Teradata to BigQuery. Having queries that only use the ANSI/ISO SQL standard that's supported by BigQuery has the added benefit that it helps ensure portability and helps your queries be agnostic to the underlying data warehouse."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Teradata SQL differences"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This notebook discusses notable differences between Teradata SQL and the BigQuery standard SQL, and some strategies for translating between the two dialects. The list of differences presented in this notebook is not exhaustive. For additional information, see the [Teradata-to-BigQuery SQL translation reference](https://cloud.google.com/solutions/migration/dw2bq/td2bq/td-bq-sql-translation-reference-tables)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Data Types"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "BigQuery supports a more concise set of data types than\n",
    "Teradata, with groups of Teradata types mapping into a single standard SQL data\n",
    "type. For instance:\n",
    "\n",
    "-   `INTEGER`, `SMALLINT`, `BYTEINT`, and `BIGINT` all map to `INT64`.\n",
    "-   `CLOB`, `JSON`, `XML`, `UDT` and other types that contain large\n",
    "    character fields map to `STRING`.\n",
    "-   `BLOB`, `BYTE`, and `VARBYTE` types that contain binary information map\n",
    "    to `BYTES`.\n",
    "\n",
    "For dates, the main types (`DATE`, `TIME`, and `TIMESTAMP`) are equivalent in\n",
    "Teradata and BigQuery. However, other specialized date types from\n",
    "Teradata need to be mapped, such as the following:\n",
    "\n",
    "-   `TIME_WITH_TIME_ZONE` to `TIME`.\n",
    "-   `TIMESTAMP_WITH_TIME_ZONE` to `TIMESTAMP`.\n",
    "-   `INTERVAL_HOUR`, `INTERVAL_MINUTE`, and other `INTERVAL_*` types map to\n",
    "    `INT64` in BigQuery.\n",
    "-   `PERIOD(DATE)`,` PERIOD(TIME)`, and other` PERIOD(*)` types map to `STRING`.\n",
    "\n",
    "[Multi-dimensional arrays](https://docs.teradata.com/reader/S0Fw2AVH8ff3MDA0wDOHlQ/D3QuBsLccP9JObIH8f4yJA)\n",
    "are not directly supported in BigQuery. Instead, you create an\n",
    "[array of structs](/bigquery/docs/reference/standard-sql/arrays#building_arrays_of_arrays),\n",
    "with each struct containing a field of type `ARRAY`."
   ]
  },
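   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "As a sketch of the array-of-structs pattern described above, a two-dimensional array can be modeled as follows (the field name `inner_array` is just an illustrative choice):\n",
     "\n",
     "```sql\n",
     "SELECT [STRUCT([1, 2, 3] AS inner_array),\n",
     "        STRUCT([4, 5, 6] AS inner_array)] AS matrix\n",
     "```"
    ]
   },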
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Data Types - Exercise"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this exercise, you will examine several of the TIMESTAMP and TIME functions and data types available to you. You will be using a public BigQuery dataset that contains rental records from the London bike share program"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Use the `bq` command line tool to examine the schema of the table.\n",
    "\n",
    "`bq head` or using the `Preview` tab in the BigQuery UI are much more efficient than a `SELECT * LIMIT 1` as this triggers a whole table scan."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!bq head -n 5  --selected_fields rental_id,duration,bike_id,end_date,end_station_id,start_date,start_station_id bigquery-public-data:london_bicycles.cycle_hire"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can similarily see table level data, such as number of rows and the schema of the table. Notice the `TIMESTAMP` fields."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!bq show bigquery-public-data:london_bicycles.cycle_hire"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Run a query to return the most recent 5 rentals by end_date:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "SELECT\n",
    "  rental_id,\n",
    "  duration,\n",
    "  bike_id,\n",
    "  end_date,\n",
    "  end_station_id,\n",
    "  end_station_name,\n",
    "  start_date,\n",
    "  start_station_id,\n",
    "  start_station_name\n",
    "FROM\n",
    "  `bigquery-public-data`.london_bicycles.cycle_hire\n",
    "ORDER BY\n",
    "  end_date DESC\n",
    "LIMIT\n",
    "  5"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__#TODO(you):__ Modify this query to print the `end_date` and `start_date` fields in UNIX seconds as well.\n",
    "\n",
    "[Hint: Use UNIX_SECONDS().](https://cloud.google.com/bigquery/docs/reference/standard-sql/timestamp_functions#unix_seconds)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "SELECT\n",
    "  rental_id,\n",
    "  duration,\n",
    "  bike_id,\n",
    "  end_date,\n",
    "  --TODO:\n",
    "     AS end_date_unix,\n",
    "  end_station_id,\n",
    "  end_station_name,\n",
    "  start_date,\n",
    "  --TODO:\n",
    "     AS start_date_unix,\n",
    "  start_station_id,\n",
    "  start_station_name\n",
    "FROM\n",
    "  `bigquery-public-data`.london_bicycles.cycle_hire\n",
    "ORDER BY\n",
    "  end_date DESC\n",
    "LIMIT\n",
    "  5"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__#TODO(you):__ Modify this query to print the time from the `end_date` and `start_date` fields in formatted PST timezone.\n",
    "\n",
    "[Hint: Use EXTRACT( ... AT TIME ZONE ... ).](https://cloud.google.com/bigquery/docs/reference/standard-sql/timestamp_functions#extract)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "SELECT\n",
    "  rental_id,\n",
    "  duration,\n",
    "  bike_id,\n",
    "  end_date,\n",
    "  --TODO:\n",
    "     AS end_time_california,\n",
    "  end_station_id,\n",
    "  end_station_name,\n",
    "  start_date,\n",
    "  --TODO:\n",
    "     AS start_time_california,\n",
    "  start_station_id,\n",
    "  start_station_name\n",
    "FROM\n",
    "  `bigquery-public-data`.london_bicycles.cycle_hire\n",
    "ORDER BY\n",
    "  end_date DESC\n",
    "LIMIT\n",
    "  5"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The SELECT Statement\n",
    "\n",
    "\n",
    "The syntax of the `SELECT` statement is generally compatible between Teradata and\n",
    "BigQuery. This section notes differences that often must be\n",
    "addressed during migration."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Identifiers\n",
    "\n",
    "BigQuery lets you use the following as\n",
    "[identifiers](https://cloud.google.com/bigquery/docs/reference/standard-sql/lexical#identifiers): projects;datasets; tables or views; columns.\n",
    "\n",
    "As a serverless product, BigQuery does not have a concept of a\n",
    "cluster or environment or fixed endpoint, therefore the project specifies the dataset's\n",
    "[resource hierarchy](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy).\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In a `SELECT` statement in Teradata, fully qualified column names can be used.\n",
    "BigQuery always references column names from tables or aliases,\n",
    "and never from projects or datasets."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example, here are some options to address identifiers in BigQuery.\n",
    "\n",
    "Columns implicitly inferred from the table:\n",
    "\n",
    "```sql\n",
    "SELECT\n",
    " c\n",
    "FROM\n",
    " project.dataset.table\n",
    "```\n",
    "\n",
    "\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Or by using an explicit table reference:\n",
    "\n",
    "```sql\n",
    "SELECT\n",
    " table.c\n",
    "FROM\n",
    " project.dataset.table\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Or by using an explicit table alias:\n",
    "\n",
    "```sql\n",
    "SELECT\n",
    " t.c\n",
    "FROM\n",
    " project.dataset.table t\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__#TODO(you):__ Run the following queries showing the different indentifier options."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "SELECT\n",
    "  rental_id,\n",
    "  duration,\n",
    "  bike_id\n",
    "FROM\n",
    "  `bigquery-public-data`.london_bicycles.cycle_hire\n",
    "LIMIT\n",
    "  1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "SELECT\n",
    "  cycle_hire.rental_id,\n",
    "  cycle_hire.duration,\n",
    "  cycle_hire.bike_id\n",
    "FROM\n",
    "  `bigquery-public-data`.london_bicycles.cycle_hire\n",
    "LIMIT\n",
    "  1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "SELECT\n",
    "  r.rental_id,\n",
    "  r.duration,\n",
    "  r.bike_id\n",
    "FROM\n",
    "  `bigquery-public-data`.london_bicycles.cycle_hire r\n",
    "LIMIT\n",
    "  1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Alias references\n",
    "\n",
    "In a `SELECT` statement in Teradata, aliases can be defined and referenced\n",
    "within the same query. For instance, in the following snippet, `flag` is defined\n",
    "as a column alias, and then immediately referred to in the enclosed `CASE`\n",
    "statement.\n",
    "\n",
    "```sql\n",
    "SELECT\n",
    " F AS flag,\n",
    " CASE WHEN flag = 1 THEN ...\n",
    "```\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "In standard SQL, references between columns *within the same query* are not\n",
    "allowed. To translate, you move the logic into a nested query:\n",
    "\n",
    "```sql\n",
    "SELECT\n",
    " q.*,\n",
    " CASE WHEN q.flag = 1 THEN ...\n",
    "FROM (\n",
    " SELECT\n",
    "   F AS flag,\n",
    "   ...\n",
    ") AS q\n",
    "```\n",
    "\n",
    "The sample placeholder `F` could itself be a nested query that returns a single\n",
    "column.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__#TODO(you):__ Run the following query, notice the syntax error, and rewrite it with a nested query to conform to standard SQL.\n",
    "\n",
    "_Note:_ You could just move the EXTRACT() function but for the purposes of the exercise use a nested query."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "SELECT\n",
    "  rental_id,\n",
    "  duration,\n",
    "  bike_id,\n",
    "  start_date,\n",
    "  EXTRACT(HOUR FROM start_date) AS start_hour,\n",
    "  CASE\n",
    "    WHEN start_hour <= 12 THEN TRUE\n",
    "  ELSE FALSE\n",
    "END\n",
    "  AS morning_ride\n",
    "FROM\n",
    "  `bigquery-public-data`.london_bicycles.cycle_hire\n",
    "LIMIT\n",
    "  1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Filtering with LIKE\n",
    "\n",
    "In Teradata, the `LIKE ANY` operator is used to filter the results to a given\n",
    "set of possible options. For example:\n",
    "\n",
    "```sql\n",
    "SELECT*\n",
    "FROM t1\n",
    "WHERE a LIKE ANY ('string1', 'string2')\n",
    "```\n",
    "\n",
    "To translate statements that have this operator to standard SQL, you can split\n",
    "the list after `ANY` into several `OR` predicates:\n",
    "\n",
    "```sql\n",
    "SELECT*\n",
    "FROM t1\n",
    "WHERE a LIKE 'string1' OR a LIKE 'string2'\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__#TODO(you):__ Rewrite this query with OR predicate so that it succeeds."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "SELECT\n",
    "  rental_id,\n",
    "  duration,\n",
    "  bike_id,\n",
    "  start_date,\n",
    "  start_station_name\n",
    "FROM\n",
    "  `bigquery-public-data`.london_bicycles.cycle_hire\n",
    "WHERE\n",
    "  start_station_name LIKE ANY ('%Hyde Park%', '%Soho%')\n",
    "LIMIT\n",
    "  5"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### The QUALIFY clause\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "Teradata's\n",
    "[QUALIFY](https://docs.teradata.com/reader/2_MC9vCtAJRlKle2Rpb0mA/19NnI91neorAi7LX6SJXBw) clause is a conditional clause in the `SELECT` statement that filters results of a previously computed, ordered [analytic function](https://cloud.google.com/bigquery/docs/reference/standard-sql/analytic-function-concepts) according to user‑specified search conditions. Its syntax consists of the `QUALIFY` clause followed by the analytic function, such as [`ROW_NUMBER`](https://docs.teradata.com/reader/kmuOwjp1zEYg98JsB8fu_A/8AEiTSe3nkHWox93XxcLrg) or [`RANK`](https://docs.teradata.com/reader/kmuOwjp1zEYg98JsB8fu_A/8Ex9CS5XErnUTmh7zcrOPg), and the values you want to find:\n",
    "\n",
    "```sql\n",
    "SELECT a, b\n",
    "FROM t1\n",
    "QUALIFY ROW_NUMBER() OVER (PARTITION BY a ORDER BY b) = 1\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "Teradata users commonly use this function as a shorthand way to rank and\n",
    "return results without the need for an additional subquery.\n",
    "\n",
    "The `QUALIFY` clause is translated to BigQuery by adding a\n",
    "`WHERE` condition to an enclosing query:\n",
    "\n",
    "```sql\n",
    "SELECT a, b\n",
    "FROM (\n",
    " SELECT a, b,\n",
    " ROW_NUMBER() OVER (PARTITION BY A ORDER BY B) row_num\n",
    " FROM t1\n",
    ") WHERE row_num = 1\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__#TODO(you):__ Rewrite this query such that it succeeds without a QUALIFY clause\n",
    "\n",
    "This query is returning the very first completed rental for each unique `bike_id`, ordered by end_date."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "SELECT\n",
    "  rental_id,\n",
    "  duration,\n",
    "  bike_id,\n",
    "  end_date\n",
    "FROM\n",
    "  `bigquery-public-data`.london_bicycles.cycle_hire\n",
    "  QUALIFY ROW_NUMBER() OVER (PARTITION BY bike_id ORDER BY end_date ASC) = 1\n",
    "LIMIT\n",
    "  5"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Notes on Scalable Analytic and Aggregate Functions\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Many of the Analytic Functions and Aggregate Functions in BigQuery have been implemented in a distributed, scalable manner, meaning it is now harder to overload a single worker. If you have highly skewed data (for example a single `bike_id` accounts for 95% of rides) or you are sorting a very large dataset, this used to be processed on a single BigQuery worker.\n",
    "\n",
    "That said, it is still important to utilize BigQuery best-pratcies wherever possible. For example filtering early and often and applying `LIMIT` clauses on aggregate functions like `ARRAY_AGG()`."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The 'latest record' use-case has a particularly fast implementation using `ARRAY_AGG(.... LIMIT 1)[offset(0)]` which allows can run more efficiently because the `ORDER BY` is allowed to drop everything except the top record on each `GROUP BY`"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this example query, we are no longer grouping by `bike_id`, so we are asking BigQuery to sort the entire dataset _and_ assign a row_number to all 24 million rows  before only picking the first one:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery --verbose\n",
    "-- Query should succeed, but will take a bit\n",
    "\n",
    "SELECT\n",
    "  rental_id,\n",
    "  duration,\n",
    "  bike_id,\n",
    "  end_date\n",
    "FROM (\n",
    "  SELECT\n",
    "    rental_id,\n",
    "    duration,\n",
    "    bike_id,\n",
    "    end_date,\n",
    "    \n",
    "    -- NOTE: we removed the 'PARTITION BY bike_id' clause\n",
    "    ROW_NUMBER() OVER (ORDER BY end_date ASC) rental_num\n",
    "    \n",
    "  FROM\n",
    "    `bigquery-public-data`.london_bicycles.cycle_hire )\n",
    "WHERE\n",
    "  rental_num = 1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can apply the `ARRAY_AGG(.... LIMIT 1)[offset(0)]` trick to this query to speed it up greatly."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery --verbose\n",
    "-- Query should succeed more quickly\n",
    "\n",
    "SELECT\n",
    "  rental.*\n",
    "FROM (\n",
    "  SELECT\n",
    "    ARRAY_AGG( rentals\n",
    "    ORDER BY rentals.end_date ASC LIMIT 1)[OFFSET(0)] rental\n",
    "  FROM (\n",
    "    SELECT\n",
    "      rental_id,\n",
    "      duration,\n",
    "      bike_id,\n",
    "      end_date\n",
    "    FROM\n",
    "      `bigquery-public-data`.london_bicycles.cycle_hire) rentals )\n",
    "LIMIT\n",
    "  5"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The first rule of BigQuery optimization is if your query runs in an acceptable amount of time and with acceptable resources, don't fix it! BigQuery has lots of intelligent (and brute-force) tricks under the hood to optimize your query for you.\n",
    "\n",
    "For example, applying this ARRAY_AGG() trick to the original query where we had a `GROUP BY bike_id` class will greatly slow it down, mostly because this dataset is too small to benefit from this trick.\n",
    "\n",
    "__Bonus:__ Try this trick on the previous query and see if it's faster or slower"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### CSUM (Cumulative Sum)\n",
    "\n",
    "`CSUM()` is a Teratadata extension to standard SQL and is not supported in BigQuery. The same effect can be achieved with a `SUM()` over a Window function like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "SELECT\n",
    "  rental_id,\n",
    "  bike_id,\n",
    "  end_date,\n",
    "  duration,\n",
    "  SUM(duration) OVER (PARTITION BY bike_id ORDER BY end_date ASC) AS running_sum\n",
    "FROM\n",
    "  `bigquery-public-data`.london_bicycles.cycle_hire\n",
    "LIMIT 10"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### TIMESTAMP() and Time Zones\n",
    "\n",
    "By default, all TIMESTAMP objects in BigQuery are UTC time, no matter where in the world you process your queries. Because of that, you can't assume a time-zone, such as America/Los_Angeles. Here are some examples of adding a TimeZone."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "SELECT\n",
    "  CAST('2020-12-01 14:30:00' AS TIMESTAMP) incoming_time_as_ts,\n",
    "  CAST('2020-12-01 14:30:00' AS DATETIME) incoming_time_as_dt,\n",
    "  DATETIME(CAST(TIMESTAMP('2020-12-01 14:30:00', 'America/Los_Angeles') AS TIMESTAMP),\n",
    "    'US/Central') PST_TO_CST;\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "SELECT\n",
    "  CAST('2020-12-01 14:30:00' AS TIMESTAMP) incoming_time_as_ts,\n",
    "  CAST('2020-12-01 14:30:00' AS DATETIME) incoming_time_as_dt,\n",
    "  DATETIME(CAST(CAST(TIMESTAMP('2020-12-01 14:30:00-08') AS DATETIME) AS TIMESTAMP),\n",
    "    'US/Central') PST_TO_CST;"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see the current time using `CURRENT_TIMESTAMP` and `AT TIME ZONE`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "SELECT\n",
    "  EXTRACT(DATETIME\n",
    "  FROM\n",
    "    CURRENT_TIMESTAMP() AT TIME ZONE \"America/Los_Angeles\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Data Manipulation Language (DML)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The [Data Manipulation Language (DML)](https://wikipedia.org/wiki/Data_manipulation_language) is used to list, add, delete, and modify data in a database. It includes the\n",
    "`SELECT`, `INSERT`, `DELETE`, and `UPDATE` statements.\n",
    "\n",
    "While the basic forms of these statements are the same between Teradata SQL and\n",
    "standard SQL, Teradata includes additional, non-standard clauses and special\n",
    "statement constructs that you need to convert when you migrate. The following\n",
    "sections present a non-exhaustive list of the most common statements, the main\n",
    "differences, and the recommended translations."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### The INSERT statement\n",
    "\n",
    "BigQuery is an enterprise data warehouse that focuses on Online\n",
    "Analytical Processing (OLAP). Using point-specific DML statements, such as\n",
    "executing a script with many `INSERT` statements, is an attempt to treat\n",
    "BigQuery like an Online Transaction Processing (OLTP) system,\n",
    "which is not a correct approach.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "BigQuery DML statements are intended for bulk updates, therefore\n",
    "each DML statement that modifies data\n",
    "[initiates an implicit transaction](https://cloud.google.com/bigquery/docs/reference/standard-sql/data-manipulation-language#limitations).\n",
    "You should group your DML statements whenever possible to avoid unnecessary\n",
    "transaction overhead.\n",
    "\n",
    "As an example, if you have the following set of statements from Teradata,\n",
    "running them as is in BigQuery is an anti-pattern:\n",
    "\n",
    "```sql\n",
    "INSERT INTO t1 (...) VALUES (...);\n",
    "INSERT INTO t1 (...) VALUES (...);\n",
    "```\n",
    "\n",
    "You can translate the previous script into a single `INSERT` statement, which\n",
    "performs a bulk operation instead:\n",
    "\n",
    "```sql\n",
    "INSERT INTO t1 VALUES (...), (...)\n",
    "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "A typical scenario where a large number of `INSERT` statements is used is when\n",
    "you create a new table from an existing  table. In BigQuery,\n",
    "instead of using multiple `INSERT` statements, create a new table and insert all\n",
    "the rows in one operation using the\n",
    "[`CREATE TABLE ... AS SELECT`](https://cloud.google.com/bigquery/docs/reference/standard-sql/data-definition-language#creating_a_new_table_from_an_existing_table)\n",
    "statement.\n"
   ]
  },
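   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "For instance, instead of a script with many `INSERT` statements, a single statement creates and populates the table in one bulk operation (the table and column names here are hypothetical):\n",
     "\n",
     "```sql\n",
     "CREATE TABLE my_dataset.ride_summary AS\n",
     "SELECT\n",
     "  bike_id,\n",
     "  COUNT(*) AS num_rides\n",
     "FROM\n",
     "  my_dataset.cycle_hire\n",
     "GROUP BY\n",
     "  bike_id\n",
     "```"
    ]
   },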
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For the next example we first create a local copy of the data so that we have Write permissions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "\n",
    "# Create a dataset in your project\n",
    "bq mk --location eu my_london_bicycles_dataset\n",
    "\n",
    "# Copy the public dataset to your project\n",
    "bq cp bigquery-public-data:london_bicycles.cycle_hire my_london_bicycles_dataset.cycle_hire\n",
    "bq cp bigquery-public-data:london_bicycles.cycle_stations my_london_bicycles_dataset.cycle_stations"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "\n",
    "# Examine your local table\n",
    "bq show my_london_bicycles_dataset.cycle_hire"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__#TODO(you):__ Rewrite this `INSERT INTO` query so that it only executes one DML transaction"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "-- Rows before\n",
    "\n",
    "SELECT COUNT(*) FROM my_london_bicycles_dataset.cycle_hire"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "INSERT INTO\n",
    "  my_london_bicycles_dataset.cycle_hire\n",
    "VALUES\n",
    "  (47469109, 3180, 7054, '2015-09-03 12:45:00 UTC', 111, 'Park Lane, Hyde Park', '2015-09-03 11:52:00 UTC', 300, 'Serpentine Car Park, Hyde Park', NULL, NULL, NULL);\n",
    "INSERT INTO\n",
    "  my_london_bicycles_dataset.cycle_hire\n",
    "VALUES\n",
    "  (46915469, 7380, 3792, '2015-08-16 11:59:00 UTC', 407, 'Speakers\\' Corner 1, Hyde Park', '2015-08-16 09:56:00 UTC', 407, 'Speakers\\' Corner 1, Hyde Park', NULL, NULL, NULL);"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "-- Rows after\n",
    "\n",
    "SELECT COUNT(*) FROM my_london_bicycles_dataset.cycle_hire"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### The UPDATE statement\n",
    "\n",
    "`UPDATE` statements in Teradata are similar to `UPDATE` statements in standard\n",
    "SQL. The important differences are:\n",
    "\n",
    "-   The order of the `SET` and `FROM` clauses is reversed.\n",
    "-   Any\n",
    "    [Teradata correlation names](https://docs.teradata.com/reader/huc7AEHyHSROUkrYABqNIg/k6fC7ozmhIZZXa315VjJAw)\n",
    "    used as table aliases in the `UPDATE` must be removed.\n",
    "-   In Standard SQL, each `UPDATE` statement must include the `WHERE` keyword,\n",
    "    followed by a condition. To update all rows in the table, use `WHERE true`.\n",
    "\n",
    "The following example shows an `UPDATE` statement from Teradata that uses\n",
    "joins:\n",
    "\n",
    "```sql\n",
    "UPDATE t1\n",
    "FROM t1, t2\n",
    "SET\n",
    " b = t2.b\n",
    "WHERE a = t2.a;\n",
    "```\n",
    "\n",
    "The equivalent statement in standard SQL is the following:\n",
    "\n",
    "```sql\n",
    "UPDATE t1\n",
    "SET\n",
    " b = t2.b\n",
    "FROM t2\n",
    "WHERE a = t2.a;\n",
    "```\n",
    "\n",
    "The considerations from the previous section about executing large numbers of\n",
    "DML statements in BigQuery also apply in this case. We recommend\n",
    "using a single\n",
    "[`MERGE`](https://cloud.google.com/bigquery/docs/reference/standard-sql/dml-syntax#merge_statement)\n",
    "statement instead of multiple `UPDATE` statements.\n"
   ]
  },
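   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "As a sketch using the placeholder tables `t1` and `t2` from above, a single `MERGE` can replace a series of row-by-row `UPDATE` statements:\n",
     "\n",
     "```sql\n",
     "MERGE t1 AS target\n",
     "USING t2 AS source\n",
     "ON target.a = source.a\n",
     "WHEN MATCHED THEN\n",
     "  UPDATE SET b = source.b\n",
     "```"
    ]
   },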
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__#TODO(you):__ Rewrite this UPDATE statement so that it executes in BigQuery"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery --verbose\n",
    "\n",
    "UPDATE\n",
    "  my_london_bicycles_dataset.cycle_hire\n",
    "FROM my_london_bicycles_dataset.cycle_hire t1, `bigquery-public-data`.london_bicycles.cycle_hire t2 \n",
    "SET\n",
    "  bike_id = t2.bike_id \n",
    "WHERE\n",
    "  t1.rental_id = t2.rental_id"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### The DELETE statement\n",
    "\n",
    "Standard SQL requires `DELETE` statements to have a `WHERE` clause. In\n",
    "Teradata, `WHERE` clauses are\n",
    "[optional in `DELETE` statements](https://docs.teradata.com/reader/huc7AEHyHSROUkrYABqNIg/z8eO9bdxtjFRveHdDwwYPQ)\n",
    "if you're deleting all the rows in a table. (If specific rows are being deleted,\n",
    "the Teradata DML also requires a `WHERE` clause.) During translation, any\n",
    "missing `WHERE` clauses must be added to scripts. This change is necessary only\n",
    "when all the rows in a table will be deleted.\n",
    "\n",
    "For instance, the following statement in Teradata SQL deletes all the rows from\n",
    "a table. The `ALL` clause is optional:\n",
    "\n",
    "```sql\n",
    "DELETE t1 ALL;\n",
    "```\n",
    "\n",
    "The translation into standard SQL is as follows:\n",
    "\n",
    "```sql\n",
    "DELETE FROM t1 WHERE TRUE;\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "__#TODO(you):__ Rewrite this UPDATE statement so that it executes in BigQuery"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "\n",
    "DELETE my_london_bicycles_dataset.cycle_hire ALL;"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Data Definition Language (DDL)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The\n",
    "[Data Definition Language](https://wikipedia.org/wiki/Data_definition_language)\n",
    "(DDL) is used to define your database schema. It includes a subset of SQL\n",
    "statements such as `CREATE`, `ALTER`, and `DROP`.\n",
    "\n",
    "For the most part, these statements are equivalent between Teradata SQL and\n",
    "standard SQL. Here is a non-exhaustive list of notable exceptions:\n",
    "\n",
    "-   Index manipulation options are not supported in\n",
    "    BigQuery, such as `CREATE INDEX` and `PRIMARY INDEX`.\n",
    "    BigQuery does not use indexes when querying your data. It\n",
    "    produces fast results thanks to its underlying model using\n",
    "    [Dremel](https://ai.google/research/pubs/pub36632),\n",
    "    its storage techniques using\n",
    "    [Capacitor](https://cloud.google.com/blog/products/gcp/inside-capacitor-bigquerys-next-generation-columnar-storage-format),\n",
    "    and its massively parallel architecture.\n",
    "-   [Constraints](https://docs.teradata.com/reader/rgAb27O_xRmMVc_aQq2VGw/_X6axAFdllKMCoVKT9~hHg),\n",
    "    which are checks applied to individual columns or an entire table.\n",
    "    BigQuery supports only `NOT NULL` constraints.\n",
    "-   [`MULTISET`](https://docs.teradata.com/reader/VrFCOAaniAIfrJsA51oQJA/3vKnwH1vZNoJpZZmuKCsGg),\n",
    "    which is used to allow duplicate rows in Teradata.\n",
    "-   [`CASESPECIFIC`](https://docs.teradata.com/reader/S0Fw2AVH8ff3MDA0wDOHlQ/CrmHZxipG~s_PP3s~5Wg4w),\n",
    "    which specifies case for character data comparisons and collations."
   ]
  },
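  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As an illustration, consider a hypothetical Teradata table definition (the database, table, and column names here are invented for the example):\n",
    "\n",
    "```sql\n",
    "CREATE MULTISET TABLE mydb.ride_events (\n",
    "  rental_id BIGINT NOT NULL,\n",
    "  duration INTEGER,\n",
    "  start_date TIMESTAMP(0)\n",
    ") PRIMARY INDEX (rental_id);\n",
    "```\n",
    "\n",
    "A reasonable BigQuery translation drops the `MULTISET` and `PRIMARY INDEX` clauses and, where it suits the query patterns, uses partitioning and clustering to obtain comparable data-layout benefits:\n",
    "\n",
    "```sql\n",
    "CREATE TABLE mydataset.ride_events (\n",
    "  rental_id INT64 NOT NULL,\n",
    "  duration INT64,\n",
    "  start_date TIMESTAMP\n",
    ")\n",
    "PARTITION BY DATE(start_date)\n",
    "CLUSTER BY rental_id;\n",
    "```"
   ]
  },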
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Indexing for consistency (UNIQUE, PRIMARY INDEX)\n",
    "\n",
    "\n",
    "In Teradata, a unique index can be used to prevent rows with non-unique keys in a table. If a process tries to insert or update data that has a value that's already in the index, the operation either fails with an index violation (`MULTISET` tables) or silently ignores it (`SET` tables).\n",
    "\n",
    "Because BigQuery doesn't provide explicit indexes, other strategies can be employed to achieve the same effect. A `MERGE` statement can be used instead to insert only unique records into a target table from a staging table while discarding duplicate records. However, there is no way to prevent a user with edit permissions from inserting a duplicate record, because BigQuery never locks during `INSERT` operations. To generate an error for duplicate records in BigQuery, you can use a `MERGE` statement from a staging table, as shown in the following example."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "# Re-insert cycle_hire data that was deleted\n",
    "\n",
    "bq cp -f bigquery-public-data:london_bicycles.cycle_hire my_london_bicycles_dataset.cycle_hire"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "-- Create a loading table with some duplicate rows and a new unique row. `rental_id` will be the unique key.\n",
    "\n",
    "CREATE OR REPLACE TABLE\n",
    "  my_london_bicycles_dataset.temp_loading_table AS (\n",
    "\n",
    "  --Grab 5 duplicate rows\n",
    "  SELECT\n",
    "    *\n",
    "  FROM\n",
    "    my_london_bicycles_dataset.cycle_hire\n",
    "  LIMIT\n",
    "    5)\n",
    "UNION ALL (\n",
    "  \n",
    "  --Add a new unique row\n",
    "  SELECT\n",
    "    111147469109,\n",
    "    3180,\n",
    "    7054,\n",
    "    '2015-09-03 12:45:00 UTC',\n",
    "    111,\n",
    "    'Park Lane, Hyde Park',\n",
    "    '2015-09-03 11:52:00 UTC',\n",
    "    300,\n",
    "    'Serpentine Car Park, Hyde Park',\n",
    "    NULL,\n",
    "    NULL,\n",
    "    NULL)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery --verbose\n",
    "--Number of Rows in base table\n",
    "\n",
    "SELECT COUNT(*) FROM my_london_bicycles_dataset.cycle_hire;"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery --verbose\n",
    "--Number of Rows in loading table\n",
    "\n",
    "SELECT COUNT(*) FROM my_london_bicycles_dataset.temp_loading_table;"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will now use a `MERGE` statement to insert and dedupe rows to the main table:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "MERGE\n",
    "  my_london_bicycles_dataset.cycle_hire rentals\n",
    "USING\n",
    "  my_london_bicycles_dataset.temp_loading_table temp\n",
    "ON\n",
    "  temp.rental_id = rentals.rental_id\n",
    "  WHEN NOT MATCHED\n",
    "  THEN\n",
    "    INSERT ROW"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can now see that the 1 unique row has been inserted:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery --verbose\n",
    "--Number of rows now in base table\n",
    "\n",
    "SELECT COUNT(*) FROM my_london_bicycles_dataset.cycle_hire;"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "More often, users prefer to remove duplicates independently in order to find errors in downstream systems.\n",
    "BigQuery does not support `DEFAULT` and `IDENTITY` (sequences) columns.\n",
    "\n",
    "Here you will insert the 5 redundant values into the base table and use a `ROW_NUMBER()` function and `SELECT * EXCEPT()` to create a unique set of the data. `DISTINCT rental_id, * EXCEPT(rental_id)` is another option but is often not as fast."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "INSERT INTO\n",
    "  my_london_bicycles_dataset.cycle_hire\n",
    "SELECT\n",
    "  *\n",
    "FROM\n",
    "  my_london_bicycles_dataset.temp_loading_table"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery --verbose\n",
    "--Number of rows now in base table\n",
    "\n",
    "SELECT COUNT(*) FROM my_london_bicycles_dataset.cycle_hire;"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "-- Number of rows in a unique 'view' of the data:\n",
    "SELECT\n",
    "  COUNT(*)\n",
    "FROM (\n",
    "  SELECT\n",
    "    * EXCEPT(row_number)\n",
    "  FROM (\n",
    "    SELECT\n",
    "      *,\n",
    "      ROW_NUMBER() OVER (PARTITION BY rental_id) row_number\n",
    "    FROM\n",
    "      `my_london_bicycles_dataset.cycle_hire`)\n",
    "  WHERE\n",
    "    row_number = 1 )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### True Row Uniqueness in BigQuery\n",
    "\n",
    "If you need true uniqueness and don't have a unique key, you can use `SELECT DISTINCT *`. This is not ideal for performance reasons.\n",
    "\n",
    "Here you will create a de-duped view and examine peformance impact of calling `SELECT DISTINCT *` each time. Consider regularly re-materializing your data if you have this use case."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "# Re-insert cycle_hire data that was deleted\n",
    "\n",
    "bq cp -f bigquery-public-data:london_bicycles.cycle_hire my_london_bicycles_dataset.cycle_hire"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Create a View using `SELECT DISTINCT *`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "CREATE OR REPLACE VIEW\n",
    "  my_london_bicycles_dataset.cycle_hire_dedupe AS\n",
    "SELECT\n",
    "  DISTINCT *\n",
    "FROM\n",
    "  my_london_bicycles_dataset.cycle_hire"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery --verbose\n",
    "--Number of Rows in base table\n",
    "\n",
    "SELECT COUNT(*) FROM my_london_bicycles_dataset.cycle_hire;"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery --verbose\n",
    "--Number of Rows in de-duped View\n",
    "\n",
    "SELECT COUNT(*) FROM my_london_bicycles_dataset.cycle_hire_dedupe;"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, you'll create duplicates of 5 rows in the table and examine the underlying number of rows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "INSERT INTO\n",
    "  my_london_bicycles_dataset.cycle_hire\n",
    "SELECT\n",
    "  *\n",
    "FROM\n",
    "  my_london_bicycles_dataset.cycle_hire\n",
    "LIMIT 5"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery --verbose\n",
    "--Number of Rows in base table\n",
    "\n",
    "SELECT COUNT(*) FROM my_london_bicycles_dataset.cycle_hire;"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery --verbose\n",
    "--Number of Rows in de-duped View\n",
    "\n",
    "SELECT COUNT(*) FROM my_london_bicycles_dataset.cycle_hire_dedupe;"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`SELECT DISTINCT *` is essentially performing a `GROUP BY` on every field in the table. BigQuery can perform this scalably, but when using a view like this, you are asking it to perform the de-duplication upon every query call. Consider periodically rematerializing deduped views or building some de-duplication into your ETL pipelines."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can see this in the query algebra:\n",
    "<img src=\"img/select_distinct_query_algebra.png\">\n"
   ]
  },
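  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One way to rematerialize, sketched here (the `cycle_hire_deduped` table name is just an example), is a periodically scheduled `CREATE OR REPLACE TABLE` statement that snapshots the deduplicated result back into a table:\n",
    "\n",
    "```sql\n",
    "CREATE OR REPLACE TABLE\n",
    "  my_london_bicycles_dataset.cycle_hire_deduped AS\n",
    "SELECT\n",
    "  DISTINCT *\n",
    "FROM\n",
    "  my_london_bicycles_dataset.cycle_hire\n",
    "```\n",
    "\n",
    "Downstream queries can then read from the materialized table and pay the deduplication cost only when the table is refreshed."
   ]
  },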
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Stored Procedures"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "[Stored procedures](https://docs.teradata.com/reader/zzfV8dn~lAaKSORpulwFMg/qGy9u~3hCZ7HjA6Q51CVtA)\n",
    "in Teradata are a combination of SQL and control statements. Stored procedures\n",
    "can take parameters that let you build a customized interface to the Teradata\n",
    "Database.\n",
    "\n",
    "Stored procedures are supported as part of BigQuery\n",
    "[Scripting](https://cloud.google.com/bigquery/docs/reference/standard-sql/scripting).\n",
    "\n",
    "However, there are some cases where other features might be more appropriate.\n",
    "These alternatives depend on how your stored procedures are being used.\n",
    "For example:\n",
    "\n",
    "-   Replace triggers that are used to run periodic queries with\n",
    "    [scheduled queries](https://cloud.google.com/bigquery/docs/scheduling-queries).\n",
    "-   Replace stored procedures that control the complex execution of queries\n",
    "    and their interdependencies with workflows defined in\n",
    "    [Cloud Composer](https://cloud.google.com/composer) (manged Apached Airflow).\n",
    "-   Refactor stored procedures that are used as an API into your data\n",
    "    warehouse with\n",
    "    [parameterized queries](https://cloud.google.com/bigquery/docs/parameterized-queries)\n",
    "    and using the\n",
    "    [{{bigquery_api}}](https://cloud.google.com/bigquery/docs/reference).\n",
    "    This change implies that you must rebuild the logic from the stored\n",
    "    procedure in a different programming language such as Java or Go, and that\n",
    "    you then call SQL queries with parameters from the code.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### IPython Magic Hints"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a helpful hint, you can paramterize your bigquery cells as such:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery --params {\"bike_id\": 5}\n",
    "\n",
    "SELECT\n",
    "  MAX(duration) AS max_duration,\n",
    "  bike_id\n",
    "FROM\n",
    "  `bigquery-public-data`.london_bicycles.cycle_hire\n",
    "WHERE\n",
    "  bike_id=@bike_id\n",
    "GROUP BY\n",
    "  bike_id"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### BigQuery scripting\n",
    "\n",
    "BigQuery scripting enables you to send multiple statements to\n",
    "BigQuery in one request, to use variables, and to use control flow\n",
    "statements such as [`IF`](#if) and [`WHILE`](#while). For example, you can\n",
    "declare a variable, assign a value to it, and then reference it in a third\n",
    "statement.\n",
    "\n",
    "In BigQuery, a script is a SQL statement list to be executed in\n",
    "sequence. A SQL statement list is a list of any valid BigQuery\n",
    "statements that are separated by semicolons.\n",
    "\n",
    "For example:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "-- Declare a variable to hold names as an array.\n",
    "DECLARE top_names ARRAY<STRING>;\n",
    "-- Build an array of the top 100 names from the year 2017.\n",
    "SET top_names = (\n",
    "  SELECT ARRAY_AGG(name ORDER BY number DESC LIMIT 100)\n",
    "  FROM `bigquery-public-data`.usa_names.usa_1910_current\n",
    "  WHERE year = 2017\n",
    ");\n",
    "-- Which names appear as words in Shakespeare's plays?\n",
    "SELECT\n",
    "  name AS shakespeare_name\n",
    "FROM UNNEST(top_names) AS name\n",
    "WHERE name IN (\n",
    "  SELECT word\n",
    "  FROM `bigquery-public-data`.samples.shakespeare\n",
    ");"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>Scripts are executed in BigQuery using\n",
    "<a href=\"/bigquery/docs/reference/rest/v2/jobs/insert\"><code>jobs.insert</code></a>,\n",
    "similar to any other query, with the multi-statement script specified as the\n",
    "query text. When a script executes, additional jobs, known as child jobs,\n",
    "are created for each statement in the script.  You can enumerate the child jobs\n",
    "of a script by calling\n",
    "<a href=\"/bigquery/docs/reference/rest/v2/jobs/list\"><code>jobs.list</code></a>,\n",
    "passing in the script’s job ID as the <code>parentJobId</code> parameter.</p>\n",
    "<p>When\n",
    "<a href=\"/bigquery/docs/reference/rest/v2/jobs/getQueryResults\"><code>jobs.getQueryResults</code></a>\n",
    "is invoked on a script, it will return the query results for the last SELECT,\n",
    "DML, or DDL statement to execute in the script, with no query results if none of\n",
    "the above statements have executed.  To obtain the results of all statements in\n",
    "the script, enumerate the child jobs and call <code>jobs.getQueryResults</code>\n",
    "on each of them.</p>\n",
    "\n",
    "BigQuery interprets any request with multiple statements as a script,\n",
    "unless the statements consist of `CREATE TEMP FUNCTION` statement(s), with a\n",
    "single final query statement. For example, the following would not be considered\n",
    "a script:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "CREATE TEMP FUNCTION Add(x INT64, y INT64) AS (x + y);\n",
    "\n",
    "SELECT Add(3, 4);"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Stored Procedures\n",
    "\n",
    "Unlike temporary functions which persist only for the length of the query statement, stored procedures can be created and used over time. They are associated with a dataset, just like tables and views:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "CREATE PROCEDURE my_london_bicycles_dataset.AddDelta(INOUT x INT64, delta INT64)\n",
    "BEGIN\n",
    "  SET x = x + delta;\n",
    "END;"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bigquery\n",
    "\n",
    "DECLARE accumulator INT64 DEFAULT 0;\n",
    "CALL my_london_bicycles_dataset.AddDelta(accumulator, 5);\n",
    "CALL my_london_bicycles_dataset.AddDelta(accumulator, 3);\n",
    "SELECT accumulator;"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Next Steps"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This was a sample of the most common SQL translations required when moving from Teradata to BigQuery. For an exhuastive list, consult the [SQL Translation Reference Page](https://cloud.google.com/solutions/migration/dw2bq/td2bq/td-bq-sql-translation-reference-tables)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
   ]
  }
 ],
 "metadata": {
  "environment": {
   "name": "tf2-gpu.2-1.m49",
   "type": "gcloud",
   "uri": "gcr.io/deeplearning-platform-release/tf2-gpu.2-1:m49"
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
