{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ueCj9KW2QTCP"
      },
      "source": [
        "##### Copyright 2020 The TensorFlow Authors."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "wFk_qMvcQZ8S"
      },
      "outputs": [],
      "source": [
        "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "# you may not use this file except in compliance with the License.\n",
        "# You may obtain a copy of the License at\n",
        "#\n",
        "# https://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "# Unless required by applicable law or agreed to in writing, software\n",
        "# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "# See the License for the specific language governing permissions and\n",
        "# limitations under the License."
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HKYXncPn7mSs"
      },
      "source": [
        "# Fairness Indicators Lineage Case Study"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "d7A099z02DB6"
      },
      "source": [
        "\u003ctable class=\"tfo-notebook-buttons\" align=\"left\"\u003e\n",
        "  \u003ctd\u003e\n",
        "    \u003ca target=\"_blank\" href=\"https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_Lineage_Case_Study\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" /\u003eView on TensorFlow.org\u003c/a\u003e\n",
        "  \u003c/td\u003e\n",
        "  \u003ctd\u003e\n",
        "    \u003ca target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_Lineage_Case_Study.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /\u003eRun in Google Colab\u003c/a\u003e\n",
        "  \u003c/td\u003e\n",
        "  \u003ctd\u003e\n",
        "    \u003ca target=\"_blank\" href=\"https://github.com/tensorflow/fairness-indicators/tree/master/g3doc/tutorials/Fairness_Indicators_Lineage_Case_Study.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /\u003eView on GitHub\u003c/a\u003e\n",
        "  \u003c/td\u003e\n",
        "  \u003ctd\u003e\n",
        "    \u003ca href=\"https://storage.googleapis.com/tensorflow_docs/fairness-indicators/g3doc/tutorials/Fairness_Indicators_Lineage_Case_Study.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/download_logo_32px.png\" /\u003eDownload notebook\u003c/a\u003e\n",
        "  \u003c/td\u003e\n",
        "  \u003ctd\u003e\n",
        "    \u003ca href=\"https://tfhub.dev/google/random-nnlm-en-dim128/1\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/hub_logo_32px.png\" /\u003eSee TF Hub model\u003c/a\u003e\n",
        "  \u003c/td\u003e\n",
        "\u003c/table\u003e"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "oZWUeUxjlMjQ"
      },
      "source": [
        "## COMPAS Dataset\n",
        "[COMPAS](https://www.propublica.org/datastore/dataset/compas-recidivism-risk-score-data-and-analysis) (Correctional Offender Management Profiling for Alternative Sanctions) is a public dataset containing approximately 18,000 criminal cases from Broward County, Florida, between January 2013 and December 2014. The data contains information about 11,000 unique defendants, including criminal history, demographics, and a risk score intended to represent the defendant’s likelihood of reoffending (recidivism). A machine learning model trained on this data has been used by judges and parole officers to determine whether to set bail and whether to grant parole.\n",
        "\n",
        "In 2016, [an article published in ProPublica](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing) found that the COMPAS model frequently made incorrect predictions that African-American defendants would reoffend when they did not. For Caucasian defendants, the model made mistakes in the opposite direction, incorrectly predicting that they would not commit another crime when they did. The authors went on to show that these biases were likely due to an uneven distribution in the data between African-American and Caucasian defendants. Specifically, the proportions of negative examples (a defendant **would not** commit another crime) and positive examples (a defendant **would** commit another crime) differed between the two races. Since 2016, the COMPAS dataset has appeared frequently in the ML fairness literature \u003csup\u003e1, 2, 3\u003c/sup\u003e, with researchers using it to demonstrate techniques for identifying and remediating fairness concerns. This [tutorial from the FAT* 2018 conference](https://youtu.be/hEThGT-_5ho?t=1) illustrates how COMPAS can dramatically impact a defendant’s prospects in the real world. \n",
        "\n",
        "It is important to note that developing a machine learning model to predict pre-trial detention has a number of important ethical considerations. You can learn more about these issues in the Partnership on AI “[Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System](https://www.partnershiponai.org/report-on-machine-learning-in-risk-assessment-tools-in-the-u-s-criminal-justice-system/).” The Partnership on AI is a multi-stakeholder organization -- of which Google is a member -- that creates guidelines around AI.\n",
        "\n",
        "We’re using the COMPAS dataset only as an example of how to identify and remediate fairness concerns in data. This dataset is canonical in the algorithmic fairness literature. \n",
        "\n",
        "## About the Tools in this Case Study\n",
        "*   **[TensorFlow Extended (TFX)](https://www.tensorflow.org/tfx)** is a Google-production-scale machine learning platform based on TensorFlow. It provides a configuration framework and shared libraries to integrate common components needed to define, launch, and monitor your machine learning system.\n",
        "\n",
        "*   **[TensorFlow Model Analysis](https://www.tensorflow.org/tfx/tutorials/model_analysis/tfma_basic)** is a library for evaluating machine learning models. Users can evaluate their models on a large amount of data in a distributed manner and view metrics over different slices within a notebook.\n",
        "\n",
        "*   **[Fairness Indicators](https://www.tensorflow.org/tfx/guide/fairness_indicators)** is a suite of tools built on top of TensorFlow Model Analysis that enables regular evaluation of fairness metrics in product pipelines.\n",
        "\n",
        "*   **[ML Metadata](https://www.tensorflow.org/tfx/guide/mlmd)** is a library for recording and retrieving the lineage and metadata of ML artifacts such as models, datasets, and metrics. Within TFX, ML Metadata will help us understand the artifacts created in a pipeline; an artifact is a unit of data that is passed between TFX components.\n",
        "\n",
        "*   **[TensorFlow Data Validation](https://www.tensorflow.org/tfx/guide/tfdv)** is a library to analyze your data and check for errors that can affect model training or serving.\n",
        "\n",
        "\n",
        "## Case Study Overview\n",
        "\n",
        "For the duration of this case study, we will define a “fairness concern” as a bias within a model that negatively impacts a slice of our data. Specifically, we are trying to limit any recidivism prediction that is biased with respect to race.\n",
        "\n",
        "The walk through of the case study will proceed as follows:\n",
        "\n",
        "1.   Download the data, preprocess it, and explore the initial dataset.\n",
        "2.   Build a TFX pipeline with the COMPAS dataset using a Keras binary classifier.\n",
        "3.   Run our results through TensorFlow Model Analysis, TensorFlow Data Validation, and load Fairness Indicators to explore any potential fairness concerns within our model.\n",
        "4.   Use ML Metadata to track all the artifacts for a model that we trained with TFX.\n",
        "5.   Weight the initial COMPAS dataset for our second model to account for the uneven distribution between recidivism and race.\n",
        "6.   Review the performance changes within the new dataset.\n",
        "7.   Check the underlying changes within our TFX pipeline with ML Metadata to understand what changes were made between the two models. \n",
        "\n",
        "## Helpful Resources\n",
        "This case study is an extension of the case studies below. We recommend working through them first.\n",
        "*    [TFX Pipeline Overview](https://github.com/tensorflow/workshops/blob/master/tfx_labs/Lab_1_Pipeline_in_Colab.ipynb)\n",
        "*    [Fairness Indicator Case Study](https://github.com/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_Example_Colab.ipynb)\n",
        "*    [TFX Data Validation](https://github.com/tensorflow/tfx/blob/master/tfx/examples/airflow_workshop/notebooks/step3.ipynb)\n",
        "\n",
        "\n",
        "## Setup\n",
        "To start, we will install the necessary packages, download the data, and import the required modules for the case study.\n",
        "\n",
        "To install the required packages for this case study in your notebook, run the pip command below.\n",
        "\n",
        "**Note:** See [here](https://github.com/tensorflow/tfx#compatible-versions) for a reference on compatibility between different versions of the libraries used in this case study.\n",
        "\n",
        "___\n",
        "\n",
        "1.  Wadsworth, C., Vera, F., Piech, C. (2017). Achieving Fairness Through Adversarial Learning: an Application to Recidivism Prediction. https://arxiv.org/abs/1807.00199.\n",
        "\n",
        "2.  Chouldechova, A., G’Sell, M., (2017). Fairer and more accurate, but for whom? https://arxiv.org/abs/1707.00046.\n",
        "\n",
        "3.  Berk et al., (2017), Fairness in Criminal Justice Risk Assessments: The State of the Art, https://arxiv.org/abs/1703.09207.\n",
        "\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "both",
        "id": "42BmC-ctlMjR"
      },
      "outputs": [],
      "source": [
        "!python -m pip install -q -U \\\n",
        "  tfx \\\n",
        "  tensorflow-model-analysis \\\n",
        "  tensorflow-data-validation \\\n",
        "  tensorflow-metadata \\\n",
        "  tensorflow-transform \\\n",
        "  ml-metadata \\\n",
        "  tfx-bsl"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "yeS4Xy2MlMjW",
        "scrolled": true
      },
      "outputs": [],
      "source": [
        "import os\n",
        "import tempfile\n",
        "import six.moves.urllib as urllib\n",
        "\n",
        "from ml_metadata.metadata_store import metadata_store\n",
        "from ml_metadata.proto import metadata_store_pb2\n",
        "\n",
        "import pandas as pd\n",
        "from google.protobuf import text_format\n",
        "from sklearn.utils import shuffle\n",
        "import tensorflow as tf\n",
        "import tensorflow_data_validation as tfdv\n",
        "\n",
        "import tensorflow_model_analysis as tfma\n",
        "from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators\n",
        "from tensorflow_model_analysis.addons.fairness.view import widget_view\n",
        "\n",
        "import tfx\n",
        "from tfx.components.evaluator.component import Evaluator\n",
        "from tfx.components.example_gen.csv_example_gen.component import CsvExampleGen\n",
        "from tfx.components.schema_gen.component import SchemaGen\n",
        "from tfx.components.statistics_gen.component import StatisticsGen\n",
        "from tfx.components.trainer.component import Trainer\n",
        "from tfx.components.transform.component import Transform\n",
        "from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext\n",
        "from tfx.proto import evaluator_pb2\n",
        "from tfx.proto import trainer_pb2"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "YZQLS05WlMjV"
      },
      "source": [
        "## Download and preprocess the dataset\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "7uOVs7WJlMjl"
      },
      "outputs": [],
      "source": [
        "# Download the COMPAS dataset and setup the required filepaths.\n",
        "_DATA_ROOT = tempfile.mkdtemp(prefix='tfx-data')\n",
        "_DATA_PATH = 'https://storage.googleapis.com/compas_dataset/cox-violent-parsed.csv'\n",
        "_DATA_FILEPATH = os.path.join(_DATA_ROOT, 'compas-scores-two-years.csv')\n",
        "\n",
        "data = urllib.request.urlopen(_DATA_PATH)\n",
        "_COMPAS_DF = pd.read_csv(data)\n",
        "\n",
        "# To simplify the case study, we will only use the columns that will be used\n",
        "# for our model.\n",
        "_COLUMN_NAMES = [\n",
        "  'age',\n",
        "  'c_charge_desc',\n",
        "  'c_charge_degree',\n",
        "  'c_days_from_compas',\n",
        "  'is_recid',\n",
        "  'juv_fel_count',\n",
        "  'juv_misd_count',\n",
        "  'juv_other_count',\n",
        "  'priors_count',\n",
        "  'r_days_from_arrest',\n",
        "  'race',\n",
        "  'sex',\n",
        "  'vr_charge_desc',\n",
        "]\n",
        "_COMPAS_DF = _COMPAS_DF[_COLUMN_NAMES]\n",
        "\n",
        "# We will use 'is_recid' as our ground truth label, a boolean value indicating\n",
        "# whether a defendant committed another crime. Rows with a value of -1 have no\n",
        "# data, so we will drop them from training.\n",
        "_COMPAS_DF = _COMPAS_DF[_COMPAS_DF['is_recid'] != -1]\n",
        "\n",
        "# Given the distribution between races in this dataset, we will only focus on\n",
        "# recidivism for African-Americans and Caucasians.\n",
        "_COMPAS_DF = _COMPAS_DF[\n",
        "  _COMPAS_DF['race'].isin(['African-American', 'Caucasian'])]\n",
        "\n",
        "# Add a sample weight feature that will be used during the second part of this\n",
        "# case study to help address fairness concerns.\n",
        "_COMPAS_DF['sample_weight'] = 0.8\n",
        "\n",
        "# Load the DataFrame back to a CSV file for our TFX model.\n",
        "_COMPAS_DF.to_csv(_DATA_FILEPATH, index=False, na_rep='')"
      ]
    },
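    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Before moving on, it can be helpful to confirm the label imbalance that motivates the reweighting later in this case study. The snippet below is a minimal sketch using a small synthetic DataFrame (the column names mirror the COMPAS columns above, but the values are made up for illustration); on the real `_COMPAS_DF` you would group by `race` in the same way."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import pandas as pd\n",
        "\n",
        "# Synthetic stand-in for _COMPAS_DF, for illustration only.\n",
        "demo_df = pd.DataFrame({\n",
        "    'race': ['African-American', 'African-American', 'Caucasian', 'Caucasian'],\n",
        "    'is_recid': [1, 0, 0, 0],\n",
        "})\n",
        "\n",
        "# Fraction of positive (recidivism) labels per race slice.\n",
        "positive_rate = demo_df.groupby('race')['is_recid'].mean()\n",
        "print(positive_rate)"
      ]
    },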
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "JyCQbe5RlMjn"
      },
      "source": [
        "## Building a TFX Pipeline\n",
        "\n",
        "---\n",
        "There are several [TFX Pipeline Components](https://www.tensorflow.org/tfx/guide#tfx_pipeline_components) that can be used for a production model, but for the purposes of this case study we will focus on only the components below: \n",
        "*   **ExampleGen** to read our dataset.\n",
        "*   **StatisticsGen** to calculate the statistics of our dataset.\n",
        "*   **SchemaGen** to create a data schema.\n",
        "*   **Transform** for feature engineering.\n",
        "*   **Trainer** to run our machine learning model.\n",
        "\n",
        "## Create the InteractiveContext\n",
        "\n",
        "To run TFX within a notebook, we first will need to create an `InteractiveContext` to run the components interactively. \n",
        "\n",
        "`InteractiveContext` will use a temporary directory with an ephemeral ML Metadata database instance. To use your own pipeline root or database, the optional properties `pipeline_root` and `metadata_connection_config` may be passed to `InteractiveContext`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "XVMS3Dz7xk8M"
      },
      "outputs": [],
      "source": [
        "context = InteractiveContext()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "NxAOGNCelMjq"
      },
      "source": [
        "### TFX ExampleGen Component\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "0hzCIDdblMjr"
      },
      "outputs": [],
      "source": [
        "# The ExampleGen TFX Pipeline component ingests data into TFX pipelines.\n",
        "# It consumes external files/services to generate Examples which will be read by\n",
        "# other TFX components. It also provides consistent and configurable\n",
        "# partitioning, and shuffles the dataset according to ML best practices.\n",
        "\n",
        "example_gen = CsvExampleGen(input_base=_DATA_ROOT)\n",
        "context.run(example_gen)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "SW23fvThlMjz"
      },
      "source": [
        "### TFX StatisticsGen Component\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "28D_qP3IlMj0",
        "scrolled": false
      },
      "outputs": [],
      "source": [
        "# The StatisticsGen TFX pipeline component generates feature statistics over\n",
        "# both training and serving data, which can be used by other pipeline\n",
        "# components. StatisticsGen uses Beam to scale to large datasets.\n",
        "\n",
        "statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])\n",
        "context.run(statistics_gen)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "a72E7hT5lMj9"
      },
      "source": [
        "### TFX SchemaGen Component"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "dkfTgKCBlMj9"
      },
      "outputs": [],
      "source": [
        "# Some TFX components use a description of your input data called a schema. The\n",
        "# schema is an instance of schema.proto. It can specify data types for feature\n",
        "# values, whether a feature has to be present in all examples, allowed value\n",
        "# ranges, and other properties. A SchemaGen pipeline component will\n",
        "# automatically generate a schema by inferring types, categories, and ranges\n",
        "# from the training data.\n",
        "\n",
        "infer_schema = SchemaGen(statistics=statistics_gen.outputs['statistics'])\n",
        "context.run(infer_schema)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "43z_COkolMkI"
      },
      "source": [
        "### TFX Transform Component\n",
        "\n",
        "The `Transform` component performs data transformations and feature engineering.  The results include an input TensorFlow graph which is used during both training and serving to preprocess the data before training or inference.  This graph becomes part of the SavedModel that is the result of model training.  Since the same input graph is used for both training and serving, the preprocessing will always be the same, and only needs to be written once.\n",
        "\n",
        "The Transform component requires more code than many other components because of the arbitrary complexity of the feature engineering that you may need for the data and/or model that you're working with.\n",
        "\n",
        "Define some constants and functions for both the `Transform` component and the `Trainer` component.  Define them in a Python module, in this case saved to disk using the `%%writefile` magic command since you are working in a notebook.\n",
        "\n",
        "The transformations that we will perform in this case study are as follows:\n",
        "*   For string values, generate a vocabulary that maps each value to an integer via `tft.compute_and_apply_vocabulary`.\n",
        "*   For integer values, standardize the column to mean 0 and variance 1 via `tft.scale_to_z_score`.\n",
        "*   Replace missing values with an empty string or 0, depending on the feature type.\n",
        "*   Append `_xf` to column names to denote the features that were processed in the `Transform` component.\n",
        "\n",
        "\n",
        "Now let's define a module containing the `preprocessing_fn()` function that we will pass to the `Transform` component:"
      ]
    },
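    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick illustration of what `tft.scale_to_z_score` does to each integer feature, the sketch below applies the same standardization by hand with NumPy: shift by the column mean and divide by the column standard deviation, yielding mean 0 and variance 1. (`tft.scale_to_z_score` computes these statistics over the full dataset in a Beam pass; this sketch only shows the arithmetic.)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import numpy as np\n",
        "\n",
        "# Hand-rolled z-score standardization mirroring tft.scale_to_z_score.\n",
        "values = np.array([18.0, 25.0, 32.0, 45.0])  # e.g. an 'age'-like column\n",
        "z_scores = (values - values.mean()) / values.std()\n",
        "\n",
        "# The standardized column has (approximately) mean 0 and variance 1.\n",
        "print(z_scores.mean(), z_scores.var())"
      ]
    },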
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "83MZZqUQlMkJ"
      },
      "outputs": [],
      "source": [
        "# Setup paths for the Transform Component.\n",
        "_transform_module_file = 'compas_transform.py'"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "NLzxWiOBlMkL"
      },
      "outputs": [],
      "source": [
        "%%writefile {_transform_module_file}\n",
        "import tensorflow as tf\n",
        "import tensorflow_transform as tft\n",
        "\n",
        "CATEGORICAL_FEATURE_KEYS = [\n",
        "    'sex',\n",
        "    'race',\n",
        "    'c_charge_desc',\n",
        "    'c_charge_degree',\n",
        "]\n",
        "\n",
        "INT_FEATURE_KEYS = [\n",
        "    'age',\n",
        "    'c_days_from_compas',\n",
        "    'juv_fel_count',\n",
        "    'juv_misd_count',\n",
        "    'juv_other_count',\n",
        "    'priors_count',\n",
        "    'sample_weight',\n",
        "]\n",
        "\n",
        "LABEL_KEY = 'is_recid'\n",
        "\n",
        "# List of the unique values for the items within CATEGORICAL_FEATURE_KEYS.\n",
        "MAX_CATEGORICAL_FEATURE_VALUES = [\n",
        "    2,\n",
        "    6,\n",
        "    513,\n",
        "    14,\n",
        "]\n",
        "\n",
        "\n",
        "def transformed_name(key):\n",
        "  return '{}_xf'.format(key)\n",
        "\n",
        "\n",
        "def preprocessing_fn(inputs):\n",
        "  \"\"\"tf.transform's callback function for preprocessing inputs.\n",
        "\n",
        "  Args:\n",
        "    inputs: Map from feature keys to raw features.\n",
        "\n",
        "  Returns:\n",
        "    Map from string feature key to transformed feature operations.\n",
        "  \"\"\"\n",
        "  outputs = {}\n",
        "  for key in CATEGORICAL_FEATURE_KEYS:\n",
        "    outputs[transformed_name(key)] = tft.compute_and_apply_vocabulary(\n",
        "        _fill_in_missing(inputs[key]),\n",
        "        vocab_filename=key)\n",
        "\n",
        "  for key in INT_FEATURE_KEYS:\n",
        "    outputs[transformed_name(key)] = tft.scale_to_z_score(\n",
        "        _fill_in_missing(inputs[key]))\n",
        "\n",
        "  # The target label indicates whether the defendant was charged with another\n",
        "  # crime.\n",
        "  outputs[transformed_name(LABEL_KEY)] = _fill_in_missing(inputs[LABEL_KEY])\n",
        "  return outputs\n",
        "\n",
        "\n",
        "def _fill_in_missing(tensor_value):\n",
        "  \"\"\"Replaces missing values in a SparseTensor.\n",
        "\n",
        "  Fills in missing values of `tensor_value` with '' or 0, and converts to a\n",
        "  dense tensor.\n",
        "\n",
        "  Args:\n",
        "    tensor_value: A `SparseTensor` of rank 2. Its dense shape should have size\n",
        "      at most 1 in the second dimension.\n",
        "\n",
        "  Returns:\n",
        "    A rank 1 tensor where missing values of `tensor_value` are filled in.\n",
        "  \"\"\"\n",
        "  if not isinstance(tensor_value, tf.sparse.SparseTensor):\n",
        "    return tensor_value\n",
        "  default_value = '' if tensor_value.dtype == tf.string else 0\n",
        "  sparse_tensor = tf.SparseTensor(\n",
        "      tensor_value.indices,\n",
        "      tensor_value.values,\n",
        "      [tensor_value.dense_shape[0], 1])\n",
        "  dense_tensor = tf.sparse.to_dense(sparse_tensor, default_value)\n",
        "  return tf.squeeze(dense_tensor, axis=1)\n"
      ]
    },
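    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To see what `_fill_in_missing` in the module above does, the standalone sketch below re-implements the same logic and applies it to a tiny `SparseTensor` in which the middle row has no value; the missing entry is filled with the default of 0 (it would be `''` for string features)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import tensorflow as tf\n",
        "\n",
        "\n",
        "def fill_in_missing(tensor_value):\n",
        "  \"\"\"Standalone copy of _fill_in_missing from the transform module.\"\"\"\n",
        "  if not isinstance(tensor_value, tf.sparse.SparseTensor):\n",
        "    return tensor_value\n",
        "  default_value = '' if tensor_value.dtype == tf.string else 0\n",
        "  sparse_tensor = tf.SparseTensor(\n",
        "      tensor_value.indices,\n",
        "      tensor_value.values,\n",
        "      [tensor_value.dense_shape[0], 1])\n",
        "  return tf.squeeze(tf.sparse.to_dense(sparse_tensor, default_value), axis=1)\n",
        "\n",
        "\n",
        "# Rows 0 and 2 have values; row 1 is missing and is filled with 0.\n",
        "sparse = tf.sparse.SparseTensor(\n",
        "    indices=[[0, 0], [2, 0]], values=[5, 7], dense_shape=[3, 1])\n",
        "print(fill_in_missing(sparse).numpy())"
      ]
    },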
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "5yzFOQrPlMkM"
      },
      "outputs": [],
      "source": [
        "# Build and run the Transform Component.\n",
        "transform = Transform(\n",
        "    examples=example_gen.outputs['examples'],\n",
        "    schema=infer_schema.outputs['schema'],\n",
        "    module_file=_transform_module_file\n",
        ")\n",
        "context.run(transform)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "A_ubj158lMkP"
      },
      "source": [
        "### TFX Trainer Component\n",
        "The `Trainer` component trains a specified TensorFlow model.\n",
        "\n",
        "In order to run the `Trainer` component, we need to create a Python module containing a `trainer_fn` function that TFX will call to return an estimator for our model. If you prefer creating a Keras model, you can do so and then convert it to an estimator using `keras.model_to_estimator()`.\n",
        "\n",
        "For our case study we will build a Keras model and return it wrapped with [`keras.model_to_estimator()`](https://www.tensorflow.org/api_docs/python/tf/keras/estimator/model_to_estimator)."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "K9zxx6CnlMkQ"
      },
      "outputs": [],
      "source": [
        "# Setup paths for the Trainer Component.\n",
        "_trainer_module_file = 'compas_trainer.py'"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "KhuwfYIRlMkR",
        "scrolled": true
      },
      "outputs": [],
      "source": [
        "%%writefile {_trainer_module_file}\n",
        "import tensorflow as tf\n",
        "\n",
        "import tensorflow_model_analysis as tfma\n",
        "import tensorflow_transform as tft\n",
        "from tensorflow_transform.tf_metadata import schema_utils\n",
        "\n",
        "from compas_transform import *\n",
        "\n",
        "_BATCH_SIZE = 1000\n",
        "_LEARNING_RATE = 0.00001\n",
        "_MAX_CHECKPOINTS = 1\n",
        "_SAVE_CHECKPOINT_STEPS = 999\n",
        "\n",
        "\n",
        "def transformed_names(keys):\n",
        "  return [transformed_name(key) for key in keys]\n",
        "\n",
        "\n",
        "def transformed_name(key):\n",
        "  return '{}_xf'.format(key)\n",
        "\n",
        "\n",
        "def _gzip_reader_fn(filenames):\n",
        "  \"\"\"Returns a record reader that can read gzip'ed files.\n",
        "\n",
        "  Args:\n",
        "    filenames: A tf.string tensor or tf.data.Dataset containing one or more\n",
        "      filenames.\n",
        "\n",
        "  Returns:\n",
        "    A TFRecordDataset that reads gzip-compressed TFRecord files.\n",
        "  \"\"\"\n",
        "  return tf.data.TFRecordDataset(filenames, compression_type='GZIP')\n",
        "\n",
        "\n",
        "# Tf.Transform considers these features as \"raw\".\n",
        "def _get_raw_feature_spec(schema):\n",
        "  \"\"\"Generates a feature spec from a Schema proto.\n",
        "\n",
        "  Args:\n",
        "    schema: A Schema proto.\n",
        "\n",
        "  Returns:\n",
        "    A feature spec defined as a dict whose keys are feature names and values are\n",
        "      instances of FixedLenFeature, VarLenFeature or SparseFeature.\n",
        "  \"\"\"\n",
        "  return schema_utils.schema_as_feature_spec(schema).feature_spec\n",
        "\n",
        "\n",
        "def _example_serving_receiver_fn(tf_transform_output, schema):\n",
        "  \"\"\"Builds the serving inputs.\n",
        "\n",
        "  Args:\n",
        "    tf_transform_output: A TFTransformOutput.\n",
        "    schema: the schema of the input data.\n",
        "\n",
        "  Returns:\n",
        "    TensorFlow graph which parses examples, applying tf-transform to them.\n",
        "  \"\"\"\n",
        "  raw_feature_spec = _get_raw_feature_spec(schema)\n",
        "  raw_feature_spec.pop(LABEL_KEY)\n",
        "\n",
        "  raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(\n",
        "      raw_feature_spec)\n",
        "  serving_input_receiver = raw_input_fn()\n",
        "\n",
        "  transformed_features = tf_transform_output.transform_raw_features(\n",
        "      serving_input_receiver.features)\n",
        "  transformed_features.pop(transformed_name(LABEL_KEY))\n",
        "  return tf.estimator.export.ServingInputReceiver(\n",
        "      transformed_features, serving_input_receiver.receiver_tensors)\n",
        "\n",
        "\n",
        "def _eval_input_receiver_fn(tf_transform_output, schema):\n",
        "  \"\"\"Builds everything needed for the tf-model-analysis to run the model.\n",
        "\n",
        "  Args:\n",
        "    tf_transform_output: A TFTransformOutput.\n",
        "    schema: the schema of the input data.\n",
        "\n",
        "  Returns:\n",
        "    EvalInputReceiver function, which contains:\n",
        "      - TensorFlow graph which parses raw untransformed features, applies the\n",
        "          tf-transform preprocessing operators.\n",
        "      - Set of raw, untransformed features.\n",
        "      - Label against which predictions will be compared.\n",
        "  \"\"\"\n",
        "  # Notice that the inputs are raw features, not transformed features here.\n",
        "  raw_feature_spec = _get_raw_feature_spec(schema)\n",
        "\n",
        "  serialized_tf_example = tf.compat.v1.placeholder(\n",
        "      dtype=tf.string, shape=[None], name='input_example_tensor')\n",
        "\n",
        "  # Add a parse_example operator to the tensorflow graph, which will parse\n",
        "  # raw, untransformed, tf examples.\n",
        "  features = tf.io.parse_example(\n",
        "      serialized=serialized_tf_example, features=raw_feature_spec)\n",
        "\n",
        "  transformed_features = tf_transform_output.transform_raw_features(features)\n",
        "  labels = transformed_features.pop(transformed_name(LABEL_KEY))\n",
        "\n",
        "  receiver_tensors = {'examples': serialized_tf_example}\n",
        "\n",
        "  return tfma.export.EvalInputReceiver(\n",
        "      features=transformed_features,\n",
        "      receiver_tensors=receiver_tensors,\n",
        "      labels=labels)\n",
        "\n",
        "\n",
        "def _input_fn(filenames, tf_transform_output, batch_size=200):\n",
        "  \"\"\"Generates features and labels for training or evaluation.\n",
        "\n",
        "  Args:\n",
        "    filenames: List of TFRecord files to read data from.\n",
        "    tf_transform_output: A TFTransformOutput.\n",
        "    batch_size: First dimension size of the Tensors returned by input_fn.\n",
        "\n",
        "  Returns:\n",
        "    A (features, indices) tuple where features is a dictionary of\n",
        "      Tensors, and indices is a single Tensor of label indices.\n",
        "  \"\"\"\n",
        "  transformed_feature_spec = (\n",
        "      tf_transform_output.transformed_feature_spec().copy())\n",
        "\n",
        "  dataset = tf.compat.v1.data.experimental.make_batched_features_dataset(\n",
        "      filenames,\n",
        "      batch_size,\n",
        "      transformed_feature_spec,\n",
        "      shuffle=False,\n",
        "      reader=_gzip_reader_fn)\n",
        "\n",
        "  transformed_features = dataset.make_one_shot_iterator().get_next()\n",
        "\n",
        "  # We pop the label because we do not want to use it as a feature while we're\n",
        "  # training.\n",
        "  return transformed_features, transformed_features.pop(\n",
        "      transformed_name(LABEL_KEY))\n",
        "\n",
        "\n",
        "def _keras_model_builder():\n",
        "  \"\"\"Build a keras model for COMPAS dataset classification.\n",
        "  \n",
        "  Returns:\n",
        "    A compiled Keras model.\n",
        "  \"\"\"\n",
        "  feature_columns = []\n",
        "  feature_layer_inputs = {}\n",
        "\n",
        "  for key in transformed_names(INT_FEATURE_KEYS):\n",
        "    feature_columns.append(tf.feature_column.numeric_column(key))\n",
        "    feature_layer_inputs[key] = tf.keras.Input(shape=(1,), name=key)\n",
        "\n",
        "  for key, num_buckets in zip(transformed_names(CATEGORICAL_FEATURE_KEYS),\n",
        "                              MAX_CATEGORICAL_FEATURE_VALUES):\n",
        "    feature_columns.append(\n",
        "        tf.feature_column.indicator_column(\n",
        "            tf.feature_column.categorical_column_with_identity(\n",
        "                key, num_buckets=num_buckets)))\n",
        "    feature_layer_inputs[key] = tf.keras.Input(\n",
        "        shape=(1,), name=key, dtype=tf.dtypes.int32)\n",
        "\n",
        "  feature_columns_input = tf.keras.layers.DenseFeatures(feature_columns)\n",
        "  feature_layer_outputs = feature_columns_input(feature_layer_inputs)\n",
        "\n",
        "  dense_layers = tf.keras.layers.Dense(\n",
        "      20, activation='relu', name='dense_1')(feature_layer_outputs)\n",
        "  dense_layers = tf.keras.layers.Dense(\n",
        "      10, activation='relu', name='dense_2')(dense_layers)\n",
        "  output = tf.keras.layers.Dense(\n",
        "      1, name='predictions')(dense_layers)\n",
        "\n",
        "  model = tf.keras.Model(\n",
        "      inputs=[v for v in feature_layer_inputs.values()], outputs=output)\n",
        "\n",
        "  model.compile(\n",
        "      loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n",
        "      optimizer=tf.optimizers.Adam(learning_rate=_LEARNING_RATE))\n",
        "\n",
        "  return model\n",
        "\n",
        "\n",
        "# TFX will call this function.\n",
        "def trainer_fn(hparams, schema):\n",
        "  \"\"\"Build the estimator using the high level API.\n",
        "\n",
        "  Args:\n",
        "    hparams: Hyperparameters used to train the model as name/value pairs.\n",
        "    schema: Holds the schema of the training examples.\n",
        "\n",
        "  Returns:\n",
        "    A dict of the following:\n",
        "      - estimator: The estimator that will be used for training and eval.\n",
        "      - train_spec: Spec for training.\n",
        "      - eval_spec: Spec for eval.\n",
        "      - eval_input_receiver_fn: Input function for eval.\n",
        "  \"\"\"\n",
        "  tf_transform_output = tft.TFTransformOutput(hparams.transform_output)\n",
        "\n",
        "  train_input_fn = lambda: _input_fn(\n",
        "      hparams.train_files,\n",
        "      tf_transform_output,\n",
        "      batch_size=_BATCH_SIZE)\n",
        "\n",
        "  eval_input_fn = lambda: _input_fn(\n",
        "      hparams.eval_files,\n",
        "      tf_transform_output,\n",
        "      batch_size=_BATCH_SIZE)\n",
        "\n",
        "  train_spec = tf.estimator.TrainSpec(\n",
        "      train_input_fn,\n",
        "      max_steps=hparams.train_steps)\n",
        "\n",
        "  serving_receiver_fn = lambda: _example_serving_receiver_fn(\n",
        "      tf_transform_output, schema)\n",
        "\n",
        "  exporter = tf.estimator.FinalExporter('compas', serving_receiver_fn)\n",
        "  eval_spec = tf.estimator.EvalSpec(\n",
        "      eval_input_fn,\n",
        "      steps=hparams.eval_steps,\n",
        "      exporters=[exporter],\n",
        "      name='compas-eval')\n",
        "\n",
        "  run_config = tf.estimator.RunConfig(\n",
        "      save_checkpoints_steps=_SAVE_CHECKPOINT_STEPS,\n",
        "      keep_checkpoint_max=_MAX_CHECKPOINTS)\n",
        "\n",
        "  run_config = run_config.replace(model_dir=hparams.serving_model_dir)\n",
        "\n",
        "  estimator = tf.keras.estimator.model_to_estimator(\n",
        "      keras_model=_keras_model_builder(), config=run_config)\n",
        "\n",
        "  # Create an input receiver for TFMA processing.\n",
        "  receiver_fn = lambda: _eval_input_receiver_fn(tf_transform_output, schema)\n",
        "\n",
        "  return {\n",
        "      'estimator': estimator,\n",
        "      'train_spec': train_spec,\n",
        "      'eval_spec': eval_spec,\n",
        "      'eval_input_receiver_fn': receiver_fn\n",
        "  }"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "oiC1wABllMkU",
        "scrolled": false
      },
      "outputs": [],
      "source": [
        "# Uses user-provided Python function that implements a model using TensorFlow's\n",
        "# Estimators API.\n",
        "trainer = Trainer(\n",
        "    module_file=_trainer_module_file,\n",
        "    transformed_examples=transform.outputs['transformed_examples'],\n",
        "    schema=infer_schema.outputs['schema'],\n",
        "    transform_graph=transform.outputs['transform_graph'],\n",
        "    train_args=trainer_pb2.TrainArgs(num_steps=10000),\n",
        "    eval_args=trainer_pb2.EvalArgs(num_steps=5000)\n",
        ")\n",
        "context.run(trainer)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "0tfnGpl2lMkv"
      },
      "source": [
        "## TensorFlow Model Analysis\n",
        "\n",
        "Now that our model is developed and trained within TFX, we can use several additional components within the TFX ecosystem to understand our model's performance in a little more detail. By looking at different metrics we're able to get a better picture of how the overall model performs for different slices of our data, to make sure our model is not underperforming for any subgroup.\n",
        "\n",
        "First we'll examine TensorFlow Model Analysis, which is a library for evaluating TensorFlow models. It allows users to evaluate their models on large amounts of data in a distributed manner, using the same metrics defined in their trainer. These metrics can be computed over different slices of data and visualized in a notebook.\n",
        "\n",
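        "As an illustration of what \"computing a metric over slices\" means, here is a minimal, framework-free sketch (plain Python with hypothetical data, not the TFMA API) that computes binary accuracy per slice:\n",
        "\n",
        "```python\n",
        "# Hypothetical examples: a slicing feature, a true label, and a model score.\n",
        "examples = [\n",
        "    {'race': 'A', 'label': 1, 'pred': 0.9},\n",
        "    {'race': 'A', 'label': 0, 'pred': 0.2},\n",
        "    {'race': 'B', 'label': 1, 'pred': 0.4},\n",
        "    {'race': 'B', 'label': 0, 'pred': 0.1},\n",
        "]\n",
        "\n",
        "def binary_accuracy(rows, threshold=0.5):\n",
        "  # Fraction of rows where the thresholded score matches the label.\n",
        "  correct = sum((r['pred'] >= threshold) == bool(r['label']) for r in rows)\n",
        "  return correct / len(rows)\n",
        "\n",
        "# Group examples by the slicing feature, then compute the metric per slice.\n",
        "slices = {}\n",
        "for row in examples:\n",
        "  slices.setdefault(row['race'], []).append(row)\n",
        "per_slice = {key: binary_accuracy(rows) for key, rows in slices.items()}\n",
        "# per_slice -> {'A': 1.0, 'B': 0.5}\n",
        "```\n",
        "\n",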
        "For a list of possible metrics that can be added into TensorFlow Model Analysis see [here](https://github.com/tensorflow/model-analysis/blob/master/g3doc/metrics.md).\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "i8VdZ4z3lMk0"
      },
      "outputs": [],
      "source": [
        "# Uses TensorFlow Model Analysis to compute evaluation statistics over\n",
        "# features of a model.\n",
        "model_analyzer = Evaluator(\n",
        "    examples=example_gen.outputs['examples'],\n",
        "    model=trainer.outputs['model'],\n",
        "\n",
        "    eval_config = text_format.Parse(\"\"\"\n",
        "      model_specs {\n",
        "        label_key: 'is_recid'\n",
        "      }\n",
        "      metrics_specs {\n",
        "        metrics {class_name: \"BinaryAccuracy\"}\n",
        "        metrics {class_name: \"AUC\"}\n",
        "        metrics {\n",
        "          class_name: \"FairnessIndicators\"\n",
        "          config: '{\"thresholds\": [0.25, 0.5, 0.75]}'\n",
        "        }\n",
        "      }\n",
        "      slicing_specs {\n",
        "        feature_keys: 'race'\n",
        "      }\n",
        "    \"\"\", tfma.EvalConfig())\n",
        ")\n",
        "context.run(model_analyzer)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "gXGxPEAnBkUM"
      },
      "source": [
        "## Fairness Indicators\n",
        "\n",
        "Load Fairness Indicators to examine the underlying data."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "4ZgUtH_OBg2x"
      },
      "outputs": [],
      "source": [
        "evaluation_uri = model_analyzer.outputs['evaluation'].get()[0].uri\n",
        "eval_result = tfma.load_eval_result(evaluation_uri)\n",
        "tfma.addons.fairness.view.widget_view.render_fairness_indicator(eval_result)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "igoChEEblMk4"
      },
      "source": [
        "Fairness Indicators allows us to drill down into the performance of different slices and is designed to support teams in evaluating and improving models for fairness concerns. It enables easy computation of fairness metrics for binary and multiclass classifiers and scales to use cases of any size.\n",
        "\n",
        "We will load Fairness Indicators into this notebook and analyze the results. After you have explored Fairness Indicators for a moment, examine the False Positive Rate and False Negative Rate tabs in the tool. In this case study, we're concerned with reducing the number of false predictions of recidivism, which corresponds to the [False Positive Rate](https://en.wikipedia.org/wiki/Receiver_operating_characteristic).\n",
        "\n",
        "![Type I and Type II errors](http://services.google.com/fh/gumdrop/preview/blogs/type_i_type_ii.png)\n",
        "\n",
        "Within the Fairness Indicators tool you'll see two dropdown options:\n",
        "1.   A \"Baseline\" option that is set by `column_for_slicing`.\n",
        "2.   A \"Thresholds\" option that is set by `fairness_indicator_thresholds`.\n",
        "\n",
        "“Baseline” is the slice you want to compare all other slices to. Most commonly it is the overall slice, but it can also be one of the specific slices.\n",
        "\n",
        "\"Threshold\" is the score cutoff a binary classification model uses to decide whether a prediction is classified as positive or negative. When setting a threshold, there are two things to keep in mind.\n",
        "\n",
        "1.   Precision: What is the downside if your prediction results in a Type I error? In this case study, a lower threshold means we're predicting that more defendants *will* commit another crime, including some who actually *don't*.\n",
        "2.   Recall: What is the downside of a Type II error? In this case study a higher threshold would mean we're predicting more defendants *will not* commit another crime when they actually *do*.\n",
        "\n",
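        "To make the threshold trade-off concrete, here is a small illustrative sketch (hypothetical scores and labels, not the COMPAS data) that counts Type I and Type II errors at two thresholds:\n",
        "\n",
        "```python\n",
        "# Hypothetical prediction scores and true labels (1 = recidivated).\n",
        "scores = [0.2, 0.4, 0.6, 0.8]\n",
        "labels = [0, 1, 0, 1]\n",
        "\n",
        "def error_counts(scores, labels, threshold):\n",
        "  preds = [s >= threshold for s in scores]\n",
        "  fp = sum(p and not y for p, y in zip(preds, labels))  # Type I errors\n",
        "  fn = sum(y and not p for p, y in zip(preds, labels))  # Type II errors\n",
        "  return fp, fn\n",
        "\n",
        "# A lower threshold produces more false positives, a higher one more\n",
        "# false negatives.\n",
        "error_counts(scores, labels, 0.25)  # -> (1, 0)\n",
        "error_counts(scores, labels, 0.75)  # -> (0, 1)\n",
        "```\n",
        "\n",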
        "We will set an arbitrary threshold of 0.75 and focus only on the fairness metrics for African-American and Caucasian defendants, since the sample sizes for the other races aren't large enough to draw statistically significant conclusions.\n",
        "\n",
        "The exact rates below might differ slightly based on how the data was shuffled at the beginning of this case study, but take a look at the difference between African-American and Caucasian defendants. At a lower threshold our model is more likely to predict that a Caucasian defendant will commit a second crime compared to an African-American defendant. However, this prediction inverts as we increase our threshold.\n",
        "\n",
        "* **False Positive Rate @ 0.75**\n",
        "  * **African-American:** ~30%\n",
        "     * AUC: 0.71\n",
        "     * Binary Accuracy: 0.67\n",
        "  * **Caucasian:** ~8%\n",
        "     * AUC: 0.71\n",
        "     * Binary Accuracy: 0.67\n",
        "\n",
        "More information on Type I/II errors and threshold setting can be found [here](https://developers.google.com/machine-learning/crash-course/classification/thresholding).\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "Mpbs4x9dB2PA"
      },
      "source": [
        "## ML Metadata\n",
        "\n",
        "To understand where disparity could be coming from and to take a snapshot of our current model, we can use ML Metadata for recording and retrieving metadata associated with our model. ML Metadata is an integral part of TFX, but is designed so that it can be used independently.\n",
        "\n",
        "For this case study, we will list all the artifacts developed earlier in this notebook. By cycling through the artifacts, executions, and contexts we get a high-level view of our TFX pipeline and can dig into where any potential issues are coming from. This provides a baseline overview of how our model was developed and which TFX components helped develop the initial model.\n",
        "\n",
        "We will start by laying out the high-level artifact, execution, and context types in our pipeline.\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "0wjiFKOxlMkn"
      },
      "outputs": [],
      "source": [
        "# Connect to the TFX database.\n",
        "connection_config = metadata_store_pb2.ConnectionConfig()\n",
        "\n",
        "connection_config.sqlite.filename_uri = os.path.join(\n",
        "  context.pipeline_root, 'metadata.sqlite')\n",
        "store = metadata_store.MetadataStore(connection_config)\n",
        "\n",
        "def _mlmd_type_to_dataframe(mlmd_type):\n",
        "  \"\"\"Helper function to turn MLMD into a Pandas DataFrame.\n",
        "\n",
        "  Args:\n",
        "    mlmd_type: Metadata store type.\n",
        "\n",
        "  Returns:\n",
        "    DataFrame containing type ID, Name, and Properties.\n",
        "  \"\"\"\n",
        "  pd.set_option('display.max_columns', None)  \n",
        "  pd.set_option('display.expand_frame_repr', False)\n",
        "\n",
        "  column_names = ['ID', 'Name', 'Properties']\n",
        "  df = pd.DataFrame(columns=column_names)\n",
        "  for a_type in mlmd_type:\n",
        "    mlmd_row = pd.DataFrame([[a_type.id, a_type.name, a_type.properties]],\n",
        "                            columns=column_names)\n",
        "    df = pd.concat([df, mlmd_row])\n",
        "  return df\n",
        "\n",
        "# ML Metadata stores strong-typed Artifacts, Executions, and Contexts.\n",
        "# First, we can use type APIs to understand what is defined in ML Metadata\n",
        "# by the current version of TFX. We'll be able to view all the previous runs\n",
        "# that created our initial model.\n",
        "print('Artifact Types:')\n",
        "display(_mlmd_type_to_dataframe(store.get_artifact_types()))\n",
        "\n",
        "print('\\nExecution Types:')\n",
        "display(_mlmd_type_to_dataframe(store.get_execution_types()))\n",
        "\n",
        "print('\\nContext Types:')\n",
        "display(_mlmd_type_to_dataframe(store.get_context_types()))\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "lJQoer33ZEXD"
      },
      "source": [
        "## Identify where the fairness issue could be coming from\n",
        "\n",
        "For each of the above artifacts, execution, and context types we can use ML Metadata to dig into the attributes and how each part of our ML pipeline was developed.\n",
        "\n",
        "We'll start by diving into the `StatisticsGen` to examine the underlying data that we initially fed into the model. By knowing the artifacts within our model we can use ML Metadata and TensorFlow Data Validation to look backward and forward within the model to identify where a potential problem is coming from.\n",
        "\n",
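        "Before visualizing, it may help to see how the lift statistic is computed; here is a minimal sketch (plain Python with hypothetical labels, not the COMPAS data):\n",
        "\n",
        "```python\n",
        "# Minimal sketch of lift: lift(Y=1 | slice) = P(Y=1 | slice) / P(Y=1).\n",
        "rows = [('A', 1), ('A', 1), ('A', 0), ('B', 1), ('B', 0), ('B', 0)]\n",
        "\n",
        "overall_rate = sum(label for _, label in rows) / len(rows)  # 0.5\n",
        "\n",
        "def lift(slice_key):\n",
        "  slice_labels = [label for key, label in rows if key == slice_key]\n",
        "  return (sum(slice_labels) / len(slice_labels)) / overall_rate\n",
        "\n",
        "lift('A')  # (2/3) / 0.5 -> ~1.33, label is over-represented in this slice\n",
        "lift('B')  # (1/3) / 0.5 -> ~0.67, label is under-represented in this slice\n",
        "```\n",
        "\n",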
        "After running the cell below, select `Lift (Y=1)` in the second chart on the `Chart to show` tab to see the [lift](https://en.wikipedia.org/wiki/Lift_(data_mining)) between the different data slices. Within `race`, the lift for African-American is approximately 1.08 whereas for Caucasian it is approximately 0.86."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "xvcw9KL0byeY"
      },
      "outputs": [],
      "source": [
        "statistics_gen = StatisticsGen(\n",
        "    examples=example_gen.outputs['examples'],\n",
        "    schema=infer_schema.outputs['schema'],\n",
        "    stats_options=tfdv.StatsOptions(label_feature='is_recid'))\n",
        "exec_result = context.run(statistics_gen)\n",
        "\n",
        "for event in store.get_events_by_execution_ids([exec_result.execution_id]):\n",
        "  if event.path.steps[0].key == 'statistics':\n",
        "    statistics_w_schema_uri = store.get_artifacts_by_id([event.artifact_id])[0].uri\n",
        "\n",
        "model_stats = tfdv.load_statistics(\n",
        "    os.path.join(statistics_w_schema_uri, 'eval/stats_tfrecord/'))\n",
        "tfdv.visualize_statistics(model_stats)\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "ofWXz48zzlGT"
      },
      "source": [
        "## Tracking a Model Change\n",
        "\n",
        "Now that we have an idea of how we could improve the fairness of our model, we will first document our initial run within ML Metadata for our own records and for anyone else who might review our changes in the future.\n",
        "\n",
        "ML Metadata can keep a log of our past models along with any notes that we would like to add between runs. We'll add a simple note on our first run denoting that it was trained on the full COMPAS dataset."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "GCQ-7kzMRbXM"
      },
      "outputs": [],
      "source": [
        "_MODEL_NOTE_TO_ADD = 'First model that contains fairness concerns in the model.'\n",
        "\n",
        "first_trained_model = store.get_artifacts_by_type('Model')[-1]\n",
        "\n",
        "# Add the note above to the ML Metadata store.\n",
        "first_trained_model.custom_properties['note'].string_value = _MODEL_NOTE_TO_ADD\n",
        "store.put_artifacts([first_trained_model])\n",
        "\n",
        "def _mlmd_model_to_dataframe(model, model_number):\n",
        "  \"\"\"Helper function to turn an MLMD model into a Pandas DataFrame.\n",
        "\n",
        "  Args:\n",
        "    model: Metadata store model.\n",
        "    model_number: Number of model run within ML Metadata.\n",
        "\n",
        "  Returns:\n",
        "    DataFrame containing the ML Metadata model.\n",
        "  \"\"\"\n",
        "  pd.set_option('display.max_columns', None)  \n",
        "  pd.set_option('display.expand_frame_repr', False)\n",
        "\n",
        "  df = pd.DataFrame()\n",
        "  custom_properties = ['name', 'note', 'state', 'producer_component',\n",
        "                       'pipeline_name']\n",
        "  df['id'] = [model[model_number].id]\n",
        "  df['uri'] = [model[model_number].uri]\n",
        "  for prop in custom_properties:\n",
        "    df[prop] = model[model_number].custom_properties.get(prop)\n",
        "    df[prop] = df[prop].astype(str).map(\n",
        "        lambda x: x.lstrip('string_value: \"').rstrip('\"\\n'))\n",
        "  return df\n",
        "\n",
        "# Print the current model to see the results of the ML Metadata for the model.\n",
        "display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), 0))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "-gwiNtcoeO8S"
      },
      "source": [
        "## Improving fairness concerns by weighting the model\n",
        "\n",
        "\n",
        "There are several ways to approach fixing fairness concerns within a model. Manipulating the observed data/labels, implementing fairness constraints, and prejudice removal by regularization are some techniques\u003csup\u003e1\u003c/sup\u003e that have been used to address fairness concerns. In this case study we will reweight the model by implementing a custom loss function in Keras.\n",
        "\n",
        "The code below is the same as the Trainer module above, except for a new class called `LogisticEndpoint` that we will use as our loss within Keras, and a few parameter changes.\n",
        "\n",
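        "The intuition behind reweighting can be sketched without Keras: a per-example weight scales each example's contribution to the loss, so up-weighting a slice makes the optimizer pay more attention to its errors. A minimal sketch with hypothetical numbers:\n",
        "\n",
        "```python\n",
        "import math\n",
        "\n",
        "def weighted_bce(y_true, y_prob, weights):\n",
        "  # Weighted binary cross-entropy: each example's loss is scaled by its weight.\n",
        "  total = sum(-w * (y * math.log(p) + (1 - y) * math.log(1 - p))\n",
        "              for y, p, w in zip(y_true, y_prob, weights))\n",
        "  return total / sum(weights)\n",
        "\n",
        "y_true = [1, 0]\n",
        "y_prob = [0.9, 0.6]  # the second example is misclassified at threshold 0.5\n",
        "uniform = weighted_bce(y_true, y_prob, [1.0, 1.0])\n",
        "upweighted = weighted_bce(y_true, y_prob, [1.0, 2.0])  # emphasize example 2\n",
        "\n",
        "# Up-weighting the misclassified example increases the loss it contributes,\n",
        "# so training is pushed harder to fix it: upweighted > uniform.\n",
        "```\n",
        "\n",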
        "___\n",
        "\n",
        "1.  Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, A. (2019). A Survey on Bias and Fairness in Machine Learning. https://arxiv.org/pdf/1908.09635.pdf\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "yzLWm3-1Zjvv"
      },
      "outputs": [],
      "source": [
        "%%writefile {_trainer_module_file}\n",
        "import numpy as np\n",
        "import tensorflow as tf\n",
        "\n",
        "import tensorflow_model_analysis as tfma\n",
        "import tensorflow_transform as tft\n",
        "from tensorflow_transform.tf_metadata import schema_utils\n",
        "\n",
        "from compas_transform import *\n",
        "\n",
        "_BATCH_SIZE = 1000\n",
        "_LEARNING_RATE = 0.00001\n",
        "_MAX_CHECKPOINTS = 1\n",
        "_SAVE_CHECKPOINT_STEPS = 999\n",
        "\n",
        "\n",
        "def transformed_names(keys):\n",
        "  return [transformed_name(key) for key in keys]\n",
        "\n",
        "\n",
        "def transformed_name(key):\n",
        "  return '{}_xf'.format(key)\n",
        "\n",
        "\n",
        "def _gzip_reader_fn(filenames):\n",
        "  \"\"\"Returns a record reader that can read gzip'ed files.\n",
        "\n",
        "  Args:\n",
        "    filenames: A tf.string tensor or tf.data.Dataset containing one or more\n",
        "      filenames.\n",
        "\n",
        "  Returns:\n",
        "    A TFRecordDataset that reads gzip-compressed TFRecord files.\n",
        "  \"\"\"\n",
        "  return tf.data.TFRecordDataset(filenames, compression_type='GZIP')\n",
        "\n",
        "\n",
        "# Tf.Transform considers these features as \"raw\".\n",
        "def _get_raw_feature_spec(schema):\n",
        "  \"\"\"Generates a feature spec from a Schema proto.\n",
        "\n",
        "  Args:\n",
        "    schema: A Schema proto.\n",
        "\n",
        "  Returns:\n",
        "    A feature spec defined as a dict whose keys are feature names and values are\n",
        "      instances of FixedLenFeature, VarLenFeature or SparseFeature.\n",
        "  \"\"\"\n",
        "  return schema_utils.schema_as_feature_spec(schema).feature_spec\n",
        "\n",
        "\n",
        "def _example_serving_receiver_fn(tf_transform_output, schema):\n",
        "  \"\"\"Builds the serving inputs.\n",
        "\n",
        "  Args:\n",
        "    tf_transform_output: A TFTransformOutput.\n",
        "    schema: the schema of the input data.\n",
        "\n",
        "  Returns:\n",
        "    TensorFlow graph which parses examples, applying tf-transform to them.\n",
        "  \"\"\"\n",
        "  raw_feature_spec = _get_raw_feature_spec(schema)\n",
        "  raw_feature_spec.pop(LABEL_KEY)\n",
        "\n",
        "  raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(\n",
        "      raw_feature_spec)\n",
        "  serving_input_receiver = raw_input_fn()\n",
        "\n",
        "  transformed_features = tf_transform_output.transform_raw_features(\n",
        "      serving_input_receiver.features)\n",
        "  transformed_features.pop(transformed_name(LABEL_KEY))\n",
        "  return tf.estimator.export.ServingInputReceiver(\n",
        "      transformed_features, serving_input_receiver.receiver_tensors)\n",
        "\n",
        "\n",
        "def _eval_input_receiver_fn(tf_transform_output, schema):\n",
        "  \"\"\"Builds everything needed for the tf-model-analysis to run the model.\n",
        "\n",
        "  Args:\n",
        "    tf_transform_output: A TFTransformOutput.\n",
        "    schema: the schema of the input data.\n",
        "\n",
        "  Returns:\n",
        "    EvalInputReceiver function, which contains:\n",
        "      - TensorFlow graph which parses raw untransformed features, applies the\n",
        "          tf-transform preprocessing operators.\n",
        "      - Set of raw, untransformed features.\n",
        "      - Label against which predictions will be compared.\n",
        "  \"\"\"\n",
        "  # Notice that the inputs are raw features, not transformed features here.\n",
        "  raw_feature_spec = _get_raw_feature_spec(schema)\n",
        "\n",
        "  serialized_tf_example = tf.compat.v1.placeholder(\n",
        "      dtype=tf.string, shape=[None], name='input_example_tensor')\n",
        "\n",
        "  # Add a parse_example operator to the tensorflow graph, which will parse\n",
        "  # raw, untransformed, tf examples.\n",
        "  features = tf.io.parse_example(\n",
        "      serialized=serialized_tf_example, features=raw_feature_spec)\n",
        "\n",
        "  transformed_features = tf_transform_output.transform_raw_features(features)\n",
        "  labels = transformed_features.pop(transformed_name(LABEL_KEY))\n",
        "\n",
        "  receiver_tensors = {'examples': serialized_tf_example}\n",
        "\n",
        "  return tfma.export.EvalInputReceiver(\n",
        "      features=transformed_features,\n",
        "      receiver_tensors=receiver_tensors,\n",
        "      labels=labels)\n",
        "\n",
        "\n",
        "def _input_fn(filenames, tf_transform_output, batch_size=200):\n",
        "  \"\"\"Generates features and labels for training or evaluation.\n",
        "\n",
        "  Args:\n",
        "    filenames: List of paths or patterns of input TFRecord files.\n",
        "    tf_transform_output: A TFTransformOutput.\n",
        "    batch_size: First dimension size of the Tensors returned by input_fn.\n",
        "\n",
        "  Returns:\n",
        "    A (features, indices) tuple where features is a dictionary of\n",
        "      Tensors, and indices is a single Tensor of label indices.\n",
        "  \"\"\"\n",
        "  transformed_feature_spec = (\n",
        "      tf_transform_output.transformed_feature_spec().copy())\n",
        "\n",
        "  dataset = tf.compat.v1.data.experimental.make_batched_features_dataset(\n",
        "      filenames,\n",
        "      batch_size,\n",
        "      transformed_feature_spec,\n",
        "      shuffle=False,\n",
        "      reader=_gzip_reader_fn)\n",
        "\n",
        "  transformed_features = dataset.make_one_shot_iterator().get_next()\n",
        "\n",
        "  # We pop the label because we do not want to use it as a feature while we're\n",
        "  # training.\n",
        "  return transformed_features, transformed_features.pop(\n",
        "      transformed_name(LABEL_KEY))\n",
        "\n",
        "\n",
        "# TFX will call this function.\n",
        "def trainer_fn(hparams, schema):\n",
        "  \"\"\"Build the estimator using the high level API.\n",
        "\n",
        "  Args:\n",
        "    hparams: Hyperparameters used to train the model as name/value pairs.\n",
        "    schema: Holds the schema of the training examples.\n",
        "\n",
        "  Returns:\n",
        "    A dict of the following:\n",
        "      - estimator: The estimator that will be used for training and eval.\n",
        "      - train_spec: Spec for training.\n",
        "      - eval_spec: Spec for eval.\n",
        "      - eval_input_receiver_fn: Input function for eval.\n",
        "  \"\"\"\n",
        "  tf_transform_output = tft.TFTransformOutput(hparams.transform_output)\n",
        "\n",
        "  train_input_fn = lambda: _input_fn(\n",
        "      hparams.train_files,\n",
        "      tf_transform_output,\n",
        "      batch_size=_BATCH_SIZE)\n",
        "\n",
        "  eval_input_fn = lambda: _input_fn(\n",
        "      hparams.eval_files,\n",
        "      tf_transform_output,\n",
        "      batch_size=_BATCH_SIZE)\n",
        "\n",
        "  train_spec = tf.estimator.TrainSpec(\n",
        "      train_input_fn,\n",
        "      max_steps=hparams.train_steps)\n",
        "\n",
        "  serving_receiver_fn = lambda: _example_serving_receiver_fn(\n",
        "      tf_transform_output, schema)\n",
        "\n",
        "  exporter = tf.estimator.FinalExporter('compas', serving_receiver_fn)\n",
        "  eval_spec = tf.estimator.EvalSpec(\n",
        "      eval_input_fn,\n",
        "      steps=hparams.eval_steps,\n",
        "      exporters=[exporter],\n",
        "      name='compas-eval')\n",
        "\n",
        "  run_config = tf.estimator.RunConfig(\n",
        "      save_checkpoints_steps=_SAVE_CHECKPOINT_STEPS,\n",
        "      keep_checkpoint_max=_MAX_CHECKPOINTS)\n",
        "\n",
        "  run_config = run_config.replace(model_dir=hparams.serving_model_dir)\n",
        "\n",
        "  estimator = tf.keras.estimator.model_to_estimator(\n",
        "      keras_model=_keras_model_builder(), config=run_config)\n",
        "\n",
        "  # Create an input receiver for TFMA processing.\n",
        "  receiver_fn = lambda: _eval_input_receiver_fn(tf_transform_output, schema)\n",
        "\n",
        "  return {\n",
        "      'estimator': estimator,\n",
        "      'train_spec': train_spec,\n",
        "      'eval_spec': eval_spec,\n",
        "      'eval_input_receiver_fn': receiver_fn\n",
        "  }\n",
        "\n",
        "\n",
        "def _keras_model_builder():\n",
        "  \"\"\"Build a keras model for COMPAS dataset classification.\n",
        "  \n",
        "  Returns:\n",
        "    A compiled Keras model.\n",
        "  \"\"\"\n",
        "  feature_columns = []\n",
        "  feature_layer_inputs = {}\n",
        "\n",
        "  for key in transformed_names(INT_FEATURE_KEYS):\n",
        "    feature_columns.append(tf.feature_column.numeric_column(key))\n",
        "    feature_layer_inputs[key] = tf.keras.Input(shape=(1,), name=key)\n",
        "\n",
        "  for key, num_buckets in zip(transformed_names(CATEGORICAL_FEATURE_KEYS),\n",
        "                              MAX_CATEGORICAL_FEATURE_VALUES):\n",
        "    feature_columns.append(\n",
        "        tf.feature_column.indicator_column(\n",
        "            tf.feature_column.categorical_column_with_identity(\n",
        "                key, num_buckets=num_buckets)))\n",
        "    feature_layer_inputs[key] = tf.keras.Input(\n",
        "        shape=(1,), name=key, dtype=tf.dtypes.int32)\n",
        "\n",
        "  feature_columns_input = tf.keras.layers.DenseFeatures(feature_columns)\n",
        "  feature_layer_outputs = feature_columns_input(feature_layer_inputs)\n",
        "\n",
        "  dense_layers = tf.keras.layers.Dense(\n",
        "      20, activation='relu', name='dense_1')(feature_layer_outputs)\n",
        "  dense_layers = tf.keras.layers.Dense(\n",
        "      10, activation='relu', name='dense_2')(dense_layers)\n",
        "  output = tf.keras.layers.Dense(\n",
        "      1, name='predictions')(dense_layers)\n",
        "\n",
        "  model = tf.keras.Model(\n",
        "      inputs=[v for v in feature_layer_inputs.values()], outputs=output)\n",
        "\n",
        "  # To weight the model we define a custom loss layer in Keras.\n",
        "  # The original loss is commented out below and replaced with the new one.\n",
        "  model.compile(\n",
        "      # loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n",
        "      loss=LogisticEndpoint(),\n",
        "      optimizer=tf.optimizers.Adam(learning_rate=_LEARNING_RATE))\n",
        "\n",
        "  return model\n",
        "\n",
        "\n",
        "class LogisticEndpoint(tf.keras.layers.Layer):\n",
        "  \"\"\"Endpoint layer that applies per-example weights to the loss.\"\"\"\n",
        "\n",
        "  def __init__(self, name=None):\n",
        "    super(LogisticEndpoint, self).__init__(name=name)\n",
        "    self.loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True)\n",
        "\n",
        "  def __call__(self, y_true, y_pred, sample_weight=None):\n",
        "    # Append the weight tensor, if one was provided, so that call() can\n",
        "    # forward it to the weighted loss.\n",
        "    inputs = [y_true, y_pred]\n",
        "    if sample_weight is not None:\n",
        "      inputs.append(sample_weight)\n",
        "    return super(LogisticEndpoint, self).__call__(inputs)\n",
        "\n",
        "  def call(self, inputs):\n",
        "    y_true, y_pred = inputs[0], inputs[1]\n",
        "    if len(inputs) == 3:\n",
        "      sample_weight = inputs[2]\n",
        "    else:\n",
        "      sample_weight = None\n",
        "    loss = self.loss_fn(y_true, y_pred, sample_weight)\n",
        "    self.add_loss(loss)\n",
        "    reduce_loss = tf.math.divide_no_nan(\n",
        "        tf.math.reduce_sum(tf.nn.softmax(y_pred)), _BATCH_SIZE)\n",
        "    return reduce_loss\n"
      ]
    },
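    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The `LogisticEndpoint` layer above routes a per-example weight into `BinaryCrossentropy`. As a minimal NumPy sketch of what weighted binary cross-entropy computes (made-up labels, logits, and weights -- not the pipeline's actual data), upweighting an example raises its contribution to the loss:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import numpy as np\n",
        "\n",
        "def weighted_bce(y_true, logits, weights):\n",
        "  \"\"\"Weighted binary cross-entropy from logits, averaged over the batch.\"\"\"\n",
        "  probs = 1.0 / (1.0 + np.exp(-logits))  # sigmoid\n",
        "  per_example = -(y_true * np.log(probs) + (1.0 - y_true) * np.log(1.0 - probs))\n",
        "  # Keras' default reduction divides the weighted sum by the batch size.\n",
        "  return np.mean(weights * per_example)\n",
        "\n",
        "y_true = np.array([1.0, 0.0, 1.0, 0.0])\n",
        "logits = np.array([2.0, -1.0, 0.5, -0.5])\n",
        "\n",
        "# Uniform weights vs. upweighting the third example 3x.\n",
        "uniform = weighted_bce(y_true, logits, np.ones(4))\n",
        "upweighted = weighted_bce(y_true, logits, np.array([1.0, 1.0, 3.0, 1.0]))\n",
        "print(uniform, upweighted)  # the upweighted loss is larger"
      ]
    },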
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "thSmshFN94pt"
      },
      "source": [
        "## Retrain the TFX model with the weighted model\n",
        "\n",
        "In this next part we will use the weighted Transform Component to rerun the same Trainer model as before to see the improvement in fairness after the weighting is applied."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Bb0Rl9UOFgoM"
      },
      "outputs": [],
      "source": [
        "trainer_weighted = Trainer(\n",
        "    module_file=_trainer_module_file,\n",
        "    transformed_examples=transform.outputs['transformed_examples'],\n",
        "    schema=infer_schema.outputs['schema'],\n",
        "    transform_graph=transform.outputs['transform_graph'],\n",
        "    train_args=trainer_pb2.TrainArgs(num_steps=10000),\n",
        "    eval_args=trainer_pb2.EvalArgs(num_steps=5000)\n",
        ")\n",
        "context.run(trainer_weighted)\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "n7xH61MCPwUO"
      },
      "outputs": [],
      "source": [
        "# Again, we will run TensorFlow Model Analysis and load Fairness Indicators\n",
        "# to examine the performance change in our weighted model.\n",
        "model_analyzer_weighted = Evaluator(\n",
        "    examples=example_gen.outputs['examples'],\n",
        "    model=trainer_weighted.outputs['model'],\n",
        "\n",
        "    eval_config = text_format.Parse(\"\"\"\n",
        "      model_specs {\n",
        "        label_key: 'is_recid'\n",
        "      }\n",
        "      metrics_specs {\n",
        "        metrics {class_name: 'BinaryAccuracy'}\n",
        "        metrics {class_name: 'AUC'}\n",
        "        metrics {\n",
        "          class_name: 'FairnessIndicators'\n",
        "          config: '{\"thresholds\": [0.25, 0.5, 0.75]}'\n",
        "        }\n",
        "      }\n",
        "      slicing_specs {\n",
        "        feature_keys: 'race'\n",
        "      }\n",
        "    \"\"\", tfma.EvalConfig())\n",
        ")\n",
        "context.run(model_analyzer_weighted)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "206gQS1r-1FX"
      },
      "outputs": [],
      "source": [
        "evaluation_uri_weighted = model_analyzer_weighted.outputs['evaluation'].get()[0].uri\n",
        "eval_result_weighted = tfma.load_eval_result(evaluation_uri_weighted)\n",
        "\n",
        "multi_eval_results = {\n",
        "    'Unweighted Model': eval_result,\n",
        "    'Weighted Model': eval_result_weighted\n",
        "}\n",
        "tfma.addons.fairness.view.widget_view.render_fairness_indicator(\n",
        "    multi_eval_results=multi_eval_results)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "bwoz69Wzvt8q"
      },
      "source": [
        "After retraining with the weighted model, we can once again look at the fairness metrics to gauge any improvements. This time we will use the model comparison feature within Fairness Indicators to see the difference between the weighted and unweighted models. Although we’re still seeing some fairness concerns with the weighted model, the discrepancy is far less pronounced.\n",
        "\n",
        "The drawback, however, is that our AUC and binary accuracy have also dropped after weighting the model.\n",
        "\n",
        "\n",
        "* **False Positive Rate @ 0.75**\n",
        "  * **African-American:** ~1%\n",
        "     * AUC: 0.47\n",
        "     * Binary Accuracy: 0.59\n",
        "  * **Caucasian:** ~0%\n",
        "     * AUC: 0.47\n",
        "     * Binary Accuracy: 0.58\n"
      ]
    },
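    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The sliced rates above come from Fairness Indicators. As a small NumPy sketch (with made-up labels, scores, and slice keys -- not the pipeline's actual data) of how a false positive rate at a threshold is computed per slice:"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import numpy as np\n",
        "\n",
        "def false_positive_rate(labels, scores, threshold):\n",
        "  \"\"\"FPR = FP / (FP + TN) among the truly negative examples.\"\"\"\n",
        "  preds = scores >= threshold\n",
        "  negatives = labels == 0\n",
        "  fp = np.sum(preds & negatives)\n",
        "  tn = np.sum(~preds & negatives)\n",
        "  return fp / (fp + tn) if (fp + tn) else 0.0\n",
        "\n",
        "# Hypothetical scores, labels, and slice keys.\n",
        "labels = np.array([0, 0, 1, 0, 1, 0])\n",
        "scores = np.array([0.8, 0.3, 0.9, 0.76, 0.4, 0.2])\n",
        "slices = np.array(['a', 'b', 'a', 'a', 'b', 'b'])\n",
        "\n",
        "for s in np.unique(slices):\n",
        "  mask = slices == s\n",
        "  print(s, false_positive_rate(labels[mask], scores[mask], threshold=0.75))"
      ]
    },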
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "oEhq3ne7gazf"
      },
      "source": [
        "## Examine the data of the second run\n",
        "\n",
        "Finally, we can visualize the data with TensorFlow Data Validation, overlay the data changes between the two runs, and add a note to ML Metadata indicating that this model improved on the fairness concerns."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "WM-uqqfOggcw"
      },
      "outputs": [],
      "source": [
        "# Pull the URIs for the statistics of the two models that we ran in this\n",
        "# case study.\n",
        "first_model_uri = store.get_artifacts_by_type('ExampleStatistics')[-1].uri\n",
        "second_model_uri = store.get_artifacts_by_type('ExampleStatistics')[0].uri\n",
        "\n",
        "# Load the stats for both models.\n",
        "first_model_stats = tfdv.load_statistics(os.path.join(\n",
        "    first_model_uri, 'eval/stats_tfrecord/'))\n",
        "second_model_stats = tfdv.load_statistics(os.path.join(\n",
        "    second_model_uri, 'eval/stats_tfrecord/'))\n",
        "\n",
        "# Visualize the statistics between the two models.\n",
        "tfdv.visualize_statistics(\n",
        "    lhs_statistics=second_model_stats,\n",
        "    lhs_name='Sampled Model',\n",
        "    rhs_statistics=first_model_stats,\n",
        "    rhs_name='COMPAS Original')"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "YOMbqITkhNkO"
      },
      "outputs": [],
      "source": [
        "# Add a new note within ML Metadata describing the weighted model.\n",
        "_NOTE_TO_ADD = 'Weighted model between race and is_recid.'\n",
        "\n",
        "# Pull the artifact for the weighted trained model.\n",
        "second_trained_model = store.get_artifacts_by_type('Model')[-1]\n",
        "\n",
        "# Add the note to ML Metadata.\n",
        "second_trained_model.custom_properties['note'].string_value = _NOTE_TO_ADD\n",
        "store.put_artifacts([second_trained_model])\n",
        "\n",
        "display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), -1))\n",
        "display(_mlmd_model_to_dataframe(store.get_artifacts_by_type('Model'), 0))"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "f0fGWt-OIzEb"
      },
      "source": [
        "## Conclusion\n",
        "\n",
        "Within this case study we developed a Keras classifier within a TFX pipeline on the COMPAS dataset to examine fairness concerns within the data. After initially developing the TFX pipeline, the fairness concerns were not apparent until we examined individual slices of our model's performance by a sensitive feature -- in our case, race. After identifying the issues, we used TensorFlow Data Validation to track down the source of the fairness issue, mitigated it via model weighting, and tracked and annotated the changes with ML Metadata. Although we were not able to fully fix all the fairness concerns within the dataset, adding a note for future developers allows others to understand the issues we faced while developing this model.\n",
        "\n",
        "Finally, it is important to note that this case study did not fix the fairness issues that are present in the COMPAS dataset; improving the fairness of the model also reduced its AUC and binary accuracy. What we were able to do, however, was build a model that surfaced the fairness concerns and track down where the problems could be coming from by tracing our model's lineage, while annotating any concerns within the metadata.\n",
        "\n",
        "For more information on the issues that predicting pre-trial detention can have, see the FAT* 2018 talk on [\"Understanding the Context and Consequences of Pre-trial Detention\"](https://www.youtube.com/watch?v=hEThGT-_5ho\u0026feature=youtu.be\u0026t=1)"
      ]
    }
  ],
  "metadata": {
    "colab": {
      "collapsed_sections": [],
      "name": "Fairness Indicators Lineage Case Study",
      "provenance": [],
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
