{
  "cells": [
    {
      "metadata": {
        "id": "73Og35sR_SRL"
      },
      "cell_type": "markdown",
      "source": [
        "This Colab provides an example of setting up a simulated dataset. Please check [Meridian_RF_Demo.ipynb](https://github.com/google/meridian/blob/main/demo/Meridian_RF_Demo.ipynb) to see how to run Google's Meridian MMM using the simulated dataset.\n",
        "\n",
        "Throughout this Colab, we use the N(mean, stdev) parameterization for the normal distribution."
      ]
    },
    {
      "metadata": {
        "id": "9exQwlA2U472"
      },
      "cell_type": "markdown",
      "source": [
        "# Install and import packages\n",
        "\n",
        "1\\. Make sure you are using one of the available GPU Colab runtimes, which is **required** to run Meridian. You can change your notebook's runtime under `Runtime > Change runtime type` in the menu. All users can use the T4 GPU runtime free of charge, which is sufficient to run this demo. Users who have purchased one of Colab's paid plans have access to premium GPUs (such as the NVIDIA V100, A100, or L4)."
      ]
    },
    {
      "metadata": {
        "id": "y9Fucb67U2XZ"
      },
      "cell_type": "code",
      "source": [
        "# Install meridian: from PyPI @ latest release\n",
        "!pip install --upgrade google-meridian[colab,and-cuda]\n",
        "\n",
        "# Install meridian: from PyPI @ specific version\n",
        "# !pip install google-meridian[colab,and-cuda]==1.0.3\n",
        "\n",
        "# Install meridian: from GitHub @HEAD\n",
        "# !pip install --upgrade \"google-meridian[colab,and-cuda] @ git+https://github.com/google/meridian.git\""
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "i1GlXY2T_DOu"
      },
      "cell_type": "code",
      "source": [
        "import datetime\n",
        "\n",
        "import matplotlib.pyplot as plt\n",
        "import numpy as np\n",
        "import tensorflow as tf\n",
        "import tensorflow_probability as tfp\n",
        "import xarray as xr\n",
        "import pickle\n",
        "\n",
        "from meridian.model import model\n",
        "from meridian.model import prior_distribution\n",
        "from meridian.model import transformers\n",
        "from meridian.model import knots\n",
        "\n",
        "\n",
        "# Check available RAM and whether a GPU is available\n",
        "from psutil import virtual_memory\n",
        "ram_gb = virtual_memory().total / 1e9\n",
        "print('Your runtime has {:.1f} gigabytes of available RAM\\n'.format(ram_gb))\n",
        "print(\"Num GPUs Available: \", len(tf.config.experimental.list_physical_devices('GPU')))\n",
        "print(\"Num CPUs Available: \", len(tf.config.experimental.list_physical_devices('CPU')))"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "mEY2GI1kBLPo"
      },
      "cell_type": "markdown",
      "source": [
        "# Simulation setup"
      ]
    },
    {
      "metadata": {
        "id": "ThoVmfiiAz4O"
      },
      "cell_type": "code",
      "source": [
        "# @title Specify simulation parameters {form-width:'400px'}\n",
        "\n",
        "n_imp_channels = 3  # @param{type:'number'}\n",
        "n_rf_channels = 1  # @param{type:'number'}\n",
        "\n",
        "assert n_imp_channels > 0 and n_rf_channels > 0, 'Number of channels must be positive'\n",
        "\n",
        "n_total_paid_channels = n_imp_channels + n_rf_channels\n",
        "\n",
        "n_controls = 2  # @param{type:'number'}\n",
        "n_times = 156  # @param{type:'number'}\n",
        "n_geos = 20  # @param{type:'number'}\n",
        "\n",
        "seed_num = 1320  # @param{type:'number'}\n",
        "tf.random.set_seed(seed_num)"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "YjUU2AZXYQqz"
      },
      "cell_type": "markdown",
      "source": [
        "Explanation:\n",
        "* `n_imp_channels`: number of impression-based (paid) media channels\n",
        "* `n_rf_channels`: number of reach-and-frequency (RF) based (paid) media channels\n",
        "* `n_controls`: number of control variables\n",
        "* `n_times`: number of time periods\n",
        "* `n_geos`: number of geos\n",
        "* `seed_num`: random seed"
      ]
    },
    {
      "metadata": {
        "id": "fNYcpb2CBbU3"
      },
      "cell_type": "markdown",
      "source": [
        "## Population"
      ]
    },
    {
      "metadata": {
        "id": "oSKzcytlCAUw"
      },
      "cell_type": "markdown",
      "source": [
        "Simulate population $p_g$ for each DMA $g$:\n",
        "\n",
        "$$p_g \\overset{\\mathrm{iid}}{\\sim} \\mathrm{Uniform}(10^5,10^6)$$"
      ]
    },
    {
      "metadata": {
        "id": "UOE3dzEjBFY7"
      },
      "cell_type": "code",
      "source": [
        "# @title Simulate population for each DMA\n",
        "\n",
        "p_g = tfp.distributions.Uniform(1e5, 1e6).sample(n_geos)\n",
        "# For broadcasting\n",
        "p_gtm = tf.reshape(p_g, (-1, 1, 1))"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "PH_nMyqzCKsh"
      },
      "cell_type": "markdown",
      "source": [
        "The control variables $\\ddot{z}_{g,t,c}$ are simulated first to account for their potential influence as confounders of both media spend and sales. For each geo $g$, time period $t$, and control variable $c$,\n",
        "\n",
        "$$\\ddot{z}_{g,t,c} \\stackrel{\\text{iid}}{\\sim} N(0,3) $$\n"
      ]
    },
    {
      "metadata": {
        "id": "tRYjABIuB9yJ"
      },
      "cell_type": "code",
      "source": [
        "# @title Simulate control variables\n",
        "\n",
        "control_gtc = tfp.distributions.Normal(0, 3).sample(\n",
        "    [n_geos, n_times, n_controls]\n",
        ")"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "UJ1C7uLSLrfU"
      },
      "cell_type": "markdown",
      "source": [
        "To ensure that the control variables and media impressions are simulated at the same scale, we standardize the control variables. We use the notation $z_{g,t,c}$ to represent the transformed control variables.\n",
        "\n",
        "In practice, when users feed data into Meridian, Meridian automatically standardizes the control variables before posterior sampling. Details about the data transformations are on the [documentation page](https://developers.google.com/meridian/docs/basics/input-data)."
      ]
    },
    {
      "metadata": {
        "id": "YiS2qfMuPFRu"
      },
      "cell_type": "code",
      "source": [
        "# @title Standardize control variables\n",
        "\n",
        "control_transformer = transformers.CenteringAndScalingTransformer(\n",
        "    tensor=control_gtc, population=p_g, population_scaling_id=None\n",
        ")\n",
        "transformed_control_gtc = control_transformer.forward(control_gtc)"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "P_XkND9WQJsn"
      },
      "cell_type": "markdown",
      "source": [
        "By default, Meridian does not population-scale control variables, and we do not do so here (as indicated by the argument `population_scaling_id=None`). However, users should determine whether certain control variables need to be population-scaled; see this [documentation page](https://developers.google.com/meridian/docs/advanced-modeling/control-variables#population-scaling) for more details."
      ]
    },
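    {
      "metadata": {
        "id": "ctrl_scaling_sketch"
      },
      "cell_type": "markdown",
      "source": [
        "To make the transformation concrete, here is a minimal NumPy sketch of centering and scaling (an illustration with hypothetical data, not Meridian's implementation): each control is centered by its mean and scaled by its standard deviation, pooled over all geos and times, with no population scaling (matching `population_scaling_id=None` above).\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "rng = np.random.default_rng(0)\n",
        "x = rng.normal(0.0, 3.0, size=(20, 156, 2))  # (geo, time, control)\n",
        "\n",
        "# Per-control mean and stdev, pooled over geos and times.\n",
        "mean_c = x.mean(axis=(0, 1), keepdims=True)\n",
        "stdev_c = x.std(axis=(0, 1), keepdims=True)\n",
        "\n",
        "z = (x - mean_c) / stdev_c  # each control now has mean 0 and stdev 1\n",
        "```"
      ]
    },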
    {
      "metadata": {
        "id": "1_FSJMybCqkU"
      },
      "cell_type": "markdown",
      "source": [
        "## Media Channels"
      ]
    },
    {
      "metadata": {
        "id": "VuIe3a7WDR03"
      },
      "cell_type": "markdown",
      "source": [
        "We then simulate the impressions per capita $x^{\\dagger}_{g,t,m}$ for each channel $m$ (including both impression-based and RF-based channels):\n",
        "\n",
        "$$x^{\\dagger}_{g,t,m} = \\max \\left\\{ u_{m} + u_{t,m} + u_{g,t,m} \\,\\, , \\,\\,0\\right\\} $$\n",
        "\n",
        "* $u_m \\overset{\\mathrm{iid}}{\\sim} N(1, 0.5)$,\n",
        "* $u_{t,m} \\overset{\\mathrm{iid}}{\\sim} N(0.8, 0.5)$\n",
        "* $u_{g,t,m}\\overset{\\mathrm{iid}}{\\sim} N(0, 0.5)$\n",
        "\n",
        "The impressions per capita $x^{\\dagger}_{g,t,m}$ are then multiplied by the simulated population $p_g$ to obtain the impressions per channel $x_{g,t,m}$. We will simulate reach and frequency for the RF channels later."
      ]
    },
    {
      "metadata": {
        "id": "frv7IlbiUQ_R"
      },
      "cell_type": "code",
      "source": [
        "# @title Simulate impression per channel\n",
        "\n",
        "u_m = tfp.distributions.Normal(1, 0.5).sample(n_total_paid_channels)\n",
        "u_tm = tfp.distributions.Normal(0.8, 0.5).sample(\n",
        "    [n_times, n_total_paid_channels]\n",
        ")\n",
        "u_gtm = tfp.distributions.Normal(0, 0.5).sample(\n",
        "    (n_geos, n_times, n_total_paid_channels)\n",
        ")\n",
        "\n",
        "# impression per capita per channel\n",
        "ipc_gtm = np.maximum(u_m + u_tm + u_gtm, 0.0)\n",
        "\n",
        "# impression per channel\n",
        "impression_gtm = tf.round(ipc_gtm * p_gtm)\n",
        "\n",
        "# Check the percentage of sparsity\n",
        "ipc_sparsity = np.sum(ipc_gtm == 0.0, axis=(0, 1)) / (n_geos * n_times)\n",
        "ipc_sparsity_percentage = [f'{s * 100:.2f}%' for s in ipc_sparsity]\n",
        "print(\n",
        "    'percentage of sparsity of impression_per_capita for each channel:'\n",
        "    f' {ipc_sparsity_percentage}'\n",
        ")"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "R_fOOYeCVOBP"
      },
      "cell_type": "markdown",
      "source": [
        "We now simulate reach $r_{g,t,m}$ and frequency $f_{g,t,m}$ based on the previously simulated impression $x_{g,t,m}$. We adopt the following assumptions in our simulation:\n",
        "\n",
        "*   Let $\\theta$ be the expected number of impressions per person. An estimate of $\\theta$ is the impressions-per-capita (ipc).\n",
        "*   For each person, assume the number of impressions received follows a Poisson distribution with mean $\\theta$. Then the probability that a person is reached at least once is\n",
        "$$\n",
        "\\mathbf{P}(\\text{A person is reached at least once}) = 1 - \\mathbf{P}(\\text{never reached}) = 1 - e^{-\\theta}.\n",
        "$$\n",
        "Replacing $\\theta$ with ipc yields an estimate of this probability, which can be used in the simulation.\n",
        "\n",
        "* Assume individuals are independent of one another. Then the number of people reached is $\\mathrm{Binomial}(n_{\\rm pop}, 1 - e^{-\\theta})$, with mean $n_{\\rm pop}(1 - e^{-\\theta})$ and variance $n_{\\rm pop}e^{-\\theta}(1 - e^{-\\theta})$. To simulate the total number of people reached, we draw from the normal approximation (recall the N(mean, stdev) parameterization)\n",
        "$$\n",
        "\\text{total_reach} \\sim N \\left(n_{\\rm pop}(1 - e^{-\\theta}), \\sqrt{n_{\\rm pop}e^{-\\theta}(1 - e^{-\\theta})} \\right).\n",
        "$$\n",
        "The reach-per-capita (rpc) would be\n",
        "$$\n",
        "\\text{rpc} = \\frac{\\text{total_reach}}{n_{\\rm pop}}.\n",
        "$$\n",
        "\n",
        "* The frequency would then be\n",
        "$$\n",
        "\\text{frequency} = \\frac{\\text{impressions per capita (ipc)}}{\\text{reach per capita (rpc)}}.\n",
        "$$"
      ]
    },
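    {
      "metadata": {
        "id": "reach_sanity_check"
      },
      "cell_type": "markdown",
      "source": [
        "As a quick sanity check of the mean and standard deviation above, we can simulate a population directly (the numbers below are hypothetical and independent of the rest of this notebook): give each person a Poisson($\\theta$) impression count, count the people reached at least once, and compare the empirical mean and standard deviation of the total reach against the formulas.\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "\n",
        "rng = np.random.default_rng(0)\n",
        "theta = 1.5      # hypothetical expected impressions per person\n",
        "n_pop = 10_000   # hypothetical population size\n",
        "n_sims = 1_000\n",
        "\n",
        "# Each person receives Poisson(theta) impressions; reach = people with >= 1.\n",
        "impressions = rng.poisson(theta, size=(n_sims, n_pop))\n",
        "total_reach = (impressions > 0).sum(axis=1)\n",
        "\n",
        "p_reached = 1 - np.exp(-theta)\n",
        "mean_formula = n_pop * p_reached\n",
        "stdev_formula = np.sqrt(n_pop * np.exp(-theta) * p_reached)\n",
        "\n",
        "print(total_reach.mean(), mean_formula)  # empirical vs. formula mean\n",
        "print(total_reach.std(), stdev_formula)  # empirical vs. formula stdev\n",
        "```"
      ]
    },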
    {
      "metadata": {
        "id": "vxwwLqgsVKSV"
      },
      "cell_type": "code",
      "source": [
        "# @title Simulate reach and frequency\n",
        "\n",
        "# impressions per capita for R&F channels in [geos, times, channels] dimensions.\n",
        "ipc_gtm_rf = ipc_gtm[..., -n_rf_channels:]\n",
        "impression_gtm_rf = impression_gtm[..., -n_rf_channels:]\n",
        "\n",
        "# reach\n",
        "reach_gtm_mean = p_gtm * (1 - tf.math.exp(-ipc_gtm_rf))\n",
        "reach_gtm_scale = np.sqrt(\n",
        "    p_gtm * (tf.math.exp(-ipc_gtm_rf) - tf.math.exp(-2 * ipc_gtm_rf))\n",
        ")\n",
        "reach_gtm = tf.round(\n",
        "    tfp.distributions.Normal(reach_gtm_mean, reach_gtm_scale).sample()\n",
        ")\n",
        "\n",
        "# frequency\n",
        "freq_gtm = np.where(impression_gtm_rf == 0, 0, impression_gtm_rf / reach_gtm)"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "-3Ef17kORilu"
      },
      "cell_type": "markdown",
      "source": [
        "Again, to ensure that the control variables and media variables are simulated at the same scale, we transform the simulated reach and impression data.\n",
        "\n",
        "In practice, when users feed data into Meridian, Meridian automatically transforms the media variables (including population scaling) before posterior sampling. Details about the data transformations are on the [documentation page](https://developers.google.com/meridian/docs/basics/input-data)."
      ]
    },
    {
      "metadata": {
        "id": "xnGmYRHZUUqa"
      },
      "cell_type": "code",
      "source": [
        "# @title Transform media variables\n",
        "\n",
        "# Transform impression\n",
        "impression_gtm_imp_only = impression_gtm[..., :-n_rf_channels]\n",
        "impression_transformer = transformers.MediaTransformer(\n",
        "    media=impression_gtm_imp_only, population=p_g\n",
        ")\n",
        "transformed_ipc_gtm = impression_transformer.forward(impression_gtm_imp_only)\n",
        "\n",
        "# Transform reach\n",
        "reach_transformer = transformers.MediaTransformer(\n",
        "    media=reach_gtm, population=p_g\n",
        ")\n",
        "transformed_rpc_gtm = reach_transformer.forward(reach_gtm)"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "WSOlVtStTGZp"
      },
      "cell_type": "markdown",
      "source": [
        "## Time-varying intercepts ($\\mu_t$) and geo effects ($\\tau_g$)"
      ]
    },
    {
      "metadata": {
        "id": "vU7LxlC4Tk4y"
      },
      "cell_type": "markdown",
      "source": [
        "$\\tau_g \\stackrel{iid}{\\sim} N(15, 1.2)$\n",
        "\n",
        "$\\mu_t \\stackrel{iid}{\\sim} N(0, 2)$\n",
        "\n",
        "Here, $\\mu_t$ is simulated using [full knots](https://developers.google.com/meridian/docs/basics/model-spec#_mu_t_parameters), as specified by `n_knots_simul = n_times`. (For more information on how the knot arguments work, see this [documentation page](https://developers.google.com/meridian/docs/advanced-modeling/setting-knots#how-knots-argument-works).) To simulate $\\mu_t$ with fewer knots, set `n_knots_simul` to a positive integer smaller than `n_times`. When simulating $\\mu_t$ with full knots, the code can also be simplified to\n",
        "\n",
        "```python\n",
        "mu_t = tfp.distributions.Normal(0, 2.0).sample(n_times)\n",
        "```"
      ]
    },
    {
      "metadata": {
        "id": "3984qkecTgAu"
      },
      "cell_type": "code",
      "source": [
        "# @title Simulate mu_t and tau_g\n",
        "\n",
        "tau_g = tfp.distributions.Normal(15.0, 1.2).sample(n_geos)\n",
        "\n",
        "n_knots_simul = n_times\n",
        "knots_k = tfp.distributions.Normal(0, 2.0).sample(n_knots_simul)\n",
        "# Initialize Meridian's knots object (we need the weights)\n",
        "knots_object = knots.get_knot_info(n_times, n_knots_simul, False)\n",
        "# From simulated knots, get mu_t\n",
        "mu_t = tfp.distributions.Deterministic(\n",
        "    tf.einsum(\n",
        "        '...k,kt->...t',\n",
        "        knots_k,\n",
        "        tf.convert_to_tensor(knots_object.weights),\n",
        "    )\n",
        ").sample()\n",
        "\n",
        "plt.plot(mu_t)\n",
        "plt.xlabel('time')\n",
        "plt.ylabel('mu_t')\n",
        "plt.show()"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "zIuCG8VXZWDb"
      },
      "cell_type": "markdown",
      "source": [
        "## Cost and unit value"
      ]
    },
    {
      "metadata": {
        "id": "VEXoR6ilZiyf"
      },
      "cell_type": "markdown",
      "source": [
        "$$\n",
        "\\text{cost_per_impression} \\sim \\mathrm{Uniform}(0.011, 0.012)\n",
        "$$\n",
        "$$\n",
        "\\text{unit_value} \\sim \\mathrm{Uniform}(0.0345, 0.0355)\n",
        "$$"
      ]
    },
    {
      "metadata": {
        "id": "1q9hiZo-ZYeJ"
      },
      "cell_type": "code",
      "source": [
        "# @title Simulate cost and unit value\n",
        "\n",
        "cost_per_impression_m = tfp.distributions.Uniform(0.011, 0.012).sample(\n",
        "    n_total_paid_channels\n",
        ")\n",
        "cost_gtm = impression_gtm * cost_per_impression_m\n",
        "unit_value = tfp.distributions.Uniform(0.0345, 0.0355).sample((n_geos, n_times))\n",
        "\n",
        "# Plot the histograms\n",
        "fig, axes = plt.subplots(n_total_paid_channels // 2, 2, figsize=(12, 6))\n",
        "axes = axes.flatten()\n",
        "for i, ax in enumerate(axes):\n",
        "  ax.hist(tf.reshape(cost_gtm[..., i], [-1]), bins=30, alpha=0.7)\n",
        "  ax.set_title(f'Spend histogram for Channel {i}')\n",
        "  ax.set_xlabel('Spend (across all geos and times)')\n",
        "  ax.set_ylabel('Count')\n",
        "plt.tight_layout()\n",
        "plt.show()"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "APGlXHoNaBkN"
      },
      "cell_type": "markdown",
      "source": [
        "## Coefficients ($\\beta_{g,m}$ and $\\gamma_{g,c}$) and error term $\\epsilon_{g,t}$"
      ]
    },
    {
      "metadata": {
        "id": "z7xPVO-OoGPl"
      },
      "cell_type": "markdown",
      "source": [
        "For media coefficients,\n",
        "* $\\beta_m \\stackrel{iid}{\\sim} N(0.9, 0.1)$\n",
        "* $\\eta_m \\stackrel{iid}{\\sim} \\text{HalfNormal}(0.18)$\n",
        "* $\\beta_{g,m} \\big| (\\beta_m, \\eta_m) \\stackrel{iid}{\\sim} \\text{LogNormal} (\\beta_m, \\eta_m)$.\n",
        "\n",
        "For control coefficients,\n",
        "* $\\gamma_c \\stackrel{iid}{\\sim} N(3.5, 0.5)$\n",
        "* $\\xi_c \\stackrel{iid}{\\sim} \\text{HalfNormal}(0.3)$\n",
        "* $\\gamma_{g,c} \\big| (\\gamma_c, \\xi_c) \\stackrel{iid}{\\sim} N(\\gamma_c, \\xi_c)$.\n",
        "\n",
        "For residual variance,\n",
        "* $\\epsilon_{g,t} \\stackrel{iid}{\\sim} N(0, 0.5)$."
      ]
    },
    {
      "metadata": {
        "id": "3dc0UGxXaMcW"
      },
      "cell_type": "code",
      "source": [
        "# @title Simulate coefficients and error term\n",
        "\n",
        "# Media's coefficients\n",
        "beta_m = tfp.distributions.Normal(0.9, 0.1).sample(n_total_paid_channels)\n",
        "eta_m = tfp.distributions.HalfNormal(0.18).sample(n_total_paid_channels)\n",
        "beta_gm_dev = tfp.distributions.Normal(0, 1).sample(\n",
        "    [n_geos, n_total_paid_channels]\n",
        ")\n",
        "beta_gm = tf.exp(beta_m + eta_m * beta_gm_dev)\n",
        "\n",
        "# Controls' coefficients\n",
        "gamma_c = tfp.distributions.Normal(3.5, 0.5).sample(n_controls)\n",
        "xi_c = tfp.distributions.HalfNormal(0.3).sample(n_controls)\n",
        "gamma_gc_dev = tfp.distributions.Normal(0, 1).sample([n_geos, n_controls])\n",
        "gamma_gc = gamma_c + xi_c * gamma_gc_dev\n",
        "\n",
        "# epsilon\n",
        "sigma = tf.fill([1], 0.5)\n",
        "eps_gt = tfp.distributions.Normal(0, sigma[0]).sample([n_geos, n_times])"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "F7jU7svUbfjL"
      },
      "cell_type": "markdown",
      "source": [
        "## HillAdstock Parameters"
      ]
    },
    {
      "metadata": {
        "id": "eeVD_ZA4biAJ"
      },
      "cell_type": "markdown",
      "source": [
        "We simulate the HillAdstock parameters (`alpha`, `ec`, `slope`) from Meridian's corresponding default prior distributions."
      ]
    },
    {
      "metadata": {
        "id": "9QpzJf9ob2oo"
      },
      "cell_type": "code",
      "source": [
        "# @title Simulate HillAdstock parameters\n",
        "\n",
        "# Initialize prior distribution object\n",
        "parameters = prior_distribution.PriorDistribution.broadcast(\n",
        "    prior_distribution.PriorDistribution(),\n",
        "    n_geos=n_geos,\n",
        "    n_media_channels=n_imp_channels,\n",
        "    n_controls=n_controls,\n",
        "    n_rf_channels=n_rf_channels,\n",
        "    unique_sigma_for_each_geo=False,\n",
        "    n_knots=1,\n",
        "    is_national=False,\n",
        "    n_organic_media_channels=0,\n",
        "    n_organic_rf_channels=0,\n",
        "    n_non_media_channels=0,\n",
        "    set_total_media_contribution_prior=False,\n",
        "    kpi=0,\n",
        "    total_spend=0,\n",
        ")\n",
        "\n",
        "# HillAdstock parameters for impression-based channels\n",
        "alpha_m = parameters.alpha_m.sample()\n",
        "ec_m = parameters.ec_m.sample()\n",
        "slope_m = parameters.slope_m.sample()\n",
        "\n",
        "# HillAdstock parameters for RF channels\n",
        "alpha_rf = parameters.alpha_rf.sample()\n",
        "ec_rf = parameters.ec_rf.sample()\n",
        "slope_rf = parameters.slope_rf.sample()"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "KRKNo_u5x3L4"
      },
      "cell_type": "markdown",
      "source": [
        "Note that we want the simulation to look plausible. We did not simulate all parameters from Meridian's default priors because those priors are fairly wide; drawing from such wide distributions can produce odd parameter combinations that lead to unrealistic ROI values."
      ]
    },
    {
      "metadata": {
        "id": "e1sxQo8fc3tG"
      },
      "cell_type": "markdown",
      "source": [
        "## KPI and revenue"
      ]
    },
    {
      "metadata": {
        "id": "uyKooSx3rW3q"
      },
      "cell_type": "code",
      "source": [
        "# @title Collecting HillAdstock terms for media channels\n",
        "\n",
        "# HillAdstock term for impression-based channels\n",
        "hill_transformer = model.adstock_hill.HillTransformer(ec=ec_m, slope=slope_m)\n",
        "adstock_transformer = model.adstock_hill.AdstockTransformer(\n",
        "    alpha=alpha_m,\n",
        "    max_lag=8,\n",
        "    n_times_output=n_times,\n",
        ")\n",
        "media_transformed = hill_transformer.forward(\n",
        "    adstock_transformer.forward(transformed_ipc_gtm)\n",
        ")\n",
        "\n",
        "# HillAdstock term for RF channels\n",
        "hill_transformer = model.adstock_hill.HillTransformer(ec=ec_rf, slope=slope_rf)\n",
        "adstock_transformer = model.adstock_hill.AdstockTransformer(\n",
        "    alpha=alpha_rf,\n",
        "    max_lag=8,\n",
        "    n_times_output=n_times,\n",
        ")\n",
        "adj_frequency = hill_transformer.forward(freq_gtm)\n",
        "rf_transformed = adstock_transformer.forward(\n",
        "    transformed_rpc_gtm * adj_frequency\n",
        ")\n",
        "\n",
        "# Both impression-based and RF-based media channels\n",
        "media_rf_transformed = tf.concat([media_transformed, rf_transformed], axis=-1)"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "upFAZTU4d6PM"
      },
      "cell_type": "code",
      "source": [
        "# @title Generate KPI and Revenue\n",
        "\n",
        "# KPI per capita\n",
        "KPI_per_capita_gt = tf.maximum(\n",
        "    tau_g[..., tf.newaxis]\n",
        "    + tf.einsum(\"gtc,gc->gt\", transformed_control_gtc, gamma_gc)\n",
        "    + eps_gt\n",
        "    + mu_t[tf.newaxis, ...],\n",
        "    0.0,\n",
        ") + tf.einsum(\"gtm,gm->gt\", media_rf_transformed, beta_gm)\n",
        "\n",
        "# KPI and revenue\n",
        "KPI_gt = KPI_per_capita_gt * p_g[..., tf.newaxis]\n",
        "Revenue_gt = KPI_gt * unit_value\n",
        "\n",
        "# plot\n",
        "plt.plot(np.mean(KPI_gt, axis=(0)))\n",
        "plt.ylabel(\"KPI averaged over geos\")\n",
        "plt.xlabel(\"Time\")\n",
        "plt.show()"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "WWdQfoT0ecE7"
      },
      "cell_type": "markdown",
      "source": [
        "## Name arrays and convert into dataframe"
      ]
    },
    {
      "metadata": {
        "id": "Xoaj4_5uehh9"
      },
      "cell_type": "code",
      "source": [
        "# @title Helper functions for naming arrays\n",
        "\n",
        "def _sample_names(prefix: str, n_names: int) -> list[str]:\n",
        "  \"\"\"Generates a list of sample names.\n",
        "\n",
        "  It concatenates the same prefix with consecutive numbers to generate a list\n",
        "  of strings that can be used as sample names of columns/arrays/etc.\n",
        "  \"\"\"\n",
        "  return [prefix + str(n) for n in range(n_names)]\n",
        "\n",
        "\n",
        "def _sample_times(\n",
        "    n_times: int, start_date: datetime.date = datetime.date(2021, 1, 25)\n",
        ") -> list[str]:\n",
        "  \"\"\"Generates sample `time`s.\"\"\"\n",
        "  res = [\n",
        "      (start_date + datetime.timedelta(weeks=w)).strftime('%Y-%m-%d')\n",
        "      for w in range(n_times)\n",
        "  ]\n",
        "  return res\n",
        "\n",
        "\n",
        "def add_suffix(string: str, suffix: str) -> str:\n",
        "  return '_'.join([string, suffix])\n",
        "\n",
        "\n",
        "def remove_suffix(string: str, suffix: str) -> str:\n",
        "  # str.removesuffix only strips the suffix at the end of the string,\n",
        "  # unlike str.replace, which would remove any occurrence.\n",
        "  return string.removesuffix('_' + suffix)\n",
        "\n",
        "\n",
        "control_names = ['sentiment_score', 'competitor_activity_score']  # @param\n",
        "if len(control_names) != n_controls:\n",
        "  raise ValueError(f'Number of control names is not equal to {n_controls}')\n",
        "\n",
        "\n",
        "CHANNEL_NAME_PREFIX = 'Channel'\n",
        "GEO_NAME_PREFIX = 'Geo'\n",
        "\n",
        "CHANNEL_DIM_NAME = 'channel'\n",
        "RF_CHANNEL_DIM_NAME = 'rf_channel'\n",
        "CONTROL_DIM_NAME = 'control'\n",
        "GEO_DIM_NAME = GEO_COL_NAME = 'geo'\n",
        "TIME_DIM_NAME = TIME_COL_NAME = 'time'\n",
        "\n",
        "KPI_COL_NAME = 'conversions'\n",
        "POPULATION_COL_NAME = 'population'\n",
        "UNIT_VALUE_COL_NAME = 'revenue_per_conversion'\n",
        "\n",
        "SPEND_COL_SUFFIX = 'spend'\n",
        "IMPRESSIONS_COL_SUFFIX = 'impression'\n",
        "REACH_COL_SUFFIX = 'reach'\n",
        "FREQ_COL_SUFFIX = 'frequency'\n",
        "CONTROL_COL_SUFFIX = 'control'\n",
        "\n",
        "CONTROL_COL_NAMES = [add_suffix(c, CONTROL_COL_SUFFIX) for c in control_names]\n",
        "CHANNEL_NAMES = _sample_names(CHANNEL_NAME_PREFIX, n_total_paid_channels)\n",
        "GEO_NAMES = _sample_names(GEO_NAME_PREFIX, n_geos)\n",
        "TIME_NAMES = _sample_times(n_times)"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "afwcAcK0eyAZ"
      },
      "cell_type": "code",
      "source": [
        "# @title Combine tensors into an xarray or a Pandas DataFrame\n",
        "\n",
        "media_data = xr.DataArray(\n",
        "    impression_gtm,\n",
        "    dims=[GEO_DIM_NAME, TIME_DIM_NAME, CHANNEL_DIM_NAME],\n",
        "    coords={\n",
        "        GEO_DIM_NAME: GEO_NAMES,\n",
        "        TIME_DIM_NAME: TIME_NAMES,\n",
        "        CHANNEL_DIM_NAME: CHANNEL_NAMES,\n",
        "    },\n",
        "    name=IMPRESSIONS_COL_SUFFIX,\n",
        ")\n",
        "\n",
        "control_data_name = 'control_value'\n",
        "control_data = xr.DataArray(\n",
        "    control_gtc,\n",
        "    dims=[GEO_DIM_NAME, TIME_DIM_NAME, CONTROL_DIM_NAME],\n",
        "    coords={\n",
        "        GEO_DIM_NAME: GEO_NAMES,\n",
        "        TIME_DIM_NAME: TIME_NAMES,\n",
        "        CONTROL_DIM_NAME: CONTROL_COL_NAMES,\n",
        "    },\n",
        "    name=control_data_name,\n",
        ")\n",
        "\n",
        "spend_data = xr.DataArray(\n",
        "    cost_gtm,\n",
        "    dims=[GEO_DIM_NAME, TIME_DIM_NAME, CHANNEL_DIM_NAME],\n",
        "    coords={\n",
        "        GEO_DIM_NAME: GEO_NAMES,\n",
        "        TIME_DIM_NAME: TIME_NAMES,\n",
        "        CHANNEL_DIM_NAME: CHANNEL_NAMES,\n",
        "    },\n",
        "    name=SPEND_COL_SUFFIX,\n",
        ")\n",
        "\n",
        "kpi_data = xr.DataArray(\n",
        "    KPI_gt,\n",
        "    dims=[GEO_DIM_NAME, TIME_DIM_NAME],\n",
        "    coords={GEO_DIM_NAME: GEO_NAMES, TIME_DIM_NAME: TIME_NAMES},\n",
        "    name=KPI_COL_NAME,\n",
        ")\n",
        "\n",
        "unit_value_data = xr.DataArray(\n",
        "    unit_value,\n",
        "    dims=[GEO_DIM_NAME, TIME_DIM_NAME],\n",
        "    coords={GEO_DIM_NAME: GEO_NAMES, TIME_DIM_NAME: TIME_NAMES},\n",
        "    name=UNIT_VALUE_COL_NAME,\n",
        ")\n",
        "\n",
        "population_data = xr.DataArray(\n",
        "    p_g,\n",
        "    dims=[GEO_DIM_NAME],\n",
        "    coords={GEO_DIM_NAME: GEO_NAMES},\n",
        "    name=POPULATION_COL_NAME,\n",
        ")\n",
        "\n",
        "reach_data = xr.DataArray(\n",
        "    reach_gtm,\n",
        "    dims=[GEO_DIM_NAME, TIME_DIM_NAME, RF_CHANNEL_DIM_NAME],\n",
        "    coords={\n",
        "        GEO_DIM_NAME: GEO_NAMES,\n",
        "        TIME_DIM_NAME: TIME_NAMES,\n",
        "        RF_CHANNEL_DIM_NAME: CHANNEL_NAMES[-n_rf_channels:],\n",
        "    },\n",
        "    name=REACH_COL_SUFFIX,\n",
        ")\n",
        "\n",
        "frequency_data = xr.DataArray(\n",
        "    freq_gtm,\n",
        "    dims=[GEO_DIM_NAME, TIME_DIM_NAME, RF_CHANNEL_DIM_NAME],\n",
        "    coords={\n",
        "        GEO_DIM_NAME: GEO_NAMES,\n",
        "        TIME_DIM_NAME: TIME_NAMES,\n",
        "        RF_CHANNEL_DIM_NAME: CHANNEL_NAMES[-n_rf_channels:],\n",
        "    },\n",
        "    name=FREQ_COL_SUFFIX,\n",
        ")\n",
        "\n",
        "# DF\n",
        "media_df = (\n",
        "    media_data.to_dataframe()\n",
        "    .reset_index()\n",
        "    .pivot(\n",
        "        index=[GEO_DIM_NAME, TIME_DIM_NAME],\n",
        "        columns=CHANNEL_DIM_NAME,\n",
        "        values=IMPRESSIONS_COL_SUFFIX,\n",
        "    )\n",
        "    .rename(columns=lambda x: add_suffix(x, IMPRESSIONS_COL_SUFFIX))\n",
        ")\n",
        "\n",
        "control_df = (\n",
        "    control_data.to_dataframe()\n",
        "    .reset_index()\n",
        "    .pivot(\n",
        "        index=[GEO_DIM_NAME, TIME_DIM_NAME],\n",
        "        columns=CONTROL_DIM_NAME,\n",
        "        values=control_data_name,\n",
        "    )\n",
        ")\n",
        "\n",
        "spend_df = (\n",
        "    spend_data.to_dataframe()\n",
        "    .reset_index()\n",
        "    .pivot(\n",
        "        index=[GEO_DIM_NAME, TIME_DIM_NAME],\n",
        "        columns=CHANNEL_DIM_NAME,\n",
        "        values=SPEND_COL_SUFFIX,\n",
        "    )\n",
        "    .rename(columns=lambda x: add_suffix(x, SPEND_COL_SUFFIX))\n",
        ")\n",
        "\n",
        "kpi_df = kpi_data.to_dataframe().reset_index()\n",
        "\n",
        "unit_value_df = unit_value_data.to_dataframe().reset_index()\n",
        "\n",
        "population_df = population_data.to_dataframe().reset_index()\n",
        "\n",
        "reach_df = (\n",
        "    reach_data.to_dataframe()\n",
        "    .reset_index()\n",
        "    .pivot(\n",
        "        index=[GEO_DIM_NAME, TIME_DIM_NAME],\n",
        "        columns=RF_CHANNEL_DIM_NAME,\n",
        "        values=REACH_COL_SUFFIX,\n",
        "    )\n",
        "    .rename(columns=lambda x: add_suffix(x, REACH_COL_SUFFIX))\n",
        ")\n",
        "\n",
        "frequency_df = (\n",
        "    frequency_data.to_dataframe()\n",
        "    .reset_index()\n",
        "    .pivot(\n",
        "        index=[GEO_DIM_NAME, TIME_DIM_NAME],\n",
        "        columns=RF_CHANNEL_DIM_NAME,\n",
        "        values=FREQ_COL_SUFFIX,\n",
        "    )\n",
        "    .rename(columns=lambda x: add_suffix(x, FREQ_COL_SUFFIX))\n",
        ")\n",
        "\n",
        "media_df.reset_index(inplace=True)\n",
        "control_df.reset_index(inplace=True)\n",
        "spend_df.reset_index(inplace=True)\n",
        "reach_df.reset_index(inplace=True)\n",
        "frequency_df.reset_index(inplace=True)\n",
        "\n",
        "# Merge on the shared geo and time columns common to every frame.\n",
        "geo_data_df = (\n",
        "    media_df.merge(control_df)\n",
        "    .merge(spend_df)\n",
        "    .merge(kpi_df)\n",
        "    .merge(unit_value_df)\n",
        "    .merge(population_df)\n",
        ")\n",
        "\n",
        "geo_rf_data_df = geo_data_df.merge(reach_df).merge(frequency_df)"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "ZxFHztxGJy0W"
      },
      "cell_type": "code",
      "source": [
        "# @title Generate a national dataset from the geo dataframe\n",
        "\n",
        "def _weighted_average(geo_df, freq_col_name):\n",
        "  channel_name = remove_suffix(freq_col_name, FREQ_COL_SUFFIX)\n",
        "  reach_col_name = add_suffix(channel_name, REACH_COL_SUFFIX)\n",
        "  reach_col = geo_df[reach_col_name]\n",
        "  total_reach = reach_col.sum()\n",
        "\n",
        "  if total_reach == 0:\n",
        "    # Handle cases where the sum of reach is zero to avoid division by zero.\n",
        "    return 0\n",
        "\n",
        "  return np.average(geo_df[freq_col_name], weights=reach_col)\n",
        "\n",
        "\n",
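        "# Worked example: with reach values (100, 300) and frequencies\n",
        "# (2.0, 4.0), the reach-weighted average frequency is\n",
        "# (100 * 2.0 + 300 * 4.0) / 400 = 3.5.\n",
        "\n",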
        "def geo_df_to_national(geo_df):\n",
        "  \"\"\"Aggregates a pandas DataFrame by TIME_COL_NAME.\n",
        "\n",
        "  Args:\n",
        "      geo_df: The input DataFrame.\n",
        "\n",
        "  Returns:\n",
        "      A new DataFrame with aggregated data.\n",
        "  \"\"\"\n",
        "  geo_df = geo_df.drop(columns=[POPULATION_COL_NAME])\n",
        "\n",
        "  aggregations = {\n",
        "      KPI_COL_NAME: 'sum',\n",
        "      UNIT_VALUE_COL_NAME: 'mean',\n",
        "  }\n",
        "\n",
        "  # Assign an aggregation rule to each column based on its suffix.\n",
        "  freq_cols = []\n",
        "\n",
        "  for col in geo_df.columns:\n",
        "    if col.endswith((IMPRESSIONS_COL_SUFFIX, SPEND_COL_SUFFIX, REACH_COL_SUFFIX)):\n",
        "      aggregations[col] = 'sum'\n",
        "    elif col.endswith(CONTROL_COL_SUFFIX):\n",
        "      aggregations[col] = 'mean'\n",
        "    elif col.endswith(FREQ_COL_SUFFIX):\n",
        "      # Frequency needs a reach-weighted average, computed below.\n",
        "      freq_cols.append(col)\n",
        "\n",
        "  # Aggregate the DataFrame\n",
        "  national_df = geo_df.groupby(TIME_COL_NAME).agg(aggregations)\n",
        "\n",
        "  for col in freq_cols:\n",
        "    national_df[col] = geo_df.groupby(TIME_COL_NAME).apply(\n",
        "        _weighted_average, freq_col_name=col, include_groups=False\n",
        "    )\n",
        "\n",
        "  return national_df\n",
        "\n",
        "\n",
        "national_rf_data_df = geo_df_to_national(geo_rf_data_df).reset_index()"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "gthGOJNMbIE4"
      },
      "cell_type": "code",
      "source": [
        "# @title Collect the simulated parameters (on the raw scale of KPI_per_capita)\n",
        "\n",
        "dict_simul_param_on_raw_scale = {\n",
        "    'alpha_m': alpha_m.numpy(),\n",
        "    'alpha_rf': alpha_rf.numpy(),\n",
        "    'beta_gm': beta_gm.numpy()[:, :n_imp_channels],\n",
        "    'beta_grf': beta_gm.numpy()[:, -n_rf_channels:],\n",
        "    'beta_m': beta_m.numpy()[:n_imp_channels],\n",
        "    'beta_rf': beta_m.numpy()[-n_rf_channels:],\n",
        "    'ec_m': ec_m.numpy(),\n",
        "    'ec_rf': ec_rf.numpy(),\n",
        "    'eta_m': eta_m.numpy()[:n_imp_channels],\n",
        "    'eta_rf': eta_m.numpy()[-n_rf_channels:],\n",
        "    'gamma_c': gamma_c.numpy(),\n",
        "    'gamma_gc': gamma_gc.numpy(),\n",
        "    'mu_t': mu_t.numpy(),\n",
        "    'sigma': sigma.numpy(),\n",
        "    'slope_m': slope_m.numpy(),\n",
        "    'slope_rf': slope_rf.numpy(),\n",
        "    'tau_g': tau_g.numpy(),\n",
        "    'xi_c': xi_c.numpy(),\n",
        "    'intercept_gt': (tau_g[..., tf.newaxis] + mu_t[tf.newaxis, ...]).numpy(),\n",
        "}"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "9JQecjRnaRjj"
      },
      "cell_type": "markdown",
      "source": [
        "## Ground-truth parameters and ROI calculation"
      ]
    },
    {
      "metadata": {
        "id": "GEE4k0YTK6i6"
      },
      "cell_type": "markdown",
      "source": [
        "* First, recall that Meridian automatically transforms the KPI and media data (for more information, see the [Input data](https://developers.google.com/meridian/docs/basics/input-data) documentation page).\n",
        "\n",
        "* Thus, Meridian's posterior samples of the model parameters should be interpreted on the scales of the transformed KPI and media data.\n",
        "\n",
        "* Consequently, the simulated parameters used to generate the raw KPI data do not have the same meaning as Meridian's model parameters fit to the transformed KPI data.\n",
        "\n",
        "* We therefore need to rescale the simulated parameters so that what we regard as the \"ground-truth\" parameters are on the same scale as Meridian's sampled parameters."
      ]
    },
    {
      "metadata": {
        "id": "Z8kgtUiSNwVS"
      },
      "cell_type": "code",
      "source": [
        "# @title Obtain ground-truths to ensure comparison with posterior samples at the same scale\n",
        "\n",
        "# Get the centering and scaling factors needed.\n",
        "kpi_transformer = transformers.KpiTransformer(kpi=KPI_gt, population=p_g)\n",
        "kpi_mean = kpi_transformer.population_scaled_mean.numpy().item()\n",
        "kpi_stdev = kpi_transformer.population_scaled_stdev.numpy().item()\n",
        "\n",
        "# Define the simulated parameters that need to be scaled.\n",
        "param_to_scale = [\n",
        "    'beta_gm',\n",
        "    'beta_grf',\n",
        "    'gamma_c',\n",
        "    'gamma_gc',\n",
        "    'sigma',\n",
        "    'xi_c',\n",
        "]\n",
        "\n",
        "# Initialize\n",
        "dict_ground_truth = {}\n",
        "\n",
        "# Scale\n",
        "for param in param_to_scale:\n",
        "  dict_ground_truth[param] = dict_simul_param_on_raw_scale[param] / kpi_stdev\n",
        "\n",
        "# Since we assume lognormal distribution for prior of beta_gm and beta_grf,\n",
        "# we need to transform beta_m and beta_rf differently.\n",
        "# Note: if log(X) ~ N(mu, sigma), then log(X/k) ~ N(mu-log(k), sigma).\n",
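        "# For example, with kpi_stdev = 2, a raw-scale value of 1.0 becomes\n",
        "# 1.0 - log(2) ≈ 0.31 on the transformed scale.\n",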
        "for param in ['beta_m', 'beta_rf']:\n",
        "  dict_ground_truth[param] = dict_simul_param_on_raw_scale[param] - np.log(\n",
        "      kpi_stdev\n",
        "  )\n",
        "\n",
        "# We consider the geo-and-time intercept terms (mu_t + tau_g) together, due to\n",
        "# the centering around kpi_mean.\n",
        "dict_ground_truth['intercept_gt'] = (\n",
        "    dict_simul_param_on_raw_scale['intercept_gt'] - kpi_mean\n",
        ") / kpi_stdev"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "ETYU653LdEHm"
      },
      "cell_type": "code",
      "source": [
        "# @title Compute incremental revenue and ROI\n",
        "\n",
        "# ROI\n",
        "# ROI formula is explained at https://developers.google.com/meridian/docs/basics/roi-and-mroi-parameterization#roi\n",
        "# The einsum 'g,gt,gtm,gm->m' multiplies population, unit value,\n",
        "# transformed media, and the media coefficients, then sums over geos (g)\n",
        "# and time (t), leaving one incremental value per channel (m).\n",
        "Incremental_Revenue_m = kpi_stdev * tf.einsum(\n",
        "    'g,gt,gtm,gm->m',\n",
        "    p_g,\n",
        "    unit_value,\n",
        "    media_rf_transformed,\n",
        "    tf.concat(\n",
        "        [dict_ground_truth['beta_gm'], dict_ground_truth['beta_grf']], axis=1\n",
        "    ),\n",
        ")\n",
        "ground_truth_roi = Incremental_Revenue_m / tf.einsum('gtm->m', cost_gtm)\n",
        "total_incremental_roi = np.sum(Incremental_Revenue_m) / np.sum(cost_gtm)\n",
        "total_revenue = np.sum(Revenue_gt)\n",
        "\n",
        "print(f'ground-truth ROI for every channel = {ground_truth_roi.numpy()}')\n",
        "print(f'ground-truth total incremental ROI = {total_incremental_roi:.2f}')\n",
        "print()\n",
        "print(f'Total revenue = {total_revenue / 1e6:.2f}M')\n",
        "print(f'Total ROI = {total_revenue / np.sum(cost_gtm):.2f}')\n",
        "\n",
        "# Store in dictionary\n",
        "dict_ground_truth['roi_m'] = ground_truth_roi.numpy()[:n_imp_channels]\n",
        "dict_ground_truth['roi_rf'] = ground_truth_roi.numpy()[-n_rf_channels:]"
      ],
      "outputs": [],
      "execution_count": null
    },
    {
      "metadata": {
        "id": "joNzlLUcxHTm"
      },
      "cell_type": "markdown",
      "source": [
        "# Save the simulated data and the ground-truth values"
      ]
    },
    {
      "metadata": {
        "id": "vtjlPaKdxJoS"
      },
      "cell_type": "code",
      "source": [
        "geo_rf_data_df.to_csv('geo_rf_data.csv', index=False)\n",
        "national_rf_data_df.to_csv('national_rf_data.csv', index=False)\n",
        "with open('dict_ground_truth.pkl', 'wb') as f:\n",
        "  pickle.dump(dict_ground_truth, f)"
      ],
      "outputs": [],
      "execution_count": null
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "gpuType": "T4",
      "machine_shape": "hm",
      "provenance": [],
      "toc_visible": true
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
