{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#| default_exp favorita"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Favorita\n",
    "\n",
    "## Description\n",
    "The 2018 Kaggle competition was organized by Corporación Favorita, a major Ecuadorian grocery retailer. The Favorita dataset comprises item sales history and promotions information, with additional information on items, stores, and regional and national holidays, among others.\n",
    "\n",
    "The competition task consisted of forecasting sixteen days of log-sales for particular item-store combinations, for 210,654 series.\n",
    "The original dataset is available in the [Kaggle Competition url](https://www.kaggle.com/c/favorita-grocery-sales-forecasting/).\n",
    "\n",
    "During the model's optimization we consider a balanced dataset of items and stores, with 217,944 bottom-level series (4,036 items * 54 stores). We consider a geographical hierarchical structure of 4 levels corresponding to stores, cities, states, and the national level, for a total of 371,312 time series. The dataset is at the daily level, starting on 2013-01-01 and ending on 2017-08-15, comprising 1688 days; we keep the last 34 days (days 1654 to 1688) as a hold-out test set and the preceding 34 days (days 1620 to 1654) as validation.\n",
    "\n",
    "| Geographical Division | Number of nodes per division  | Number of series per division |    Total    |\n",
    "|          ---          |               ---             |              ---              |     ---     |\n",
    "|  Ecuador              |              1                |             4,036             |     4,036   |\n",
    "|  States               |             16                |            64,576             |    64,576   |\n",
    "|  Cities               |             22                |            88,792             |    88,792   |\n",
    "|  Stores               |             54                |           217,944             |   217,944   |\n",
    "|  Total                |             93                |           371,312             |   371,312   |\n",
    "\n",
    "## References\n",
    "- [Corporación Favorita (2018). Corporación favorita grocery sales forecasting. Kaggle Competition. URL: https://www.kaggle.com/c/favorita-grocery-sales-forecasting/.](https://www.kaggle.com/c/favorita-grocery-sales-forecasting/)<br>\n",
    "- [Kin G. Olivares, O. Nganba Meetei, Ruijun Ma, Rohan Reddy, Mengfei Cao, Lee Dicker (2022).\"Probabilistic Hierarchical Forecasting with Deep Poisson Mixtures\". International Journal Forecasting, special issue.](https://doi.org/10.1016/j.ijforecast.2023.04.007)<br>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#| hide\n",
    "#| eval: false\n",
    "import matplotlib.pyplot as plt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#|hide\n",
    "from fastcore.test import test_eq\n",
    "from nbdev.showdoc import show_doc"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#| export\n",
    "import os\n",
    "import gc\n",
    "import timeit\n",
    "from typing import Tuple\n",
    "from dataclasses import dataclass\n",
    "\n",
    "from pathlib import Path\n",
    "from itertools import chain\n",
    "\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "# TODO: @kdgutier double check if it is possible to avoid scikit-learn dependency\n",
    "# We are only using OneHotEncoder, which adds unnecessary dependency complexity.\n",
    "import sklearn.preprocessing as preprocessing\n",
    "from sklearn.preprocessing import OneHotEncoder\n",
    "\n",
    "from datasetsforecast.utils import download_file, extract_file, Info #, CodeTimer"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Auxiliary Functions\n",
    "\n",
    "These auxiliary functions are used to efficiently create and wrangle Favorita's series."
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Numpy Wrangling"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#| exporti\n",
    "# TODO: @kdgutier `CodeTimer`/`numpy_balance` are shared with hierarchicalforecast.utils\n",
    "# In case of merging datasetsforecast/hierarchicalforecast we will need to keep only one.\n",
    "class CodeTimer:\n",
    "    def __init__(self, name=None, verbose=True):\n",
    "        self.name = \" '\"  + name + \"'\" if name else ''\n",
    "        self.verbose = verbose\n",
    "\n",
    "    def __enter__(self):\n",
    "        self.start = timeit.default_timer()\n",
    "\n",
    "    def __exit__(self, exc_type, exc_value, traceback):\n",
    "        self.took = (timeit.default_timer() - self.start)\n",
    "        if self.verbose:\n",
    "            print('Code block' + self.name + \\\n",
    "                  ' took:\\t{0:.5f}'.format(self.took) + ' seconds')\n",
    "\n",
    "def numpy_balance(*arrs):\n",
    "    \"\"\"\n",
    "    Fast NumPy implementation of the 'balance' operation, useful to\n",
    "    create a balanced panel dataset, i.e. a dataset with all the\n",
    "    interactions of 'unique_id' and 'ds'.\n",
    "\n",
    "    **Parameters:**<br>\n",
    "    `arrs`: NumPy arrays.<br>\n",
    "\n",
    "    **Returns:**<br>\n",
    "    `out`: NumPy array.\n",
    "    \"\"\"\n",
    "    N = len(arrs)\n",
    "    out =  np.transpose(np.meshgrid(*arrs, indexing='ij'),\n",
    "                        np.roll(np.arange(N + 1), -1)).reshape(-1, N)\n",
    "    return out\n",
    "\n",
    "def numpy_ffill(arr):\n",
    "    \"\"\"\n",
    "    Fast NumPy implementation of `ffill` that fills missing values\n",
    "    in an array by propagating the last non-missing value forward.\n",
    "\n",
    "    For example, if the array has the following values:<br>\n",
    "    0  1  2    3<br>\n",
    "    1  2  NaN  4<br>\n",
    "\n",
    "    The `ffill` method would fill the missing values as follows:<br>\n",
    "    0  1  2  3<br>\n",
    "    1  2  2  4<br>\n",
    "\n",
    "    **Parameters:**<br>\n",
    "    `arr`: NumPy array.<br>\n",
    "\n",
    "    **Returns:**<br>\n",
    "    `out`: NumPy array.\n",
    "    \"\"\"\n",
    "    # (n_series, n_dates) = arr.shape\n",
    "    mask = np.isnan(arr)\n",
    "    idx = np.where(~mask, np.arange(mask.shape[1]), 0)\n",
    "    np.maximum.accumulate(idx, axis=1, out=idx)\n",
    "    out = arr[np.arange(idx.shape[0])[:,None], idx]\n",
    "    return out\n",
    "\n",
    "def numpy_bfill(arr):\n",
    "    \"\"\"\n",
    "    Fast NumPy implementation of `bfill` that fills missing values\n",
    "    in an array by propagating the next non-missing value backwards.\n",
    "\n",
    "    For example, if the array has the following values:<br>\n",
    "    0  1  2    3<br>\n",
    "    1  2  NaN  4<br>\n",
    "\n",
    "    The `bfill` method would fill the missing values as follows:<br>\n",
    "    0  1  2  3<br>\n",
    "    1  2  4  4<br>\n",
    "    \n",
    "    **Parameters:**<br>\n",
    "    `arr`: NumPy array.<br>\n",
    "\n",
    "    **Returns:**<br>\n",
    "    `out`: NumPy array.\n",
    "    \"\"\"\n",
    "    mask = np.isnan(arr)\n",
    "    idx = np.where(~mask, np.arange(mask.shape[1]), mask.shape[1] - 1)\n",
    "    idx = np.minimum.accumulate(idx[:, ::-1], axis=1)[:, ::-1]\n",
    "    out = arr[np.arange(idx.shape[0])[:,None], idx]\n",
    "    return out"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "show_doc(numpy_balance, title_level=4)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "show_doc(numpy_ffill, title_level=4)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "show_doc(numpy_bfill, title_level=4)"
   ]
  },
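  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the operations above, inlining the same NumPy expressions used by `numpy_balance` and `numpy_ffill` on toy arrays (`numpy_bfill` is symmetric); the `ids` and `dates` values are illustrative only:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Balanced panel: every (unique_id, ds) combination, as in `numpy_balance`\n",
    "ids, dates = np.array([1, 2]), np.array([10, 20, 30])\n",
    "N = 2\n",
    "balanced = np.transpose(np.meshgrid(ids, dates, indexing='ij'),\n",
    "                        np.roll(np.arange(N + 1), -1)).reshape(-1, N)\n",
    "# balanced pairs every id with every date:\n",
    "# [[1 10] [1 20] [1 30] [2 10] [2 20] [2 30]]\n",
    "\n",
    "# Forward fill along rows, as in `numpy_ffill`\n",
    "arr = np.array([[1., 2., np.nan, 4.]])\n",
    "mask = np.isnan(arr)\n",
    "idx = np.where(~mask, np.arange(mask.shape[1]), 0)\n",
    "np.maximum.accumulate(idx, axis=1, out=idx)\n",
    "ffilled = arr[np.arange(idx.shape[0])[:, None], idx]\n",
    "# ffilled -> [[1. 2. 2. 4.]]"
   ]
  },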
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Pandas Wrangling"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#| exporti\n",
    "def one_hot_encoding(df, index_col):\n",
    "    \"\"\" \n",
    "    Encodes DataFrame `df`'s categorical variables, skipping `index_col`.\n",
    "\n",
    "    **Parameters:**<br>\n",
    "    `df`: pd.DataFrame with categorical columns.<br>\n",
    "    `index_col`: str, the index column to avoid encoding.<br>\n",
    "\n",
    "    **Returns:**<br>\n",
    "    `one_hot_concat_df`: pd.DataFrame with one hot encoded categorical columns.<br>\n",
    "    \"\"\"\n",
    "    encoder = OneHotEncoder()\n",
    "    columns = list(df.columns)\n",
    "    columns.remove(index_col)\n",
    "    one_hot_concat_df = pd.DataFrame(df[index_col].values, columns=[index_col])\n",
    "    for col in columns:\n",
    "        dummy_values  = encoder.fit_transform(df[col].values.reshape(-1,1)).toarray()\n",
    "        # Use the fitted (sorted) categories so column names align with the dummy values\n",
    "        dummy_columns = [f'{col}_[{x}]' for x in encoder.categories_[0]]\n",
    "        one_hot_df    = pd.DataFrame(dummy_values, columns=dummy_columns)        \n",
    "        one_hot_concat_df = pd.concat([one_hot_concat_df, one_hot_df], axis=1)\n",
    "    return one_hot_concat_df\n",
    "\n",
    "def nested_one_hot_encoding(df, index_col):\n",
    "    \"\"\" \n",
    "    Encodes DataFrame `df`'s hierarchically-nested categorical variables, skipping `index_col`.\n",
    "\n",
    "    Nested categorical variables (for example, geographic levels country>state)\n",
    "    require the dummy features to preserve encoding order, to reflect the hierarchy\n",
    "    of the categorical variables.\n",
    "\n",
    "    **Parameters:**<br>\n",
    "    `df`: pd.DataFrame with hierarchically-nested categorical columns.<br>\n",
    "    `index_col`: str, the index column to avoid encoding.<br>\n",
    "\n",
    "    **Returns:**<br>\n",
    "    `one_hot_concat_df`: pd.DataFrame with one hot encoded hierarchically-nested categorical columns.<br>\n",
    "    \"\"\"\n",
    "    bottom_ids = list(df[index_col])\n",
    "    df = df.drop(columns=[index_col])  # avoid mutating the caller's DataFrame\n",
    "    categories = [df[col].unique() for col in df.columns]\n",
    "    encoder = OneHotEncoder(categories=categories,\n",
    "                            sparse_output=False, dtype=np.float32)\n",
    "    dummies = encoder.fit_transform(df)\n",
    "    one_hot_concat_df = pd.DataFrame(dummies, index=bottom_ids,\n",
    "                                     columns=list(chain(*categories)))\n",
    "    return one_hot_concat_df\n",
    "\n",
    "def get_levels_from_S_df(S_df):\n",
    "    \"\"\" Get hierarchical index levels implied by aggregation constraints dataframe `S_df`.\n",
    "\n",
    "    Create levels from summation matrix (base, bottom).\n",
    "    Goes through the rows until all the bottom level series are 'covered'\n",
    "    by the aggregation constraints to discover blocks/hierarchy levels.\n",
    "\n",
    "    **Parameters:**<br>\n",
    "    `S_df`: pd.DataFrame with summing matrix of size `(base, bottom)`, see [aggregate method](https://nixtla.github.io/hierarchicalforecast/utils.html#aggregate).<br>\n",
    "\n",
    "    **Returns:**<br>\n",
    "    `levels`: list, with hierarchical aggregation indexes, where each entry is a level.\n",
    "    \"\"\"\n",
    "    cut_idxs, = np.where(S_df.sum(axis=1).cumsum() % S_df.shape[1] == 0.)\n",
    "    levels = [S_df.iloc[(cut_idxs[i] + 1):(cut_idxs[i+1] + 1)].index.values for i in range(cut_idxs.size-1)]\n",
    "    levels = [S_df.iloc[[0]].index.values] + levels\n",
    "    assert sum([len(lv) for lv in levels]) == S_df.shape[0]\n",
    "    return levels"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "show_doc(one_hot_encoding, title_level=4)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "show_doc(nested_one_hot_encoding, title_level=4)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "show_doc(get_levels_from_S_df, title_level=4)"
   ]
  },
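  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The level-recovery logic of `get_levels_from_S_df` can be traced on a tiny summing matrix. The hierarchy below (one total, two states, four stores) and its labels are hypothetical:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "# Hypothetical summing matrix: Total > 2 states > 4 stores\n",
    "S = np.array([[1, 1, 1, 1],   # Total\n",
    "              [1, 1, 0, 0],   # state_A\n",
    "              [0, 0, 1, 1],   # state_B\n",
    "              [1, 0, 0, 0],   # individual stores\n",
    "              [0, 1, 0, 0],\n",
    "              [0, 0, 1, 0],\n",
    "              [0, 0, 0, 1]], dtype=np.float32)\n",
    "S_df = pd.DataFrame(S, index=['Total', 'state_A', 'state_B',\n",
    "                              's1', 's2', 's3', 's4'])\n",
    "\n",
    "# Cumulative row sums hit a multiple of n_bottom exactly at level boundaries\n",
    "cut_idxs, = np.where(S_df.sum(axis=1).cumsum() % S_df.shape[1] == 0.)\n",
    "levels = [S_df.iloc[(cut_idxs[i] + 1):(cut_idxs[i + 1] + 1)].index.values\n",
    "          for i in range(cut_idxs.size - 1)]\n",
    "levels = [S_df.iloc[[0]].index.values] + levels\n",
    "# levels -> [['Total'], ['state_A' 'state_B'], ['s1' 's2' 's3' 's4']]"
   ]
  },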
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#| exporti\n",
    "# TODO: @kdgutier `make_holidays_distance_df` partially shared with neuralforecast.utils\n",
    "# In particular some Transformers use a holiday-based global positional encoding.\n",
    "# Same goes for HINT experiment that uses such general purpose holiday distances.\n",
    "def distance_to_holiday(holiday_dates, dates):\n",
    "    # Get holidays around dates\n",
    "    dates = pd.DatetimeIndex(dates)\n",
    "    dates_np = np.array(dates).astype('datetime64[D]')\n",
    "    holiday_dates_np = np.array(pd.DatetimeIndex(holiday_dates)).astype('datetime64[D]')\n",
    "\n",
    "    # Compute day distance to holiday\n",
    "    distance = np.expand_dims(dates_np, axis=1) - np.expand_dims(holiday_dates_np, axis=0)\n",
    "    distance = np.abs(distance)\n",
    "    distance = np.min(distance, axis=1)\n",
    "    \n",
    "    # Convert to float\n",
    "    distance = distance.astype(float)\n",
    "    distance = distance * (distance>0)\n",
    "    \n",
    "    # Fix start and end of date range\n",
    "    # TODO: Think better way of fixing absence of holiday\n",
    "    # It seems that the holidays dataframe has missing holidays\n",
    "    distance[distance>183] = 365 - distance[distance>183]\n",
    "    distance = np.abs(distance)\n",
    "    distance[distance>183] = 365 - distance[distance>183]\n",
    "    distance = np.abs(distance)\n",
    "    distance[distance>183] = 365 - distance[distance>183]\n",
    "    distance = np.abs(distance)\n",
    "    distance[distance>183] = 365 - distance[distance>183]    \n",
    "    \n",
    "    # Scale\n",
    "    distance = (distance/183) - 0.5\n",
    "\n",
    "    return distance\n",
    "\n",
    "def make_holidays_distance_df(holidays_df, dates):\n",
    "    # Make a dataframe of distances in days to each holiday,\n",
    "    # for the given holiday dates and date range\n",
    "    distance_dict = {'date': dates}\n",
    "    for holiday in holidays_df.description.unique():\n",
    "        holiday_dates = holidays_df[holidays_df.description==holiday]['date']\n",
    "        holiday_dates = holiday_dates.tolist()\n",
    "        \n",
    "        holiday_str = f'dist2_[{holiday}]'\n",
    "        distance_dict[holiday_str] = distance_to_holiday(holiday_dates, dates)\n",
    "\n",
    "    holidays_distance_df = pd.DataFrame(distance_dict)\n",
    "    return holidays_distance_df"
   ]
  },
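  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The core of `distance_to_holiday`, the absolute day distance of each date to its nearest holiday before the year wrap-around correction and scaling, can be sketched as follows (the holiday date and date range are illustrative only):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "# Hypothetical holiday and a short date range around it\n",
    "holiday_dates = pd.DatetimeIndex(['2017-01-01'])\n",
    "dates = pd.date_range('2016-12-30', '2017-01-03', freq='D')\n",
    "\n",
    "dates_np = np.array(dates).astype('datetime64[D]')\n",
    "holidays_np = np.array(holiday_dates).astype('datetime64[D]')\n",
    "\n",
    "# Absolute day distance of every date to its nearest holiday\n",
    "distance = np.abs(dates_np[:, None] - holidays_np[None, :])\n",
    "distance = np.min(distance, axis=1).astype(float)\n",
    "# distance -> [2. 1. 0. 1. 2.]\n",
    "\n",
    "# Scaled roughly into [-0.5, 0.5] as in `distance_to_holiday`\n",
    "scaled = (distance / 183) - 0.5"
   ]
  },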
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Favorita Dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#| exporti\n",
    "@dataclass\n",
    "class Favorita200:\n",
    "    freq: str = 'D'\n",
    "    horizon: int = 34\n",
    "    seasonality: int = 7\n",
    "    test_size: int = 34\n",
    "    tags_names: Tuple[str] = (\n",
    "        'Country',\n",
    "        'Country/State',\n",
    "        'Country/State/City',\n",
    "        'Country/State/City/Store',\n",
    "    )\n",
    "\n",
    "@dataclass\n",
    "class Favorita500:\n",
    "    freq: str = 'D'\n",
    "    horizon: int = 34\n",
    "    seasonality: int = 7\n",
    "    test_size: int = 34\n",
    "    tags_names: Tuple[str] = (\n",
    "        'Country',\n",
    "        'Country/State',\n",
    "        'Country/State/City',\n",
    "        'Country/State/City/Store',\n",
    "    )\n",
    "\n",
    "@dataclass\n",
    "class FavoritaComplete:\n",
    "    freq: str = 'D'\n",
    "    horizon: int = 34\n",
    "    seasonality: int = 7\n",
    "    test_size: int = 34\n",
    "    tags_names: Tuple[str] = (\n",
    "        'Country',\n",
    "        'Country/State',\n",
    "        'Country/State/City',\n",
    "        'Country/State/City/Store',\n",
    "    )\n",
    "\n",
    "FavoritaInfo = Info((Favorita200, Favorita500, FavoritaComplete))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Favorita Raw"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#| export\n",
    "class FavoritaRawData:\n",
    "    \"\"\" Favorita Raw Data\n",
    "\n",
    "    Raw subset datasets from the Favorita 2018 Kaggle competition.\n",
    "    This class contains utilities to download, load, and filter portions of the dataset.\n",
    "\n",
    "    If you prefer, you can also download the original dataset directly from Kaggle:<br>\n",
    "    `pip install kaggle --upgrade`<br>\n",
    "    `kaggle competitions download -c favorita-grocery-sales-forecasting`\n",
    "    \"\"\"\n",
    "    source_url = 'https://www.dropbox.com/s/xi019gtvdtmsj9j/favorita-grocery-sales-forecasting2.zip?dl=1'\n",
    "    files = ['holidays_events.csv.zip', 'items.csv.zip', 'oil.csv.zip', 'sample_submission.csv.zip',\n",
    "             'stores.csv.zip', 'test.csv.zip', 'train.csv.zip', 'transactions.csv.zip']\n",
    "\n",
    "    @staticmethod\n",
    "    def unzip(path):\n",
    "        # Unzip the raw Favorita data files\n",
    "        # shutil.register_unpack_format('7zip', ['.7z'], unpack_7zarchive)\n",
    "        for file in FavoritaRawData.files:\n",
    "            filepath = f'{path}/{file}'\n",
    "            extract_file(filepath, path)\n",
    "\n",
    "    @staticmethod\n",
    "    def download(directory: str) -> None:\n",
    "        \"\"\"Downloads Favorita Competition Dataset.\n",
    "        The dataset weighs 980MB and its download is not currently robust to\n",
    "        brief interruptions of the process. It is recommended to execute it\n",
    "        with a good connection.\n",
    "        \"\"\"\n",
    "        if not os.path.exists(directory):\n",
    "            download_file(directory, FavoritaRawData.source_url, decompress=True)\n",
    "        if not os.path.exists(f'{directory}/train.csv'):\n",
    "            FavoritaRawData.unzip(directory)\n",
    "\n",
    "    @staticmethod\n",
    "    def _read_raw_data(directory):\n",
    "        # Download Favorita Kaggle competition and unzip\n",
    "        FavoritaRawData.download(directory)\n",
    "\n",
    "        # We avoid the memory-intensive task of inferring dtypes\n",
    "        dtypes_dict = {'id': 'int32',\n",
    "                      'date': 'str',\n",
    "                      'item_nbr': 'int32',\n",
    "                      'store_nbr': 'int8', # there are only 54 stores\n",
    "                      'unit_sales': 'float64', # float64 since some values overflow float32\n",
    "                      'onpromotion': 'float64'}\n",
    "\n",
    "        # We read once from csv then from feather (much faster)\n",
    "        if not os.path.exists(f'{directory}/train.feather'):\n",
    "            train_df = pd.read_csv(f'{directory}/train.csv',\n",
    "                                  dtype=dtypes_dict,\n",
    "                                  parse_dates=['date'])\n",
    "            del train_df['id']\n",
    "            train_df.reset_index(drop=True, inplace=True)\n",
    "            train_df.to_feather(f'{directory}/train.feather')\n",
    "            print(\"saved train.csv to train.feather for fast access\")\n",
    "\n",
    "        items = pd.read_csv(f'{directory}/items.csv')\n",
    "        store_info = pd.read_csv(f'{directory}/stores.csv')\n",
    "        store_info['country'] = 'Ecuador'\n",
    "\n",
    "        # Change dtype for faster categorical wrangling\n",
    "        items['class'] = items['class'].astype('category')\n",
    "        items['family'] = items['family'].astype('category')        \n",
    "        for col in ['country', 'state', 'city', 'store_nbr']:\n",
    "            store_info[col] = store_info[col].astype('category')\n",
    "\n",
    "        # Test is avoided because y_true is unavailable\n",
    "        temporal = pd.read_feather(f'{directory}/train.feather')\n",
    "        test = pd.read_csv(f'{directory}/test.csv', parse_dates=['date'])\n",
    "        oil = pd.read_csv(f'{directory}/oil.csv', parse_dates=['date'])\n",
    "        holidays = pd.read_csv(f'{directory}/holidays_events.csv', parse_dates=['date'])\n",
    "        transactions = pd.read_csv(f'{directory}/transactions.csv', parse_dates=['date'])\n",
    "\n",
    "        temporal['open'] = 1\n",
    "        temporal['open'] = temporal['open'].astype('float32')\n",
    "\n",
    "        return temporal, oil, items, store_info, holidays, transactions, test\n",
    "\n",
    "    @staticmethod\n",
    "    def _load_raw_group_data(directory, group, verbose=False):\n",
    "        \"\"\" Load raw group data.\n",
    "\n",
    "        Reads, filters, and sorts the Favorita subset dataset.\n",
    "\n",
    "        **Parameters:**<br>\n",
    "        `directory`: str, Directory where data will be downloaded.<br>\n",
    "        `group`: str, dataset group name in 'Favorita200', 'Favorita500', 'FavoritaComplete'.<br>\n",
    "        `verbose`: bool=False, whether or not to print partial outputs.<br>\n",
    "\n",
    "        **Returns:**<br>\n",
    "        `filter_items`: ordered list with unique items identifiers in the Favorita subset.<br>\n",
    "        `filter_stores`: ordered list with unique store identifiers in the Favorita subset.<br>\n",
    "        `filter_dates`: ordered list with dates in the Favorita subset.<br>\n",
    "        `raw_group_data`: dictionary with original raw Favorita pd.DataFrames, \n",
    "        temporal, oil, items, store_info, holidays, transactions. <br>\n",
    "        \"\"\"\n",
    "        if group not in FavoritaInfo.groups:\n",
    "            raise ValueError(f'Group not found: {group}. Select from Favorita200, Favorita500, FavoritaComplete.')\n",
    "\n",
    "        with CodeTimer('Read  ', verbose):\n",
    "            temporal, oil, items, store_info, holidays, transactions, test \\\n",
    "                                                    = FavoritaRawData._read_raw_data(directory=directory)\n",
    "        with CodeTimer('Filter', verbose):\n",
    "            # https://arxiv.org/pdf/2106.07630.pdf reported 1687 vs 1688 days in our wrangling\n",
    "            # we follow https://arxiv.org/abs/2110.13179 that keeps 2017> dates\n",
    "            #date_range = pd.date_range(start_date, end_date, freq='D') \n",
    "            #print('len(date_range)', len(date_range))    \n",
    "\n",
    "            temporal_dates  = temporal['date'].unique() # 1684 days\n",
    "            start_date = '2017-01-01' # min(temporal_dates)\n",
    "            end_date = max(temporal_dates)\n",
    "            #end_date = '2017-08-31'  # Last date for test in Kaggle competition\n",
    "\n",
    "            catalog_items   = set(items['item_nbr'].unique())\n",
    "            catalog_stores  = set(store_info['store_nbr'].unique())\n",
    "            catalog_dates   = pd.date_range(start=start_date, end=end_date, freq='D')\n",
    "            catalog_dates   = set(catalog_dates.values.astype('datetime64[ns]'))\n",
    "\n",
    "            temporal_dates  = set(temporal_dates)\n",
    "            temporal_items  = set(temporal['item_nbr'].unique())\n",
    "            temporal_stores = set(temporal['store_nbr'].unique())\n",
    "\n",
    "            filter_dates = list(catalog_dates)\n",
    "            filter_items  = list(catalog_items.intersection(temporal_items))\n",
    "            filter_stores = list(catalog_stores.intersection(temporal_stores))\n",
    "\n",
    "            if group=='Favorita200':\n",
    "                #filter_items = filter_items[:200]\n",
    "                np.random.seed(1)\n",
    "                filter_items = np.random.choice(filter_items, size=200, replace=False)\n",
    "\n",
    "            elif group=='Favorita500':\n",
    "                #filter_items = filter_items[:500]\n",
    "                np.random.seed(1)\n",
    "                filter_items = np.random.choice(filter_items, size=500, replace=False)\n",
    "\n",
    "            filter_items.sort()\n",
    "            filter_stores.sort()\n",
    "            filter_dates.sort()\n",
    "\n",
    "            # Filter\n",
    "            oil          = oil[(oil['date'] >= start_date) & (oil['date'] < end_date)]\n",
    "            items        = items[items.item_nbr.isin(filter_items)]\n",
    "            store_info   = store_info[store_info.store_nbr.isin(filter_stores)]\n",
    "            holidays     = holidays[(holidays['date'] >= start_date) & (holidays['date'] < end_date)]\n",
    "            transactions = transactions[(transactions['date'] >= start_date) & (transactions['date'] < end_date)]\n",
    "            transactions = transactions[transactions.store_nbr.isin(filter_stores)]\n",
    "\n",
    "            temporal     = temporal[temporal.item_nbr.isin(filter_items)]\n",
    "            temporal     = temporal[temporal.store_nbr.isin(filter_stores)]\n",
    "\n",
    "        with CodeTimer('Sort  ', verbose):\n",
    "            # New store_nbr, sorted by hierarchy, for R benchmarks\n",
    "            store_info = store_info.sort_values(by=['state', 'city', 'store_nbr'])\n",
    "            store_info['new_store_nbr'] = np.arange(len(store_info))\n",
    "\n",
    "            # Share the new store id with temporal and transactions data\n",
    "            new_store_nbrs = store_info[['store_nbr', 'new_store_nbr']]\n",
    "            transactions   = transactions.merge(new_store_nbrs, on=['store_nbr'], how='left')\n",
    "            temporal       = temporal.merge(new_store_nbrs, on=['store_nbr'], how='left')\n",
    "\n",
    "            # Overwrite the store_nbr ids\n",
    "            del temporal['store_nbr'], transactions['store_nbr'], store_info['store_nbr']\n",
    "            temporal['store_nbr']     = temporal['new_store_nbr']\n",
    "            transactions['store_nbr'] = transactions['new_store_nbr']\n",
    "            store_info['store_nbr']   = store_info['new_store_nbr']\n",
    "            del temporal['new_store_nbr'], transactions['new_store_nbr'], store_info['new_store_nbr']\n",
    "\n",
    "            # Final Sort\n",
    "            temporal     = temporal.sort_values(by=['item_nbr', 'store_nbr', 'date'])\n",
    "            oil          = oil.sort_values(by=['date'])\n",
    "            items        = items.sort_values(by=['item_nbr'])\n",
    "            store_info   = store_info.sort_values(by=['store_nbr'])\n",
    "            transactions = transactions.sort_values(by=['store_nbr', 'date'])\n",
    "\n",
    "        raw_group_data = dict(\n",
    "            temporal=temporal, \n",
    "            oil=oil, \n",
    "            items=items, \n",
    "            store_info=store_info, \n",
    "            holidays=holidays, \n",
    "            transactions=transactions, \n",
    "            test=test,\n",
    "        )\n",
    "\n",
    "        return filter_items, filter_stores, filter_dates, raw_group_data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "show_doc(FavoritaRawData, title_level=4)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "show_doc(FavoritaRawData._load_raw_group_data, title_level=4)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Favorita Raw Usage example"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#| eval: false\n",
    "from datasetsforecast.favorita import FavoritaRawData\n",
    "\n",
    "verbose = True\n",
    "group = 'Favorita200' # 'Favorita500', 'FavoritaComplete'\n",
    "directory = './data/favorita' # directory = f's3://favorita'\n",
    "\n",
    "filter_items, filter_stores, filter_dates, raw_group_data = \\\n",
    "    FavoritaRawData._load_raw_group_data(directory=directory, group=group, verbose=verbose)\n",
    "n_items  = len(filter_items)\n",
    "n_stores = len(filter_stores)\n",
    "n_dates  = len(filter_dates)\n",
    "\n",
    "print('\\n')\n",
    "print('n_stores: \\t', n_stores)\n",
    "print('n_items: \\t', n_items)\n",
    "print('n_dates: \\t', n_dates)\n",
    "print('n_items * n_dates: \\t\\t',n_items * n_dates)\n",
    "print('n_items * n_stores: \\t\\t',n_items * n_stores)\n",
    "print('n_items * n_dates * n_stores: \\t', n_items * n_dates * n_stores)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### FavoritaData"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#| export\n",
    "class FavoritaData:\n",
    "    \"\"\" Favorita Data\n",
    "\n",
    "    The processed Favorita grocery dataset contains daily item sales history with additional\n",
    "    information on promotions, items, stores, and holidays. It contains 371,312 series from\n",
    "    January 2013 to August 2017, with a geographic hierarchy of states, cities, and stores.\n",
    "    This wrangling matches that of the DPMN paper.\n",
    "\n",
    "    - [Kin G. Olivares, O. Nganba Meetei, Ruijun Ma, Rohan Reddy, Mengfei Cao, Lee Dicker (2022).\"Probabilistic Hierarchical Forecasting with Deep Poisson Mixtures\". International Journal Forecasting, special issue.](https://doi.org/10.1016/j.ijforecast.2023.04.007)\n",
    "    \"\"\"\n",
    "    @staticmethod\n",
    "    def _get_static_data(filter_items, filter_stores, items, store_info, temporal, verbose=False):\n",
    "        with CodeTimer('static_bottom', verbose):\n",
    "            # Create balanced item x store interaction\n",
    "            balanced_prod = numpy_balance(filter_items, filter_stores)\n",
    "            item_store_df = pd.DataFrame(balanced_prod, columns=['item_nbr', 'store_nbr'])\n",
    "            item_store_df['unique_id'] = np.arange(len(item_store_df))\n",
    "\n",
    "            # Create dummy variable distinguishing original series from those\n",
    "            # introduced by the balance procedure, and set the main unique_id index\n",
    "            idxs = temporal[['item_nbr', 'store_nbr']].values\n",
    "            unique_idxs = np.unique(idxs, axis = 0)\n",
    "\n",
    "            original_series_df = pd.DataFrame(unique_idxs,\n",
    "                                              columns=['item_nbr', 'store_nbr'])\n",
    "            original_series_df['is_original'] = 1\n",
    "            item_store_df = item_store_df.merge(original_series_df,\n",
    "                                                on=['item_nbr', 'store_nbr'], how='left')\n",
    "            item_store_df['is_original'] = item_store_df['is_original'].fillna(0)\n",
    "\n",
    "            # Regional Static Variables\n",
    "            # Adding prefix to avoid categorical hash collision\n",
    "            hier_df = store_info[['store_nbr', 'country', 'state', 'city']].copy()\n",
    "            hier_df['state'] = 'state_['+ hier_df['state'].astype(str) + ']'\n",
    "            hier_df['city'] = 'city_['+ hier_df['city'].astype(str) + ']'\n",
    "            static_bottom = nested_one_hot_encoding(hier_df, index_col='store_nbr')\n",
    "            Agg = static_bottom.values.T\n",
    "            S = np.concatenate([Agg, np.eye(len(filter_stores), dtype=np.float32)], axis=0)\n",
    "            \n",
    "            filter_stores_str = [f'store_[{store}]' for store in filter_stores]\n",
    "            S_df = pd.DataFrame(S, columns=filter_stores_str,\n",
    "                                index=list(static_bottom.columns)+filter_stores_str)\n",
    "\n",
    "            # Visualize geographic hierarchical aggregation matrix\n",
    "            if verbose:\n",
    "                plt.figure(num=1, figsize=(7, 3), dpi=80, facecolor='w')\n",
    "                plt.spy(Agg.T)\n",
    "                plt.show()\n",
    "\n",
    "            static_bottom_columns = static_bottom.columns\n",
    "            static_bottom = np.expand_dims(static_bottom, axis=0)\n",
    "            static_bottom = np.repeat(static_bottom, repeats=len(filter_items), axis=0)\n",
    "            static_bottom = static_bottom.reshape(-1, static_bottom.shape[-1])\n",
    "            static_bottom = pd.DataFrame(static_bottom, columns=static_bottom_columns)\n",
    "\n",
    "            static_items_bottom  = np.repeat(np.array(filter_items), len(filter_stores))\n",
    "            static_stores_bottom = np.tile(np.array(filter_stores), len(filter_items))\n",
    "            static_bottom['item_nbr']  = static_items_bottom\n",
    "            static_bottom['store_nbr'] = static_stores_bottom\n",
    "\n",
    "            static_bottom = static_bottom.merge(item_store_df,\n",
    "                                            on=['item_nbr', 'store_nbr'], how='left')\n",
    "\n",
    "        with CodeTimer('static_agg', verbose):\n",
    "            static_agg = one_hot_encoding(items, index_col='item_nbr')\n",
    "\n",
    "            if verbose:\n",
    "                plt.figure(num=1, figsize=(7, 3), dpi=80, facecolor='w')\n",
    "                plt.spy(static_agg.values.T)\n",
    "                plt.show()\n",
    "\n",
    "            # Add loss weights: 1.25 for perishable items, 1.0 otherwise\n",
    "            # https://www.kaggle.com/c/favorita-grocery-sales-forecasting/overview/evaluation\n",
    "            static_agg['prob'] = np.ones(len(static_agg)) + 0.25 * static_agg[\"perishable_[1]\"]\n",
    "            static_bottom['prob'] = np.repeat(static_agg['prob'].values, len(filter_stores))\n",
    "\n",
    "        return S_df, item_store_df, static_agg, static_bottom\n",
    "\n",
    "    @staticmethod\n",
    "    def _get_temporal_bottom(temporal, item_store_df, filter_dates, verbose=False):\n",
    "        with CodeTimer('temporal_bottom', verbose):\n",
    "            #-------------------- with CodeTimer('Temporal Balance') --------------------#\n",
    "            # Two stage fast numpy balance, \n",
    "            # for memory and computational efficiency\n",
    "            #item_nbrs  = temporal['item_nbr'].unique()\n",
    "            #store_nbrs = temporal['store_nbr'].unique()\n",
    "            #dates      = temporal['date'].unique()\n",
    "\n",
    "            n_items  = len(item_store_df['item_nbr'].unique())\n",
    "            n_stores = len(item_store_df['store_nbr'].unique())\n",
    "            n_dates  = len(filter_dates)\n",
    "\n",
    "            unique_ids = item_store_df['unique_id'].values\n",
    "            balanced_prod = numpy_balance(unique_ids, filter_dates)\n",
    "            balanced_df   = pd.DataFrame(balanced_prod, columns=['unique_id', 'date'])\n",
    "            balanced_df['date'] = balanced_df['date'].astype(temporal['date'].dtype)\n",
    "\n",
    "            zfill_cols  = ['unit_sales'] # the 'open' variable from the TFT-Google preprocessing is excluded as unreliable\n",
    "            ffill_cols  = ['onpromotion']\n",
    "            filter_cols = ['item_nbr', 'store_nbr', 'date']\n",
    "            filter_cols = filter_cols + zfill_cols + ffill_cols\n",
    "            temporal_df = temporal.filter(items=filter_cols)\n",
    "\n",
    "            #-------------------- with CodeTimer('Temporal Merge'): --------------------#\n",
    "            # Two stage merge with balanced data\n",
    "            item_store_df = item_store_df[['unique_id', 'item_nbr', 'store_nbr', 'is_original']]\n",
    "            item_store_df.set_index(['unique_id'], inplace=True)\n",
    "            balanced_df.set_index(['unique_id'], inplace=True)\n",
    "            balanced_df = balanced_df.merge(item_store_df, how='left',\n",
    "                                            left_on=['unique_id'],\n",
    "                                            right_index=True).reset_index()\n",
    "            #check_nans(balanced_df)\n",
    "            item_store_df = item_store_df.reset_index()\n",
    "\n",
    "            temporal_df.set_index(['item_nbr', 'store_nbr', 'date'], inplace=True)\n",
    "            balanced_df.set_index(['item_nbr', 'store_nbr', 'date'], inplace=True)\n",
    "            balanced_df = balanced_df.merge(temporal_df, how='left',\n",
    "                                            left_on=['item_nbr', 'store_nbr', 'date'],\n",
    "                                            right_index=True).reset_index()\n",
    "            #check_nans(balanced_df)\n",
    "            del temporal_df, balanced_prod\n",
    "            gc.collect()\n",
    "\n",
    "            #-------------------- with CodeTimer('ZFill Data'): --------------------#\n",
    "            for col in zfill_cols:\n",
    "                balanced_df[col] = balanced_df[col].fillna(0)\n",
    "            #check_nans(balanced_df)\n",
    "\n",
    "            #-------------------- with CodeTimer('FFill Data'): --------------------#\n",
    "            for col in ffill_cols:\n",
    "                # Fast numpy vectorized ffill, requires balanced dataframe\n",
    "                col_values = balanced_df[col].astype('float32').values\n",
    "                col_values = col_values.reshape(n_items * n_stores, n_dates)\n",
    "                col_values = numpy_ffill(col_values)\n",
    "                col_values = numpy_bfill(col_values)\n",
    "                balanced_df[col] = col_values.flatten()\n",
    "                balanced_df[col] = balanced_df[col].fillna(0)\n",
    "            #check_nans(balanced_df)\n",
    "        \n",
    "        # Rename variables for StatsForecast/NeuralForecast compatibility\n",
    "        balanced_df.rename(columns={\"date\": \"ds\", \"unit_sales\": \"y\"}, inplace=True)\n",
    "\n",
    "        return balanced_df\n",
    "\n",
    "    @staticmethod\n",
    "    def _get_temporal_agg(filter_items, filter_stores, filter_dates,\n",
    "                          oil, holidays, transactions,\n",
    "                          temporal_bottom, verbose=False):\n",
    "\n",
    "        # Copy to avoid overwriting original\n",
    "        oil = oil.copy()\n",
    "        holidays = holidays.copy()\n",
    "        transactions = transactions.copy()\n",
    "\n",
    "        with CodeTimer('temporal_agg', verbose):\n",
    "            normalizer  = preprocessing.StandardScaler()\n",
    "\n",
    "            #-------------------- with CodeTimer('1. Temporal'): --------------------#\n",
    "            # National sales per item\n",
    "            balanced_prod = numpy_balance(filter_items, filter_dates)\n",
    "            balanced_df = pd.DataFrame(balanced_prod, columns=['item_nbr', 'date'])\n",
    "            balanced_df['item_nbr'] = balanced_df['item_nbr'].astype(filter_items[0].dtype)\n",
    "            balanced_df['date'] = balanced_df['date'].astype(filter_dates[0].dtype)\n",
    "\n",
    "            # collapse store dimension -> national\n",
    "            #unit_sales  = temporal_bottom[['unit_sales']].values\n",
    "            unit_sales  = temporal_bottom[['y']].values\n",
    "            unit_sales  = unit_sales.reshape(len(filter_items), len(filter_stores), len(filter_dates), 1)\n",
    "            unit_sales  = np.sum(unit_sales, axis=1)\n",
    "            balanced_df['y'] = unit_sales.reshape(-1)\n",
    "\n",
    "            temporal_agg = balanced_df\n",
    "\n",
    "            #-------------------- with CodeTimer('2. Oil'): --------------------#\n",
    "            balanced_df = pd.DataFrame({'date': filter_dates})\n",
    "            balanced_df = balanced_df.merge(oil, on='date', how='left')\n",
    "            #check_nans(balanced_df)\n",
    "\n",
    "            balanced_df['dcoilwtico'] = balanced_df['dcoilwtico'].ffill()\n",
    "            balanced_df['dcoilwtico'] = balanced_df['dcoilwtico'].bfill()\n",
    "\n",
    "            #check_nans(balanced_df)\n",
    "            oil_agg = balanced_df\n",
    "            oil_agg['dcoilwtico'] = normalizer.fit_transform(oil_agg['dcoilwtico'].values[:,None])\n",
    "\n",
    "            if verbose:\n",
    "                plt.figure(num=1, figsize=(7, 3), dpi=80, facecolor='w')\n",
    "                plt.plot(balanced_df['date'], balanced_df['dcoilwtico'])\n",
    "                plt.grid()\n",
    "                plt.ylabel('Oil Price')\n",
    "                plt.xlabel('Date')\n",
    "                plt.show()\n",
    "                plt.close()\n",
    "\n",
    "            #-------------------- with CodeTimer('3. Holidays'): --------------------#\n",
    "            # Calendar Variables\n",
    "            calendar = pd.DataFrame({'date': filter_dates})\n",
    "            calendar['day_of_week']  = calendar['date'].dt.dayofweek\n",
    "            calendar['day_of_month'] = calendar['date'].dt.day\n",
    "            calendar['month']        = calendar['date'].dt.month\n",
    "\n",
    "            calendar['day_of_week']  = calendar['day_of_week'].astype('float64')\n",
    "            calendar['day_of_month'] = calendar['day_of_month'].astype('float64')\n",
    "            calendar['month']        = calendar['month'].astype('float64')\n",
    "\n",
    "            calendar['day_of_week']  = normalizer.fit_transform(calendar['day_of_week'].values[:,None])\n",
    "            calendar['day_of_month'] = normalizer.fit_transform(calendar['day_of_month'].values[:,None])\n",
    "            calendar['month'] = normalizer.fit_transform(calendar['month'].values[:,None])\n",
    "\n",
    "            if verbose:\n",
    "                plt.figure(num=1, figsize=(7, 3), dpi=80, facecolor='w')\n",
    "                plt.plot(calendar['day_of_week'], label='day_of_week')\n",
    "                plt.plot(calendar['day_of_month'], label='day_of_month')\n",
    "                plt.plot(calendar['month'], label='month')\n",
    "                plt.legend()\n",
    "                plt.grid()\n",
    "                plt.ylabel('Calendar Variables')\n",
    "                plt.xlabel('Date')\n",
    "                plt.show()\n",
    "                plt.close()\n",
    "\n",
    "            # Holiday variables\n",
    "            hdays = holidays[holidays['transferred']==False].copy()\n",
    "            hdays.rename(columns={'type': 'holiday_type'}, inplace=True)\n",
    "\n",
    "            national_hdays = hdays[hdays['locale']=='National']    \n",
    "            national_hdays = national_hdays[national_hdays.holiday_type.isin(['Holiday', 'Transfer'])]\n",
    "            national_hdays = make_holidays_distance_df(national_hdays, filter_dates)\n",
    "\n",
    "            calendar_agg = calendar.merge(national_hdays, on=['date'], how='left')\n",
    "\n",
    "            if verbose:\n",
    "                # Plot to see calendar variables, depending on filter_dates some holidays are missing\n",
    "                print('calendar_agg.columns: \\n', calendar_agg.columns)\n",
    "                # plt.plot(calendar_agg['dist2_[Navidad]'], label='Christmas')\n",
    "                # plt.plot(calendar_agg['dist2_[Independencia de Cuenca]'], label='Independence')\n",
    "                # plt.plot(calendar_agg['dist2_[Primer dia del ano]'], label='New Year')\n",
    "                # plt.legend()\n",
    "                # plt.grid()\n",
    "                # plt.ylabel('Holiday Distance (days)')\n",
    "                # plt.xlabel('Date')\n",
    "                # plt.show()\n",
    "                # plt.close()\n",
    "\n",
    "            #-------------------- with CodeTimer('4. Transactions'): --------------------#\n",
    "            # 'Transactions Balance'\n",
    "            # Fast numpy balance\n",
    "            balanced_prod = numpy_balance(filter_stores, filter_dates)\n",
    "            balanced_df   = pd.DataFrame(balanced_prod, columns=['store_nbr', 'date'])\n",
    "            balanced_df['date'] = balanced_df['date'].astype('datetime64[ns]')\n",
    "\n",
    "            # 'Transactions Merge'\n",
    "            # Merge with balanced data\n",
    "            transactions.set_index(['store_nbr', 'date'], inplace=True)\n",
    "            balanced_df.set_index(['store_nbr', 'date'], inplace=True)\n",
    "            transactions = balanced_df.merge(transactions, how='left',\n",
    "                                             left_on=['store_nbr', 'date'],\n",
    "                                             right_index=True).reset_index()\n",
    "            #check_nans(transactions)\n",
    "\n",
    "            transactions = transactions.sort_values(by=['store_nbr', 'date'])\n",
    "            trans_values = transactions.transactions.values\n",
    "            \n",
    "            trans_values = trans_values.reshape(len(filter_stores), len(filter_dates))\n",
    "            trans_values = numpy_ffill(trans_values)\n",
    "            trans_values = np.nan_to_num(trans_values)\n",
    "            trans_values = trans_values.T\n",
    "            trans_values = trans_values > 0\n",
    "            trans_columns = [f'transactions_store_[{x}]' for x in filter_stores]\n",
    "\n",
    "            transactions_agg = pd.DataFrame(trans_values, columns=trans_columns)\n",
    "            transactions_agg['date'] = filter_dates\n",
    "\n",
    "            # for x in STORES:\n",
    "            #     transactions_agg[f'transactions_store_[{x}]'] = \\\n",
    "            #           normalizer.fit_transform(transactions_agg[f'transactions_store_[{x}]'].values[:,None])\n",
    "\n",
    "            if verbose:\n",
    "                plt.figure(num=1, figsize=(7, 3), dpi=80, facecolor='w')\n",
    "                plt.plot(transactions_agg['transactions_store_[45]'], label='transactions_store_[45]')\n",
    "                plt.plot(transactions_agg['transactions_store_[52]'], label='transactions_store_[52]')\n",
    "                plt.grid()\n",
    "                plt.ylabel('Total store transactions')\n",
    "                plt.xlabel('Date')\n",
    "                plt.legend()\n",
    "                plt.show()\n",
    "                plt.close()\n",
    "\n",
    "            del balanced_prod, balanced_df, oil, holidays, transactions\n",
    "            gc.collect()\n",
    "\n",
    "            #-------------------- with CodeTimer('5. temporal_agg'): --------------------#\n",
    "            #print(\"1. temporal_agg.shape\", temporal_agg.shape)\n",
    "            #print(\"2. oil_agg.shape\", oil_agg.shape)\n",
    "            #print(\"3. calendar_agg.shape\", calendar_agg.shape)\n",
    "            #print(\"4. transactions_agg.shape\", transactions_agg.shape)\n",
    "            #print(\"\\n\\n\")\n",
    "            #print(\"1. temporal_agg.dtypes \\n\", temporal_agg.dtypes, \"\\n\")\n",
    "            #print(\"2. oil_agg.dtypes \\n\", oil_agg.dtypes, \"\\n\")\n",
    "            #print(\"3. calendar_agg.dtypes \\n\", calendar_agg.dtypes, \"\\n\")\n",
    "            #print(\"4. transactions_agg.dtypes \\n\", transactions_agg.dtypes, \"\\n\")\n",
    "\n",
    "            temporal_agg.set_index(['date'], inplace=True)\n",
    "            oil_agg.set_index(['date'], inplace=True)\n",
    "            calendar_agg.set_index(['date'], inplace=True)\n",
    "            transactions_agg.set_index(['date'], inplace=True)\n",
    "\n",
    "            # Compile national aggregated data\n",
    "            temporal_agg = temporal_agg.merge(oil_agg, how='left', left_on=['date'],\n",
    "                                        right_index=True)#.reset_index()\n",
    "            temporal_agg = temporal_agg.merge(calendar_agg, how='left', left_on=['date'],\n",
    "                                        right_index=True)#.reset_index()\n",
    "            temporal_agg = temporal_agg.merge(transactions_agg, how='left', left_on=['date'],\n",
    "                                        right_index=True)#.reset_index()\n",
    "            temporal_agg = temporal_agg.reset_index()\n",
    "            #check_nans(temporal_agg)\n",
    "\n",
    "        # Rename variables for StatsForecast/NeuralForecast compatibility\n",
    "        temporal_agg.rename(columns={\"date\": \"ds\", \"unit_sales\": \"y\"}, inplace=True)\n",
    "\n",
    "        return temporal_agg\n",
    "\n",
    "    @staticmethod\n",
    "    def load_preprocessed(directory: str, group: str, cache: bool=True, verbose: bool=False) -> \\\n",
    "        Tuple[pd.DataFrame, pd.DataFrame, pd.DataFrame, pd.DataFrame, pd.DataFrame]:\n",
    "        \"\"\" Load Favorita group datasets.\n",
    "\n",
    "        To support the exploration of more complex models, we make the full information\n",
    "        available, including bottom-level data for the items sold in Favorita stores in\n",
    "        addition to the aggregate/national level information for the items.\n",
    "\n",
    "        **Parameters:**<br>\n",
    "        `directory`: str, directory where data will be downloaded and saved.<br>\n",
    "        `group`: str, dataset group name in 'Favorita200', 'Favorita500', 'FavoritaComplete'.<br>\n",
    "        `cache`: bool=True, if `True` saves and loads preprocessed data to/from disk.<br>\n",
    "        `verbose`: bool=False, whether or not to print partial outputs.<br>\n",
    "\n",
    "        **Returns:**<br>\n",
    "        `static_agg`: pd.DataFrame, with static variables of aggregate level series.<br>\n",
    "        `static_bottom`: pd.DataFrame, with static variables of bottom level series.<br>\n",
    "        `temporal_agg`: pd.DataFrame, with temporal variables of aggregate level series.<br>\n",
    "        `temporal_bottom`: pd.DataFrame, with temporal variables of bottom level series.<br>\n",
    "        `S_df`: pd.DataFrame, hierarchical constraints dataframe of size (base, bottom).<br>\n",
    "        \"\"\"\n",
    "        group_path = f'{directory}/{group}'\n",
    "\n",
    "        if os.path.exists(group_path) and cache:\n",
    "            if verbose: print('Read preprocessed data to avoid unnecessary computation')\n",
    "            S_df = pd.read_csv(f'{group_path}/hier_constraints.csv', index_col=0)\n",
    "            static_agg = pd.read_csv(f'{group_path}/static_agg.csv')\n",
    "            static_bottom = pd.read_csv(f'{group_path}/static_bottom.feather')\n",
    "            temporal_agg = pd.read_feather(f'{group_path}/temporal_agg.feather')\n",
    "            temporal_bottom = pd.read_feather(f'{group_path}/temporal_bottom.feather')\n",
    "            return static_agg, static_bottom, temporal_agg, temporal_bottom, S_df\n",
    "\n",
    "        else:\n",
    "            filter_items, filter_stores, filter_dates, raw_group_data = \\\n",
    "                FavoritaRawData._load_raw_group_data(directory=directory, group=group, verbose=verbose)\n",
    "\n",
    "            S_df, item_store_df, static_agg, static_bottom = \\\n",
    "                              FavoritaData._get_static_data(filter_items=filter_items, \n",
    "                                                        filter_stores=filter_stores,\n",
    "                                                        items=raw_group_data['items'], \n",
    "                                                        store_info=raw_group_data['store_info'], \n",
    "                                                        temporal=raw_group_data['temporal'], \n",
    "                                                        verbose=verbose)\n",
    "\n",
    "            temporal_bottom = FavoritaData._get_temporal_bottom(temporal=raw_group_data['temporal'],\n",
    "                                                        item_store_df=item_store_df,\n",
    "                                                        filter_dates=filter_dates,\n",
    "                                                        verbose=verbose)\n",
    "\n",
    "            temporal_agg = FavoritaData._get_temporal_agg(filter_items=filter_items,\n",
    "                                                        filter_stores=filter_stores,\n",
    "                                                        filter_dates=filter_dates,\n",
    "                                                        oil=raw_group_data['oil'],\n",
    "                                                        holidays=raw_group_data['holidays'],\n",
    "                                                        transactions=raw_group_data['transactions'],\n",
    "                                                        temporal_bottom=temporal_bottom, verbose=verbose)\n",
    "            \n",
    "        del raw_group_data\n",
    "        gc.collect()\n",
    "\n",
    "        if not os.path.exists(group_path):\n",
    "            os.makedirs(group_path)\n",
    "\n",
    "        S_df.to_csv(f'{group_path}/hier_constraints.csv', index=True)\n",
    "        item_store_df.to_csv(f'{group_path}/item_store.csv')\n",
    "\n",
    "        static_agg.to_csv(f'{group_path}/static_agg.csv', index=False)\n",
    "        static_bottom.to_csv(f'{group_path}/static_bottom.feather', index=False)\n",
    "\n",
    "        temporal_bottom.to_feather(f'{group_path}/temporal_bottom.feather')\n",
    "        temporal_agg.to_feather(f'{group_path}/temporal_agg.feather')\n",
    "\n",
    "        return static_agg, static_bottom, temporal_agg, temporal_bottom, S_df\n",
    "    \n",
    "    @staticmethod\n",
    "    def load(directory: str, group: str, cache: bool=True, verbose: bool=False):\n",
    "        \"\"\"\n",
    "        Load Favorita forecasting benchmark dataset.\n",
    "\n",
    "        In contrast with other hierarchical datasets, this dataset contains a geographic\n",
    "        hierarchy for each individual grocery item series, identified by the 'item_id' column.\n",
    "        The geographic hierarchy is captured by the 'hier_id' column.\n",
    "\n",
    "        For this reason minor wrangling is needed to adapt it for use with [`HierarchicalForecast`](https://github.com/Nixtla/hierarchicalforecast),\n",
    "        and [`StatsForecast`](https://github.com/Nixtla/statsforecast) libraries.\n",
    "\n",
    "        **Parameters:**<br>\n",
    "        `directory`: str, directory where data will be downloaded and saved.<br>\n",
    "        `group`: str, dataset group name in 'Favorita200', 'Favorita500', 'FavoritaComplete'.<br>\n",
    "        `cache`: bool=True, if `True` saves and loads preprocessed data to/from disk.<br>\n",
    "        `verbose`: bool=False, whether or not to print partial outputs.<br>\n",
    "\n",
    "        **Returns:**<br>\n",
    "        `Y_df`: pd.DataFrame, target base time series with columns ['item_id', 'hier_id', 'ds', 'y'].<br>\n",
    "        `S_df`: pd.DataFrame, hierarchical constraints dataframe of size (base, bottom).<br>\n",
    "        `tags`: dict, mapping each hierarchical level name to its `S_df` row identifiers.<br>\n",
    "        \"\"\"\n",
    "        # Load preprocessed data\n",
    "        _, _, _, temporal_bottom, S_df = \\\n",
    "            FavoritaData.load_preprocessed(directory=directory, group=group,\n",
    "                                           cache=cache, verbose=verbose)\n",
    "\n",
    "        stores    = temporal_bottom.store_nbr.unique()\n",
    "        items     = temporal_bottom.item_nbr.unique()\n",
    "        dates     = temporal_bottom.ds.unique()\n",
    "\n",
    "        cls_group = FavoritaInfo[group]\n",
    "        tags = dict(zip(cls_group.tags_names, get_levels_from_S_df(S_df)))\n",
    "\n",
    "        # Apply hierarchical aggregation\n",
    "        # [n_items, n_stores, n_time] -> [n_items, n_hier, n_time]\n",
    "        Y_bottom  = temporal_bottom['y'].values\n",
    "        Y_bottom  = Y_bottom.reshape((len(items), len(stores), len(dates)))\n",
    "        Y_hier = np.einsum('ist,sh->iht', Y_bottom, S_df.values.T)\n",
    "\n",
    "        # Create hierarchical series dataframe\n",
    "        item_id = np.repeat(items, len(S_df) * len(dates))\n",
    "        hier_id = np.tile(np.repeat(S_df.index, len(dates)), len(items))\n",
    "        ds = np.tile(np.tile(dates, len(S_df)), len(items))\n",
    "\n",
    "        Y_df = pd.DataFrame(dict(\n",
    "                item_id = item_id, hier_id = hier_id,\n",
    "                ds = ds, y = Y_hier.flatten()))\n",
    "        \n",
    "        return Y_df, S_df, tags"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "show_doc(FavoritaData, title_level=4)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "show_doc(FavoritaData.load_preprocessed, title_level=4)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "show_doc(FavoritaData.load, title_level=4)"
   ]
  },
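  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Inside `load`, the bottom-level store series are collapsed into every geographic level with a single `np.einsum` contraction against the summing matrix `S_df`. A minimal sketch of that contraction, using a toy summing matrix in place of the real `S_df`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy example: 2 items, 3 stores, 4 dates.\n",
    "# S stacks one national (all-ones) row on top of a store identity block.\n",
    "import numpy as np\n",
    "\n",
    "Y_bottom = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)\n",
    "S = np.concatenate([np.ones((1, 3)), np.eye(3)], axis=0)  # (n_hier, n_stores)\n",
    "\n",
    "# [n_items, n_stores, n_time] x [n_stores, n_hier] -> [n_items, n_hier, n_time]\n",
    "Y_hier = np.einsum('ist,sh->iht', Y_bottom, S.T)\n",
    "assert Y_hier.shape == (2, 4, 4)\n",
    "assert np.allclose(Y_hier[:, 0, :], Y_bottom.sum(axis=1))  # national row = sum over stores"
   ]
  },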
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#| hide\n",
    "#| eval: false\n",
    "verbose = True\n",
    "group = 'Favorita200'\n",
    "# group = 'Favorita500'\n",
    "# group = 'FavoritaComplete'\n",
    "directory = './data/favorita'\n",
    "# directory = f's3://favorita'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#| hide\n",
    "#| eval: false\n",
    "filter_items, filter_stores, filter_dates, raw_group_data = \\\n",
    "    FavoritaRawData._load_raw_group_data(directory=directory, group=group, verbose=verbose)\n",
    "\n",
    "S_df, item_store_df, static_agg, static_bottom = \\\n",
    "                    FavoritaData._get_static_data(filter_items=filter_items, \n",
    "                                                  filter_stores=filter_stores,\n",
    "                                                  items=raw_group_data['items'], \n",
    "                                                  store_info=raw_group_data['store_info'], \n",
    "                                                  temporal=raw_group_data['temporal'], \n",
    "                                                  verbose=verbose)"
   ]
  },
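  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`_get_static_data` balances items and stores into a full cross product in item-major order with `np.repeat` and `np.tile` (the same pattern the `numpy_balance` helper produces for the index columns). A small sketch of that indexing pattern:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "items  = np.array([10, 20])\n",
    "stores = np.array([1, 2, 3])\n",
    "\n",
    "# Item-major cross product: every (item, store) pair appears exactly once\n",
    "item_col  = np.repeat(items, len(stores))  # [10 10 10 20 20 20]\n",
    "store_col = np.tile(stores, len(items))    # [ 1  2  3  1  2  3]\n",
    "pairs = np.stack([item_col, store_col], axis=1)\n",
    "pairs"
   ]
  },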
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#| hide\n",
    "#| eval: false\n",
    "static_agg.head(5)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#| hide\n",
    "#| eval: false\n",
    "static_bottom.head(5)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#| hide\n",
    "#| eval: false\n",
    "temporal_bottom = FavoritaData._get_temporal_bottom(temporal=raw_group_data['temporal'],\n",
    "                                                    item_store_df=item_store_df,\n",
    "                                                    filter_dates=filter_dates,\n",
    "                                                    verbose=verbose)\n",
    "temporal_bottom.head(5)"
   ]
  },
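  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`_get_temporal_bottom` forward-fills `onpromotion` with a vectorized numpy pass over the balanced `[series, dates]` matrix instead of a per-series loop. A hedged re-implementation of the idea behind the `numpy_ffill` helper (the helper itself is defined elsewhere in this module; this sketch is for illustration only):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def ffill_2d(arr):\n",
    "    # For each row, carry the column index of the last non-NaN entry\n",
    "    # forward with a cumulative maximum, then gather those entries.\n",
    "    idx = np.where(np.isnan(arr), 0, np.arange(arr.shape[1]))\n",
    "    np.maximum.accumulate(idx, axis=1, out=idx)\n",
    "    return arr[np.arange(arr.shape[0])[:, None], idx]\n",
    "\n",
    "x = np.array([[np.nan, 1., np.nan, 3.],\n",
    "              [2., np.nan, np.nan, np.nan]])\n",
    "ffill_2d(x)  # leading NaNs remain and are handled by the later bfill/zero-fill"
   ]
  },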
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#| hide\n",
    "#| eval: false\n",
    "temporal_agg = FavoritaData._get_temporal_agg(filter_items=filter_items,\n",
    "                                              filter_stores=filter_stores,\n",
    "                                              filter_dates=filter_dates,\n",
    "                                              oil=raw_group_data['oil'],\n",
    "                                              holidays=raw_group_data['holidays'],\n",
    "                                              transactions=raw_group_data['transactions'],\n",
    "                                              temporal_bottom=temporal_bottom, verbose=verbose)\n",
    "temporal_agg.head(5)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# #| hide\n",
    "# #| eval: false\n",
    "# # Test the equality of created and loaded datasets columns and rows\n",
    "# static_agg1, static_bottom1, temporal_agg1, temporal_bottom1, S_df1 = \\\n",
    "#                         FavoritaData.load_preprocessed(directory=directory, group=group, cache=False)\n",
    "\n",
    "# static_agg2, static_bottom2, temporal_agg2, temporal_bottom2, S_df2 = \\\n",
    "#                         FavoritaData.load_preprocessed(directory=directory, group=group)\n",
    "\n",
    "# test_eq(len(static_agg1)+len(static_agg1.columns), \n",
    "#         len(static_agg2)+len(static_agg2.columns))\n",
    "# test_eq(len(static_bottom1)+len(static_bottom1.columns), \n",
    "#         len(static_bottom2)+len(static_bottom2.columns))\n",
    "\n",
    "# test_eq(len(temporal_agg1)+len(temporal_agg1.columns), \n",
    "#         len(temporal_agg2)+len(temporal_agg2.columns))\n",
    "# test_eq(len(temporal_bottom1)+len(temporal_bottom1.columns), \n",
    "#         len(temporal_bottom2)+len(temporal_bottom2.columns))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Favorita Usage Example"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#| eval: false\n",
    "# Qualitative evaluation of hierarchical data\n",
    "from datasetsforecast.favorita import FavoritaData\n",
    "from hierarchicalforecast.utils import HierarchicalPlot\n",
    "\n",
    "group = 'Favorita200' # 'Favorita500', 'FavoritaComplete'\n",
    "directory = './data/favorita'\n",
    "Y_df, S_df, tags = FavoritaData.load(directory=directory, group=group)\n",
    "\n",
    "Y_item_df = Y_df[Y_df.item_id==1916577] # 112830, 1501570, 1916577\n",
    "Y_item_df = Y_item_df.rename(columns={'hier_id': 'unique_id'})\n",
    "Y_item_df = Y_item_df.set_index('unique_id')\n",
    "del Y_item_df['item_id']\n",
    "\n",
    "hplots = HierarchicalPlot(S=S_df, tags=tags)\n",
    "hplots.plot_hierarchically_linked_series(\n",
    "    Y_df=Y_item_df, bottom_series='store_[40]',\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "python3",
   "language": "python",
   "name": "python3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
