{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Hybrid Approach -- Scenario I\n",
    "\n",
    "In this notebook, we report the code for the *hybrid* approach in our paper [[1]](#ourpaper). We consider the delay caused by the feedback loop that the mobile user equipment uses to report the channel state information (CSI) to the base station. Due to this feedback delay, the CSI available at the base station becomes outdated. In our paper, we propose a hybrid *data-driven* and *model-based* approach. The model-based part uses a FIR Wiener filter to predict the instantaneous CSI from the channel history, which we assume is available at the base station. The data-driven part is a neural network whose input is the output of the Wiener filter. This network maps the CSI to the probability of an error event for each MCS and selects the MCS that maximizes the expected spectral efficiency.\n",
    "\n",
    "![Hybrid approach overall structure](figures/hybrid_structure.png)\n",
    "*Overall hybrid approach. The past channel history $\\boldsymbol{\\psi}$ is used to optimally estimate, via the Wiener filter, the channel in effect at transmission time. The estimate $\\hat{\\boldsymbol{h}}$ is then mapped by the fully connected neural network to the corresponding set of conditional error event probabilities, i.e., $\\hat{\\rho}_1,...,\\hat{\\rho}_K$, one for each MCS available at the BS. The final step is the selection of the MCS which gives the maximum expected spectral efficiency. Each layer of the neural network is parameterized by a weight matrix $\\boldsymbol{W}$ and a bias vector $\\boldsymbol{b}$ such that the output of the $i^{th}$ layer, i.e., $\\boldsymbol{\\alpha}^{(i)}$, is given by $\\boldsymbol{\\alpha}^{(i)}=\\Phi^{(i)} ( \\boldsymbol{W}^{(i)}\\boldsymbol{\\alpha}^{(i-1)} + \\boldsymbol{b}^{(i)} )$, where $D_{i}$ is the dimension of the $i^{th}$ layer, $\\Phi^{(i)}$ is a non-linear activation function applied element-wise, $\\boldsymbol{W}^{(i)} \\in \\mathbb{R}^{D_{i}\\times D_{i-1}}$, and $\\boldsymbol{b}^{(i)} \\in \\mathbb{R}^{D_{i}}$.*\n",
    "\n",
    "Under Scenario I, we implement a different Wiener filter and train a different neural network for each combination of delay, Doppler frequency, and signal-to-noise ratio (SNR).\n",
    "\n",
    "Note that the training datasets listed in the code below are not available in the repository due to space limitations. The **training datasets can be found at**: https://kth.box.com/s/tcd7y7rg3yau75kctw3regmyns8kfkr6 in the folder *Datasets*. Alternatively, the reader can run the code in the *radio_data* folder of the repository to generate the datasets.\n",
    "\n",
    "**Note**: the training may take several hours, depending on the available computational resources, the size of the training set, the size of the network, and the number of epochs.\n",
    "\n",
    "\n",
    "<a id='ourpaper'></a> [1] \"Wireless link adaptation - a hybrid data-driven and model-based approach\", Lissy Pellaco, Vidit Saxena, Mats Bengtsson, Joakim Jaldén. Submitted to SPAWC 2020."
   ]
  },
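  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The layer equation above can be sketched directly in NumPy. This is an illustrative toy example with made-up dimensions, not part of the method; the actual network is built with Keras below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Toy dimensions for layers i-1 and i (made up for illustration)\n",
    "D_prev, D_i = 4, 3\n",
    "rng = np.random.default_rng(0)\n",
    "W = rng.normal(size=(D_i, D_prev))    # weight matrix W^(i)\n",
    "b = rng.normal(size=D_i)              # bias vector b^(i)\n",
    "alpha_prev = rng.normal(size=D_prev)  # output of layer i-1\n",
    "\n",
    "# alpha^(i) = Phi( W^(i) alpha^(i-1) + b^(i) ), with Phi = element-wise ReLU\n",
    "alpha_i = np.maximum(W @ alpha_prev + b, 0.0)\n",
    "print(alpha_i.shape)"
   ]
  },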
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Import libraries and utility functions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import time\n",
    "from keras.optimizers import Adam\n",
    "import utilities as utils\n",
    "from keras.backend.tensorflow_backend import set_session\n",
    "from keras.backend import clear_session\n",
    "import tensorflow as tf"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Flag indicating whether channel estimation noise is added\n",
    "CHANNEL_EST_NOISE = True\n",
    "# Number of OFDM subcarriers\n",
    "NROF_SUBCARRIERS = 72\n",
    "# Number of MCSs\n",
    "NROF_MCS = 29\n",
    "# Number of Wiener filter taps\n",
    "Wiener_filter_taps = 10\n",
    "# Parameters related to neural network training\n",
    "BATCH_SIZE = 32\n",
    "NROF_EPOCHS = 10\n",
    "TRAINING_FRACTION = 1\n",
    "\n",
    "# Flag to indicate if the trained models should be saved\n",
    "save_model = False"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Load the Dataset\n",
    "\n",
    "The channel dataset is a dict with the following keys:\n",
    " - 'channel'\n",
    "     - Complex channel coefficients \n",
    "     - Numpy array [ NROF_FRAMES x NROF_SUBCARRIERS x NROF_SNRS]\n",
    " - 'block_success'\n",
    "      - Binary success events (ACKs)\n",
    "      - Numpy array [ NROF_FRAMES x NROF_MCS x NROF_SNRS]\n",
    " - 'snrs_db'\n",
    "     - Evaluated average SNR values\n",
    "     - Numpy array [ NROF_SNRS ]\n",
    " - 'block_sizes'\n",
    "     - Evaluated transport block sizes\n",
    "     - Numpy array [ NROF_MCS ]\n",
    "     \n",
    "The name of a dataset, e.g., ITU_VEHICULAR_B_1000_210_111_72_5dB, is interpreted as follows:\n",
    " - channel model (ITU_VEHICULAR_B)\n",
    " - number of channel realizations per batch (1000)\n",
    " - number of batches in the dataset (210)\n",
    " - Doppler frequency in Hz, cast to an integer (111)\n",
    " - number of subcarriers (72)\n",
    " - SNR (5 dB)\n",
    " \n",
    "The **training datasets can be found at**: https://kth.box.com/s/tcd7y7rg3yau75kctw3regmyns8kfkr6 in the folder *Datasets*"
   ]
  },
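  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the loading pattern used below concrete, the following self-contained sketch builds a dummy dict with the same keys (made-up shapes, all-zero data), saves it, and reloads it. The `[()]` indexing recovers the dict from the 0-d object array returned by `np.load`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Dummy sizes, chosen only for illustration\n",
    "nrof_frames, nrof_subcarriers, nrof_mcs, nrof_snrs = 8, 72, 29, 1\n",
    "dummy = {\n",
    "    'channel': np.zeros((nrof_frames, nrof_subcarriers, nrof_snrs), dtype=np.complex128),\n",
    "    'block_success': np.zeros((nrof_frames, nrof_mcs, nrof_snrs)),\n",
    "    'snrs_db': np.array([5.0]),\n",
    "    'block_sizes': np.arange(nrof_mcs),\n",
    "}\n",
    "np.save('dummy_dataset.npy', dummy)\n",
    "\n",
    "# np.load wraps the dict in a 0-d object array; [()] unwraps it\n",
    "loaded = np.load('dummy_dataset.npy', allow_pickle=True)[()]\n",
    "print(sorted(loaded.keys()))"
   ]
  },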
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The files listed in \"file_set\" ARE NOT in the repository due to space limitations.\n",
    "# The training datasets can be found at: https://kth.box.com/s/tcd7y7rg3yau75kctw3regmyns8kfkr6 in the folder *Datasets*\n",
    "# The reader also has access to \"radio_data/Generate_Data.ipynb\", which we used to generate the training datasets.\n",
    "# N.B. for datasets with an SNR of 25 dB, numerical results show that a learning rate of 0.0001 should be used to reach convergence\n",
    "file_set=[\n",
    "            'Datasets/ITU_VEHICULAR_B_1000_210_16.67_72_5dB.npy',\n",
    "          'Datasets/ITU_VEHICULAR_B_1000_210_16.67_72_15dB.npy',\n",
    "          'Datasets/ITU_VEHICULAR_B_1000_500_16.67_72_25dB.npy',\n",
    "        ]\n",
    "# Doppler frequencies (in Hz) related to the datasets in \"file_set\"\n",
    "dopplers_set = (20.0 / 3.0) * np.array([16.67,16.67,16.67])\n",
    "\n",
    "# SNRs (in dB) related to the datasets in \"file_set\"\n",
    "snrs_set=[5,15,25]\n",
    "\n",
    "# Number of batches related to the datasets in \"file_set\"\n",
    "num_batches_per_dataset=[210,210,500]\n",
    "\n",
    "# Maximum feedback delay that we consider\n",
    "maximum_delay=9"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Build the neural network model and define the optimizer and cost function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def create_ann_model():\n",
    "    from keras.models import Sequential\n",
    "    from keras.layers import Dense, Dropout\n",
    "\n",
    "    model = Sequential()\n",
    "    model.add( Dense( 1024, \n",
    "                      input_dim = NROF_SUBCARRIERS * 2, \n",
    "                      kernel_initializer='normal', \n",
    "                      activation='relu' ) )\n",
    "\n",
    "    model.add( Dense( 512,\n",
    "                      kernel_initializer = 'normal', \n",
    "                      activation='relu' ) )\n",
    "        \n",
    "    model.add( Dense( 1024, \n",
    "                      kernel_initializer = 'normal', \n",
    "                      activation='relu' ) )\n",
    "    \n",
    "    model.add( Dense( NROF_MCS, \n",
    "                      kernel_initializer='normal', \n",
    "                      activation='sigmoid' ) )\n",
    "\n",
    "    # Compile model\n",
    "    adam = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, amsgrad=False)\n",
    "    model.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy'])  # for binary classification\n",
    " \n",
    "    return model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## FIR Wiener filter"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def apply_wiener_filtering( freq_domain_channel, snrs_dB, doppler_freq, delay = 1, N = 10 ):\n",
    "    nrof_samples, nrof_subcarriers= freq_domain_channel.shape\n",
    "    \n",
    "    filtered_channel_freq_response = np.ndarray( freq_domain_channel.shape, dtype = np.complex128 )\n",
    "\n",
    "    # Convert the SNR from dB to linear scale\n",
    "    snr = 10 ** ( 0.1 * snrs_dB )\n",
    "\n",
    "    channel_sampling_interval = 0.001\n",
    "    autocorrelation_of_reference = utils.autocorrelation( np.arange(0,N,1),\n",
    "                                                              doppler_freq,\n",
    "                                                              channel_sampling_interval )\n",
    "\n",
    "    crosscorrelation = utils.autocorrelation( np.arange(delay, N + delay, 1),\n",
    "                                                  doppler_freq,\n",
    "                                                  channel_sampling_interval )\n",
    "\n",
    "    Wiener_coeff = utils.Wiener_filter_coeff_scaled( autocorrelation_of_reference,\n",
    "                                                         crosscorrelation,\n",
    "                                                         delay,\n",
    "                                                         N,\n",
    "                                                         snr,\n",
    "                                                         True,\n",
    "                                                         doppler_freq,\n",
    "                                                         channel_sampling_interval )\n",
    "\n",
    "    for subc_index in range( nrof_subcarriers ):\n",
    "        subcarrier_response = freq_domain_channel[:, subc_index]\n",
    "\n",
    "        # Apply the Wiener filter and discard the N-1 tail samples of the full convolution\n",
    "        filtered_subc_response = np.convolve( Wiener_coeff, subcarrier_response, \"full\")\n",
    "        filtered_channel_freq_response[ :, subc_index] = filtered_subc_response[ : -N + 1 ]\n",
    "\n",
    "    return filtered_channel_freq_response"
   ]
  },
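  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The trimming of the \"full\" convolution output above can be checked on a toy signal. The moving-average taps below are a stand-in for illustration, not actual Wiener coefficients."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "N = 10                    # number of filter taps\n",
    "taps = np.ones(N) / N     # stand-in coefficients (a moving average)\n",
    "signal = np.arange(100, dtype=np.complex128)\n",
    "\n",
    "full = np.convolve(taps, signal, \"full\")  # length: len(signal) + N - 1\n",
    "trimmed = full[: -N + 1]                  # drop the N - 1 tail samples\n",
    "print(len(full), len(trimmed))            # 109 100"
   ]
  },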
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Train the neural network on the Wiener filter's output"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "start_time = time.time()\n",
    "\n",
    "# Loop over the feedback delays\n",
    "for DELAY in range(0, maximum_delay + 1):\n",
    "\n",
    "    # Loop over the SNRs\n",
    "    for i in range(0, len(snrs_set)):\n",
    "\n",
    "        DOPPLER = dopplers_set[i]\n",
    "        FILE = file_set[i]\n",
    "        SELECTED_SNR = snrs_set[i]\n",
    "        DATASET = np.load( FILE, allow_pickle = True )[()]\n",
    "        \n",
    "        config = tf.ConfigProto()\n",
    "        config.gpu_options.allow_growth = True  \n",
    "\n",
    "        sess = tf.Session( config = config )\n",
    "        set_session(sess) \n",
    "        model = create_ann_model( )\n",
    "\n",
    "        \n",
    "        for j in range(0,num_batches_per_dataset[i]):\n",
    "            channel_coeff  = []\n",
    "            block_success  = []\n",
    "\n",
    "            DATASET_channel=DATASET['channel'][:,:,j]\n",
    "            DATASET_block_success=DATASET['block_success'][:,:,j]\n",
    "\n",
    "            nrof_train_samples = int( TRAINING_FRACTION * DATASET_channel.shape[0] )\n",
    "\n",
    "\n",
    "            coeff = utils.calculate_channel_coefficients_scaled_fixed_snr( DATASET_channel[ :nrof_train_samples, :],\n",
    "                                                                         DATASET['snrs_db'][0],\n",
    "                                                                         channel_estimation_noise = CHANNEL_EST_NOISE )\n",
    "            \n",
    "            coeff_filtered = apply_wiener_filtering( coeff, \n",
    "                                                     DATASET[ 'snrs_db' ][0], \n",
    "                                                     doppler_freq = DOPPLER, \n",
    "                                                     delay = DELAY, \n",
    "                                                     N = Wiener_filter_taps )\n",
    "            \n",
    "            \n",
    "            \n",
    "            channel_coeff.append( coeff_filtered )\n",
    "            block_success.append( DATASET_block_success[ :nrof_train_samples, :] )\n",
    "\n",
    "            channel_coeff = np.vstack( channel_coeff )\n",
    "            block_success = np.vstack( block_success )\n",
    "\n",
    "            NROF_FRAMES, _ = channel_coeff.shape\n",
    "\n",
    "\n",
    "            channel_coeff_concat = np.concatenate( ( np.real( channel_coeff ), np.imag( channel_coeff ) ), axis = 1 )\n",
    "            \n",
    "            \n",
    "            if DELAY > 0:\n",
    "                train_input  =( channel_coeff_concat[ :-DELAY, : ] )\n",
    "            else:\n",
    "                train_input  =( channel_coeff_concat[ :, :]  )\n",
    "\n",
    "            train_target = ( block_success [ DELAY :, :] )\n",
    "           \n",
    "\n",
    "            train_input, train_target = utils.shuffle_data( train_input, train_target )\n",
    "\n",
    "            history = model.fit( train_input, \n",
    "                     train_target, \n",
    "                     batch_size = BATCH_SIZE, \n",
    "                     epochs     = NROF_EPOCHS, \n",
    "                     validation_split = 0.1, \n",
    "                     verbose    = 0 ) # the \"verbose\" parameter can be changed to display more about the training progress of each epoch\n",
    "            \n",
    "        file_to_save = 'Trained_models_ScenarioI/ANN_MCS_PRED_WIENER_DELAY_%d_DP_%d_SNR_%d.h5'%(DELAY,DOPPLER,SELECTED_SNR)\n",
    "        if save_model:\n",
    "            model.save( file_to_save )\n",
    "        clear_session()\n",
    "        print(\"--- %s seconds ---\" % (time.time() - start_time))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
