{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## On the suitability of deep convolutional neural networks for continental-wide downscaling of climate change projections\n",
    "### *Climate Dynamics*\n",
    "### J. Baño-Medina, R. Manzanas and J. M. Gutiérrez"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This notebook reproduces the results presented in *On the suitability of deep convolutional neural networks for continental-wide downscaling of climate change projections*, submitted to *Climate Dynamics* by *J. Baño-Medina, R. Manzanas and J. M. Gutiérrez*. That paper focuses on the suitability of statistical downscaling (SD) methods (in particular different configurations of generalized linear models (GLM) and convolutional neural networks (CNN)) for climate change applications under the perfect-prognosis approach. Throughout this notebook we deploy the code necessary to test the 3 key assumptions that have to be fullfilled for any SD method to be applied for climate change purposes (the reader is referred to the paper for more details). To do this, we build on the experimental framework developed in the experiment 2 of the [COST action VALUE](http://www.value-cost.eu/), which focuses on  producing high-resolution climate change projections of temperature and precipitation over Europe by downscaling the 12th run of the EC-Earth global climate model (GCM). The technical specifications of the machine used to run the code presented herein can be found at the end of the notebook. **Please note that about 5 days were required to run the full notebook**.\n",
    "\n",
    "**Note:** This notebook is written in the free programming language `R`(version 3.6.1) and builds on [climate4R](https://github.com/SantanderMetGroup/climate4R), a suite of `R` packages developed by the [Santander Meteorology Group](http://meteo.unican.es) for transparent climate data access, post processing (including bias correction and downscaling) and visualization. For details on climate4R (C4R hereafter), the interested reader is referred to [Iturbide et al. 2019](https://www.sciencedirect.com/science/article/pii/S1364815218303049?via%3Dihub)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Loading libraries"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The C4R libraries that are needed to run this notebook can be installed either through the `devtools` package (e.g. `devtools::install_github(\"SantanderMetGroup/loadeR\")` for `loadeR`) or with *conda* (version 1.3.0); see detailed instructions [here](https://github.com/SantanderMetGroup/climate4R). The deep learning models used in this work are implemented in [`downscaleR.keras`](https://github.com/SantanderMetGroup/downscaleR.keras), an extension of `downscaleR` which integrates *keras* in the C4R."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "options(java.parameters = \"-Xmx8g\")\n",
    "\n",
    "# loading C4R and auxiliary packages needed to run this notebook\n",
    "library(climate4R.UDG) # version 0.2.1\n",
    "library(loadeR) # version 1.6.1\n",
    "library(loadeR.2nc)\n",
    "library(transformeR) # version 1.7.4\n",
    "library(downscaleR) # version 3.3.2\n",
    "library(visualizeR) # version 1.5.1\n",
    "library(climate4R.value) # version 0.0.2 (also relies on VALUE version 2.2.1)\n",
    "library(magrittr)  # useful library to improve readability and maintainability of code\n",
    "library(gridExtra)  # plotting functionalities\n",
    "library(RColorBrewer)  # plotting functionalities\n",
    "library(sp)  # plotting functionalities             \n",
    "library(downscaleR.keras) # version 0.0.2 (relies on keras version 2.2.2 and tensorflow version 2.0.0)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In order to avoid errors while running the notebook, please set the path to your desired working directory and create three files named \"Data\", \"figures\" and \"models\" which will contain the downscaled predictions, the figures, and the trained deep models, respectively. Moreover, since we will undertake two distinct studies here (one for precipitation and another for temperature), please create two more directories named \"precip\" and \"temperature\" within \"Data\" and \"models\". "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "path = \"yourworkingdirectory\"  \n",
    "setwd(path)\n",
    "\n",
    "# creating the directories where the generated outputs will be stored\n",
    "dir.create(\"Data\")\n",
    "dir.create(\"Data/precip/\")\n",
    "dir.create(\"Data/temperature/\")\n",
    "dir.create(\"models\")\n",
    "dir.create(\"models/temperature/\")\n",
    "dir.create(\"models/precip/\")\n",
    "dir.create(\"figures\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Loading data\n",
    "\n",
    "In this section we describe how to load into your R session the 3 datasets involved in this study: ERA-Interim, E-OBS (version 14), and the 12th run of the EC-Earth. This work is framed under VALUE experiment 2, and therefore data has become publicly available by the innitiative. A specific notebook was devoted to particularly explain the different ways to access the data. In this study we rely on the [User Data Gateway (UDG)](http://meteo.unican.es/udg-tap/home), a THREDDS-based service from the Santander Climate Data Service (CDS) to load the data into our session (register [here](http://meteo.unican.es/udg-tap/signup) freely to get a user). We can log in using the `loginUDG` function from `loadeR`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "loginUDG(username = \"***\", password = \"***\") # log into the Santander CDS"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We use as predictor data the set of variables defined in VALUE's experiment 1: Temperature (ta), zonal (ua) and meridional (va) wind velocity, geopotential (z) and specific humidity (hus) at 1000, 850, 700 and 500 hPa levels."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# vector containing the labels that identify each of the \n",
    "# predictor variables used in this work: e.g. 'z@500' stands for geopotential at 500 hPa\n",
    "variables <- c(\"z@500\",\"z@700\",\"z@850\",\"z@1000\",\n",
    "               \"hus@500\",\"hus@700\",\"hus@850\",\"hus@1000\",\n",
    "               \"ta@500\",\"ta@700\",\"ta@850\",\"ta@1000\",\n",
    "               \"ua@500\",\"ua@700\",\"ua@850\",\"ua@1000\",\n",
    "               \"va@500\",\"va@700\",\"va@850\",\"va@1000\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `loadGridData` function from `loadeR` can be used to load into our `R` session all the predictor datasets we are going to need. For each dataset there is a label identifying the desired dataset (type `?UDG.datasets()` for information on the full list of datasets available at UDG). For example, ERA-Interim can be loaded passing the *ECMWF_ERA-Interim-ESD* label to the `dataset` input argument in `loadGridData`. In the following chunk of code, we use `lapply` to create a list whose elements contain each of the predictors defined in the `variables` vector. Afterwards, we use `makeMultiGrid` from `transformeR` to create a multigrid encompassing all those variables into a single C4R object. Note that the domain (latitude-longitude) and temporal period are both given by the `latLim`, `lonLim` and `years` parameters of the `loadGridData` funtion."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# ERA-Interim\n",
    "x <- lapply(variables, function(x) {\n",
    "  loadGridData(dataset = \"ECMWF_ERA-Interim-ESD\",\n",
    "               var = x,\n",
    "               lonLim = c(-10,32),\n",
    "               latLim = c(36,72), \n",
    "               years = 1979:2008)\n",
    "}) %>% makeMultiGrid()\n",
    "\n",
    "# EC-Earth (historical) --> we use interpGrid to interpolate EC-Earth's resolution (1.125º) to the ERA-Interim's resolution (2º)\n",
    "xh <- lapply(variables, function(z) {\n",
    "  loadGridData(dataset = \"CMIP5-subset_EC-EARTH_r12i1p1_historical\",\n",
    "               var = z,\n",
    "               lonLim = c(-10,32),\n",
    "               latLim = c(36,72), \n",
    "               years = 1979:2008) %>% \n",
    "    interpGrid(new.coordinates = getGrid(x))\n",
    "}) %>% makeMultiGrid()\n",
    "\n",
    "# EC-Earth (RCP8.5) --> we use interpGrid to interpolate EC-Earth's resolution (1.125º) to the ERA-Interim's resolution (2º)\n",
    "xf <- lapply(variables, function(z) {\n",
    "  loadGridData(dataset = \"CMIP5-subset_EC-EARTH_r12i1p1_rcp85\",\n",
    "               var = z,\n",
    "               lonLim = c(-10,32),\n",
    "               latLim = c(36,72), \n",
    "               years = 2071:2100) %>% \n",
    "    interpGrid(new.coordinates = getGrid(x))\n",
    "}) %>% makeMultiGrid()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. The perfect-prognosis assumption\n",
    "\n",
    "One of the key assumptions in 'perfect prognosis' downscaling is that the statistical distributions of reanalysis and GCM predictors should be compatible. To test this hypothesis, we define in the next block of code the `ksPanelPlot`function, which will be later used to apply a Kolmogorov-Smirnoff (KS) test comparing the distributions of reanalysis and GCM (historical) predictors. In particular, note that the `valueMeasure` function from `climate4R.value` is internally used to obtain both the distance score and the p-value from the KS test. Moreover, the input argument `type` allows for switching between the two types of standardization considered in this work (further details are given in the paper), 'harmonize+scaling' and 'scaling', which are applied using the `scaleGrid` function from `transformeR`. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# function to apply a KS test comparing the distribution\n",
    "# of reanalysis and GCM (historical) predictors\n",
    "ksPanelPlot <- function(x.grid, y.grid,\n",
    "                        type = c(\"harmonize+scaling\",\"scaling\"),\n",
    "                        season,\n",
    "                        vars = getVarNames(x.grid)) {\n",
    "  ks.score.list <- pval.list <- rep(list(bquote()), length(vars))\n",
    "  x.grid2 <- scaleGrid(x.grid,type = \"standardize\")\n",
    "  if (type == \"harmonize+scaling\") {\n",
    "    y.grid2 <- scaleGrid(y.grid, ref = x, \n",
    "                         type = \"center\", \n",
    "                         spatial.frame = \"gridbox\", \n",
    "                         time.frame = \"monthly\") %>%\n",
    "      scaleGrid(type = \"standardize\")\n",
    "  }\n",
    "  if (type == \"scaling\") {  \n",
    "    y.grid2 <- scaleGrid(y.grid,type = \"standardize\")\n",
    "  }\n",
    "  for (i in 1:length(vars)) {\n",
    "    # We use valueMeasure from climate4R.value to compute the KS-statistic and the p-value  \n",
    "    ks.score.list[[i]] <- valueMeasure(y = subsetGrid(y.grid, var = vars[i]) %>% subsetGrid(season = season),\n",
    "                                       x = subsetGrid(x.grid, var = vars[i]) %>% subsetGrid(season = season),\n",
    "                                       measure.code = \"ts.ks.pval\")$\"Measure\"   \n",
    "    pval.list[[i]] <- valueMeasure(y = subsetGrid(y.grid2, var = vars[i]) %>% subsetGrid(season = season),\n",
    "                                   x = subsetGrid(x.grid2, var = vars[i]) %>% subsetGrid(season = season),\n",
    "                                   measure.code = \"ts.ks.pval\")$\"Measure\" %>% climatology() %>% map.stippling(condition = \"LT\",\n",
    "                                                                                                              pch = 4,\n",
    "                                                                                                              cex = 0.4,\n",
    "                                                                                                              col = \"red\",\n",
    "                                                                                                              which = i)\n",
    "  }\n",
    "  ksmap <- do.call(\"makeMultiGrid\", ks.score.list)\n",
    "  return(list(\"map\" = ksmap, \"stippling\" = pval.list))\n",
    "}"
   ]
  },
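  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal, self-contained illustration of the test wrapped by `ksPanelPlot`, the base-`R` `ks.test` function (from the `stats` package) compares two empirical distributions and returns both the KS statistic and the associated p-value. The two toy samples below are hypothetical stand-ins for the (standardized) reanalysis and GCM predictor series at a single gridbox:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# minimal sketch of the KS test underlying ksPanelPlot, using two toy\n",
    "# samples in place of real predictor series at a single gridbox\n",
    "set.seed(1)\n",
    "reanalysis <- rnorm(1000)             # stand-in for a standardized ERA-Interim series\n",
    "gcm        <- rnorm(1000, mean = 0.1) # stand-in for a standardized EC-EARTH series\n",
    "ks <- ks.test(reanalysis, gcm)\n",
    "ks$statistic # KS distance between the two empirical distributions\n",
    "ks$p.value   # small values indicate significantly different distributions"
   ]
  },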
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, we plot maps showing the results obtained from the application of `ksPanelPlot` to ERA-Interim and EC-EARTH (historical) predictors over the whole year, boreal summer (Jun-Jul-Aug) and winter (Dec-Jan-Feb), for the two types of standardization considered."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# plotting maps showing the results obtained from the KS test\n",
    "# comparing ERA-Interim and EC-EARTH (historical) predictors\n",
    "ppfigs <- lapply(c(\"harmonize+scaling\",\"scaling\"), FUN = function(j){\n",
    "lapply(list(1:12,c(6,7,8),c(12,1,2)), FUN = function(i){\n",
    "  fig.info <- ksPanelPlot(x.grid = x, y.grid = xh,\n",
    "                          type = j,\n",
    "                          season = i)  \n",
    "  spatialPlot(fig.info$map, color.theme = \"BuPu\",\n",
    "              at = seq(0,0.3,0.02),\n",
    "              set.min = 0,set.max = 0.3,\n",
    "              backdrop.theme = \"coastline\",\n",
    "              sp.layout = fig.info$stippling)\n",
    "  \n",
    "  }) \n",
    "})\n",
    "grid.arrange(grobs = ppfigs[[1]][i]) # harmonize+scaling: list of 3 elements with maps for the whole year, summer and winter\n",
    "grid.arrange(grobs = ppfigs[[2]][i]) # scaling: list of 3 elements with maps for the whole year, summer and winter"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "According to these results based on the KS-statistic we standardize the predictors from the historical and RCP85 scenario by substracting the monthly mean using as reference the observational grid (ERA-Interim) and then scale with the mean and standard deviation of the harmonized predictors of the historical scenario. This is done with function `scaleGrid` of `transformeR`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#  To perform the harmonzation+scaling step prior to model training and prediction            \n",
    "xf <- scaleGrid(xf, base = xh, ref = x, type = \"center\", spatial.frame = \"gridbox\", time.frame = \"monthly\") \n",
    "xh <- scaleGrid(xh, base = xh, ref = x, type = \"center\", spatial.frame = \"gridbox\", time.frame = \"monthly\") \n",
    "xf <- scaleGrid(xf, base = xh, type = \"standardize\")\n",
    "xh <- scaleGrid(xh, type = \"standardize\")"
   ]
  },
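  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Per gridbox and calendar month, the two `scaleGrid` steps above amount to removing the GCM's mean bias with respect to the reanalysis ('center') and then subtracting the mean and dividing by the standard deviation of the harmonized historical series ('standardize'). A toy sketch of this arithmetic in base `R`, using hypothetical numbers for a single gridbox and month:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# toy sketch (base R) of the harmonize+scaling arithmetic applied by scaleGrid\n",
    "era   <- c(10, 12, 11, 13) # hypothetical reanalysis series (one gridbox, one month)\n",
    "gcm.h <- c(14, 16, 15, 17) # hypothetical GCM historical series (warm-biased)\n",
    "gcm.f <- c(18, 20, 19, 21) # hypothetical GCM future series\n",
    "# 'center': subtract the historical GCM mean and add the reanalysis mean\n",
    "h.c <- gcm.h - mean(gcm.h) + mean(era)\n",
    "f.c <- gcm.f - mean(gcm.h) + mean(era)\n",
    "# 'standardize': scale both series with the mean/sd of the harmonized historical one\n",
    "h.s <- (h.c - mean(h.c)) / sd(h.c)\n",
    "f.s <- (f.c - mean(h.c)) / sd(h.c)\n",
    "mean(f.s) # the future change signal is preserved by the transformation"
   ]
  },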
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. Temperature\n",
    "In this section we present the code needed to downscale temperature in the historical and RCP8.5 scenarios from the EC-EARTH model. \n",
    "\n",
    "First, we use `loadGriData` to load into our `R` session the predictand dataset, surface temperature from E-OBS (version 14) at 0.5º resolution. The spatial and temporal domains proposed by VALUE are considered. Once loaded, the data are saved as a netCDF file into our local machine using the `grid2nc` function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# loading E-OBS temperature\n",
    "y <- loadGridData(dataset = \"E-OBS_v14_0.50regular\",\n",
    "                  var = \"tas\",\n",
    "                  lonLim = c(-10,32),\n",
    "                  latLim = c(36,72), \n",
    "                  years = 1979:2008)\n",
    "# saving to local directory\n",
    "grid2nc(y, NetCDFOutFile = \"./Data/temperature/tas_E-OBS_v14_0.50regular.nc4\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4.1 Downscaling (temperature) with two local GLMs\n",
    "The following block of code allows to build and apply the GLM1 and GLM4 models, which rely on local predictor information at neighbouring gridboxes (see the paper for details). To do so, we use the `downscaleChunk` function from `downscaleR`, which first trains the model using reanalysis and observations. Afterwards, the same function applies the learnt model to make predictions from a new dataset (i.e., historical and RCP8.5 scenarios from EC-EARTH in this case). Note that the predictors are conveniently harmonized and scaled before entering the model using `scaleGrid`. The predictions are saved as netCDF files in the specified local directory (`grid2nc`). These netCDF files will be later used during the validation step."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# building GLM1 and GLM4 models to downscale temperature \n",
    "# from EC-EARTH (historical and RCP8.5 scenarios)\n",
    "\n",
    "# NOTE THAT YOU MAY HAVE TO RUN THE LOOP MANUALLY IF YOUR COMPUTER DO NOT HAVE ENOUGH MEMORY CAPACITY\n",
    "glmName <- c(\"glm1\",\"glm4\")\n",
    "neighs <- c(1,4)\n",
    "scenario <- c(\"historical\",\"rcp85\")\n",
    "lapply(1:length(glmName), FUN = function(z) {\n",
    "  s1 <- Sys.time()  \n",
    "  # downscaleChunk function from downsaleR, to build the models and to predict on the GCM projections  \n",
    "  p <- downscaleChunk(x = scaleGrid(x,type = \"standardize\"), \n",
    "                      y = y, newdata = list(xh,xf),\n",
    "                      method = \"GLM\", family = \"gaussian\", \n",
    "                      prepareData.args = list(local.predictors = list(n=neighs[z], vars = getVarNames(x)))) \n",
    "  # save the predictions to local directory of a given GCM scenario and GLM configuration (GLM1 or GLM4) \n",
    "  lapply(2:length(p), FUN = function(zz) { \n",
    "    grid2nc(p[[zz]],NetCDFOutFile = paste0(\"./Data/temperature/predictions_\",scenario[zz-1],\"_\",glmName[z],\".nc4\"))\n",
    "  })\n",
    "  s2 <- Sys.time()\n",
    "  c(s1,s2)  \n",
    "})"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4.2 Downscaling (temperature) with a spatial GLM\n",
    "\n",
    "The next model we use to downscale temperature is a spatial GLM which considers the leading principal components as predictors, called GLMPC (see the paper for details). To do this, we split the whole domain into the 8 [PRUDENCE regions](http://ensemblesrt3.dmi.dk/quicklook/regions.html), whose coordinates are included in the `visualizeR` package:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# loading PRUDENCE regions\n",
    "areas <- PRUDENCEregions\n",
    "n <- names(PRUDENCEregions)\n",
    "n_regions <- length(n)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The next block of code allows to reproduce Fig. 1 of the paper. We use `spatialPlot` from `visualizeR` to plot the map. In addition, we also plot the spatial resolution of predictor and predictand fields using `SpatialPoints` (from the `sp` library) inside `spatialPlot`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# code to reproduce Fig. 1 of the paper\n",
    "coords_x <- expand.grid(x$xyCoords$x,x$xyCoords$y) ; names(coords_x) <- c(\"x\",\"y\") \n",
    "grid_clim <- climatology(subsetDimension(x,dimension = \"var\",indices = 1))\n",
    "coords_y <- expand.grid(y$xyCoords$x,y$xyCoords$y) ; names(coords_y) <- c(\"x\",\"y\")\n",
    "spatialPlot(grid_clim,at = seq(-2, 2, 0.1), set.min = 4, set.max = 8, \n",
    "            backdrop.theme = \"coastline\", \n",
    "            sp.layout = list(list(SpatialPoints(coords_x), first = FALSE, \n",
    "                                  col = \"black\", pch = 19, cex = 0.4),\n",
    "                             list(SpatialPoints(coords_y), first = FALSE, \n",
    "                                  col = \"gray\", pch = 19, cex = 0.1),\n",
    "                             list(areas[1], col = \"red\", lwd = 2),\n",
    "                             list(areas[2], col = \"brown\", lwd = 2),\n",
    "                             list(areas[3], col = \"orange\", lwd = 2),\n",
    "                             list(areas[4], col = \"darkolivegreen4\", lwd = 2),\n",
    "                             list(areas[5], col = \"purple\", lwd = 2),\n",
    "                             list(areas[6], col = \"deeppink\", lwd = 2),\n",
    "                             list(areas[7], col = \"gray47\", lwd = 2),\n",
    "                             list(areas[8], col = \"blue\", lwd = 2)),colorkey = FALSE)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "At this point, we build a GLM model for each PRUDENCE region which uses as predictors the principal components explaining the 95% of the total variance over the region. To do this, we first use `downscaleTrain` to train the GLM based on reanalysis and observations (note that `prepareData` allows to easily compute the principal components required). Subsequently, `downscalePredict` is used to apply the learnt model to make predictions from EC-EARTH (both for the historical and RCP8.5 scenarios). The perdictions are merged into a single C4R object (`mergeGrid`) and saved locally as netCDF files (`grid2nc`)."
   ]
  },
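  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The principal-component step performed below by `prepareData` can be illustrated with base `R` alone: `prcomp` computes the components, and the cumulative explained variance determines how many of them are needed to reach the 95% threshold. The toy matrix below is a hypothetical stand-in for the (time x gridbox) predictor matrix of one region:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# toy sketch (base R) of retaining the PCs that explain 95% of the variance\n",
    "set.seed(1)\n",
    "signal   <- matrix(rnorm(200 * 2), ncol = 2) # two underlying modes of variability\n",
    "loadings <- matrix(runif(2 * 10), nrow = 2)\n",
    "X <- signal %*% loadings + 0.1 * matrix(rnorm(200 * 10), ncol = 10) # hypothetical (time x gridbox) matrix\n",
    "pca   <- prcomp(X, center = TRUE, scale. = TRUE)\n",
    "v.exp <- cumsum(pca$sdev^2) / sum(pca$sdev^2) # cumulative fraction of explained variance\n",
    "n.pcs <- which(v.exp >= 0.95)[1]              # number of PCs to retain\n",
    "pcs   <- pca$x[, 1:n.pcs, drop = FALSE]       # PC time series used as predictors\n",
    "n.pcs"
   ]
  },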
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# building the GLMPC model to downscale temperature \n",
    "# from EC-EARTH (historical and RCP8.5 scenarios)\n",
    "s1 <- Sys.time()\n",
    "p <- lapply(1:n_regions, FUN = function(i) { # to loop over the Prudence Regions\n",
    "  xlim <- areas[n[i]]@bbox[1,]; ylim <- areas[n[i]]@bbox[2,] \n",
    "  if (i == 6) xlim[2] <- xlim[2] + 0.5\n",
    "  x <- subsetGrid(x,lonLim = xlim,latLim = ylim) # subset the latitude-longitude are of the given Prudence Region\n",
    "  y <- loadGridData(dataset = \"E-OBS_v14_0.50regular\",\n",
    "                    var = \"tas\",\n",
    "                    lonLim = xlim,\n",
    "                    latLim = ylim,\n",
    "                    years = 1979:2008)  \n",
    "  # Compute the PCs that explain the 95% of the variance with prepareData, as indicated by the argument spatial.predictors\n",
    "  xyT <- prepareData(x = scaleGrid(x,type = \"standardize\"), y = y,\n",
    "                     spatial.predictors = list(v.exp=0.95, which.combine = getVarNames(x)),\n",
    "                     combined.only = TRUE)\n",
    "  # Build the GLM model  \n",
    "  model <- downscaleTrain(xyT,\n",
    "                          method = \"GLM\",\n",
    "                          family = \"gaussian\")\n",
    "    \n",
    "  lapply(1:2, FUN = function(z) {  \n",
    "    if (z == 1) {grid <- xh} else if (z == 2) {grid <- xf}\n",
    "    grid <- subsetGrid(grid,lonLim = xlim,latLim = ylim) # subset the latitude-longitude are of the given Prudence Region for the GCM predictors  \n",
    "    xyt <- prepareNewData(grid,xyT)\n",
    "    # Predict  \n",
    "    downscalePredict(xyt,model) %>% redim(drop = TRUE)\n",
    "  })\n",
    "}) %>% unlist(recursive = FALSE)\n",
    "s2 <- Sys.time()\n",
    "\n",
    "# We bind the 8 PRUDENCE regions in a single C4R object\n",
    "lapply(c(\"historical\",\"rcp85\"), FUN = function(z){\n",
    "  if (z == \"historical\") {ind <- seq(1,n_regions*2,2)} \n",
    "  else if (z == \"rcp85\") {ind <- seq(2,n_regions*2,2)}\n",
    "  p <- p[ind]  \n",
    "  p <- lapply(1:getShape(p[[1]],\"time\"), FUN = function(zz){ # for computational tractability we loop over time and after bind with bindGrid at the end of the loop\n",
    "    lapply(1:length(p), FUN = function(z){\n",
    "      subsetDimension(p[[z]],dimension = \"time\", indices = zz)\n",
    "    }) %>% mergeGrid(aggr.fun = list(FUN = \"mean\",na.rm = TRUE)) # use mergeGrid to merge all the Prudence Regions predictions into one single C4R object\n",
    "  }) %>% bindGrid(dimension = \"time\")\n",
    "  p <- p[c(\"Variable\",\"Data\",\"xyCoords\",\"Dates\")]\n",
    "  # save the predictions to local directory of a given GCM scenario  \n",
    "  grid2nc(p,NetCDFOutFile = paste0(\"./Data/temperature/predictions_\",z,\"_glmPC.nc4\"))\n",
    "})\n",
    "s3 <- Sys.time()\n",
    "c(s1,s2,s3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4.3 Downscaling (temperature) with deep neural networks\n",
    "The following blocks of code explain how to build the CNN model used in the paper. We would like to encourage the reader to visit the [`downsaleR.keras` GitHub repository](https://github.com/SantanderMetGroup/downscaleR.keras) and/or the official [keras documentation](https://keras.io/getting_started/) to better understand this part of the notebook.\n",
    "\n",
    "First of all, we call the `prepareData.keras` function from `downscaleR.keras` to reshape the predictors and predictands so that they fit the type of network topology used. In our case the input layer is convolutionally connected to the first hidden layer (`first.connection = \"conv\"`) whereas the last hidden layer is fully connected to the output layer (`last.connection = \"dense\"`)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "s1 <- Sys.time()  \n",
    "# reshaping predictand and predictors to fit our network topology\n",
    "xyT <- prepareData.keras(scaleGrid(x,type = \"standardize\"),\n",
    "                         y,\n",
    "                         first.connection = \"conv\",\n",
    "                         last.connection = \"dense\",\n",
    "                         channels = \"last\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Our CNN is defined using the `keras` functions loaded via `downscaleR.keras`. The model consists of three convolutional hidden layers with 50, 25 and 10 feature maps with *ReLu* activation functions. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# defining our CNN (see the keras documentation for details)\n",
    "inputs <- layer_input(shape = dim(xyT$x.global)[-1])\n",
    "l0 = inputs\n",
    "l1 = layer_conv_2d(l0 ,filters = 50, kernel_size = c(3,3), activation = 'relu', padding = \"valid\")\n",
    "l2 = layer_conv_2d(l1,filters = 25, kernel_size = c(3,3), activation = 'relu', padding = \"valid\")\n",
    "l3 = layer_conv_2d(l2,filters = 10, kernel_size = c(3,3), activation = 'relu', padding = \"valid\")\n",
    "l4 = layer_flatten(l3)\n",
    "outputs = layer_dense(l4,units = dim(xyT$y$Data)[2])\n",
    "model <- keras_model(inputs = inputs, outputs = outputs)"
   ]
  },
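  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since `padding = \"valid\"` is used, each 3x3 convolution trims one gridbox from every border of the spatial domain. Assuming, for illustration, a 19x22 predictor grid (the 2º domain loaded above spans 19 latitudes and 22 longitudes), the layer shapes can be sanity-checked with a few lines of base `R`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# sanity check (base R) of the spatial dimensions along the CNN:\n",
    "# a 'valid' k x k convolution shrinks each spatial dimension by (k - 1)\n",
    "conv.out <- function(dims, kernel = 3) dims - (kernel - 1)\n",
    "dims <- c(19, 22)          # lat x lon gridboxes of the 2º predictor domain (illustrative)\n",
    "l1.dim <- conv.out(dims)   # 17 x 20 (50 feature maps)\n",
    "l2.dim <- conv.out(l1.dim) # 15 x 18 (25 feature maps)\n",
    "l3.dim <- conv.out(l2.dim) # 13 x 16 (10 feature maps)\n",
    "prod(l3.dim) * 10          # flattened units entering the final dense layer: 2080"
   ]
  },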
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The model is trained based on reanalysis and observations using the `downscaleTrain.keras` function. Note that an early-stopping criterion with a patience of 30 epochs is set up through the `callback_early_stopping` function. Once trained, the resulting model is saved to the local output directory using the `callback_model_checkpoint` function. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# training the CNN model\n",
    "downscaleTrain.keras(obj = xyT,\n",
    "                     model = model,\n",
    "                     clear.session = TRUE,\n",
    "                     compile.args = list(\"loss\" = \"mse\",\n",
    "                                         \"optimizer\" = optimizer_adam(lr = 0.0001)),\n",
    "                     fit.args = list(\"batch_size\" = 100,\n",
    "                                     \"epochs\" = 1000,\n",
    "                                     \"validation_split\" = 0.1,\n",
    "                                     \"verbose\" = 1,\n",
    "                                     \"callbacks\" = list(callback_early_stopping(patience = 30),\n",
    "                                                        callback_model_checkpoint(filepath=paste0('./models/temperature/CNN.h5'),\n",
    "                                                                                  monitor='val_loss', save_best_only=TRUE))))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Subsequently, the model is applied to make predictions from EC-EARTH (both for the historical and RCP8.5 scenarios) using the `downscalePredict.keras`. The results are locally saved as netCDF files (`grid2nc`)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# using the CNN model to make temperature predictions from \n",
    "# EC-EARTH (both for historical and RCP8.5 scenarios)\n",
    "scenario <- c(\"historical\",\"rcp85\")\n",
    "lapply(scenario,FUN = function(z){\n",
    "  # 'if' loop to distnguish between the historical and RCP8.5 scenario. \n",
    "  if (z == \"historical\") {xy <- prepareNewData.keras(xh,xyT)} \n",
    "  else if (z == \"rcp85\") {xy <- prepareNewData.keras(xf,xyT)}\n",
    "  # Downscale with the CNN  \n",
    "  p <- downscalePredict.keras(xy,\n",
    "                              model = list(\"filepath\" = paste0(\"./models/temperature/CNN.h5\")),\n",
    "                              C4R.template = y,\n",
    "                              clear.session = TRUE)\n",
    "  # save the predictions to local directory of a given GCM scenario      \n",
    "  grid2nc(p,NetCDFOutFile = paste0(\"./Data/temperature/predictions_\",z,\"_CNN.nc4\"))\n",
    "}) \n",
    "rm(xyT)\n",
    "s2 <- Sys.time()  \n",
    "c(s1,s2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4.4 Download temperature from the EC-Earth\n",
    "In this section we download EC-EARTH's temperature over continental Europe. For direct comparison purposes with the downscaled projections, we apply the E-OBS mask (note this dataset only provides data over land) to filter out sea points, and interpolate both models to a commo 0.5º regular grid. The resulting data are locally saved as netCDF files. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# preparing E-OBS mask\n",
    "mask_h <- loadGridData(\"./Data/temperature/tas_E-OBS_v14_0.50regular.nc4\",var = \"tas\") %>% \n",
    "  gridArithmetics(0) %>% gridArithmetics(1, operator = \"+\") \n",
    "mask_f <- subsetDimension(mask_h,dimension = \"time\", indices = 1:(getShape(mask_h,\"time\")-1))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# post-processing EC-EARTH (temperature)\n",
    "labelsCM <- c(\"CMIP5-subset_EC-EARTH_r12i1p1_historical\",  # UDG labels (see UDG.datasets())\n",
    "              \"CMIP5-subset_EC-EARTH_r12i1p1_rcp85\")  # UDG labels (see UDG.datasets())\n",
    "for (z in 1:length(labelsCM)) {\n",
    "  if (z == 1) {years <- 1979:2008 ; mask <- mask_h}\n",
    "  if (z == 2) {years <- 2071:2100 ; mask <- mask_f}\n",
    "  grid <- loadGridData(dataset = labelsCM[z],\n",
    "                       var = \"tas\",\n",
    "                       lonLim = c(-10,32),\n",
    "                       latLim = c(36,72), \n",
    "                       years = years) %>% \n",
    "    interpGrid(getGrid(y)) \n",
    "  grid %>% gridArithmetics(mask) %>% \n",
    "  grid2nc(NetCDFOutFile = paste0(\"./Data/temperature/tas_\",labelsCM[z],\".nc4\"))\n",
    "  # interpGrid: We interpolate the EC-Earth's resolution to math that of the predictand resolution of interest: 0.5º\n",
    "  # gridArithmetics: to mask the sea\n",
    "  # grid2nc: save the EC-Earth's air temperature in netCDF file  \n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4.5 Validation of results\n",
    "The metrics used to validate the downscaled temperature predictions obtained in the previous sections are listed in Table 1 of the paper. Here, we explain how to replicate some of the results presented in the paper. In particular, we focuse on the biases for P02, Mean and P98 between 1) the historical and observed values and 2) the RCP8.5 and historical values. These pairs of datasets are coupled in a list below, named as `models`. `loadGridData` is used to load into `R` the corresponding netCDF files produced (and locally saved) along the previous sections and the `valueMeasure` function from `climate4R.value` is used to compute the associated biases. The result is a list of multigrid C4R objects containing the aforementioned validation metrics."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# computing validation metrics\n",
    "models <- list(\n",
    "  c(\"tas_E-OBS_v14_0.50regular\",NA),\n",
    "  c(\"tas_E-OBS_v14_0.50regular\",\"tas_CMIP5-subset_EC-EARTH_r12i1p1_historical\"),\n",
    "  c(\"tas_E-OBS_v14_0.50regular\",\"predictions_historical_glm1\"),\n",
    "  c(\"tas_E-OBS_v14_0.50regular\",\"predictions_historical_glm4\"),\n",
    "  c(\"tas_E-OBS_v14_0.50regular\",\"predictions_historical_glmPC\"),\n",
    "  c(\"tas_E-OBS_v14_0.50regular\",\"predictions_historical_CNN\"),\n",
    "  c(\"tas_CMIP5-subset_EC-EARTH_r12i1p1_historical\",NA),\n",
    "  c(\"tas_CMIP5-subset_EC-EARTH_r12i1p1_historical\",\"tas_CMIP5-subset_EC-EARTH_r12i1p1_rcp85\"),\n",
    "  c(\"predictions_historical_glm1\",\"predictions_rcp85_glm1\"),\n",
    "  c(\"predictions_historical_glm4\",\"predictions_rcp85_glm4\"),\n",
    "  c(\"predictions_historical_glmPC\",\"predictions_rcp85_glmPC\"),\n",
    "  c(\"predictions_historical_CNN\",\"predictions_rcp85_CNN\")\n",
    ")\n",
    "index <- c(\"P02\",\"Mean\",\"P98\")\n",
    "validation.list <- lapply(1:length(models), FUN = function(zz){ # We loop over the grids to compute the validation indices of interest\n",
    "  args <- list()\n",
    "  if (!any(zz == c(1,7))) {\n",
    "    args[[\"y\"]] <- loadGridData(paste0(\"./Data/temperature/\",models[[zz]][1],\".nc4\"),var = \"tas\")\n",
    "    args[[\"x\"]] <- loadGridData(paste0(\"./Data/temperature/\",models[[zz]][2],\".nc4\"),var = \"tas\")\n",
    "  } else {\n",
    "    args[[\"grid\"]] <- loadGridData(paste0(\"./Data/temperature/\",models[[zz]][1],\".nc4\"),var = \"tas\")  \n",
    "  }\n",
    "  lapply(1:length(index), FUN = function(z) {\n",
    "    if (zz == 5) args[[\"y\"]] <- intersectGrid(args[[\"y\"]],args[[\"x\"]],type = \"spatial\")\n",
    "    if (!any(zz == c(1,7))) args[[\"measure.code\"]] <- \"bias\"\n",
    "    args[[\"index.code\"]] <- index[z]\n",
    "    if (any(zz == c(1,7))) return(do.call(\"valueIndex\",args)$Index)\n",
    "    if (!any(zz == c(1,7))) return(do.call(\"valueMeasure\",args)$Measure)\n",
    "  }) %>% makeMultiGrid()\n",
    "})"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, we use `spatialPlot` to plot the results (temperature pannels in Fig.3 of the paper)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# replicating temperature pannels in Fig. 3 of the paper\n",
    "nmes <- c(\"EC-EARTH\",\"GLM1\",\"GLM4\",\"GLMPC\",\"CNN\")\n",
    "cb <- rev(brewer.pal(n = 11, \"RdBu\"))\n",
    "cb[5:7] <- \"#FFFFFF\"; cb <- cb %>% colorRampPalette()\n",
    "val.plots <- lapply(1:length(nmes), FUN = function(zz) {\n",
    "  lapply(1:length(index), FUN = function(z) {\n",
    "    at <- \n",
    "    spatialPlot(redim(subsetGrid(validation.list[[zz+1]],var = index[z]),drop = TRUE),\n",
    "                backdrop.theme = \"coastline\",\n",
    "                main = paste(nmes[zz],\"- bias\",index[z]),\n",
    "                col.regions = cb,\n",
    "                at = seq(-5,5,length.out = 21),\n",
    "                set.min = -5, set.max = 5) \n",
    "  }) \n",
    "}) %>% unlist(recursive = FALSE)\n",
    "pdf(file = \"./figures/fig01_temperature.pdf\",width = 10,height = 16)\n",
    "grid.arrange(grobs = val.plots, ncol = 3)\n",
    "dev.off() "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We also plot the climate change signals obtained for the downscaled projections and for the raw EC-EARTH outputs (temperature pannels in Fig. 4 of the paper)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# replicating temperature pannels in Fig. 4 of the paper\n",
    "delta.plots <- lapply(1:length(nmes), FUN = function(zz) {\n",
    "  lapply(1:length(index), FUN = function(z) {\n",
    "    if (zz == 1) {\n",
    "      spatialPlot(redim(subsetGrid(validation.list[[zz+7]],var = index[z]),drop = TRUE),\n",
    "                  backdrop.theme = \"coastline\",\n",
    "                  main = paste(nmes[zz],\"- delta\",index[z]),\n",
    "                  col.regions = brewer.pal(n = 9, \"OrRd\") %>% colorRampPalette(),\n",
    "                  at = seq(0,10,0.5),\n",
    "                  set.min = 0, set.max = 10) \n",
    "    } else {\n",
    "      grid2 <- subsetGrid(validation.list[[zz+7]],var = index[z])\n",
    "      grid1 <- intersectGrid(subsetGrid(validation.list[[8]],var = index[z]),grid2,type = \"spatial\")\n",
    "      grid <- gridArithmetics(grid2,\n",
    "                              grid1,\n",
    "                              operator = \"-\")\n",
    "      spatialPlot(grid,\n",
    "                  backdrop.theme = \"coastline\",\n",
    "                  main = paste(nmes[zz],\"- delta diff.\",index[z]),\n",
    "                  col.regions = cb,\n",
    "                  at = seq(-5, 5, length.out = 21),\n",
    "                  set.min = -5, set.max = 5)\n",
    "    }\n",
    "  }) \n",
    "}) %>% unlist(recursive = FALSE)\n",
    "pdf(file = \"./figures/fig02_temperature.pdf\",width = 10,height = 16)\n",
    "grid.arrange(grobs = delta.plots, ncol = 3)\n",
    "dev.off() "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 5. Precipitation\n",
    "As for the case of temperature (Section 3), we present in this section the code needed to replicate the downscaling of precipitation for the historical and RCP8.5 scenarios of EC-EARTH. We start by loading E-OBS precipitation and saving it locally:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y <- loadGridData(dataset = \"E-OBS_v14_0.50regular\",\n",
    "                  var = \"pr\",\n",
    "                  lonLim = c(-10,32),\n",
    "                  latLim = c(36,72), \n",
    "                  years = 1979:2008)\n",
    "grid2nc(y,NetCDFOutFile = \"./Data/precip/pr_E-OBS_v14_0.50regular.nc4\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5.1 Downscaling (precipitation) with two local GLMs\n",
    "\n",
    "Unlike for temperature, we need two independent GLMs to downscale precipitation: one for precipitation occurrence and another for precipitation amount. Therefore, we need to define a first binomial GLM (`family = binomial(link = \"logit\")`) which will produce binary deterministic (yes/no) predictions of occurrence plus a second gamma GLM (`family = Gamma(link = \"log\")` and `simulate = TRUE`) which will produce stochastic values of precipitaton amount. Both deterministic and stochastic series need to be multiplied (`gridArithmetics` function) to obtain the final predicted precipitation, which are locally saved in netCDF format. Note that the `prepareData.args` input list in `downscaleChunk` allows for specifyinig the type of predictors to be considered (local information at neighbouring gridboxes in this case)."
   ]
  },
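  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The two-part (occurrence/amount) structure can be sketched with a toy example in base `R` (synthetic data and hypothetical coefficients; the notebook itself relies on `downscaleChunk`). A binomial GLM models the wet/dry indicator and a Gamma GLM, fitted on wet days only, models the amount; their predictions are multiplied:\n",
    "```r\n",
    "set.seed(1)\n",
    "x <- rnorm(500)                                             # toy predictor\n",
    "occ <- rbinom(500, 1, plogis(0.5 * x))                      # wet/dry indicator\n",
    "amo <- occ * rgamma(500, shape = 2, scale = exp(0.3 * x))   # amounts\n",
    "m.occ <- glm(occ ~ x, family = binomial(link = \"logit\"))\n",
    "m.amo <- glm(amo ~ x, family = Gamma(link = \"log\"), subset = amo > 0)\n",
    "pr <- as.integer(predict(m.occ, type = \"response\") >= 0.5) *\n",
    "  predict(m.amo, newdata = data.frame(x = x), type = \"response\")\n",
    "```\n",
    "Here a fixed 0.5 probability threshold is used for simplicity; the notebook instead adjusts the threshold so that the simulated wet-day frequency matches the observed one (see `binaryGrid` below)."
   ]
  },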
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# building GLM1 and GLM4 models to downscale precipitation \n",
    "# from EC-EARTH (historical and RCP8.5 scenarios)\n",
    "\n",
    "# NOTE THAT YOU MAY HAVE TO RUN THE LOOP MANUALLY IF YOUR COMPUTER DO NOT HAVE ENOUGH MEMORY CAPACITY\n",
    "glmName <- c(\"glm1\",\"glm4\")\n",
    "neighs <- c(1,4)\n",
    "scenario <- c(\"historical\",\"rcp85\")\n",
    "lapply(1:length(glmName), FUN = function(z){ # GLM1 and GLM4\n",
    "  s1 <- Sys.time()  \n",
    "  # Occurrence model (logistic regression)  \n",
    "  pred_ocu <- downscaleChunk(x = scaleGrid(x,type = \"standardize\"), \n",
    "                             y = binaryGrid(y,condition = \"GE\",threshold = 1), \n",
    "                             newdata = list(xh,xf),\n",
    "                             method = \"GLM\", \n",
    "                             family = binomial(link = \"logit\"), \n",
    "                             simulate = c(FALSE,TRUE),\n",
    "                             prepareData.args = list(local.predictors = list(n=neighs[z], vars = getVarNames(x))))\n",
    "  # rainfall model (gamma regression with link logarithmic). We substract 0.99 to center the Gamma on (recall that rainy day >= 1mm/day)\n",
    "  pred_amo <- downscaleChunk(x = scaleGrid(x,type = \"standardize\"), \n",
    "                             y = gridArithmetics(y,0.99,operator = \"-\"), \n",
    "                             newdata = list(xh,xf),\n",
    "                             method = \"GLM\", \n",
    "                             family = Gamma(link = \"log\"), \n",
    "                             simulate = c(FALSE,TRUE),\n",
    "                             condition = \"GT\", threshold = 0,\n",
    "                             prepareData.args = list(local.predictors = list(n=neighs[z], vars = getVarNames(x))))\n",
    "  for (a in 1:length(pred_amo)) pred_amo[[a]] %<>% gridArithmetics(0.99,operator = \"+\") \n",
    "  \n",
    "  # Save the deterministic predictions  \n",
    "  lapply(c(2,3), FUN = function(zz) {\n",
    "    # We transform the probabilities to binary values with binaryGrid\n",
    "    pred_bin <- binaryGrid(pred_ocu[[zz]],ref.obs = binaryGrid(y,condition = \"GE\",threshold = 1),ref.pred = pred_ocu[[1]])\n",
    "    # We recreate the precipitation serie by multiplying the predictions from both gamma and regression models.  \n",
    "    p <- gridArithmetics(pred_amo[[zz]],pred_bin)\n",
    "    # We save the predictions of a given GLM model and scenario to a local directory  \n",
    "    grid2nc(p,NetCDFOutFile = paste0(\"./Data/precip/predictions_\",scenario[zz-1],\"_deterministic_\",glmName[z],\".nc4\"))\n",
    "  })\n",
    "  # Save the stochastic predictions   \n",
    "  lapply(c(4,5), FUN = function(zz) {\n",
    "    # We recreate the precipitation serie by multiplying the predictions from both gamma and regression models.  \n",
    "    p <- gridArithmetics(pred_amo[[zz]],pred_ocu[[zz]])\n",
    "    # We save the predictions of a given GLM model and scenario to a local directory  \n",
    "    grid2nc(p,NetCDFOutFile = paste0(\"./Data/precip/predictions_\",scenario[zz-3],\"_stochastic_\",glmName[z],\".nc4\"))\n",
    "  })\n",
    "  s2 <- Sys.time()\n",
    "  c(s1,s2)  \n",
    "})"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5.2 Downscaling (precipitation) with a spatial GLM\n",
    "\n",
    "This section is equivalent to Section 3.2 but for precipitation. Therefore, as explained in the previous subsection, the main difference is the inclusion of two (instead just one) GLMs for each PRUDENCE region, which are needed due to the mixed (binary/continuous) character of precipitation. Again, note that `prepareData` allows for easiliy computing the principal components needed as predictors within each region."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# building the GLMPC model to downscale precipitation \n",
    "# from EC-EARTH (historical and RCP8.5 scenarios)\n",
    "s1 <- Sys.time()\n",
    "\n",
    "p <- lapply(1:n_regions, FUN = function(i) { # we loop over the Prudence Regions\n",
    "  xlim <- areas[n[i]]@bbox[1,]; ylim <- areas[n[i]]@bbox[2,] \n",
    "  if (i == 6) xlim[2] <- xlim[2] + 0.5\n",
    "  x <- subsetGrid(x,lonLim = xlim,latLim = ylim)\n",
    "  y <- loadGridData(dataset = \"E-OBS_v14_0.50regular\",var = \"pr\",lonLim = xlim,latLim = ylim,years = 1979:2008)\n",
    "  # We train the logistic GLM  \n",
    "  xyT <- prepareData(x = scaleGrid(x,type = \"standardize\"), y = binaryGrid(y,condition = \"GE\",threshold = 1),\n",
    "                     spatial.predictors = list(v.exp=0.95, which.combine = getVarNames(x)),\n",
    "                     combined.only = TRUE) \n",
    "  model <- downscaleTrain(xyT,method = \"GLM\",family = binomial(link = \"logit\"))\n",
    "  # We predict on the train set, which is to be used to adjust the frequency of rainy days on the prediction set  \n",
    "  pred_ocu_train <- model$pred %>% redim(drop = TRUE)\n",
    "  \n",
    "  pred_ocu <- lapply(c(FALSE,TRUE), FUN = function(sim) {\n",
    "    lapply(1:2, FUN = function(z) {  \n",
    "      if (z == 1) {grid <- xh} else if (z == 2) {grid <- xf}\n",
    "      grid <- subsetGrid(grid,lonLim = xlim,latLim = ylim) # subset the latitude-longitude are of the given Prudence Region for the GCM predictors  \n",
    "      xyt <- prepareNewData(grid,xyT)\n",
    "      # Predict  \n",
    "      pred_ocu <- downscalePredict(xyt,model,simulate = sim) %>% redim(drop = TRUE)\n",
    "      if (!isTRUE(sim)) pred_ocu <- binaryGrid(pred_ocu,ref.obs = binaryGrid(y,condition = \"GE\",threshold = 1),ref.pred = pred_ocu_train)  \n",
    "      pred_ocu        \n",
    "    })  \n",
    "  }) %>% unlist(recursive = FALSE)\n",
    "  rm(model) # To free memory  \n",
    "  # We train the Gamma GLM  \n",
    "  xyT <- prepareData(x = scaleGrid(x,type = \"standardize\"), y = gridArithmetics(y,0.99,operator = \"-\"),\n",
    "                     spatial.predictors = list(v.exp=0.95, which.combine = getVarNames(x)),\n",
    "                     combined.only = TRUE)\n",
    "  model <- downscaleTrain(xyT,method = \"GLM\",family = Gamma(link = \"log\"),condition = \"GT\", threshold = 0)\n",
    "  pred <- lapply(c(FALSE,TRUE), FUN = function(sim) {\n",
    "    lapply(1:2, FUN = function(z) {  \n",
    "      if (z == 1) {grid <- xh} else if (z == 2) {grid <- xf}\n",
    "      grid <- subsetGrid(grid,lonLim = xlim,latLim = ylim) # subset the latitude-longitude are of the given Prudence Region for the GCM predictors  \n",
    "      xyt <- prepareNewData(grid,xyT)\n",
    "      # Predict  \n",
    "      grid_amo <- downscalePredict(xyt,model,simulate = sim) %>% gridArithmetics(0.99,operator = \"+\") %>% redim(drop = TRUE)\n",
    "      \n",
    "      if (!isTRUE(sim) && z == 1) grid_ocu <- pred_ocu[[1]]  \n",
    "      if (!isTRUE(sim) && z == 2) grid_ocu <- pred_ocu[[2]]  \n",
    "      if (isTRUE(sim) && z == 1) grid_ocu <- pred_ocu[[3]]  \n",
    "      if (isTRUE(sim) && z == 2) grid_ocu <- pred_ocu[[4]]  \n",
    "      gridArithmetics(grid_amo,grid_ocu)  \n",
    "    })\n",
    "  }) %>% unlist(recursive = FALSE) \n",
    "}) %>% unlist(recursive = FALSE)  \n",
    "s2 <- Sys.time()\n",
    "\n",
    "# We bind the 8 PRUDENCE regions in a single C4R object\n",
    "lapply(c(\"deterministic\",\"stochastic\"), FUN = function(zz) {\n",
    "  lapply(c(\"historical\",\"rcp85\"), FUN = function(z) {\n",
    "    if (zz == \"deterministic\" && z == \"historical\") {ind <- seq(1,n_regions*4,4)} \n",
    "    if (zz == \"deterministic\" && z == \"rcp85\") {ind <- seq(2,n_regions*4,4)} \n",
    "    if (zz == \"stochastic\" && z == \"historical\") {ind <- seq(3,n_regions*4,4)} \n",
    "    if (zz == \"stochastic\" && z == \"rcp85\") {ind <- seq(4,n_regions*4,4)} \n",
    "    p <- p[ind]  \n",
    "    p <- lapply(1:getShape(p[[1]],\"time\"), FUN = function(zz){ # for computational tractability we loop over time and after bind with bindGrid at the end of the loop\n",
    "      lapply(1:length(p), FUN = function(z){\n",
    "        subsetDimension(p[[z]],dimension = \"time\", indices = zz)\n",
    "      }) %>% mergeGrid(aggr.fun = list(FUN = \"mean\",na.rm = TRUE)) # use mergeGrid to merge all the Prudence Regions predictions into one single C4R object\n",
    "    }) %>% bindGrid(dimension = \"time\")\n",
    "    p <- p[c(\"Variable\",\"Data\",\"xyCoords\",\"Dates\")]\n",
    "    # save the predictions to local directory of a given GCM scenario  \n",
    "    grid2nc(p,NetCDFOutFile = paste0(\"./Data/precip/predictions_\",z,\"_\",zz,\"_glmPC.nc4\"))\n",
    "  })\n",
    "})\n",
    "s3 <- Sys.time()\n",
    "c(s1,s2,s3)  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5.3 Downscaling (precipitation) with deep neural networks\n",
    "\n",
    "For the particular case of precipitation, there is a lack of data in the E-OBS dataset, especially for the period 2005-2008 over the (45º-49ºN, 16º-25ºE) domain. Therefore, and due to the multi-site nature of neural networks, we get rid of the days presenting no data in this region. To do so, we use the functions `filterNA`, `subsetGrid` and `intersectGrid` from `transformeR`. Beyond this particularity, the downscaling process is esentially the same presented for temperature in Section 3.3."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "s1 <- Sys.time() \n",
    "# discarding days presenting missing data over \n",
    "# the (45º-49ºN, 16º-25ºE) domain in E-OBS\n",
    "ysub <- filterNA(subsetGrid(y,latLim = c(49,55), lonLim = c(16,25))) %>% intersectGrid(y,which.return = 2)\n",
    "xsub <- intersectGrid(x,ysub,which.return = 1)\n",
    "xyT <- prepareData.keras(scaleGrid(xsub,type = \"standardize\"),\n",
    "                         binaryGrid(gridArithmetics(ysub,0.99, operator = \"-\"),\n",
    "                                    condition = \"GE\",\n",
    "                                    threshold = 0,\n",
    "                                    partial = TRUE),\n",
    "                         first.connection = \"conv\",\n",
    "                         last.connection = \"dense\",\n",
    "                         channels = \"last\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We define our CNN, which consists of three convolutional hidden layers with 50, 25 and 1 feature maps with *ReLu* activation functions. The last hidden layer is fully connected to the output layer, which is a concatenation of 3 layers representing the 3 estimated parameters (*p* = probability of rain, *alpha* = shape factor, *beta* = scale factor) for each predictand gridbox. We use `downscaleTrain.keras` to infer the model, which optimizes the negative log-likelihood of a Bernouilli-Gamma distribution. This is specified through the custom `bernouilliGamma.loss_function` loss function from `downscaleR.keras`. Note that, unlike for the case of the GLMs, the occurrence and amount of rainfall is simultaneously estimated by our CNN. "
   ]
  },
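  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, the per-day negative log-likelihood of a Bernoulli-Gamma distribution (the quantity minimized above) can be sketched, up to additive constants, as\n",
    "$$\\ell(y) = -\\left[\\,\\mathbb{1}(y=0)\\,\\log(1-p) + \\mathbb{1}(y>0)\\left(\\log p + (\\alpha-1)\\log y - \\frac{y}{\\beta} - \\alpha\\log\\beta - \\log\\Gamma(\\alpha)\\right)\\right],$$\n",
    "where $p$ is the probability of rain and $\\alpha$ and $\\beta$ are the shape and scale parameters of the Gamma distribution, all three estimated by the network for each gridbox."
   ]
  },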
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# defining and training our CNN (see the keras documentation for details)\n",
    "inputs <- layer_input(shape = dim(xyT$x.global)[-1])\n",
    "l0 = inputs\n",
    "l1 = layer_conv_2d(l0 ,filters = 50, kernel_size = c(3,3), activation = 'relu', padding = \"same\")\n",
    "l2 = layer_conv_2d(l1,filters = 25, kernel_size = c(3,3), activation = 'relu', padding = \"same\")\n",
    "l3 = layer_conv_2d(l2,filters = 1, kernel_size = c(3,3), activation = 'relu', padding = \"same\")\n",
    "l4 = layer_flatten(l3)\n",
    "parameter1 = layer_dense(l4,units = dim(xyT$y$Data)[2], activation = \"sigmoid\")\n",
    "parameter2 = layer_dense(l4,units = dim(xyT$y$Data)[2])\n",
    "parameter3 = layer_dense(l4,units = dim(xyT$y$Data)[2])\n",
    "outputs = layer_concatenate(list(parameter1,parameter2,parameter3))\n",
    "model <- keras_model(inputs = inputs, outputs = outputs)\n",
    "downscaleTrain.keras(obj = xyT,\n",
    "                     model = model,\n",
    "                     clear.session = TRUE,\n",
    "                     compile.args = list(\"loss\" = bernouilliGamma.loss_function(last.connection = \"dense\"),\n",
    "                                         \"optimizer\" = optimizer_adam(lr = 0.0001)),\n",
    "                     fit.args = list(\"batch_size\" = 100,\n",
    "                                     \"epochs\" = 1000,\n",
    "                                     \"validation_split\" = 0.1,\n",
    "                                     \"verbose\" = 1,\n",
    "                                     \"callbacks\" = list(callback_early_stopping(patience = 30),\n",
    "                                                        callback_model_checkpoint(filepath=paste0('./models/precip/CNN.h5'),\n",
    "                                                                                  monitor='val_loss', save_best_only=TRUE))))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We use now the above model to make predictions from the training dataset, which will be later used to adjust the frequency of rainy days."
   ]
  },
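  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The frequency adjustment performed by `binaryGrid` (via its `ref.obs` and `ref.pred` arguments) amounts to choosing, at each gridbox, the probability threshold that reproduces the observed wet-day frequency over the training period. A minimal sketch of the idea in plain `R`, with hypothetical values:\n",
    "```r\n",
    "obs.wet.freq <- 0.35                    # observed fraction of rainy days\n",
    "p.train <- runif(1000)                  # predicted rain probabilities (training)\n",
    "thr <- quantile(p.train, probs = 1 - obs.wet.freq)\n",
    "p.new <- runif(200)                     # probabilities for a new period\n",
    "occurrence <- as.integer(p.new >= thr)  # binary (yes/no) occurrence series\n",
    "```\n",
    "With this threshold, the simulated wet-day frequency over the training period matches the observed one by construction."
   ]
  },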
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# predictions for the training dataset\n",
    "xyt <- prepareNewData.keras(scaleGrid(x,type = \"standardize\"),xyT)\n",
    "pred_ocu_train <- downscalePredict.keras(newdata = xyt,\n",
    "                                         model = list(\"filepath\" = paste0(\"./models/precip/CNN.h5\"), \n",
    "                                                      \"custom_objects\" = c(\"custom_loss\" = bernouilliGamma.loss_function(last.connection = \"dense\"))),\n",
    "                                         C4R.template = ysub,\n",
    "                                         clear.session = TRUE) %>% \n",
    "  subsetGrid(var = \"pr1\")\n",
    "rm(xyt)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next we use `downscalePredict.keras` to make predictions from both the historical and RCP8.5 scenarios. Note that the `bernouilliGamma.statistics` function is used to compute the expectance of the conditional daily distributions using the parameters infered by the networkd on the output layer. The predictions are locally saved in netCDF format."
   ]
  },
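  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the deterministic case, note that the expectation of a Bernoulli-Gamma mixture reduces to a simple product of the three estimated parameters,\n",
    "$$E[Y] = p\\,\\alpha\\,\\beta,$$\n",
    "that is, the probability of rain times the mean of the Gamma distribution (shape times scale)."
   ]
  },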
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# using the CNN model to make precipitation predictions from \n",
    "# EC-EARTH (both for historical and RCP8.5 scenarios)\n",
    "simulateName <- c(\"deterministic\",\"stochastic\")\n",
    "simulateDeep <- c(FALSE,TRUE)\n",
    "scenario <- c(\"rcp85\",\"historical\")\n",
    "lapply(scenario,FUN = function(z){\n",
    "  # 'if' loop to distnguish between the historical and RCP8.5 scenario. \n",
    "  if (z == \"historical\") {xy <- prepareNewData.keras(xh,xyT)} \n",
    "  else if (z == \"rcp85\") {xy <- prepareNewData.keras(xf,xyT)}\n",
    "  pred <- downscalePredict.keras(xy,\n",
    "                                 model = list(\"filepath\" = paste0(\"./models/precip/CNN.h5\"), \n",
    "                                              \"custom_objects\" = c(\"custom_loss\" = bernouilliGamma.loss_function(last.connection = \"dense\"))),\n",
    "                                 C4R.template = ysub,\n",
    "                                 clear.session = TRUE)\n",
    "  rm(xy)\n",
    "  lapply(1:length(simulateDeep),FUN = function(zz) {\n",
    "    # We use the function bernouilliGamma.statistics to 1) compute the expectance or 2) sample from, the daily conditional Bernouilli-Gamma distributions  \n",
    "    pred <- bernouilliGamma.statistics(p = subsetGrid(pred,var = \"pr1\"),\n",
    "                                       alpha = subsetGrid(pred,var = \"pr2\"),\n",
    "                                       beta = subsetGrid(pred,var = \"pr3\"),\n",
    "                                       simulate = simulateDeep[zz],\n",
    "                                       bias = 0.99)\n",
    "    pred_ocu <- subsetGrid(pred,var = \"probOfRain\") %>% redim(drop = TRUE)\n",
    "    pred_amo <- subsetGrid(pred,var = \"amountOfRain\") %>% redim(drop = TRUE)\n",
    "    if (!isTRUE(simulateDeep[zz])) {\n",
    "      pred_bin <- binaryGrid(pred_ocu,\n",
    "                             ref.obs = binaryGrid(y,threshold = 1,condition = \"GE\"),\n",
    "                             ref.pred = pred_ocu_train)\n",
    "    } else {\n",
    "      pred_bin <- pred_ocu\n",
    "    }\n",
    "    p <- gridArithmetics(pred_bin,pred_amo)\n",
    "    p$Variable$varName <- \"pr\"; attr(p$Variable,\"longname\") <- \"pr\"\n",
    "    # We save the CNN predictions to local directory  \n",
    "    grid2nc(p,NetCDFOutFile = paste0(\"./Data/precip/predictions_\",z,\"_\",simulateName[zz],\"_CNN.nc4\"))\n",
    "  })\n",
    "}) \n",
    "rm(xyT,pred_ocu_train,ysub,xsub) # to save space\n",
    "s2 <- Sys.time()  \n",
    "c(s1,s2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5.4 Download precipitation from the EC-Earth\n",
    "\n",
    "This section is equivalent to Section 4.4 but for precipitation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# preparing E-OBS masks\n",
    "mask_h <- loadGridData(\"./Data/precip/pr_E-OBS_v14_0.50regular.nc4\",var = \"pr\") %>% \n",
    "  gridArithmetics(0) %>% gridArithmetics(1,operator = \"+\") \n",
    "mask_f <- subsetDimension(mask_h,dimension = \"time\", indices = 1:(getShape(mask_h,\"time\")-1))\n",
    "\n",
    "# post-processing EC-EARTH precipitation\n",
    "labelsCM <- c(\"CMIP5-subset_EC-EARTH_r12i1p1_historical\",\n",
    "              \"CMIP5-subset_EC-EARTH_r12i1p1_rcp85\")\n",
    "for (z in 1:length(labelsCM)) {\n",
    "  if (z == 1) {years <- 1979:2008 ; mask <- mask_h}\n",
    "  if (z == 2) {years <- 2071:2100 ; mask <- mask_f}\n",
    "  grid <- loadGridData(dataset = labelsCM[z],\n",
    "                       var = \"pr\",\n",
    "                       lonLim = c(-10,32),\n",
    "                       latLim = c(36,72), \n",
    "                       years = years) %>% \n",
    "    interpGrid(getGrid(y)) %>% \n",
    "    binaryGrid(condition = \"GE\",threshold = 1,partial = TRUE)\n",
    "  grid %>% gridArithmetics(mask) %>% \n",
    "    grid2nc(NetCDFOutFile = paste0(\"./Data/precip/pr_\",labelsCM[z],\".nc4\"))\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5.5 Validation of results\n",
    "This section is equivalent to Section 4.5 but for precipitation. The metrics computed for this variable are the relative biases for the R01, SDII and P98 indices between 1) the historical and observed values and 2) the RCP8.5 and historical values. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# computing validation metrics\n",
    "models <- list(\n",
    "  c(\"pr_E-OBS_v14_0.50regular\",NA),\n",
    "  c(\"pr_E-OBS_v14_0.50regular\",\"pr_CMIP5-subset_EC-EARTH_r12i1p1_historical\"),\n",
    "  c(\"pr_E-OBS_v14_0.50regular\",\"predictions_historical_deterministic_glm1\"),\n",
    "  c(\"pr_E-OBS_v14_0.50regular\",\"predictions_historical_deterministic_glm4\"),\n",
    "  c(\"pr_E-OBS_v14_0.50regular\",\"predictions_historical_deterministic_glmPC\"),\n",
    "  c(\"pr_E-OBS_v14_0.50regular\",\"predictions_historical_deterministic_CNN\"),\n",
    "  c(\"pr_CMIP5-subset_EC-EARTH_r12i1p1_historical\",NA),\n",
    "  c(\"pr_CMIP5-subset_EC-EARTH_r12i1p1_historical\",\"pr_CMIP5-subset_EC-EARTH_r12i1p1_rcp85\"),\n",
    "  c(\"predictions_historical_deterministic_glm1\",\"predictions_rcp85_deterministic_glm1\"),\n",
    "  c(\"predictions_historical_deterministic_glm4\",\"predictions_rcp85_deterministic_glm4\"),\n",
    "  c(\"predictions_historical_deterministic_glmPC\",\"predictions_rcp85_deterministic_glmPC\"),\n",
    "  c(\"predictions_historical_deterministic_CNN\",\"predictions_rcp85_deterministic_CNN\")\n",
    ")\n",
    "\n",
    "sdModel <- c(\"glm1\",\"glm4\",\"glmPC\",\"CNN\")\n",
    "index <- c(\"R01\",\"SDII\",\"P98\",\"P98\")\n",
    "validation.list <- lapply(1:length(models), FUN = function(zz){\n",
    "  args <- list()\n",
    "  if (!any(zz == c(1,7))) {\n",
    "    args[[\"y\"]] <- loadGridData(paste0(\"./Data/precip/\",models[[zz]][1],\".nc4\"),var = \"pr\")\n",
    "    args[[\"x\"]] <- loadGridData(paste0(\"./Data/precip/\",models[[zz]][2],\".nc4\"),var = \"pr\")\n",
    "  } else {\n",
    "    args[[\"grid\"]] <- loadGridData(paste0(\"./Data/precip/\",models[[zz]][1],\".nc4\"),var = \"pr\")\n",
    "  }\n",
    "  lapply(1:length(index), FUN = function(z) {\n",
    "    if (any(z == c(3,4))) {\n",
    "      args[[\"condition\"]] <- \"GE\" \n",
    "      args[[\"threshold\"]] <- 1\n",
    "      args[[\"which.wetdays\"]] <- \"Independent\"  \n",
    "      if (z == 4) {\n",
    "        if (any(zz == c(3,4,5,6))) {\n",
    "          args[[\"x\"]] <- loadGridData(paste0(\"./Data/precip/predictions_historical_stochastic_\",sdModel[zz-2],\".nc4\"),var = \"pr\")\n",
    "        } else if (any(zz == c(9,10,11,12))) {\n",
    "          args[[\"y\"]] <- loadGridData(paste0(\"./Data/precip/predictions_historical_stochastic_\",sdModel[zz-8],\".nc4\"),var = \"pr\")\n",
    "          args[[\"x\"]] <- loadGridData(paste0(\"./Data/precip/predictions_rcp85_stochastic_\",sdModel[zz-8],\".nc4\"),var = \"pr\")\n",
    "        }\n",
    "      }\n",
    "    }\n",
    "    if (zz == 5) args[[\"y\"]] <- intersectGrid(args[[\"y\"]],args[[\"x\"]],type = \"spatial\")\n",
    "    if (!any(zz == c(1,7))) args[[\"measure.code\"]] <- \"biasRel\"\n",
    "    args[[\"index.code\"]] <- index[z]\n",
    "    if (any(zz == c(1,7))) return(do.call(\"valueIndex\",args)$Index)\n",
    "    if (!any(zz == c(1,7))) return(do.call(\"valueMeasure\",args)$Measure)\n",
    "  }) %>% makeMultiGrid()\n",
    "})"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The following block of code replicates the precipitation pannels shown in Fig. 3 of the paper:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# replicating precipitation pannels in Fig. 3 of the paper\n",
    "nmes <- c(\"EC-EARTH\",\"GLM1\",\"GLM4\",\"GLMPC\",\"CNN\")\n",
    "cb <- brewer.pal(n = 11, \"BrBG\")\n",
    "cb[5:7] <- \"#FFFFFF\"; cb <- cb %>% colorRampPalette()\n",
    "val.plots <- lapply(1:length(nmes), FUN = function(zz) {\n",
    "  lapply(1:length(index), FUN = function(z) {\n",
    "    spatialPlot(redim(subsetDimension(validation.list[[zz+1]],dimension = \"var\", indices = z),drop = TRUE),\n",
    "                backdrop.theme = \"coastline\",\n",
    "                main = paste(nmes[zz],\"- biasRel\",index[z]),\n",
    "                col.regions = cb,\n",
    "                at = seq(-0.5,0.5,length.out = 21),\n",
    "                set.min = -0.5, set.max = 0.5) \n",
    "  }) \n",
    "}) %>% unlist(recursive = FALSE)\n",
    "pdf(file = \"./figures/fig01_precip.pdf\",width = 13,height = 16)\n",
    "grid.arrange(grobs = val.plots, ncol = 4)\n",
    "dev.off() "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This last piece of code replicates the precipitation panels shown in Fig. 4 of the paper:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# replicating precipitation panels in Fig. 4 of the paper\n",
    "delta.plots <- lapply(1:length(nmes), FUN = function(zz) {\n",
    "  lapply(1:length(index), FUN = function(z) {\n",
    "    if (zz == 1) {\n",
    "      spatialPlot(redim(subsetDimension(validation.list[[zz+7]],dimension = \"var\", indices = z),drop = TRUE),\n",
    "                  backdrop.theme = \"coastline\",\n",
    "                  main = paste(nmes[zz],\"- delta\",index[z]),\n",
    "                  col.regions = cb,\n",
    "                  at = seq(-0.5, 0.5,length.out = 21),\n",
    "                  set.min = -0.5, set.max = 0.5)\n",
    "    } else {\n",
    "      # difference between the downscaled delta (element zz+7) and the EC-EARTH delta (element 8)\n",
    "      grid2 <- subsetDimension(validation.list[[zz+7]],dimension = \"var\", indices = z)\n",
    "      grid1 <- intersectGrid(subsetDimension(validation.list[[8]],dimension = \"var\", indices = z),grid2,type = \"spatial\")\n",
    "      grid <- gridArithmetics(grid2,\n",
    "                              grid1,\n",
    "                              operator = \"-\")\n",
    "      spatialPlot(grid,\n",
    "                  backdrop.theme = \"coastline\",\n",
    "                  main = paste(nmes[zz],\"- delta diff.\",index[z]),\n",
    "                  col.regions = cb,\n",
    "                  at = seq(-0.5, 0.5,length.out = 21),\n",
    "                  set.min = -0.5, set.max = 0.5)\n",
    "    }\n",
    "  }) \n",
    "}) %>% unlist(recursive = FALSE)\n",
    "pdf(file = \"./figures/fig02_precip.pdf\",width = 13,height = 16)\n",
    "grid.arrange(grobs = delta.plots, ncol = 4)\n",
    "dev.off() "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 6. Technical specifications \n",
    "\n",
    "This notebook was run on a machine with the following technical specifications:\n",
    "\n",
    "1. Virtual machine:\n",
    "     + Operating system: Ubuntu 18.04.3 LTS (64-bit)\n",
    "     + Memory: 60 GiB \n",
    "     + Processor: 2x Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz (16 cores, 32 threads)"
   ]
  }
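  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For completeness, the following cell (not part of the original analysis) prints the `R` session information, which documents the platform and the versions of the loaded packages. `sessionInfo()` is a base `R` function, so the cell runs in any `R` environment; its output will reflect the machine on which the notebook is actually executed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# document platform and package versions for reproducibility\n",
    "sessionInfo()"
   ]
  }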
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "R",
   "language": "R",
   "name": "ir"
  },
  "language_info": {
   "codemirror_mode": "r",
   "file_extension": ".r",
   "mimetype": "text/x-r-source",
   "name": "R",
   "pygments_lexer": "r",
   "version": "3.6.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
