{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Loan Credit Risk Prediction\n",
    "\n",
     "When a financial institution examines a loan request, it is crucial to assess the risk of default in order to decide whether to grant the loan and, if so, at what interest rate. \n",
    "\n",
     "This notebook takes advantage of the power of SQL Server and RevoScaleR (Microsoft R Server). The tables are all stored in a SQL Server database, and most of the computations are done by loading chunks of data in memory instead of the whole data set.\n",
    "\n",
    "It does the following: \n",
    "\n",
    " * **Step 0: Packages, Compute Contexts and Database Creation**\n",
    " * **Step 1: Pre-Processing and Cleaning**\n",
    " * **Step 2: Feature Engineering**\n",
     " * **Step 3: Training, Scoring and Evaluating a Logistic Regression Model**\n",
    " * **Step 4: Operational Metrics Computation and Scores Transformation**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 0: Packages, Compute Contexts and Database Creation\n",
    "\n",
     "#### In this step, we load the necessary packages, set up the connection string to the SQL Server database, and create that database if it does not already exist. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# WARNING.\n",
    "# We recommend not using Internet Explorer as it does not support plotting, and may crash your session."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# INPUT DATA SETS: point to the correct path.  \n",
    "Loan <- \"C:/Solutions/Loans/Data/Loan.txt\"\n",
    "Borrower <- \"C:/Solutions/Loans/Data/Borrower.txt\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Load packages.\n",
    "library(RevoScaleR)\n",
     "library(MicrosoftML)\n",
    "library(smbinning)\n",
    "library(ROCR)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Creating the connection string. Specify:\n",
    "## Database name. If it already exists, tables will be overwritten. If not, it will be created.\n",
     "## Server name. If connecting remotely to the DSVM, use the full DNS address with port number 1433 (which should be enabled). \n",
    "## User ID and Password. Change them below if you modified the default values.  \n",
    "db_name <- \"Loans\"\n",
    "server <- \"localhost\"\n",
    "\n",
    "connection_string <- sprintf(\"Driver=SQL Server;Server=%s;Database=%s;TRUSTED_CONNECTION=True\", server, db_name)\n",
    "\n",
    "print(\"Connection String Written.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create the database if not already existing. \n",
    "\n",
     "## Open an Odbc connection with the SQL Server master database, used only to create the new database with the rxExecuteSQLDDL function.\n",
    "connection_string_master <- sprintf(\"Driver=SQL Server;Server=%s;Database=master;TRUSTED_CONNECTION=True\", server)\n",
    "outOdbcDS_master <- RxOdbcData(table = \"Default_Master\", connectionString = connection_string_master)                         \n",
    "rxOpen(outOdbcDS_master, \"w\")\n",
    "\n",
    "## Create database if not already existing. \n",
    "query <- sprintf( \"if not exists(SELECT * FROM sys.databases WHERE name = '%s') CREATE DATABASE %s;\", db_name, db_name)\n",
    "rxExecuteSQLDDL(outOdbcDS_master, sSQLString = query)\n",
    "\n",
     "## Close the Odbc connection to the master database. \n",
    "rxClose(outOdbcDS_master)\n",
    "\n",
    "print(\"Database created if not already existing.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define Compute Contexts.\n",
    "sql <- RxInSqlServer(connectionString = connection_string)\n",
    "local <- RxLocalSeq()\n",
    "\n",
    "# Open a connection with SQL Server to be able to write queries with the rxExecuteSQLDDL function in the new database.\n",
    "outOdbcDS <- RxOdbcData(table = \"Default\", connectionString = connection_string)\n",
    "rxOpen(outOdbcDS, \"w\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### The function below can be used to get the top n rows of a table stored on SQL Server. \n",
     "#### You can execute this cell at any point as you progress by removing the comment marks \"#\" and specifying:\n",
    "#### - the table name.\n",
    "#### - the number of rows you want to display."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "display_head <- function(table_name, n_rows){\n",
    "   table_sql <- RxSqlServerData(sqlQuery = sprintf(\"SELECT TOP(%s) * FROM %s\", n_rows, table_name), connectionString = connection_string)\n",
    "   table <- rxImport(table_sql)\n",
    "   print(table)\n",
    "}\n",
    "\n",
    "# table_name <- \"insert_table_name\"\n",
    "# n_rows <- 10\n",
    "# display_head(table_name, n_rows)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 1: Pre-Processing and Cleaning\n",
    "\n",
    "In this step, we: \n",
    "\n",
    "**1.** Upload the 2 raw data sets Loan and Borrower from disk to the SQL Server.\n",
    "\n",
    "**2.** Join the 2 tables into one.\n",
    "\n",
     "**3.** Perform light pre-processing on a few variables.\n",
    "\n",
    "**4.** Clean the merged data set: we replace NAs with the mode (categorical variables) or mean (continuous variables).\n",
    "\n",
    "**Input:** 2 Data Tables: Loan and Borrower.\n",
    "\n",
    "**Output:** Cleaned data set Merged_Cleaned."
   ]
  },
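   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "As a quick local illustration of step **4** (a base-R sketch on a toy data frame, not part of the pipeline; the real treatment runs through rxDataStep below), the mean/mode replacement works like this:\n",
     "\n",
     "```r\n",
     "# Toy sketch of the NA treatment: mean for numeric columns, mode for categorical ones.\n",
     "df <- data.frame(income = c(100, NA, 300),\n",
     "                 state  = c(\"WA\", \"WA\", NA),\n",
     "                 stringsAsFactors = FALSE)\n",
     "\n",
     "impute <- function(data){\n",
     "  for(name in names(data)){\n",
     "    na_rows <- is.na(data[[name]])\n",
     "    if(!any(na_rows)) next\n",
     "    if(is.numeric(data[[name]])){\n",
     "      data[na_rows, name] <- round(mean(data[[name]], na.rm = TRUE))\n",
     "    } else {\n",
     "      counts <- table(data[[name]])               # NAs are excluded from the counts\n",
     "      data[na_rows, name] <- names(counts)[which.max(counts)]\n",
     "    }\n",
     "  }\n",
     "  data\n",
     "}\n",
     "\n",
     "impute(df)   # income NA -> 200, state NA -> \"WA\"\n",
     "```"
    ]
   },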
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Set the compute context to Local. \n",
    "rxSetComputeContext(local)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "# Upload the data set to SQL.\n",
    "\n",
    "## Specify the desired column types. \n",
    "## When uploading to SQL, Character and Factor are converted to nvarchar(255), Integer to Integer and Numeric to Float. \n",
    "column_types_loan <-  c(loanId = \"integer\",    \n",
    "                        memberId = \"integer\",  \n",
    "                        date = \"character\",\n",
    "                        purpose = \"character\",\n",
    "                        isJointApplication = \"character\",\n",
    "                        loanAmount = \"numeric\",\n",
    "                        term = \"character\",\n",
    "                        interestRate = \"numeric\",\n",
    "                        monthlyPayment = \"numeric\",\n",
    "                        grade = \"character\",\n",
    "                        loanStatus = \"character\")\n",
    "                          \n",
    "column_types_borrower <- c(memberId = \"integer\",  \n",
    "                           residentialState = \"character\",\n",
    "                           yearsEmployment = \"character\",\n",
    "                           homeOwnership = \"character\",\n",
    "                           annualIncome = \"numeric\",\n",
    "                           incomeVerified = \"character\",\n",
    "                           dtiRatio = \"numeric\",\n",
    "                           lengthCreditHistory = \"integer\",\n",
    "                           numTotalCreditLines = \"integer\",\n",
    "                           numOpenCreditLines = \"integer\",\n",
    "                           numOpenCreditLines1Year = \"integer\",\n",
    "                           revolvingBalance = \"numeric\",\n",
    "                           revolvingUtilizationRate = \"numeric\",\n",
    "                           numDerogatoryRec = \"integer\",\n",
    "                           numDelinquency2Years = \"integer\",\n",
    "                           numChargeoff1year = \"integer\",\n",
    "                           numInquiries6Mon = \"integer\")\n",
    "  \n",
    "## Point to the input data sets while specifying the classes.\n",
    "Loan_text <- RxTextData(file = Loan, colClasses = column_types_loan)\n",
    "Borrower_text <- RxTextData(file = Borrower, colClasses = column_types_borrower)\n",
    "  \n",
    "## Upload the data to SQL tables. \n",
    "Loan_sql <- RxSqlServerData(table = \"Loan\", connectionString = connection_string)\n",
    "Borrower_sql <- RxSqlServerData(table = \"Borrower\", connectionString = connection_string)\n",
    "  \n",
    "rxDataStep(inData = Loan_text, outFile = Loan_sql, overwrite = TRUE)\n",
    "rxDataStep(inData = Borrower_text, outFile = Borrower_sql, overwrite = TRUE)\n",
    "\n",
    "print(\"Data exported to SQL.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Set the compute context to SQL. \n",
    "rxSetComputeContext(sql)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Inner join of the raw tables Loan and Borrower.\n",
    "rxExecuteSQLDDL(outOdbcDS, sSQLString = \"DROP TABLE if exists Merged;\")\n",
    "    \n",
    "rxExecuteSQLDDL(outOdbcDS, sSQLString = \n",
    "    \"SELECT loanId, [date], purpose, isJointApplication, loanAmount, term, interestRate, monthlyPayment,\n",
    "            grade, loanStatus, Borrower.*\n",
    "     INTO Merged\n",
    "     FROM Loan JOIN Borrower\n",
    "     ON Loan.memberId = Borrower.memberId;\")\n",
    "\n",
    "print(\"Merging of the two tables completed.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Determine if Merged has missing values and compute statistics for use in Production. \n",
    "\n",
    "## Use rxSummary function to get the names of the variables with missing values.\n",
    "## Assumption: no NAs in the id variables (loan_id and member_id), target variable and date.\n",
    "## For rxSummary to give correct info on characters, stringsAsFactors = T should be used. \n",
    "Merged_sql <- RxSqlServerData(table = \"Merged\", connectionString = connection_string, stringsAsFactors = T)\n",
    "col_names <- rxGetVarNames(Merged_sql)\n",
    "var_names <- col_names[!col_names %in% c(\"loanId\", \"memberId\", \"loanStatus\", \"date\")]\n",
    "formula <- as.formula(paste(\"~\", paste(var_names, collapse = \"+\")))\n",
    "summary <- rxSummary(formula, Merged_sql, byTerm = TRUE)\n",
    "  \n",
    "## Get the variables types.\n",
    "categorical_all <- unlist(lapply(summary$categorical, FUN = function(x){colnames(x)[1]}))\n",
    "numeric_all <- setdiff(var_names, categorical_all)\n",
    "  \n",
    "## Get the variables names with missing values. \n",
    "var_with_NA <- summary$sDataFrame[summary$sDataFrame$MissingObs > 0, 1]\n",
    "categorical_NA <- intersect(categorical_all, var_with_NA)\n",
    "numeric_NA <- intersect(numeric_all, var_with_NA)\n",
    "\n",
    "## Compute the global means. \n",
    "Summary_DF <- summary$sDataFrame\n",
    "Numeric_Means <- Summary_DF[Summary_DF$Name %in% numeric_all, c(\"Name\", \"Mean\")]\n",
    "Numeric_Means$Mean  <- round(Numeric_Means$Mean) \n",
    "  \n",
    "## Compute the global modes. \n",
    "## Get the counts tables.\n",
    "Summary_Counts <- summary$categorical\n",
    "names(Summary_Counts) <- lapply(Summary_Counts, FUN = function(x){colnames(x)[1]})\n",
    "  \n",
    "## Compute for each count table the value with the highest count. \n",
    "modes <- unlist(lapply(Summary_Counts, FUN = function(x){as.character(x[which.max(x[,2]),1])}), use.names = FALSE)\n",
    "Categorical_Modes <- data.frame(Name = categorical_all, Mode = modes)\n",
    "  \n",
    "## Set the compute context to local to export the summary statistics to SQL. \n",
    "## The schema of the Statistics table is adapted to the one created in the SQL code. \n",
    "rxSetComputeContext('local')\n",
    "  \n",
    "Numeric_Means$Mode <- NA\n",
    "Numeric_Means$type <- \"float\" \n",
    "  \n",
    "Categorical_Modes$Mean <- NA\n",
    "Categorical_Modes$type <- \"char\"\n",
    "  \n",
    "Stats <- rbind(Numeric_Means, Categorical_Modes)[, c(\"Name\", \"type\", \"Mode\", \"Mean\")]\n",
    "colnames(Stats) <- c(\"variableName\", \"type\", \"mode\", \"mean\")\n",
    "  \n",
    "## Save the statistics to SQL for Production use. \n",
    "Stats_sql <- RxSqlServerData(table = \"Stats\", connectionString = connection_string)\n",
    "rxDataStep(inData = Stats, outFile = Stats_sql, overwrite = TRUE)\n",
    "  \n",
    "## Set the compute context back to SQL. \n",
    "rxSetComputeContext(sql)  \n",
    "\n",
    "\n",
    "# If no missing values, we move the data to a new table Merged_Cleaned. \n",
    "if(length(var_with_NA) == 0){\n",
    "    print(\"No missing values: no treatment will be applied.\")\n",
    "   \n",
    "    rxExecuteSQLDDL(outOdbcDS, sSQLString = \"DROP TABLE if exists Merged_Cleaned;\")\n",
    "    rxExecuteSQLDDL(outOdbcDS, sSQLString = \"SELECT * INTO Merged_Cleaned FROM Merged;\")\n",
    "    \n",
    "    missing <- 0     \n",
    "    \n",
    "} else{\n",
    "    print(\"Variables containing missing values are:\")\n",
    "    print(var_with_NA)\n",
    "    missing <- 1\n",
    "    print(\"Perform data cleaning in the next cell.\")\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# If applicable, missing values (SQL NULL, imported as NA) are replaced with the mode (categorical variables, whether integer or character) or the mean (continuous variables).\n",
    "\n",
    "if(missing == 1){\n",
    "\n",
    "    # Get the global means of the numeric variables with missing values.\n",
    "    numeric_NA_mean <- round(Stats[Stats$variableName %in% numeric_NA, \"mean\"])\n",
    "    \n",
    "    # Get the global modes of the categorical variables with missing values. \n",
    "    categorical_NA_mode <- as.character(Stats[Stats$variableName %in% categorical_NA, \"mode\"]) \n",
    "    \n",
    "    # Function to replace missing values with mean or mode. It will be wrapped into rxDataStep. \n",
    "    Mean_Mode_Replace <- function(data) {\n",
    "      data <- data.frame(data, stringsAsFactors = FALSE)\n",
    "      # Replace numeric variables with the mean. \n",
    "      if(length(num_with_NA) > 0){\n",
    "        for(i in 1:length(num_with_NA)){\n",
    "          row_na <- which(is.na(data[, num_with_NA[i]])) \n",
    "          data[row_na, num_with_NA[i]] <- num_NA_mean[i]\n",
    "        }\n",
    "      }\n",
    "      # Replace categorical variables with the mode. \n",
    "      if(length(cat_with_NA) > 0){\n",
    "        for(i in 1:length(cat_with_NA)){\n",
    "          row_na <- which(is.na(data[, cat_with_NA[i]])) \n",
    "          data[row_na, cat_with_NA[i]] <- cat_NA_mode[i]\n",
    "        }\n",
    "      }\n",
    "      return(data)  \n",
    "    }\n",
    "    \n",
    "    # Point to the input table. \n",
    "    Merged_sql <- RxSqlServerData(table = \"Merged\", connectionString = connection_string)\n",
    "    \n",
    "    # Point to the output (empty) table. \n",
    "    Merged_Cleaned_sql <- RxSqlServerData(table = \"Merged_Cleaned\", connectionString = connection_string)\n",
    "      \n",
    "    # Perform the data cleaning with rxDataStep. \n",
    "    rxDataStep(inData = Merged_sql, \n",
    "               outFile = Merged_Cleaned_sql, \n",
    "               overwrite = TRUE, \n",
    "               transformFunc = Mean_Mode_Replace,\n",
    "               transformObjects = list(num_with_NA = numeric_NA , num_NA_mean = numeric_NA_mean,\n",
    "                                       cat_with_NA = categorical_NA, cat_NA_mode = categorical_NA_mode))  \n",
    " \n",
    "    print(\"Data cleaned.\")\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 2: Feature Engineering\n",
    "\n",
    "In this step, we:\n",
    "\n",
    "**1.** Create the label isBad based on the status of the loan.\n",
    "\n",
    "**2.** Split the cleaned data set into a Training and a Testing set. \n",
    "\n",
    "**3.**  Bucketize all the numeric variables, based on Conditional Inference Trees, using the smbinning package on the Training set. \n",
    "\n",
    "**Input:** Cleaned data set Merged_Cleaned.\n",
    "\n",
    "**Output:** Data set with new features Merged_Features."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Point to the input table. \n",
    "Merged_Cleaned_sql <- RxSqlServerData(table = \"Merged_Cleaned\", connectionString = connection_string)\n",
    "\n",
    "# Point to the Output SQL table:\n",
    "Merged_Labeled_sql <- RxSqlServerData(table = \"Merged_Labeled\", connectionString = connection_string)\n",
    "  \n",
    "# Create the target variable, isBad, based on loanStatus.\n",
    "rxDataStep(inData = Merged_Cleaned_sql ,\n",
    "           outFile = Merged_Labeled_sql, \n",
    "           overwrite = TRUE, \n",
    "           transforms = list(\n",
    "               isBad = ifelse(loanStatus %in% c(\"Current\"), \"0\", \"1\")  \n",
    "           ))\n",
    "\n",
    "print(\"Label isBad created.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Split the cleaned data set into a Training and a Testing set.\n",
    "\n",
    "## Create the Hash_Id table containing loanId hashed to integers. \n",
     "## The advantage of using a hashing function for splitting is that it makes the experiment repeatable: the same loans always fall in the same set.  \n",
    "rxExecuteSQLDDL(outOdbcDS, sSQLString = \"DROP TABLE if exists Hash_Id;\")\n",
    "  \n",
    "rxExecuteSQLDDL(outOdbcDS, sSQLString = \n",
    "\"SELECT loanId, ABS(CAST(CAST(HashBytes('MD5', CAST(loanId AS varchar(20))) AS VARBINARY(64)) AS BIGINT) % 100) AS hashCode  \n",
    "INTO Hash_Id\n",
    "FROM Merged_Labeled ;\")\n",
    "  \n",
    "# Point to the training set. \n",
    "Train_sql <- RxSqlServerData(sqlQuery = \n",
    "                            \"SELECT *   \n",
    "                             FROM Merged_Labeled \n",
    "                             WHERE loanId IN (SELECT loanId from Hash_Id WHERE hashCode <= 70)\",\n",
    "                            connectionString = connection_string)\n",
    "\n",
    "print(\"Splitting completed.\")"
   ]
  },
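   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "To see why the hash-based split is repeatable: every loanId is mapped deterministically to a code in 0-99, and ids with a code of at most 70 go to the training set, so the split is identical on every run. A base-R sketch (using a plain modulo as a stand-in for SQL Server's HashBytes('MD5', ...), illustration only):\n",
     "\n",
     "```r\n",
     "# Deterministic ~70/30 split: the same input ids always produce the same split.\n",
     "loanIds <- 1:1000\n",
     "hashCode <- loanIds %% 100           # stand-in for the MD5-based hash code in 0-99\n",
     "train_ids <- loanIds[hashCode <= 70]\n",
     "test_ids  <- loanIds[hashCode > 70]\n",
     "length(train_ids) / length(loanIds)  # 0.71: codes 0 to 70 out of 0 to 99\n",
     "```"
    ]
   },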
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Compute optimal bins for numeric variables using the smbinning package on the Training set. \n",
    "\n",
     "# Using smbinning has some limitations, such as: \n",
     "# - The variable should have more than 10 unique values. \n",
     "# - If no significant splits are found, it does not output bins. \n",
     "# For this reason, we manually specify default bins, based on an analysis of the variables' distributions or on smbinning applied to a larger data set. \n",
     "# We then overwrite the defaults with the smbinning results whenever it outputs bins. \n",
    "  \n",
    "bins <- list()\n",
    "  \n",
    "# Default cutoffs for bins:\n",
    "# EXAMPLE: If the cutoffs are (c1, c2, c3),\n",
     "## Bin 1 = ]-inf, c1], Bin 2 = ]c1, c2], Bin 3 = ]c2, c3], Bin 4 = ]c3, +inf[ \n",
    "## c1 and c3 are NOT the minimum and maximum found in the training set. \n",
    "bins$loanAmount <- c(14953, 18951, 20852, 22122, 24709, 28004)\n",
    "bins$interestRate <- c(7.17, 10.84, 12.86, 14.47, 15.75, 18.05)\n",
    "bins$monthlyPayment <- c(382, 429, 495, 529, 580, 649, 708, 847)\n",
    "bins$annualIncome <- c(49402, 50823, 52089, 52885, 53521, 54881, 55520, 57490)\n",
    "bins$dtiRatio <- c(9.01, 13.42, 15.92, 18.50, 21.49, 22.82, 24.67)\n",
    "bins$lengthCreditHistory <- c(8)\n",
    "bins$numTotalCreditLines <- c(1, 2)\n",
    "bins$numOpenCreditLines <- c(3, 5)\n",
    "bins$numOpenCreditLines1Year <- c(3, 4, 5, 6, 7, 9)\n",
    "bins$revolvingBalance <- c(11912, 12645, 13799, 14345, 14785, 15360, 15883, 16361, 17374, 18877)\n",
    "bins$revolvingUtilizationRate <- c(49.88, 60.01, 74.25, 81.96)\n",
    "bins$numDerogatoryRec <- c(0, 1)\n",
    "bins$numDelinquency2Years <- c(0)\n",
    "bins$numChargeoff1year <- c(0)\n",
    "bins$numInquiries6Mon <- c(0)\n",
    "  \n",
    "# Import the training set to be able to apply smbinning. \n",
    "Train_df <- rxImport(Train_sql)\n",
    "  \n",
    "# Set the type of the label to numeric. \n",
    "Train_df$isBad <- as.numeric(as.character(Train_df$isBad))\n",
    "  \n",
    "# Function to compute smbinning on every variable. \n",
    "compute_bins <- function(name, data){\n",
    " library(smbinning)\n",
    " output <- smbinning(data, y = \"isBad\", x = name, p = 0.05)\n",
    " if (class(output) == \"list\"){ # case where the binning was performed and returned bins.\n",
    "    cuts <- output$cuts  \n",
    "    return (cuts)\n",
    " }\n",
    "}\n",
    "  \n",
    "\n",
     "# We apply it in parallel across cores with rxExec, with the compute context set to Local Parallel.\n",
     "## 3 cores are used here so the code can run on servers with less RAM. \n",
     "## You can increase numCoresToUse below to speed up execution on a larger server.\n",
     "## numCoresToUse = -1 enables the use of the maximum number of cores.\n",
    "rxOptions(numCoresToUse = 3) # use 3 cores.\n",
    "rxSetComputeContext('localpar')\n",
    "bins_smb <- rxExec(compute_bins, name = rxElemArg(names(bins)), data = Train_df)\n",
    "names(bins_smb) <- names(bins)\n",
    "  \n",
    "# Fill bins with bins obtained in bins_smb with smbinning. \n",
    "## We replace the default values in bins if and only if smbinning returned a non NULL result. \n",
    "for(name in names(bins)){\n",
    " if (!is.null(bins_smb[[name]])){ \n",
    "     bins[[name]] <- bins_smb[[name]]\n",
    "  }\n",
    " }\n",
    "  \n",
    "# Save the bins to SQL for use in Production Stage. \n",
    "  \n",
    "## Open an Odbc connection with SQL Server.\n",
    "OdbcModel <- RxOdbcData(table = \"Bins\", connectionString = connection_string)\n",
    "rxOpen(OdbcModel, \"w\")\n",
    "  \n",
    "## Drop the Bins table if it exists. \n",
    "if(rxSqlServerTableExists(OdbcModel@table, OdbcModel@connectionString)) {\n",
    "    rxSqlServerDropTable(OdbcModel@table, OdbcModel@connectionString)\n",
    "}\n",
    "  \n",
    "## Create an empty Bins table. \n",
    "rxExecuteSQLDDL(OdbcModel, \n",
    "                sSQLString = paste(\" CREATE TABLE [\", OdbcModel@table, \"] (\",\n",
    "                                     \"     [id] varchar(200) not null, \",\n",
    "                                     \"     [value] varbinary(max), \",\n",
    "                                     \"     constraint unique_id unique (id))\",\n",
    "                                     sep = \"\")\n",
    ")\n",
    "  \n",
    "## Write the model to SQL. \n",
    "rxWriteObject(OdbcModel, \"Bin Info\", bins)\n",
    "  \n",
     "## Close the Odbc connection used. \n",
    "rxClose(OdbcModel)\n",
    "  \n",
    "# Set back the compute context to SQL.\n",
    "rxSetComputeContext(sql)\n",
    "  \n",
    "  \n",
    "print(\"Bins computed/defined.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Function to bucketize numeric variables. It will be wrapped into rxDataStep. \n",
     "bucketize <- function(data) { \n",
     "  for(name in names(b)) { \n",
     "    name2 <- paste(name, \"Bucket\", sep = \"\") \n",
     "    data[[name2]] <- as.character(as.numeric(cut(data[[name]], c(-Inf, b[[name]], Inf)))) \n",
     "  }\n",
     "  return(data) \n",
     "}\n",
    "  \n",
    "# Perform feature engineering on the cleaned data set.\n",
    "   \n",
    "# Output:\n",
    "Merged_Features_sql <- RxSqlServerData(table = \"Merged_Features\", connectionString = connection_string)\n",
    "    \n",
     "# Create buckets for various numeric variables with the function bucketize. \n",
    "rxDataStep(inData = Merged_Labeled_sql,\n",
    "           outFile = Merged_Features_sql, \n",
    "           overwrite = TRUE, \n",
    "           transformFunc = bucketize,\n",
    "           transformObjects =  list(\n",
    "            b = bins))\n",
    "\n",
    "print(\"Feature Engineering Completed.\")"
   ]
  },
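   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "A minimal local check of what bucketize does to one variable (illustration only): cut with the cutoffs padded by -Inf and Inf yields right-closed intervals, so a value equal to a cutoff falls in the lower bucket, and the bucket index is stored as a character:\n",
     "\n",
     "```r\n",
     "# lengthCreditHistory with its default cutoff of 8:\n",
     "cutoffs <- c(8)\n",
     "x <- c(5, 8, 9, 20)\n",
     "bucket <- as.character(as.numeric(cut(x, c(-Inf, cutoffs, Inf))))\n",
     "bucket   # \"1\" \"1\" \"2\" \"2\": 8 falls in bucket 1 because intervals are right-closed\n",
     "```"
    ]
   },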
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Step 3: Training, Scoring and Evaluating a Logistic Regression Model\n",
    "\n",
    "In this step we:\n",
    "\n",
    "**1.** Train a logistic regression classification model on the training set and save it to SQL. \n",
    " \n",
     "**2.** Score the logistic regression model on the test set. \n",
    "\n",
    "**3.** Evaluate the tested model.\n",
    "\n",
    "**Input:** Data set Merged_Features.\n",
    "\n",
    "**Output:** Logistic Regression Model, Predictions and Evaluation Metrics. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Convert strings to factors.\n",
    "Merged_Features_sql <- RxSqlServerData(table = \"Merged_Features\", connectionString = connection_string, stringsAsFactors = TRUE)\n",
    "\n",
    "## Get the column information. \n",
    "column_info <- rxCreateColInfo(Merged_Features_sql, sortLevels = TRUE)\n",
    "\n",
     "## Set the compute context to local to export the column_info list to SQL. \n",
    "rxSetComputeContext('local')\n",
    "  \n",
    "## Open an Odbc connection with SQL Server.\n",
    "OdbcModel <- RxOdbcData(table = \"Column_Info\", connectionString = connection_string)\n",
    "rxOpen(OdbcModel, \"w\")\n",
    "  \n",
    "## Drop the Column Info table if it exists. \n",
    "if(rxSqlServerTableExists(OdbcModel@table, OdbcModel@connectionString)) {\n",
    "    rxSqlServerDropTable(OdbcModel@table, OdbcModel@connectionString)\n",
    "}\n",
    "  \n",
    "## Create an empty Column_Info table. \n",
    "rxExecuteSQLDDL(OdbcModel, \n",
    "                sSQLString = paste(\" CREATE TABLE [\", OdbcModel@table, \"] (\",\n",
    "                                     \"     [id] varchar(200) not null, \",\n",
    "                                     \"     [value] varbinary(max), \",\n",
    "                                     \"     constraint unique_id2 unique (id))\",\n",
    "                                     sep = \"\")\n",
    ")\n",
    "  \n",
    "## Write the model to SQL. \n",
    "rxWriteObject(OdbcModel, \"Column Info\", column_info)\n",
    "  \n",
     "## Close the Odbc connection used. \n",
    "rxClose(OdbcModel)\n",
    "  \n",
    "# Set the compute context back to SQL. \n",
    "rxSetComputeContext(sql)\n",
    "  \n",
    "# Point to the training set. It will be created on the fly when training models. \n",
    "Train_sql <- RxSqlServerData(sqlQuery = \n",
    "                               \"SELECT *   \n",
    "                                FROM Merged_Features \n",
    "                                WHERE loanId IN (SELECT loanId from Hash_Id WHERE hashCode <= 70)\",\n",
    "                             connectionString = connection_string, colInfo = column_info)\n",
    "  \n",
    "# Point to the testing set. It will be created on the fly when testing models. \n",
    "Test_sql <- RxSqlServerData(sqlQuery = \n",
    "                              \"SELECT *   \n",
    "                               FROM Merged_Features \n",
    "                               WHERE loanId NOT IN (SELECT loanId from Hash_Id WHERE hashCode <= 70)\",\n",
    "                            connectionString = connection_string, colInfo = column_info)\n",
    "  \n",
    "print(\"Column information received.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Write the formula after removing variables not used in the modeling.\n",
     "## We remove the id variables, date, residentialState, term, and all the numeric variables that have been bucketed. \n",
    "variables_all <- rxGetVarNames(Train_sql)\n",
    "variables_to_remove <- c(\"loanId\", \"memberId\", \"loanStatus\", \"date\", \"residentialState\", \"term\",\n",
    "                         \"loanAmount\", \"interestRate\", \"monthlyPayment\", \"annualIncome\", \"dtiRatio\", \"lengthCreditHistory\",\n",
    "                         \"numTotalCreditLines\", \"numOpenCreditLines\", \"numOpenCreditLines1Year\", \"revolvingBalance\",\n",
    "                         \"revolvingUtilizationRate\", \"numDerogatoryRec\", \"numDelinquency2Years\", \"numChargeoff1year\", \n",
    "                         \"numInquiries6Mon\")\n",
    "  \n",
    "training_variables <- variables_all[!(variables_all %in% c(\"isBad\", variables_to_remove))]\n",
    "formula <- as.formula(paste(\"isBad ~\", paste(training_variables, collapse = \"+\")))\n",
    "\n",
    "print(\"Formula written.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Train the logistic regression model.\n",
    "logistic_model <- rxLogit(formula = formula,\n",
    "                          data = Train_sql,\n",
    "                          reportProgress = 0, \n",
    "                          initialValues = NA)\n",
    "\n",
     "## The rxLogisticRegression function from the MicrosoftML library can be used instead. \n",
     "## The regularization weights (l1Weight and l2Weight) can be modified for further optimization.\n",
     "## The included selectFeatures function can select a given number of optimal features based on a specified method;\n",
     "## the number of features to select and the selection method can be further optimized.\n",
    "  \n",
    "#library('MicrosoftML')\n",
    "#logistic_model <- rxLogisticRegression(formula = formula,\n",
    "#                                       data = Train_sql,\n",
    "#                                       type = \"binary\",\n",
    "#                                       l1Weight = 0.7,\n",
    "#                                       l2Weight = 0.7,\n",
    "#                                       mlTransforms = list(selectFeatures(formula, mode = mutualInformation(numFeaturesToKeep = 10))))\n",
    "  \n",
    "\n",
    "print(\"Training Logistic Regression done.\")"
   ]
  },
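   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "For intuition, rxLogit fits the same model family as base R's glm with a binomial logit link; on a small in-memory sample, an equivalent local call would look like this (toy data, illustration only, not part of the pipeline):\n",
     "\n",
     "```r\n",
     "# Local glm() analogue of rxLogit on a toy data frame.\n",
     "set.seed(42)\n",
     "toy <- data.frame(isBad = rbinom(200, 1, 0.3),\n",
     "                  loanAmountBucket = factor(sample(1:3, 200, replace = TRUE)))\n",
     "local_model <- glm(isBad ~ loanAmountBucket, data = toy, family = binomial(link = \"logit\"))\n",
     "coef(local_model)   # intercept plus one coefficient per non-reference bucket level\n",
     "```"
    ]
   },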
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Get the coefficients of the logistic regression formula.\n",
    "## NA means the variable has been dropped while building the model.\n",
    "coeff <- logistic_model$coefficients\n",
    "Logistic_Coeff <- data.frame(variable = names(coeff), coefficient = coeff, row.names = NULL)\n",
    "  \n",
    "## Order in decreasing order of absolute value of coefficients. \n",
    "Logistic_Coeff <- Logistic_Coeff[order(abs(Logistic_Coeff$coefficient), decreasing = TRUE),]\n",
    "  \n",
    "# Write the table to SQL. Compute Context should be set to local. \n",
    "rxSetComputeContext(local)\n",
    "Logistic_Coeff_sql <- RxSqlServerData(table = \"Logistic_Coeff\", connectionString = connection_string)\n",
    "rxDataStep(inData = Logistic_Coeff, outFile = Logistic_Coeff_sql, overwrite = TRUE)\n",
    "\n",
    "print(\"Logistic Regression Coefficients written to SQL.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Save the fitted model to SQL Server. \n",
    "\n",
    "## Open an ODBC connection with SQL Server.\n",
    "OdbcModel <- RxOdbcData(table = \"Model\", connectionString = connection_string)\n",
    "rxOpen(OdbcModel, \"w\")\n",
    "  \n",
    "## Drop the Model table if it exists. \n",
    "if(rxSqlServerTableExists(OdbcModel@table, OdbcModel@connectionString)) {\n",
    "    rxSqlServerDropTable(OdbcModel@table, OdbcModel@connectionString)\n",
    "}\n",
    "  \n",
    "## Create an empty Model table. \n",
    "rxExecuteSQLDDL(OdbcModel, \n",
    "                sSQLString = paste(\" CREATE TABLE [\", OdbcModel@table, \"] (\",\n",
    "                                   \"  [id] varchar(200) not null, \",\n",
    "                                   \"     [value] varbinary(max), \",\n",
    "                                   \"     constraint unique_id3 unique (id))\",\n",
    "                             sep = \"\")\n",
    "                )\n",
    "  \n",
    "## Write the model to SQL. \n",
    "rxWriteObject(OdbcModel, \"Logistic Regression\", logistic_model)\n",
    "\n",
    "## Close the ODBC connection. \n",
    "rxClose(OdbcModel)\n",
    "\n",
    "# Set the compute context back to SQL. \n",
    "rxSetComputeContext(sql)\n",
    "\n",
    "print(\"Model uploaded to SQL.\")"
   ]
  },
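  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## OPTIONAL sanity check (a sketch, not part of the pipeline): the saved model can be \n",
    "## read back from SQL with rxReadObject, using the same key it was written under.\n",
    "#OdbcModel <- RxOdbcData(table = \"Model\", connectionString = connection_string)\n",
    "#rxOpen(OdbcModel, \"r\")\n",
    "#logistic_model_check <- rxReadObject(OdbcModel, \"Logistic Regression\")\n",
    "#rxClose(OdbcModel)\n",
    "#print(class(logistic_model_check))"
   ]
  },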
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Logistic Regression Scoring\n",
    "\n",
    "# Make Predictions and save them to SQL.\n",
    "Predictions_Logistic_sql <- RxSqlServerData(table = \"Predictions_Logistic\", connectionString = connection_string)\n",
    "  \n",
    "rxPredict(logistic_model, \n",
    "          data = Test_sql, \n",
    "          outData = Predictions_Logistic_sql, \n",
    "          overwrite = TRUE, \n",
    "          type = \"response\", # If you used rxLogisticRegression, this argument should be removed. \n",
    "          extraVarsToWrite = c(\"isBad\", \"loanId\"))\n",
    "\n",
    "print(\"Scoring done.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Evaluation. \n",
    "\n",
    "## Import the prediction table and convert isBad to numeric for correct evaluation. \n",
    "Predictions <- rxImport(Predictions_Logistic_sql)\n",
    "Predictions$isBad <- as.numeric(as.character(Predictions$isBad))\n",
    "\n",
    "## Change the names of the variables in the predictions table if you used rxLogisticRegression.\n",
    "## Predictions <- Predictions[, c(1, 2, 5)]\n",
    "## colnames(Predictions) <- c(\"isBad\", \"loanId\", \"isBad_Pred\")\n",
    "\n",
    "## Set the Compute Context to local for evaluation. \n",
    "rxSetComputeContext(local)\n",
    "    \n",
    "print(\"Predictions imported.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## KS PLOT AND STATISTIC.\n",
    "\n",
    "# Split the data according to the observed value and get the cumulative distribution of predicted probabilities. \n",
    "Predictions0 <- Predictions[Predictions$isBad==0,]$isBad_Pred\n",
    "Predictions1 <- Predictions[Predictions$isBad==1,]$isBad_Pred\n",
    "    \n",
    "cdf0 <- ecdf(Predictions0)\n",
    "cdf1 <- ecdf(Predictions1)\n",
    "    \n",
    "# Compute the KS statistic and the corresponding points on the KS plot. \n",
    "    \n",
    "## Create a sequence of predicted probabilities in its range of values. \n",
    "minMax <- seq(min(Predictions0, Predictions1), max(Predictions0, Predictions1), length.out=length(Predictions0)) \n",
    "    \n",
    "## Compute KS, i.e. the largest distance between the two cumulative distributions. \n",
    "KS <- max(abs(cdf0(minMax) - cdf1(minMax))) \n",
    "print(sprintf(\"KS = %s\", KS))\n",
    "    \n",
    "## Find a predicted probability where the cumulative distributions have the biggest difference.  \n",
    "x0 <- minMax[which(abs(cdf0(minMax) - cdf1(minMax)) == KS )][1] \n",
    "    \n",
    "## Get the corresponding points on the plot. \n",
    "y0 <- cdf0(x0) \n",
    "y1 <- cdf1(x0) \n",
    "    \n",
    "# Plot the two cumulative distributions with the line between points of greatest distance. \n",
    "plot(cdf0, verticals = TRUE, do.points = FALSE, col = \"blue\", main = sprintf(\"KS Plot; KS = %s\", round(KS, digits = 3)), ylab = \"Cumulative Distribution Functions\", xlab = \"Predicted Probabilities\") \n",
    "plot(cdf1, verticals = TRUE, do.points = FALSE, col = \"green\", add = TRUE) \n",
    "legend(0.3, 0.8, c(\"isBad == 0\", \"isBad == 1\"), lty = c(1, 1), lwd = c(2.5, 2.5), col = c(\"blue\", \"green\"))\n",
    "points(c(x0, x0), c(y0, y1), pch = 16, col = \"red\") \n",
    "segments(x0, y0, x0, y1, col = \"red\", lty = \"dotted\") \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## CONFUSION MATRIX AND VARIOUS METRICS. \n",
    "\n",
    "# The cumulative distributions of predicted probabilities given observed values are the farthest apart for a score equal to x0.\n",
    "# We can then use x0 as a decision threshold for example. \n",
    "# Note that the choice of a decision threshold can be further optimized.\n",
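    "## For instance, the threshold could instead be chosen to maximize the F-score over a grid \n",
    "## (a sketch, not run here; isBad == 1 is the positive class):\n",
    "#thresholds <- seq(0.01, 0.99, 0.01)\n",
    "#fscores <- sapply(thresholds, function(t) {\n",
    "#    pred <- ifelse(Predictions$isBad_Pred >= t, 1, 0)\n",
    "#    tp <- sum(pred == 1 & Predictions$isBad == 1)\n",
    "#    fp <- sum(pred == 1 & Predictions$isBad == 0)\n",
    "#    fn <- sum(pred == 0 & Predictions$isBad == 1)\n",
    "#    2 * tp / (2 * tp + fp + fn)\n",
    "#})\n",
    "#best_threshold <- thresholds[which.max(fscores)]\n",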
    "    \n",
    "# Using the x0 point as a threshold, we compute the binary predictions to get the confusion matrix. \n",
    "Predictions$isBad_Pred_Binary <- ifelse(Predictions$isBad_Pred < x0, 0, 1)\n",
    "    \n",
    "confusion <- table(Predictions$isBad, Predictions$isBad_Pred_Binary, dnn = c(\"Observed\", \"Predicted\"))[c(\"0\", \"1\"), c(\"0\", \"1\")]\n",
    "print(confusion) \n",
    "## Class 1 (bad loan) is the positive class: rows are observed values, columns are predictions.\n",
    "tn <- confusion[1, 1] \n",
    "fp <- confusion[1, 2] \n",
    "fn <- confusion[2, 1] \n",
    "tp <- confusion[2, 2] \n",
    "accuracy <- (tp + tn) / (tp + fn + fp + tn) \n",
    "precision <- tp / (tp + fp) \n",
    "recall <- tp / (tp + fn) \n",
    "fscore <- 2 * (precision * recall) / (precision + recall) \n",
    "\n",
    "# Print the computed metrics.\n",
    "metrics <- c(\"Accuracy\" = accuracy, \n",
    "             \"Precision\" = precision, \n",
    "             \"Recall\" = recall, \n",
    "             \"F-Score\" = fscore,\n",
    "             \"Score Threshold\" = x0) \n",
    "\n",
    "print(metrics)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## ROC PLOT AND AUC.\n",
    "\n",
    "ROC <- rxRoc(actualVarName = \"isBad\", predVarNames = \"isBad_Pred\", data = Predictions, numBreaks = 1000)\n",
    "AUC <- rxAuc(ROC)\n",
    "print(sprintf(\"AUC = %s\", AUC))\n",
    "plot(ROC, title = \"ROC Curve for Logistic Regression\")\n",
    "    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## LIFT CHART. \n",
    "\n",
    "## The prediction and performance functions come from the ROCR package.\n",
    "pred <- prediction(predictions = Predictions$isBad_Pred, labels = Predictions$isBad, label.ordering = c(\"0\", \"1\"))\n",
    "perf <- performance(pred,  measure = \"lift\", x.measure = \"rpp\") \n",
    "plot(perf, main = c(\"Lift Chart\"))\n",
    "abline(h = 1.0, col = \"purple\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Step 4: Operational Metrics Computation and Scores Transformation\n",
    "\n",
    "In this step, we: \n",
    "\n",
    "**1.** Compute Operational Metrics: expected bad rate for various classification decision thresholds.  \n",
    "\n",
    "**2.** Apply a score transformation based on operational metrics. \n",
    "\n",
    "**Input:** Predictions table.\n",
    "\n",
    "**Output:** Operational Metrics and Transformed Scores."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Operational metrics are computed in the following way:\n",
    "\n",
    "**1.** Apply a sigmoid function to the output scores of the logistic regression, in order to spread them in [0,1].\n",
    "\n",
    "**2.** Compute bins for the scores, based on quantiles. \n",
    "\n",
    "**3.** Take each lower bound of each bin as a decision threshold for default loan classification, and compute the rate of bad loans among loans with a score higher than the threshold. "
   ]
  },
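  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "## A small self-contained sketch of the three steps above on artificial data \n",
    "## (illustration only, not part of the pipeline):\n",
    "#set.seed(0)\n",
    "#toy_scores <- runif(1000)                                     # stand-in for raw predicted probabilities\n",
    "#toy_spread <- 1 / (1 + exp(-20 * (toy_scores - 0.5)))         # step 1: sigmoid spread\n",
    "#toy_bins <- quantile(toy_spread, probs = seq(0, 0.99, 0.01))  # step 2: quantile bins\n",
    "#toy_labels <- rbinom(1000, 1, toy_scores)                     # artificial 0/1 outcomes\n",
    "#toy_badrate <- sapply(toy_bins, function(b) mean(toy_labels[toy_spread >= b])) # step 3\n",
    "#head(toy_badrate)"
   ]
  },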
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Space out the scores (predicted probability of default) for interpretability with a sigmoid.\n",
    "## Define the sigmoid: it is centered at 1.2*mean score to ensure a good spread of scores.  \n",
    "dev_test_avg_score <- mean(Predictions$isBad_Pred)\n",
    "sigmoid <- function(x){\n",
    " return(1/(1 + exp(-20*(x-1.2*dev_test_avg_score))))\n",
    "}\n",
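    "## By construction, sigmoid(1.2*dev_test_avg_score) = 1/(1 + exp(0)) = 0.5:\n",
    "## scores near 1.2 times the average score map to the middle of [0,1].\n",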
    "  \n",
    "## Apply the function.\n",
    "Predictions$transformedScore <- sigmoid(Predictions$isBad_Pred)\n",
    "\n",
    "## Changes can be observed with the histograms and summary statistics.\n",
    "#summary(Predictions$isBad_Pred)\n",
    "#hist(Predictions$isBad_Pred)\n",
    "#summary(Predictions$transformedScore)\n",
    "#hist(Predictions$transformedScore)\n",
    "\n",
    "## Save the average score on the test set for the Production stage. \n",
    "Scores_Average <- data.frame(avg = dev_test_avg_score)\n",
    "Scores_Average_sql <- RxSqlServerData(table = \"Scores_Average\", connectionString = connection_string)\n",
    "rxDataStep(inData = Scores_Average, outFile = Scores_Average_sql, overwrite = TRUE)\n",
    "  \n",
    "print(\"Scores spread out in [0,1].\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Compute operational metrics.\n",
    "  \n",
    "## Bin the scores based on quantiles. \n",
    "bins <- rxQuantile(\"transformedScore\", Predictions, probs = c(seq(0, 0.99, 0.01))) \n",
    "bins[[\"0%\"]] <- 0 \n",
    "  \n",
    "## We consider 100 decision thresholds: the lower bound of each bin.\n",
    "## Compute the expected rates of bad loans for loans with scores higher than each decision threshold. \n",
    "badrate <- rep(0, length(bins))\n",
    "for(i in 1:length(bins))\n",
    "{\n",
    " selected <- Predictions$isBad[Predictions$transformedScore >= bins[i]]\n",
    " badrate[i] <- sum(selected)/length(selected) \n",
    "}\n",
    "  \n",
    "## Save the data points to a data frame and load it to SQL.  \n",
    "Operational_Metrics <- data.frame(scorePercentile = names(bins), scoreCutoff = bins, badRate = badrate, row.names = NULL)\n",
    "Operational_Metrics_sql <- RxSqlServerData(table = \"Operational_Metrics\", connectionString = connection_string)\n",
    "rxDataStep(inData = Operational_Metrics, outFile = Operational_Metrics_sql, overwrite = TRUE)\n",
    "  \n",
    "print(\"Operational Metrics computed.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Apply the score transformation. \n",
    "  \n",
    "## Deal with percentiles 1 through 99 (the bottom 99 bins). \n",
    "for (i in seq(1, (nrow(Operational_Metrics) - 1))){\n",
    " rows <- which(Predictions$transformedScore <= Operational_Metrics$scoreCutoff[i + 1] & \n",
    "               Predictions$transformedScore > Operational_Metrics$scoreCutoff[i])\n",
    " Predictions[rows, c(\"scorePercentile\")] <- as.character(Operational_Metrics$scorePercentile[i + 1])\n",
    " Predictions[rows, c(\"badRate\")] <- Operational_Metrics$badRate[i]\n",
    " Predictions[rows, c(\"scoreCutoff\")] <- Operational_Metrics$scoreCutoff[i]\n",
    "}\n",
    "  \n",
    "## Deal with the top 1% of scores (last bucket). \n",
    "rows <- which(Predictions$transformedScore > Operational_Metrics$scoreCutoff[100])\n",
    "Predictions[rows, c(\"scorePercentile\")] <- \"Top 1%\"\n",
    "Predictions[rows, c(\"scoreCutoff\")] <- Operational_Metrics$scoreCutoff[100]\n",
    "Predictions[rows, c(\"badRate\")] <- Operational_Metrics$badRate[100]\n",
    " \n",
    "## Save the transformed scores to SQL. \n",
    "Scores_sql <- RxSqlServerData(table = \"Scores\", connectionString = connection_string)\n",
    "rxDataStep(inData = Predictions[, c(\"loanId\", \"transformedScore\", \"scorePercentile\", \"scoreCutoff\", \"badRate\", \"isBad\")], \n",
    "           outFile = Scores_sql, \n",
    "           overwrite = TRUE)\n",
    "\n",
    "print(\"Scores transformed.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Plot the rates of bad loans for various thresholds obtained through binning.  \n",
    "plot(Operational_Metrics$badRate, main = c(\"Bad Loan Rates Among Loans with Scores Higher than Decision Thresholds\"), xlab = \"Default Score Percentiles\", ylab = \"Expected Rate of Bad Loans\")\n",
    "\n",
    "## EXAMPLE: \n",
    "## If the score cutoff of the 91st score percentile is 0.9834, we read a bad rate of 0.6449.  \n",
    "## This means that if 0.9834 is used as the threshold to classify loans as bad, the bad rate would be 64.49%.  \n",
    "## This bad rate is equal to the number of observed bad loans over the total number of loans with a score greater than the threshold. \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Close the ODBC connection to the master database. \n",
    "rxClose(outOdbcDS)"
   ]
  }
 ],
 "metadata": {
  "celltoolbar": "Raw Cell Format",
  "kernelspec": {
   "display_name": "R",
   "language": "R",
   "name": "ir"
  },
  "language_info": {
   "codemirror_mode": "r",
   "file_extension": ".r",
   "mimetype": "text/x-r-source",
   "name": "R",
   "pygments_lexer": "r",
   "version": "3.4.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
