{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Prediction of online shoppers’ purchasing intention\n",
    "*by Georgy Lazarev* (**mlcourse slackname: jorgy**)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As the title goes the task was to predict whether the user is intended to make a purchase on Internet shop. Data for this project can be found [here](https://archive.ics.uci.edu/ml/datasets/Online+Shoppers%E2%80%99+Purchasing+Intention+Dataset). "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Dataset and features description"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We have a binary classification problem which measures user intention to finalize transaction. Originally this dataset was used in [research](https://link.springer.com/article/10.1007/s00521-018-3523-0) where there was an attempt to build a system consisting of two modules. The first one is to determine visitor's likelihood to leave the site. If probability of that is higher that set threshold, than the second module should predict whether or not this person has commercial intention. As authors of this paper state data is real and was collected and provided by retailer. Company might be interested in system which in real time can offer a special offer to client with positive commercial intention."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Data formed in such way that each session would correspond to different user in 1 year period to avoid any tendency. \n",
    "Target variable is called 'Revenue' and takes two values - 0 and 1, whether or not session ended with purchase.\n",
    "There are 10 numeric and 7 categorical features:\n",
    "\n",
    " ***Numeric:***\n",
    "\n",
    "*first six features were derived from the URL information of the pages visited by the user. They were updated each time visitor moved from one page to another till the end of the session.* \n",
    "\n",
    " - **Administrative**  - Number of pages about account management visited by person\n",
    "    \n",
    "    \n",
    " - **Administrative duration** - Total amount of time (in seconds) spent by the visitor on administrative pages  \n",
    " \n",
    " \n",
    " - **Informational**  - Number of pages in session about Web site, communication and address information of the shopping site\n",
    " \n",
    " \n",
    " - **Informational duration** - time (in seconds) spent on informational pages \n",
    " \n",
    " \n",
    " - **Product related**  - Number of pages concerning product visited\n",
    " \n",
    " \n",
    " - **Product related duration** - time spent on product related pages\n",
    " \n",
    " \n",
    "*next three features were  measured by \"Google Analytics\" for each page in the online-shop website:*\n",
    "\n",
    "\n",
    " - **Bounce rate**  - Average bounce rate value of the pages visited by the visitor. Bounce rate itself is percentage of visitors    who enter the site from that page and then leave\n",
    " \n",
    " \n",
    " - **Exit rate**  - Average exit rate value of the pages visited by the visitor. Value of exit rate for page is percentage of all views of this page that were last in the session\n",
    " \n",
    " \n",
    " - **Page value**  - Average page value of the pages visited. Indicates how valuable a specific page is to shop holder in monetary terms\n",
    " \n",
    " \n",
    " \n",
    " \n",
    " - **Special day**  - Closeness of the site visiting time to a special day. The value of this attribute is determined by considering the dynamics of e-commerce such as the duration between the order date and delivery date. for Valentina’s day, this value takes a nonzero value between February 2 and February 12, zero before and after this date unless it is close to another special day, and its maximum value of 1 on February 8.\n",
    " \n",
    "\n",
    "***Categorical:***\n",
    "\n",
    " - **OperatingSystems**  - Operating system of the visitor\n",
    " \n",
    " \n",
    " - **Browser**  - Browser of the visitor \n",
    " \n",
    " \n",
    " - **Region** - Geographic region from which the session has been started by the visitor\n",
    " \n",
    " \n",
    " - **TrafficType** - Traffic source by which the visitor has arrived at the Web site (e.g., banner, SMS, direct)\n",
    " \n",
    " \n",
    " - **VisitorType** - whether the visitor is the new or returning (or not specified)\n",
    " \n",
    " \n",
    " - **Weekend**  - Boolean value indicating whether the date of the visit is weekend \n",
    " \n",
    " \n",
    " - **Month**  - Month value of the visit date \n",
    " \n",
    " \n",
    " Dataset was formed such way that each session correpsonds to unique person. That was done to prevent any possible trends"
   ]
  },
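  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the *Special day* definition concrete, here is a minimal illustrative sketch of how such a value could be computed. The window and the shape of the ramp are guesses based on the Valentine's day example above, and the function `special_day_value` is hypothetical (it is not used elsewhere in this notebook):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from datetime import date\n",
    "\n",
    "def special_day_value(visit, special=date(2019, 2, 14)):\n",
    "    # hypothetical sketch: the value ramps up towards ~6 days before the\n",
    "    # special day (orders placed then arrive in time) and back down to zero\n",
    "    days_before = (special - visit).days\n",
    "    if not 2 <= days_before <= 12:  # outside the window -> 0\n",
    "        return 0.0\n",
    "    return 1 - abs(days_before - 6) / 8\n",
    "\n",
    "special_day_value(date(2019, 2, 8))  # maximum value of 1.0 on February 8"
   ]
  },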
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Exploratory data analysis"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "import seaborn as sns\n",
    "\n",
    "%matplotlib inline\n",
    "\n",
    "import warnings\n",
    "warnings.filterwarnings('ignore')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#load data\n",
    "df=pd.read_csv('online_shoppers_intention (1).csv')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's look at dataset:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df.columns"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df.info()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There is no missing data in dataset. \n",
    "\n",
    "Now let's look at distribution of target value:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sns.countplot(df.Revenue)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df.Revenue.value_counts(normalize=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Seems that we deal with somewhat imbalanced classes. There are more visitors that leave shop website without purchasing anything and that's not surprising.\n",
    "\n",
    "Target value will be converted to binary type"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#list of numeric features\n",
    "num_feats=['Administrative', 'Administrative_Duration', 'Informational',\n",
    "       'Informational_Duration', 'ProductRelated', 'ProductRelated_Duration',\n",
    "       'BounceRates', 'ExitRates', 'PageValues', 'SpecialDay']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df[num_feats].describe()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We certainly will scale numerical features. As we see they are of different scales\n",
    "\n",
    "Now let's look at categorical features:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cat_feats=['Month','OperatingSystems', 'Browser', 'Region', 'TrafficType', 'VisitorType','Weekend']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df[cat_feats].head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As we see, some features are already label-encoded. Some are stll in string format. *Weekend* will be converted to binary."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df[cat_feats].astype('category').describe()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are two interesting observations: number of months present and number of visitor types.."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df.Month.unique()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "January and April are missing."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df.VisitorType.unique()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "'Other'? Let's see how many such values in our dataset:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df.VisitorType.value_counts()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That makes no sense though. We'll get back to that later."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df.groupby('VisitorType')['Revenue'].mean()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A bit surprising. I expected percentage of potentially beneficial clients would be higher among visitors who returned to website other than new ones. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sum(df.loc[df.Revenue==1].Administrative==0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sum(df.loc[df.Revenue==1].Informational==0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sum(df.loc[df.Revenue==1].ProductRelated==0)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That makes sense. Only six people made purchase and at the same time din't visit any pages related to products."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "(df.Administrative==0).sum()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "(df.Administrative_Duration==0).sum()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So, there were cases when number of pages was greater than 0 but time spent was 0."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df.loc[df.Administrative>0].loc[df.Administrative_Duration==0].Administrative.value_counts()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So theoretically it is possible.\n",
    "\n",
    "*Special day* feature shows closeness to ..special days, right. We might think that this feature will positively affect target value"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df.loc[df.SpecialDay>0].Revenue.value_counts(normalize=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "How come? That's again not what I expected. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df[['Revenue','SpecialDay']].corr()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That's actualy strange.."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Primary visual data analysis"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here goes pairwise Pearson-correlation of numerical features:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "corrl=num_feats.copy()\n",
    "corrl.append('Revenue')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sns.heatmap(df[corrl].corr())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Yes, some features indeed are highly correlated!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df[['ProductRelated', 'ProductRelated_Duration','BounceRates', 'ExitRates']].corr()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fig, axes = plt.subplots(ncols=4, nrows = 2, figsize=(24, 18))\n",
    "for i in range(len(cat_feats)):\n",
    "    sns.countplot(df[cat_feats[i]],ax=axes[i//4, i%4])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Well, I'd say it's difficult to draw any concrete conclusions from this plot. There are leaders in each groups .\n",
    "Now let's explore some features a bit more with respect to target value:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sns.countplot(df.Weekend,hue=df.Revenue)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "plt.figure(figsize=(15,15))\n",
    "plt.subplot(321)\n",
    "df.groupby('Month').Revenue.mean().plot.bar()\n",
    "plt.subplot(322)\n",
    "df.groupby('Browser').Revenue.mean().plot.bar()\n",
    "plt.subplot(323)\n",
    "df.groupby('TrafficType').Revenue.mean().plot.bar()\n",
    "plt.subplot(324)\n",
    "df.groupby('OperatingSystems').Revenue.mean().plot.bar()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Percentage of visitors who made purchases in November seems a bit higher in comparison to other months. In February there was a small number of visitors and too few of them ended up buying something. Maybe it was bad advertising and price policy that was a reason"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As for other features distribution of session results is consistent, as it seems. It's difficult to interpret those result in a sense that feature values are encoded by LavelEncoding already so we don't really know which real meanings stand behind them. Yep."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tmp=['Revenue','Administrative_Duration','Informational_Duration','ProductRelated_Duration','BounceRates','ExitRates','PageValues']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "r=['Revenue','Administrative','Administrative_Duration','Informational','Informational_Duration','ProductRelated','ProductRelated_Duration','BounceRates','ExitRates','PageValues']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sns.pairplot(df[r],hue='Revenue',diag_kind='hist')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In general, trends here make sense. Lower Bounce and Exit Rates corresponds to more frequent transactions made. On the other hand higher PageValues not always lead to commercial benefit. Also in most distributions and pairplots related to website pages we see cases where visitor spent too much time on website but still quit it without purchase. That happens in real life too. Thus, as for outliers, I guess I can assume there is no such."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(10,20))   \n",
    "for i,v in enumerate(range(len(num_feats))):\n",
    "    v = v+1\n",
    "    ax1 = plt.subplot(len(num_feats),1,v)\n",
    "    ax1=sns.distplot(df[num_feats[i]])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Right-skewed. All of them."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Insights and found dependencies"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. There is no sessions recorded for January and April. By numbers it seems that in November&October bigger percentage of sessions ended up with purchases. \n",
    "2. At the same time SpecialDay feature is shows negative effect on target value. We can explain it but assuming that most visitor prefer to shop in advance. \n",
    "3. There are two pairs of highly correlated features. It's worth checking later if deleting them will improve our models.\n",
    "4. Almost 25% percent of new visitors made transactions in contrast to ~14% of returning ones. \n",
    "5. Also we have 85 instances which have VisitorType as 'Other'. As there are no sensible options except New and Returning, this fact does mean that information wasn't correctly derived. As this is only 0.6894% of the whole data , let's take a deep breath and drop these instances away.\n",
    "6. I got an impression that all features are right-skewed. It can be useful later to do a log transformation."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Metric choice"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As we are dealing with imbalanced class accuracy is not the best option. Due to task specificty company doesn't want to miss potential buyers. So the cost of showing the cliend special offer is lower than loss of left visitors aimed to make an purchase.\n",
    "Moreover, it is a good idea to not depend on threshold for making decision about class. Probabilities for class can be considered as intention scores and so special offers can be adjusted to degree of visitor intention. So ROC AUC seems pretty nice for our task."
   ]
  },
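  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick illustration of the threshold-independence mentioned above: ROC AUC is computed from the ranking of the predicted scores, not from hard 0/1 predictions, so any monotonic rescaling of the scores leaves it unchanged. A toy example (the numbers below are made up):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.metrics import roc_auc_score\n",
    "import numpy as np\n",
    "\n",
    "y_true = np.array([0, 0, 1, 1])\n",
    "scores = np.array([0.1, 0.4, 0.35, 0.8])  # predicted probabilities of class 1\n",
    "\n",
    "# AUC depends only on the ordering of the scores, so halving them changes nothing:\n",
    "roc_auc_score(y_true, scores), roc_auc_score(y_true, scores / 2)"
   ]
  },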
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Model Choice"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Following models were selected:\n",
    "- Logistic Regression - classic and interpretable. We'll do OHE for categorical features and scale numeric.\n",
    "- Random Forest - tree based model in contrast to LR, worth trying (we have categorical features as well as numerical). No need for OHE and scaling. \n",
    "- XGBoost Classifier - because why not? "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Data preprocessing "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.preprocessing import StandardScaler"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, we'll convert to boolean features to binary type"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df[['Weekend','Revenue']]=df[['Weekend','Revenue']].apply(lambda x:x.astype(int))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Then instances with *VisitorType* as 'Returning Visitor\" will be droped away:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df=df.drop(df.loc[df.VisitorType=='Other'].index)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "vt=df.VisitorType.map({'New_Visitor':0,'Returning_Visitor':1})"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There will be different prepocessing as we work with models based on different approaches.\n",
    "\n",
    "For logistic regression it's a good idea to scale our numeric features and do One_Hot Encoding on categorical ones. To avoid data leakage scaling will be done after splitting data. As for OHE and LabelEncoding (for tree based models), I suppose we can do it before splitting as we know range of all possible values of categorical features, so there is no data leakage to prevent."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dummies=pd.concat([pd.get_dummies(df.Month,drop_first=True),\n",
    "                   pd.get_dummies(df.Browser,drop_first=True,prefix='Browser'),\n",
    "                   pd.get_dummies(df.Region,drop_first=True,prefix='Region'),\n",
    "                   pd.get_dummies(df.OperatingSystems,drop_first=True,prefix='OS'),\n",
    "                   pd.get_dummies(df.TrafficType,drop_first=True,prefix='TT')],axis=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dummies.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "target=df.Revenue"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "*feats_logreg* will contain all features for Logistic Regression. *feats_tb* is for tree-based models "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "feats_logreg=pd.concat([df[num_feats],dummies,df['Weekend'],vt],axis=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "feats_logreg.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we'll split our data.  *stratify* used due to imbalance in classes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train_logreg_,X_test_logreg_,y_train_logreg,y_test_logreg=train_test_split(feats_logreg,\n",
    "                                        target,test_size=0.3,random_state=17,stratify=target)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's check distribution of classes in train and test sets:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.subplot(121)\n",
    "y_train_logreg.value_counts(normalize=True).plot.bar()\n",
    "plt.subplot(122)\n",
    "y_test_logreg.value_counts(normalize=True).plot.bar()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Yep, that seems right.\n",
    "\n",
    "Now test set will be split into two same-sized sets: one for validation and other for final test. We won't test our models on second one until the end."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_valid_logreg_,X_test_logreg_,y_valid_logreg,y_test_logreg=train_test_split(X_test_logreg_,\n",
    "                                                            y_test_logreg,test_size=0.5,random_state=17)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "scaler=StandardScaler()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train_logreg=X_train_logreg_.copy(deep=True)\n",
    "X_valid_logreg=X_valid_logreg_.copy(deep=True)\n",
    "X_test_logreg=X_test_logreg_.copy(deep=True)\n",
    "\n",
    "\n",
    "X_train_logreg[num_feats]=scaler.fit_transform(X_train_logreg[num_feats])\n",
    "X_valid_logreg[num_feats]=scaler.transform(X_valid_logreg[num_feats])\n",
    "X_test_logreg[num_feats]=scaler.transform(X_valid_logreg[num_feats])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train_logreg.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For tree based models our preprocessing will include only LabelEncoding of *month*. Other Categorical features except boolean one are already label-encoded. Splitting into ***3*** sets is the same."
   ]
  },
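  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One caveat worth keeping in mind: LabelEncoder assigns codes in alphabetical order of the labels, not in chronological order, so the encoded months will not follow the calendar. For tree-based models this is usually tolerable, but it is good to be aware of. A small illustration on a subset of the month labels:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.preprocessing import LabelEncoder\n",
    "\n",
    "demo = LabelEncoder()\n",
    "# 'Dec' sorts first alphabetically, so it gets code 0\n",
    "demo.fit_transform(['Feb', 'Mar', 'May', 'Nov', 'Dec'])"
   ]
  },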
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.preprocessing import LabelEncoder"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "le=LabelEncoder()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "feats_tb=pd.concat([df[num_feats],df[['Weekend','TrafficType','OperatingSystems','Browser','Region']],vt],axis=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "feats_tb['month_enc']=le.fit_transform(df.Month)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "feats_tb.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's split our data. *stratify* used due to imbalance in classes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train_tb,X_test_tb,y_train_tb,y_test_tb=train_test_split(feats_tb,\n",
    "                                        target,test_size=0.3,random_state=17,stratify=target)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_valid_tb,X_test_tb,y_valid_tb,y_test_tb=train_test_split(X_test_tb,\n",
    "                                        y_test_tb,test_size=0.5,random_state=17)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.subplot(121)\n",
    "y_train_tb.value_counts(normalize=True).plot.bar()\n",
    "plt.subplot(122)\n",
    "y_valid_tb.value_counts(normalize=True).plot.bar()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Cross-validation and adjustment of model hyperparameters"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.metrics import roc_auc_score\n",
    "from sklearn.model_selection import StratifiedKFold\n",
    "from sklearn.model_selection import cross_val_score, GridSearchCV\n",
    "from sklearn.tree import DecisionTreeClassifier\n",
    "from sklearn.ensemble import RandomForestClassifier"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We'll use Statified cross validation again due to imbalanced classes. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "skf=StratifiedKFold(n_splits=5,random_state=17)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Logistic Regression"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's at first train a basic LogReg without tuning hyperparametes, creating new features to establish sort of baseline:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "pr_lr=LogisticRegression(class_weight='balanced')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "pr_lr.fit(X_train_logreg_,y_train_logreg)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('Mean ROC-AUC on cross-validation:', np.mean(cross_val_score(pr_lr,X_train_logreg_,y_train_logreg,scoring='roc_auc',cv=skf)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('ROC AUC on valid set :', roc_auc_score(y_valid_logreg,pr_lr.predict_proba(X_valid_logreg_)[:,1]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Tuning hyperparameters"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "param_grid={ 'C':np.logspace(-2,1,7), 'class_weight':[None, 'balanced']}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "gs = GridSearchCV(pr_lr, param_grid, scoring='roc_auc', n_jobs=-1, cv=skf)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "gs.fit(X_train_logreg_,y_train_logreg)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "display(gs.best_params_)\n",
    "display(gs.best_score_)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now I'd select more narrow range for *C*:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "gs = GridSearchCV(pr_lr, {'C':np.linspace(0.05,0.2,10),'class_weight':['balanced']}, scoring='roc_auc', n_jobs=-1, cv=skf)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "gs.fit(X_train_logreg_,y_train_logreg)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('Best found parameters for LogReg:',gs.best_params_)\n",
    "print('Best score found for LogReg with GridSearch:',gs.best_score_)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('ROC AUC on valid set :', roc_auc_score(y_valid_logreg,gs.predict_proba(X_valid_logreg_)[:,1]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Increased. ~0.002"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "***Oversampling minority class***\n",
    "\n",
    "This is known technique to handle imbalanced class and implemented in ***imbalanced-learn*** [package](https://imbalanced-learn.readthedocs.io/en/stable/). We'll just create new synthetic data instance corresponding to '1' class."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from imblearn.over_sampling import SMOTE"
   ]
  },
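  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The core idea of SMOTE is to synthesize new minority-class points by interpolating between a minority sample and one of its minority-class nearest neighbors. Below is a bare-bones numpy sketch of that interpolation step; `smote_like_sample` is illustrative only, and the real implementation in imbalanced-learn also handles neighbor search, sampling ratios, etc.:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rng = np.random.RandomState(17)\n",
    "\n",
    "def smote_like_sample(x, neighbor):\n",
    "    # the synthetic point lies somewhere on the segment between x and neighbor\n",
    "    gap = rng.uniform(0, 1)\n",
    "    return x + gap * (neighbor - x)\n",
    "\n",
    "smote_like_sample(np.array([1.0, 2.0]), np.array([3.0, 4.0]))"
   ]
  },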
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "I'm going to check whether oversampling improves LR perfomance"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "best_params=[]\n",
    "best_scores=[]\n",
    "rocs=[]\n",
    "for d in np.linspace(0.4,1,5):\n",
    "    sm=SMOTE(sampling_strategy=d,random_state=17)\n",
    "    X_train_logreg_res, y_train_logreg_res = sm.fit_sample(X_train_logreg, y_train_logreg)\n",
    "    lr=LogisticRegression()\n",
    "    lr.fit(X_train_logreg_res,y_train_logreg_res)\n",
    "    gs=GridSearchCV(lr, {'C':np.linspace(0.05,1,11)}, scoring='roc_auc', n_jobs=-1, cv=skf)\n",
    "    gs.fit(X_train_logreg_res,y_train_logreg_res)\n",
    "    best_params.append(gs.best_params_)\n",
    "    best_scores.append(gs.best_score_)\n",
    "    rocs.append(roc_auc_score(y_valid_logreg,gs.predict_proba(X_valid_logreg)[:,1]))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "max(best_scores)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "max(rocs)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "LogReg doesn't perform better after oversampling, so we won't use it."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Random Forest\n",
    "\n",
    "For this and for XGBoost we use data with postfix *tb* (tree-based). Data is not scaled, categorical features are Label-encoded."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.ensemble import RandomForestClassifier\n",
    "from sklearn.model_selection import RandomizedSearchCV"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rfc=RandomForestClassifier()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rfc.fit(X_train_tb,y_train_tb)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "roc_auc_score(y_valid_tb,rfc.predict_proba(X_valid_tb)[:,1])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Time for grid search:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "param_grid = {\n",
    "    \"n_estimators\": [500],\n",
    "    \"max_depth\": [4,5,10,15],\n",
    "    \"min_samples_split\": [2,3],\n",
    "    \"min_samples_leaf\": [2],  # also tried 1 and 3\n",
    "    'max_features': [1,'auto','log2'], \n",
    "    'criterion': ['gini'] }"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "gs = GridSearchCV(rfc, param_grid, scoring='roc_auc', n_jobs=-1, cv=skf, verbose=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "gs.fit(X_train_tb, y_train_tb)\n",
    "print('Best parameters for Random Forest: ', gs.best_params_)\n",
    "print('Best score: ', gs.best_score_)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "roc_auc_score(y_valid_tb,gs.predict_proba(X_valid_tb)[:,1])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Seems much better. Will XGBoost beat this?\n",
    "\n",
    "We'll save the best Random Forest version for future reference."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rfc=gs.best_estimator_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rfc.fit(X_train_tb,y_train_tb)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### XGBoost Classifier"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from xgboost import XGBClassifier"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "xgbclf = XGBClassifier(random_state=17, n_jobs=-1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "xgbclf.fit(X_train_tb,y_train_tb)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "roc_auc_score(y_valid_tb,xgbclf.predict_proba(X_valid_tb)[:,1])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "param_grid = {\n",
    "    'max_depth': [2,3,4,5], \n",
    "    'n_estimators': [50,100,150,300], \n",
    "    'learning_rate':[0.01,0.05,0.1], \n",
    "    'reg_alpha': [0, 0.1, 0.2],\n",
    "    'gamma': [0,1]\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "gs = GridSearchCV(xgbclf, param_grid, scoring='roc_auc', n_jobs=-1, cv=skf, verbose=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "gs.fit(X_train_tb, y_train_tb)\n",
    "print('Best parameters for XGBoost Classifier: ', gs.best_params_)\n",
    "print('Best score for XGBoost Classifier: ', gs.best_score_)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "roc_auc_score(y_valid_tb,gs.predict_proba(X_valid_tb)[:,1])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A bit worse than the untuned version."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "OK, the basic XGBoost without tuning shows the best result among the three models. But there is still room for improvement for Logistic Regression, so we'll get back to it once more."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Logistic Regression 2.0"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As you might remember, all our numeric features are right-skewed, so let's see if a log transformation improves model performance. *_lt* stands for log-transformed."
   ]
  },
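  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of why `np.log1p` helps, a small sketch on synthetic right-skewed data (not our dataset) showing that the sample skewness shrinks after the transform:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def sample_skewness(x):\n",
    "    # third standardized moment\n",
    "    x = np.asarray(x, dtype=float)\n",
    "    z = (x - x.mean()) / x.std()\n",
    "    return np.mean(z ** 3)\n",
    "\n",
    "rng = np.random.RandomState(17)\n",
    "x = rng.exponential(scale=5.0, size=10000)  # heavily right-skewed\n",
    "\n",
    "print(sample_skewness(x))            # close to 2, the exponential's skewness\n",
    "print(sample_skewness(np.log1p(x)))  # much smaller in magnitude\n",
    "```"
   ]
  },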
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(10,20))\n",
    "for i, feat in enumerate(num_feats):\n",
    "    plt.subplot(len(num_feats), 1, i + 1)\n",
    "    sns.distplot(np.log1p(df[feat]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That's better."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#we'll do transformation over a copy of dataset. '_' stands for version before scaling\n",
    "X_train_logreg_lt=X_train_logreg_.copy(deep=True)\n",
    "X_valid_logreg_lt=X_valid_logreg_.copy(deep=True)\n",
    "X_test_logreg_lt=X_test_logreg_.copy(deep=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train_logreg_lt[num_feats]=np.log1p(X_train_logreg_lt[num_feats])\n",
    "X_valid_logreg_lt[num_feats]=np.log1p(X_valid_logreg_lt[num_feats])\n",
    "X_test_logreg_lt[num_feats]=np.log1p(X_test_logreg_lt[num_feats])\n",
    "\n",
    "X_train_logreg_lt[num_feats]=scaler.fit_transform(X_train_logreg_lt[num_feats])\n",
    "X_valid_logreg_lt[num_feats]=scaler.transform(X_valid_logreg_lt[num_feats])\n",
    "X_test_logreg_lt[num_feats]=scaler.transform(X_test_logreg_lt[num_feats])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lr=LogisticRegression(class_weight='balanced',C=0.05)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lr.fit(X_train_logreg_lt,y_train_logreg)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('Mean ROC AUC on dataset after log transformation of numeric features',\n",
    "      np.mean(cross_val_score(lr,X_train_logreg_lt,y_train_logreg,scoring='roc_auc',cv=skf)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('ROC AUC on valid set',roc_auc_score(y_valid_logreg,lr.predict_proba(X_valid_logreg_lt)[:,1]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That's quite an improvement (~1.3%!) compared with the very first baseline LogReg before grid search (~0.895). We'll keep the transformed dataset for further exploration."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "***Feature selection***"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There were two pairs of highly correlated numerical features: *ProductRelated - ProductRelated_Duration* and *BounceRates - ExitRates*. Maybe deleting one feature from each pair will improve the model."
   ]
  },
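  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Such pairs can also be found programmatically. A minimal sketch (toy data with hypothetical columns, not our dataset) of scanning the absolute correlation matrix for pairs above a threshold:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "rng = np.random.RandomState(17)\n",
    "n = 200\n",
    "a = rng.rand(n)\n",
    "df_toy = pd.DataFrame({\n",
    "    'ProductRelated': a,\n",
    "    'ProductRelated_Duration': a * 60 + rng.rand(n),  # near-duplicate of a\n",
    "    'PageValues': rng.rand(n),                        # unrelated\n",
    "})\n",
    "\n",
    "corr = df_toy.corr().abs()\n",
    "pairs = [(c1, c2) for i, c1 in enumerate(corr.columns)\n",
    "         for c2 in corr.columns[i + 1:] if corr.loc[c1, c2] > 0.9]\n",
    "print(pairs)  # only the ProductRelated pair is flagged\n",
    "```"
   ]
  },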
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#we'll make again a transformation on copy\n",
    "X_train_logreg_copy=X_train_logreg_lt.copy(deep=True)\n",
    "X_valid_logreg_copy=X_valid_logreg_lt.copy(deep=True)\n",
    "X_test_logreg_copy=X_test_logreg_lt.copy(deep=True)\n",
    "\n",
    "X_train_logreg_copy.drop(['ProductRelated','BounceRates'],axis=1,inplace=True)\n",
    "X_valid_logreg_copy.drop(['ProductRelated','BounceRates'],axis=1,inplace=True)\n",
    "X_test_logreg_copy.drop(['ProductRelated','BounceRates'],axis=1,inplace=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lr=LogisticRegression(class_weight='balanced',C=0.05)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lr.fit(X_train_logreg_copy,y_train_logreg)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('Mean ROC AUC on dataset after deleting \"ProductRelated\" and \"BounceRates\"',np.mean(cross_val_score(lr,X_train_logreg_copy,y_train_logreg,scoring='roc_auc',cv=skf)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('ROC AUC on valid set',roc_auc_score(y_valid_logreg,lr.predict_proba(X_valid_logreg_copy)[:,1]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The cross-validation results haven't changed, but the hold-out score slightly decreased. Since there is no clear gain, we'll keep the old feature set.\n",
    "Let's refer to our RandomForest and XGBoost models and see which features were the least important."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "feat_names=['Administrative','Administrative_Duration','Informational','Informational_Duration','ProductRelated','ProductRelated_Duration',\n",
    " 'BounceRates','ExitRates','PageValues','SpecialDay','Weekend','TrafficType','OperatingSystems','Browser','Region','VisitorType',\n",
    "           'month']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rfc_feat_imp=dict(zip(feat_names, rfc.feature_importances_))\n",
    "xgb_feat_imp=dict(zip(feat_names, xgbclf.feature_importances_))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(20,15))\n",
    "plt.subplot(211)\n",
    "plt.bar(range(len(feat_names)),list(rfc_feat_imp.values()),tick_label=list(rfc_feat_imp.keys()))\n",
    "plt.xticks(rotation=90)\n",
    "plt.subplot(212)\n",
    "plt.bar(range(len(feat_names)),list(xgb_feat_imp.values()),tick_label=list(xgb_feat_imp.keys()))\n",
    "plt.xticks(rotation=90)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We'll drop *Informational_Duration*, *Weekend*, *Browser*, *OperatingSystems* and *Region* (well, the dummy columns for the last three)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train_logreg_copy=X_train_logreg_lt.copy(deep=True)\n",
    "X_valid_logreg_copy=X_valid_logreg_lt.copy(deep=True)\n",
    "X_test_logreg_copy=X_test_logreg_lt.copy(deep=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cols_to_drop = (['Informational_Duration', 'Weekend']\n",
    "                + ['Browser_%d' % i for i in range(2, 14)]\n",
    "                + ['OS_%d' % i for i in range(2, 9)]\n",
    "                + ['Region_%d' % i for i in range(2, 10)])\n",
    "\n",
    "for df_ in (X_train_logreg_copy, X_valid_logreg_copy, X_test_logreg_copy):\n",
    "    df_.drop(cols_to_drop, axis=1, inplace=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lr=LogisticRegression(class_weight='balanced',C=0.1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lr.fit(X_train_logreg_copy,y_train_logreg)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "np.mean(cross_val_score(lr,X_train_logreg_copy,y_train_logreg,scoring='roc_auc',cv=skf))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "roc_auc_score(y_valid_logreg,lr.predict_proba(X_valid_logreg_copy)[:,1])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Getting rid of the least important features (from the RandomForestClassifier's perspective) gave a small improvement in logistic regression performance. But it's still not close to the Random Forest or XGBoost classifiers."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "gs = GridSearchCV(lr, {'C':np.logspace(-2,1,10)}, scoring='roc_auc', n_jobs=-1, cv=skf)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "gs.fit(X_train_logreg_copy,y_train_logreg)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "gs.best_score_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('ROC AUC on valid set :', roc_auc_score(y_valid_logreg,gs.predict_proba(X_valid_logreg_copy)[:,1]))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lr=gs.best_estimator_"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Well, OK. We can conclude that this is now the best version of Logistic Regression.\n",
    "The log transformation and the feature selection based on tree models were the right moves, while deleting the two most correlated features was not.\n",
    "\n",
    "XGBoost is the best model so far, and Logistic Regression is still the worst. We'll keep all of them for the experiments with engineering new features."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Creation of new features and description of this process"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Logistic Regression"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "I'll try to make several interaction features by hand and see what happens. Intuitively, I suppose that a feature combining the amount of time a visitor spends on ProductRelated pages with the visitor type might be useful for determining that visitor's intention. The interaction between VisitorType and PageValues might be important too (PageValues itself is quite important, as we saw in the previous plots)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#_wn = with new features\n",
    "X_train_logreg_wn=X_train_logreg_copy.copy(deep=True)\n",
    "X_valid_logreg_wn=X_valid_logreg_copy.copy(deep=True)\n",
    "X_test_logreg_wn=X_test_logreg_copy.copy(deep=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train_logreg_wn['interfeat1']=X_train_logreg_wn.VisitorType*X_train_logreg_wn.ProductRelated\n",
    "X_valid_logreg_wn['interfeat1']=X_valid_logreg_wn.VisitorType*X_valid_logreg_wn.ProductRelated\n",
    "X_test_logreg_wn['interfeat1']=X_test_logreg_wn.VisitorType*X_test_logreg_wn.ProductRelated\n",
    "\n",
    "X_train_logreg_wn['condfeat1']=X_train_logreg_wn.VisitorType*X_train_logreg_wn.PageValues\n",
    "X_valid_logreg_wn['condfeat1']=X_valid_logreg_wn.VisitorType*X_valid_logreg_wn.PageValues\n",
    "X_test_logreg_wn['condfeat1']=X_test_logreg_wn.VisitorType*X_test_logreg_wn.PageValues"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lr.fit(X_train_logreg_wn,y_train_logreg)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tmp_train=X_train_logreg_wn.copy(deep=True)\n",
    "tmp_valid=X_valid_logreg_wn.copy(deep=True)\n",
    "tmp_test=X_test_logreg_wn.copy(deep=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('ROC AUC on valid set :', roc_auc_score(y_valid_logreg,lr.predict_proba(X_valid_logreg_wn)[:,1]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Again a tiny, but real, improvement.\n",
    "\n",
    "Now let's try to generate interaction features using [PolynomialFeatures](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html). Then we'll select the most important ones."
   ]
  },
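  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see exactly what `interaction_only=True` produces, a tiny sketch on a 3-feature matrix: degree 2 without bias yields the 3 original columns plus the 3 pairwise products, and no squared terms:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.preprocessing import PolynomialFeatures\n",
    "\n",
    "X = np.array([[1.0, 2.0, 3.0],\n",
    "              [4.0, 5.0, 6.0]])\n",
    "\n",
    "pf = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)\n",
    "Xp = pf.fit_transform(X)\n",
    "\n",
    "print(Xp.shape)  # (2, 6): x0, x1, x2, x0*x1, x0*x2, x1*x2\n",
    "print(Xp[0])     # [1. 2. 3. 2. 3. 6.]\n",
    "```"
   ]
  },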
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.preprocessing import PolynomialFeatures"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "polfeat = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "polfeats_train=pd.DataFrame(polfeat.fit_transform(X_train_logreg_wn))\n",
    "polfeats_valid=pd.DataFrame(polfeat.transform(X_valid_logreg_wn))\n",
    "polfeats_test=pd.DataFrame(polfeat.transform(X_test_logreg_wn))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "polfeats_train.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "I'll fit an XGBoost Classifier on those 820 features to see which of them are the most important from its perspective."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "xg=XGBClassifier()\n",
    "xg.fit(polfeats_train,y_train_logreg)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(25,15))\n",
    "plt.bar(range(820),list(xg.feature_importances_),tick_label=polfeat.get_feature_names())\n",
    "plt.xticks(rotation=90)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Yep, the feature names are not readable. I'll print the ***10*** most important features."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "xg_imp_2=dict(list(zip(polfeat.get_feature_names(),xg.feature_importances_)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sorted(xg_imp_2.items(), key=lambda x: x[1], reverse=True)[:10]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dict(zip(polfeat.get_feature_names()[:38],X_train_logreg_wn.columns))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train_logreg_wn['x7 x13']=X_train_logreg_wn.PageValues*X_train_logreg_wn.Mar\n",
    "X_valid_logreg_wn['x7 x13']=X_valid_logreg_wn.PageValues*X_valid_logreg_wn.Mar\n",
    "X_test_logreg_wn['x7 x13']=X_test_logreg_wn.PageValues*X_test_logreg_wn.Mar\n",
    "\n",
    "\n",
    "X_train_logreg_wn['x3 x6']=X_train_logreg_wn.ProductRelated*X_train_logreg_wn.ExitRates\n",
    "X_valid_logreg_wn['x3 x6']=X_valid_logreg_wn.ProductRelated*X_valid_logreg_wn.ExitRates\n",
    "X_test_logreg_wn['x3 x6']=X_test_logreg_wn.ProductRelated*X_test_logreg_wn.ExitRates\n",
    "\n",
    "\n",
    "X_train_logreg_wn['x7 x14']=X_train_logreg_wn.PageValues*X_train_logreg_wn.May\n",
    "X_valid_logreg_wn['x7 x14']=X_valid_logreg_wn.PageValues*X_valid_logreg_wn.May\n",
    "X_test_logreg_wn['x7 x14']=X_test_logreg_wn.PageValues*X_test_logreg_wn.May\n",
    "\n",
    "\n",
    "X_train_logreg_wn['x6 x7']=X_train_logreg_wn.PageValues*X_train_logreg_wn.ExitRates\n",
    "X_valid_logreg_wn['x6 x7']=X_valid_logreg_wn.PageValues*X_valid_logreg_wn.ExitRates\n",
    "X_test_logreg_wn['x6 x7']=X_test_logreg_wn.PageValues*X_test_logreg_wn.ExitRates\n",
    "\n",
    "\n",
    "X_train_logreg_wn['x0 x7']=X_train_logreg_wn.Administrative*X_train_logreg_wn.ExitRates\n",
    "X_valid_logreg_wn['x0 x7']=X_valid_logreg_wn.Administrative*X_valid_logreg_wn.ExitRates\n",
    "X_test_logreg_wn['x0 x7']=X_test_logreg_wn.Administrative*X_test_logreg_wn.ExitRates\n",
    "\n",
    "\n",
    "X_train_logreg_wn['x4 x7']=X_train_logreg_wn.ProductRelated_Duration*X_train_logreg_wn.PageValues\n",
    "X_valid_logreg_wn['x4 x7']=X_valid_logreg_wn.ProductRelated_Duration*X_valid_logreg_wn.PageValues\n",
    "X_test_logreg_wn['x4 x7']=X_test_logreg_wn.ProductRelated_Duration*X_test_logreg_wn.PageValues\n",
    "\n",
    "\n",
    "X_train_logreg_wn['x4 x15']=X_train_logreg_wn.ProductRelated*X_train_logreg_wn.Nov\n",
    "X_valid_logreg_wn['x4 x15']=X_valid_logreg_wn.ProductRelated*X_valid_logreg_wn.Nov\n",
    "X_test_logreg_wn['x4 x15']=X_test_logreg_wn.ProductRelated*X_test_logreg_wn.Nov\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The feature added *below* was made up intuitively: a binary flag for visitors who spent a noticeable amount of time on ProductRelated pages."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train_logreg_wn['condfeat2']=(X_train_logreg_wn.ProductRelated_Duration>1).astype(int)\n",
    "X_valid_logreg_wn['condfeat2']=(X_valid_logreg_wn.ProductRelated_Duration>1).astype(int)\n",
    "X_test_logreg_wn['condfeat2']=(X_test_logreg_wn.ProductRelated_Duration>1).astype(int)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can also get a rough sense of feature importance from the *coef_* attribute of ***lr***."
   ]
  },
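  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small sketch of how those weights could be ranked (toy coefficients and hypothetical feature names, not our fitted model; with standardized inputs, a larger |coefficient| roughly means a stronger influence):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# hypothetical fitted weights and the features they belong to\n",
    "coefs = np.array([0.1, -1.4, 0.6, -0.05])\n",
    "names = ['Administrative', 'PageValues', 'ExitRates', 'Weekend']\n",
    "\n",
    "# rank features by the magnitude of their weight\n",
    "order = np.argsort(-np.abs(coefs))\n",
    "ranking = [(names[i], coefs[i]) for i in order]\n",
    "print(ranking)  # PageValues first, Weekend last\n",
    "```"
   ]
  },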
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_valid_logreg_copy.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(25,15))\n",
    "plt.bar(range(40),list(lr.coef_[0]),tick_label=list(tmp_train.columns))\n",
    "plt.xticks(rotation=90)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Deleting some columns:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train_logreg_wn.drop(['TT_14','TT_17','TT_7','TT_12','TT_18','TT_19','TT_9','TT_15','TT_6','TT_4','June','Oct'],axis=1,inplace=True)\n",
    "X_valid_logreg_wn.drop(['TT_14','TT_17','TT_7','TT_12','TT_18','TT_19','TT_9','TT_15','TT_6','TT_4','June','Oct'],axis=1,inplace=True)\n",
    "X_test_logreg_wn.drop(['TT_14','TT_17','TT_7','TT_12','TT_18','TT_19','TT_9','TT_15','TT_6','TT_4','June','Oct'],axis=1,inplace=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train_logreg_wn.drop(['ExitRates','Informational','Administrative_Duration','ProductRelated_Duration'],axis=1,inplace=True)\n",
    "X_valid_logreg_wn.drop(['ExitRates','Informational','Administrative_Duration','ProductRelated_Duration'],axis=1,inplace=True)\n",
    "X_test_logreg_wn.drop(['ExitRates','Informational','Administrative_Duration','ProductRelated_Duration'],axis=1,inplace=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lr=LogisticRegression(C=0.1,class_weight='balanced')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lr.fit(X_train_logreg_wn,y_train_logreg)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('ROC AUC on valid set :', roc_auc_score(y_valid_logreg,lr.predict_proba(X_valid_logreg_wn)[:,1]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Our feature engineering improved the model by approximately 0.8%."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# *Comparing the models at a fixed threshold*"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Actually, RandomForest and the XGBoost Classifier showed better ROC AUC scores, but fitting and tuning Logistic Regression is much faster. Now suppose the retail company has established an optimal threshold of *0.5*. Out of curiosity, I decided to check the recall score:"
   ]
  },
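  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Recall at a fixed threshold is simply the share of actual buyers the model catches. A minimal sketch with toy labels, mirroring what `recall_score` computes after thresholding the predicted probabilities at 0.5:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "y_true = np.array([1, 0, 1, 1, 0, 1])\n",
    "proba = np.array([0.9, 0.4, 0.2, 0.7, 0.6, 0.8])\n",
    "\n",
    "y_pred = (proba >= 0.5).astype(int)\n",
    "\n",
    "# recall = true positives / all actual positives\n",
    "tp = np.sum((y_pred == 1) & (y_true == 1))\n",
    "recall = tp / np.sum(y_true == 1)\n",
    "print(recall)  # 3 of the 4 actual buyers are caught -> 0.75\n",
    "```"
   ]
  },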
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.metrics import recall_score"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('Random Forest Classifier recognised %f%% of all visitors with purchasing intention higher than 0.5'\n",
    "      % (100*recall_score(y_test_tb,rfc.predict(X_test_tb))))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('XGBoost Classifier recognised %f%% of all visitors with purchasing intention higher than 0.5'\n",
    "      % (100*recall_score(y_test_tb,xgbclf.predict(X_test_tb))))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('Logistic Regression recognised %f%% of all visitors with purchasing intention higher than 0.5'\n",
    "      % (100*recall_score(y_test_logreg,lr.predict(X_test_logreg_wn))))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Plotting training and validation curves"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.model_selection import learning_curve,validation_curve"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,\n",
    "                        n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):\n",
    "    \n",
    "    plt.figure()\n",
    "    plt.title(title)\n",
    "    if ylim is not None:\n",
    "        plt.ylim(*ylim)\n",
    "    plt.xlabel(\"Training examples\")\n",
    "    plt.ylabel(\"ROC AUC\")\n",
    "    train_sizes, train_scores, test_scores = learning_curve(\n",
    "        estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes, scoring='roc_auc')\n",
    "    train_scores_mean = np.mean(train_scores, axis=1)\n",
    "    train_scores_std = np.std(train_scores, axis=1)\n",
    "    test_scores_mean = np.mean(test_scores, axis=1)\n",
    "    test_scores_std = np.std(test_scores, axis=1)\n",
    "    plt.grid()\n",
    "\n",
    "    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,\n",
    "                     train_scores_mean + train_scores_std, alpha=0.1,\n",
    "                     color=\"r\")\n",
    "    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,\n",
    "                     test_scores_mean + test_scores_std, alpha=0.1, color=\"g\")\n",
    "    plt.plot(train_sizes, train_scores_mean, 'o-', color=\"r\",\n",
    "             label=\"Training score\")\n",
    "    plt.plot(train_sizes, test_scores_mean, 'o-', color=\"g\",\n",
    "             label=\"Cross-validation score\")\n",
    "\n",
    "    plt.legend(loc=\"best\")\n",
    "    return plt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(10, 7))\n",
    "plot_learning_curve(lr, 'Logistic Regression', X_train_logreg_wn, y_train_logreg, cv=skf, n_jobs=-1);"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We observe a good thing: the training and cross-validation curves tend to converge at a high score, so there is no sign of serious under- or overfitting."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(10,7))\n",
    "param_range=np.array([0.01, 0.05, 0.1, 0.25, 0.5, 1, 5])\n",
    "train_scores, test_scores = validation_curve(lr, X_train_logreg_wn, y_train_logreg, param_name=\"C\",\n",
    "                                             param_range=param_range, cv=skf, scoring=\"roc_auc\", n_jobs=-1)\n",
    "train_scores_mean = np.mean(train_scores, axis=1)\n",
    "train_scores_std = np.std(train_scores, axis=1)\n",
    "test_scores_mean = np.mean(test_scores, axis=1)\n",
    "test_scores_std = np.std(test_scores, axis=1)\n",
    "\n",
    "plt.title(\"Validation Curve\")\n",
    "plt.xlabel(\"C\")\n",
    "plt.ylabel(\"ROC AUC\")\n",
    "#plt.ylim(0.0, 1.1)\n",
    "lw = 2\n",
    "plt.plot(param_range, train_scores_mean, label=\"Training score\",\n",
    "             color=\"darkorange\", lw=lw)\n",
    "plt.fill_between(param_range, train_scores_mean - train_scores_std,\n",
    "                 train_scores_mean + train_scores_std, alpha=0.2,\n",
    "                 color=\"darkorange\", lw=lw)\n",
    "plt.plot(param_range, test_scores_mean, label=\"Cross-validation score\",\n",
    "             color=\"navy\", lw=lw)\n",
    "plt.fill_between(param_range, test_scores_mean - test_scores_std,\n",
    "                 test_scores_mean + test_scores_std, alpha=0.2,\n",
    "                 color=\"navy\", lw=lw)\n",
    "plt.legend(loc=\"best\")\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So our scores pretty much consistent along the *C* range. Though ROC AUC drastically rised with *C* increasing from *0* to *~0.4*"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prediction for test or hold-out samples"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now it's time to make predictions on ***test*** set. This one which we created at the beginning and transformed each with *train* and *valid* but haven't used that."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And before that, we can fit our model on ***train***+***valid***"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**RandomForest**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train_tb_fin=pd.concat([X_train_tb,X_valid_tb],axis=0)\n",
    "y_train_tb_fin=pd.concat([y_train_tb,y_valid_tb],axis=0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "rfc.fit(X_train_tb_fin,y_train_tb_fin)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('ROC AUC on test set :', roc_auc_score(y_test_tb,rfc.predict_proba(X_test_tb)[:,1]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**XGBoost Clasiffier**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "xgbclf.fit(X_train_tb_fin,y_train_tb_fin)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('ROC AUC on test set :', roc_auc_score(y_test_tb,xgbclf.predict_proba(X_test_tb)[:,1]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Logistic Regression**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train_logreg_wn.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_valid_logreg_wn.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train_logreg_fin=pd.concat([X_train_logreg_wn,X_valid_logreg_wn],axis=0)\n",
    "y_train_logreg_fin=pd.concat([y_train_logreg,y_valid_logreg],axis=0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train_logreg_fin.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_train_logreg_fin.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_test_logreg.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lr.fit(X_train_logreg_fin,y_train_logreg_fin)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print('ROC AUC on test set :', roc_auc_score(y_test_logreg,lr.predict_proba(X_test_logreg_wn)[:,1]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Results are even higher than on ***valid*** set. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Conclusions "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So this is the end. We used three model on imbalanced data and each of them showed quite a high ROC AUC score. We could observe that score on valid set changed in accordance to *cross_val_score*. \n",
    "I'd stick to Logistic Regression. You could notice this by last parts of project. Moreover, I feel like it could be done more in case of RandomForest and XGBoost.\n",
    "\n",
    "When integrated with other module to determine likelihood of visitor to leave the site (I mentioned in the beginning), company can use this classification model to show individual special offers to such visitors before they leave shop website.\n",
    "\n",
    "\n",
    "- Data was collected during one year and we have a feature *Month*. So basically we have sort of timeline. However I used *Month* solely as a categorical feature without any time context. And I'm not sure the opposite would make any sense.\n",
    "\n",
    "- Feature selection and engineering helped a bit to increase LogReg score. Except experimenting with visualizations I don't see further way to improve this process. Oversampling didn't help (but didn't worsen too).\n",
    "\n",
    "- As for choosing parameters range: I'm very new to this so I don't have much experience in tuning models. So I just make range of values close to default or just select a wide but small range first and then iterate over bigger amount of values in choosen smaller areas that seems optimal. Parameter grids used in code above is what I came to after some time. Hyperparameters tuning definitely needs wiser approach.\n",
    "\n",
    "- It's also an interesting idea to build online learning system which could be updated with each new example."
   ]
  },
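  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of that online-learning idea, assuming scikit-learn's `SGDClassifier` with `partial_fit`. The data below is a random stand-in, not the shoppers dataset:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.linear_model import SGDClassifier\n",
    "\n",
    "# Stand-in stream of labelled sessions (hypothetical data, not the real features)\n",
    "rng = np.random.RandomState(17)\n",
    "X_stream = rng.rand(200, 5)\n",
    "y_stream = (X_stream[:, 0] > 0.5).astype(int)\n",
    "\n",
    "# SGDClassifier supports incremental updates via partial_fit;\n",
    "# the first call needs the full list of classes\n",
    "online_clf = SGDClassifier(random_state=17)\n",
    "online_clf.partial_fit(X_stream[:20], y_stream[:20], classes=np.array([0, 1]))\n",
    "\n",
    "# Each newly observed session updates the model without a full retrain\n",
    "for i in range(20, 200):\n",
    "    online_clf.partial_fit(X_stream[i:i + 1], y_stream[i:i + 1])\n",
    "\n",
    "print('Accuracy on the seen stream:', online_clf.score(X_stream, y_stream))"
   ]
  },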
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Thank you for attention!"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
