{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<center>\n",
    "<img src=\"../../img/ods_stickers.jpg\" />\n",
    "    \n",
    "## [mlcourse.ai](https://mlcourse.ai) – Open Machine Learning Course \n",
    "### <center> Author: Alexander Katsalap (ODS Slack nick: Alexkats)\n",
    "    \n",
    "## <center> Prediction of real estate prices at Melbourne housing market"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this project I will analyze **Melbourne Housing Market** data, collected by [Tony Pino](https://www.kaggle.com/anthonypino) and posted on [Kaggle](https://www.kaggle.com/anthonypino/melbourne-housing-market/home).\n",
    "\n",
    "The data were scraped from publicly available results posted every week on the real estate resource [Domain.com.au](https://www.domain.com.au). The dataset includes the address, type of real estate, suburb, method of selling, number of rooms, price, real estate agent, date of sale and distance from the C.B.D. (the centre of Melbourne).\n",
    "\n",
    "The purpose of this project is to build a model that predicts the price of a property on the Melbourne housing market based on its characteristics. **So our task is a regression task.**\n",
    "\n",
    "It may be useful to know the actual property price in the following cases:\n",
    "\n",
    "- You are a ***property seller*** and want to sell as soon as possible. You don't want to sell at a low price and lose money, but you also don't want to spend a long time looking for a buyer because the asking price is too high.\n",
    "\n",
    "\n",
    "- You are a ***property buyer*** and want to buy a good house at a good price without overpaying.\n",
    "\n",
    "\n",
    "- You are a **real estate agency** like [Domain.com.au](https://www.domain.com.au) and want to remove advertisements for suspicious properties from your website. For example, if an advertisement has a very low price compared with properties of approximately the same characteristics, it may be fraud. Predicting actual prices helps detect and remove such advertisements, so **you don't lose your customers**.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 1. Dataset and features description"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The dataset contains information about property sales in Melbourne during the period **from January 2016 to October 2018.**\n",
    "\n",
    "Let's load the dataset and describe the given features:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import seaborn as sns\n",
    "from matplotlib import pyplot as plt\n",
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import warnings\n",
    "warnings.simplefilter(action='ignore', category=FutureWarning)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "warnings.simplefilter(\"ignore\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "full_data = pd.read_csv('Melbourne_housing_FULL.csv', parse_dates=['Date'])\n",
    "full_data.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "full_data = full_data[full_data['Date'] <= '2018-04-01']"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Every object in the dataset (every row) is a property sale with its own characteristics and additional information such as the seller, the sale method and the sale date.\n",
    "\n",
    "Let's get information about the column types and missing values in the dataset:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "full_data.info()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As we can see, the data contain 21 columns, and many of them have missing values."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1.1 Features description"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's give a more detailed description of the columns:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Suburb**: Suburb name in Melbourne\n",
    "\n",
    "**Address**: Address\n",
    "\n",
    "**Rooms**: Number of rooms\n",
    "\n",
    "**Bedroom2**: Number of bedrooms (from a different source)\n",
    "\n",
    "**Price**: Price in Australian dollars. It's a ***target value*** in our task\n",
    "\n",
    "**Method**: sales method:\n",
    "- S  - property sold; \n",
    "- SP - property sold prior; \n",
    "- PI - property passed in; \n",
    "- PN - sold prior not disclosed; \n",
    "- SN - sold not disclosed; \n",
    "- VB - vendor bid; \n",
    "- W  - withdrawn prior to auction; \n",
    "- SA - sold after auction; \n",
    "- SS - sold after auction price not disclosed. \n",
    "\n",
    "**Type**: type of property:\n",
    "- h - house, cottage, villa, semi, terrace; \n",
    "- u - unit, duplex; \n",
    "- t - townhouse; \n",
    "\n",
    "\n",
    "**SellerG**: Real Estate Agent\n",
    "\n",
    "**Date**: Date sold\n",
    "\n",
    "**Distance**: Distance from the C.B.D. (Melbourne centre) in kilometres\n",
    "\n",
    "**Regionname**: General Region (West, North West, North, North east ...etc.)\n",
    "\n",
    "**Propertycount**: Number of properties that exist in the suburb.\n",
    "\n",
    "**Bathroom**: Number of Bathrooms\n",
    "\n",
    "**Car**: Number of carspots\n",
    "\n",
    "**Landsize**: Land size in square metres\n",
    "\n",
    "**BuildingArea**: Building size in square metres\n",
    "\n",
    "**YearBuilt**: Year the house was built\n",
    "\n",
    "**CouncilArea**: Governing council for the area\n",
    "\n",
    "**Lattitude**: Latitude of the property (the dataset's spelling)\n",
    "\n",
    "**Longtitude**: Longitude of the property (the dataset's spelling)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "From this description it's possible to classify most of these features as categorical or numeric (continuous):\n",
    "\n",
    "***Categorical features:***\n",
    "- Suburb, Type, Method, SellerG, Postcode, CouncilArea, Regionname\n",
    "\n",
    "***Numeric features:***\n",
    "- Rooms, Date, Distance, Bedroom2, Bathroom, Car, Landsize, BuildingArea, YearBuilt, Lattitude, Longtitude, Propertycount\n",
    "\n",
    "***Target value:***\n",
    "- Price\n",
    "\n",
    "There are also some complex features that ***can't be definitely classified*** by type and must be transformed before being used in modeling:\n",
    "\n",
    "- Address, Postcode"
   ]
  },
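  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of this manual split, we can compare it with the pandas dtypes (a rough heuristic: object columns are usually categorical and numeric dtypes continuous; it is only approximate, since e.g. Postcode is stored as a number but is really categorical):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Rough dtype-based split of the columns\n",
    "print('Categorical:', full_data.select_dtypes(include='object').columns.tolist())\n",
    "print('Numeric:', full_data.select_dtypes(include='number').columns.tolist())"
   ]
  },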
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 2. Exploratory data analysis"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As we can see, there are missing values in our target. The reason is that the **Method** of selling these properties was **PN or SS**; these methods don't imply disclosing the sale price. So we shall have to remove such objects from the dataset when we build our model. But before that we keep them for calculating some statistics and for imputing missing values.\n",
    "\n",
    "Let's separate the dataset into two parts: with the target value and without it:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And split the data with a non-missing target into feature data and target data:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full = full_data.copy()\n",
    "y_full = full_data['Price']"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.1  Features interactions and their influence on the target"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's discuss how the features may influence the target variable, **Price**.\n",
    "\n",
    "1. It's natural to expect that **Rooms, Bedroom2, Bathroom, Car, Landsize and BuildingArea** are positively related to the house price, so they should have a high correlation with the target.\n",
    " \n",
    "2. The **Date** of sale probably has a seasonal influence on the price: for example, a low season in summer and a high season in winter.\n",
    " \n",
    "3. **Distance** from the CBD may have a complex, non-linear influence on the price. On the one hand, the price should be highest in the centre and decrease toward the outskirts. On the other hand, the centre of a big city is very noisy and has poor ecology.\n",
    "  \n",
    "4. Similar reasoning applies to **YearBuilt**. On the one hand, the price should be highest for new buildings. On the other hand, very old buildings may be architectural monuments with historical value, so they may have very high prices.\n",
    " \n",
    "5. The features **Suburb, Postcode and Regionname** characterize the house's location in the city and, as a consequence, the crime situation and transport accessibility. So these features and their combinations should influence the house price.\n",
    "\n",
    "6. **CouncilArea** may characterize the quality of the local government's work. The degree of well-being depends on this work and, as a consequence, so do house prices in different areas.\n",
    "\n",
    "7. The **Type** of property certainly matters, because a detached cottage or villa is more expensive than a duplex with neighbours."
   ]
  },
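  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The expectations from points 1-4 can be roughly checked with a correlation matrix of the numeric features against Price (a sketch: correlations capture only linear dependence, so the non-linear effects discussed above may look weak here):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "num_features = ['Rooms', 'Bedroom2', 'Bathroom', 'Car', 'Landsize',\n",
    "                'BuildingArea', 'Distance', 'YearBuilt', 'Price']\n",
    "# Linear correlation of each numeric feature with the target\n",
    "full_data[num_features].corr()['Price'].sort_values(ascending=False)"
   ]
  },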
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.2 Target value analysis"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Save the target value **Price** without NaNs to the variable $y$ for analysis:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y = full_data[full_data.Price.notnull()]['Price']"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's plot the distribution of target value:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(14, 7))\n",
    "sns.distplot(y)\n",
    "plt.grid()\n",
    "plt.title('Price distribution');"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is not a normal distribution, so it is not a good idea to predict the value directly. Let's take the **logarithm of the target** and plot the distribution of the transformed value:\n",
    "\n",
    "$$ \\widehat{y} = \\ln (y + 1) $$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(14, 7))\n",
    "sns.distplot(np.log1p(y))\n",
    "plt.grid()\n",
    "plt.title('Logarithm of Price distribution');"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's run statistical tests for the normality and skewness of the distribution of $\\widehat{y}$.\n",
    "\n",
    "Use the **Shapiro-Wilk and Kolmogorov-Smirnov normality tests** (for the Kolmogorov-Smirnov test the sample must be standardized first, since cdf='norm' means the standard normal distribution):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from scipy.stats import shapiro, kstest, probplot, skew\n",
    "\n",
    "# Note: for samples larger than 5000 the Shapiro-Wilk p-value is only approximate\n",
    "test_stat, p_value = shapiro(np.log1p(y))\n",
    "test_stat, p_value"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Standardize before comparing with the standard normal distribution\n",
    "y_log = np.log1p(y)\n",
    "test_stat, p_value = kstest((y_log - y_log.mean()) / y_log.std(), cdf='norm')\n",
    "test_stat, p_value"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**QQ-plot of $\\widehat{y}$**. \n",
    "\n",
    "For an ideal normal distribution, all blue dots lie on the red line."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(7, 7))\n",
    "probplot(np.log1p(y), dist='norm', plot=plt)\n",
    "plt.grid()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Skewness** test. For a symmetrical distribution the skewness is equal to zero."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "skew(np.log1p(y))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Although the distribution of $\\widehat{y}$ is slightly asymmetrical, the QQ-plot and the normality tests **allow us to work with $\\widehat{y}$ as an approximately normally distributed value.**\n",
    "\n",
    "So we shall work with the target value $y$ as follows:\n",
    "\n",
    "1. Train models on the transformed target $y^* = \\ln(y+1)$ \n",
    "2. Make a prediction $\\widehat{y}_*$ \n",
    "3. Perform the inverse transform to the original target: $\\widehat{y} = e^{\\widehat{y}_*} - 1$ \n",
    "4. Check the quality by calculating some metric $f(y, \\widehat{y})$, to be chosen later."
   ]
  },
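  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The transform and its inverse from steps 1 and 3 are exactly numpy's log1p/expm1 pair, so we can verify the round trip directly:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# y* = ln(y + 1) and its inverse y = exp(y*) - 1\n",
    "y_star = np.log1p(y)\n",
    "np.allclose(np.expm1(y_star), y)"
   ]
  },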
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.3   Missing values processing"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Because the data contain a lot of missing values, first of all we shall try to fill them."
   ]
  },
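  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Besides info(), it is convenient to look at the per-column number of missing values directly (a small helper sketch):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Missing value counts per column, largest first\n",
    "missing_counts = X_full.isnull().sum().sort_values(ascending=False)\n",
    "missing_counts[missing_counts > 0]"
   ]
  },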
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full.info()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Fill the fields Regionname, Propertycount, CouncilArea and Postcode.\n",
    "\n",
    "Let's look at the object with Postcode missing:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full[X_full.Postcode.isnull()]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Most of the fields are missing, so it's better to drop this object:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full = X_full[~X_full.Postcode.isnull()]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full.Postcode = X_full.Postcode.astype(int)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For the objects with missing Regionname, Propertycount and CouncilArea we have the same situation: most of the fields are missing, so we drop them too:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full[X_full.Regionname.isnull()]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full = X_full[~X_full.Regionname.isnull()]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Check the missing value counts in the data again:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full.info()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Check the YearBuilt values. Plot a histogram:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(10,5))\n",
    "X_full.YearBuilt.hist(bins=100)\n",
    "\n",
    "plt.text(x=1200, y = 1200, s='Min built year = {}'.format(X_full.YearBuilt.min()))\n",
    "plt.text(x=1200, y = 1100, s='Max built year = {}'.format(X_full.YearBuilt.max()))\n",
    "plt.title('Year built');"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are wrong values, including years from the future :)\n",
    "\n",
    "Set them to NaN, and then fill them together with the other missing values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full[(X_full.YearBuilt < 1800) | (X_full.YearBuilt > 2018)]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full.loc[(X_full.YearBuilt < 1800) | (X_full.YearBuilt > 2018), 'YearBuilt'] = np.nan"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that there are a lot of missing Lattitude and Longtitude values. But we can restore them from the Address (using the street), Postcode, Regionname, Suburb and CouncilArea values.\n",
    "\n",
    "Select the objects with filled Lattitude and Longtitude, and find the mean coordinates by grouping by these values:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "coords_features = ['Address', 'Postcode', 'Regionname', 'Suburb', 'CouncilArea', 'Lattitude', 'Longtitude']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "coords_data = X_full[~((X_full.Lattitude.isnull()) & (X_full.Longtitude.isnull()))][coords_features]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "coords_data.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Extract the street name from the address:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "coords_data['Address_splitted'] = coords_data.Address.str.split(' ')\n",
    "# Addresses look like '<house number> <street name> <road type>', so the second token is the street name\n",
    "coords_data['Street'] = coords_data.Address_splitted.apply(lambda s: s[1])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "group_features = ['Regionname','Suburb','CouncilArea']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "coords_data_mean = coords_data\\\n",
    "                    .groupby(group_features)[['Lattitude', 'Longtitude']]\\\n",
    "                    .mean()\\\n",
    "                    .reset_index()\\\n",
    "                    .rename(columns={'Lattitude': 'Lat_new', 'Longtitude': 'Lon_new'})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "coords_data_mean.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now add the street name to our dataset and merge it with **coords_data_mean**:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full['Address_splitted'] = X_full.Address.str.split(' ')\n",
    "X_full['Street'] = X_full.Address_splitted.apply(lambda s: s[1])\n",
    "X_full['HouseNumber'] = X_full.Address_splitted.apply(lambda s: s[0])\n",
    "X_full.drop('Address_splitted', axis=1, inplace=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full_2 = pd.merge(X_full, coords_data_mean, on=group_features, how='left')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full_2.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now replace the NaNs in **Lattitude** and **Longtitude** with the new values **Lat_new** and **Lon_new**:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full_2.loc[X_full_2.Lattitude.isnull(), 'Lattitude'] = X_full_2['Lat_new']\n",
    "X_full_2.loc[X_full_2.Longtitude.isnull(), 'Longtitude'] = X_full_2['Lon_new']"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And check whether all the NaNs in Lattitude and Longtitude are filled:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full_2[X_full_2.Lattitude.isnull()].shape[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "No, 87 values remain unfilled, because for some combinations of Regionname, Suburb and CouncilArea the Lattitude and Longtitude values were missing entirely. Since the number of such objects is very small compared to the dataset size, we can just fill them with the mean:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full_2.Lattitude = X_full_2.Lattitude.fillna(X_full_2.Lattitude.mean())\n",
    "X_full_2.Longtitude = X_full_2.Longtitude.fillna(X_full_2.Longtitude.mean())\n",
    "\n",
    "X_full_2.drop(['Lat_new','Lon_new'], axis=1, inplace=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Check the missing value counts:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full_2.info()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So, we have missing values in the features **Bedroom2, Bathroom, Car, Landsize and BuildingArea**."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before filling the missing values in them, let's check their distributions. They may have outliers, which can harm both the imputation quality and, later, the model quality.\n",
    "\n",
    "Let's plot the distributions of these features:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Bathroom\n",
    "X_full_2[~X_full_2.Bathroom.isnull()].Bathroom.hist(bins=11)\n",
    "X_full_2[~X_full_2.Bathroom.isnull()].Bathroom.value_counts()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are objects with 7 or more bathrooms! Let's look at them:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full[X_full.Bathroom >= 8][['Suburb', 'Address', 'Rooms', 'Type', 'Method',  'Date',\n",
    "       'Distance', 'Bedroom2', 'Bathroom', 'Car', 'Landsize', 'BuildingArea', 'YearBuilt', 'Price']]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Bedroom\n",
    "X_full_2[~X_full_2.Bedroom2.isnull()].Bedroom2.hist(bins=15)\n",
    "X_full_2[~X_full_2.Bedroom2.isnull()].Bedroom2.value_counts()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It seems very suspicious, especially the objects where the **number of bedrooms equals the number of bathrooms :)**\n",
    "The same goes for a zero number of bedrooms.\n",
    "\n",
    "So the better solution is **to drop the objects with more than 6 bathrooms and with more than 8 (or zero) bedrooms:**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full_3 = X_full_2[(X_full_2.Bathroom.isnull()) | (X_full_2.Bathroom <= 6)]\n",
    "X_full_3 = X_full_3[(X_full_3.Bedroom2.isnull()) | ((X_full_3.Bedroom2 <= 8) & (X_full_3.Bedroom2 > 0))]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Check **Car** feature:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Car\n",
    "X_full_3[~X_full_3.Car.isnull()].Car.hist(bins=9)\n",
    "X_full_3[~X_full_3.Car.isnull()].Car.value_counts()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full[X_full.Car > 9 ][['Suburb', 'Address', 'Rooms', 'Type', 'Method',  'Date',\n",
    "       'Distance', 'Bedroom2', 'Bathroom', 'Car', 'Landsize', 'BuildingArea', 'YearBuilt', 'Price']].head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So we see that the objects with more than 9 car spots have very large Landsize values and suspiciously low prices. These **objects don't look like most of the others**, so we will drop them too:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Keep objects with at most 9 car spots (the outliers above have more than 9)\n",
    "X_full_3 = X_full_3[(X_full_3.Car.isnull()) | (X_full_3.Car <= 9)]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Check **Landsize** feature:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Landsize\n",
    "X_full_3[~X_full_3.Landsize.isnull()].Landsize.hist();"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The distribution is very skewed because of outliers with huge values. Let's find the **mean value** and the **99.9% quantile of Landsize**. Also, find the objects with the top-10 Landsize values in our dataset:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Find mean value of Landsize:\n",
    "X_full_3.Landsize.mean()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Find 99.9 % quantile of Landsize:\n",
    "q_99 = X_full_3[~X_full_3.Landsize.isnull()].Landsize.quantile(0.999)\n",
    "q_99"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Find top-10 objects with biggest Landsize:\n",
    "top_10_landsizes = sorted(X_full_3[~X_full_3.Landsize.isnull()].Landsize, reverse=True)[:10]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "top_10_landsizes"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full_3[X_full_3.Landsize.isin(top_10_landsizes)][['Suburb', 'Address', 'Rooms', 'Type', 'Method',  'Date',\n",
    "       'Distance', 'Bedroom2', 'Bathroom', 'Car', 'Landsize', 'BuildingArea', 'YearBuilt', 'Price']]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So, almost all of them are located far from the CBD (the mean distance is about 11 km). As with the other outliers, we shall drop the objects with Landsize above the 99.9% quantile, because they would harm the model quality:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full_3 = X_full_3[(X_full_3.Landsize.isnull()) | (X_full_3.Landsize < q_99)]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Plot the distribution after removing the outliers:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full_3[~X_full_3.Landsize.isnull()].Landsize.hist(bins=50);"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Check **BuildingArea** feature:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# BuildingArea\n",
    "X_full_3[~X_full_3.BuildingArea.isnull()].BuildingArea.hist(bins=50);"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We have the same situation as with **Landsize**, so let's repeat the outlier-dropping procedure for the **BuildingArea** feature:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Find the 99% quantile of BuildingArea:\n",
    "q_99 = X_full_3[~X_full_3.BuildingArea.isnull()].BuildingArea.quantile(0.99)\n",
    "q_99"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full_3 = X_full_3[(X_full_3.BuildingArea.isnull()) | (X_full_3.BuildingArea < q_99)]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Plot the distribution after removing the outliers:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full_3[~X_full_3.BuildingArea.isnull()].BuildingArea.hist(bins=50);"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that BuildingArea can't be equal to zero (unlike Landsize). But we have several zero values in the BuildingArea feature. They seem to be mistakes, so let's change these zero values to NaNs:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full_3.loc[X_full_3.BuildingArea == 0, 'BuildingArea'] = np.nan"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So, we dropped objects with abnormal values of several features. \n",
    "\n",
    "**In a real business task we would probably have to build separate models for each group of such objects. But here our goal is to build one model for the majority of objects in our dataset.**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full_3.info()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**It would be more correct to process NaNs in these features as we did with Lattitude and Longtitude: compute mean values within groups of similar objects without NaNs and use those values to fill the NaNs.**\n",
    "\n",
    "But let's use **SimpleImputer** from sklearn to save time and add some variety :)"
   ]
  },
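  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch (on toy arrays, not our dataset) of how **SimpleImputer** with the median strategy fills NaNs:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.impute import SimpleImputer\n",
    "\n",
    "imp = SimpleImputer(missing_values=np.nan, strategy='median')\n",
    "toy = np.array([[1.0], [2.0], [np.nan], [100.0]])\n",
    "filled = imp.fit_transform(toy)\n",
    "# The median of [1, 2, 100] is 2, so the NaN becomes 2\n",
    "```"
   ]
  },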
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.impute import SimpleImputer"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Despite the variable name, we impute with the median: it is robust to the remaining skew\n",
    "imputer_mean = SimpleImputer(missing_values=np.nan, strategy='median')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Select features for imputing:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "features_with_nans = X_full_3.columns[X_full_3.isnull().any()].tolist()\n",
    "features_with_nans.remove('Price')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full_3.reset_index(drop=True, inplace=True)\n",
    "X_to_impute = X_full_3[features_with_nans].copy()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_imputed_array = imputer_mean.fit_transform(X_to_impute)\n",
    "X_imputed = pd.DataFrame(data=X_imputed_array, columns=features_with_nans)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Create a new dataset with the imputed values:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full_4 = pd.concat([X_full_3.drop(features_with_nans, axis=1), X_imputed], axis=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_full_4.info()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now there are no NaNs among the feature values. \n",
    "**But there are still NaNs in the target, Price**. \n",
    "Before building the model we will drop objects with missing targets."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's split our data into features dataframe and target vector:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "data_total = X_full_4.copy()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Add **Street** and **HouseNumber** again:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "data_total['Address_splitted'] = data_total.Address.str.split(' ')\n",
    "data_total['HouseNumber'] = data_total.Address_splitted.apply(lambda s: s[0])\n",
    "# Join the remaining tokens so multi-word street names stay intact\n",
    "data_total['Street'] = data_total.Address_splitted.apply(lambda s: ' '.join(s[1:]))\n",
    "data_total.drop(['Address_splitted', 'Address'], axis=1, inplace=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "data_total.YearBuilt = data_total.YearBuilt.astype(int)\n",
    "data_total.Bedroom2 = data_total.Bedroom2.astype(int)\n",
    "data_total.Bathroom = data_total.Bathroom.astype(int)\n",
    "data_total.Car = data_total.Car.astype(int)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# .copy() avoids SettingWithCopyWarning when we add columns to data later\n",
    "data = data_total[~data_total.Price.isnull()].copy()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_total = data_total.drop('Price', axis=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Features for objects with price only\n",
    "X = data_total[~data_total.Price.isnull()].drop('Price', axis=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Target vector\n",
    "y = data_total[~data_total.Price.isnull()]['Price']"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Check shapes of dataframes:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X.shape, y.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Part 3. Visual analysis of the features"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's split our features into **categorical** and **numerical**:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "numerical_features = ['Rooms','Distance', 'Propertycount', \n",
    "                      'Bedroom2', 'Bathroom', 'Car', 'Landsize', \n",
    "                      'BuildingArea', 'YearBuilt', 'HouseNumber']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cat_features = ['Suburb', 'Address','Type', 'Method', 'SellerG','CouncilArea','Regionname']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.1 Numerical feature relationships"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's use **seaborn pairplot** to visualize relationships between numerical features:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sns.pairplot(data=data[numerical_features + ['Price']]);"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So, let's plot the most interesting of them separately: "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sns.pairplot(data=data[numerical_features + ['Price']], \n",
    "             y_vars=['Price'], \n",
    "             x_vars=['Rooms',  'Distance',\n",
    "                     'Car', 'Landsize', 'BuildingArea', 'YearBuilt']);"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sns.pairplot(data=data[numerical_features + ['Price']], \n",
    "             y_vars=['Distance'], \n",
    "             x_vars=['Landsize', 'BuildingArea', 'YearBuilt']);"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "From these plots we can conclude that price is inversely related to the distance from the CBD, that older buildings tend to be closer to the CBD, and that houses with larger land sizes are farther from the CBD.\n",
    "\n",
    "These conclusions were expected and are consistent with reality."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's plot **correlation matrix** for numerical features and target:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(10,10))\n",
    "sns.heatmap(data[numerical_features + ['Price']].corr(), annot=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So, **Rooms, BuildingArea, Bedroom2, Bathroom, Car** are positively correlated with **Price**, as expected."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Features like **YearBuilt** may have a more complex dependency on price, because their effect differs across categorical feature values (for example **Type** and **Regionname**). To check this hypothesis we would have to analyze this and other numeric features in more detail. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3.2 Categorical feature relationships with the target"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "I use **seaborn boxplot** to visualize price distributions across groups of categorical features."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's look at how prices are distributed depending on the **Type** feature:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(12,8))\n",
    "sns.boxplot(x='Type', y='Price',\n",
    "            data=data);\n",
    "plt.ylim((0, 0.5*1e7))\n",
    "plt.grid()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Same plot for **Regionname** feature:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(18,8))\n",
    "\n",
    "sns.boxplot(x='Regionname', y='Price',\n",
    "            data=data);\n",
    "plt.ylim((0, 0.4*1e7))\n",
    "plt.grid()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's combine these two plots to visualize price distributions grouped by **Regionname** and **Type** simultaneously:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(18,12))\n",
    "\n",
    "sns.boxplot(x='Regionname', y='Price',\n",
    "            hue='Type',\n",
    "            data=data);\n",
    "plt.ylim((0, 0.4*1e7))\n",
    "plt.grid()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It is also interesting to plot the same for **Distance**:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(18,12))\n",
    "\n",
    "sns.boxplot(x='Regionname', y='Distance',\n",
    "            hue='Type',\n",
    "            data=data);\n",
    "plt.grid()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Boxplot for **Price** with grouping by **CouncilArea**:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(18,12))\n",
    "sns.boxplot(y='CouncilArea', x='Price', data=data);\n",
    "plt.xlim((0, 0.4*1e7))\n",
    "plt.grid()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's visualize price distributions grouped by **Method** and **Type** simultaneously. This shows the influence of the selling **Method** on the house price within each **Type**:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(18,12))\n",
    "\n",
    "sns.boxplot(x='Type', y='Price',\n",
    "            hue='Method',\n",
    "            data=data);\n",
    "plt.ylim((0, 0.4*1e7))\n",
    "plt.grid()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To find out how many objects each region contains and how many objects were sold by each method, let's draw **countplots**:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(18,12))\n",
    "sns.countplot(data=data, hue='Type', y='Regionname');\n",
    "plt.grid()\n",
    "\n",
    "plt.figure(figsize=(18,12))\n",
    "sns.countplot(data=data, hue='Method', y='Regionname');\n",
    "plt.grid()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Returning to the numerical features, let's plot **Price** distributions grouped by **Rooms, Car, Bedroom2, Bathroom** and the categorical feature **Type**:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "for f in ['Rooms', 'Car', 'Bedroom2', 'Bathroom']:\n",
    "    plt.figure(figsize=(18,6))\n",
    "    sns.boxplot(y='Price', x=f,\n",
    "                hue='Type',\n",
    "                data=data);\n",
    "    plt.ylim((0, 0.4*1e7))\n",
    "    plt.grid()\n",
    "    plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Plot **YearBuilt** distributions grouped by **Regionname** and **Type** :"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(18,10))\n",
    "sns.boxplot(y='YearBuilt', x='Regionname',\n",
    "            hue='Type',\n",
    "            data=data);\n",
    "plt.grid()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Plot **Price** distributions grouped by **YearBuilt** :"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(18,10))\n",
    "sns.boxplot(y='Price', x='YearBuilt', data=data);\n",
    "plt.xticks(rotation=90)\n",
    "plt.ylim((0, 0.6*1e7))\n",
    "\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Average **Price**, grouped by **YearBuilt**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "prices_by_yearbuilt = data[['YearBuilt', 'Price']]\\\n",
    ".groupby('YearBuilt')\\\n",
    ".agg(['mean'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(12,8))\n",
    "plt.scatter(x=prices_by_yearbuilt.index,y=prices_by_yearbuilt.values[:,0])\n",
    "plt.xticks(rotation=90)\n",
    "\n",
    "plt.grid()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are *geographical coordinates* in our dataset: the **Longtitude** and **Lattitude** features. So we can literally plot a map for some of the categorical features! Let's do it for **'Suburb', 'Postcode', 'CouncilArea', 'Regionname'** and also try **'Distance'**:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "for feature in ['Suburb', 'Postcode','CouncilArea', 'Regionname', 'Distance']:\n",
    "\n",
    "    plt.figure(figsize=(15,10))\n",
    "    if feature == 'Postcode':\n",
    "        feature_unique_values = data[feature].unique()\n",
    "    else:\n",
    "        feature_unique_values = sorted(data[feature].unique())\n",
    "    colors = sns.color_palette(\"hls\", len(feature_unique_values))\n",
    "    for i, cat_value in enumerate(feature_unique_values):\n",
    "        plt.scatter(x=data[data[feature] == cat_value]['Longtitude'],\n",
    "                    y=data[data[feature] == cat_value]['Lattitude'], c=colors[i]);\n",
    "    \n",
    "    plt.title(feature)\n",
    "    if feature in ['CouncilArea', 'Regionname']:\n",
    "        plt.legend(feature_unique_values);\n",
    "    plt.grid()\n",
    "    plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3.3 Date feature and its relationships with the target and other features"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "There is a sale date in the dataset. Let's extract the year and month from it and plot prices and sales counts, grouped by year and by month:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "data['Month'] = data.Date.dt.month\n",
    "data['Year'] = data.Date.dt.year"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(18,12))\n",
    "\n",
    "sns.countplot(x='Month',\n",
    "              hue='Year',\n",
    "              data=data);\n",
    "plt.grid()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.4 Conclusions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's return to our assumptions from **p.2** and draw conclusions about them after the visual data analysis:\n",
    "\n",
    "1. It's evident that the features **Rooms, Bedroom2, Bathroom, Car, Landsize, BuildingArea** are directly proportional to house price. Therefore, it is expected that they should have a high correlation with the target.\n",
    "<font color = 'green'> Yes, in general, without grouping by categorical features, this is true. </font>\n",
    " \n",
    "2. The **Date** of sale will probably have a seasonal influence on price: for example, a low season in summer and a high season in winter. \n",
    "<font color = 'red'> No, there is no seasonality within the year. Rather, there is an **uptrend** over the whole period covered by the dataset. </font>\n",
    " \n",
    "3. **Distance** from the CBD may have a complex, non-linear influence on price. On the one hand, the price should be highest in the centre and decrease towards the outskirts. On the other hand, the centre of a big city is noisy and has bad ecology. \n",
    "<font color = 'red'> No, the dependency between **Distance** and **Price** is in fact mostly linear. This is confirmed by the pairplot and the negative correlation coefficient. </font>\n",
    "  \n",
    "4. Similar reasoning applies to **YearBuilt**. On the one hand, the price should be highest for new buildings and houses. On the other hand, very old buildings may be architectural monuments with historical value, so they may have very high prices.\n",
    "<font color = 'green'> Yes, there really is a complex non-linear dependency: the pairplot shows no linear dependency, but overall there is a small negative correlation coefficient. </font>\n",
    " \n",
    "5. The features **Suburb, Postcode, Regionname** characterize a house's location in the city and, as a consequence, the crime situation and transport accessibility. So, these features and their combinations should influence the house price.\n",
    "<font color = 'green'> Yes, prices really do differ across regions. But this is rather a consequence of the **Distance** value. \n",
    " </font>\n",
    "\n",
    "6. **CouncilArea** may characterize the quality of local government work. The degree of well-being depends on this work and, as a consequence, so do house prices in different areas.\n",
    "<font color = 'green'> Yes, prices really do differ across **CouncilArea** values. But just like with **Suburb, Postcode, Regionname**, this is rather a consequence of the **Distance** value. \n",
    " </font>\n",
    "7. The **Type** of property certainly matters, because an own cottage or villa is more expensive than a duplex with neighbours.\n",
    "<font color = 'green'> Absolutely right, prices really do differ by type.\n",
    " </font>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In addition, there is also a significant price difference across sales methods (the **Method** feature)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 4. Patterns, insights, peculiarities of data "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's sum up the conclusions about the data, based on the previous parts."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. Houses of different types have significant price differences. So, we'll take the **Type** feature into account in our prediction model.\n",
    "\n",
    "2. Features that describe the physical parameters of houses - **'Rooms', 'Bedroom2', 'Bathroom', 'Car', 'Landsize', 'BuildingArea'** - have a direct influence on price according to the principle \"the larger the size/count, the more expensive\". So, these features are important.\n",
    "\n",
    "3. There is a mostly linear dependency between **Distance** and **Price**. So, **Distance** is an important feature.\n",
    "\n",
    "4. The features **Suburb, Postcode, Regionname, CouncilArea** characterize house locations. But, as was visible on our \"maps\", **Suburb** and **Postcode** introduce almost the same city divisions, so we don't need both of them. Moreover, they are very fine-grained, and this information may be less useful in our model than the features **Regionname, CouncilArea**, so we'll first try **Regionname, CouncilArea** in our model.\n",
    "\n",
    "5. Unexpectedly, the sales method - the **Method** feature - really matters, and we can observe this influence across all house types. So, it will be important information in our model too.\n",
    "\n",
    "6. The situation with **YearBuilt** is not simple. For houses built before 1950 there is huge price variance. From 1950 to 2013 the average price is much more stable. But new houses built after 2013 have a higher average price. So, this feature is important, but we need to transform it before adding it to our model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 5. Metrics selection"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Price prediction is a **regression** task. In our data the **Price** distribution has a large skewness coefficient. This means there are outliers: a very small share of objects with huge prices compared with most other objects. Our goal is to build a precise model for the bulk of objects with usual prices. As mentioned above, for very expensive houses we would have to build a separate model. \n",
    "\n",
    "So, in the current task, priority in prediction accuracy is given to the main majority of objects. \n",
    "\n",
    "A good metric here is **MAE (Mean Absolute Error)**. Compared with **(R)MSE (Root Mean Squared Error)**, **MAE** is less susceptible to large errors on houses with a very large price (it penalizes large errors less, since only the absolute value of the error is taken, not its square), which corresponds to the conditions of our task: get the most adequate model metric for most objects. Big errors on houses with huge prices will not distort **MAE**, unlike **(R)MSE**."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$$ MAE = \\frac{1}{n} \\sum_{i=1}^n \\mid{y_i - \\widehat{y_i}}\\mid $$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "From an interpretation standpoint, **MAE** is clearly the winner. **(R)MSE** does not describe the average error alone and has other implications that are more difficult to tease out and understand. In our task **MAE** has the interpretation *\"average error in Australian dollars\"*. Such an error is very easy to explain to any buyer or seller."
   ]
  },
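  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The difference is easy to see on a tiny hypothetical example: one huge error on an expensive house inflates RMSE far more than MAE.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "y_true = np.array([500000.0, 600000.0, 550000.0, 5000000.0])\n",
    "y_pred = np.array([520000.0, 590000.0, 560000.0, 3000000.0])\n",
    "\n",
    "errors = np.abs(y_true - y_pred)  # [20000, 10000, 10000, 2000000]\n",
    "mae = errors.mean()\n",
    "rmse = np.sqrt((errors ** 2).mean())\n",
    "# RMSE is roughly twice the MAE here because of the single huge error\n",
    "```"
   ]
  },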
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 6. Model selection"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Most of the features in our data, as shown before, have a linear dependency with **Price**. So, this is a good reason to use a ***linear regression model*** for our task. Despite their simplicity, linear models have several advantages here:\n",
    "\n",
    "1. They fit very fast. \n",
    "2. They are effective with a huge number of features (we have categorical features with many values, so after, for example, one-hot encoding, we'll get hundreds of them).\n",
    "3. They are easy to interpret: a feature's importance is simply the absolute value of its coefficient in the fitted linear model."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We'll use **Lasso regression** from the sklearn module. Lasso has the nice property of selecting features. Just for comparison, we'll also try a **Random Forest**."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 7. Data preprocessing"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Outliers and missing values were processed in Part 2. So in this part we just split our data into train and validation sets and apply one-hot encoding."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 7.1 Split data into train and control parts"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "We have a time dependency in our data, so we have to create the train and control datasets such that the maximum date in the train data is less than or equal to the minimal date in the control data. So we sort the data by date and split it into train and control in the proportion 7/3:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "data_sorted = data.sort_values(by='Date')\n",
    "data_sorted.reset_index(inplace=True, drop=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X = data_sorted.drop('Price', axis=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y = data_sorted['Price']\n",
    "\n",
    "y.reset_index(inplace=True, drop=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "split_index = int(0.7*X.shape[0])\n",
    "\n",
    "# iloc gives a half-open split, so the train and validation sets do not overlap\n",
    "X_train = data_sorted.iloc[:split_index].drop('Price', axis=1)\n",
    "X_valid = data_sorted.iloc[split_index:].drop('Price', axis=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_train = y.iloc[:split_index]\n",
    "y_valid = y.iloc[split_index:]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Check sizes of datasets\n",
    "X_train.shape, X_valid.shape, y_train.shape, y_valid.shape"
   ]
  },
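  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The key property of such a split is that every training date precedes (or equals) every validation date. A toy check of that property (hypothetical frame, not our data):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "toy = pd.DataFrame({'Date': pd.to_datetime(['2016-01-01', '2016-05-01',\n",
    "                                            '2017-02-01', '2017-08-01']),\n",
    "                    'Price': [1.0, 2.0, 3.0, 4.0]})\n",
    "split = int(0.7 * len(toy))  # 2 for this 4-row toy frame\n",
    "train, valid = toy.iloc[:split], toy.iloc[split:]\n",
    "leak_free = train['Date'].max() <= valid['Date'].min()\n",
    "```"
   ]
  },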
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 7.2 One-hot encoding"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Instead of sklearn's OneHotEncoder, let's use our own function based on `pd.get_dummies`. It is more convenient here because our categorical features contain string values (older versions of OneHotEncoder worked only with integer categorical features).\n",
    "\n",
    "First of all, let's encode only the categorical features with a small number of distinct values and drop the others. Let that be our baseline."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def make_one_hot_encoding(X, features):\n",
    "    X_ohe = pd.get_dummies(data=X, columns=features)\n",
    "    return X_ohe"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#  Cat.features only with small categorical values\n",
    "ohe_features = ['Type', 'Method', 'CouncilArea', 'Regionname']\n",
    "\n",
    "X_ohe_train = make_one_hot_encoding(X_train, features=ohe_features)\n",
    "X_ohe_valid = make_one_hot_encoding(X_valid, features=ohe_features)"
   ]
  },
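  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One caveat when applying `pd.get_dummies` to train and validation separately: a category seen in only one of them produces mismatched columns. A sketch of aligning them (toy frames with hypothetical categories):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "train = pd.DataFrame({'Type': ['h', 'u', 't']})\n",
    "valid = pd.DataFrame({'Type': ['h', 'u']})  # category 't' is absent here\n",
    "\n",
    "train_ohe = pd.get_dummies(train, columns=['Type'])\n",
    "valid_ohe = pd.get_dummies(valid, columns=['Type'])\n",
    "# Reindex validation to the training columns, filling missing dummies with 0\n",
    "valid_ohe = valid_ohe.reindex(columns=train_ohe.columns, fill_value=0)\n",
    "```"
   ]
  },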
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_ohe_train.drop(['Suburb', 'SellerG', 'Date','Postcode','Lattitude', \n",
    "            'Longtitude', 'Propertycount', 'Street', 'HouseNumber'], axis=1, inplace=True)\n",
    "\n",
    "X_ohe_valid.drop(['Suburb', 'SellerG', 'Date','Postcode','Lattitude', \n",
    "            'Longtitude', 'Propertycount', 'Street', 'HouseNumber'], axis=1, inplace=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_ohe_train.drop('YearBuilt', axis=1, inplace=True)\n",
    "X_ohe_valid.drop('YearBuilt', axis=1, inplace=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_ohe_train.shape, X_ohe_valid.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 7.3 Standardization, Pipeline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When we use a linear model, **it's necessary to standardize** our data. Let's use **StandardScaler**:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.preprocessing import StandardScaler\n",
    "\n",
    "scaler = StandardScaler()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It's convenient to combine the scaler and the model in a **Pipeline**:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.pipeline import Pipeline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Also, let's define functions to convert our target to the log scale and back:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def to_log(y):\n",
    "    return np.log(1 + y)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def from_log(y):\n",
    "    return np.exp(y) - 1"
   ]
  },
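  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check (on toy numbers) that the two transforms are mutually inverse:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sanity check: from_log inverts to_log up to floating point error\n",
    "check = np.array([100000.0, 750000.0, 2.5e6])\n",
    "np.allclose(from_log(to_log(check)), check)"
   ]
  },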
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 8. Cross-validation and adjustment of model hyperparameters"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When using Lasso, the following optimization problem is solved:\n",
    "\n",
    "$$ \\sum_{i=1}^l \\Big( \\sum_{j=1}^n w_j x_{ij} - y_i \\Big)^2 + \n",
    "\\lambda \\sum_{j=1}^n \\mid w_j \\mid \\longrightarrow \\min_{w}\n",
    "$$\n",
    "\n",
    "where $\\lambda$ is the **regularization hyperparameter**. When $\\lambda$ is small, the weight vector can have a large **$l1$-norm**, i.e. large coefficients at the feature values $x_{ij}$, and, as a consequence, the model will be very unstable (it will have **high variance**). As $\\lambda$ grows, the weights get zeroed out one by one, and the model becomes more stable but acquires a high **bias**.\n",
    "\n",
    "So, our task is to find the optimal $\\lambda$ that provides the best quality on cross-validation."
   ]
  },
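  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch (on toy data, not our dataset) of this sparsifying effect: as `alpha` (sklearn's name for $\\lambda$) grows, Lasso zeroes out more and more coefficients."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy illustration: larger alpha -> more zeroed coefficients (sparser model)\n",
    "from sklearn.linear_model import Lasso\n",
    "\n",
    "rng = np.random.RandomState(42)\n",
    "X_toy = rng.randn(100, 5)\n",
    "y_toy = X_toy[:, 0] + 0.1 * X_toy[:, 1] + 0.01 * rng.randn(100)\n",
    "for alpha in [0.001, 0.1, 1.0]:\n",
    "    n_zero = (Lasso(alpha=alpha).fit(X_toy, y_toy).coef_ == 0).sum()\n",
    "    print('alpha =', alpha, '-> zero coefficients:', n_zero)"
   ]
  },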
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For **RandomForest** we'll tune the **max_depth** hyperparameter. When it isn't restricted, the trees in the forest can grow very deep and complex, which can lead to overfitting.\n",
    "\n",
    "*The symptom of overfitting is a small error on the training set and a large error on the validation set.*"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's import the linear model **Lasso** (we want it to perform feature selection), **RandomForestRegressor** as a reference model, and the function for MAE calculation:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.metrics import mean_absolute_error as mae\n",
    "\n",
    "from sklearn.linear_model import Lasso\n",
    "from sklearn.ensemble import RandomForestRegressor\n",
    "from sklearn.model_selection import TimeSeriesSplit, GridSearchCV\n",
    "from sklearn.pipeline import Pipeline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Create the model object and a **Pipeline** that performs scaling and model fitting in one step.\n",
    "Note that we use **TimeSeriesSplit** cross-validation!\n",
    "\n",
    "This allows us to take the time dependency in our data into account."
   ]
  },
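  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see how **TimeSeriesSplit** respects the time order, here is a toy illustration of the indices it produces: the train part only grows forward, and each test part comes strictly after it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy illustration of TimeSeriesSplit on 8 'observations'\n",
    "for train_idx, test_idx in TimeSeriesSplit(n_splits=3).split(np.arange(8)):\n",
    "    print('train:', train_idx, 'test:', test_idx)"
   ]
  },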
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lasso = Lasso(random_state=42)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "params_grid = {'lasso__alpha': np.logspace(-4, 4, 10)}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "pipe = Pipeline(steps=[('scaler', scaler), ('lasso', lasso)])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model_grid = GridSearchCV(pipe, \n",
    "                          params_grid, \n",
    "                          cv=TimeSeriesSplit(max_train_size=None, n_splits=5))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "with warnings.catch_warnings():\n",
    "    warnings.simplefilter(\"ignore\")\n",
    "    model_grid.fit(X_ohe_train, to_log(y_train))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's visualize the coefficients of our features (I'm using the code from the lesson 4 article):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def visualize_coefficients(classifier, feature_names, n_top_features=25):\n",
    "    # get coefficients with large absolute values \n",
    "    coef = classifier.coef_.ravel()\n",
    "    positive_coefficients = np.argsort(coef)[-n_top_features:]\n",
    "    negative_coefficients = np.argsort(coef)[:n_top_features]\n",
    "    interesting_coefficients = np.hstack([negative_coefficients, positive_coefficients])\n",
    "    # plot them\n",
    "    plt.figure(figsize=(18, 8))\n",
    "    colors = [\"red\" if c < 0 else \"blue\" for c in coef[interesting_coefficients]]\n",
    "    plt.bar(np.arange(2 * n_top_features)+1, coef[interesting_coefficients], color=colors)\n",
    "    feature_names = np.array(feature_names)\n",
    "    \n",
    "    plt.xticks(np.arange(1, 1 + 2 * n_top_features), \n",
    "               feature_names[interesting_coefficients], rotation=60, ha=\"right\");\n",
    "    plt.xlabel(\"Feature name\")\n",
    "    plt.ylabel(\"Feature weight\")\n",
    "    plt.title(\"LASSO feature importances\")\n",
    "    plt.grid()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "visualize_coefficients(model_grid.best_estimator_.steps[1][1], \n",
    "                       X_ohe_train.columns, \n",
    "                       n_top_features=20)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Make predictions on the train and validation data:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_pred_train = from_log(model_grid.predict(X_ohe_train))\n",
    "y_pred_valid = from_log(model_grid.predict(X_ohe_valid))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's make a plot where every object has x-coordinate $y$ (true value) and y-coordinate $\\widehat{y}$ (predicted value); we also show the MAE for the train and validation sets and the chosen $\\lambda$. Clearly, for a good model the points must lie near the diagonal line:\n",
    "\n",
    "(*Hereinafter we'll make this plot only for objects with prices below the 99% quantile*)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(12,12))\n",
    "plt.xlim((0, y.quantile(0.99)))\n",
    "plt.ylim((0, y.quantile(0.99)))\n",
    "plt.scatter(y=y_pred_train,  x=y_train, c='blue', s=7)\n",
    "plt.scatter(y=y_pred_valid,  x=y_valid, c='green', s=7)\n",
    "plt.plot([0,3e6],[0, 3e6], 'r-')\n",
    "plt.legend(['\"Ideal model\" diagonal','Predicted train data','Predicted validation data'])\n",
    "plt.text(s=\" MAE_train = {0:.2f}\".format(mae(y_pred_train, y_train)), x=1e5, y=3e6)\n",
    "plt.text(s=\" MAE_valid = {0:.2f}\".format(mae(y_pred_valid, y_valid)),  x=1e5, y=2.9e6)\n",
    "plt.text(s=\"\"\"$ \\\\lambda = {0:.4f} $\"\"\".format(model_grid.best_estimator_.steps[1][1].alpha), x=1e5, y=2.8e6)\n",
    "plt.title(\"True VS Predicted values (LASSO)\")\n",
    "plt.grid()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "So, we got $MAE_{train} = 219484$ and $MAE_{valid} = 216917$.\n",
    "\n",
    "Let's try to do better by creating some new features."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 9. Creation of new features and description of this process"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "Let's create separate datasets to add features to them:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "split_index = int(0.7*data.shape[0])\n",
    "\n",
    "data_sorted = data.sort_values(by='Date')\n",
    "data_sorted.reset_index(inplace=True, drop=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train_2 = data_sorted.loc[: split_index, :]\n",
    "X_valid_2 = data_sorted.loc[split_index:, :]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_train_2 = X_train_2['Price']\n",
    "y_valid_2 = X_valid_2['Price']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Check shapes\n",
    "X_train_2.shape, X_valid_2.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "#### 9.1 Features from YearBuilt"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "Let's split **YearBuilt** into 3 ranges according to the differences in average **Price**, which we can see in the distribution plots from **part 3.2**. Let's also add a flag for the year 1970, because a huge number of houses were built in that year:"
   ]
  },
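  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a side note, the same three ranges could also be produced with `pd.cut`; here is a sketch on toy years, using the same thresholds (1942 and 2012):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: equivalent binning with pd.cut (right bin edges are inclusive)\n",
    "years = pd.Series([1910, 1942, 1970, 2013])\n",
    "pd.cut(years, bins=[-np.inf, 1941, 2012, np.inf], labels=['old', 'middle', 'new'])"
   ]
  },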
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Year Built\n",
    "\n",
    "X_train_2['old'], X_train_2['middle'], X_train_2['new'] = 0, 0, 0\n",
    "X_valid_2['old'], X_valid_2['middle'], X_valid_2['new'] = 0, 0, 0\n",
    "\n",
    "X_train_2.loc[X_train_2['YearBuilt'] < 1942, 'old'] = 1\n",
    "X_train_2.loc[(X_train_2['YearBuilt'] >= 1942) & (X_train_2['YearBuilt'] <= 2012), 'middle'] = 1\n",
    "X_train_2.loc[X_train_2['YearBuilt'] > 2012, 'new'] = 1\n",
    "\n",
    "X_valid_2.loc[X_valid_2['YearBuilt'] < 1942, 'old'] = 1\n",
    "X_valid_2.loc[(X_valid_2['YearBuilt'] >= 1942) & (X_valid_2['YearBuilt'] <= 2012), 'middle'] = 1\n",
    "X_valid_2.loc[X_valid_2['YearBuilt'] > 2012, 'new'] = 1\n",
    "\n",
    "X_train_2['1970'], X_valid_2['1970'] = 0, 0\n",
    "X_train_2.loc[X_train_2['YearBuilt'] == 1970, '1970'] = 1\n",
    "X_valid_2.loc[X_valid_2['YearBuilt'] == 1970, '1970'] = 1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "Also, let's find the most and the least expensive years of construction (ranked by average **Price**). For reliability, we'll consider only years with more than 10 objects built in them:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "count_by_year_built = X_train_2\\\n",
    "                    .groupby('YearBuilt')['Suburb']\\\n",
    "                    .count()\\\n",
    "                    .reset_index()\\\n",
    "                    .rename(columns={'Suburb':'HousesBuiltInYear'})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "mean_price_by_year_built = X_train_2.groupby('YearBuilt')['Price'].mean().reset_index()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "mean_price_by_year_built = pd.merge(mean_price_by_year_built, \n",
    "                                    count_by_year_built, on=['YearBuilt'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The most expensive and cheapest houses by YearBuilt\n",
    "\n",
    "top_year_built_by_price = mean_price_by_year_built\\\n",
    "                            .query(\"HousesBuiltInYear > 10\")\\\n",
    "                            .sort_values(by=['Price'], ascending=False)\\\n",
    "                            .head(10)['YearBuilt']\\\n",
    "                            .tolist()\n",
    "\n",
    "last_year_built_by_price = mean_price_by_year_built\\\n",
    "                            .query(\"HousesBuiltInYear > 10\")\\\n",
    "                            .sort_values(by=['Price'], ascending=False)\\\n",
    "                            .tail(10)['YearBuilt']\\\n",
    "                            .tolist()\n",
    "\n",
    "X_train_2['TopYearBuilt'], X_train_2['LastYearBuilt'] = 0, 0\n",
    "X_valid_2['TopYearBuilt'], X_valid_2['LastYearBuilt'] = 0, 0\n",
    "\n",
    "X_train_2.loc[X_train_2['YearBuilt'].isin(top_year_built_by_price), 'TopYearBuilt'] = 1\n",
    "X_valid_2.loc[X_valid_2['YearBuilt'].isin(top_year_built_by_price), 'TopYearBuilt'] = 1\n",
    "\n",
    "X_train_2.loc[X_train_2['YearBuilt'].isin(last_year_built_by_price), 'LastYearBuilt'] = 1\n",
    "X_valid_2.loc[X_valid_2['YearBuilt'].isin(last_year_built_by_price), 'LastYearBuilt'] = 1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "#### 9.2 Features from Streets"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's repeat the same procedure as above for **Street**:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The most expensive and cheapest houses by Streets\n",
    "\n",
    "price_by_street = X_train_2.groupby('Street')['Price'].mean().reset_index()\n",
    "count_by_street = X_train_2.groupby('Street')['Suburb'].count().reset_index().rename(columns={'Suburb':'HousesCount'})\n",
    "\n",
    "price_by_street = pd.merge(price_by_street, count_by_street, on=['Street'])\n",
    "\n",
    "top_streets = price_by_street\\\n",
    "                .query(\"HousesCount > 10\")\\\n",
    "                .sort_values(by=['Price'], ascending=False)\\\n",
    "                .head(50)['Street']\\\n",
    "                .tolist()\n",
    "\n",
    "last_streets = price_by_street\\\n",
    "                .query(\"HousesCount > 10\")\\\n",
    "                .sort_values(by=['Price'], ascending=False)\\\n",
    "                .tail(50)['Street']\\\n",
    "                .tolist()\n",
    "\n",
    "X_train_2['TopStreet'], X_train_2['LastStreet'] = 0, 0\n",
    "X_valid_2['TopStreet'], X_valid_2['LastStreet'] = 0, 0\n",
    "\n",
    "X_train_2.loc[X_train_2['Street'].isin(top_streets), 'TopStreet'] = 1\n",
    "X_valid_2.loc[X_valid_2['Street'].isin(top_streets), 'TopStreet'] = 1\n",
    "\n",
    "X_train_2.loc[X_train_2['Street'].isin(last_streets), 'LastStreet'] = 1\n",
    "X_valid_2.loc[X_valid_2['Street'].isin(last_streets), 'LastStreet'] = 1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "#### 9.3 One-hot encoding for the other categorical features and dropping unused columns"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#  Cat.features\n",
    "ohe_features_2 = ['Type', 'Method', 'CouncilArea', 'Regionname']\n",
    "\n",
    "X_ohe_train_2 = make_one_hot_encoding(X_train_2, features=ohe_features_2)\n",
    "X_ohe_valid_2 = make_one_hot_encoding(X_valid_2, features=ohe_features_2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_ohe_train_2.shape, X_ohe_valid_2.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cols_to_drop = ['Suburb','Price', 'SellerG', 'Date', 'Postcode','Lattitude', \n",
    "                'Longtitude', 'Propertycount', 'Street', 'HouseNumber', 'YearBuilt', \n",
    "                'Month', 'Year']\n",
    "\n",
    "X_ohe_train_2.drop(cols_to_drop, axis=1, inplace=True)\n",
    "X_ohe_valid_2.drop(cols_to_drop, axis=1, inplace=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_ohe_train_2.shape, X_ohe_valid_2.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's prepare copies of the datasets with the new features for RandomForest:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Save copy for test RandomForestRegressor\n",
    "X_ohe_train_rf = X_ohe_train_2.copy()\n",
    "X_ohe_valid_rf = X_ohe_valid_2.copy()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "#### 9.4 Polynomial features"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "While linear regression is just a linear combination of features, **polynomial regression** additionally allows a linear combination of feature values raised to various degrees and multiplied together. This trick lets us build more complex, non-linear models using the same linear machinery and recover more complex dependencies between the features and the target.\n",
    "\n",
    "Let's try adding degree-2 polynomial features for some of the source features:"
   ]
  },
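  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A toy illustration of what `PolynomialFeatures(degree=2, interaction_only=True)` generates: the original columns plus their pairwise products (no squares, since `interaction_only=True`)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy illustration: 3 input columns -> 3 originals + 3 pairwise products\n",
    "from sklearn.preprocessing import PolynomialFeatures\n",
    "\n",
    "PolynomialFeatures(degree=2, include_bias=False,\n",
    "                   interaction_only=True).fit_transform(np.array([[1.0, 2.0, 3.0]]))"
   ]
  },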
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.preprocessing import PolynomialFeatures\n",
    "\n",
    "\n",
    "features_to_poly = ['Regionname', 'Type', 'Street', 'Method', 'YearBuilt']\n",
    "cols_to_poly = []\n",
    "for col_name in X_ohe_train_2.columns:\n",
    "    for f in features_to_poly:\n",
    "        if f in col_name:\n",
    "            cols_to_poly.append(col_name)\n",
    "\n",
    "poly_generator = PolynomialFeatures(degree=2, include_bias=False, interaction_only=True)\n",
    "\n",
    "X_train_poly_2 = poly_generator.fit_transform(X_ohe_train_2[cols_to_poly])\n",
    "X_valid_poly_2 = poly_generator.transform(X_ohe_valid_2[cols_to_poly])\n",
    "\n",
    "X_ohe_train_2 = np.hstack([X_ohe_train_2.drop(cols_to_poly, axis=1), X_train_poly_2])\n",
    "X_ohe_valid_2 = np.hstack([X_ohe_valid_2.drop(cols_to_poly, axis=1), X_valid_poly_2])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_ohe_train_2.shape, X_ohe_valid_2.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "with warnings.catch_warnings():\n",
    "    warnings.simplefilter(\"ignore\")\n",
    "    model_grid.fit(X_ohe_train_2, to_log(y_train_2))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Again, make predictions on the train and validation datasets and plot the real vs. predicted prices:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_pred_train_2 = from_log(model_grid.predict(X_ohe_train_2))\n",
    "y_pred_valid_2 = from_log(model_grid.predict(X_ohe_valid_2))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(12,12))\n",
    "plt.xlim((0, y.quantile(0.99)))\n",
    "plt.ylim((0, y.quantile(0.99)))\n",
    "plt.scatter(y=y_pred_train_2,  x=y_train, c='blue', s=7)\n",
    "plt.scatter(y=y_pred_valid_2,  x=y_valid, c='green', s=7)\n",
    "plt.plot([0,3e6],[0, 3e6], 'r-')\n",
    "plt.legend(['\"Ideal model\" diagonal','Predicted train data','Predicted validation data'])\n",
    "plt.text(s=\" MAE_train = {0:.2f}\".format(mae(y_pred_train_2, y_train)), x=1e5, y=3e6)\n",
    "plt.text(s=\" MAE_test = {0:.2f}\".format(mae(y_pred_valid_2, y_valid)),  x=1e5, y=2.9e6)\n",
    "plt.text(s=\"\"\"$ \\\\lambda = {0:.4f} $\"\"\".format(model_grid.best_estimator_.steps[1][1].alpha), x=1e5, y=2.8e6)\n",
    "plt.title(\"True VS Predicted values (LASSO with additional features)\")\n",
    "plt.grid()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "So, now we got $MAE_{train} = 211470$ and $MAE_{valid} = 208750$.\n",
    "\n",
    "We see that the error on the validation set decreased by **3.8%**."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's try RandomForest with default parameters on the same features (but without the polynomial features):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "rf = RandomForestRegressor(n_estimators=300, random_state=42).fit(X_ohe_train_rf, to_log(y_train_2))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_pred_train_rf = from_log(rf.predict(X_ohe_train_rf))\n",
    "y_pred_valid_rf = from_log(rf.predict(X_ohe_valid_rf))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(12,12))\n",
    "plt.xlim((0, y.quantile(0.99)))\n",
    "plt.ylim((0, y.quantile(0.99)))\n",
    "plt.scatter(y=y_pred_train_rf,  x=y_train, c='blue', s=7)\n",
    "plt.scatter(y=y_pred_valid_rf,  x=y_valid, c='green', s=7)\n",
    "plt.plot([0,3e6],[0, 3e6], 'r-')\n",
    "plt.legend(['\"Ideal model\" diagonal','Predicted train data','Predicted validation data'])\n",
    "plt.text(s=\" MAE_train = {0:.2f}\".format(mae(y_pred_train_rf, y_train)), x=1e5, y=3e6)\n",
    "plt.text(s=\" MAE_test = {0:.2f}\".format(mae(y_pred_valid_rf, y_valid)),  x=1e5, y=2.9e6)\n",
    "plt.text(s=\"max_depth = {0} \".format(rf.max_depth), x=1e5, y=2.8e6)\n",
    "plt.title(\"True VS Predicted values (RandomForest)\")\n",
    "plt.grid()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "We got an **overfitted model**: the error on the training set is more than twice(!) as small as the error on the validation set.\n",
    "\n",
    "So, let's try to tune the hyperparameters of RandomForest using cross-validation to get better quality."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 10. Plotting training and validation curves"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We'll cross-validate RandomForest using the **cross_val_score** function with the **TimeSeriesSplit** strategy and n_splits=5. We'll also use our own **scorer**, a wrapper around the MAE function, to pass into cross_val_score. The parameter we tune is **max_depth**."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.model_selection import cross_val_score"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.metrics import make_scorer\n",
    "\n",
    "# Create scorer with our MAE-function\n",
    "scorer = make_scorer(mae, greater_is_better=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# MAE-function for logarithmic inputs\n",
    "def mae_score(y_true, y_pred):\n",
    "    return mae(from_log(y_true), from_log(y_pred))"
   ]
  },
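  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick check (on toy numbers) that `mae_score` reports the error in original price units, even though its inputs are on the log scale:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy check: the reported error is in dollars, not log-dollars\n",
    "y_a, y_b = np.array([1.0e6, 2.0e6]), np.array([1.1e6, 2.0e6])\n",
    "mae_score(to_log(y_a), to_log(y_b))  # about 50000"
   ]
  },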
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "scorer = make_scorer(mae_score)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# List of params values\n",
    "max_depth_list = [10, 12, 13, 15, 17, 20, 25]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "During cross-validation, for every parameter value **TimeSeriesSplit** produces 5 train/test splits of our training dataset: the model is fitted on the earlier part and evaluated on the later part of each split. We average the 5 fold errors to get **'Cross validation MAE on train'** for the current max_depth value. Then we fit the model on the full training set, make a prediction on the validation set with the same max_depth value and calculate **'MAE on validation set'**. A prediction *on the full training set* (without splitting into folds) gives us **'MAE on train set'**."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "cv_errors_list = []\n",
    "train_errors_list = []\n",
    "valid_errors_list = []\n",
    "\n",
    "for max_depth in max_depth_list:\n",
    "    rf = RandomForestRegressor(n_estimators=300, max_depth=max_depth,random_state=42)\n",
    "\n",
    "    \n",
    "    cv_errors = cross_val_score(estimator=rf, \n",
    "                                  X=X_ohe_train_rf, \n",
    "                                  y=to_log(y_train_2), \n",
    "                                  scoring=scorer,\n",
    "                                  cv=TimeSeriesSplit(n_splits=5))  \n",
    "    cv_errors_list.append(cv_errors.mean())\n",
    "    \n",
    "    rf.fit(X=X_ohe_train_rf, y=to_log(y_train_2))\n",
    "    \n",
    "    valid_error = mae_score(to_log(y_valid_2), rf.predict(X_ohe_valid_rf))    \n",
    "    valid_errors_list.append(valid_error)\n",
    "    \n",
    "    train_error = mae_score(to_log(y_train_2), rf.predict(X_ohe_train_rf))\n",
    "    train_errors_list.append(train_error)\n",
    "    \n",
    "    print(max_depth)\n",
    "    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let's plot how these errors depend on the **max_depth** parameter:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(10, 7))\n",
    "\n",
    "plt.plot(max_depth_list,cv_errors_list)\n",
    "plt.plot(max_depth_list,train_errors_list)\n",
    "plt.plot(max_depth_list,valid_errors_list)\n",
    "plt.vlines(x=max_depth_list[np.array(cv_errors_list).argmin()], \n",
    "           ymin=0, ymax=2e5, \n",
    "           linestyles='dashed', colors='r')\n",
    "\n",
    "plt.legend(['Cross validation MAE on train', \n",
    "            'MAE on train set', \n",
    "            'MAE on validation set', \n",
    "            'Best Max_depth value on CV'])\n",
    "plt.title(\"MAE on train and validation sets.\")\n",
    "plt.xlabel('Max_depth value')\n",
    "plt.ylabel('MAE value')\n",
    "plt.grid()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It's clear that in the beginning all errors decrease. But after a certain **max_depth** value, **'Cross validation MAE on train'** and **'MAE on validation set'** start to grow. The minimum of these errors is achieved at the **max_depth** marked by the red dashed line, max_depth=15."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Best max_depth found on cross-validation\n",
    "best_max_depth = max_depth_list[np.array(cv_errors_list).argmin()]\n",
    "best_max_depth"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For comparison, the `rf` object left over from the loop holds the last and deepest value we tried, **max_depth = 25**:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rf.max_depth"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 11. Prediction for test or hold-out samples"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, let's fit RandomForestRegressor with the best **max_depth** value found via cross-validation. We'll use the same datasets."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rf_best = RandomForestRegressor(n_estimators=300, max_depth=best_max_depth, random_state=42)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rf_best.fit(X_ohe_train_rf, to_log(y_train_2))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As before, let's make predictions and plot the real and predicted values for the train and validation datasets:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_pred_train_rf_best = from_log(rf_best.predict(X_ohe_train_rf))\n",
    "y_pred_valid_rf_best = from_log(rf_best.predict(X_ohe_valid_rf))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(12,12))\n",
    "plt.xlim((0, y.quantile(0.99)))\n",
    "plt.ylim((0, y.quantile(0.99)))\n",
    "plt.scatter(y=y_pred_train_rf_best,  x=y_train, c='blue', s=7)\n",
    "plt.scatter(y=y_pred_valid_rf_best,  x=y_valid, c='green', s=7)\n",
    "plt.plot([0,3e6],[0, 3e6], 'r-')\n",
    "plt.legend(['\"Ideal model\" diagonal','Predicted train data','Predicted validation data'])\n",
    "plt.text(s=\" MAE_train = {0:.2f}\".format(mae(y_pred_train_rf_best, y_train)), x=1e5, y=3e6)\n",
    "plt.text(s=\" MAE_test = {0:.2f}\".format(mae(y_pred_valid_rf_best, y_valid)),  x=1e5, y=2.9e6)\n",
    "plt.text(s=\"max_depth = {0} \".format(rf_best.max_depth), x=1e5, y=2.8e6)\n",
    "plt.title(\"True VS Predicted values (RandomForest after max_depth tuning)\")\n",
    "plt.grid()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So, the MAE on the training set increased significantly, while the MAE on the validation set slightly decreased. That means our model is not as overfitted as before and generalizes better."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 12. Conclusions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "This solution may be useful for **real estate agencies** that collect such data and try to predict the most adequate prices for properties. It's important because it allows them to sell/buy properties as soon as possible, without losing money and while keeping customer loyalty.\n",
    "\n",
    "Possible ways to improve the model are creating more useful features and experimenting with other types of models, for example gradient boosting (XGBoost, LightGBM, CatBoost)."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
