{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<center>\n",
    "<img src=\"../../img/ods_stickers.jpg\" />\n",
    "    \n",
    "## [mlcourse.ai](https://mlcourse.ai) – Open Machine Learning Course \n",
    "### <center> Author: I_Kalininskii, Kiavip at ODS Slack \n",
    "    \n",
    "## <center> Individual project\n",
    "### <center> \"Sberbank Russian Housing Market\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "[Kaggle competition overview](#Kaggle-competition-overview)<br>\n",
    "[Main assumptions](#Main-assumptions)<br>\n",
    "[Part 1. Dataset and features description](#Part-1.-Dataset-and-features-description)<br>\n",
    "[Part 2. Exploratory data analysis](#Part-2.-Exploratory-data-analysis)<br>\n",
    "[Part 3. Visual analysis of the features](#Part-3.-Visual-analysis-of-the-features)<br>\n",
    "[Part 4. Patterns, insights, peculiarities of data](#Part-4.-Patterns,-insights,-pecularities-of-data)<br>\n",
    "[Part 5. Data preprocessing](#Part-5.-Data-preprocessing)<br>\n",
    "[Part 6. Feature engineering and description](#Part-6.-Feature-engineering-and-description)<br>\n",
    "[Part 7. Cross-validation, hyperparameter tuning](#Part-7.-Cross-validation,-hyperparameter-tuning)<br>\n",
    "[Part 8. Validation and learning curves](#Part-8.-Validation-and-learning-curves)<br>\n",
    "[Part 9. Prediction for hold-out and test samples](#Part-9.-Prediction-for-hold-out-and-test-samples)<br>\n",
    "[Part 10. Model evaluation with metrics description](#Part-10.-Model-evaluation-with-metrics-description)<br>\n",
    "[Part 11. Conclusions](#Part-11.-Conclusions)<br>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Kaggle competition overview\n",
    "\n",
    "Housing costs demand a significant investment from both consumers and developers. And when it comes to planning a budget—whether personal or corporate—the last thing anyone needs is uncertainty about one of their biggest expenses. Sberbank, Russia’s oldest and largest bank, helps their customers by making predictions about realty prices so renters, developers, and lenders are more confident when they sign a lease or purchase a building.\n",
    "\n",
    "Although the housing market is relatively stable in Russia, the country’s volatile economy makes forecasting prices as a function of apartment characteristics a unique challenge. Complex interactions between housing features such as number of bedrooms and location are enough to make pricing predictions complicated. Adding an unstable economy to the mix means Sberbank and their customers need more than simple regression models in their arsenal.\n",
    "\n",
    "In this competition, Sberbank is challenging Kagglers to develop algorithms which use a broad spectrum of features to predict realty prices. Competitors will rely on a rich dataset that includes housing data and macroeconomic patterns. An accurate forecasting model will allow Sberbank to provide more certainty to their customers in an uncertain economy."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Project data\n",
    "\n",
    "The datasets and their description are hosted on Kaggle, which is a safer source than any other location I could offer.<br>\n",
    "__[Download all from Kaggle](https://www.kaggle.com/c/6392/download-all)__<br>\n",
    "__[Kaggle competition Data page](https://www.kaggle.com/c/sberbank-russian-housing-market/data)__"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Main assumptions\n",
    "\n",
    "I'm interested in Sberbank's strategy and in Moscow realty prices, so I'll try to learn as much as I can from this project.\n",
    "\n",
    "I'll examine the most obvious features in the datasets, then compare the train and test sets to see whether they are comparable. I'll also try to enhance the model by using macroeconomic features, as well as by cleaning the data and engineering features.\n",
    "\n",
    "As a model, I'll use LightGBM. It's new to me, and I want to learn how to build robust predictions using gradient boosting. I considered using an ensemble, but ensembling would take more time, so I'd better stick to a single model.\n",
    "\n",
    "As a metric, I'll use RMSLE, as it is the one set for this competition."
   ]
  },
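  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, a minimal sketch of the competition metric, assuming the standard RMSLE definition. It equals RMSE computed on log1p-transformed values, which is why training on the log target matches the metric."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def rmsle(y_true, y_pred):\n",
    "    # sqrt(mean((log1p(pred) - log1p(true))^2))\n",
    "    return np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))"
   ]
  },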
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "import pylab\n",
    "import calendar\n",
    "from datetime import datetime\n",
    "from scipy import stats\n",
    "import seaborn as sn\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline\n",
    "\n",
    "import lightgbm as lgb\n",
    "from sklearn.metrics import mean_squared_error"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 1. Dataset and features description"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here I'm going to read the data from CSV files. There are three files:\n",
    "1. train.csv: data and target variable for model training\n",
    "2. test.csv: data on which the trained model predicts the target variable\n",
    "3. macro.csv: additional data for both training and prediction\n",
    "\n",
    "The features are described in data_dictionary.txt; the description is quite long, so I won't copy it here. I'll describe groups of columns later."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df = pd.read_csv(\"train.csv\", parse_dates=['timestamp'])\n",
    "test_df = pd.read_csv(\"test.csv\", parse_dates=['timestamp'])\n",
    "macro_df = pd.read_csv(\"macro.csv\", parse_dates=['timestamp'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(train_df.shape, test_df.shape, macro_df.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "I experienced some trouble with the *timestamp* column in **macro_df**. It seems no one else ran into this, but I need to cast it to *datetime64[ns]* or a compatible type."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "macro_df['timestamp'] = pd.to_datetime(macro_df['timestamp'])\n",
    "macro_df['timestamp'].dtype"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df.head().T"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**train_df** is a dataframe containing 292 features. Exploratory data analysis will come later; for now, only a brief note on the features as described in the competition annotation:\n",
    "\n",
    "*price_doc*: sale price (this is the target variable)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "test_df.head().T"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The same columns as in **train_df**, but without the target."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "macro_df.head().T"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**macro_df** contains 99 features on Russia's macroeconomy and financial sector, plus a *timestamp* column for joining to **train_df** and **test_df**. These stats are collected daily, so 2,485 days is approximately 7 years."
   ]
  },
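  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of how the macro features can be joined onto the housing data via the shared *timestamp* key. Toy frames stand in for the real dataframes here, and *usdrub* is just an example macro column:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# toy stand-ins for train_df and macro_df sharing the 'timestamp' join key\n",
    "sales = pd.DataFrame({'timestamp': pd.to_datetime(['2014-01-01', '2014-01-02']),\n",
    "                      'price_doc': [5000000, 6000000]})\n",
    "macro = pd.DataFrame({'timestamp': pd.to_datetime(['2014-01-01', '2014-01-02']),\n",
    "                      'usdrub': [33.0, 33.1]})\n",
    "# a left join keeps every sale and attaches that day's macro indicators\n",
    "merged = sales.merge(macro, on='timestamp', how='left')\n",
    "merged.shape"
   ]
  },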
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 2. Exploratory data analysis"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "I will explore the target variable **price_doc**. Prices can be quite skewed, so I'll apply a logarithm to see whether the result looks like a normal distribution."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df['price_doc'].describe()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The target looks sensible: the 75th percentile is 8.3 million, which is reasonable. There are some huge prices, but of course a realty market may contain such offers.\n",
    "\n",
    "I'll compute skewness and kurtosis."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(\"Skewness: %f\" % train_df['price_doc'].skew())\n",
    "print(\"Kurtosis: %f\" % train_df['price_doc'].kurt())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df['price_doc_log'] = np.log1p(train_df['price_doc'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(\"Skewness: %f\" % train_df['price_doc_log'].skew())\n",
    "print(\"Kurtosis: %f\" % train_df['price_doc_log'].kurt())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The skewness is closer to zero and the kurtosis is lower than for the original *price_doc*. I think the logarithmic target will be easier to predict."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fig,axes = plt.subplots(ncols=2)\n",
    "fig.set_size_inches(20,10)\n",
    "sn.kdeplot(data=train_df[\"price_doc\"], color=\"r\", shade=True,ax=axes[0])\n",
    "sn.kdeplot(data=train_df[\"price_doc_log\"], color=\"r\", shade=True,ax=axes[1])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fig,axes = plt.subplots(ncols=2)\n",
    "fig.set_size_inches(20,10)\n",
    "sn.kdeplot(data=train_df[(train_df[\"price_doc\"]>1e6) & (train_df[\"price_doc\"]!=2e6) & (train_df[\"price_doc\"]!=3e6)][\"price_doc\"], color=\"r\", shade=True,ax=axes[0])\n",
    "sn.kdeplot(data=train_df[(train_df[\"price_doc\"]>1e6) & (train_df[\"price_doc\"]!=2e6) & (train_df[\"price_doc\"]!=3e6)][\"price_doc_log\"], color=\"r\", shade=True,ax=axes[1])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Without those values, the distribution looks smoother. I'll write a method to remove them and call it in a later step."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def remove_abnormal_prices(df):\n",
    "    df.drop(df[(df[\"price_doc\"]<=1e6) | (df[\"price_doc\"]==2e6) | (df[\"price_doc\"]==3e6)].index,\n",
    "              inplace=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fig,axes = plt.subplots(ncols=2)\n",
    "fig.set_size_inches(20, 10)\n",
    "stats.probplot(train_df[\"price_doc_log\"], dist='norm', fit=True, plot=axes[0])\n",
    "stats.probplot(train_df[(train_df[\"price_doc\"]>1e6) & (train_df[\"price_doc\"]!=2e6) & (train_df[\"price_doc\"]!=3e6)][\"price_doc_log\"], dist='norm', fit=True, plot=axes[1])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It seems the log target is approximately normally distributed, with some deviations. The model will be trained on the *log* target to get better predictions and to match the metric.\n",
    "\n",
    "I'll look at how *product_type* relates to *price_doc_log*."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ax = sn.FacetGrid(train_df, col=\"product_type\", size=6)\n",
    "ax.map(sn.kdeplot, \"price_doc_log\", color=\"r\", shade=True)\n",
    "ax.add_legend()\n",
    "ax.set(ylabel='density')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Investment realty prices tend to take round values: 1e6 (one million), 2e6 (two million), and so on."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 3. Visual analysis of the features"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "I'll explore the datatypes of the features in **train_df**."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dataTypeDf = pd.DataFrame(train_df.dtypes.value_counts()).reset_index().rename(columns={\"index\":\"variableType\",0:\"count\"})\n",
    "fig,ax = plt.subplots()\n",
    "fig.set_size_inches(20,5)\n",
    "sn.barplot(data=dataTypeDf,x=\"variableType\",y=\"count\",ax=ax)\n",
    "ax.set(xlabel='Variable Type', ylabel='Count',title=\"Variables Count Across Datatype\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "*int64* and *float64* values are fine for regression. Features marked as *object* are categorical variables; *datetime64[ns]* is the **timestamp** column."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "I'll look for missing values in **train_df**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_na = (train_df.isnull().sum() / len(train_df)) * 100\n",
    "train_na = train_na.drop(train_na[train_na == 0].index).sort_values(ascending=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(12, 8))\n",
    "plt.xticks(rotation='90')\n",
    "sn.barplot(x=train_na.index, y=train_na)\n",
    "ax.set(title='Percent missing data by feature', ylabel='% missing')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Of 292 columns, 51 have missing values. The share of missing values ranges from 0.1% in metro_min_walk to 47.4% in hospital_beds_raion. I'll think about how to deal with them; leaving them intact is an option, though."
   ]
  },
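  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sketch of one alternative to leaving the gaps intact, a simple median-imputation helper. This is a hypothetical option, not a committed preprocessing step; the columns to fill would have to be chosen per feature:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "def impute_median(df, cols):\n",
    "    # fill numeric gaps with the column median -- a simple alternative\n",
    "    # to leaving NaNs for the model to handle natively\n",
    "    for col in cols:\n",
    "        df[col] = df[col].fillna(df[col].median())"
   ]
  },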
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "I need to get some new features derived from timestamp to continue."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df['year'] = train_df['timestamp'].dt.year\n",
    "train_df['month'] = train_df['timestamp'].dt.month\n",
    "\n",
    "test_df['year'] = test_df['timestamp'].dt.year\n",
    "test_df['month'] = test_df['timestamp'].dt.month"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fig,(ax1,ax2) = plt.subplots(ncols=2)\n",
    "fig.set_size_inches(20,5)\n",
    "sn.boxplot(data=train_df,y=\"price_doc\",orient=\"v\",ax=ax1)\n",
    "sn.boxplot(data=train_df,x=\"price_doc\",y=\"year\",orient=\"h\",ax=ax2)\n",
    "\n",
    "fig1,ax3 = plt.subplots()\n",
    "fig1.set_size_inches(20,5)\n",
    "sn.boxplot(data=train_df,x=\"month\",y=\"price_doc\",orient=\"v\",ax=ax3)\n",
    "ax1.set(ylabel='Price Doc', title=\"Box Plot On Price Doc\")\n",
    "ax2.set(xlabel='Price Doc', ylabel='Year',title=\"Box Plot On Price Doc Across Year\")\n",
    "ax3.set(xlabel='Month', ylabel='Price Doc',title=\"Box Plot On Price Doc Across Month\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "stateDf = pd.DataFrame(train_df[~train_df['state'].isnull()]['state'].value_counts()).reset_index().rename(columns={\"index\":\"state\",\"state\":\"count\"})\n",
    "fig,ax = plt.subplots()\n",
    "fig.set_size_inches(20,5)\n",
    "sn.barplot(data=stateDf,x=\"state\",y=\"count\",ax=ax)\n",
    "ax.set(xlabel='State', ylabel='Count',title=\"Count Across state\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "*state* should take discrete values between 1 and 4. There is a 33 in it that is clearly a data entry error;\n",
    "let's just replace it with the value 3."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def correct_state_33(df):\n",
    "    df.loc[df['state'] == 33, 'state'] = 3"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "I'll explore the *build_year* feature."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(12, 8))\n",
    "plt.xticks(rotation='90')\n",
    "ind = train_df[(train_df['build_year'] <= 1691) | (train_df['build_year'] >= 2018)].index\n",
    "by_df = train_df.drop(ind).sort_values(by=['build_year'])\n",
    "sn.countplot(x=by_df['build_year'])\n",
    "ax.set(title='Distribution of build year')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The distribution appears bimodal, with one peak in the early 1970s and another in the past few years."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "buildYearDf = pd.DataFrame(train_df[(train_df['build_year']<1147)|(train_df['build_year']>2030)]['build_year'].value_counts()).reset_index().rename(columns={\"index\":\"build_year\",\"build_year\":\"count\"})\n",
    "fig,ax = plt.subplots()\n",
    "fig.set_size_inches(20,5)\n",
    "sn.barplot(data=buildYearDf,x=\"build_year\",y=\"count\",ax=ax)\n",
    "ax.set(xlabel='Build Year', ylabel='Count',title=\"Count Across build_year\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "*build_year* has some erroneous values. Since it's unclear what they should be, let's replace 20052009 with 2007, 4965 with 1965, 20 with 1920, and 71 with 1971. The values 0, 1, and 3 remain intact."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def clean_build_year(df):\n",
    "    build_year_mode = df['build_year'].mode().iloc[0]\n",
    "    print(\"build_year_mode=%s\"%build_year_mode)\n",
    "\n",
    "    df.loc[df['build_year'] == 20052009, 'build_year'] = 2007\n",
    "    df.loc[df['build_year'] == 4965, 'build_year'] = 1965\n",
    "    df.loc[df['build_year'] == 20, 'build_year'] = 1920\n",
    "    df.loc[df['build_year'] == 71, 'build_year'] = 1971\n",
    "\n",
    "    #df.loc[df['build_year'] < 1147, 'build_year'] = build_year_mode"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now let’s see if build_year and prices are related. Here I group the data by year and take the mean of price_doc."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(12, 6))\n",
    "by_price = by_df.groupby('build_year')[['build_year', 'price_doc']].mean()\n",
    "sn.regplot(x=\"build_year\", y=\"price_doc\", data=by_price, scatter=False, order=3, truncate=True)\n",
    "plt.plot(by_price['build_year'], by_price['price_doc'], color='r')\n",
    "ax.set(title='Mean price by year of build')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The relationship appears somewhat steady over time, especially after 1960. There is some volatility in the earlier years, but this is likely not a real effect, simply an artifact of the sparseness of observations before about 1950."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are internal home characteristics in **train_df**. I'll build a correlation matrix of them together with the *price_doc* column."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "internal_feats = ['full_sq', 'life_sq', 'floor', 'max_floor', 'build_year', 'num_room', 'kitch_sq', 'state', 'price_doc']\n",
    "corrmat = train_df[internal_feats].corr()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(10, 7))\n",
    "plt.xticks(rotation='90')\n",
    "sn.heatmap(corrmat, square=True, linewidths=.5, annot=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Area of Home and Number of Rooms**\n",
    "\n",
    "*full_sq* is correlated with price. I'll take a closer look."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(10, 7))\n",
    "plt.scatter(x=train_df['full_sq'], y=train_df['price_doc'], c='r')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There is an outlier in *full_sq*. It's not clear whether this is an entry error; I'll remove it. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def remove_life_sq_outlier(df):\n",
    "    df.drop(df[df[\"life_sq\"] > 5000].index, inplace=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(10, 7))\n",
    "ind = train_df[train_df['full_sq'] > 2000].index\n",
    "plt.scatter(x=train_df.drop(ind)['full_sq'], y=train_df.drop(ind)['price_doc'], c='r', alpha=0.5)\n",
    "ax.set(title='Price by area in sq meters', xlabel='Area', ylabel='Price')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The feature *full_sq* is defined in the data dictionary as ‘total area in square meters, including loggias, balconies and other non-residential areas’ and the *life_sq* is defined as ‘living area in square meters, excluding loggias, balconies and other non-residential areas.’ So it should be the case that *life_sq* is always less than *full_sq*."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "(train_df['life_sq'] > train_df['full_sq']).sum()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are 37 observations where *life_sq* is greater than *full_sq*."
   ]
  },
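  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One possible treatment, sketched as an assumption rather than a committed fix: since *life_sq* should never exceed *full_sq*, mark the offending *life_sq* values as missing instead of guessing a correction."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "def clean_life_sq(df):\n",
    "    # where living area exceeds total area the value is unreliable --\n",
    "    # blank it out rather than guess (one possible treatment, not the only one)\n",
    "    df.loc[df['life_sq'] > df['full_sq'], 'life_sq'] = np.nan"
   ]
  },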
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "I'll explore the *num_room* feature."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(10, 7))\n",
    "sn.countplot(x=train_df['num_room'])\n",
    "ax.set(title='Distribution of room count', xlabel='num_room')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A vast majority of the apartments have one, two or three rooms."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Timestamp**\n",
    "\n",
    "How does the sale price vary over the time horizon of the data set? Here I group by day, calculate the median price for each day, and plot it over time."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(12, 6))\n",
    "ts_df = train_df.groupby('timestamp')[['price_doc']].median()\n",
    "plt.plot(ts_df.index, ts_df['price_doc'], color='r')\n",
    "ax.set(title='Daily median price over time')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And to compare with the above plot, here is the volume of sales over the same time."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import matplotlib.dates as mdates\n",
    "years = mdates.YearLocator()   # every year\n",
    "yearsFmt = mdates.DateFormatter('%Y')\n",
    "ts_vc = train_df['timestamp'].value_counts()\n",
    "f, ax = plt.subplots(figsize=(12, 6))\n",
    "plt.bar(ts_vc.index, ts_vc.values)\n",
    "ax.xaxis.set_major_locator(years)\n",
    "ax.xaxis.set_major_formatter(yearsFmt)\n",
    "ax.set(title='Sales volume over time', ylabel='Number of transactions')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Is there a seasonal component to home prices in the course of a year?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(12, 8))\n",
    "ts_df = train_df.groupby(by=[train_df.timestamp.dt.month])[['price_doc']].median()\n",
    "plt.plot(ts_df.index, ts_df, color='r')\n",
    "ax.set(title='Price by month of year')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Home State/Material**\n",
    "\n",
    "How do homes vary in price by condition?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(12, 8))\n",
    "ind = train_df[train_df['state'].isnull()].index\n",
    "train_df['price_doc_log10'] = np.log10(train_df['price_doc'])\n",
    "sn.violinplot(x=\"state\", y=\"price_doc_log10\", data=train_df.drop(ind), inner=\"box\")\n",
    "# sns.swarmplot(x=\"state\", y=\"price_doc_log10\", data=train_df.dropna(), color=\"w\", alpha=.2);\n",
    "ax.set(title='Log10 of median price by state of home', xlabel='state', ylabel='log10(price)')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It’s hard to tell from the plot, but it does appear that state 4 has the highest sale price on average; significantly fewer homes fall into this category, however. I'll check this assumption:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df.drop(ind).groupby('state')['price_doc'].mean()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "State 4 has the highest average price by far, followed by state 3. States 1 and 2 are close."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What about the material feature?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(12, 8))\n",
    "ind = train_df[train_df['material'].isnull()].index\n",
    "sn.violinplot(x=\"material\", y=\"price_doc_log\", data=train_df.drop(ind), inner=\"box\")\n",
    "# sns.swarmplot(x=\"state\", y=\"price_doc_log10\", data=train_df.dropna(), color=\"w\", alpha=.2);\n",
    "ax.set(title='Distribution of price by build material', xlabel='material', ylabel='log(price)')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It’s unclear what these values mean, since this feature is not described in the data dictionary. Material 1 is by far the most common, and only one home is classified as material 3. How does the median price compare among these six materials?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df.drop(ind).groupby('material')['price_doc'].median()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Floor of Home**\n",
    "\n",
    "How does the floor feature compare with price? According to the correlation plot from earlier, there is a moderate positive correlation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(12, 8))\n",
    "plt.scatter(x=train_df['floor'], y=train_df['price_doc_log'], c='r', alpha=0.4)\n",
    "sn.regplot(x=\"floor\", y=\"price_doc_log\", data=train_df, scatter=False, truncate=True)\n",
    "ax.set(title='Price by floor of home', xlabel='floor', ylabel='log(price)')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "On the whole, price seems to rise with the floor, although the effect is fairly small. Along the same lines, I wonder whether the height of the building is correlated with price. We’ll look at this using max_floor as a proxy for height."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(12, 8))\n",
    "plt.scatter(x=train_df['max_floor'], y=train_df['price_doc_log'], c='r', alpha=0.4)\n",
    "sn.regplot(x=\"max_floor\", y=\"price_doc_log\", data=train_df, scatter=False, truncate=True)\n",
    "ax.set(title='Price by max floor of home', xlabel='max_floor', ylabel='log(price)')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Again, a small positive correlation. This effect, however, is likely confounded by the fact that the urban core has both more expensive real estate and taller buildings, so the height of the building alone is likely not what is determining price here."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(12, 8))\n",
    "plt.scatter(x=train_df['floor'], y=train_df['max_floor'], c='r', alpha=0.4)\n",
    "plt.plot([0, 80], [0, 80], color='.5')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The observations below the grey identity line have a floor greater than the number of floors in the building. That’s not good. How many are there?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df.loc[train_df['max_floor'] < train_df['floor'], ['id', 'floor','max_floor']].head(20)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are 1,493 observations where this is the case."
   ]
  },
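  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A hedged sketch of one way to handle these inconsistent pairs (an assumption, not a committed fix): where *floor* exceeds *max_floor*, blank out *max_floor* and let the model treat it as missing."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "def clean_max_floor(df):\n",
    "    # a floor above the building's top floor is inconsistent; one option\n",
    "    # is to mark max_floor as missing rather than guess which value is wrong\n",
    "    df.loc[df['max_floor'] < df['floor'], 'max_floor'] = np.nan"
   ]
  },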
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Demographic Characteristics**\n",
    "\n",
    "Now let’s move beyond the internal home characteristics and take a look at some of the basic demographic and geographic characteristics. First, the correlation plot."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "demo_feats = ['area_m', 'raion_popul', 'full_all', 'male_f', 'female_f', 'young_all', 'young_female', \n",
    "             'work_all', 'work_male', 'work_female', 'price_doc']\n",
    "corrmat = train_df[demo_feats].corr()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(10, 7))\n",
    "plt.xticks(rotation='90')\n",
    "sn.heatmap(corrmat, square=True, linewidths=.5, annot=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Price is correlated with most of these, but the associations are fairly weak. First I’ll check out the sub_area feature. According to the data dictionary, this is the district that the home is located in."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df['sub_area'].unique().shape[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "scrolled": true
   },
   "source": [
    "I'll count how many sales transactions there are in each district:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(10, 20))\n",
    "sa_vc = train_df['sub_area'].value_counts()\n",
    "sa_vc = pd.DataFrame({'sub_area':sa_vc.index, 'count': sa_vc.values})\n",
    "ax = sn.barplot(x=\"count\", y=\"sub_area\", data=sa_vc, orient=\"h\")\n",
    "ax.set(title='Number of Transactions by District')\n",
    "f.tight_layout()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Poselenie Sosenskoe, Nekrasovka, and Poselenie Vnukovskoe had the most transactions in the data set by a fairly large margin."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "I'll check whether there is a relationship between the share of the population that is of working age and price."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df['work_share'] = train_df['work_all'] / train_df['raion_popul']\n",
    "f, ax = plt.subplots(figsize=(12, 6))\n",
    "sa_price = train_df.groupby('sub_area')[['work_share', 'price_doc']].mean()\n",
    "sn.regplot(x=\"work_share\", y=\"price_doc\", data=sa_price, scatter=True, order=4, truncate=True)\n",
    "ax.set(title='District mean home price by share of working age population')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There does not appear to be a relationship between the mean home price in a district and the district’s share of working age population."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**School Characteristics**\n",
    "\n",
    "I'll see whether the price depends on the school characteristics of the district."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "school_feats = ['children_preschool', 'preschool_quota', 'preschool_education_centers_raion', 'children_school', \n",
    "                'school_quota', 'school_education_centers_raion', 'school_education_centers_top_20_raion', \n",
    "                'university_top_20_raion', 'additional_education_raion', 'additional_education_km', 'university_km', 'price_doc']\n",
    "corrmat = train_df[school_feats].corr()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(10, 7))\n",
    "plt.xticks(rotation='90')\n",
    "sn.heatmap(corrmat, square=True, linewidths=.5, annot=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There is little to no correlation between price and the school variables. The school variables, however, are highly correlated with each other, indicating that we would not want to use all of them in a linear regression model due to multicollinearity."
   ]
  },
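  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One common guard against multicollinearity is to drop one feature from each highly correlated pair. A minimal sketch on synthetic data (the 0.9 threshold is an arbitrary choice for illustration):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "x = np.arange(200.0)\n",
    "demo = pd.DataFrame({'a': x, 'b': 2 * x + 1, 'c': np.cos(2 * np.pi * x / 10)})\n",
    "corr = demo.corr().abs()\n",
    "# keep only the upper triangle so each pair is considered once\n",
    "upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))\n",
    "to_drop = [col for col in upper.columns if (upper[col] > 0.9).any()]\n",
    "print(to_drop)  # 'b' duplicates 'a' almost exactly\n",
    "```"
   ]
  },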
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The one variable that does show some correlation is *university_top_20_raion*. I'll look at it:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df['university_top_20_raion'].unique()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(12, 8))\n",
    "sn.stripplot(x=\"university_top_20_raion\", y=\"price_doc\", data=train_df, jitter=True, alpha=.2, color=\".8\");\n",
    "sn.boxplot(x=\"university_top_20_raion\", y=\"price_doc\", data=train_df)\n",
    "ax.set(title='Distribution of home price by # of top universities in Raion', xlabel='university_top_20_raion', \n",
    "       ylabel='price_doc')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Homes in a raion with 3 top 20 universities have the highest median home price; however, it is fairly close among 0, 1, and 2. There are very few homes with 3 top universities in their raion."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Cultural/Recreational Characteristics**\n",
    "\n",
    "These features may correlate with price."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cult_feats = ['sport_objects_raion', 'culture_objects_top_25_raion', 'shopping_centers_raion', 'park_km', 'fitness_km', \n",
    "                'swim_pool_km', 'ice_rink_km','stadium_km', 'basketball_km', 'shopping_centers_km', 'big_church_km',\n",
    "                'church_synagogue_km', 'mosque_km', 'theater_km', 'museum_km', 'exhibition_km', 'catering_km', 'price_doc']\n",
    "corrmat = train_df[cult_feats].corr()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(12, 7))\n",
    "plt.xticks(rotation='90')\n",
    "sn.heatmap(corrmat, square=True, linewidths=.5, annot=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are weak correlations between price and many of these variables. There is a small positive correlation between price and the number of ‘sports objects’ in a raion, as well as between price and the number of shopping centers. As expected, there is also a negative correlation between price and the distance to the nearest of these cultural and recreational amenities."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "I'll look at *sport_objects*."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(10, 6))\n",
    "so_price = train_df.groupby('sub_area')[['sport_objects_raion', 'price_doc']].median()\n",
    "sn.regplot(x=\"sport_objects_raion\", y=\"price_doc\", data=so_price, scatter=True, truncate=True)\n",
    "ax.set(title='Median Raion home price by # of sports objects in Raion')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There is definitely a positive correlation. This could be a good candidate feature to include in a model."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "I'll do the same for culture objects."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(10, 6))\n",
    "co_price = train_df.groupby('sub_area')[['culture_objects_top_25_raion', 'price_doc']].median()\n",
    "sn.regplot(x=\"culture_objects_top_25_raion\", y=\"price_doc\", data=co_price, scatter=True, truncate=True)\n",
    "ax.set(title='Median Raion home price by # of culture objects in Raion')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can’t get much information out of this due to the large number of raions that have zero culture objects. What if we just check whether there is a difference between raions with and without a top 25 culture object?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df.groupby('culture_objects_top_25')['price_doc'].median()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So raions that have a top 25 cultural object have a median home sale price that is higher by 1.2 million."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "How is the distance to the nearest park related to home price?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(10, 6))\n",
    "sn.regplot(x=\"park_km\", y=\"price_doc\", data=train_df, scatter=True, truncate=True, scatter_kws={'color': 'r', 'alpha': .2})\n",
    "ax.set(title='Home price by distance to nearest park')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Infrastructure Features**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "inf_feats = ['nuclear_reactor_km', 'thermal_power_plant_km', 'power_transmission_line_km', 'incineration_km',\n",
    "                'water_treatment_km', 'railroad_station_walk_km', 'railroad_station_walk_min', \n",
    "                'railroad_station_avto_km', 'railroad_station_avto_min', 'public_transport_station_km', \n",
    "                'public_transport_station_min_walk', 'water_km', 'mkad_km', 'ttk_km', 'sadovoe_km','bulvar_ring_km',\n",
    "                'kremlin_km', 'price_doc']\n",
    "corrmat = train_df[inf_feats].corr()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(12, 7))\n",
    "plt.xticks(rotation='90')\n",
    "sn.heatmap(corrmat, square=True, linewidths=.5, annot=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(10, 6))\n",
    "sn.regplot(x=\"kremlin_km\", y=\"price_doc\", data=train_df, scatter=True, truncate=True, scatter_kws={'color': 'r', 'alpha': .2})\n",
    "ax.set(title='Home price by distance to Kremlin')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There is a negative correlation between distance to the Kremlin and price."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Macroeconomic features**\n",
    "\n",
    "Now I'll examine macroeconomic features. I'll join **train_df** and **macro_df** on timestamp, treating **macro_df** as optional (a left join)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_macro_df = pd.merge(train_df, macro_df, on='timestamp', how='left')\n",
    "print(train_macro_df.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Missing values:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_macro_na = (train_macro_df[macro_df.columns].isnull().sum() / len(train_macro_df)) * 100\n",
    "train_macro_na = train_macro_na.drop(train_macro_na[train_macro_na == 0].index).sort_values(ascending=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One of the columns (*provision_retail_space_modern_sqm*) is missing entirely, and another (*provision_retail_space_sqm*) is nearly empty. Several other columns share exactly the same share of missing values; these gaps likely correspond to weekends or other intervals when the data was not collected."
   ]
  },
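  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The weekend guess can be checked by looking at the day of week of the missing rows. A minimal sketch on synthetic data, with a hypothetical column *some_macro_col* that is collected only on business days:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "dates = pd.date_range('2014-01-01', periods=28, freq='D')\n",
    "demo = pd.DataFrame({'timestamp': dates,\n",
    "                     'some_macro_col': np.where(dates.dayofweek < 5, 1.0, np.nan)})\n",
    "missing_dow = demo.loc[demo['some_macro_col'].isnull(), 'timestamp'].dt.dayofweek\n",
    "print(sorted(missing_dow.unique()))  # only 5 and 6, i.e. Saturday and Sunday\n",
    "```"
   ]
  },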
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(12, 8))\n",
    "plt.xticks(rotation='90')\n",
    "sn.barplot(x=train_macro_na.index, y=train_macro_na)\n",
    "ax.set(title='Percent missing data by feature', ylabel='% missing')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The next five heatmaps show correlations among the **macro_df** features."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Main trading features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "mcr_trade_feats = ['oil_urals',\n",
    "                 'gdp_quart',\n",
    "                 'gdp_quart_growth',\n",
    "                 'cpi',\n",
    "                 'ppi',\n",
    "                 'gdp_deflator',\n",
    "                 'balance_trade',\n",
    "                 'balance_trade_growth',\n",
    "                 'usdrub',\n",
    "                 'eurrub',\n",
    "                 'brent',\n",
    "                 'net_capital_export',\n",
    "                 'gdp_annual',\n",
    "                 'gdp_annual_growth',\n",
    "                 'rts',\n",
    "                 'micex',\n",
    "                 'micex_rgbi_tr',\n",
    "                 'micex_cbi_tr',\n",
    "                 'grp',\n",
    "                 'grp_growth',\n",
    "                 'price_doc']\n",
    "corrmat = train_macro_df[mcr_trade_feats].corr()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(12, 12))\n",
    "plt.xticks(rotation='90')\n",
    "sn.heatmap(corrmat, square=True, linewidths=.5, annot=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Market features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "mcr_market_feats = ['average_provision_of_build_contract',\n",
    "             'average_provision_of_build_contract_moscow',\n",
    "             'deposits_value',\n",
    "             'deposits_growth',\n",
    "             'deposits_rate',\n",
    "             'mortgage_value',\n",
    "             'mortgage_growth',\n",
    "             'mortgage_rate',\n",
    "             'income_per_cap',\n",
    "             'real_dispos_income_per_cap_growth',\n",
    "             'salary',\n",
    "             'salary_growth',\n",
    "             'fixed_basket',\n",
    "             'retail_trade_turnover',\n",
    "             'retail_trade_turnover_per_cap',\n",
    "             'retail_trade_turnover_growth',\n",
    "             'labor_force',\n",
    "             'unemployment',\n",
    "             'employment',\n",
    "             'pop_natural_increase',\n",
    "             'pop_migration',\n",
    "             'pop_total_inc',\n",
    "             'childbirth',\n",
    "             'mortality',\n",
    "             'price_doc']\n",
    "corrmat = train_macro_df[mcr_market_feats].corr()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(12, 12))\n",
    "plt.xticks(rotation='90')\n",
    "sn.heatmap(corrmat, square=True, linewidths=.5, annot=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Investment features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "mcr_invest_feats = ['invest_fixed_capital_per_cap',\n",
    " 'invest_fixed_assets',\n",
    " 'profitable_enterpr_share',\n",
    " 'unprofitable_enterpr_share',\n",
    " 'share_own_revenues',\n",
    " 'overdue_wages_per_cap',\n",
    " 'fin_res_per_cap',\n",
    " 'marriages_per_1000_cap',\n",
    " 'divorce_rate',\n",
    " 'construction_value',\n",
    " 'invest_fixed_assets_phys',\n",
    " 'price_doc']\n",
    "corrmat = train_macro_df[mcr_invest_feats].corr()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(7, 7))\n",
    "plt.xticks(rotation='90')\n",
    "sn.heatmap(corrmat, square=True, linewidths=.5, annot=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Rental features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "mcr_rent_feats = ['housing_fund_sqm',\n",
    " 'lodging_sqm_per_cap',\n",
    " 'water_pipes_share',\n",
    " 'baths_share',\n",
    " 'sewerage_share',\n",
    " 'gas_share',\n",
    " 'hot_water_share',\n",
    " 'electric_stove_share',\n",
    " 'heating_share',\n",
    " 'old_house_share',\n",
    " 'average_life_exp',\n",
    " 'infant_mortarity_per_1000_cap',\n",
    " 'perinatal_mort_per_1000_cap',\n",
    " 'incidence_population',\n",
    " 'rent_price_4+room_bus',\n",
    " 'rent_price_3room_bus',\n",
    " 'rent_price_2room_bus',\n",
    " 'rent_price_1room_bus',\n",
    " 'rent_price_3room_eco',\n",
    " 'rent_price_2room_eco',\n",
    " 'rent_price_1room_eco',\n",
    " 'apartment_build',\n",
    " 'apartment_fund_sqm',\n",
    " 'price_doc']\n",
    "corrmat = train_macro_df[mcr_rent_feats].corr()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(12, 12))\n",
    "plt.xticks(rotation='90')\n",
    "sn.heatmap(corrmat, square=True, linewidths=.5, annot=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Educational and cultural features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "mcr_edc_cult_feats = ['load_of_teachers_preschool_per_teacher',\n",
    "                 'child_on_acc_pre_school',\n",
    "                 'load_of_teachers_school_per_teacher',\n",
    "                 'students_state_oneshift',\n",
    "                 'modern_education_share',\n",
    "                 'old_education_build_share',\n",
    "                 'provision_doctors',\n",
    "                 'provision_nurse',\n",
    "                 'load_on_doctors',\n",
    "                 'power_clinics',\n",
    "                 'hospital_beds_available_per_cap',\n",
    "                 'hospital_bed_occupancy_per_year',\n",
    "                 'provision_retail_space_sqm',\n",
    "                 'provision_retail_space_modern_sqm',\n",
    "                 'retail_trade_turnover_per_cap',\n",
    "                 'turnover_catering_per_cap',\n",
    "                 'theaters_viewers_per_1000_cap',\n",
    "                 'seats_theather_rfmin_per_100000_cap',\n",
    "                 'museum_visitis_per_100_cap',\n",
    "                 'bandwidth_sports',\n",
    "                 'population_reg_sports_share',\n",
    "                 'students_reg_sports_share',\n",
    "                 'price_doc']\n",
    "corrmat = train_macro_df[mcr_edc_cult_feats].corr()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(12, 12))\n",
    "plt.xticks(rotation='90')\n",
    "sn.heatmap(corrmat, square=True, linewidths=.5, annot=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Final **macro_df** feature selection\n",
    "\n",
    "On their own, the **macro_df** features may contribute little. I'll keep only the columns whose absolute correlation with the target is at least 0.1. There will obviously be some multicollinearity, which I can detect later in order to drop redundant features."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "macro_cols = ['cpi',\n",
    "            'ppi',\n",
    "            'gdp_deflator',\n",
    "            'usdrub',\n",
    "            'eurrub',\n",
    "            'gdp_annual',\n",
    "            'gdp_annual_growth',\n",
    "            'rts',\n",
    "            'micex_rgbi_tr',\n",
    "            'micex_cbi_tr',\n",
    "            'grp',\n",
    "            'deposits_value',\n",
    "            'salary',\n",
    "            'fixed_basket',\n",
    "            'retail_trade_turnover',\n",
    "            'retail_trade_turnover_per_cap',\n",
    "            'labor_force',\n",
    "            'employment',\n",
    "            'invest_fixed_capital_per_cap',\n",
    "            'invest_fixed_assets',\n",
    "            'profitable_enterpr_share',\n",
    "            'unprofitable_enterpr_share',\n",
    "            'fin_res_per_cap',\n",
    "            'construction_value',\n",
    "            'average_life_exp',\n",
    "            'incidence_population',\n",
    "            'load_of_teachers_school_per_teacher',\n",
    "            'modern_education_share',\n",
    "            'old_education_build_share',\n",
    "            'provision_doctors',\n",
    "            'provision_nurse',\n",
    "            'load_on_doctors',\n",
    "            'hospital_beds_available_per_cap',\n",
    "            'hospital_bed_occupancy_per_year',\n",
    "            'turnover_catering_per_cap',\n",
    "            'bandwidth_sports']\n",
    "del train_macro_df"
   ]
  },
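  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The same selection can be done programmatically instead of typing the list by hand. A minimal sketch of the absolute-correlation >= 0.1 filter, on synthetic data since **train_macro_df** has been deleted above:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "x = np.arange(500.0)\n",
    "demo = pd.DataFrame({'strong': x + 3.0,\n",
    "                     'weak': np.cos(2 * np.pi * x / 10),\n",
    "                     'price_doc': x})\n",
    "corr = demo.corr()['price_doc'].drop('price_doc')\n",
    "selected = corr[corr.abs() >= 0.1].index.tolist()\n",
    "print(selected)  # only 'strong' passes the threshold\n",
    "```"
   ]
  },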
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now I'll compare the train and test sets."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, I'll look for missing values in **test_df**."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "test_na = (test_df.isnull().sum() / len(test_df)) * 100\n",
    "test_na = test_na.drop(test_na[test_na == 0].index).sort_values(ascending=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(figsize=(12, 8))\n",
    "plt.xticks(rotation='90')\n",
    "sn.barplot(x=test_na.index, y=test_na)\n",
    "ax.set(title='Percent missing data by feature', ylabel='% missing')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are some missing values; we can compare them with those in **train_df**. Next, I'll compare feature distributions between train and test."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 8), sharey=True)\n",
    "pd.DataFrame(np.log1p(train_df['full_sq'])).plot.kde(ax=ax[0])\n",
    "pd.DataFrame(np.log1p(test_df['full_sq'])).plot.kde(ax=ax[1])\n",
    "ax[0].set(title='train', xlabel='full_sq_log')\n",
    "ax[1].set(title='test', xlabel='full_sq_log')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The *full_sq* distribution shape in **train_df** looks like the one in **test_df**."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 8), sharey=True)\n",
    "pd.DataFrame(np.log1p(train_df['life_sq'])).plot.kde(ax=ax[0])\n",
    "pd.DataFrame(np.log1p(test_df['life_sq'])).plot.kde(ax=ax[1])\n",
    "ax[0].set(title='train', xlabel='life_sq_log')\n",
    "ax[1].set(title='test', xlabel='life_sq_log')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The *life_sq* distribution shape in **train_df** looks like the one in **test_df**."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 8), sharey=True)\n",
    "pd.DataFrame(np.log1p(train_df['kitch_sq'])).plot.kde(ax=ax[0])\n",
    "pd.DataFrame(np.log1p(test_df['kitch_sq'])).plot.kde(ax=ax[1])\n",
    "ax[0].set(title='train', xlabel='kitch_sq_log')\n",
    "ax[1].set(title='test', xlabel='kitch_sq_log')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The *kitch_sq* distribution shape in **train_df** looks like the one in **test_df**."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 8), sharey=True)\n",
    "sn.countplot(x=test_df['num_room'], ax=ax[0])\n",
    "sn.countplot(x=train_df['num_room'], ax=ax[1])\n",
    "ax[0].set(title='test', xlabel='num_room')\n",
    "ax[1].set(title='train', xlabel='num_room')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The proportions of *num_room* values are comparable."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 8), sharey=True)\n",
    "pd.DataFrame(np.log1p(train_df['cafe_count_2000'])).plot.kde(ax=ax[0])\n",
    "pd.DataFrame(np.log1p(test_df['cafe_count_2000'])).plot.kde(ax=ax[1])\n",
    "ax[0].set(title='train', xlabel='cafe_count_2000_log')\n",
    "ax[1].set(title='test', xlabel='cafe_count_2000_log')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The *cafe_count_2000* distribution shape in **train_df** looks like the one in **test_df**."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 8), sharey=True)\n",
    "pd.DataFrame(np.log1p(train_df['sport_count_3000'])).plot.kde(ax=ax[0])\n",
    "pd.DataFrame(np.log1p(test_df['sport_count_3000'])).plot.kde(ax=ax[1])\n",
    "ax[0].set(title='train', xlabel='sport_count_3000_log')\n",
    "ax[1].set(title='test', xlabel='sport_count_3000_log')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The *sport_count_3000* distribution shape in **train_df** has some differences from the one in **test_df**, but they are still comparable."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "years = mdates.YearLocator()   # every year\n",
    "yearsFmt = mdates.DateFormatter('%Y')\n",
    "ts_vc_train = train_df['timestamp'].value_counts()\n",
    "ts_vc_test = test_df['timestamp'].value_counts()\n",
    "f, ax = plt.subplots(figsize=(12, 6))\n",
    "# matplotlib >= 2.0 renamed bar()'s first argument from 'left' to 'x'\n",
    "plt.bar(x=ts_vc_train.index, height=ts_vc_train)\n",
    "plt.bar(x=ts_vc_test.index, height=ts_vc_test)\n",
    "ax.xaxis.set_major_locator(years)\n",
    "ax.xaxis.set_major_formatter(yearsFmt)\n",
    "ax.set(title='Number of transactions by day', ylabel='count')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The number of transactions per day naturally varies, but there appears to be more here than just seasonality and a general trend."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 8), sharey=True)\n",
    "sn.countplot(x=test_df['product_type'], ax=ax[0])\n",
    "sn.countplot(x=train_df['product_type'], ax=ax[1])\n",
    "ax[0].set(title='test', xlabel='product_type')\n",
    "ax[1].set(title='train', xlabel='product_type')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The proportions of *product_type* values are comparable."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 8), sharey=True)\n",
    "sn.countplot(x=test_df['state'], ax=ax[0])\n",
    "sn.countplot(x=train_df['state'], ax=ax[1])\n",
    "ax[0].set(title='test', xlabel='state')\n",
    "ax[1].set(title='train', xlabel='state')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The proportions of *state* values are comparable."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "f, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 8), sharey=True)\n",
    "sn.countplot(x=test_df['material'], ax=ax[0])\n",
    "sn.countplot(x=train_df['material'], ax=ax[1])\n",
    "ax[0].set(title='test', xlabel='material')\n",
    "ax[1].set(title='train', xlabel='material')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The proportions of *material* values are comparable.\n",
    "\n",
    "It seems that the data in **train_df** and **test_df** are comparable, so we should get results of decent quality."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 4. Patterns, insights, pecularities of data "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The price of Moscow realty depends on many factors, but I consider total area (*full_sq*) the most important one. Next is probably the number of rooms (*num_room*), and then the district (raion), though the latter may simply reflect the distance to the center, i.e. the distance to the Kremlin.\n",
    "\n",
    "Many assumptions were made during the exploratory data analysis, so they are already written up above."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 5. Data preprocessing"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "LightGBM does not require preprocessing of missing values in its input, and it handles the data quite fast; that is why I chose it earlier as the base algorithm.\n",
    "\n",
    "I'll process **train_df** to remove some outliers and clean the data. I will extract the logarithmic target variable *price_doc_log* from **train_df** into a target array and *id* from **test_df** to construct the submission file later. Then I'll drop unneeded columns, concatenate **train_df** and **test_df**, and merge in **macro_df** to get access to the selected *macro_cols*."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "remove_abnormal_prices(train_df)\n",
    "clean_build_year(train_df)\n",
    "correct_state_33(train_df)\n",
    "remove_life_sq_outlier(train_df)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_train = train_df['price_doc_log'].values\n",
    "id_test = test_df['id']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df.drop(['id', 'price_doc', 'price_doc_log','price_doc_log10'], axis=1, inplace=True)\n",
    "test_df.drop(['id'], axis=1, inplace=True)\n",
    "\n",
    "# Build full_df = (train_df+test_df).join(macro_df)\n",
    "idx_split = len(train_df)\n",
    "full_df = pd.concat([train_df, test_df],sort=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "full_df = pd.merge_ordered(full_df, macro_df[['timestamp']+macro_cols], on='timestamp', how='left')\n",
    "print(full_df.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 6. Feature engineering and description "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 6.1 Creating new features\n",
    "\n",
    "I'll add basic timestamp-derived features: *year*, *month* and *day of week*. \n",
    "\n",
    "Then I'll compute the number of realty deals in a given month (*month_year_cnt*) and week (*week_year_cnt*): the price of a house could be affected by the supply of other houses on the market in the same period, so counting transactions per period might help.\n",
    "\n",
    "After that, I'll compute the ratio of the floor to the total number of floors in the building (*rel_floor*) and the ratio of kitchen area to total area (*rel_kitch_sq*). Finally, the *timestamp* column will be dropped to avoid overfitting.\n",
    "\n",
    "Since schools generally play an important role in house hunting, I'll also create some school-related variables."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Add year, month and day-of-week\n",
    "full_df['year'] = full_df.timestamp.dt.year\n",
    "full_df['month'] = full_df.timestamp.dt.month\n",
    "full_df['dow'] = full_df.timestamp.dt.dayofweek\n",
    "\n",
    "# Add month-year\n",
    "month_year = (full_df.timestamp.dt.month + full_df.timestamp.dt.year * 100)\n",
    "month_year_cnt_map = month_year.value_counts().to_dict()\n",
    "full_df['month_year_cnt'] = month_year.map(month_year_cnt_map)\n",
    "\n",
    "# Add week-year count\n",
    "week_year = (full_df.timestamp.dt.weekofyear + full_df.timestamp.dt.year * 100)\n",
    "week_year_cnt_map = week_year.value_counts().to_dict()\n",
    "full_df['week_year_cnt'] = week_year.map(week_year_cnt_map)\n",
    "\n",
    "# Other feature engineering\n",
    "full_df['rel_floor'] = full_df['floor'] / full_df['max_floor'].astype(float)\n",
    "full_df['rel_kitch_sq'] = full_df['kitch_sq'] / full_df['full_sq'].astype(float)\n",
    "full_df[\"extra_sq\"] = full_df[\"full_sq\"] - full_df[\"life_sq\"]\n",
    "full_df[\"floor_from_top\"] = full_df[\"max_floor\"] - full_df[\"floor\"]\n",
    "\n",
    "full_df[\"ratio_preschool\"] = full_df[\"children_preschool\"] / full_df[\"preschool_quota\"].astype(\"float\")\n",
    "full_df[\"ratio_school\"] = full_df[\"children_school\"] / full_df[\"school_quota\"].astype(\"float\")\n",
    "\n",
    "# Remove timestamp column (may overfit the model in train)\n",
    "full_df.drop(['timestamp'], axis=1, inplace=True)"
   ]
  },
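  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The *value_counts().to_dict()* + *map* pattern above simply replaces each value by its frequency. A tiny self-contained illustration (the data is made up for the example):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "month_year = pd.Series([201101, 201101, 201102])\n",
    "cnt_map = month_year.value_counts().to_dict()\n",
    "print(month_year.map(cnt_map).tolist())  # [2, 2, 1]\n",
    "```"
   ]
  },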
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 6.2 Deal with categorical values\n",
    "\n",
    "I need to create distinct datasets with numeric values and object type values. **obj_df** will be processed to be fit to LightGBM, than I'll concatenate datasets again."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "numeric_df = full_df.select_dtypes(exclude=['object'])\n",
    "obj_df = full_df.select_dtypes(include=['object']).copy()\n",
    "for c in obj_df:\n",
    "    obj_df[c] = pd.factorize(obj_df[c])[0]\n",
    "\n",
    "values_df = pd.concat([numeric_df, obj_df], axis=1)\n",
    "del numeric_df, obj_df"
   ]
  },
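  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick illustration of what *pd.factorize* does (the toy values are made up for the example): it maps each distinct value to an integer code, and missing values get the code -1, which LightGBM can treat as just another number:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "codes, uniques = pd.factorize(pd.Series(['yes', 'no', np.nan, 'yes']))\n",
    "print(codes.tolist())  # [0, 1, -1, 0]\n",
    "```"
   ]
  },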
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 6.3 Create a validation set, with last 20% of data\n",
    "\n",
    "I'll provide hold-on dataset to estimate model after CV, but before submitting results. It's necessary operation, and data leakage should be evaded."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "idx_val = int(idx_split * 0.2)\n",
    "\n",
    "X_train = values_df[:idx_split]\n",
    "X_train_part = values_df[:idx_split-idx_val]\n",
    "X_valid = values_df[idx_split-idx_val:idx_split]\n",
    "y_train_part = y_train[:-idx_val]\n",
    "y_valid = y_train[-idx_val:]\n",
    "\n",
    "X_test = values_df[idx_split:]\n",
    "\n",
    "columns_df = values_df.columns\n",
    "\n",
    "print('X_train shape is', X_train.shape)\n",
    "print('X_train_part shape is', X_train_part.shape)\n",
    "print('y_train_part shape is', y_train_part.shape)\n",
    "print('X_valid shape is', X_valid.shape)\n",
    "print('y_valid shape is', y_valid.shape)\n",
    "print('X_test shape is', X_test.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 6.4 LightGBM datasets preparing"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lgb_x_train_part = lgb.Dataset(X_train_part, \n",
    "                               label=y_train_part)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lgb_x_valid = lgb.Dataset(X_valid, \n",
    "                          label=y_valid)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 7. Cross-validation, hyperparameter tuning"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "I'll set random seed value = **111**, initial iterations (rounds = 1500).\n",
    "\n",
    "Also I want to emphasize Cross-validation role. I encountered a discussion about CV and here is important statement:\n",
    "\n",
    "Mykhailo Lisovyi (https://stackoverflow.com/users/9640384/mykhailo-lisovyi):\n",
    "\n",
    "\"In general, the purpose of CV in NOT to do hyperparameter optimisation. The purpose is to evaluate performance of model-building procedure.\n",
    "\n",
    "A basic train/test split is conceptually identical to a 1-fold CV (with a custom size of the split in contrast to the 1/K train size in the k-fold CV). The advantage of doing more splits (i.e. k>1 CV) is to get more information about the estimate of generalisation error. There is more info in a sense of getting the error + stat uncertainty. There is an excellent discussion(https://stats.stackexchange.com/questions/244907/how-to-get-hyper-parameters-in-nested-cross-validation?rq=1) on CrossValidated (start with the links added to the question, which cover the same question, but formulated in a different way). It covers nested cross validation and is absolutely not straightforward. But if you will wrap your head around the concept in general, this will help you in various non-trivial situations. The idea that you have to take away is: **The purpose of CV is to evaluate performance of model-building procedure.**\n",
    "\n",
    "Keeping that idea in mind, how does one approach hyperparameter estimation in general (not only in LightGBM)?\n",
    "\n",
    "You want to train a model with a set of parameters on some data and evaluate each variation of the model on an independent (validation) set. Then you intend to choose the best parameters by choosing the variant that gives the best evaluation metric of your choice.\n",
    "This **can be done with a simple train/test split.** But evaluated performance, and thus the choice of the optimal model parameters, might be just a fluctuation on a particular split.\n",
    "Thus, you **can evaluate each of those models more statistically robust averaging evaluation over several train/test splits, i.e k-fold CV.**\n",
    "Then you can make a step further and say that you had an additional hold-out set, that was separated before hyperparameter optimisation was started. This way you can evaluate the chosen best model on that set to measure the final generalisation error. However, you can make even step further and instead of having a single test sample you can have an outer CV loop, which brings us to nested cross validation.\n",
    "\n",
    "Technically, *lightbgm.cv()* allows you only to evaluate performance on a k-fold split with fixed model parameters. For hyper-parameter tuning you will need to run it in a loop providing different parameters and recoding averaged performance to choose the best parameter set. after the loop is complete. This interface is different from *sklearn*, which provides you with complete functionality to do hyperparameter optimisation in a CV loop. Personally, **I would recommend to use the sklearn-API of lightgbm**. It is just a wrapper around the native *lightgbm.train()* functionality, thus it is not slower. But it allows you to use the full stack of *sklearn* toolkit, thich makes your life MUCH easier.\""
   ]
  },
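  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The sklearn-API route recommended above can be sketched as follows. This is only a minimal illustration, not the tuning actually used in this notebook: the tiny synthetic data and the parameter grid are made up for the example.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from lightgbm import LGBMRegressor\n",
    "from sklearn.model_selection import GridSearchCV\n",
    "\n",
    "rng = np.random.RandomState(111)\n",
    "X_demo = rng.rand(200, 5)\n",
    "y_demo = 3 * X_demo[:, 0] + rng.rand(200)\n",
    "\n",
    "# GridSearchCV runs the k-fold loop for us and keeps the best parameter set\n",
    "grid = GridSearchCV(LGBMRegressor(n_estimators=50, random_state=111),\n",
    "                    param_grid={'learning_rate': [0.05, 0.1],\n",
    "                                'num_leaves': [8, 16]},\n",
    "                    scoring='neg_root_mean_squared_error',\n",
    "                    cv=4)\n",
    "grid.fit(X_demo, y_demo)\n",
    "print(grid.best_params_)\n",
    "```"
   ]
  },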
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 7.1 Initial hyperparameters"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "RS = 111\n",
    "np.random.seed(RS)\n",
    "ROUNDS = 1500\n",
    "params = {\n",
    "    'objective': 'regression',\n",
    "    'metric': 'rmse', #metric is chosen and descripted further\n",
    "    'boosting': 'gbdt',\n",
    "    'learning_rate': 0.1,\n",
    "    'verbose': 1,\n",
    "    'num_leaves': 2 ** 5,\n",
    "    'bagging_fraction': 0.95,\n",
    "    'bagging_freq': 1,\n",
    "    'bagging_seed': RS,\n",
    "    'feature_fraction': 0.7,\n",
    "    'feature_fraction_seed': RS,\n",
    "    'max_bin': 100,\n",
    "    'max_depth': 5,\n",
    "    'num_rounds': ROUNDS\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import warnings\n",
    "warnings.filterwarnings('ignore')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 7.2 Initial tune hyperparameters with cv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "learning_rates = [0.005,0.01,0.02]\n",
    "nums_leaves = [8,16,32,48,64]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "lgb_cv_params = []\n",
    "lgb_cv_results = []\n",
    "for learning_rate in learning_rates:\n",
    "    for num_leaves in nums_leaves:\n",
    "        gridParams = {\n",
    "            'objective': 'regression',\n",
    "            'metric': 'rmse',\n",
    "            'boosting': 'gbdt',\n",
    "            'learning_rate': learning_rate,\n",
    "            'verbose': 1,\n",
    "            'num_leaves': num_leaves,\n",
    "            'bagging_fraction': 0.95,\n",
    "            'bagging_freq': 1,\n",
    "            'bagging_seed': RS,\n",
    "            'feature_fraction': 0.7,\n",
    "            'feature_fraction_seed': RS,\n",
    "            'max_bin': 100,\n",
    "            'max_depth': 5\n",
    "            }\n",
    "        print('Processing learning rate = {0}, num_leaves = {1}'.format(learning_rate, num_leaves))\n",
    "        lgb_cv_params.append(gridParams)\n",
    "        lgb_cv_results.append(lgb.cv(gridParams, lgb_x_train_part, num_boost_round=ROUNDS, nfold=4, early_stopping_rounds=200, stratified=False, seed=RS))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "i=0\n",
    "for res in lgb_cv_results:\n",
    "    print(lgb_cv_params[i]['learning_rate'],lgb_cv_params[i]['num_leaves'],len(res['rmse-mean']), np.round(np.min(res['rmse-mean']),4))\n",
    "    i+=1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 7.3 Fine tune hyperparameters with cv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fine_learning_rates = [0.019,0.02,0.021]\n",
    "fine_nums_leaves = [27,30,32,34,37]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "lgb_cv_fine_params = []\n",
    "lgb_cv_fine_results = []\n",
    "for learning_rate in fine_learning_rates:\n",
    "    for num_leaves in fine_nums_leaves:\n",
    "        gridParams = {\n",
    "            'objective': 'regression',\n",
    "            'metric': 'rmse',\n",
    "            'boosting': 'gbdt',\n",
    "            'learning_rate': learning_rate,\n",
    "            'verbose': 1,\n",
    "            'num_leaves': num_leaves,\n",
    "            'bagging_fraction': 0.95,\n",
    "            'bagging_freq': 1,\n",
    "            'bagging_seed': RS,\n",
    "            'feature_fraction': 0.7,\n",
    "            'feature_fraction_seed': RS,\n",
    "            'max_bin': 100,\n",
    "            'max_depth': 5,\n",
    "            'num_rounds': ROUNDS\n",
    "            }\n",
    "        print('Processing learning rate = {0}, num_leaves = {1}'.format(learning_rate, num_leaves))\n",
    "        lgb_cv_fine_params.append(gridParams)\n",
    "        lgb_cv_fine_results.append(lgb.cv(gridParams, lgb_x_train_part, nfold=4, early_stopping_rounds=200, stratified=False, seed=RS))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "i=0\n",
    "for res in lgb_cv_fine_results:\n",
    "    print(lgb_cv_fine_params[i]['learning_rate'],lgb_cv_fine_params[i]['num_leaves'],len(res['rmse-mean']), np.round(np.min(res['rmse-mean']),4))\n",
    "    i+=1"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "scrolled": true
   },
   "source": [
    "### Part 8. Validation and learning curves"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 8.1 Making predictions on validation set\n",
    "\n",
    "I'll set params as the best I get by CV.\n",
    "\n",
    "I need to create variable *evals_result* to record eval results for plotting validation curve."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "params = {\n",
    "    'objective': 'regression',\n",
    "    'metric': {'rmse'},\n",
    "    'boosting': 'gbdt',\n",
    "    'learning_rate': 0.019,\n",
    "    'verbose': 1,\n",
    "    'num_leaves': 32,\n",
    "    'bagging_fraction': 0.95,\n",
    "    'bagging_freq': 1,\n",
    "    'bagging_seed': RS,\n",
    "    'feature_fraction': 0.7,\n",
    "    'feature_fraction_seed': RS,\n",
    "    'max_bin': 100,\n",
    "    'max_depth': 5,\n",
    "    'num_rounds': ROUNDS\n",
    "}\n",
    "evals_result = {} "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model_lgb = lgb.train(params, \n",
    "                      lgb_x_train_part, \n",
    "                      evals_result=evals_result, \n",
    "                      valid_sets=[lgb_x_valid], \n",
    "                      early_stopping_rounds=200)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 8.2 Validation curve\n",
    "\n",
    "Now I'll plot validation curve."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ax = lgb.plot_metric(evals_result, metric='rmse')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Smooth curve shows that model minimized metric well enough on 400-600 iteration. But perfomance of LightGBM is very impressive, so I'll keep **ROUNDS=1500**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 9. Prediction for hold-out and test samples "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 9.1 RMSLE"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lgb_valid_pred = model_lgb.predict(X_valid)\n",
    "lgb_mse = np.sqrt(mean_squared_error(y_valid, lgb_valid_pred))\n",
    "lgb_mse"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Resulting metric is good enough to continue with submitting test set predictions on Kaggle."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 9.2 Feature importance"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "gain = model_lgb.feature_importance('gain')\n",
    "ft = pd.DataFrame({'feature':model_lgb.feature_name(), 'split':model_lgb.feature_importance('split'), 'gain':100 * gain / gain.sum()}).sort_values('gain', ascending=False)\n",
    "print(ft.head(25))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure()\n",
    "ft[['feature','gain']].head(25).plot(kind='barh', x='feature', y='gain', legend=False, figsize=(10, 20))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Obviously, *full_sq* is the most important feature, but second one, *cafe_count_2000*, is unexpected. Same for the following, *num_room*, *life_sq* and *build_year* are valuable, but *sport_count_3000*, *cafe_count_5000_price_2500* and *cafe_count_5000_price_high* are surprising. Also, there are some engineered features: *extra_sq* is on the sixth place, *ratio_preschool* and *rel_kitch_sq* in top."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 10. Model evaluation with metrics description"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, model trains on full train set. Metric chosen for competition is *Root Mean Squared Log Error* **(RMSLE)**. \n",
    "\n",
    "This metric is used when the Target variable is converted into Log(Target), and this is our case. To evaluate RMSLE, I need to take square root out of *sklearn.mean_squared_error*. So, our metric on validation set **X_valid** (20% of full **train_df**) gives about **0.2271**.\n",
    "\n",
    "There is an economical meaning of RMSLE, it penalises underpricing more than overpricing, which may be relevant for banks, as they have huge amounte of financial resources and may benefit from investing money in realty and sell actives later, but with greater profit. Also, realty prices in Russia tends to grow, especially in Moscow. This is a general direction, and there is one rule: realty become more expensive over time, so go for upper limit and earn more! Anyhow, it's just my opinion and doesn't make big sense in context of this project."
   ]
  },
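  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the relation concrete: RMSE computed on log1p-transformed prices is, by definition, the RMSLE of the original prices (this notebook trains on plain log-prices, which is essentially the same for prices this large). A minimal self-contained check, with toy prices made up for the example:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.metrics import mean_squared_error\n",
    "\n",
    "y_true = np.array([1e6, 2.5e6, 4e6])\n",
    "y_pred = np.array([1.2e6, 2.4e6, 3.5e6])\n",
    "\n",
    "# RMSLE computed directly on prices\n",
    "rmsle = np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))\n",
    "\n",
    "# ...equals RMSE on the log1p-transformed target\n",
    "rmse_on_logs = np.sqrt(mean_squared_error(np.log1p(y_true), np.log1p(y_pred)))\n",
    "assert np.isclose(rmsle, rmse_on_logs)\n",
    "```"
   ]
  },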
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lgb_x_train = lgb.Dataset(X_train,\n",
    "                          label=y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "bst_lgb = lgb.train(params, lgb_x_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lgb_test_pred = bst_lgb.predict(X_test)\n",
    "lgb_test_pred_exp = np.exp(lgb_test_pred)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "output=pd.DataFrame({'id':id_test,'price_doc':lgb_test_pred_exp})\n",
    "output.to_csv('lgb_%s_%s_%s.csv'%(ROUNDS, RS, np.round(lgb_mse,4)),index=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After submitting result to Kaggle, best result for Public LB was **0.32145** and for Private LB **0.31787**. It means getting in top-1000 out of 3200 with the first place scored respectively **0.29755** and **0.30087**."
   ]
  },
  {
   "attachments": {
    "image.png": {
     "image/png": "iVBORw0KGgoAAAANSUhEUgAAA34AAABnCAYAAABfGJDaAAAgAElEQVR4Ae2d34scR5bvjzxr0x525JTBcnmMIM0Mlx62d9R48VDCGIrxReywy+2al0v5yfU2euwXQ+kvUIFf+lF+Sz+52BeVdlkMZr1bYIyKHa5pzTRMwY5RQuNxWeZKZa2XaWS8vefEj8yIzKzMqv4hlaq+adyVERlx4sQnokp58pyIPHPv/v1DwgECIAACIAACIAACIAACIAACILC0BM4c8rG0vUPHQAAEQAAEQAAEQAAEQAAEQAAE6CkwAAEQAAEQAAEQAAEQAAEQAAEQWG4CMPyWe3zROxAAARAAARAAARAAARAAARCAxw9zAARAAARAAARAAARAAARAAASWnQA8fss+wugfCIAACIAACIAACIAACIDAyhOA4bfyUwAAQAAEQAAEQAAEQAAEQAAElp0ADL9lH2H0DwRAAARAAARAAARAAARAYOUJwPBb+SkAACAAAiAAAiAAAiAAAiAAAstOAIbfso8w+gcCIAACIAACIAACIAACILDyBP5i5QkAAAiAAAiAAAiAAAiAAAiAwBIReEh3fzek/xf/fzr4fs5uPf0jevmvfkF/88pZ+sGcVRe9+JlDPhZdSegHAiAAAiAAAiAAAiAAAiAAAtUEvqcvb/0zfbJ/oIo+9fTTcxhw39N33/23qvfDsEF//4vz1c09QSXg8XuCBguqggAIgAAIgAAIgAAIgAAIlBD44hYNxehbu0Cv/+0levmZkrIFl75/8Af6t49+T/fiXfrdX1+mnz9bUOgJzcoZfuOPPqGXbj+krYuvUv/yuZm6NXud+9R77zN664EV+wx98PYb1JrJmI6p++7ndFWqXvgJHbZCK0R9Dnsf06X9s3Trndeo7l2RxLS6Tr5XJyPn7h413/+KbiZlMtcl/yhlzr5IX/5mg2qJXCLdDyfDnF771ZvU2bD5Wb2LOFaUyelrZGd0smNrWy5in+t7RkZSV504etlxnKaLKu+wzpbLtpO9bhvOlrP5+AQBEAABEHiEBCY07LapOzRN1jsUdeoUlGgQ95u0HaUF6p2IOnW3RkYm3wF0og55RaT6ZEjddpf4YqZ+KtuWEfXaO31qhvpaVgdbwy1j8/B5VAKZcTyRuWF0iXkseRK54zUZdqmdTMSMzm7bpm5Sor1DfTMxZpaRVMbJoyJw94t79B09RS/81fxGn+j4g7M/o9d/EtM//ceE7sac8TPJXY4jZ/idXres0Wdv5E36/U+IZjD+xh/ta6OvSEG+4e/uF13QeaV1uUipkWuNCWukGCPy0ru/TY3Mo5QxdV56j3LGH5UZKratpEwRR2NclZbRbHyD0mdojdCkzN5v6cyHn9OZHqWGt8pjSz7Dp7BfLL5wLM5vUP+dxKpVSiQG54UXtCE/RzuJvn53kAIBEAABEHiMBOK+GH1t2mFjLuR/S/vNbb75nm78aYPLMeTUTTjLSIw3ayxYmSbNBl7W+IsHXRKDLv9gOAViy6Q5zplrDDjZOD0ZAic/N6xePCd6kU0knwGPZ7+fJNWJNeTqjXX9MMIYfcnDBvXwYJuapI2/mWT4TSD1qAioNX3P0NqPjt7gs2tPH73yAtd8dLt67v1Refq2Lv7U/PCeo9avX6Qtekhv3dijcRkkNnausBfy2kUp7x5i4HxMZzxvnHudz6fWlWvfqn8IMjW85PBftafv2kZo8kNqXJDTBzTY01mzl2HP3C9DXYmNnesX2ff84Cu68tF9nUf3Kf7GnE75GO/eV57Ha69bTyFzfP0sl35IPb4mhzWutl552XgTz1HjFfFzp2Xo7p8dD6aqlvkT00CMaTYe29Ym2/gpfSBN7X9tuLHR+am4b51+8T/nbdOvyPBJBJeNRVJITu7T4M5D/rRyZ2ynsk9eI0iAAAiAAAg8KgJ80yz33/VOg/+VkIP/LWVvH7sAaRCrjMyfmHYjzqo3aN06+MJNanPWc
DCiCX/SZEQDtuZSmQHVW6oEDUaqBDvxutRs+l5DqZo7+CZ/O6pTu501DSc0HuVKI+MkCZzG3GD95MFBsykPG2ZRlu95VME2tYy7ONYTkBp2AgZ1UtMr2uXHFkVHXkZRKeSBwOMkMJvhJ94WMbDU/7+l3kefqPNu9saevuFQTlvuY2omBg3/UO/p+M76j8+l/T3/l9oIfPDnKV8iKco3/TfY+BIDZDOtqs9C6rzzJh2+8xO6lr2k0mV10wqeTmk2n1lD7BkKp4ajzlKGfwyUR3LNk1P78Zpq7ea9jLX33F964Z+eSjMk4ntiNPE/hi7rwnpl/SqskMn8hmI1rMX9Gv7JGrRSbbaxUA2YhwR0lg1WxX2edubtE+vlzNkz7Ml1/40Qz2M69+2cTuu43wFb1p33GWBIggAIgMBKEpiMBuq3db1mrTiioLauWIzG2kjzwYTUZJdMvywUdDL2fq+l/iRjpWmvDMvZafvivRR7HyWetN2iRs27gMQjIHAqc4P1Dps87jyHJJy48uC1XJEU4hvNUD5LjxEVTtm5ZJQ2gIsgcGoEqg0/N8ROjKy3n6Uee9+Kjpu32br5tRhib9It9ordvP0Z6RvjacbRcxSKB4kOKL5bJFG8V3vsKWTPz6+th6u4XFFuZV3rIfoTr+FLDNtPqJfowt6030h/3HWI1oizBsYMZaxn8eyz/g/K+We1B/Obb43H0xo433rGiGtI1C5fUEbu1U+tl5SNEON1a21qo7re0mOQrgm0HrTUGBz/6UAj2/WN+tToKfDcWYPMhl9S8fhZ2a5BWzkWzgDahwSpV3O2dmy7NLVPTiPqVDzGsuZUwo+FmTxAeEASxqs48Ny3611lTn/Jnkw9p62XlR9oJMatZXyWOjOujc1qgzQIgAAILCuByTj91yXpY1BTD3+HhXfRSankZDLsqZvzdkuvC5wEDXVj7675s+24BmYiYMqJlsvhos1wSgnJHrNHSDxI+v/usMhYLamOS1MJ2DHzChxzbniyKhM2HJTDihthUjrcbPP5MPEeyxpQFTXKDwhya0jZB61DSn0ZiTCcgMCCEKg0/PRNuA25Y605RLGjQh3zPdi6uJFs1FLfUBYdXd2L8wVnzTGhgXThQiJ31qo2xHOWulfvEF1Pbvw59JTXHabGn9/isGc3mJmu0yxlfKmZ1D57yjIGdGr8iZeTDRQOEX1JGavWcHGNU1+eNro4zw3bVEW4r/deUIb64dsSRusYPXy9dvkNZexc/dB4cT98oNZDphvrpCGkb/1rrCTKRjpR9sHAXONoDeuz1LAhpjRjOzP0ySipP/a+VutG0/BjP4zXGpLWcyo8xABUBvXGC8oAv3nnC2203/2CeuL9TIxiryUkQAAEQAAEjkpA1laxwSWbcbgbdARB6j1UolW4Jp/x+i3n/r28VZZ9neWm4aLZ4hzqKTbrcES1K6kHSW9SA+MvS+uRp6fMjbn0MCHDXlixCAib1OeNgnhXIG3w89rRobO5i9fGNBleISRA4PETqDD8rKfOD+ULn59hX9ScN2vezprQQGJvTGYHz2pJs9Udn/+pNnqSnTXZqPqVv17ObUtC+S6ZdW9fTtFpljKuTO/87nPUVgZoasTVWzqMNTEwVNgtG5+ymYoqK95VMdhsGKInkch4rdjqo1tJP7nM5oaub/th1xwmaxd1OONLt4l3XtUeRDEOib24bjikNQ5p/3MTEvk1NRRDq8dsY2FLJ+sTk7Wg+kp1O7P0ybaiP7OGneS63tIae1C3OC8xfK0nUFU3RuID9vSJh9h4j9O1oKoQ/oAACIAACByXAK+t6kjYJ9+Ej7bZ49aP8xLFAJBwTfYhdq6U7xSaVmYvzXW+mWdD8UrehWOKmZDTfrpTaFBv+WsNU4E4e9QEZpkbFTrZUFPrSbbF1RpBNvbWd7TB3+/vUDvaZiOwn1ueNE2GlYVPEFgUAo9oV89zFD7HXX7wUId0JuvlbGijb1gqONaDorxQH/u8xMh479v8b
pi21Kx1z+vQSFvN/dRhis71aQaUW2laGbWWkdcpmrWMyRICG2pq1/SxPsk1V66cq7psZOQ2UxFjhY1DedWF8j45MtjT1mQvndok5W3/NRe1kr7r8EW9GY/nMVXGIW8uc1tvbFM3HjnlCbvsKMwc5Nh6ngd91rFQNWy45DNkw1ZVtvlT2g6XqezThjOeXN6uhXTb8M65v3a3Ub3DqTaw7a6hyqu9/0BtqhPeE86ul9KThAQIgAAIrDSBoFbn/mfCPc0avbqz7q8UktlcYxj1aNhIDTGJNOmLN0aMvqJXOUwTar00xFv7N/1CERuYo07Jqx/84kgdg8Dpzo0qxdINWTZDt6zdXMj1HusNiaJuRLsx70yblJ8mw5WHcxBYDAIVHj+rpL8Gr/KGWapljBrrJUzXREkZs6tmUXicuuk2nibj2Tq0niTxdrneK6um/ZyprvZonXm3OKxTGS1WHhsyZ5QBZV9FYS84n6VlyteoWS9R6eYg2fWBTtOFp2L0qd1OOUw397oMsxuq58FKpdjQxjRn+pkYRFmGNjxYGW8zjYWRn6whzIfRVrbD//CrHV7n6JMNR3bnpB0Dd9MW0U55AlU4LHsAbfiyDfe890e9eU/RPJ6ODldAAARAYGUIBOsNtZ7P3cjFbsRSuB7PhPAVevdonVJbUb8WImKS7R3XGJwBrfUWiTfR/G83ApGQUrV2ULb05zBTrOmbgecRi5ze3JhBIbMhy/RQ32IZ7jymI8ooloxcEDhdAhWGX8HaKjYopr0z7+btP5rneWxUZTxTdlMSr4zs1knp+kF9cz8lZPHEOdgNOpxXHIjxYAy8ZIOOUoPOKFVZJm0rWQtn17056+4SRnbdGIu36wX1RicFcnJlOKPU6BOdzcYtSVinriOvzEjWASavbthP1zsW6KyNJ14raNf42flxhHWZyaYuyaszRFd9VLczQ5+Ei1oXaTZvsYZbMm/t+kTtubPzMTECzcOM9KGACfdkr99VVtMa8FZnfIIACIAACBgCwTo12Ok37A5MmJzxkiTr8cSAk41T2HMnS+eMd484tC6J7GRjUG+uYXdedI0+NtDCU6AdNtgAdPVm/2J/29tk5hRaXS2RpzI3ZkGYbsiSvLIhqRaS3tvFfd1I6tmzr3zgfWSTTV3yMhJhOAGBhSFQGeqp1lbRJ7y7oazh+lwZBrcuyu6G+Z09r118VnldbpruXftVulZNjI3OO+z5eu8ztR5NFynyRj1CNhuv0SGJN4/Xrd227bpePWsIyjUd5mdLyacO+ZulDBeWts6LF85wFAFs9H3peS6FEecz55fe/UpKqMOGFqpEkRwxnhOvnl1TJ6Vlo5qP6S1VUf+xL6tPxpU3brn6oSng6cNGJu9oGrJH75IrQ7ytdl2gVEsYOv3KljHiSz/YeLbrJ5P3BroVZminuk+uQDlPecsaSX044996lT7g+fqWw8jyM4XJhnsizNMSwScIgAAIFBHgd+xJ6CRvlLHdjHQBNvrEw5bZoiWpLNvx7/Drsrc57NLUELceG4ihKmMNMElIaGakcvUf+9Jt+1Jue0k2ZZGoTnvd5k//LNCbC7ubzEyviyuzEShgfMy5Ie3KGj217NMoYeeIHbvJ8Lp+x1/hLp2yt0ufohqHAbtzK6NXlYzZ+o9SIPDoCJw55GPe5sQTcmnfNTbmlYDyIAACIAACIAACIAACIAACIHCyBO7e+ke1BOdC4//QpWRfkTnb+MO/0D/8/h49/9f/l/73z+asu8DFK0I9JdRQ1nB9bN7HJz1hV7fa2dK+XHuBewfVQAAEQAAEQAAEQAAEQAAEVozA9/T9wdG7/OeD745eeYFrVoZ6yqYWt8TD54S75UMUF7iHUA0EQAAEQAAEQAAEQAAEQGAlCJw//yN6av9rGo9+Rw9e/jmd/cGc3X74Be3u/ydX+iEFtTnrLnjxI4V6LnifoB4IgAAIgAAIgAAIgAAIgMBKEvgv+sO/fEi/v/ffRE89RU//YD7L7/vvv
iOuSWsX3qC/u/QSzVd7sYHD8Fvs8YF2IAACIAACIAACIAACIAAC8xD4/gHd+fch7X4xoe/EipvneOqH9ML/+gW9/vPzvH3ich0w/JZrPNEbEAABEAABEAABEAABEAABEMgRqNzcJVcDGSAAAiAAAiAAAiAAAiAAAiAAAk8UARh+T9RwQVkQAAEQAAEQAAEQAAEQAAEQmJ/AyRp+k12Kdifza7FQNQ4oHgwoPsYWsAvVHSgDAiAAAiAAAiAAAiAAAiCw8gSKDb8DfldfFFFk/h/ACppzosTUffcT6t11q0kevxPxvT0ac/b4o0/oTC92C+TP7+5RMydHF1PvV1T1i9rKi0IOCIAACIAACIAACIAACIDA6hIoeI/fhHZ7AwqabWoHAobT0ZDiVoPCtdUFdbye36fee5/T8OKrdHj5nBZ1+Q06rBJ6foM6Fz6m7u59atl6qg4b5vvP0AdvhyrVeUd/qgT+gAAIgAAIgAAIgAAIgAAIgECGQN7wm8S0u9k0Rp+UDmiz3chUs0kJi+zRIJZ0SI2GshTtRZrsRtTfNcmwQa1GSGviTexNWOYmS5ZDDMtdCsSwJLnGYZYqn+VNMTYL5ao6rj5Em5ubtMtS25tGLwlFtQpZfVS9gj9KT6OL8BAZZbpPNYrF6PuMeq+8Sn3HeBOP3xXaSPLEg3dp3+hx9kX68jcbJO+MrG+cpZuffsFewnMqrUrsfU1Xz56jL89LSjx+X1Pjndeozikt9xzVb39FV1Xhs3TLXFNJ/AEBEAABEAABEAABEAABEFg5ArlQz0nMphLbOPEgDfWctmzvIB7SIGCjqM3eQTbkJoPdFCAbSbu7m9RU11rUoAGNJnx5LWSDbJdiOZdDGZqbyps4GRlPI9dhm48GqoIqlf6ZJpdL+Po0iRXw6g36ZPRh+WFMvWkdEwOUq9at7pM+G7e86K9E97Qh96zY6HNLqHMO6ezui4H2Jh2+8yp9QF9RtGdKbfzUT3P2cO8BXXtdG4Y5WZxx8/Z9Ct8WWW/SlxcP6JIJLy0qizwQAAEQAAEQAAEQAAEQAIHlJ5Az/KTLsbZ6tEHHFtikz56v3GYnBzSOY9oUK1EdAa2zRy89amw4Wa/eGgVcbDLRQtaCkHaN5XcwYe+fknHA150y7JFrr69Rrln2exXLLdfnYBxTvBkaLyPbcLWQwt2Y/Y1FB3sb6+ydVJfWKGTPYVyqe5EMot6Nz+itB2yI3RGPXdnxMl1PvHLnKHyOjbs/3TcVzlHjlWfo6l5s0myUspHY2Jgub+viBrWUN5CodvkCXXvwZzZlcYAACIAACIAACIAACIAACKwqgXyop5BgQydZz6e8XAPaHR9QmGRKITbUYjak6nKujzVl3dkEG20c6tnbtVc5GLRhyonRxYbfAZtW4zgw+WxgNZo0iXoUmSphgz2FoUnYj7Vpcsv1OWDPYugry0Zgsdlnm/I+2So94BrKYMzp7pU0iYd087mf0OFvnlOhnlc+ejkJ68yVPn+OYg71fGk/vbL1fHpe2zxHW+9/TUMOW61LmOeFF6rXB6bV+YxDYHmjmboxBr1LSIAACIAACIAACIAACIAACCw9gZzhJ964InsoCLKL2NiLx+GSyhFmLon3zh4H8YD6JGGggcqSdXmJDbhWYxNmSGMuHnN7DVuJDatNDq/clLRaTzeiSeI11IWmyy3XR/qlvXaJskXdNJoU9Ivrq5pTdTdVkw/efOWXoUq1fv0i9d7/jLo/fpM6BZ46WZd3idhINJu0yHq/biKHT86/TK2zn9Fg7z7Fn0qY52vu1dz5zXvfcN45nX/3Wya9Rg0YfTlOyAABEAABEAABEAABEACBVSGQC/XUIZC8Bs/GWPKGKH1eqxdq+83hosM3bcimWIujQZxc1x42a2TF3nI7DrQkdvrRgENIg0SwbPLCxiEbg+o4mFDM14wEk8n2oPLcFckt1ycb2pkN/UwaMCe7HAaqEWRDSIt0z9bOpHl3zv6vztLVD3/LRlj+i
O89pK3nOb5TDrXeT5+mf02456d71HtQHuap6uzvJ6+SGPOOoDfZQ+g4ZlOxOAMBEAABEAABEAABEAABEFgJAjmPn2xg0mhOKOpFBoDeXTNn9/HVYJNDMXlXT7bX+OByDfbVGcMtWG/wIjcbtsmbvDQ3qd9PPXjKEGMPX2L38fkmrycccLtKnMhrbeYMv1K5nj66Tbbf9MH9qjcG1NPKqrjTVqOoV1KcN6AJYi470HW9XU7N+kBPd12s9O/Ga3Rrj3fu5PfyffD2G46Xk0Mwf/kiEXsEz9wWCbzJCxuJlz78Iw0v6506JVet1Xv3cw7zvMCe1PJj6yKHjr7P7wxUxWTTmLC8Aq6CAAiAAAiAAAiAAAiAAAgsNYEzh3w8lh6yJzGKnVctnIISEhY6ZF9XI8z6DY/Z2CPQ/agaZl8TcVQ5qAcCIAACIAACIAACIAACILA8BPIev1Pvm4R09nm9n3j0pnncjqKE/w4/JUG9f+8kjb7T0v0o/UUdEAABEAABEAABEAABEAABEJiNwOPz+M2mH0qBAAiAAAiAAAiAAAiAAAiAAAgck0Buc5djykN1EAABEAABEAABEAABEAABEACBBSMAw2/BBgTqgAAIgAAIgAAIgAAIgAAIgMBJE4Dhd9JEIQ8EQAAEQAAEQAAEQAAEQAAEFowADL8FGxCoAwIgAAIgAAIgAAIgAAIgAAInTQCG30kThTwQAAEQAAEQAAEQAAEQAAEQWDACMPwWbECgDgiAAAiAAAiAAAiAAAiAAAicNAEYfidNFPJAAARAAARAAARAAARAAARAYMEIwPBbsAGBOiAAAiAAAiAAAiAAAiAAAiBw0gRg+J00UcgDARAAARAAARAAARAAARAAgQUjAMNvwQYE6oAACIAACIAACIAACIAACIDASROA4XfSRCEPBEAABEAABEAABEAABEAABBaMQN7wmwyp2+xTPIeicb9J3eFkjhqnXDTuU7M7pESjbHpK89KPZlP+n6//U8Q9GdnCRvXZ9L0fP1691fzr0iJNp1mAPO7vQDJ33XnvKP649XNUWbDTmPrNJ2++HRdiMl/c7z6f+19/YSO/C9P4TGjY1b8bfj3WTn2PzW+KkpH/TZ2mw/T2jttr1AcBEAABEACB1SbwF6vdfbf3Me1GRO2dPjVDN/+o53JT1KZx66TkHVWPknpi9G2PqBP1qR5IObnR26ZuLaKOziipfEqXgjp1+vVTEr6kYvkmuxcxt6hjxlHuu7vUHjQo6tRJDW2m6+r6uMXjHWauPKKkzL1ebap+j0iL1W6mveOPv/o9aBLlfgOHNBhNqJ79TZiMaDAsQKjkROq3tBPq62q+8QO1nX6TTJa+kNVB5+IvCIAACIAACIDAKRDIe/xOoZEnR2SdakV3yU9OB+bSNFaWbisxFohvyRpsKAwHo9RbOpdEFF4UAkG9Q/0pRt+i6Ag9FoxA2OAHPkTRbuwoVqd2u/g3YTIa0LDdprZTWk7178qO9wAtqLe4XESe6Ew9JEEABEAABEAABE6XwGweP/MEV6vSpk5nRN0Cb4GE7mxHRmH3Sa6E/bTHbFRwPQ5Fs3Lk6W8gngmb59YxpaZ9eG2R7+2YVmdqvtu/Nj/x5lsU/WRae+2sevwIu+AJeZSIrXeMp0z1l8Oj5Mqwybc7Vr8CL6Bh07JPwkUX9oTsNAa0zQ0nHkhXx0SebVp76iKbzOpp8zOfIT+B72fyqpN+W0mfpaLRvbPepW5kJLEB4nud/PrtTodG3TEl/Vc8BtSw3qsjyPR0snolE9OOhdHPyJ+Pt/Go2YnBvHeMuOoPMwca0m8zR7hSMs5WgOiV6FxwvaDcUOau5W365bOXSu6cHlIzkirWw+te46LePLJzl3u6vU2R284sekrT5nC/u/rrZr5XBTpL2V7igZ6NnfIu2bFJvsu2db+P7Z2KkbPfT9XtyAjJzCH3+y4lZuE2S52iORLIb6mdN/Z3yqg165wxxWf9qG02qB4NaDSpOw+JJjRid
1+71SoWMxrzTAsdb3NIzX6fJpMJlw+K6yAXBEAABEAABEDgdAkcZo/7tw6vbd04vGPz79w43Nq6dnjrvslQ6a3DrRtJicM7Nzi9tXWYZt05vOGmlUy3zv3DW9d0nWtWsCmTyrAKFHwqHVId79+6drh17dahVZEVKk8XiDzkHt9w+8llVL8chfx0UR8dTqyN9NGpzhIL8gp5u6ykmoyJIzuT9vUqaKOwv0WZZlx8pZ2C+noyZqY/SdrMjSStmG4dVqW33PmW6ZsaS55L02VU6JSVl00XzOcq3mq+uTrn+u0gy50axk79nLysjtl0VmbR9cx3QOZIylCmFH9nMuPsz6Ps/Ld6O/PQjG8iRumRfi+zanrpjH7qWkGer7fVIW0jx06NRXpdzx+bNvUThU0f3e+WpyQnVJ/4++j8vqg2k7SWmbLN/C6Y74j3G5r93mQ5JnWs3ma8+HuQ6pHtS6bdirHIjnXC32OR/ib642C5iH6ZdkWQGgPW1ZnjSn7mT6EOmTLZpB5vkZ39P2WVrYM0CIAACIAACICAJlAZ6pkLBwybtNMuMEb5KXe6XEiHDGZDhjqN0FQMaL3BMUXsNbhi140E6yRZsxyToEF96yHjCsE6P5EeyhPmEzz4iXwv4ifqaaco3GxLHJTe+GYSUIOfYCeXlf5DGp+IEn678aBL1LmSPm03bcm6G/HgjEfsZNgMTecDxtqnRjC/IpPhdd6kx2/bCNUf8YC65IwZP7mXcfRDQ9vUsmPKT/wF2dBCiXfZ++lfbxZOJq9VTvh1fJnlOpWzs+34fS6voz0d9U6De2cOEyJnk7N8uvXV/KVRMnek/WHbCcHldY+tNnt6BvEsoo9Wpmq+G6l1dx5Oxqx1m9Kpx56wfoOOMPXm0nk6O/bm9XiG7TjryMJN1tCEGKo1aaxj8jvEwc3sVav+2eE6V9K1kipscSgeMK32+pU+h0gGpg96zo/snC/ixnmz1UnnmJ4jrh7mN9TIp6OMRSMMvnAAAAy7SURBVLTtb+yk1vsyD9sVK5s/5bfP/Z7rMM/N9DvglCX+N6IfCdeItpvNtI3cDjBcKauDlC8qZ+SrEGb+3e3n/nfG3NUF5yAAAiAAAiAAAgmBilBPbVTUG/6dQFDjW6VxIkOd1GvZMutEAzfcZz2/fm69dqSgnyAI9OYVSTiXqMDGga/S8VJ8IzXkG5ehxMJ5R1unWIcgG67FV/h+/fhH3eWix2AYtamZkVxvSIa+AexuS0ipHGzE9PkmiPWb59DhcWxfRtNvoCZiYQ75xrpYEd2cp7uvgarPSnuaBTW+QcxMJr8aPyBwefgXy3WqYmdkefKr6vD1Id+4t9xeBFTj6T7PsZ75vqR1dfuzfOfSOidwVjXfTROe3uoBRJdv7iN91YQ4zjn15lbe08GrrccmUuHV3gX+VvCh+rhO/tDVqHrosr9dPN5185BHfgcCP3xZmuJnWt7h63yUOiIuq4fTxFHGIglJNeGv661Co0+1Igb0sGfCPW2Yp3QydpRwTtUmTf00Q4WhFmwcleiQFsUZCIAACIAACIDA6RCoMPxOp9HjStVrhMTA6egnzsoAqzAejtSoMaKK6pr1NLI2S+9cZ9YfFZU9gbx0HVZemH4KLvn6BlRuxMvK5yRwX3jZkLcrZK6MzbBru2x6ET6n6jRR2s3FwvRneh0t87F0O7du6qS1KJnvhU1p77K6vTc39s2IvVLi7QkKKzySzNx6SdtqbE9O8NM8/OFOU990Wq1LLGviKHXK5KlrxxkLrttq89rNHg3FA1o4duLJ5CgI2d1zXXbz5AdtGeO2VEX2Au602QMoG0fVU+9paZ2Ci/76TbfAvHPXrYtzEAABEAABEFgNAhWhntqTkYTqGSYTcXlkjnwZ9g4d0aOXEZ1JGo+IG2qnnuRnih03qTxRafhdVpz2XnUojRrT3oZsucp0pe7FY1AsV2+gIBt6ZMejuDznqhv20Uw364G4tY4RUltYv7L/UzVXFwplJlXmY
WcrVdXh63UiP5RPz0kr4Xifxe3Ld67eWPe9pcdryK9dMd/9wgUpvrHv92WTmJMKd5Y25uVaNDaOrkV95PnHv1QVR/Z3QL7rZgdgNX/dUOQZdD5KnQoNvctHGQsVEqtf2+DJchI23DPm3Tw5XrY4zFOMWg7XnBqtecx/ExDq6QwITkEABEAABEBgTgIVhp9e20ERPwm2jg42FLajglZ4rUbyjz3/43+dwzDTdWcF5auyxCApuYFIDRv2chUqVNVAxXWzjs5dVyVPm72XuztGUNznnQ4LRPoGgi6Qrn3Ua5IKqnlZcsPljYHy7DV5PZ4Minj53Bstswatph/ba515zZgdP1eyMvo4dHNn2lN+tzCfm/VSPUeYeDeaPNZF4jO1C+qfwNhV6FTOLqehyiivY9Y1dgdJkJteG+nK0mNStlbJLZ09z7XP3yd5T19jXY9ptvyR08qDaGrPMt+zDanvqPNibrOGTk898YBXzA3n+5OIdtbOkawpzT9jSormT/JjY18krn6bcn1kHa/zdyMvKJPDxsz1dI6r7zqHLKfDkRqG+bmQEZUkj1InqZw/KR2LfPF8jnmVizOvc2VUuCeH9nZp+lxU61Fl6V7mN0fNYQ5ETxaE5qQjAwRAAARAAARA4JQJVId68tPjqMOvXGg3tSq8JkM8Svx2Bu+odzjuZ5tv9EyuhMolG594JWdLKI8at5WXwWFJV9hQaW+rreh5NQ17q9jTcL1N2/1NNoLC2RqoLCWhUyyXX8LOdpU+JKSwb8KU6lfYu8FtclilHFI26lyn9nafNtkYCtk3I+FT3W29Nk+Hn2V15xsh5iYb05Qe8gSfQ0qbdgy4cBpiGjCHDm/x3kzYywKjyMRrKU8R65YP30qNrojHLfIUmBY2JR7FHWas+6SqyBod+5ZmT0ZRgusrXW19bkfCArOTqajq1LwKnUrZTRFaUUe8DjvM2469bN+/I2FwVpzyInHfjjoXc+2ffPik2qCku63Xa8oYsq6l8932zf1kPdVvQ/IF0d8DPde0V6zDBmuhuao2xDHfH5mv/JsSsDzFldmqg/M7bV4qrFMz/ZWx8XUSdn0z/+U7vcN9tr8d8j3iNK9dKz94LFtjrmd/COT7YfqV1VlY7qzzqziu83LYKQ9UjlKnXEF+qFI2FlWV9XX9rr3tkt9RvXENjVyjNy9bXhMT1Zx/M0yRwhBc2dwlyso4+fmebQFpEAABEAABEFhFAmdkc895O67WWRS8x29eOdPLm/VyrX6B4Te9Fq5kCYjnqUe1aTeg2eKPIy2hYWz4Je/xexw6nHCbp//9OGGFT0OceKD4fZTKoDsN+Y9K5hLOz0eFDu2AAAiAAAiAAAgsFoFKwy9m79X2yDyRV7qLMbFN/MgbRtlijeXiayPGQGbLeDW/SHucFr8D0HDlCMDwW7khR4dBAARAAARAYFkJVIZ6StiOvB7ARl8JCBUSFj6JSLTRGmVUV/3Jx0JmSiF5bAIcjpYNWVVhqTOHih5bAwgAARAAARAAARAAARAAgZUkUOnxW0kq6DQIgAAIgAAIgAAIgAAIgAAILBGByl09l6iv6AoIgAAIgAAIgAAIgAAIgAAIrCQBGH4rOezoNAiAAAiAAAiAAAiAAAiAwCoRgOG3SqONvoIACIAACIAACIAACIAACKwkARh+Kzns6DQIgAAIgAAIgAAIgAAIgMAqEZhq+Mk2+01+YbH+v0/xrFRk+3Ou1x1OcjXk/WbN/nRJ6nqzS35V2YnT6pF+FsnPNYgMEAABEAABEAABEAABEAABEAABKnydg3q3mry7r1+ngCGpF1Kb1zqEpdD4xevX2XDjMvVcuQmNBnxlvZW7ojLYYLzeLag5GdOI2uqVEmFxTeSCAAiAAAiAAAiAAAiAAAiAAAiUEMh7/NgA60VE7ZY2+qRuUL9CnXpEPd8VlxM7GV6n7voO7bT9S9p72GYvoJ+fprTBuL6zwyYeDhAAARAAARAAARAAARAAARAAgZMkUGD4j
dlj16bN0G0moNo60XCcD99MSimP3TrtNL2K6rK8BL7f5xfBt5PS3ok1GAuq6nL1mvI8epWQAAEQAAEQAAEQAAEQAAEQAAEQmIlAzvCbjEccp5k3tIIaB2+OxlRs+lmPXZPCmZp1CpUYjKoUh3oOaUyDbrq+r2ydoCMZpyAAAiAAAiAAAiAAAiAAAiAAAkygcI3fvGSsx64fzl1TrQlc3+mXG4zDEdUi9hp2RL5s9rJNTdrhz7kbnFdBlAcBEAABEAABEAABEAABEACBJ55AzuM3d4+qPHYlAq3BWGa/TYIGh4l2qC67zKgjpKbEjEa7s+80amriAwRAAARAAARAAARAAARAAARWkUDO4xfoxXwqpDOxtZjMZKx35HTzBNhkNOBQTP6vGUnSOdrUrPPOoJ10kxjnIp+aXT6HQ8pXbVK9E/GGMgEFQbZFrhrUeNfQAcmSw7Dgst8OUiAAAiAAAiAAAiAAAiAAAiCw2gRyhp82qrq0G/N6vdDCmZBa+tfIW1kBG3e8b4t3yC6evZo23LwLXiJg447DN708CePscVin9fDx2sFumwaNjCy17m+dWnl1PGlIgAAIgAAIgAAIgAAIgAAIgAAIEOVDPYM6tdocSdkbJhu5qJDMYZtaJt5SDLsm79QZnzpBNg5ZmWF34LTFxuF2xEZjo3xd4KnrhgZAAARAAARAAARAAARAAARA4MkgkPf4sd7y+oUdNu7aTduJx/gC9bBJfd78pdlMlEnCQK12+AQBEAABEAABEAABEAABEAABEJhO4MwhH9Mv4woIgAAIgAAIgAAIgAAIgAAIgMCTTiAf6vmk9wj6gwAIgAAIgAAIgAAIgAAIgAAIeARg+Hk4kAABEAABEAABEAABEAABEACB5SMAw2/5xhQ9AgEQAAEQAAEQAAEQAAEQAAGPAAw/DwcSIAACIAACIAACIAACIAACILB8BGD4Ld+YokcgAAIgAAIgAAIgAAIgAAIg4BGA4efhQAIEQAAEQAAEQAAEQAAEQAAElo8ADL/lG1P0CARAAARAAARAAARAAARAAAQ8AjD8PBxIgAAIgAAIgAAIgAAIgAAIgMDyEYDht3xjih6BAAiAAAiAAAiAAAiAAAiAgEcAhp+HAwkQAAEQAAEQAAEQAAEQAAEQWD4CMPyWb0zRIxAAARAAARAAARAAARAAARDwCMDw83AgAQIgAAIgAAIgAAIgAAIgAALLRwCG3/KNKXoEAiAAAiAAAiAAAiAAAiAAAh4BGH4eDiRAAARAAARAAARAAARAAARAYPkIwPBbvjFFj0AABEAABEAABEAABEAABEDAIwDDz8OBBAiAAAiAAAiAAAiAAAiAAAgsH4H/AWN9YQsTKUwnAAAAAElFTkSuQmCC"
    }
   },
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![image.png](attachment:image.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 11. Conclusions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This was my very first experience competing on Kaggle (although without actually appearing on the leaderboard).\n",
    "\n",
    "I was able to explore the given data and find some dependencies and relations. I also grasped the idea of dropping irrelevant or outlier-ridden training data in favor of the important features.\n",
    "\n",
    "I used gradient boosting regression with the LightGBM library; this was only my second time using it, and it gave valuable results. This approach can report feature importances and is therefore interpretable: one can see which variables had the most significant effect on the prediction, and use that insight to look for other reliable data and features to further improve the model and its forecasts.\n",
    "\n",
    "There are many ways to improve the solution, as it is obviously not the best one. I could use an ensemble of different models, or cleanse the data against external datasets and realty agencies' data. I could do more complex feature engineering, for example based on macroeconomic feature changes and on rolling time intervals. I could also use PCA, or another gradient boosting model (XGBoost), to select the most important features.\n",
    "\n",
    "But there are also time limits and performance constraints, so my solution may not be the worst choice in some cases; and when I face a similar task again, I'll bring this code and get the job done!"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
