{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<center>\n",
    "    \n",
    "## [mlcourse.ai](https://mlcourse.ai) – Open Machine Learning Course \n",
    "### <center> Author: Artem Kuznetsov, ODS Slack te\n",
    "    \n",
    "## <center> Exploring TED Talks"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Research plan**\n",
    "- Dataset and features description\n",
    "- Exploratory data analysis\n",
    "- Visual analysis of the features\n",
    "- Patterns, insights, peculiarities of data\n",
    "- Data preprocessing\n",
    "- Metric selection\n",
    "- Feature engineering and description\n",
    "- Cross-validation, hyperparameter tuning\n",
    "- Validation and learning curves\n",
    "- Prediction for hold-out set\n",
    "- Model selection\n",
    "- Conclusions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 1. Dataset and features description"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "TED is a conference organizer that holds events where people from different fields give public talks about important ideas. In recent years TED has grown significantly in popularity thanks to the publication of video and audio recordings of the talks.\n",
    "\n",
    "The dataset was collected by Rounak Banik and published on Kaggle: https://www.kaggle.com/rounakbanik/ted-talks/. It is not clear whether it was collected by web scraping or via the TED API (now closed). The data contains talks published before September 21st, 2017.\n",
    "\n",
    "The dataset consists of two files:\n",
    "\n",
    "ted_main.csv - metadata about talks and speakers\n",
    "\n",
    "- comments - The number of first level comments made on the talk (number)\n",
    "- description - A blurb of what the talk is about (string)\n",
    "- duration - The duration of the talk in seconds (number)\n",
    "- event - The TED/TEDx event where the talk took place (string)\n",
    "- film_date - The Unix timestamp of the filming (date in unix time format)\n",
    "- languages - The number of languages in which the talk is available (number)\n",
    "- main_speaker - The first named speaker of the talk (string)\n",
    "- name - The official name of the TED Talk. Includes the title and the speaker. (string)\n",
    "- num_speaker - The number of speakers in the talk (number)\n",
    "- published_date - The Unix timestamp for the publication of the talk on TED.com (date in unix time format)\n",
    "- ratings - A stringified dictionary of the various ratings given to the talk (inspiring, fascinating, jaw dropping, etc.) (json)\n",
    "- related_talks - A list of dictionaries of recommended talks to watch next (json)\n",
    "- speaker_occupation - The occupation of the main speaker (string)\n",
    "- tags - The themes associated with the talk (list)\n",
    "- title - The title of the talk (string)\n",
    "- url - The URL of the talk (string)\n",
    "- views - The number of views on the talk (number)\n",
    "\n",
    "transcripts.csv - talk transcripts\n",
    "\n",
    "- transcript - The official English transcript of the talk. (string)\n",
    "- url - The URL of the talk (string)\n",
    "\n",
    "The goal of this project is to investigate how the view count of a talk can be predicted."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import re\n",
    "import pandas as pd\n",
    "import matplotlib.pyplot as plt\n",
    "import numpy as np\n",
    "import statsmodels.api as sm\n",
    "import seaborn as sns\n",
    "import scipy.stats\n",
    "\n",
    "from sklearn.preprocessing import OneHotEncoder, StandardScaler\n",
    "from sklearn.compose import ColumnTransformer\n",
    "from sklearn.pipeline import Pipeline\n",
    "from sklearn.feature_extraction.text import TfidfVectorizer\n",
    "from sklearn.model_selection import train_test_split, TimeSeriesSplit, GridSearchCV, learning_curve\n",
    "from sklearn.model_selection import cross_val_score\n",
    "from sklearn.metrics import mean_squared_error, mean_absolute_error\n",
    "from sklearn.linear_model import LinearRegression, Ridge\n",
    "\n",
    "DATA_PATH = '../data/'\n",
    "\n",
    "# Set up seeds\n",
    "RANDOM_SEED = 42\n",
    "np.random.seed(RANDOM_SEED)\n",
    "\n",
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.rcParams['figure.figsize'] = 12., 9."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 2. Exploratory data analysis"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load data\n",
    "df_ted_main = pd.read_csv(DATA_PATH + 'ted_main.csv.zip')\n",
    "df_ted_transcripts = pd.read_csv(DATA_PATH + 'transcripts.csv.zip')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted_main.info()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted_transcripts.info()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The datasets contain different numbers of records, so apparently there are fewer transcripts than talks."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Duplicates check"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted_main[df_ted_main.duplicated()]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted_transcripts[df_ted_transcripts.duplicated()]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We have some duplicates in df_ted_transcripts; let's remove them."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted_transcripts = df_ted_transcripts.drop_duplicates()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Merge datasets"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted_main.shape, df_ted_transcripts.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted = pd.merge(df_ted_main, df_ted_transcripts, how='left', on='url')\n",
    "df_ted.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted.columns"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "DATE_COLUMNS = 'film_date', 'published_date'\n",
    "for column in DATE_COLUMNS:\n",
    "    df_ted[column] = pd.to_datetime(df_ted[column], unit='s')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted.info()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Missing values"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It looks like we have transcripts for almost all talks, but there are some missing values. Some values of speaker_occupation are missing as well."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Recheck NA's"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "for column in df_ted.columns:\n",
    "    na_count = df_ted[column].isna().sum()\n",
    "    if na_count > 0:\n",
    "        print('%s : %s' % (column, na_count))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Common numeric statistics"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted.describe()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted.median()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### description"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['description'].nunique(), len(df_ted['description'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['description'].str.len().describe()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['duration'].values[:100]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each talk has a unique description."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### event"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['event'].value_counts()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We have different types of events here, with TED2014 being the most popular. We can see TED and TEDx events, as well as some others. Let's investigate a little more."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sorted(df_ted['event'].unique())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sorted(df_ted[df_ted['event'].str.startswith('TEDx')]['event'].unique())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sorted(df_ted[~df_ted['event'].str.startswith('TEDx')]['event'].unique())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can add a feature to distinguish between different types of events."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_event_type(event):\n",
    "    '''\n",
    "    Returns the type of the event\n",
    "    '''\n",
    "    if 'TED' not in event:\n",
    "        return 'NOT_TED'\n",
    "    elif event.startswith('TEDx'):\n",
    "        return 'TEDx'\n",
    "    elif event.startswith('TED@'):\n",
    "        return 'TED@'\n",
    "    elif re.fullmatch(r'TED\\d{4}', event) is not None:\n",
    "        return 'TED_YEAR'\n",
    "    else:\n",
    "        return event.split()[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['event'].apply(get_event_type).value_counts()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Wikipedia has some additional info on different conference types https://en.wikipedia.org/wiki/TED_(conference)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted.columns"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### film_date"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['film_date'].describe()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Some talks have a filming date as early as 1972. Let's take a closer look."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted[df_ted['film_date'] < '2000-01-01']"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can see that there are three talks that are not from TED and were filmed before 1992."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### languages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['languages'].describe(), df_ted['languages'].median()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Interestingly, some of the talks have a language count equal to zero. Let's investigate."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted[df_ted['languages'] == 0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted[df_ted['languages'] == 0]['url'].values[:10]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Most of those are art performances, though not all. Also, those records have no transcript."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### main_speaker"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['main_speaker'].value_counts()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['main_speaker'].value_counts().describe()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Most people speak at TED events only once."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['main_speaker'].str.len().describe()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted[df_ted['main_speaker'].str.len() > 20]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When the main_speaker field is long, we can suspect more than one speaker."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### name"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['name'].nunique()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Every talk has a unique name."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['name'].str.len().describe()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### num_speaker"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['num_speaker'].describe()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Most people present their talks alone."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### published_date"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['published_date'].describe()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted[df_ted['event'].str.startswith('TED')]['film_date'].min()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The first published_date is 2006-06-27, but the first filmed talk was on 1984-02-02. So it may be interesting to look at the timespan between filming and publication."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "(df_ted['published_date'] - df_ted['film_date']).describe()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "(df_ted['published_date'] - df_ted['film_date']).median()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted[(df_ted['published_date'] - df_ted['film_date']).dt.total_seconds() < 0]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Interesting: it looks like we have some mistakes in the data. The records above are the ones where published_date is earlier than film_date."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### ratings"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "These are ratings from the TED site. TED asks people to describe a talk in three words; the count is simply the number of people who chose each category.\n",
    "We will not use this field because it is closely linked with our target variable \"views\":\n",
    "the more views a video has, the more people have rated it."
   ]
  },
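  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick illustration (not used further in this analysis), the stringified rating dictionaries can be parsed safely with `ast.literal_eval`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import ast\n",
    "\n",
    "# Parse the stringified list of rating dicts for the first talk;\n",
    "# each entry is expected to carry 'id', 'name' and 'count' keys\n",
    "first_ratings = ast.literal_eval(df_ted['ratings'].values[0])\n",
    "sorted(first_ratings, key=lambda r: -r['count'])[:3]"
   ]
  },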
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['ratings'].values[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['ratings'].values[1]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['ratings'].values[2]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['ratings'].values[3]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### related_talks"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will not use this field in the research due to its complexity for analysis."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['related_talks'].values[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### speaker_occupation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['speaker_occupation'].value_counts()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['speaker_occupation'].str.len().describe()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted[df_ted['speaker_occupation'].str.len() > 50]['speaker_occupation']"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The most popular occupations are from the arts, business, journalism, architecture and psychology.\n",
    "Some people describe themselves with several different occupations. The number of occupations could become a feature later."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### tags"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['tags'].values[:5]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import ast\n",
    "\n",
    "# Parse the stringified lists safely (ast.literal_eval instead of eval)\n",
    "df_ted['tags'] = df_ted['tags'].apply(ast.literal_eval)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['tags'].values.reshape(-1,1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "type(df_ted['tags'].values[0])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Flatten the lists of tags and count tag frequencies\n",
    "df_ted['tags'].explode().value_counts()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### title"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['title'].nunique()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Every talk has its own title."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['title'].str.len().describe()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['title'].values[:5]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It looks like name is a combination of main_speaker and title."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted[['name', 'main_speaker', 'title']].head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### url"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['url'].nunique()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['url'].values[:5]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sum(df_ted['url'].str.endswith('\\n'))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Every url ends with '\\n', so it could be cleaned."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['url'] = df_ted['url'].str.strip()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['url'].apply(lambda s: s.split('/')[0]).value_counts()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['url'].apply(lambda s: s.split('/')[2]).value_counts()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['url'].apply(lambda s: s.split('/')[3]).value_counts()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "All urls have the form 'https://www.ted.com/talks/name_of_talk', so we can omit the field without losing information."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### views"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "'views' is our target variable. We also need to check the normality of its distribution."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['views'].describe()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It doesn't look normally distributed. Let's check via plots and statistical tests."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['views'].hist(bins=100);"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "scipy.stats.normaltest(df_ted['views'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "scipy.stats.shapiro(df_ted['views'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sm.qqplot(df_ted['views'], line='s');"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "scipy.stats.normaltest(np.log(df_ted['views']))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "scipy.stats.shapiro(np.log(df_ted['views']))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sm.qqplot(np.log(df_ted['views']), line='s');"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "np.log(df_ted['views']).hist(bins=100);"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "alpha = 0.001\n",
    "p = scipy.stats.shapiro(np.log(np.log(df_ted['views'])))[1]\n",
    "\n",
    "if p < alpha:  # null hypothesis: x comes from a normal distribution\n",
    "    print(\"The null hypothesis can be rejected\")\n",
    "else:\n",
    "    print(\"The null hypothesis cannot be rejected\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It doesn't look like we get a normal distribution after applying the logarithm, but it is much closer to one. So we will assume that our target variable is approximately normally distributed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted.columns"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['target'] = np.log(df_ted['views'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### transcripts"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['transcript'].nunique(), len(df_ted['transcript']), sum(df_ted['transcript'].isna())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Not every talk has a transcript, and each transcript is unique."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted['transcript'].str.len().describe()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted[df_ted['transcript'].str.len() < 200]['transcript'].values"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It looks like some transcripts are from musical performances."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 3. Visual analysis of the features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted.columns"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted.drop('views', axis=1, inplace=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted.drop('related_talks', axis=1, inplace=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted.drop('comments', axis=1, inplace=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Make a separate dataframe with data prepared for plotting\n",
    "df_plot = df_ted.copy()\n",
    "# astype('int64') yields nanoseconds since the Unix epoch\n",
    "df_plot['film_date_unix'] = df_ted['film_date'].astype('int64')\n",
    "df_plot['published_date_unix'] = df_ted['published_date'].astype('int64')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "\n",
    "sns.pairplot(df_plot, diag_kind=\"kde\", markers=\"+\",\n",
    "    plot_kws=dict(s=50, edgecolor=\"b\", linewidth=1),\n",
    "    diag_kws=dict(shade=True));"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There is a clearly visible correlation between the number of languages and the view count."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted.columns"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_plot.corr(method='pearson')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sns.heatmap(df_plot.corr(method='pearson').abs(), annot=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_plot.corr(method='spearman')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sns.heatmap(df_plot.corr(method='spearman').abs(), annot=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.plot(df_plot['published_date_unix'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "So the data is sorted by published_date."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.plot(df_plot['target'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.plot(df_plot['published_date_unix'], df_plot['target'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sns.countplot(df_ted['event']);"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.plot(df_plot.groupby(by='event')['target'].mean().sort_values(ascending=False), 'o-');"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The per-event mean of the target variable looks close to normally distributed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sns.countplot(df_ted['main_speaker']);"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.plot(df_plot.groupby(by='main_speaker')['target'].mean().sort_values(ascending=False), 'o-');"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The near-normality of the distribution also holds for the speaker name."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sns.countplot(df_ted['speaker_occupation']);"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.plot(df_plot.groupby(by='speaker_occupation')['target'].mean().sort_values(ascending=False), 'o-');"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "speaker_occupation also looks normally distributed."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 4. Patterns, insights, peculiarities of data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "From the previous analysis we have the following observations:\n",
    "- The 'views' variable is not normally distributed according to the tests. For a linear model it should be more adequate to use a normally distributed target variable, so we apply a logarithm to it. This doesn't make the distribution normal, but it is now much closer to one.\n",
    "- Our new target variable is highly correlated with the language count. This may be because the most popular talks are more often translated into more languages. Since we are only doing correlation analysis, we can't say this for sure without additional data, but it may be useful to omit the languages variable.\n",
    "- We also have a strong correlation between the published and film dates, so we should exclude one of them for more accurate predictions.\n",
    "- We clearly see different kinds of events, so it may be useful to add a feature with event type information.\n",
    "- url and name are redundant because they contain information that is also available in other fields.\n",
    "- The data is sorted by published date, so we can use TimeSeriesSplit to avoid a data leak. This also fits our goal: we are interested in predicting future talks, so we don't need to re-sort the data.\n",
    "- We have some errors in the data where published_date is earlier than film_date, but since we are more interested in the publication date, and published_date and film_date are highly correlated, we will exclude film_date."
   ]
  },
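  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough illustration of the last two points (a toy sketch, not part of the pipeline): TimeSeriesSplit always trains on a prefix of the data and validates on the block that immediately follows it, so no future rows leak into the training folds.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.model_selection import TimeSeriesSplit\n",
    "\n",
    "toy = np.arange(10)  # rows assumed sorted by published date\n",
    "for train_idx, test_idx in TimeSeriesSplit(n_splits=3).split(toy):\n",
    "    # every validation index comes strictly after all training indices\n",
    "    print(train_idx, test_idx)\n",
    "```"
   ]
  },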
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 5. Data preprocessing"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will apply different preprocessing to different types of columns:\n",
    "- numeric columns will be scaled with StandardScaler\n",
    "- text columns will be converted to lower case and then vectorized with TfidfVectorizer\n",
    "- categorical variables will be factorized (similar to label encoding) and then transformed with OneHotEncoder\n",
    "- empty values in the transcript field will be filled with the string 'na'\n",
    "- dates will be converted to unix time and used as numeric columns\n",
    "- the 'tags' arrays will be joined into strings and used as text columns"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df_ted.columns"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Keep only the features selected based on the assumptions from the previous part\n",
    "X = df_ted[['description', 'duration', 'event', 'languages',\n",
    "       'main_speaker', 'num_speaker', 'published_date',\n",
    "       'speaker_occupation', 'tags', 'title', 'transcript']].copy()\n",
    "y = df_ted['target'].copy()\n",
    "\n",
    "NUMERIC_COLUMNS = ['duration', 'languages']\n",
    "DATE_COLUMNS = ['published_date']\n",
    "\n",
    "# We will convert 'tags' column to string\n",
    "TEXT_COLUMNS = ['description', 'tags', 'title', 'transcript']\n",
    "\n",
    "CATEGORICAL_COLUMNS = ['event',\n",
    "       'main_speaker', 'speaker_occupation', 'num_speaker']\n",
    "\n",
    "# We will convert published_date back to unix time and use it as a numeric column\n",
    "for c in DATE_COLUMNS:\n",
    "    X[c] = X[c].astype(int)\n",
    "\n",
    "# Date columns are then treated simply as additional numeric columns\n",
    "NUMERIC_COLUMNS += DATE_COLUMNS\n",
    "\n",
    "# StandardScaler would otherwise convert these fields to float64 with a warning, so we cast them up front\n",
    "for c in NUMERIC_COLUMNS:\n",
    "    X[c] = X[c].astype(float)\n",
    "\n",
    "# Convert tags to string\n",
    "X['tags'] = X['tags'].apply(lambda tags: ' '.join(tags))\n",
    "\n",
    "X['transcript'] = X['transcript'].fillna('na')\n",
    "\n",
    "# Convert all text columns to lower case\n",
    "for c in TEXT_COLUMNS:\n",
    "    X[c] = X[c].str.lower()\n",
    "\n",
    "# Factorize categorical_columns (similar to LabelEncoding)\n",
    "for c in CATEGORICAL_COLUMNS:\n",
    "    X[c] = X[c].factorize()[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "preprocessing = ColumnTransformer(transformers=[\n",
    "    ('ohe', OneHotEncoder(categories='auto', handle_unknown='ignore'), CATEGORICAL_COLUMNS),\n",
    "    ('scaler', StandardScaler(), NUMERIC_COLUMNS),\n",
    "    ('tfidf_0', TfidfVectorizer(), TEXT_COLUMNS[0]),\n",
    "    ('tfidf_1', TfidfVectorizer(), TEXT_COLUMNS[1]),\n",
    "    ('tfidf_2', TfidfVectorizer(), TEXT_COLUMNS[2]),\n",
    "    ('tfidf_3', TfidfVectorizer(), TEXT_COLUMNS[3]),\n",
    "])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# It's crucial not to shuffle this split, because we want to predict the future (no future data should end up in the train set)\n",
    "# No seed is needed here because shuffling is disabled\n",
    "X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size = 0.3, shuffle=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train.shape, X_valid.shape, y_train.shape, y_valid.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 6. Metric selection"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For regression tasks the two most popular metrics are RMSE and MAE.\n",
    "\n",
    "$\n",
    "\\begin{align}\n",
    "RMSE = \\sqrt{\\frac{1}{n}\\sum_{j=1}^{n}{(\\hat{y}_j - y_j)^2}}\n",
    "\\end{align}\n",
    "$\n",
    "\n",
    "$\n",
    "\\begin{align}\n",
    "MAE = \\frac{1}{n}\\sum_{j=1}^{n}{\\lvert\\hat{y}_j - y_j\\rvert}\n",
    "\\end{align}\n",
    "$\n",
    "\n",
    "RMSE puts more weight on larger prediction errors and is always at least as large as MAE.\n",
    "In our case larger errors do not deserve any special treatment.\n",
    "MAE is also easier to interpret, especially since our target is log-transformed: exp(MAE) can be read as the typical multiplicative error on the original views scale.\n",
    "\n",
    "So we will go with MAE.\n"
   ]
  },
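  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A toy sketch of the exp(MAE) interpretation (the numbers below are made up, not from our data):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# hypothetical true and predicted view counts, on the log scale\n",
    "y_true_log = np.log([1000.0, 5000.0, 20000.0])\n",
    "y_pred_log = np.log([1500.0, 4000.0, 30000.0])\n",
    "\n",
    "mae = np.mean(np.abs(y_pred_log - y_true_log))\n",
    "# exp(MAE) is the typical multiplicative error on the original scale\n",
    "print(np.exp(mae))\n",
    "```"
   ]
  },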
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 7. Feature engineering and description "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's try Ridge from sklearn."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "model_ridge = Pipeline(\n",
    "    steps=[\n",
    "        ('preprocessing', preprocessing),\n",
    "        ('ridge', Ridge(random_state=RANDOM_SEED))\n",
    "    ]\n",
    ")\n",
    "\n",
    "\n",
    "cv = GridSearchCV(model_ridge, param_grid={}, scoring='neg_mean_absolute_error', cv=TimeSeriesSplit(n_splits=5),\n",
    "                 return_train_score=True, verbose=3)\n",
    "cv.fit(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cv.best_score_"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will use this score as the baseline going forward."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's construct new features:\n",
    "- transcript length (the target may depend on how long the presenter speaks)\n",
    "- event type, because some events are clearly more popular than others (e.g., main TED events vs. regional TEDx events)\n",
    "- hour, month, and day of week extracted from published_date"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Keep only the features selected based on the assumptions from the previous part\n",
    "X = df_ted[['description', 'duration', 'event', 'languages',\n",
    "       'main_speaker', 'num_speaker', 'published_date',\n",
    "       'speaker_occupation', 'tags', 'title', 'transcript']].copy()\n",
    "y = df_ted['target'].copy()\n",
    "\n",
    "X['transcript'] = X['transcript'].fillna('na')\n",
    "X['transcript_len'] = X['transcript'].str.len()\n",
    "X['event_type'] = X['event'].apply(get_event_type)\n",
    "X['published_hour'] = X['published_date'].dt.hour\n",
    "X['published_month'] = X['published_date'].dt.month\n",
    "X['published_dayofweek'] = X['published_date'].dt.dayofweek\n",
    "\n",
    "\n",
    "NUMERIC_COLUMNS = ['duration', 'languages',\n",
    "                   'transcript_len'\n",
    "                  ]\n",
    "DATE_COLUMNS = ['published_date']\n",
    "\n",
    "# We will convert 'tags' column to string\n",
    "TEXT_COLUMNS = ['description', 'tags', 'title', 'transcript']\n",
    "\n",
    "CATEGORICAL_COLUMNS = ['event',\n",
    "       'main_speaker', 'speaker_occupation', 'num_speaker', \n",
    "                       'event_type',\n",
    "                       'published_hour',\n",
    "                       'published_month',\n",
    "                       'published_dayofweek'\n",
    "                      ]\n",
    "\n",
    "# We will convert published_date back to unix time and use it as a numeric column\n",
    "for c in DATE_COLUMNS:\n",
    "    X[c] = X[c].astype(int)\n",
    "\n",
    "# Date columns are then treated simply as additional numeric columns\n",
    "NUMERIC_COLUMNS += DATE_COLUMNS\n",
    "\n",
    "# Convert tags to string\n",
    "X['tags'] = X['tags'].apply(lambda tags: ' '.join(tags))\n",
    "\n",
    "# StandardScaler would otherwise convert these fields to float64 with a warning, so we cast them up front\n",
    "for c in NUMERIC_COLUMNS:\n",
    "    X[c] = X[c].astype(float)\n",
    "\n",
    "# Convert all text columns to lower case\n",
    "for c in TEXT_COLUMNS:\n",
    "    X[c] = X[c].str.lower()\n",
    "\n",
    "# Factorize categorical_columns (similar to LabelEncoding)\n",
    "for c in CATEGORICAL_COLUMNS:\n",
    "    X[c] = X[c].factorize()[0]\n",
    "\n",
    "X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size = 0.3, shuffle=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will test the new features one by one, relying on a ColumnTransformer property: columns not mentioned in the transformers list are dropped."
   ]
  },
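  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of this drop behaviour (ColumnTransformer defaults to remainder='drop'); the tiny frame below is illustrative only:\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "from sklearn.compose import ColumnTransformer\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "\n",
    "toy = pd.DataFrame({'a': [1.0, 2.0, 3.0], 'b': [4.0, 5.0, 6.0]})\n",
    "ct = ColumnTransformer(transformers=[('scale', StandardScaler(), ['a'])])\n",
    "out = ct.fit_transform(toy)\n",
    "out.shape  # column 'b' is dropped, only scaled 'a' remains\n",
    "```"
   ]
  },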
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Let's try to exclude published_date"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "NUMERIC_COLUMNS = ['duration', 'languages']\n",
    "\n",
    "CATEGORICAL_COLUMNS = ['event',\n",
    "       'main_speaker', 'speaker_occupation', 'num_speaker']\n",
    "\n",
    "preprocessing = ColumnTransformer(transformers=[\n",
    "    ('ohe', OneHotEncoder(categories='auto', handle_unknown='ignore'), CATEGORICAL_COLUMNS),\n",
    "    ('scaler', StandardScaler(), NUMERIC_COLUMNS),\n",
    "    ('tfidf_0', TfidfVectorizer(), TEXT_COLUMNS[0]),\n",
    "    ('tfidf_1', TfidfVectorizer(), TEXT_COLUMNS[1]),\n",
    "    ('tfidf_2', TfidfVectorizer(), TEXT_COLUMNS[2]),\n",
    "    ('tfidf_3', TfidfVectorizer(), TEXT_COLUMNS[3]),\n",
    "])\n",
    "\n",
    "model_ridge = Pipeline(\n",
    "    steps=[\n",
    "        ('preprocessing', preprocessing),\n",
    "        ('ridge', Ridge(random_state=RANDOM_SEED))\n",
    "    ]\n",
    ")\n",
    "\n",
    "\n",
    "cv = GridSearchCV(model_ridge, param_grid={}, scoring='neg_mean_absolute_error', cv=TimeSeriesSplit(n_splits=5),\n",
    "                 return_train_score=True, verbose=3)\n",
    "cv.fit(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cv.best_score_"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We see some improvement in the score; let's continue."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### transcript_len"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "NUMERIC_COLUMNS = ['duration', 'languages', 'transcript_len'\n",
    "                  ]\n",
    "\n",
    "CATEGORICAL_COLUMNS = ['event',\n",
    "       'main_speaker', 'speaker_occupation', 'num_speaker']\n",
    "\n",
    "preprocessing = ColumnTransformer(transformers=[\n",
    "    ('ohe', OneHotEncoder(categories='auto', handle_unknown='ignore'), CATEGORICAL_COLUMNS),\n",
    "    ('scaler', StandardScaler(), NUMERIC_COLUMNS),\n",
    "    ('tfidf_0', TfidfVectorizer(), TEXT_COLUMNS[0]),\n",
    "    ('tfidf_1', TfidfVectorizer(), TEXT_COLUMNS[1]),\n",
    "    ('tfidf_2', TfidfVectorizer(), TEXT_COLUMNS[2]),\n",
    "    ('tfidf_3', TfidfVectorizer(), TEXT_COLUMNS[3]),\n",
    "])\n",
    "\n",
    "model_ridge = Pipeline(\n",
    "    steps=[\n",
    "        ('preprocessing', preprocessing),\n",
    "        ('ridge', Ridge(random_state=RANDOM_SEED))\n",
    "    ]\n",
    ")\n",
    "\n",
    "\n",
    "cv = GridSearchCV(model_ridge, param_grid={}, scoring='neg_mean_absolute_error', cv=TimeSeriesSplit(n_splits=5),\n",
    "                 return_train_score=True, verbose=3)\n",
    "cv.fit(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cv.best_score_"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The previous value was -0.4767736055960914, so we have a small improvement."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### event_type"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "NUMERIC_COLUMNS = ['duration', 'languages', 'transcript_len']\n",
    "\n",
    "CATEGORICAL_COLUMNS = [\n",
    "       'main_speaker', 'speaker_occupation', 'num_speaker', 'event_type']\n",
    "\n",
    "preprocessing = ColumnTransformer(transformers=[\n",
    "    ('ohe', OneHotEncoder(categories='auto', handle_unknown='ignore'), CATEGORICAL_COLUMNS),\n",
    "    ('scaler', StandardScaler(), NUMERIC_COLUMNS),\n",
    "    ('tfidf_0', TfidfVectorizer(), TEXT_COLUMNS[0]),\n",
    "    ('tfidf_1', TfidfVectorizer(), TEXT_COLUMNS[1]),\n",
    "    ('tfidf_2', TfidfVectorizer(), TEXT_COLUMNS[2]),\n",
    "    ('tfidf_3', TfidfVectorizer(), TEXT_COLUMNS[3]),\n",
    "])\n",
    "\n",
    "model_ridge = Pipeline(\n",
    "    steps=[\n",
    "        ('preprocessing', preprocessing),\n",
    "        ('ridge', Ridge(random_state=RANDOM_SEED))\n",
    "    ]\n",
    ")\n",
    "\n",
    "\n",
    "cv = GridSearchCV(model_ridge, param_grid={}, scoring='neg_mean_absolute_error', cv=TimeSeriesSplit(n_splits=5),\n",
    "                 return_train_score=True, verbose=3)\n",
    "cv.fit(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cv.best_score_"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is our new best cross-validation score."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### published_hour"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "NUMERIC_COLUMNS = ['duration', 'languages', 'transcript_len'\n",
    "                  ]\n",
    "\n",
    "CATEGORICAL_COLUMNS = ['event_type',\n",
    "       'main_speaker', 'speaker_occupation', 'num_speaker', 'published_hour']\n",
    "\n",
    "preprocessing = ColumnTransformer(transformers=[\n",
    "    ('ohe', OneHotEncoder(categories='auto', handle_unknown='ignore'), CATEGORICAL_COLUMNS),\n",
    "    ('scaler', StandardScaler(), NUMERIC_COLUMNS),\n",
    "    ('tfidf_0', TfidfVectorizer(), TEXT_COLUMNS[0]),\n",
    "    ('tfidf_1', TfidfVectorizer(), TEXT_COLUMNS[1]),\n",
    "    ('tfidf_2', TfidfVectorizer(), TEXT_COLUMNS[2]),\n",
    "    ('tfidf_3', TfidfVectorizer(), TEXT_COLUMNS[3]),\n",
    "])\n",
    "\n",
    "model_ridge = Pipeline(\n",
    "    steps=[\n",
    "        ('preprocessing', preprocessing),\n",
    "        ('ridge', Ridge(random_state=RANDOM_SEED))\n",
    "    ]\n",
    ")\n",
    "\n",
    "\n",
    "cv = GridSearchCV(model_ridge, param_grid={}, scoring='neg_mean_absolute_error', cv=TimeSeriesSplit(n_splits=5),\n",
    "                 return_train_score=True, verbose=3)\n",
    "cv.fit(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cv.best_score_"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "No improvement over the best score."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### published_month"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "NUMERIC_COLUMNS = ['duration', 'languages', 'transcript_len'\n",
    "                  ]\n",
    "\n",
    "CATEGORICAL_COLUMNS = ['event_type',\n",
    "       'main_speaker', 'speaker_occupation', 'num_speaker', 'published_month']\n",
    "\n",
    "preprocessing = ColumnTransformer(transformers=[\n",
    "    ('ohe', OneHotEncoder(categories='auto', handle_unknown='ignore'), CATEGORICAL_COLUMNS),\n",
    "    ('scaler', StandardScaler(), NUMERIC_COLUMNS),\n",
    "    ('tfidf_0', TfidfVectorizer(), TEXT_COLUMNS[0]),\n",
    "    ('tfidf_1', TfidfVectorizer(), TEXT_COLUMNS[1]),\n",
    "    ('tfidf_2', TfidfVectorizer(), TEXT_COLUMNS[2]),\n",
    "    ('tfidf_3', TfidfVectorizer(), TEXT_COLUMNS[3]),\n",
    "])\n",
    "\n",
    "model_ridge = Pipeline(\n",
    "    steps=[\n",
    "        ('preprocessing', preprocessing),\n",
    "        ('ridge', Ridge(random_state=RANDOM_SEED))\n",
    "    ]\n",
    ")\n",
    "\n",
    "\n",
    "cv = GridSearchCV(model_ridge, param_grid={}, scoring='neg_mean_absolute_error', cv=TimeSeriesSplit(n_splits=5),\n",
    "                 return_train_score=True, verbose=3)\n",
    "cv.fit(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cv.best_score_"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "No improvement in the score."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### published_dayofweek"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "NUMERIC_COLUMNS = ['duration', 'languages', 'transcript_len'\n",
    "                  ]\n",
    "\n",
    "CATEGORICAL_COLUMNS = ['event_type',\n",
    "       'main_speaker', 'speaker_occupation', 'num_speaker', 'published_dayofweek']\n",
    "\n",
    "preprocessing = ColumnTransformer(transformers=[\n",
    "    ('ohe', OneHotEncoder(categories='auto', handle_unknown='ignore'), CATEGORICAL_COLUMNS),\n",
    "    ('scaler', StandardScaler(), NUMERIC_COLUMNS),\n",
    "    ('tfidf_0', TfidfVectorizer(), TEXT_COLUMNS[0]),\n",
    "    ('tfidf_1', TfidfVectorizer(), TEXT_COLUMNS[1]),\n",
    "    ('tfidf_2', TfidfVectorizer(), TEXT_COLUMNS[2]),\n",
    "    ('tfidf_3', TfidfVectorizer(), TEXT_COLUMNS[3]),\n",
    "])\n",
    "\n",
    "model_ridge = Pipeline(\n",
    "    steps=[\n",
    "        ('preprocessing', preprocessing),\n",
    "        ('ridge', Ridge(random_state=RANDOM_SEED))\n",
    "    ]\n",
    ")\n",
    "\n",
    "\n",
    "cv = GridSearchCV(model_ridge, param_grid={}, scoring='neg_mean_absolute_error', cv=TimeSeriesSplit(n_splits=5),\n",
    "                 return_train_score=True, verbose=3)\n",
    "cv.fit(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cv.best_score_"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "No improvement over the best score."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Conclusion on feature engineering"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We found two new useful features:\n",
    "- event_type instead of event\n",
    "- transcript_len\n",
    "\n",
    "Cross-validation with Ridge shows a score improvement with both of them."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 8. Cross-validation, hyperparameter tuning"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will use the features we have already found and selected."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "\n",
    "X = df_ted[['description', 'duration', 'event', 'languages',\n",
    "       'main_speaker', 'num_speaker',\n",
    "       'speaker_occupation', 'tags', 'title', 'transcript']].copy()\n",
    "y = df_ted['target'].copy()\n",
    "\n",
    "X['transcript'] = X['transcript'].fillna('na')\n",
    "X['transcript_len'] = X['transcript'].str.len()\n",
    "X['event_type'] = X['event'].apply(get_event_type)\n",
    "X.drop('event', axis=1, inplace=True)\n",
    "\n",
    "NUMERIC_COLUMNS = ['duration', 'languages', 'transcript_len']\n",
    "\n",
    "CATEGORICAL_COLUMNS = ['main_speaker', 'speaker_occupation', 'num_speaker', 'event_type']\n",
    "\n",
    "\n",
    "# We will convert 'tags' column to string\n",
    "TEXT_COLUMNS = ['description', 'tags', 'title', 'transcript']\n",
    "\n",
    "# Convert tags to string\n",
    "X['tags'] = X['tags'].apply(lambda tags: ' '.join(tags))\n",
    "\n",
    "# StandardScaler would otherwise convert these fields to float64 with a warning, so we cast them up front\n",
    "for c in NUMERIC_COLUMNS:\n",
    "    X[c] = X[c].astype(float)\n",
    "\n",
    "# Convert all text columns to lower case\n",
    "for c in TEXT_COLUMNS:\n",
    "    X[c] = X[c].str.lower()\n",
    "\n",
    "# Factorize categorical_columns (similar to LabelEncoding)\n",
    "for c in CATEGORICAL_COLUMNS:\n",
    "    X[c] = X[c].factorize()[0]\n",
    "\n",
    "X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size = 0.3, shuffle=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train.shape, X_valid.shape, y_train.shape, y_valid.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "preprocessing = ColumnTransformer(transformers=[\n",
    "    ('ohe', OneHotEncoder(categories='auto', handle_unknown='ignore'), CATEGORICAL_COLUMNS),\n",
    "    ('scaler', StandardScaler(), NUMERIC_COLUMNS),\n",
    "    ('tfidf_0', TfidfVectorizer(), TEXT_COLUMNS[0]),\n",
    "    ('tfidf_1', TfidfVectorizer(), TEXT_COLUMNS[1]),\n",
    "    ('tfidf_2', TfidfVectorizer(), TEXT_COLUMNS[2]),\n",
    "    ('tfidf_3', TfidfVectorizer(), TEXT_COLUMNS[3]),\n",
    "])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's tune alpha (the L2 regularization strength) for Ridge."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model_ridge = Pipeline(\n",
    "    steps=[\n",
    "        ('preprocessing', preprocessing),\n",
    "        ('ridge', Ridge(random_state=RANDOM_SEED))\n",
    "    ]\n",
    ")\n",
    "\n",
    "params = {\n",
    "    \n",
    "    'ridge__alpha' : np.logspace(-2, 5, num=8)\n",
    "}\n",
    "\n",
    "cv = GridSearchCV(model_ridge, param_grid=params, scoring='neg_mean_absolute_error', cv=TimeSeriesSplit(n_splits=5),\n",
    "                 return_train_score=True, verbose=3)\n",
    "cv.fit(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cv.best_score_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cv.best_params_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def plot_param_tuning(params, param_name, cv, x_scale_log=False):\n",
    "\n",
    "    plt.plot(params[param_name], cv.cv_results_['mean_train_score'], 'o-', label='train')\n",
    "    plt.plot(params[param_name], cv.cv_results_['mean_test_score'], 'o-', label='test')\n",
    "\n",
    "    plt.fill_between(params[param_name],\n",
    "                     cv.cv_results_['mean_train_score'] - cv.cv_results_['std_train_score'],\n",
    "                     cv.cv_results_['mean_train_score'] + cv.cv_results_['std_train_score'],\n",
    "                     alpha=0.2\n",
    "                    )\n",
    "    plt.fill_between(params[param_name],\n",
    "                     cv.cv_results_['mean_test_score'] - cv.cv_results_['std_test_score'],\n",
    "                     cv.cv_results_['mean_test_score'] + cv.cv_results_['std_test_score'],\n",
    "                     alpha=0.2\n",
    "                    )\n",
    "    if x_scale_log:\n",
    "        plt.xscale('log')\n",
    "\n",
    "    plt.legend();"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plot_param_tuning(params, 'ridge__alpha', cv, x_scale_log=True)\n",
    "plt.xlabel('alpha')\n",
    "plt.ylabel('neg_mean_absolute_error')\n",
    "plt.title('Ridge alpha tuning');"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It is rather difficult to select a good alpha value because of the wide standard-deviation bands and the varying fold sizes produced by TimeSeriesSplit. Still, alpha = 10^2 looks like a reasonable guess, since the gap between the train and test curves is smallest there."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model_lgb = Pipeline(\n",
    "    steps=[\n",
    "        ('preprocessing', preprocessing),\n",
    "        ('lgb', LGBMRegressor(random_state=RANDOM_SEED))\n",
    "    ]\n",
    ")\n",
    "\n",
    "params = {\n",
    "}\n",
    "\n",
    "cv = GridSearchCV(model_lgb, param_grid=params, scoring='neg_mean_absolute_error', cv=TimeSeriesSplit(n_splits=5),\n",
    "                 return_train_score=True, verbose=3)\n",
    "cv.fit(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cv.best_score_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "\n",
    "model_lgb = Pipeline(\n",
    "    steps=[\n",
    "        ('preprocessing', preprocessing),\n",
    "        ('lgb', LGBMRegressor(random_state=RANDOM_SEED))\n",
    "    ]\n",
    ")\n",
    "\n",
    "params = {\n",
    "    'lgb__max_depth': [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]\n",
    "}\n",
    "\n",
    "cv = GridSearchCV(model_lgb, param_grid=params, scoring='neg_mean_absolute_error', cv=TimeSeriesSplit(n_splits=5),\n",
    "                 return_train_score=True, verbose=3)\n",
    "cv.fit(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cv.best_score_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cv.best_params_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plot_param_tuning(params, 'lgb__max_depth', cv)\n",
    "plt.xlabel('max_depth')\n",
    "plt.ylabel('neg_mean_absolute_error')\n",
    "plt.title('Lgb max_depth tuning');"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We cannot draw a clear conclusion about the best max_depth for the LGBM regressor. Next we will tune n_estimators."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model_lgb = Pipeline(\n",
    "    steps=[\n",
    "        ('preprocessing', preprocessing),\n",
    "        ('lgb', LGBMRegressor(random_state=RANDOM_SEED))\n",
    "    ]\n",
    ")\n",
    "\n",
    "params = {\n",
    "    'lgb__n_estimators': [10,20,30,40,50, 60, 70, 80, 90, 100, 150, 200, 300, 400, 500, 600, 700]\n",
    "}\n",
    "\n",
    "cv = GridSearchCV(model_lgb, param_grid=params, scoring='neg_mean_absolute_error', cv=TimeSeriesSplit(n_splits=5),\n",
    "                 return_train_score=True, verbose=3)\n",
    "cv.fit(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cv.best_score_, cv.best_params_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plot_param_tuning(params, 'lgb__n_estimators', cv)\n",
    "plt.xlabel('n_estimators')\n",
    "plt.ylabel('neg_mean_absolute_error')\n",
    "plt.title('Lgb n_estimators tuning');"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model_lgb = Pipeline(\n",
    "    steps=[\n",
    "        ('preprocessing', preprocessing),\n",
    "        ('lgb', LGBMRegressor(random_state=RANDOM_SEED))\n",
    "    ]\n",
    ")\n",
    "\n",
    "params = {\n",
    "    'lgb__n_estimators': [30],\n",
    "    'lgb__num_leaves':np.linspace(10,51, num=10, dtype=int)\n",
    "}\n",
    "\n",
    "cv = GridSearchCV(model_lgb, param_grid=params, scoring='neg_mean_absolute_error', cv=TimeSeriesSplit(n_splits=5),\n",
    "                 return_train_score=True, verbose=3)\n",
    "cv.fit(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cv.best_score_, cv.best_params_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plot_param_tuning(params, 'lgb__num_leaves', cv)\n",
    "plt.xlabel('num_leaves')\n",
    "plt.ylabel('neg_mean_absolute_error')\n",
    "plt.title('Lgb num_leaves tuning');"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It looks like LGBM tuning brought no visible gains, so we will only set 'lgb__n_estimators': 30."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Conclusion"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Our parameters as a result of hyperparameter tuning:\n",
    "- Ridge: alpha = 10 (though 100 looks preferable because of the smaller train/test gap)\n",
    "- LGBM regressor: n_estimators = 30"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 9. Validation and learning curves"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Ridge"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train.shape, y_train.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "\n",
    "model_ridge = Pipeline(\n",
    "    steps=[\n",
    "        ('preprocessing', preprocessing),\n",
    "        ('ridge', Ridge(random_state=RANDOM_SEED, alpha=100))\n",
    "    ]\n",
    ")\n",
    "\n",
    "\n",
    "train_sizes, train_scores, test_scores = \\\n",
    "    learning_curve(model_ridge, X_train, y_train, \n",
    "                   cv=TimeSeriesSplit(n_splits=5), scoring='neg_mean_absolute_error', random_state=RANDOM_SEED)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def plot_learning_curve(train_sizes, train_scores, test_scores):\n",
    "    train_scores_mean = np.mean(train_scores, axis=1)\n",
    "    train_scores_std = np.std(train_scores, axis=1)\n",
    "    test_scores_mean = np.mean(test_scores, axis=1)\n",
    "    test_scores_std = np.std(test_scores, axis=1)\n",
    "\n",
    "    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,\n",
    "                     train_scores_mean + train_scores_std, alpha=0.1,\n",
    "                     color=\"r\")\n",
    "    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,\n",
    "                     test_scores_mean + test_scores_std, alpha=0.1, color=\"g\")\n",
    "    plt.plot(train_sizes, train_scores_mean, 'o-', color=\"r\",\n",
    "             label=\"Training score\")\n",
    "    plt.plot(train_sizes, test_scores_mean, 'o-', color=\"g\",\n",
    "             label=\"Cross-validation score\")\n",
    "\n",
    "    plt.legend(loc=\"best\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plot_learning_curve(train_sizes, train_scores, test_scores)\n",
    "plt.xlabel('train_sizes')\n",
    "plt.ylabel('neg_mean_absolute_error')\n",
    "plt.title('Learning curve Ridge');"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Lgbm regressor"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "\n",
    "model_lgb = Pipeline(\n",
    "    steps=[\n",
    "        ('preprocessing', preprocessing),\n",
    "        ('lgb', LGBMRegressor(random_state=RANDOM_SEED, n_estimators=30))\n",
    "    ]\n",
    ")\n",
    "\n",
    "train_sizes, train_scores, test_scores = \\\n",
    "    learning_curve(model_lgb, X_train, y_train, \n",
    "                   cv=TimeSeriesSplit(n_splits=5), scoring='neg_mean_absolute_error', random_state=RANDOM_SEED)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plot_learning_curve(train_sizes, train_scores, test_scores)\n",
    "plt.xlabel('train_sizes')\n",
    "plt.ylabel('neg_mean_absolute_error')\n",
    "plt.title('Learning curve LGBM');"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "LGBMRegressor tends to overfit, while for Ridge the train and validation scores stay closer to each other."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 10. Prediction for hold-out set"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's check our models on the hold-out set. It was produced from the full dataset and consists of the last 30% of the data sorted by time."
   ]
  },
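  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch (on toy data; the real split uses the actual dataset sorted by date) of how such a time-ordered 70/30 split can be reproduced:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy illustration of a chronological 70/30 train / hold-out split:\n",
    "# no shuffling, the last 30% of rows becomes the hold-out set\n",
    "toy = pd.DataFrame({'views': range(10)})  # assume rows are already sorted by time\n",
    "split_idx = int(len(toy) * 0.7)\n",
    "toy_train, toy_holdout = toy.iloc[:split_idx], toy.iloc[split_idx:]\n",
    "toy_train.shape, toy_holdout.shape"
   ]
  },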
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Ridge"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model_ridge.fit(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ridge_mae_valid = mean_absolute_error(y_valid, model_ridge.predict(X_valid))\n",
    "ridge_mae_valid"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Lgbm regressor"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model_lgb.fit(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lgb_mae_valid = mean_absolute_error(y_valid, model_lgb.predict(X_valid))\n",
    "lgb_mae_valid"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 11. Model selection"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's recheck the cross-validation scores for both models."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time\n",
    "\n",
    "ridge_cv_score = cross_val_score(model_ridge, X_train, y_train, scoring='neg_mean_absolute_error',\n",
    "                                 cv=TimeSeriesSplit(n_splits=5))\n",
    "lgb_cv_score = cross_val_score(model_lgb, X_train, y_train, scoring='neg_mean_absolute_error',\n",
    "                                 cv=TimeSeriesSplit(n_splits=5))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ridge_cv_score.mean(), lgb_cv_score.mean()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "pd.DataFrame(index=['Ridge', 'LGBRegressor'], data = [\n",
    "    [ridge_mae_valid, -ridge_cv_score.mean()],\n",
    "    [lgb_mae_valid, -lgb_cv_score.mean()],\n",
    "    ], columns = ['valid', 'cv_score'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The results are mixed: neither model is a clear winner. However, Ridge looks more stable on the learning curves, so we should probably choose Ridge as the base model for further research."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 12. Conclusions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We've done some initial research on the TED Talks dataset. The 'views' variable wasn't normally distributed, so we used its logarithm as the target.\n",
    "\n",
    "After hyperparameter tuning and model selection, both Ridge and the LGBM regressor reached about 0.48 MAE on cross-validation. Although LGBM outperforms Ridge on the hold-out set, Ridge looks better on cross-validation.\n",
    "\n",
    "The model can be useful for research on predicting TED talk popularity measured in views.\n",
    "\n",
    "Ways to improve and further develop the model:\n",
    "- Normalization of the text\n",
    "- Tuning the TF-IDF ngram_range for the text fields\n",
    "- Applying PCA before LGBM\n",
    "- Getting more data (newer data should be available)\n",
    "- More precise model tuning\n",
    "- Researching how the model performs without the 'language' variable"
   ]
  },
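  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough sanity check of what an MAE of ~0.48 on the log scale means (a sketch, assuming the target was built with `np.log1p(views)`): predictions are off by a factor of roughly exp(0.48) in raw views."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# An MAE of ~0.48 on the log scale corresponds to a multiplicative\n",
    "# error of about exp(0.48) ~ 1.6x in raw view counts.\n",
    "log_mae = 0.48\n",
    "factor = np.exp(log_mae)\n",
    "# a log-scale prediction can be mapped back to views with np.expm1\n",
    "views_pred = np.expm1(12.0)  # 12.0 is a hypothetical log1p-scale prediction\n",
    "factor, views_pred"
   ]
  },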
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
