{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<center>\n",
    "<img src=\"../../img/stack_overflow_survey.png\" />\n",
    "    \n",
    "## [mlcourse.ai](https://mlcourse.ai) – Open Machine Learning Course \n",
    "### <center> Author: Kseniia Terekhova, ODS Slack Kseniia\n",
    "    \n",
    "## <center> Individual data analysis project\n",
    "### <center> Predicting developer career satisfaction using StackOverflow survey"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Research plan**\n",
    "\n",
    "[Part 1. Feature and data explanation](#part1) <br>\n",
    "[Part 2. Primary data analysis](#EDA) <br>\n",
    "[Part 3. Primary visual data analysis](#part3) <br>\n",
    "[Part 4. Insights and found dependencies](#part4) <br>\n",
    "[Part 5. Metrics selection](#part5) <br>\n",
    "[Part 6. Model selection](#part6) <br>\n",
    "[Part 7. Data preprocessing](#part7) <br>\n",
    "[Part 8. Cross-validation and adjustment of model hyperparameters](#part8) <br>\n",
    "[Part 9. Creation of new features and description of this process](#part9) <br>\n",
    "[Part 10. Plotting training and validation curves](#part10) <br>\n",
    "[Part 11. Prediction for test or hold-out samples](#part11)<br>\n",
    "[Part 12. Conclusions](#part12)<br>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 1. Feature and data explanation <a id='part1'></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 1.1 Dataset and task explanation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>This project uses the dataset with the results of the Stack Overflow 2018 Developer Survey. The data is publicly available through <a href=\"https://www.kaggle.com/stackoverflow/stack-overflow-2018-developer-survey/home\" target=\"_blank\">Kaggle Datasets</a>.</p>\n",
    "<p>The dataset description on Kaggle states:</p>\n",
    "<blockquote cite=\"https://www.kaggle.com/stackoverflow/stack-overflow-2018-developer-survey/home\">Each year, we at Stack Overflow ask the developer community about everything from their favorite technologies to their job preferences. This year marks the eighth year we’ve published our Annual Developer Survey results—with the largest number of respondents yet. Over 100,000 developers took the 30-minute survey in January 2018. <br><br>\n",
    "    This year, we covered a few new topics ranging from artificial intelligence to ethics in coding. We also found that underrepresented groups in tech responded to our survey at even lower rates than we would expect from their participation in the workforce. Want to dive into the results yourself and see what you can learn about salaries or machine learning or diversity in tech? We look forward to seeing what you find!</blockquote>\n",
    "<p>Indeed, numerous aspects of developers' lives can be learned from this kind of data. For this project the task of <b>predicting developer career satisfaction</b> has been selected, so the target value for this research is contained in the <b>CareerSatisfaction</b> column. There is also a <b>JobSatisfaction</b> feature, which might be more useful for an HR or hiring manager, but for a technical specialist the overall career satisfaction of his/her peers seems more interesting.</p>\n",
    "<p>The dataset consists of two files:<ul>\n",
    "<li><b>survey_results_public.csv</b> with the main survey results, one respondent per row and one column per question;</li>\n",
    "<li><b>survey_results_schema.csv</b> with each column name from the main results along with the question text corresponding to that column;</li></ul></p>\n",
    "<p>The survey results file has a column for each of the 128 questions, some with names like \"AssessBenefits4\" or \"AIDangerous\". The detailed question content has to be looked up in the schema file, so inspecting the data is a slightly more involved process than just listing columns with short comments.</p>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 1.2 Survey content"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It can be difficult to understand the nature of a feature without seeing the corresponding question along with the available answers. Thus, it looks like a good idea to go through the columns of the survey results and, for each of them, extract the question text from the survey schema and the answers used in the survey. This list will be long and tedious, so if you don't want even to scroll through it, you can jump directly to [Features conversion](#features_conversion) or [Exploratory data analysis](#EDA)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First of all, load the files."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "survey_schema = pd.read_csv('data/survey_results_schema.csv')\n",
    "print(survey_schema.shape)\n",
    "survey_schema.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Some dtype preprocessing is needed to read survey_results_public.csv without mixed-type warnings."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "survey_results_file = 'data/survey_results_public.csv'\n",
    "with open(survey_results_file) as f:\n",
    "    header = f.readline().strip()\n",
    "    \n",
    "col_dtypes = {col:np.object_ for col in header.split(',')}    \n",
    "\n",
    "assesjob_dtypes = {\"AssessJob\" + str(i): np.float64 for i in range(1, 11)}\n",
    "col_dtypes.update(assesjob_dtypes)\n",
    "\n",
    "assesbenefits_dtypes = {\"AssessBenefits\" + str(i): np.float64 for i in range(1, 12)}\n",
    "col_dtypes.update(assesbenefits_dtypes)\n",
    "\n",
    "jopcontacts_dtypes = {\"JobContactPriorities\" + str(i): np.float64 for i in range(1, 6)}\n",
    "col_dtypes.update(jopcontacts_dtypes)\n",
    "\n",
    "jobemail_dtypes = {\"JobEmailPriorities\" + str(i): np.float64 for i in range(1, 8)}\n",
    "col_dtypes.update(jobemail_dtypes)\n",
    "\n",
    "ads_dtypes = {\"AdsPriorities\" + str(i): np.float64 for i in range(1, 8)}\n",
    "col_dtypes.update(ads_dtypes)\n",
    "\n",
    "col_dtypes['ConvertedSalary'] = np.float64\n",
    "\n",
    "survey_results = pd.read_csv('data/survey_results_public.csv', index_col='Respondent', dtype=col_dtypes)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(survey_results.shape)\n",
    "survey_results.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Functions to extract and format the data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.core.display import HTML\n",
    "\n",
    "def format_questions(questions, answers, hide_count=20):\n",
    "    questions_formatted = \"<b>[\" + questions['Column'] + \"]</b> \" + \\\n",
    "        questions['QuestionText']\n",
    "    answers_formatted = answers.map(lambda answs: \n",
    "        \"{0} options\".format(len(answs)) if len(answs) > hide_count else \"</li><li>\".join(answs.astype('str')))\n",
    "    answers_formatted = answers_formatted.map(lambda a:\n",
    "        \"<ul><li>\" + a + \"</li></ul>\" if a else \"<br/><br/>\")\n",
    "    questions_answers = pd.concat([questions_formatted, answers_formatted], axis=1)\n",
    "    questions_answers_formatted = questions_answers[0] + questions_answers[1]\n",
    "    formatted = \"\".join(questions_answers_formatted.values)\n",
    "    return formatted"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_options(questions, survey):\n",
    "    return questions.apply(lambda q: survey[q['Column']].dropna().unique(), axis=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_rank_questions(prefix, count, schema):\n",
    "    questions_names = [prefix+str(i) for i in range(1, count+1)]\n",
    "    questions_mask = schema['Column'].apply(lambda s: s in questions_names)\n",
    "    questions = schema[questions_mask]['QuestionText']  \n",
    "    \n",
    "    first_question = questions.iat[0]\n",
    "    last_dot = first_question.rfind('.')\n",
    "    question_text = first_question[:last_dot+1]\n",
    "    question_options = questions.str[last_dot+1:]\n",
    "    return question_text, question_options.values"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_multiselect_options(questions, survey, separator=\";\"):\n",
    "    combinations = questions['Column'].map(lambda column: survey[column].dropna().unique())\n",
    "    options = combinations.map(lambda comb: np.unique(np.concatenate([c.split(separator) for c in comb])))\n",
    "    options = options.rename(1)\n",
    "    return options"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def format_rank_questions(prefix, count, question, options):\n",
    "    question_formatted = \"<b>[\" + prefix + \"1-\" + str(count) + \"]</b> \" + question\n",
    "    options_formatted = \"</li><li>\".join(options)\n",
    "    if options_formatted:\n",
    "        options_formatted = \"<ol><li>\" + options_formatted + \"</li></ol>\" \n",
    "    else:\n",
    "        options_formatted += \"</br>\"\n",
    "\n",
    "    question_formatted += options_formatted\n",
    "    return question_formatted"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### The formatted survey content (column names are in square brackets) <a id=\"survey_questions\"></a>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "coding_questions = survey_schema.loc[1:2, :]\n",
    "coding_options = get_options(coding_questions, survey_results)\n",
    "HTML(format_questions(coding_questions, coding_options))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "current_questions = survey_schema.loc[3:7, :]\n",
    "current_options = get_options(current_questions, survey_results)\n",
    "HTML(format_questions(current_questions, current_options))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "company_question = survey_schema.loc[8:8, :]\n",
    "company_options = get_options(company_question, survey_results)\n",
    "\n",
    "# DevType question is multiselect and needs different handling\n",
    "dev_question = survey_schema.loc[9:9, :]\n",
    "dev_options = get_multiselect_options(dev_question, survey_results)\n",
    "\n",
    "experience_questions = survey_schema.loc[10:11, :]\n",
    "experience_options = get_options(experience_questions, survey_results)\n",
    "\n",
    "job_questions = pd.concat([company_question, dev_question, experience_questions])\n",
    "job_options = pd.concat([company_options, dev_options, experience_options])\n",
    "\n",
    "HTML(format_questions(job_questions, job_options))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "job_questions = survey_schema.loc[12:16, :]\n",
    "job_options = get_options(job_questions, survey_results)\n",
    "HTML(format_questions(job_questions, job_options))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "question_text, options = get_rank_questions('AssessJob', 10, survey_schema)\n",
    "HTML(format_rank_questions('AssessJob', 10, question_text, options))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "question_text, options = get_rank_questions('AssessBenefits', 11, survey_schema)\n",
    "HTML(format_rank_questions('AssessBenefits', 11, question_text, options))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "question_text, options = get_rank_questions('JobContactPriorities', 5, survey_schema)\n",
    "HTML(format_rank_questions('JobContactPriorities', 5, question_text, options))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "question_text, options = get_rank_questions('JobEmailPriorities', 7, survey_schema)\n",
    "HTML(format_rank_questions('JobEmailPriorities', 7, question_text, options))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "salary_questions = survey_schema.loc[50:55,:]\n",
    "salary_questions_options = get_options(salary_questions, survey_results)\n",
    "\n",
    "# Salary amount questions had arbitrary input\n",
    "salary_questions_options.at[52] = np.empty(0)\n",
    "salary_questions_options.at[54] = np.empty(0)\n",
    "\n",
    "communication_tool_question = survey_schema.loc[56:56,:]\n",
    "communication_tool_options = get_multiselect_options(communication_tool_question, survey_results)\n",
    "\n",
    "productivity_question = survey_schema.loc[57:57, :]\n",
    "productivity_options = get_options(productivity_question, survey_results)\n",
    "\n",
    "earning_questions = pd.concat([salary_questions, communication_tool_question, productivity_question])\n",
    "earning_options = pd.concat([salary_questions_options, communication_tool_options, productivity_options])\n",
    "\n",
    "HTML(format_questions(earning_questions, earning_options))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "edu_types_question = survey_schema.loc[58:58, :]\n",
    "edu_types_options = get_multiselect_options(edu_types_question, survey_results)\n",
    "\n",
    "self_taught_question = survey_schema.loc[59:59, :]\n",
    "self_taught_options = get_multiselect_options(self_taught_question, survey_results)\n",
    "\n",
    "bootcamp_questions = survey_schema.loc[60:60, :]\n",
    "bootcamp_questions_options = get_options(bootcamp_questions, survey_results)\n",
    "\n",
    "hackathon_questions = survey_schema.loc[61:61, :]\n",
    "hackathon_questions_options = get_multiselect_options(hackathon_questions, survey_results)\n",
    "\n",
    "training_questions = pd.concat([\n",
    "    edu_types_question,\n",
    "    self_taught_question,\n",
    "    bootcamp_questions,\n",
    "    hackathon_questions\n",
    "])\n",
    "training_options = pd.concat([\n",
    "    edu_types_options,\n",
    "    self_taught_options,\n",
    "    bootcamp_questions_options,\n",
    "    hackathon_questions_options\n",
    "])\n",
    "\n",
    "HTML(format_questions(training_questions, training_options))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "agree_questions = survey_schema.loc[62:64, :]\n",
    "agree_options = get_options(agree_questions, survey_results)\n",
    "HTML(format_questions(agree_questions, agree_options))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "technologies_questions = survey_schema.loc[65:72, :]\n",
    "technologies_options = get_multiselect_options(technologies_questions, survey_results)\n",
    "HTML(format_questions(technologies_questions, technologies_options))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "env_questions = survey_schema.loc[73:78, :]\n",
    "env_options = get_multiselect_options(env_questions, survey_results)\n",
    "HTML(format_questions(env_questions, env_options))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "adbloker_questions = survey_schema.loc[79:81, :]\n",
    "adbloker_options = get_multiselect_options(adbloker_questions, survey_results)\n",
    "\n",
    "ads_agree_questions = survey_schema.loc[82:84, :]\n",
    "ads_agree_options = get_options(ads_agree_questions, survey_results)\n",
    "\n",
    "ads_actions_question = survey_schema.loc[85:85, :]\n",
    "ads_actions_options = get_multiselect_options(ads_actions_question, survey_results)\n",
    "\n",
    "ads_questions = pd.concat([adbloker_questions, ads_agree_questions, ads_actions_question])\n",
    "ads_options = pd.concat([adbloker_options, ads_agree_options, ads_actions_options])\n",
    "\n",
    "HTML(format_questions(ads_questions, ads_options))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "question_text, options = get_rank_questions('AdsPriorities', 7, survey_schema)\n",
    "HTML(format_rank_questions('AdsPriorities', 7, question_text, options))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ai_ethic_questions = survey_schema.loc[93:100, :]\n",
    "ai_ethic_options = get_options(ai_ethic_questions, survey_results)\n",
    "HTML(format_questions(ai_ethic_questions, ai_ethic_options))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "so_questions = survey_schema.loc[102:108, :]\n",
    "so_options = get_options(so_questions, survey_results)\n",
    "HTML(format_questions(so_questions, so_options))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "question_text, options = get_rank_questions('HypotheticalTools', 5, survey_schema)\n",
    "HTML(format_rank_questions('HypotheticalTools', 5, question_text, options))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "health_questions = survey_schema.loc[114:117, :]\n",
    "health_options = get_options(health_questions, survey_results)\n",
    "\n",
    "multiselect_questions = survey_schema.loc[118:121, :]\n",
    "multiselect_options = get_multiselect_options(multiselect_questions, survey_results)\n",
    "\n",
    "personal_questions = pd.concat([health_questions, multiselect_questions])\n",
    "personal_options = pd.concat([health_options, multiselect_options])\n",
    "\n",
    "HTML(format_questions(personal_questions, personal_options))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "family_questions = survey_schema.loc[122:126, :]\n",
    "family_options = get_multiselect_options(family_questions, survey_results)\n",
    "HTML(format_questions(family_questions, family_options))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "survey_questions = survey_schema.loc[127:128, :]\n",
    "survey_options = get_multiselect_options(survey_questions, survey_results)\n",
    "HTML(format_questions(survey_questions, survey_options))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 1.3 Features conversion<a id='features_conversion'></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As can be seen from the long list above, most of the data in the survey results columns is just strings with repeating values. Since this is not an NLP task, they have to be converted into something more suitable even for simple feature analysis."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The target variable of this task is stored in the <b>CareerSatisfaction</b> column, and, of course, it should never be null during prediction. Besides, it will be easier to handle the target in the form of a numeric grade."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.preprocessing import LabelEncoder\n",
    "import math"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "survey_results = survey_results[survey_results['CareerSatisfaction'].notna()]\n",
    "full_df = pd.DataFrame(index=survey_results.index)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "satisfaction_map = {\n",
    "    'Extremely satisfied' : 7.0,\n",
    "    'Moderately satisfied' : 6.0,\n",
    "    'Slightly satisfied' : 5.0,\n",
    "    'Neither satisfied nor dissatisfied' : 4.0,\n",
    "    'Slightly dissatisfied' : 3.0,\n",
    "    'Moderately dissatisfied' : 2.0,\n",
    "    'Extremely dissatisfied' : 1.0,\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "full_df['CareerSatisfaction'] = survey_results['CareerSatisfaction'].map(satisfaction_map)"
   ]
  },
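  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check on a toy series (an illustration, not the survey data): `Series.map` with a dict turns any value missing from the dict into NaN, so a misspelled or unexpected answer option would surface as a null after this conversion.\n",
    "\n",
    "```python\n",
    "toy = pd.Series(['Extremely satisfied', 'Slightly dissatisfied', 'No answer'])\n",
    "mapped = toy.map({'Extremely satisfied': 7.0, 'Slightly dissatisfied': 3.0})\n",
    "mapped.isna().sum()  # 1: the unmapped 'No answer' became NaN\n",
    "```"
   ]
  },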
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The <b>JobSatisfaction</b> feature mentioned earlier seems too similar to the target value. It is more interesting to try predicting <b>CareerSatisfaction</b> from more indirect factors, so <b>JobSatisfaction</b> is not included in the feature list at this stage. Maybe we'll have to return to it later."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are a few Yes/No questions in the survey; it is reasonable to convert them to binary 1/0 integer variables."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "yes_no_map = {'Yes': 1, 'No': 0}\n",
    "full_df['Hobby'] = survey_results['Hobby'].map(yes_no_map)\n",
    "full_df['OpenSource'] = survey_results['OpenSource'].map(yes_no_map)\n",
    "full_df['Dependents'] = survey_results['Dependents'].map(yes_no_map)\n",
    "full_df['MilitaryUS'] = survey_results['MilitaryUS'].map(yes_no_map)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Answers to some other questions cannot be directly converted into numeric format, though there is a finite set of options for each of them. The obvious decision is to interpret them as categorical features; LabelEncoder from sklearn.preprocessing can assign them numeric labels."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "categorize_cols = [\n",
    "    'Country',\n",
    "    'Student',\n",
    "    'Employment',\n",
    "    'FormalEducation',\n",
    "    'UndergradMajor',\n",
    "    'HopeFiveYears',\n",
    "    'JobSearchStatus',\n",
    "    'UpdateCV',\n",
    "    'SalaryType',\n",
    "    'Currency',\n",
    "    'OperatingSystem',\n",
    "    'CheckInCode',\n",
    "    'AdBlocker',\n",
    "    'AdBlockerDisable',\n",
    "    'AIDangerous',\n",
    "    'AIInteresting',\n",
    "    'AIResponsible',\n",
    "    'AIFuture',\n",
    "    'EthicsChoice',\n",
    "    'EthicsReport',\n",
    "    'EthicsResponsible',\n",
    "    'EthicalImplications',\n",
    "    'StackOverflowVisit',\n",
    "    'StackOverflowHasAccount',\n",
    "    'StackOverflowParticipate',\n",
    "    'StackOverflowJobs',\n",
    "    'StackOverflowDevStory',\n",
    "    'StackOverflowConsiderMember',\n",
    "    'EducationParents',\n",
    "    'SurveyTooLong',\n",
    "    'SurveyEasy'\n",
    "]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "category_encoders = {}\n",
    "\n",
    "for column in categorize_cols:\n",
    "    category_encoders[column] = LabelEncoder()\n",
    "    to_categorize = survey_results[column].fillna('Unknown')\n",
    "    full_df[column] = category_encoders[column].fit_transform(to_categorize)\n",
    "    full_df[column] = full_df[column].astype('category')"
   ]
  },
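  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The fitted encoders are kept in `category_encoders`, so the original answer texts can be recovered later via `classes_` and `inverse_transform`. A small illustration on toy labels (not the survey data):\n",
    "\n",
    "```python\n",
    "enc = LabelEncoder().fit(['Employed', 'Student', 'Unknown'])\n",
    "list(enc.classes_)  # classes sorted alphabetically, labels 0, 1, 2\n",
    "list(enc.inverse_transform([2, 0]))  # ['Unknown', 'Employed']\n",
    "```"
   ]
  },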
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Most of the columns that were loaded with an explicit dtype=np.float64 designation at the beginning of the notebook are \"rank\" question columns. The corresponding questions look like \"Please rank the ..., where 1 is the most important and ... is the least important.\"<br>\n",
    "They are numeric already, so they can simply be copied to the features dataframe."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rank_cols = list(assesjob_dtypes.keys()) + \\\n",
    "    list(assesbenefits_dtypes.keys()) + \\\n",
    "    list(jopcontacts_dtypes.keys()) + \\\n",
    "    list(jobemail_dtypes.keys()) + \\\n",
    "    list(ads_dtypes.keys())\n",
    "\n",
    "full_df[rank_cols] = survey_results[rank_cols]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The <b>ConvertedSalary</b> feature was already loaded as float64 too."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "full_df['ConvertedSalary'] = survey_results['ConvertedSalary']"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The <b>Salary</b> column contains values with a thousands separator, which has to be removed before converting the values to float64."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "full_df['Salary'] = survey_results['Salary'].str.replace(',', '').astype('float64')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Some survey questions allowed selecting multiple options; the answers are stored in the corresponding columns concatenated with a ';' separator. Such columns contain too many distinct combinations, so there is no sense in categorizing them in this form. It is much better to split them and convert them into one-hot-encoding-like binary columns. However, the original option texts should also be saved somewhere in a readable format in case of further detailed analysis."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "multiselect_columns_descriptions = {}\n",
    "\n",
    "def create_binary_columns(multiselect_column, survey, result_df, separator=';'):\n",
    "    combinations = survey[multiselect_column].dropna().unique()\n",
    "    options = np.unique(np.concatenate([c.split(separator) for c in combinations]))    \n",
    "    for i, option in enumerate(options):\n",
    "        binary_column = multiselect_column + \"_\" + str(i)\n",
    "        multiselect_columns_descriptions[binary_column] = option\n",
    "        result_df[binary_column] = survey[multiselect_column].fillna(\"\").map(lambda comb: int(option in comb))"
   ]
  },
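  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For comparison (a sketch on toy data, not part of the pipeline): pandas has a built-in `Series.str.get_dummies` that performs the same split-and-indicator expansion, though it names columns by the option text rather than by a `column_i` suffix and does not record the descriptions separately:\n",
    "\n",
    "```python\n",
    "toy = pd.Series(['Python;C++', 'C++', None])\n",
    "toy.str.get_dummies(sep=';')  # columns 'C++' and 'Python'; the NaN row is all zeros\n",
    "```"
   ]
  },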
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "multiselect_columns = [\n",
    "    'DevType',\n",
    "    'CommunicationTools',\n",
    "    'EducationTypes',\n",
    "    'SelfTaughtTypes',\n",
    "    'HackathonReasons',\n",
    "    'LanguageWorkedWith',\n",
    "    'LanguageDesireNextYear',\n",
    "    'DatabaseWorkedWith',\n",
    "    'DatabaseDesireNextYear',\n",
    "    'PlatformWorkedWith',\n",
    "    'PlatformDesireNextYear',\n",
    "    'FrameworkWorkedWith',\n",
    "    'FrameworkDesireNextYear',\n",
    "    'IDE',\n",
    "    'Methodology',\n",
    "    'VersionControl',\n",
    "    'AdBlockerReasons',\n",
    "    'AdsActions',\n",
    "    'ErgonomicDevices',\n",
    "    'Gender',\n",
    "    'SexualOrientation',\n",
    "    'RaceEthnicity'\n",
    "]\n",
    "\n",
    "for column in multiselect_columns:\n",
    "    create_binary_columns(column, survey_results, full_df)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Columns like <b>CompanySize</b> with options \"N to M employees\" or <b>YearsCoding</b> with options \"N to M years\" are, again, categorical. But they differ from the categorical columns handled above because:<ol>\n",
    "<li>their options can be ordered, which gives useful numerical information;</li>\n",
    "<li>the ratios between their options can carry useful patterns as well.</li></ol>\n",
    "It seems reasonable to encode them with labels that correspond numerically to the category, for example the middle value of the specified interval. Unfortunately, some special categories (e.g. \"More than...\") have to be encoded with a special value."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_middle(low, top):\n",
    "    return low + (top-low)/2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "company_size_map = {\n",
    "    'Fewer than 10 employees' : get_middle(0, 10),\n",
    "    '10 to 19 employees' : get_middle(10, 20),\n",
    "    '20 to 99 employees' : get_middle(20, 100),\n",
    "    '100 to 499 employees' : get_middle(100, 500),\n",
    "    '500 to 999 employees' : get_middle(500, 1000),\n",
    "    '1,000 to 4,999 employees': get_middle(1000, 5000),\n",
    "    '5,000 to 9,999 employees' : get_middle(5000, 10000),\n",
    "    '10,000 or more employees' : get_middle(10000, 50000)\n",
    "}\n",
    "\n",
    "full_df['CompanySize'] = survey_results['CompanySize'].map(company_size_map)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<a id='years_coding'></a>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "coding_years_map = {\n",
    "    '0-2 years' : get_middle(0, 2),\n",
    "    '3-5 years' : get_middle(3, 5),\n",
    "    '6-8 years' : get_middle(6, 8),\n",
    "    '9-11 years' : get_middle(9, 11),\n",
    "    '12-14 years' : get_middle(12, 14),\n",
    "    '15-17 years' : get_middle(15, 17),\n",
    "    '18-20 years' : get_middle(18, 20),\n",
    "    '21-23 years' : get_middle(21, 23),\n",
    "    '24-26 years' : get_middle(24, 26),\n",
    "    '27-29 years' : get_middle(27, 29),\n",
    "    '30 or more years' : 35.0\n",
    "}\n",
    "\n",
    "full_df['YearsCoding'] = survey_results['YearsCoding'].map(coding_years_map)\n",
    "full_df['YearsCodingProf'] = survey_results['YearsCodingProf'].map(coding_years_map)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "last_new_job_map = {\n",
    "    \"I've never had a job\" : 0.0,\n",
    "    'Less than a year ago' : 0.5,\n",
    "    'Between 1 and 2 years ago' : get_middle(1, 2),\n",
    "    'Between 2 and 4 years ago' : get_middle(2, 4),\n",
    "    'More than 4 years ago' : 8.0,\n",
    "}\n",
    "\n",
    "full_df['LastNewJob'] = survey_results['LastNewJob'].map(last_new_job_map)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "time_productive_map = {\n",
    "    'Less than a month' : 0.5,\n",
    "    'One to three months' : get_middle(1, 3),\n",
    "    'Three to six months' : get_middle(3, 6),\n",
    "    'Six to nine months' : get_middle(6, 9),\n",
    "    'Nine months to a year': get_middle(9, 12),\n",
    "    'More than a year' : 18.0\n",
    "}\n",
    "\n",
    "full_df['TimeFullyProductive'] = survey_results['TimeFullyProductive'].map(time_productive_map)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "bootcamp_time = {\n",
    "    'I already had a full-time job as a developer when I began the program' : -1.0,\n",
    "    'Immediately after graduating' : 0.0,\n",
    "    'Less than a month' : get_middle(0, 1),\n",
    "    'One to three months' : get_middle(1, 3),\n",
    "    'Four to six months' : get_middle(4, 6),    \n",
    "    'Six months to a year' : get_middle(6, 12),\n",
    "    'Longer than a year' : get_middle(12, 24),\n",
    "    'I haven’t gotten a developer job' : 12000  # sentinel far outside the real range\n",
    "}\n",
    "\n",
    "full_df['TimeAfterBootcamp'] = survey_results['TimeAfterBootcamp'].map(bootcamp_time)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "wake_time_map = {\n",
    "    \"Before 5:00 AM\" : 5.0,\n",
    "    \"Between 5:00 - 6:00 AM\" : 6.0,\n",
    "    \"Between 6:01 - 7:00 AM\" : 7.0,\n",
    "    \"Between 7:01 - 8:00 AM\" : 8.0,\n",
    "    \"Between 8:01 - 9:00 AM\" : 9.0,\n",
    "    \"Between 9:01 - 10:00 AM\" : 10.0,\n",
    "    \"Between 10:01 - 11:00 AM\" : 11.0,\n",
    "    \"Between 11:01 AM - 12:00 PM\" : 12.0,\n",
    "    \"After 12:01 PM\" : 13.0,\n",
    "    \"I work night shifts\" : 0.0,\n",
    "    \"I do not have a set schedule\" : -1.0\n",
    "}\n",
    "\n",
    "full_df['WakeTime'] = survey_results['WakeTime'].map(wake_time_map)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "hours_computing_map = {\n",
    "    \"Less than 1 hour\" : 0.5,\n",
    "    \"1 - 4 hours\" : get_middle(1, 4),\n",
    "    \"5 - 8 hours\" : get_middle(5, 8),\n",
    "    \"9 - 12 hours\" : get_middle(9, 12),\n",
    "    \"Over 12 hours\" : 14.0\n",
    "}\n",
    "\n",
    "full_df['HoursComputer'] = survey_results['HoursComputer'].map(hours_computing_map)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "hours_outside_map = {\n",
    "    \"Less than 30 minutes\" : 0.25,\n",
    "    \"30 - 59 minutes\" : get_middle(0.5, 1.0),\n",
    "    \"1 - 2 hours\" : get_middle(1, 2),\n",
    "    \"3 - 4 hours\" : get_middle(3, 4),\n",
    "    \"Over 4 hours\" : 6.0\n",
    "}\n",
    "\n",
    "full_df['HoursOutside'] = survey_results['HoursOutside'].map(hours_outside_map)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "skip_meals_map = {\n",
    "    \"Never\" : 0.0,\n",
    "    \"1 - 2 times per week\" : get_middle(1, 2),\n",
    "    \"3 - 4 times per week\" : get_middle(3, 4),\n",
    "    \"Daily or almost every day\" : 6.0\n",
    "}\n",
    "\n",
    "full_df['SkipMeals'] = survey_results['SkipMeals'].map(skip_meals_map)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "exercise_map = {\n",
    "    \"I don't typically exercise\" : 0.0,\n",
    "    \"1 - 2 times per week\" : get_middle(1, 2),\n",
    "    \"3 - 4 times per week\" : get_middle(3, 4),\n",
    "    \"Daily or almost every day\" : 6.0,\n",
    "}\n",
    "\n",
    "full_df['Exercise'] = survey_results['Exercise'].map(exercise_map)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "age_map = {\n",
    "    \"Under 18 years old\" : 16.0,\n",
    "    \"18 - 24 years old\" : get_middle(18, 24),\n",
    "    \"25 - 34 years old\" : get_middle(25, 34),\n",
    "    \"35 - 44 years old\" : get_middle(35, 44),\n",
    "    \"45 - 54 years old\" : get_middle(45, 54),\n",
    "    \"55 - 64 years old\" : get_middle(55, 64),\n",
    "    \"65 years or older\" : get_middle(65, 75)\n",
    "}\n",
    "\n",
    "full_df['Age'] = survey_results['Age'].map(age_map)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Some columns can be mapped to a numeric 'rank' format as well, but only manually."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# ordinal Likert mapping: neutral sits between the agree and disagree grades\n",
    "agree_degree_map = {\n",
    "    'strongly agree' : 5.0,\n",
    "    'agree' : 4.0,\n",
    "    'somewhat agree' : 4.0,\n",
    "    'neither agree nor disagree' : 3.0,\n",
    "    'somewhat disagree' : 2.0,\n",
    "    'disagree' : 2.0,\n",
    "    'strongly disagree': 1.0,\n",
    "}\n",
    "\n",
    "agree_cols = [\n",
    "    'AgreeDisagree1',\n",
    "    'AgreeDisagree2',\n",
    "    'AgreeDisagree3',\n",
    "    'AdsAgreeDisagree1',\n",
    "    'AdsAgreeDisagree2',\n",
    "    'AdsAgreeDisagree3'\n",
    "]\n",
    "\n",
    "for column in agree_cols:\n",
    "    full_df[column] = survey_results[column].str.lower().map(agree_degree_map)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "hypothetical_tools_map = {\n",
    "    'Not at all interested' : 1.0,\n",
    "    'A little bit interested' : 2.0,\n",
    "    'Somewhat interested' : 3.0,\n",
    "    'Very interested' : 4.0,\n",
    "    'Extremely interested' : 5.0\n",
    "}\n",
    "\n",
    "for i in range(1,6):\n",
    "    column = \"HypotheticalTools\" + str(i)\n",
    "    full_df[column] = survey_results[column].map(hypothetical_tools_map)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The <b>NumberMonitors</b> and <b>StackOverflowJobsRecommend</b> columns can be easily converted to numeric format with some simple string processing."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "full_df['NumberMonitors'] = survey_results['NumberMonitors'].map(\n",
    "    lambda mon: 6.0 if (not pd.isna(mon) and (\"More\" in mon)) else float(mon))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "full_df['StackOverflowJobsRecommend'] = \\\n",
    "    survey_results['StackOverflowJobsRecommend'].map(lambda rec:\n",
    "        rec if pd.isna(rec) else float(rec.split(' ')[0]))  # e.g. '10 (Very Likely)' -> 10.0"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Save converted features to disk to avoid re-running the conversion each time"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(full_df.shape)\n",
    "full_df.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "full_df.to_csv('converted_features.csv')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pickle"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "with open('category_encoders.pkl', 'wb') as encoders_file:\n",
    "    pickle.dump(category_encoders, encoders_file, protocol=2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "with open('multiselect_columns_descriptions.pkl', 'wb') as descriptions_file:\n",
    "    pickle.dump(multiselect_columns_descriptions, descriptions_file, protocol=2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 2. Primary data analysis <a id='EDA'></a>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from scipy import stats"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 2.1 Load previously converted and saved data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pickle"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "full_df = pd.read_csv('converted_features.csv', index_col='Respondent')\n",
    "\n",
    "with open('category_encoders.pkl', 'rb') as encoders_file:\n",
    "    category_encoders = pickle.load(encoders_file)\n",
    "    \n",
    "categorize_cols = list(category_encoders.keys())\n",
    "    \n",
    "with open('multiselect_columns_descriptions.pkl', 'rb') as descriptions_file:\n",
    "    multiselect_columns_descriptions = pickle.load(descriptions_file)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(full_df.shape)\n",
    "full_df.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 2.2 Missing data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This dataset contains survey results where many questions were optional, so it's no wonder that there are columns containing NaN values. Let's check exactly how many."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cols_with_nans = full_df.isna().mean().sort_values(ascending=False)\n",
    "cols_with_nans = cols_with_nans[cols_with_nans > 0]\n",
    "cols_with_nans"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 2.3 Statistical characteristics of the target feature"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First of all, let's see the proportion of respondents giving each grade of career satisfaction."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "satisfaction_counts = full_df['CareerSatisfaction'].value_counts().rename('Count')\n",
    "satisfaction_proportions = (full_df['CareerSatisfaction'].value_counts() / len(full_df)).rename('Part')\n",
    "satisfaction_df = pd.concat([satisfaction_counts, satisfaction_proportions], axis=1)\n",
    "satisfaction_df"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Most respondents gave a \"6\" rating, that is \"Moderately satisfied\". \"Extremely satisfied\" comes second, with a slightly larger proportion than \"Slightly satisfied\". In general, far more respondents chose one of the \"satisfied\" grades than not. This suggests that developer is a cool profession :)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Some statistical characteristics of the distribution:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(\"CareerSatisfaction median: \", full_df['CareerSatisfaction'].median())\n",
    "print(\"CareerSatisfaction mean: \", full_df['CareerSatisfaction'].mean())\n",
    "print(\"CareerSatisfaction std: \", full_df['CareerSatisfaction'].std())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Yes, half of the respondents ranked their career with the two highest grades, while ranks 1-5 are distributed among the other half. \n",
    "The median differs from the mean, so this doesn't look like a normal distribution."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "satisfaction_values = full_df['CareerSatisfaction'].values.astype(np.float64)\n",
    "\n",
    "_, p_normtest = stats.normaltest(satisfaction_values)\n",
    "print(\"p-value in stats.normaltest: {0:.6f}\".format(p_normtest))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The statistical test confirms that it is not. What if we apply log transforms?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "log1_satisfaction_values = np.log(1 + satisfaction_values)\n",
    "log_inverted_satisfaction_values = np.log(8 - satisfaction_values)\n",
    "\n",
    "_, p_log1_normtest = stats.normaltest(log1_satisfaction_values)\n",
    "_, p_log_inverted_normtest = stats.normaltest(log_inverted_satisfaction_values)\n",
    "print(\"p-value in log1 stats.normaltest: {0:.6f}\".format(p_log1_normtest))\n",
    "print(\"p-value in inverted log stats.normaltest: {0:.6f}\".format(p_log_inverted_normtest))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It looks like the target variable is distributed neither normally nor lognormally. Maybe that's due to the discrete rank values, as the normal distribution is continuous. A plot of the distribution in the visual analysis section is needed to draw further conclusions."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 2.4 CareerSatisfaction <-> JobSatisfaction correlation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Though it was decided not to use JobSatisfaction feature yet, it is better to verify the assumption about its high correlation with the target variable."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "satisfaction_df = survey_results[['CareerSatisfaction', 'JobSatisfaction']]\n",
    "satisfaction_df = satisfaction_df[\n",
    "    satisfaction_df['CareerSatisfaction'].notna() & satisfaction_df['JobSatisfaction'].notna()]\n",
    "satisfaction_df['CareerSatisfaction'] = satisfaction_df['CareerSatisfaction'].map(satisfaction_map)\n",
    "satisfaction_df['JobSatisfaction'] = satisfaction_df['JobSatisfaction'].map(satisfaction_map)\n",
    "satisfaction_df.corr()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Indeed, the correlation is high. Let's put this feature aside for the sake of a more comprehensive analysis of other aspects; it can be brought back if the model performs too poorly without it."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 2.5 ConvertedSalary analysis"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The only continuous numeric variables in the research are the <b>Salary</b> and <b>ConvertedSalary</b> features. The second is much more convenient for the analysis, as it was converted to the common denominator of \"annual USD salaries using the exchange rate on 2018-01-18, assuming 12 working months and 50 working weeks\"."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "converted_salary = full_df['ConvertedSalary']\n",
    "converted_salary.describe()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's see average salaries by career satisfaction rank."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "full_df.pivot_table(['ConvertedSalary'], ['CareerSatisfaction'], aggfunc='mean')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Those who are paid the highest average salaries are the most satisfied with their career, which is an expected result. But surprisingly, the lowest pay goes to those in the middle, with the \"Neither satisfied nor dissatisfied\" rank, while the most dissatisfied respondents do not receive the lowest wage."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's see the correlation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "salary_correlation = full_df[['CareerSatisfaction', 'ConvertedSalary']].corr()\n",
    "salary_correlation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That's not much. Maybe this dependency can get a higher estimate? DataFrame.corr() supports three correlation methods, and the default 'pearson' measures linear correlation, while we saw above that this relationship is not linear."
   ]
  },
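  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick synthetic illustration (toy numbers, not the survey data): for a monotonic but non-linear relationship, Spearman correlation stays at 1.0 while Pearson underestimates the link."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy data: y grows monotonically but non-linearly with x\n",
    "toy = pd.DataFrame({'x': np.arange(1, 11)})\n",
    "toy['y'] = np.exp(toy['x'])\n",
    "\n",
    "print('pearson: ', toy.corr(method='pearson').loc['x', 'y'])\n",
    "print('spearman:', toy.corr(method='spearman').loc['x', 'y'])"
   ]
  },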
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "salary_correlation = full_df[['CareerSatisfaction', 'ConvertedSalary']].corr(method='spearman')\n",
    "salary_correlation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That's clearly better. So Spearman looks like the more suitable correlation method for further analysis: it only assumes a monotonic relationship, and if even salary vs. satisfaction is not linear, the other dependencies are unlikely to be linear either."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 2.6 Coding Experience, Professional Coding experience, Last Job Years and Age  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are <b>YearsCoding</b>, <b>YearsCodingProf</b>, <b>LastNewJob</b> and <b>Age</b> columns in the dataset. Despite them being categorical in the survey, the way they were [processed](#years_coding) during features conversion allows to treat them as numerical values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "years_columns = ['CareerSatisfaction', 'YearsCoding', 'YearsCodingProf', 'LastNewJob', 'Age']\n",
    "years_coding_corelation = full_df[years_columns].corr(method='spearman')\n",
    "years_coding_corelation = years_coding_corelation['CareerSatisfaction'].sort_values(ascending=False)\n",
    "years_coding_corelation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Not much on the one hand, but not zero on the other."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 2.7 Some other numerical categorical features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "numerical_features = ['TimeAfterBootcamp',\n",
    "        'ConvertedSalary', 'CompanySize', 'TimeFullyProductive', 'NumberMonitors',\n",
    "        'WakeTime', 'HoursComputer', 'HoursOutside', 'SkipMeals', 'Exercise'\n",
    "]\n",
    "\n",
    "numerical_correlations = full_df[['CareerSatisfaction'] + numerical_features].astype('float64').corr(method='spearman')\n",
    "numerical_correlations = numerical_correlations['CareerSatisfaction'].sort_values(ascending=False)\n",
    "numerical_correlations"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 2.8 Technical role and experience"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As this was a developer survey, there are a number of questions devoted to technical role and technical experience; they seem likely to affect career satisfaction. However, there are too many of them, with too many answer options, so let's select only a few representatives for the analysis."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The dev roles question was multiselect and allowed choosing several options at once. All these options have been split into separate columns, with \"1\" for respondents who checked an option and \"0\" for those who didn't. That doesn't prevent a correlation check with the target variable, but a mapping to readable descriptions is needed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dev_types = [dt for dt in multiselect_columns_descriptions.keys() if dt.startswith('DevType')]\n",
    "dev_types_df = full_df[['CareerSatisfaction'] + dev_types]\n",
    "\n",
    "dev_types = [multiselect_columns_descriptions[dv] for dv in dev_types]\n",
    "dev_types_df = dev_types_df.rename(columns=multiselect_columns_descriptions)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dev_types_correlation = dev_types_df.corr(method='spearman')['CareerSatisfaction'].sort_values(ascending=False)\n",
    "dev_types_correlation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It could also be interesting to see what percentage of respondents at each satisfaction level perform each role. As all these columns contain only \"1\" or \"0\", the mean() function (i.e. sum/count) gives exactly that proportion."
   ]
  },
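  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A tiny synthetic example of that trick (the indicator column name here is made up): grouping 0/1 columns and taking the mean yields the share of respondents in each group who checked the option."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# mean of a 0/1 column per group == proportion of ones in that group\n",
    "demo = pd.DataFrame({'CareerSatisfaction': [6, 6, 6, 7],\n",
    "                     'BackendDev': [1, 0, 1, 1]})\n",
    "demo.groupby('CareerSatisfaction')['BackendDev'].mean()  # 6 -> 2/3, 7 -> 1.0"
   ]
  },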
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dev_types_percent = dev_types_df.pivot_table(dev_types, ['CareerSatisfaction'], aggfunc='mean')\n",
    "dev_types_percent = dev_types_percent.transpose()\n",
    "dev_types_percent"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The same approach for the <b>LanguageWorkedWith</b> columns."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "langs_worked = [l for l in multiselect_columns_descriptions.keys() if l.startswith('LanguageWorkedWith')]\n",
    "langs_worked_df = full_df[['CareerSatisfaction'] + langs_worked]\n",
    "\n",
    "langs_worked = [multiselect_columns_descriptions[l] for l in langs_worked]\n",
    "langs_worked_df = langs_worked_df.rename(columns=multiselect_columns_descriptions)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "langs_worked_correlation = langs_worked_df.corr(method='spearman')['CareerSatisfaction'].sort_values(ascending=False)\n",
    "langs_worked_correlation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "langs_worked_percent = langs_worked_df.pivot_table(langs_worked, ['CareerSatisfaction'], aggfunc='mean')\n",
    "langs_worked_percent = langs_worked_percent.transpose()\n",
    "langs_worked_percent"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And <b>LanguageDesireNextYear</b> columns too."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lang_desired = [l for l in multiselect_columns_descriptions.keys() if l.startswith('LanguageDesireNextYear')]\n",
    "lang_desired_df = full_df[['CareerSatisfaction'] + lang_desired]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lang_desired = [multiselect_columns_descriptions[l] for l in lang_desired]\n",
    "lang_desired_df = lang_desired_df.rename(columns=multiselect_columns_descriptions)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "langs_desired_correlation = lang_desired_df.corr(method='spearman')['CareerSatisfaction'].sort_values(ascending=False)\n",
    "langs_desired_correlation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lang_desired_percent = lang_desired_df.pivot_table(lang_desired, ['CareerSatisfaction'], aggfunc='mean')\n",
    "lang_desired_percent = lang_desired_percent.rename(columns=multiselect_columns_descriptions).transpose()\n",
    "lang_desired_percent"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "And also <b>OperatingSystem</b>."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "os_df = full_df[['CareerSatisfaction', 'OperatingSystem']]\n",
    "\n",
    "os_df['OperatingSystem'] = os_df['OperatingSystem'].map(\n",
    "    lambda os: category_encoders['OperatingSystem'].inverse_transform(os)\n",
    ")\n",
    "\n",
    "os_dummies_df = pd.get_dummies(os_df, columns=['OperatingSystem'], prefix=\"OS\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "os_correlation = os_dummies_df.corr(method='spearman')['CareerSatisfaction'].sort_values(ascending=False)\n",
    "os_correlation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Interesting: MacOS shows a much higher correlation with career satisfaction than the other operating systems. Looking at the OS percentages by satisfaction grade, it can be seen that respondents with grades 6 and 7 use this OS more often than others. Is MacOS the operating system of success?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "satisfaction_os_percent = os_dummies_df.groupby(by='CareerSatisfaction').mean()\n",
    "satisfaction_os_percent"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The main conclusion to draw from the lists and tables of numbers above, in my opinion, is that it's too hard to find dependencies in such data just by staring at it. It's better to move on to visual data analysis."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 3. Primary visual data analysis<a id='part3'></a>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%matplotlib inline\n",
    "from matplotlib import pyplot as plt\n",
    "import seaborn as sns\n",
    "from scipy import stats\n",
    "from scipy.stats import poisson\n",
    "import warnings\n",
    "warnings.filterwarnings('ignore')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3.1 Target value distribution"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The target value distribution in graphical form:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sns.distplot(satisfaction_values, bins=7, kde=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That really doesn't look like a normal distribution. It may resemble a mirrored lognormal distribution, but the lognormal distribution is continuous while ours is discrete. Wouldn't it be better to look for a discrete distribution with a similar shape? The <a href=\"https://en.wikipedia.org/wiki/Poisson_distribution\" target=__blank>Poisson distribution</a>, for instance. <br/> The histogram of a Poisson distribution with the same mean and number of samples, when inverted, looks like this:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "mu = satisfaction_values.mean()\n",
    "data_poisson = poisson.rvs(mu=mu, size=len(satisfaction_values), random_state=42)\n",
    "max_poisson = max(data_poisson)\n",
    "ax = sns.distplot(max_poisson - data_poisson, bins=7, kde=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Unfortunately, I haven't found an easy and reliable analytic way to check whether a distribution is Poisson. Even the graphical statsmodels.api qqplot doesn't easily work with Poisson. The <a href=\"https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.probplot.html\" target=__blank>scipy.stats.probplot</a> function shows the following plot:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "ax2 = plt.subplot()\n",
    "res = stats.probplot(satisfaction_values, dist='poisson', sparams=(mu), plot=plt)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As far as I know, if the distributions are similar, the blue points should lie close to the 45-degree red line. For \"ordered values\" under 7 they do, to some extent. Apparently, the satisfaction levels have some characteristics of the Poisson distribution, but it's unclear what that gives us. There was a task to perform statistical analysis of the target feature, and I've done it :)"
   ]
  },
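  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One possible analytic check, sketched here on a synthetic Poisson sample standing in for the inverted satisfaction ranks (an illustration, not a definitive procedure): a chi-square goodness-of-fit test comparing observed counts with the expected Poisson frequencies, with sparse tail bins merged and one degree of freedom subtracted for the estimated rate."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch of a chi-square goodness-of-fit test against a Poisson model\n",
    "sample = np.random.RandomState(42).poisson(lam=1.8, size=5000)\n",
    "lam = sample.mean()\n",
    "\n",
    "observed = np.bincount(sample).astype(np.float64)\n",
    "k = np.arange(len(observed))\n",
    "expected = poisson.pmf(k, lam) * sample.size\n",
    "expected[-1] += (1 - poisson.cdf(k[-1], lam)) * sample.size  # lump the right tail\n",
    "\n",
    "# merge sparse right-tail bins so each expected count is at least ~5\n",
    "while len(expected) > 2 and expected[-1] < 5:\n",
    "    expected[-2] += expected[-1]\n",
    "    observed[-2] += observed[-1]\n",
    "    expected, observed = expected[:-1], observed[:-1]\n",
    "\n",
    "# ddof=1 because lambda was estimated from the data\n",
    "chi2_stat, p_value = stats.chisquare(observed, expected, ddof=1)\n",
    "print(\"chi-square p-value: {0:.4f}\".format(p_value))"
   ]
  },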
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3.2 Boxplot for ConvertedSalary <a id=\"ConvertedSalaryVisual\" ></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As <b>ConvertedSalary</b> is a continuous variable, its distribution can be depicted with a boxplot."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sns.boxplot(x='CareerSatisfaction', y='ConvertedSalary', data=full_df)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Outliers make this graph look awful. This column required some arithmetic to calculate the converted annual USD salary, and maybe some respondents did it wrong. Let's see if cutting values at the 0.95 quantile helps."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "quantile_95 = full_df['ConvertedSalary'].quantile(0.95)\n",
    "cut_salary_outliers_df = full_df[full_df['ConvertedSalary'] < quantile_95]\n",
    "sns.boxplot(x='CareerSatisfaction', y='ConvertedSalary', data=cut_salary_outliers_df)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A much more useful picture. And there is indeed a dependency between the salary distribution and career satisfaction."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3.3 Numerical category feature correlations"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sns.heatmap(numerical_correlations.iloc[1:].to_frame(), annot=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The heatmap only shows that these features are much less correlated with the target than <b>ConvertedSalary</b>. Maybe plots split by satisfaction level will show something..."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "grid = sns.FacetGrid(full_df, col=\"CareerSatisfaction\", col_wrap=4, sharey=False)\n",
    "grid.map(sns.countplot, \"NumberMonitors\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "grid = sns.FacetGrid(full_df, col=\"CareerSatisfaction\", col_wrap=4, sharey=False)\n",
    "grid.map(sns.countplot, \"CompanySize\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "grid = sns.FacetGrid(full_df, col=\"CareerSatisfaction\", col_wrap=4, sharey=False)\n",
    "grid.map(sns.countplot, \"SkipMeals\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "grid = sns.FacetGrid(full_df, col=\"CareerSatisfaction\", col_wrap=4, sharey=False)\n",
    "grid.map(sns.countplot, \"TimeAfterBootcamp\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Some differences can be noticed on close inspection, but they are not large. And, again, the patterns do not change linearly from lower ranks to higher ones."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3.3 Technical experience features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "_, axes = plt.subplots(1, 2, figsize=(12,8), sharey=False) \n",
    "sns.heatmap(dev_types_correlation.iloc[1:].to_frame(), ax=axes[0], annot=True)\n",
    "sns.heatmap(years_coding_corelation.iloc[1:].to_frame(), ax=axes[1], annot=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Some non-zero but not very correlated role and experience features can be seen on the heatmap."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's draw the percentages of dev roles by career satisfaction levels. It's <b>not the correlation heatmap</b>, so different color scheme is used."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "sns.heatmap(dev_types_percent, cmap='RdGy')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Not so many differences, but some vertical stripes can be seen, especially for the 1, 4, 6 ranks. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Languages experience correlations"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "_, axes = plt.subplots(1, 2, figsize=(16,6), sharey=False) \n",
    "sns.heatmap(langs_worked_correlation.iloc[1:].to_frame(), ax=axes[0])\n",
    "sns.heatmap(langs_desired_correlation.iloc[1:].to_frame(), ax=axes[1])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Languages experience differentiating by satisfaction levels"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "_, axes = plt.subplots(1, 2, figsize=(16,6), sharey=False, ) \n",
    "sns.heatmap(langs_worked_percent, ax=axes[0], cmap='RdGy')\n",
    "sns.heatmap(lang_desired_percent, ax=axes[1], cmap='RdGy')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "OS corellations and differences by satisfaction levels."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "_, axes = plt.subplots(1, 2, figsize=(16,6), sharey=False) \n",
    "sns.heatmap(os_correlation.iloc[1:].to_frame(), ax=axes[0], annot=True)\n",
    "sns.heatmap(satisfaction_os_percent.transpose(), ax=axes[1], cmap='RdGy', annot=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3.4 Some other factors correlation heatmaps."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "corporative_cols = [\n",
    "    'CareerSatisfaction',\n",
    "    'Student',\n",
    "    'Employment',\n",
    "    'JobSearchStatus',\n",
    "    'HopeFiveYears',\n",
    "    'UpdateCV',\n",
    "    'SalaryType',\n",
    "    'Currency'\n",
    "]\n",
    "\n",
    "corporative_corellations = \\\n",
    "    full_df[corporative_cols].corr(method='spearman')['CareerSatisfaction'].sort_values(ascending=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "education_cols = [\n",
    "    'CareerSatisfaction',\n",
    "    'FormalEducation',\n",
    "    'UndergradMajor',\n",
    "    'EducationParents'\n",
    "]\n",
    "\n",
    "self_taugh_cols = [st for st in multiselect_columns_descriptions.keys() if st.startswith('SelfTaughtTypes')]\n",
    "edu_type_cols = [et for et in multiselect_columns_descriptions.keys() if et.startswith('EducationTypes')]\n",
    "\n",
    "all_edu_cols = education_cols + self_taugh_cols + edu_type_cols\n",
    "edu_correlation = \\\n",
    "    full_df[all_edu_cols].corr(method='spearman')['CareerSatisfaction'].sort_values(ascending=False).rename(\n",
    "        'Education')\n",
    "edu_correlation_descriptions = edu_correlation.rename(index=multiselect_columns_descriptions)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "edu_correlation_descriptions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "social_cols = [\n",
    "    'CareerSatisfaction',\n",
    "    'Country',\n",
    "    'Dependents',\n",
    "    'MilitaryUS'\n",
    "]\n",
    "\n",
    "social_corelation = \\\n",
    "    full_df[social_cols].corr(method='spearman')['CareerSatisfaction'].sort_values(ascending=False).rename(\n",
    "        \"Social Factors\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "philosophic_cols = [\n",
    "    'CareerSatisfaction',\n",
    "    'AIDangerous',\n",
    "    'AIInteresting',\n",
    "    'AIResponsible',\n",
    "    'AIFuture',\n",
    "    'EthicsChoice',\n",
    "    'EthicsReport',\n",
    "    'EthicsResponsible',\n",
    "    'EthicalImplications'\n",
    "]\n",
    "\n",
    "philosophic_corelation = \\\n",
    "    full_df[philosophic_cols].corr(method='spearman')['CareerSatisfaction'].sort_values(ascending=False).rename(\n",
    "        \"Philosophic questions\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "stackoverflow_cols = [\n",
    "    'CareerSatisfaction',\n",
    "    'StackOverflowVisit',\n",
    "    'StackOverflowHasAccount',\n",
    "    'StackOverflowParticipate',\n",
    "    'StackOverflowJobs',\n",
    "    'StackOverflowDevStory',\n",
    "    'StackOverflowConsiderMember'\n",
    "]\n",
    "\n",
    "stackoverflow_corelation = \\\n",
    "    full_df[stackoverflow_cols].corr(method='spearman')['CareerSatisfaction'].sort_values(ascending=False).rename(\n",
    "        'StackOverflow questions')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "_, axes = plt.subplots(3, 2, figsize=(18,16), sharey=False) \n",
    "\n",
    "sns.heatmap(corporative_corellations.to_frame().iloc[1:], ax=axes[0,0], annot=True)\n",
    "sns.heatmap(edu_correlation.to_frame().iloc[1:], ax=axes[0,1])\n",
    "sns.heatmap(social_corelation.to_frame().iloc[1:], ax=axes[1,0], annot=True)\n",
    "sns.heatmap(philosophic_corelation.to_frame().iloc[1:], ax=axes[1,1], annot=True)\n",
    "sns.heatmap(stackoverflow_corelation.to_frame().iloc[1:], ax=axes[2,0], annot=True)\n",
    "axes[2,1].set_visible(False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Not so much in general, but there are features with 0.06 - 0.08 correlations."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3.5 Conclusions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This way, we see a dataset wit a discrete distribution of target variable that shows slight characteristics of Poisson function. <br/>\n",
    "The most correlated feature is numerical continious <b>ConvertedSalary</b> feature with the 0.182913 correlation value. It has to be cleared out of outliers.<br/>\n",
    "Among numerical category variables the highest level is shown by the <b>NumbersOfMonitors</b> - 0.090212. Countplots with this feature shows some differences among satisfaction levels as well.<br/>\n",
    "The highest levels of correlation in technical characteristics are about 0.04-0.05, some of them are not correlated with career satisfaction at all. There are some tech-related difference among satisfaction levels, one of the most noticeable is percentage of <b>MacOS</b> users among highly-satisfied respondents.<br/>\n",
    "Some other aspects from educational strategies, social factors, country of living, philosophical outlook, StackOverflow membership etc. can correlate with career satisfaction as well, up to 0.06-0.08 values."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "###  Part 4. Insights and found dependencies <a id='part4'></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Combining the observation from the previous paragraphs, the following is to be denoted:<ul>\n",
    "<li>The dataset under analysis contain many omissions. That's no wonder: the data came from a voluntary survey, where many questions can be optional. This omissions have to be handled when working with the specific selected model.</li>\n",
    "<li>The distribution of the target feature, <b>CareerSatisfaction</b> is discrete distribution and it isn't normal or lognormal. That should be taken into account in model selection, as not all models handle non-normal distributions well.</li>\n",
    "<li>The <b>JobSatisfaction</b> and <b>CareerSatisfaction</b> columns seems to be interconnected features, but the model predicting one of them based on anoter will not be so interesting. It's better to try drawing a conclusion using wide range of other factors first.</li>\n",
    "<li>The feature with the next level of correlation with the target variable is <b>ConvertedSalary</b>. It is continious float feature but its correlation with levels of satisfaction is non-linear. Unfortunately, it has 0.386 part of NAN values and there are outliers. Hopping that it's partly due to difficulties of manual convertion and <b>Salary</b> column along with <b>Currency</b> and <b>SalaryType</b> will help soften those problems.</li>\n",
    "<li>By now, other features are categorical with numeric labels, in some cases supporting ordering and ratio relationships, in some cases not. Some of them show non-zero correlation with career satisfaction and differences across satisfaction levels, some do not. Due to lage number of features it is too difficult to select most useful ones before starting the close work with concrete model. It is better to use all of them initially and then exclude/convert some according to practical results/observations/strategies.</li>\n",
    "</ul>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 5. Metrics selection<a id='part5'></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>First of all, it should be specified that it is the task of <b>multiclass classification</b>. For some extent, predicting a satisfaction level can be seen as a <i>regression</i> task, as satisfaction is monothonicaly increasing function and it's better to erroneously predict the closest grade then the arbitrary one. But the analysis above shows that there is no linear correlation between features and satisfaction level, rather there are patterns inherent to respondents giving this or that rating.</p>  \n",
    "<p>One of the most frequently used and simple metric for classification is <b>Accuracy</b>. Unfortunately, it can work badly in the case of inbalanced classes, and the survey dataset is indeed unbalanced (see the distribution histogram).</p>\n",
    "<p>Another popular classification metric is <b>ROC-AUC</b>. It evaluates correct positive results among all positive samples (True Positive Rate) against incorrect positive results among all negatives (False Positive Rate). However, this task is not the credit scoring one, false positives will not cause missed profit, and we are interested to increased the true positives over the whole set much more than decreasing false triggering. In addition ROC-AUC is initially a binary classification metric, and to use it in multiclassification task additional steps are needed (micro/macro-averaging, binary encoding etc.). This applies to many other binary classification metrics, extended to multiclass case.</p>\n",
    "<p>One more metric is <b>multiclass logarithmic loss</b>. It can be explained by the formula:\n",
    "$$\\mathcal logloss = -\\dfrac{1}{N}\\sum_{i=1}^N\\sum_{j=1}^M y_{ij}\\log(p_{ij})$$\n",
    "($N$ is the number of observations in the test set, $M$ is the number of class labels, $y_{ij}$ is 1 if observation $i$ belongs to class $j$ and 0 otherwise, and $p_{ij}$ is the predicted probability that observation $i$ belongs to class $j$)\n",
    "It involves probability estimates and looks interesting in such case. Career satisfaction is something subjective, non-predetermined, evaluated differently by different people in the same circumstances (and even by the same person in different mood). So some uncertainty is present and it usage in the evaluation metric can be helpfull.</p>\n",
    "<p>Thus, metric selected for evaluating this task is <b>multiclass logarithmic loss.</b><p>"
   ]
  },
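  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of the formula (a toy illustration on made-up probabilities, not part of the survey pipeline), logloss can be computed by hand and compared with sklearn's <code>log_loss</code>:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.metrics import log_loss\n",
    "\n",
    "# Toy example: 3 observations, 3 classes; true labels are 0, 1, 2.\n",
    "toy_y_true = [0, 1, 2]\n",
    "toy_probas = np.array([[0.7, 0.2, 0.1],\n",
    "                       [0.3, 0.5, 0.2],\n",
    "                       [0.1, 0.1, 0.8]])\n",
    "\n",
    "# Manual computation: only the true-class term survives in the inner sum,\n",
    "# so logloss is the mean of -log(predicted probability of the true class).\n",
    "manual_logloss = -np.mean(np.log([0.7, 0.5, 0.8]))\n",
    "manual_logloss, log_loss(toy_y_true, toy_probas)"
   ]
  },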
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 6. Model selection<a id='part6'></a> "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>Human satisfaction seems to be very complicated thing, depending on different combinations of many factors. So this task doesn't look like linearly separable and non-linear model has to be selected, the one that allows to avoid tricks like polynomyal features and kernel change. The most popular non-linear models nowadays are ensembles of decision trees.</p>\n",
    "<p>For the sake of easy reproducibility and without need to handle gigabytes of data or fighting for thousandth fractions of metric in the competition, this task is going to be solved by the means of scikit-learn library, without Vopal Wabbit, XGBoost etc. From tree ensembles presented in scikit-learn the <b>RandomForestClassifier</b> looks like the most obvious selection. Beside handling nonlinearity Random Forest offers some other advantages:<ul><li>After a random split, only subset of features are selected, that is useful when there are so many of them;</li>\n",
    "<li>Conditions in decisions handle both continuous and discrete variables;</li>\n",
    "<li>Doesn't need feature rescaling;</li>\n",
    "<li>Is said to work well with non-normal target distribution;</li>\n",
    "<li>Robust to outliers and large amount of missing data;</li>\n",
    "<li>There are developed methods to estimate feature importance;</li></ul></p>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 7. Data preprocessing<a id='part7'></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 7.1 Data handling"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In fact, the most part of data preprocessing was done on the [features convertion step](#features_conversion) above primary data analysis (otherwise the analysis wouldn't be possible at all). There can be found replacing some strings with numbers using LabelEncoder, mapping other strings to numeric categories with order and ratio relationships, converting columns with multiple options to the sets of 1/0 columns (similar to One-Hot Encoding but not that exactly), mapping Yes/No columns to 1/0 columns, simple string processing to parse \"almost numeric\" string columns to numeric columns, etc.<br> "
   ]
  },
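  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For illustration, here is a minimal sketch of those conversions on a small mock frame (the column names below only imitate the survey ones; this is not the actual conversion code):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.preprocessing import LabelEncoder\n",
    "\n",
    "demo = pd.DataFrame({\n",
    "    'Hobby': ['Yes', 'No', 'Yes'],                     # Yes/No column\n",
    "    'OperatingSystem': ['Linux', 'Windows', 'MacOS'],  # plain categorical column\n",
    "    'LanguageWorkedWith': ['Python;C++', 'Python', 'C++;Java']  # multi-select column\n",
    "})\n",
    "\n",
    "# Yes/No -> 1/0\n",
    "demo['Hobby'] = demo['Hobby'].map({'Yes': 1, 'No': 0})\n",
    "# strings -> numeric labels\n",
    "demo['OperatingSystem'] = LabelEncoder().fit_transform(demo['OperatingSystem'])\n",
    "# multi-select column -> set of 1/0 indicator columns\n",
    "lang_flags = demo['LanguageWorkedWith'].str.get_dummies(sep=';')\n",
    "demo = pd.concat([demo.drop(columns=['LanguageWorkedWith']), lang_flags], axis=1)\n",
    "demo"
   ]
  },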
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are a lot of columns with NAN value. Features with large amount of missed data do not bring much usefull information, but increase the computational complexity. So, cut off those with amount of NAN > 0.4"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cols_too_many_nans = full_df.isna().mean()\n",
    "cols_too_many_nans = cols_too_many_nans[cols_too_many_nans > 0.4].index.values"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "preprocessed_df = full_df.drop(columns=cols_too_many_nans)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Fill other NANs with zeros as the simple strategy. (To be honest I've tried other ones, but they didn't show significantly better results)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "preprocessed_df = preprocessed_df.fillna(0)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "During the visual analysis it was [found](#ConvertedSalaryVisual) that it's better to clear ConvertedSalary column from outliers. Let's incorporate it into the dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "quantile_95 = preprocessed_df['ConvertedSalary'].quantile(0.95)\n",
    "preprocessed_df['ConvertedSalary'] = preprocessed_df['ConvertedSalary'].map(lambda cs: min(cs, quantile_95))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The same for 'Salary' column."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "quantile_95 = preprocessed_df['Salary'].quantile(0.95)\n",
    "preprocessed_df['Salary'] = preprocessed_df['Salary'].map(lambda cs: min(cs, quantile_95))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "RandomForestClassifier in sklearn library doesn't handle unordered numerical category features well, so they have to be converted to One-Hot Encoding using pd.get_dummies:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "preprocessed_df = pd.get_dummies(preprocessed_df, columns = categorize_cols, drop_first=True,\n",
    "                            prefix=categorize_cols, sparse=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "preprocessed_df.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 7.2 Dividing the data into training and hold-out sets"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(\"Preprocessed dataset shape:\", preprocessed_df.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are 76504 rows in the preprocessed dataset. Let's separate 0.3 part of them into hold-out set. Respondents anwers are totaly independent from each other and from time characteristics, but the way records are listed is unknown, thus it's good to randomly shuffle them first."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# calling sample() with frac=1 will just shuffle all rows\n",
    "preprocessed_df = preprocessed_df.sample(frac=1, random_state=17)\n",
    "\n",
    "train_part_size = int(0.7 * preprocessed_df.shape[0])\n",
    "\n",
    "train_df = preprocessed_df[:train_part_size]\n",
    "test_df = preprocessed_df[train_part_size:]\n",
    "\n",
    "print(\"Train shape: \", train_df.shape)\n",
    "print(\"Test shape: \", test_df.shape)\n",
    "\n",
    "train_df.to_csv('train.csv')\n",
    "test_df.to_csv('test.csv')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "del test_df"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 8. Cross-validation and adjustment of model hyperparameters<a id='part8'></a>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.ensemble import RandomForestClassifier\n",
    "from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score\n",
    "from sklearn.metrics import log_loss"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df = pd.read_csv('train.csv', index_col='Respondent')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 8.1 Initial evaluation of the model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First of all, evaluating the model \"as is\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_part = train_df.sample(frac=0.8, random_state=17)\n",
    "validation_part = train_df.drop(train_part.index)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_train = train_part['CareerSatisfaction']\n",
    "X_train = train_part.drop(columns=['CareerSatisfaction'])\n",
    "\n",
    "y_valid = validation_part['CareerSatisfaction']\n",
    "X_valid = validation_part.drop(columns=['CareerSatisfaction'])\n",
    "\n",
    "print(\"Train: \", X_train.shape, y_train.shape)\n",
    "print(\"Test: \", X_valid.shape, y_valid.shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "random_forest = RandomForestClassifier(random_state=17)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time \n",
    "random_forest.fit(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_pred = random_forest.predict_proba(X_valid)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "log_loss(y_valid, y_pred)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Doesn't look like too good result. Create some simple baselines to compare with"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_six_baseline = np.zeros(y_pred.shape)\n",
    "y_six_baseline[:, 5] = 1.0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_random_baseline = np.random.random_sample(y_pred.shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_equal_likely_baseline = np.full(y_pred.shape, fill_value=1/y_pred.shape[1])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_no_one_baseline = np.zeros(y_pred.shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(\"All 6.0 baseline: \", log_loss(y_valid, y_six_baseline))\n",
    "print(\"Random baseline: \", log_loss(y_valid, y_random_baseline))\n",
    "print(\"Equaly likely baseline: \", log_loss(y_valid, y_equal_likely_baseline))\n",
    "print(\"No one baseline: \", log_loss(y_valid, y_no_one_baseline))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It looks like logloss strongly penalizes constant-dominant-class predictor. But in general, results of the initial RandomForestClassifier are not good at all.<br/>\n",
    "But the classifier was trained too quickly, in ~3s. It has n_estimators=10, may be it's too few for such a large amount of features?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "random_forest = RandomForestClassifier(random_state=17, n_estimators=200)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%%time \n",
    "random_forest.fit(X_train, y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_pred = random_forest.predict_proba(X_valid)\n",
    "initial_score = log_loss(y_valid, y_pred)\n",
    "initial_score"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That gives hope. Running GridSearchCV to tune hyperparameters."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 8.2 Hyperparameters tuning"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_crossvalid = train_df['CareerSatisfaction']\n",
    "X_crossvalid = train_df.drop(columns=['CareerSatisfaction'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    " Using 3 splits as one of the most frequently used amount, shuffle samples in random order. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=17)\n",
    "tree_params = {\n",
    "    'n_estimators' : [250, 300, 350],\n",
    "    'max_depth' : [None, 10, 20],\n",
    "    'max_features' : ['sqrt', 'log2', 50]\n",
    "}\n",
    "gcv = GridSearchCV(random_forest, tree_params, scoring='neg_log_loss', cv=skf, verbose=1)\n",
    "gcv.fit(X_crossvalid, y_crossvalid)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "gcv.best_estimator_, gcv.best_score_"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It looks like more complex model performs better. However, increasing of its complicity leads to the exhaustion of computing power and rising of MemoryErros. Staying with these parameters. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Split with 3 folds was used to reduce parameters search time. Cross-validation can be run with large amount of splits."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "skf_5 = StratifiedKFold(n_splits=5, shuffle=True, random_state=17)\n",
    "cv_scores = cross_val_score(gcv.best_estimator_, X_crossvalid, y_crossvalid, cv=skf_5, scoring='neg_log_loss')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(cv_scores)\n",
    "print(np.mean(cv_scores))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Cross-validation shows quite stable predictions values."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 8.3 Confusion matrix <a id=\"confusion_matrix\" ></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One more way to evaluate prediction results is a <a href=\"https://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html\">confusion matrix</a>, that allows to see what classes are mixed out with each other the most often."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.metrics import confusion_matrix\n",
    "%matplotlib inline\n",
    "from matplotlib import pyplot as plt\n",
    "import seaborn as sns"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def evaluate_predictions(estimator, X, y, X_validation, y_validation):\n",
    "    estimator.fit(X, y)\n",
    "    y_pred = estimator.predict(X_validation)\n",
    "    y_pred_proba = estimator.predict_proba(X_validation)\n",
    "    \n",
    "    cm = confusion_matrix(y_validation, y_pred)\n",
    "    log_loss_value = log_loss(y_validation, y_pred_proba)\n",
    "    \n",
    "    # Convert matrix values to fractions due to different amount of samples accros classes.\n",
    "    counts = cm.sum(axis=1)\n",
    "    counts = counts.reshape(-1,1)\n",
    "    sns.heatmap(cm / counts, annot=True)\n",
    "    print(\"LogLoss value: \", log_loss_value)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "evaluate_predictions(gcv.best_estimator_, X_train, y_train, X_valid, y_valid)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Not a beatiful picture. The most predictions were just attributed to the prevalent class. Maybe some samples balancing will help?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def balance_dataset(df):\n",
    "\n",
    "    levels_counts = df['CareerSatisfaction'].value_counts().to_dict()\n",
    "    mean_level_count = int(np.mean(list(levels_counts.values())))\n",
    "\n",
    "    balanced_df = pd.DataFrame()\n",
    "\n",
    "    for level in levels_counts.keys():\n",
    "        level_rows = df[df['CareerSatisfaction']==level]\n",
    "    \n",
    "        level_df = level_rows.sample(n=mean_level_count, random_state=17, replace=True)\n",
    "\n",
    "        index_start = 1 if balanced_df.empty else balanced_df.index[-1] + 1\n",
    "        level_df = pd.DataFrame(\n",
    "            index=range(index_start, index_start + mean_level_count),\n",
    "            data=level_df.values,\n",
    "            columns = df.columns\n",
    "        )\n",
    "        \n",
    "        balanced_df = balanced_df.append(level_df)\n",
    "    \n",
    "    # randomly shuffle\n",
    "    balanced_df = balanced_df.sample(frac=1.0, random_state=17)\n",
    "        \n",
    "    return balanced_df"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_balanced = balance_dataset(train_part)\n",
    "print(train_balanced['CareerSatisfaction'].value_counts())\n",
    "print(train_balanced.shape, validation_part.shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_train_balanced = train_balanced['CareerSatisfaction']\n",
    "X_train_balanced = train_balanced.drop(columns=['CareerSatisfaction'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "evaluate_predictions(gcv.best_estimator_, X_train_balanced, y_train_balanced, X_valid, y_valid)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Predictions of non-prevalent classes became better due to prevalent class accuracy decrease, but the whole metric became worse as well. <br/>\n",
    "It looks like current set of features just doesn't contain enough information to distinguish levels of satisfaction confidently. Trying to do something with that."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 9. Creation of new features and description of this process<a id='part9'></a> "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df = pd.read_csv('train.csv', index_col='Respondent')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 9.1 Amount of skipped questions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If there's a problem of lack of information, maybe the fact of missed information can bring a benefit? <br/> \n",
    "Let's see if the average amount of skipped questions is different from one satisfaction level to another."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_skipped_questions_amount(df):\n",
    "    isna_flags = df.drop(columns=['CareerSatisfaction']).isna()\n",
    "    skipped = isna_flags.sum(axis=1).rename('SkippedQuestions')\n",
    "    return skipped"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "full_train_df = full_df.loc[train_df.index]\n",
    "skipped_questions = get_skipped_questions_amount(full_train_df)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "skipped_mean = skipped_questions.mean()\n",
    "skipped_mean"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "skipped_questions_df = skipped_questions.to_frame()\n",
    "skipped_questions_df['CareerSatisfaction'] = full_train_df['CareerSatisfaction']\n",
    "skipped_questions_by_level = skipped_questions_df.groupby('CareerSatisfaction')\n",
    "skipped_questions_by_level.describe()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "less_than_mean_skipped = skipped_questions_df[skipped_questions_df['SkippedQuestions'] < skipped_mean]\n",
    "less_than_mean_skipped.groupby('CareerSatisfaction').size() / skipped_questions_by_level.size()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The share of respondents who skipped fewer questions than the average differs from level to level. Adding these features and evaluating."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df['LessThanMeanAnswered'] = \\\n",
    "    skipped_questions_df.loc[train_df.index]['SkippedQuestions'].map(lambda skipped:\n",
    "        int(skipped < skipped_mean))\n",
    "train_df['MoreThanMeanAnswered'] = \\\n",
    "    skipped_questions_df.loc[train_df.index]['SkippedQuestions'].map(lambda skipped:\n",
    "        int(skipped > skipped_mean))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "random_forest = RandomForestClassifier(random_state=17, max_depth=20, max_features=50, n_estimators=350)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_crossvalid = train_df['CareerSatisfaction']\n",
    "X_crossvalid = train_df.drop(columns=['CareerSatisfaction'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "skf_5 = StratifiedKFold(n_splits=5, shuffle=True, random_state=17)\n",
    "cv_scores = cross_val_score(random_forest, X_crossvalid, y_crossvalid, cv=skf_5, scoring='neg_log_loss')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(cv_scores)\n",
    "print(np.mean(cv_scores))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "These features don't seem to give an improvement; removing them."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df = train_df.drop(columns=['LessThanMeanAnswered', 'MoreThanMeanAnswered'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 9.2 Trying to convert Salary programmatically"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The <b>ConvertedSalary</b> column looks like it was manually derived by each respondent from <b>Salary</b>, <b>SalaryType</b> and <b>Currency</b>. There may have been mistakes, so let's try to automate the conversion."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#!pip install currencyconverter"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from currency_converter import CurrencyConverter\n",
    "from datetime import date\n",
    "\n",
    "converter = CurrencyConverter()\n",
    "course_date=date(2018, 1, 18)\n",
    "\n",
    "salary_type_encoder = category_encoders['SalaryType']\n",
    "currency_encoder = category_encoders['Currency']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "salary_type_encoder.classes_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "currency_encoder.classes_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "currency_map = {\n",
    "    'Australian dollars (A$)': 'AUD',\n",
    "    'Bitcoin (btc)': 'XBT',\n",
    "    'Brazilian reais (R$)': 'BRL',\n",
    "    'British pounds sterling (£)': 'GBP',\n",
    "    'Canadian dollars (C$)': 'CAD',\n",
    "    'Chinese yuan renminbi (¥)': 'CNY',\n",
    "    'Danish krone (kr)': 'DKK',\n",
    "    'Euros (€)': 'EUR',\n",
    "    'Indian rupees (₹)': 'INR',\n",
    "    'Japanese yen (¥)': 'JPY',\n",
    "    'Mexican pesos (MXN$)': 'MXN',\n",
    "    'Norwegian krone (kr)': 'NOK',\n",
    "    'Polish złoty (zł)': 'PLN',\n",
    "    'Russian rubles (₽)': 'RUB',\n",
    "    'Singapore dollars (S$)': 'SGD',\n",
    "    'South African rands (R)': 'ZAR',\n",
    "    'Swedish kroner (SEK)': 'SEK',\n",
    "    'Swiss francs': 'CHF',\n",
    "    'U.S. dollars ($)': 'USD',\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def try_convert_salary(df):\n",
    "    if pd.isna(df['SalaryType']):\n",
    "        return 0.0    \n",
    "    salary_type = salary_type_encoder.classes_[int(df['SalaryType'])]\n",
    "    \n",
    "    if (pd.isna(df['Salary']) or pd.isna(df['Currency']) or salary_type == 'Unknown'):\n",
    "        return 0.0\n",
    "    \n",
    "    currency = currency_encoder.classes_[int(df['Currency'])]\n",
    "    if (currency == 'Unknown' or currency == 'Bitcoin (btc)'):\n",
    "        return 0.0\n",
    "    \n",
    "    currency = currency_map[currency]\n",
    "    \n",
    "    if currency == 'USD':\n",
    "        in_usd = df['Salary']\n",
    "    else:\n",
    "        in_usd = converter.convert(df['Salary'], currency, 'USD', date=course_date)\n",
    "    \n",
    "    if salary_type == 'Yearly':\n",
    "        return in_usd    \n",
    "    elif salary_type == 'Monthly':\n",
    "        return 12 * in_usd\n",
    "    else:\n",
    "        return 50 * in_usd    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df['AutoconvertedSalary'] = full_train_df.apply(try_convert_salary, axis=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "quantile_95 = train_df['AutoconvertedSalary'].quantile(0.95)\n",
    "train_df['AutoconvertedSalary'] = train_df['AutoconvertedSalary'].map(lambda cs: min(cs, quantile_95))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_crossvalid = train_df['CareerSatisfaction']\n",
    "X_crossvalid = train_df.drop(columns=['CareerSatisfaction'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "skf_5 = StratifiedKFold(n_splits=5, shuffle=True, random_state=17)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cv_scores = cross_val_score(random_forest, X_crossvalid, y_crossvalid, cv=skf_5, scoring='neg_log_loss')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(cv_scores)\n",
    "print(np.mean(cv_scores))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The worsening is noticeable. It's better not to add the feature."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df = train_df.drop(columns=['AutoconvertedSalary'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 9.3 Secondary features using experience columns"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The <b>YearsCoding</b>, <b>YearsCodingProf</b>, <b>LastNewJob</b> and <b>Age</b> columns didn't show a high correlation with the target feature, but maybe their ratios contain useful patterns?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df['CodingProportion'] = train_df.apply(lambda df:\n",
    "    0.0 if pd.isna(df['YearsCoding']) or pd.isna(df['Age']) or (df['Age'] == 0)\n",
    "        else df['YearsCoding']/ df['Age'],\n",
    "    axis=1)\n",
    "\n",
    "train_df['ProfProportion'] = train_df.apply(lambda df:\n",
    "    0.0 if pd.isna(df['YearsCodingProf']) or pd.isna(df['YearsCoding']) or (df['YearsCoding'] == 0)\n",
    "        else df['YearsCodingProf'] / df['YearsCoding'],\n",
    "    axis=1)\n",
    "\n",
    "train_df['LastJobProportion'] = train_df.apply(lambda df:\n",
    "    0.0 if pd.isna(df['LastNewJob']) or pd.isna(df['Age']) or (df['Age'] == 0)\n",
    "        else df['LastNewJob'] / df['Age'],\n",
    "    axis=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_crossvalid = train_df['CareerSatisfaction']\n",
    "X_crossvalid = train_df.drop(columns=['CareerSatisfaction'])\n",
    "\n",
    "cv_scores = cross_val_score(random_forest, X_crossvalid, y_crossvalid, cv=skf_5, scoring='neg_log_loss')\n",
    "\n",
    "print(cv_scores)\n",
    "print(np.mean(cv_scores))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "These features don't seem to give an improvement either."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df = train_df.drop(columns=['CodingProportion', 'ProfProportion', 'LastJobProportion'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 9.4 Adding JobSatisfaction feature"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As all other attempts didn't help, adding the <b>JobSatisfaction</b> column as a last resort, though the initial intent was not to use this obvious feature. "
   ]
  },
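  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `satisfaction_map` and `survey_results` objects come from the data preparation part of the notebook. As a reminder, the map encodes the seven answer options of the survey as ordinal levels; an illustrative reconstruction (the exact dictionary is defined earlier) could be:\n",
    "```python\n",
    "# Illustrative reconstruction of the ordinal encoding used earlier in the notebook\n",
    "satisfaction_map_sketch = {\n",
    "    'Extremely dissatisfied': 1.0,\n",
    "    'Moderately dissatisfied': 2.0,\n",
    "    'Slightly dissatisfied': 3.0,\n",
    "    'Neither satisfied nor dissatisfied': 4.0,\n",
    "    'Slightly satisfied': 5.0,\n",
    "    'Moderately satisfied': 6.0,\n",
    "    'Extremely satisfied': 7.0,\n",
    "}\n",
    "```"
   ]
  },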
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "job_satisfaction = survey_results['JobSatisfaction'].map(satisfaction_map).fillna(0.0)\n",
    "train_df['JobSatisfaction'] = job_satisfaction.loc[train_df.index]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_crossvalid = train_df['CareerSatisfaction']\n",
    "X_crossvalid = train_df.drop(columns=['CareerSatisfaction'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "cv_scores = cross_val_score(random_forest, X_crossvalid, y_crossvalid, cv=skf_5, scoring='neg_log_loss')\n",
    "\n",
    "print(cv_scores)\n",
    "print(np.mean(cv_scores))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A relatively large model improvement, as expected."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "test_df = pd.read_csv('test.csv', index_col='Respondent')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "test_df['JobSatisfaction'] = job_satisfaction.loc[test_df.index]\n",
    "test_df.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "test_df.to_csv('test.csv')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "del test_df"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 9.5 Decreasing the number of target classes"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As feature engineering doesn't bring much benefit, and the low-ranking classes are too small, decreasing the granularity of the target variable looks like an admissible approach.<br/>\n",
    "So, uniting and remapping the target classes as follows:<ul>\n",
    "<li>1-3, \"dissatisfied\", with the new value \"1\"</li>\n",
    "<li>4-5, \"middling satisfied\", with the new value \"2\"</li>\n",
    "<li>6 and 7 kept as separate classes (\"Moderately satisfied\" and \"Extremely satisfied\") with the new values \"3\" and \"4\"</li></ul>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lower_granularity_map = {\n",
    "    1.0 : 1.0,\n",
    "    2.0 : 1.0,\n",
    "    3.0 : 1.0,\n",
    "    4.0 : 2.0,\n",
    "    5.0 : 2.0,\n",
    "    6.0 : 3.0,\n",
    "    7.0 : 4.0\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_df['CareerSatisfaction'] = train_df['CareerSatisfaction'].map(lower_granularity_map)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_crossvalid = train_df['CareerSatisfaction']\n",
    "X_crossvalid = train_df.drop(columns=['CareerSatisfaction'])\n",
    "\n",
    "cv_scores = cross_val_score(random_forest, X_crossvalid, y_crossvalid, cv=skf_5, scoring='neg_log_loss')\n",
    "\n",
    "print(cv_scores)\n",
    "print(np.mean(cv_scores))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "That looks better. Evaluating confusion matrix:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_part = train_df.sample(frac=0.8, random_state=17)\n",
    "validation_part = train_df.drop(train_part.index)\n",
    "\n",
    "y_train = train_part['CareerSatisfaction']\n",
    "X_train = train_part.drop(columns=['CareerSatisfaction'])\n",
    "\n",
    "y_valid = validation_part['CareerSatisfaction']\n",
    "X_valid = validation_part.drop(columns=['CareerSatisfaction'])\n",
    "\n",
    "evaluate_predictions(random_forest, X_train, y_train, X_valid, y_valid)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Again, the \"Moderately satisfied\" rank attracts most of the confusion, though less than when classifying into 7 classes. <br/>\n",
    "What if we balance the classes in the train part?"
   ]
  },
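  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`balance_dataset` was defined earlier in the notebook. A minimal sketch of such a bootstrap oversampler, assuming a DataFrame with a `CareerSatisfaction` column (illustrative, not the exact implementation):\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "def balance_dataset_sketch(df, target='CareerSatisfaction', random_state=17):\n",
    "    # Oversample (bootstrap) each class up to the size of the largest one\n",
    "    max_size = df[target].value_counts().max()\n",
    "    parts = [group.sample(n=max_size, replace=True, random_state=random_state)\n",
    "             for _, group in df.groupby(target)]\n",
    "    return pd.concat(parts)\n",
    "```"
   ]
  },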
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_balanced = balance_dataset(train_part)\n",
    "\n",
    "y_train_balanced = train_balanced['CareerSatisfaction']\n",
    "X_train_balanced = train_balanced.drop(columns=['CareerSatisfaction'])\n",
    "\n",
    "evaluate_predictions(random_forest, X_train_balanced, y_train_balanced, X_valid, y_valid)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The metric value decreased, but now not only the prevalent class is detected confidently. More importantly, the low-satisfaction classes are more easily distinguished from the high-satisfaction ones."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Comparing with the baselines for 4 classes: "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_pred = random_forest.predict_proba(X_valid)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_three_baseline = np.zeros(y_pred.shape)\n",
    "y_three_baseline[:, 2] = 1.0\n",
    "\n",
    "y_random_baseline = np.random.random_sample(y_pred.shape)\n",
    "\n",
    "y_equal_likely_baseline = np.ones(y_pred.shape)\n",
    "\n",
    "y_no_one_baseline = np.zeros(y_pred.shape)\n",
    "\n",
    "print(\"All 3.0 baseline: \", log_loss(y_valid, y_three_baseline))\n",
    "print(\"Random baseline: \", log_loss(y_valid, y_random_baseline))\n",
    "print(\"Equal likely baseline: \", log_loss(y_valid, y_equal_likely_baseline))\n",
    "print(\"No one baseline: \", log_loss(y_valid, y_no_one_baseline))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Anyway, the model performs better."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "test_df = pd.read_csv('test.csv', index_col='Respondent')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "test_df['CareerSatisfaction'] = test_df['CareerSatisfaction'].map(lower_granularity_map)\n",
    "test_df.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "test_df.to_csv('test.csv')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "del test_df"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 10. Plotting training and validation curves<a id='part10'></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 10.1 Facilities for plotting"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "from sklearn.metrics import confusion_matrix\n",
    "%matplotlib inline\n",
    "from matplotlib import pyplot as plt\n",
    "import seaborn as sns"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class LearningCurve:\n",
    "    def __init__(self, variable_parameter_name, unaltered_parameters, X, y):\n",
    "        self.variable_parameter_name = variable_parameter_name\n",
    "        self.parameters_dict = dict(unaltered_parameters)\n",
    "        self.X = X\n",
    "        self.y = y\n",
    "        self.parameter_values = []\n",
    "        self.train_metrics_mean = []\n",
    "        self.test_metrics_mean = []\n",
    "        self.learning_times_mean = []\n",
    "        \n",
    "    def add_learning_point(self, parameter_value, splits=3):      \n",
    "        self.parameter_values.append(parameter_value)\n",
    "        \n",
    "        self.parameters_dict[self.variable_parameter_name] = parameter_value\n",
    "        estimator = RandomForestClassifier(**self.parameters_dict)\n",
    "        \n",
    "        train_metrics = []\n",
    "        test_metrics = []\n",
    "        learning_times = []\n",
    "        \n",
    "        k_fold = StratifiedKFold(n_splits=splits, shuffle=True, random_state=17)\n",
    "        \n",
    "        for train_index, test_index in k_fold.split(self.X, self.y):\n",
    "            X_train, X_test = self.X.iloc[train_index], self.X.iloc[test_index]\n",
    "            y_train, y_test = self.y.iloc[train_index], self.y.iloc[test_index]\n",
    "                                    \n",
    "            learn_start = time.perf_counter()\n",
    "            estimator.fit(X_train, y_train)\n",
    "            learn_end = time.perf_counter()\n",
    "            \n",
    "            learning_times.append(learn_end - learn_start)\n",
    "            \n",
    "            metric_on_train = log_loss(y_train, estimator.predict_proba(X_train))\n",
    "            train_metrics.append(metric_on_train)\n",
    "            \n",
    "            metric_on_test = log_loss(y_test, estimator.predict_proba(X_test))\n",
    "            test_metrics.append(metric_on_test)\n",
    "            \n",
    "        self.train_metrics_mean.append(np.mean(train_metrics))\n",
    "        self.test_metrics_mean.append(np.mean(test_metrics))\n",
    "        self.learning_times_mean.append(np.mean(learning_times))\n",
    "        \n",
    "    def get_best_point(self):\n",
    "        best_test_metric = min(self.test_metrics_mean)\n",
    "        i = self.test_metrics_mean.index(best_test_metric)\n",
    "        return {\n",
    "            'value': self.parameter_values[i],\n",
    "            'test_metric': best_test_metric,\n",
    "            'train_metric' : self.train_metrics_mean[i],\n",
    "            'learning_time' : self.learning_times_mean[i]\n",
    "        }\n",
    "    \n",
    "    def get_curve_points(self):\n",
    "        points = []\n",
    "        for i in range(0, len(self.parameter_values)):\n",
    "            point = {\n",
    "                'value': self.parameter_values[i],\n",
    "                'test_metric': self.test_metrics_mean[i],\n",
    "                'mean_train_metric': self.train_metrics_mean[i],\n",
    "                'learning_time': self.learning_times_mean[i]\n",
    "            }\n",
    "            points.append(point)\n",
    "        return points"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Plotter:\n",
    "    def plot_curve(self, curve, separately=False, learning_time=True, ylim=[0, 1.9]):               \n",
    "        plt.style.use('ggplot')\n",
    "        \n",
    "        axes_count = 1\n",
    "        validation_axes = 0\n",
    "        if separately:\n",
    "            axes_count += 1\n",
    "            validation_axes = 1\n",
    "            \n",
    "        if learning_time:\n",
    "            axes_count += 1\n",
    "            \n",
    "        fig, axes = plt.subplots(1, axes_count, figsize=(12, 6), sharey=False)\n",
    "        if axes_count == 1:\n",
    "            axes = [axes]  # plt.subplots returns a bare Axes object when only one is requested\n",
    "        \n",
    "        axes[0].plot(curve.parameter_values, curve.train_metrics_mean, alpha=0.5, color='blue', label='train')\n",
    "        \n",
    "        axes[validation_axes].plot(\n",
    "            curve.parameter_values, curve.test_metrics_mean, alpha=0.5, color='red', label='validation')\n",
    "        \n",
    "        for i in range (0, validation_axes + 1):\n",
    "            axes[i].set_ylabel(\"LogLoss\")\n",
    "            axes[i].set_xlabel(curve.variable_parameter_name)\n",
    "            axes[i].legend(loc='best')\n",
    "            axes[i].set_ylim(ylim)\n",
    "        \n",
    "        if learning_time:\n",
    "            axes[axes_count-1].plot(\n",
    "                curve.parameter_values, curve.learning_times_mean, alpha=0.5, color='green', label='learning_time')\n",
    "            axes[axes_count-1].set_ylabel(\"Learning time\")\n",
    "            axes[axes_count-1].set_xlabel(curve.variable_parameter_name)\n",
    "            axes[axes_count-1].legend(loc='best')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y = train_df['CareerSatisfaction']\n",
    "X = train_df.drop(columns=['CareerSatisfaction'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 10.2 Parameter 'n_estimators'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "unaltered_params = {\"random_state\": 17}\n",
    "estimators_curve = LearningCurve('n_estimators', unaltered_params, X, y)\n",
    "\n",
    "for estimators in [10, 150, 350, 400]:\n",
    "    estimators_curve.add_learning_point(estimators)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "Plotter().plot_curve(estimators_curve, ylim=[0.25, 3])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Due to the different scales it is hard to see the training curve behavior. Plotting it separately."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "Plotter().plot_curve(estimators_curve, separately=True, learning_time=False, ylim=None)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "estimators_curve.get_curve_points()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Such a difference in metric improvement can be an indicator of overfitting. Analysing max_depth as one of the parameters that affect overfitting."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 10.3 Parameter 'max_depth'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "unaltered_params = {\"random_state\": 17, 'n_estimators': 350}\n",
    "depth_curve = LearningCurve('max_depth', unaltered_params, X, y)\n",
    "\n",
    "for depth in [5, 10, 20, 50, 70, 100]:\n",
    "    depth_curve.add_learning_point(depth)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "Plotter().plot_curve(depth_curve)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "depth_curve.get_curve_points()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "It looks like a lower value of max_depth helps to fight overfitting on the train set, but worsens the test metric as well. The former gain doesn't seem worth sacrificing the latter."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 10.4 Parameter 'max_features'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "unaltered_params = {\"random_state\": 17, 'n_estimators': 350}\n",
    "features_curve = LearningCurve('max_features', unaltered_params, X, y)\n",
    "\n",
    "for features in [25, 50, 100, 150]:\n",
    "    features_curve.add_learning_point(features)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "Plotter().plot_curve(features_curve)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "features_curve.get_curve_points()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 10.5 Parameter 'min_samples_split'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "unaltered_params = {\"random_state\": 17, 'n_estimators': 350, 'max_features':150 }\n",
    "split_curve = LearningCurve('min_samples_split', unaltered_params, X, y)\n",
    "\n",
    "for split in [2, 4, 16, 32, 64]:\n",
    "    split_curve.add_learning_point(split)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "split_curve.get_curve_points()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "Plotter().plot_curve(split_curve)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "Plotter().plot_curve(split_curve, separately=True, learning_time=False, ylim=None)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The graphs show that higher values of min_samples_split both help to fight overfitting on the train set and improve the test set metric, while also decreasing the learning time. Using the best value found."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "split_curve.get_best_point()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 10.6 Parameter 'min_samples_leaf'"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "unaltered_params = {\"random_state\": 17, 'n_estimators': 350, 'max_features':150, 'min_samples_split' : 64 }\n",
    "leaf_curve = LearningCurve('min_samples_leaf', unaltered_params, X, y)\n",
    "\n",
    "for samples_leaf in [1, 2, 4, 16, 32, 64, 128]:\n",
    "    leaf_curve.add_learning_point(samples_leaf)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "Plotter().plot_curve(leaf_curve, ylim=None)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "leaf_curve.get_curve_points()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Increasing min_samples_leaf worsens the metric not only on the train set but on the validation set as well. Keeping the default value \"1\" of this parameter."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 11. Prediction for test or hold-out samples<a id='part11'></a>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_train = train_df['CareerSatisfaction']\n",
    "X_train = train_df.drop(columns=['CareerSatisfaction'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Loading the hold-out set from disk."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "test_df = pd.read_csv('test.csv', index_col='Respondent')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_test = test_df['CareerSatisfaction']\n",
    "X_test = test_df.drop(columns=['CareerSatisfaction'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "random_forest = RandomForestClassifier(\n",
    "    random_state=17, n_estimators=350, max_features=150, min_samples_split=64)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Forming a duplicate train set with samples balanced among classes (via simple bootstrapping, as earlier)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_balanced_df = balance_dataset(train_df)\n",
    "\n",
    "y_train_balanced = train_balanced_df['CareerSatisfaction']\n",
    "X_train_balanced = train_balanced_df.drop(columns=['CareerSatisfaction'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Fitting the estimators on the whole train sets and evaluating on the hold-out set."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "random_forest_balanced = RandomForestClassifier(\n",
    "    random_state=17, n_estimators=350, max_features=150, min_samples_split=64)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "evaluate_predictions(random_forest, X_train, y_train, X_test, y_test)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "evaluate_predictions(random_forest_balanced, X_train_balanced, y_train_balanced, X_test, y_test)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This way, the value of the metric on the hold-out set is close to the values seen during cross-validation. The former is even slightly better (1.1169307446742236 vs. 1.1173659115520571), but there's no reason to suspect biased or leaked hold-out set creation. The samples were simply selected at random from the common set, saved to disk, loaded temporarily only to be complemented with features proven on the train set, and did not participate in hyperparameter tuning. Such a small improvement can likely be explained by a lucky coincidence. <br/>\n",
    "Again, the straightforward class balancing worsens the log loss metric, but the confusion matrices (both of them, actually) look much better than those in the [8.3 Confusion matrix](#confusion_matrix) part with the initial 7 classes. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 12. Conclusions<a id='part12'></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To conclude, let's first see which features the RandomForest model considers the most important. The factors that can influence a developer's career satisfaction look like the most interesting part of such research.<br>\n",
    "(The transcript of the corresponding questions can be found at the [beginning of this notebook](#survey_questions))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "most_important_indx = np.argsort(-random_forest.feature_importances_)[:100]\n",
    "X_train.columns[most_important_indx]"
   ]
  },
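  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The same importances are easier to inspect as a sorted `pandas.Series` that keeps the scores next to the column names (an optional sketch, reusing the fitted `random_forest` and `X_train` from above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: pair each column name with its importance score and sort descending\n",
    "importances = pd.Series(random_forest.feature_importances_,\n",
    "                        index=X_train.columns).sort_values(ascending=False)\n",
    "importances.head(20)"
   ]
  },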
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>Again, the most significant feature is current job satisfaction. Overall career satisfaction is also affected by salary, coding experience, relationships with peers and involvement in the community, job search status, attitude to benefits, attitude to job characteristics, (surprisingly!) attitude to ads, current company conditions, lifestyle, hopes, education, developer roles, country and ethnicity, etc.</p>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>A log loss of 1.116 is not an outstanding value, but the dataset is quite raw, and it turned out to be a challenging task to distinguish a developer's career satisfaction level from the \"default\" answer \"Moderately satisfied\". Still, an improvement over the baselines was achieved. The model could probably be enhanced further by hyperparameter tuning, but that requires high computational power and more time. There are over 700 columns in the dataset now; this number could possibly be reduced while improving the solution further, using, for example, the feature importances reported by the developed model.</p>\n",
    "<p>However, the greatest benefit would come from gathering more answers from respondents who give a grade other than \"Moderately satisfied\", especially those at the \"dissatisfied\" levels. (It is also a valuable task to find out what makes people unhappy.) As seen above, even simple duplication/bootstrapping of the data from the minority classes, or uniting them together, helps to reduce their confusion with higher levels of satisfaction.</p>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<p>Possibly, another interesting study could be devoted to predicting the <b>JobSatisfaction</b> level from the other factors. Maybe they would correlate with it more directly than with overall career satisfaction. At the moment, the CareerSatisfaction value is hard to predict without the JobSatisfaction feature.</p>\n",
    "<p>Nevertheless, both tasks can provide helpful information and insights for managers and HR specialists, as well as for the developer community itself.</p>"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
