{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Data Processing and Feature Engineering: Raw Features  \n",
    "1. Process the train data  \n",
    "2. Process the test data  \n",
    "3. Process the songs data  \n",
    "4. Process the members data  \n",
    "5. Generate the raw-feature training and test files  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Process the train & test, songs, and members datasets separately, then merge the cleaned features to produce the raw-feature training file train_merge.csv and test file test_merge.csv."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "import scipy as sp\n",
    "import matplotlib.pyplot as plt\n",
    "import seaborn as sns\n",
    "from sklearn import preprocessing\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from collections import Counter\n",
    "import pickle\n",
    "import time"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. Processing the train data  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>msno</th>\n",
       "      <th>song_id</th>\n",
       "      <th>source_system_tab</th>\n",
       "      <th>source_screen_name</th>\n",
       "      <th>source_type</th>\n",
       "      <th>target</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>FGtllVqz18RPiwJj/edr2gV78zirAiY/9SmYvia+kCg=</td>\n",
       "      <td>BBzumQNXUHKdEBOB7mAJuzok+IJA1c2Ryg/yzTF6tik=</td>\n",
       "      <td>explore</td>\n",
       "      <td>Explore</td>\n",
       "      <td>online-playlist</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>Xumu+NIjS6QYVxDS4/t3SawvJ7viT9hPKXmf0RtLNx8=</td>\n",
       "      <td>bhp/MpSNoqoxOIB+/l8WPqu6jldth4DIpCm3ayXnJqM=</td>\n",
       "      <td>my library</td>\n",
       "      <td>Local playlist more</td>\n",
       "      <td>local-playlist</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>Xumu+NIjS6QYVxDS4/t3SawvJ7viT9hPKXmf0RtLNx8=</td>\n",
       "      <td>JNWfrrC7zNN7BdMpsISKa4Mw+xVJYNnxXh3/Epw7QgY=</td>\n",
       "      <td>my library</td>\n",
       "      <td>Local playlist more</td>\n",
       "      <td>local-playlist</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>Xumu+NIjS6QYVxDS4/t3SawvJ7viT9hPKXmf0RtLNx8=</td>\n",
       "      <td>2A87tzfnJTSWqD7gIZHisolhe4DMdzkbd6LzO1KHjNs=</td>\n",
       "      <td>my library</td>\n",
       "      <td>Local playlist more</td>\n",
       "      <td>local-playlist</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>FGtllVqz18RPiwJj/edr2gV78zirAiY/9SmYvia+kCg=</td>\n",
       "      <td>3qm6XTZ6MOCU11x8FIVbAGH5l5uMkT3/ZalWG1oo2Gc=</td>\n",
       "      <td>explore</td>\n",
       "      <td>Explore</td>\n",
       "      <td>online-playlist</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                           msno  \\\n",
       "0  FGtllVqz18RPiwJj/edr2gV78zirAiY/9SmYvia+kCg=   \n",
       "1  Xumu+NIjS6QYVxDS4/t3SawvJ7viT9hPKXmf0RtLNx8=   \n",
       "2  Xumu+NIjS6QYVxDS4/t3SawvJ7viT9hPKXmf0RtLNx8=   \n",
       "3  Xumu+NIjS6QYVxDS4/t3SawvJ7viT9hPKXmf0RtLNx8=   \n",
       "4  FGtllVqz18RPiwJj/edr2gV78zirAiY/9SmYvia+kCg=   \n",
       "\n",
       "                                        song_id source_system_tab  \\\n",
       "0  BBzumQNXUHKdEBOB7mAJuzok+IJA1c2Ryg/yzTF6tik=           explore   \n",
       "1  bhp/MpSNoqoxOIB+/l8WPqu6jldth4DIpCm3ayXnJqM=        my library   \n",
       "2  JNWfrrC7zNN7BdMpsISKa4Mw+xVJYNnxXh3/Epw7QgY=        my library   \n",
       "3  2A87tzfnJTSWqD7gIZHisolhe4DMdzkbd6LzO1KHjNs=        my library   \n",
       "4  3qm6XTZ6MOCU11x8FIVbAGH5l5uMkT3/ZalWG1oo2Gc=           explore   \n",
       "\n",
       "    source_screen_name      source_type  target  \n",
       "0              Explore  online-playlist       1  \n",
       "1  Local playlist more   local-playlist       1  \n",
       "2  Local playlist more   local-playlist       1  \n",
       "3  Local playlist more   local-playlist       1  \n",
       "4              Explore  online-playlist       1  "
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dpath = \"./data/\"\n",
    "train = pd.read_csv(dpath+\"train.csv\")\n",
    "train.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1.1 Merge rare category values and treat missing values as a new category, producing train_clean.csv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "def train_cleaner(input_file,output_file): \n",
    "    \"\"\"\n",
    "    function:\n",
    "        clean train data and write clean train data into output_file\n",
    "    params:\n",
    "        input_file: input file path\n",
    "        output_file: output file path\n",
    "    \"\"\"\n",
    "    fin = open(dpath+input_file,\"r+\")\n",
    "    fout = open(dpath+output_file,\"w+\")\n",
    "    #write the header row\n",
    "    ocolnames = [\"msno\",\"song_id\",\"source_system_tab\",\"source_screen_name\",\"source_type\",\"target\"]\n",
    "    fout.write(\",\".join(ocolnames) + \"\\n\")  \n",
    "    start = 0\n",
    "    #merge rare category values and treat missing values as a new category\n",
    "    for line in fin:\n",
    "        if start == 0:\n",
    "            start+=1\n",
    "            continue\n",
    "        cols = line.strip().split(\",\")\n",
    "        if cols[2] in [\"notification\",\"settings\"]:\n",
    "            cols[2] = \"other system\"\n",
    "        if cols[2] == \"\":\n",
    "            cols[2] = \"nan system\"\n",
    "        if cols[3] in [\"My library_Search\",\"Self profile more\",\"Concert\",\"Payment\"]:\n",
    "            cols[3] = \"other screen\"\n",
    "        if cols[3] == \"\":\n",
    "            cols[3] = \"nan screen\"\n",
    "        if cols[4] in [\"artist\",\"my-daily-playlist\"]:\n",
    "            cols[4] = \"other type\"\n",
    "        if cols[4] == \"\":\n",
    "            cols[4] = \"nan type\"\n",
    "        fout.write(\",\".join(cols)+\"\\n\")\n",
    "    fin.close()\n",
    "    fout.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1.2 LabelEncoder for the categorical features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "def train_encode(input_file):\n",
    "    \"\"\"\n",
    "    feature LabelEncoder\n",
    "    args:\n",
    "        input_file: input file path\n",
    "    return:\n",
    "        labelencoded features: source_tab,source_name,source_type\n",
    "    \"\"\"\n",
    "    train_clean = pd.read_csv(dpath+input_file)\n",
    "    le = preprocessing.LabelEncoder()\n",
    "    colnames = [\"source_system_tab\",\"source_screen_name\",\"source_type\"]\n",
    "    #LabelEncode the 3 features and pickle each fitted encoder for reuse on test\n",
    "    for colname in colnames:\n",
    "        if colname == \"source_system_tab\":\n",
    "            source_tab = le.fit_transform(train_clean[colname].astype(str))\n",
    "            pickle.dump(le,open(dpath+\"le_tab.pkl\",\"wb\"))\n",
    "        if colname == \"source_screen_name\":\n",
    "            source_name = le.fit_transform(train_clean[colname].astype(str))\n",
    "            pickle.dump(le,open(dpath+\"le_name.pkl\",\"wb\"))\n",
    "        if colname == \"source_type\":\n",
    "            source_type = le.fit_transform(train_clean[colname].astype(str))\n",
    "            pickle.dump(le,open(dpath+\"le_type.pkl\",\"wb\"))\n",
    "    return source_tab,source_name,source_type"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train_encode(\"train_clean.csv\")"
   ]
  },
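  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal illustration on toy data (not the actual train columns), LabelEncoder maps each distinct string to an integer in sorted-class order, and the fitted encoder can be pickled so that test data is later transformed with exactly the same mapping:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#toy sketch of the fit/pickle round trip used above\n",
    "le_demo = preprocessing.LabelEncoder()\n",
    "demo_codes = le_demo.fit_transform([\"explore\", \"my library\", \"explore\"])\n",
    "print(demo_codes)  #[0 1 0], since classes_ is sorted: ['explore', 'my library']\n",
    "pickle.dump(le_demo, open(dpath+\"le_demo.pkl\", \"wb\"))\n",
    "le_back = pickle.load(open(dpath+\"le_demo.pkl\", \"rb\"))\n",
    "print(le_back.transform([\"my library\"]))  #[1]"
   ]
  },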
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1.3 Write the processed train data to a file, producing train_data.csv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_train_data(input_file,output_file):\n",
    "    \"\"\"\n",
    "    write preprocessed feature into output_file\n",
    "    args:\n",
    "        input_file: input file path\n",
    "        output_file: output file path\n",
    "    \"\"\"\n",
    "    ocolnames = [\"msno\",\"song_id\",\"source_system_tab\",\"source_screen_name\",\"source_type\",\"target\"]\n",
    "    fin = open(dpath+input_file,\"r+\")\n",
    "    fout = open(dpath+output_file,\"w+\")\n",
    "    #write the header row\n",
    "    fout.write(\",\".join(ocolnames)+\"\\n\")\n",
    "    i = 0\n",
    "    start = 0\n",
    "    #get the label-encoded features\n",
    "    source_tab,source_name,source_type = train_encode(input_file)\n",
    "    #write the file line by line\n",
    "    for line in fin:\n",
    "        if start == 0:\n",
    "            start+=1\n",
    "            continue\n",
    "        cols = line.strip().split(\",\")\n",
    "        cols[2] = str(source_tab[i])\n",
    "        cols[3] = str(source_name[i])\n",
    "        cols[4] = str(source_type[i])\n",
    "        fout.write(\",\".join(cols)+\"\\n\")\n",
    "        i += 1\n",
    "        #train has 7377418 data rows; stop here to drop the trailing blank line\n",
    "        if i == 7377418:\n",
    "            break\n",
    "    fin.close()\n",
    "    fout.close()    "
   ]
  },
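  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The line-by-line rewrite above can also be expressed directly in pandas (a sketch, assuming train_clean.csv fits in memory), which sidesteps the manual trailing-blank-line handling:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#pandas sketch: overwrite the three categorical columns with their encodings\n",
    "df = pd.read_csv(dpath+\"train_clean.csv\")\n",
    "df[\"source_system_tab\"], df[\"source_screen_name\"], df[\"source_type\"] = \\\n",
    "    train_encode(\"train_clean.csv\")\n",
    "df.to_csv(dpath+\"train_data.csv\", index=False)"
   ]
  },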
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Wall time: 1min 18s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "train_cleaner(\"train.csv\",\"train_clean.csv\")\n",
    "generate_train_data(\"train_clean.csv\",\"train_data.csv\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>msno</th>\n",
       "      <th>song_id</th>\n",
       "      <th>source_system_tab</th>\n",
       "      <th>source_screen_name</th>\n",
       "      <th>source_type</th>\n",
       "      <th>target</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>FGtllVqz18RPiwJj/edr2gV78zirAiY/9SmYvia+kCg=</td>\n",
       "      <td>BBzumQNXUHKdEBOB7mAJuzok+IJA1c2Ryg/yzTF6tik=</td>\n",
       "      <td>1</td>\n",
       "      <td>6</td>\n",
       "      <td>5</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>Xumu+NIjS6QYVxDS4/t3SawvJ7viT9hPKXmf0RtLNx8=</td>\n",
       "      <td>bhp/MpSNoqoxOIB+/l8WPqu6jldth4DIpCm3ayXnJqM=</td>\n",
       "      <td>3</td>\n",
       "      <td>7</td>\n",
       "      <td>3</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>Xumu+NIjS6QYVxDS4/t3SawvJ7viT9hPKXmf0RtLNx8=</td>\n",
       "      <td>JNWfrrC7zNN7BdMpsISKa4Mw+xVJYNnxXh3/Epw7QgY=</td>\n",
       "      <td>3</td>\n",
       "      <td>7</td>\n",
       "      <td>3</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>Xumu+NIjS6QYVxDS4/t3SawvJ7viT9hPKXmf0RtLNx8=</td>\n",
       "      <td>2A87tzfnJTSWqD7gIZHisolhe4DMdzkbd6LzO1KHjNs=</td>\n",
       "      <td>3</td>\n",
       "      <td>7</td>\n",
       "      <td>3</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>FGtllVqz18RPiwJj/edr2gV78zirAiY/9SmYvia+kCg=</td>\n",
       "      <td>3qm6XTZ6MOCU11x8FIVbAGH5l5uMkT3/ZalWG1oo2Gc=</td>\n",
       "      <td>1</td>\n",
       "      <td>6</td>\n",
       "      <td>5</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                           msno  \\\n",
       "0  FGtllVqz18RPiwJj/edr2gV78zirAiY/9SmYvia+kCg=   \n",
       "1  Xumu+NIjS6QYVxDS4/t3SawvJ7viT9hPKXmf0RtLNx8=   \n",
       "2  Xumu+NIjS6QYVxDS4/t3SawvJ7viT9hPKXmf0RtLNx8=   \n",
       "3  Xumu+NIjS6QYVxDS4/t3SawvJ7viT9hPKXmf0RtLNx8=   \n",
       "4  FGtllVqz18RPiwJj/edr2gV78zirAiY/9SmYvia+kCg=   \n",
       "\n",
       "                                        song_id  source_system_tab  \\\n",
       "0  BBzumQNXUHKdEBOB7mAJuzok+IJA1c2Ryg/yzTF6tik=                  1   \n",
       "1  bhp/MpSNoqoxOIB+/l8WPqu6jldth4DIpCm3ayXnJqM=                  3   \n",
       "2  JNWfrrC7zNN7BdMpsISKa4Mw+xVJYNnxXh3/Epw7QgY=                  3   \n",
       "3  2A87tzfnJTSWqD7gIZHisolhe4DMdzkbd6LzO1KHjNs=                  3   \n",
       "4  3qm6XTZ6MOCU11x8FIVbAGH5l5uMkT3/ZalWG1oo2Gc=                  1   \n",
       "\n",
       "   source_screen_name  source_type  target  \n",
       "0                   6            5       1  \n",
       "1                   7            3       1  \n",
       "2                   7            3       1  \n",
       "3                   7            3       1  \n",
       "4                   6            5       1  "
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "train_data = pd.read_csv(dpath+\"train_data.csv\")\n",
    "train_data.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. Processing the test data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2.1 Merge rare category values and treat missing values as a new category, producing test_clean.csv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "def test_cleaner(input_file,output_file): \n",
    "    \"\"\"\n",
    "    clean test data and write test data into output_file\n",
    "    args:\n",
    "        input_file: input file path\n",
    "        output_file: output file path\n",
    "    \"\"\"\n",
    "    fin = open(dpath+input_file,\"r+\")\n",
    "    fout = open(dpath+output_file,\"w+\")\n",
    "    ocolnames = [\"id\",\"msno\",\"song_id\",\"source_system_tab\",\"source_screen_name\",\"source_type\"]\n",
    "    fout.write(\",\".join(ocolnames) + \"\\n\")  \n",
    "    start = 0\n",
    "    for line in fin:\n",
    "        if start == 0:\n",
    "            start+=1\n",
    "            continue\n",
    "        cols = line.strip().split(\",\")\n",
    "        #merge rare category values and treat missing values as a new category\n",
    "        if cols[3] in [\"notification\",\"settings\"]:\n",
    "            cols[3] = \"other system\"\n",
    "        if cols[3] == \"\":\n",
    "            cols[3] = \"nan system\"\n",
    "        #this feature has 2 extra values in test, folded into the same bucket\n",
    "        if cols[4] in [\"My library_Search\",\"Self profile more\",\"Concert\",\"Payment\",\\\n",
    "                       \"People local\",\"People global\"]:\n",
    "            cols[4] = \"other screen\"\n",
    "        if cols[4] == \"\":\n",
    "            cols[4] = \"nan screen\"\n",
    "        if cols[5] in [\"artist\",\"my-daily-playlist\"]:\n",
    "            cols[5] = \"other type\"\n",
    "        if cols[5] == \"\":\n",
    "            cols[5] = \"nan type\"\n",
    "        fout.write(\",\".join(cols)+\"\\n\")\n",
    "    fin.close()\n",
    "    fout.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2.2 LabelEncoder for the categorical features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "def test_encode(input_file):\n",
    "    \"\"\"\n",
    "    feature LabelEncoder\n",
    "    args:\n",
    "        input_file: input file path\n",
    "    return:\n",
    "        labelencoded features: source_tab,source_name,source_type\n",
    "    \"\"\"\n",
    "    test_clean = pd.read_csv(dpath+input_file)\n",
    "    le_tab = pickle.load(open(dpath+\"le_tab.pkl\",\"rb\"))\n",
    "    le_name = pickle.load(open(dpath+\"le_name.pkl\",\"rb\"))\n",
    "    le_type = pickle.load(open(dpath+\"le_type.pkl\",\"rb\"))\n",
    "    colnames = [\"source_system_tab\",\"source_screen_name\",\"source_type\"]\n",
    "    #LabelEncode the 3 features with transform only, using the encoders fitted on train\n",
    "    for colname in colnames:\n",
    "        if colname == \"source_system_tab\":\n",
    "            source_tab = le_tab.transform(test_clean[colname].astype(str))\n",
    "        if colname == \"source_screen_name\":\n",
    "            source_name = le_name.transform(test_clean[colname].astype(str))\n",
    "        if colname == \"source_type\":\n",
    "            source_type = le_type.transform(test_clean[colname].astype(str))\n",
    "    return source_tab,source_name,source_type"
   ]
  },
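  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One caveat worth remembering (toy sketch, hypothetical label): a fitted LabelEncoder's `transform` raises a ValueError on labels it never saw during `fit`, which is why the test-only screen names are folded into \"other screen\" by test_cleaner before encoding:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#demonstration: the train-fitted encoder rejects unseen labels\n",
    "le_tab = pickle.load(open(dpath+\"le_tab.pkl\",\"rb\"))\n",
    "try:\n",
    "    le_tab.transform([\"a label never seen in train\"])\n",
    "except ValueError as e:\n",
    "    print(\"unseen label rejected:\", e)"
   ]
  },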
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2.3 Write the processed test data to a file, producing test_data.csv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_test_data(input_file,output_file):\n",
    "    \"\"\"\n",
    "    write preprocessed feature into output_file\n",
    "    args:\n",
    "        input_file: input file path\n",
    "        output_file: output file path\n",
    "    \"\"\"\n",
    "    ocolnames = [\"id\",\"msno\",\"song_id\",\"source_system_tab\",\"source_screen_name\",\"source_type\"]\n",
    "    fin = open(dpath+input_file,\"r+\")\n",
    "    fout = open(dpath+output_file,\"w+\")\n",
    "    #write the header row\n",
    "    fout.write(\",\".join(ocolnames)+\"\\n\")\n",
    "    i = 0\n",
    "    start = 0\n",
    "    #get the label-encoded features\n",
    "    source_tab,source_name,source_type = test_encode(input_file)\n",
    "    #write the file line by line\n",
    "    for line in fin:\n",
    "        if start == 0:\n",
    "            start+=1\n",
    "            continue\n",
    "        cols = line.strip().split(\",\")\n",
    "        cols[3] = str(source_tab[i])\n",
    "        cols[4] = str(source_name[i])\n",
    "        cols[5] = str(source_type[i])\n",
    "        fout.write(\",\".join(cols)+\"\\n\")\n",
    "        i += 1\n",
    "        #test has 2556790 data rows; stop here to drop the trailing blank line, whose short split would otherwise raise an IndexError\n",
    "        if i == 2556790:\n",
    "            break\n",
    "    fin.close()\n",
    "    fout.close()    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Wall time: 29.1 s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "test_cleaner(\"test.csv\",\"test_clean.csv\")\n",
    "generate_test_data(\"test_clean.csv\",\"test_data.csv\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>msno</th>\n",
       "      <th>song_id</th>\n",
       "      <th>source_system_tab</th>\n",
       "      <th>source_screen_name</th>\n",
       "      <th>source_type</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>0</td>\n",
       "      <td>V8ruy7SGk7tDm3zA51DPpn6qutt+vmKMBKa21dp54uM=</td>\n",
       "      <td>WmHKgKMlp1lQMecNdNvDMkvIycZYHnFwDT72I5sIssc=</td>\n",
       "      <td>3</td>\n",
       "      <td>7</td>\n",
       "      <td>2</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>1</td>\n",
       "      <td>V8ruy7SGk7tDm3zA51DPpn6qutt+vmKMBKa21dp54uM=</td>\n",
       "      <td>y/rsZ9DC7FwK5F2PK2D5mj+aOBUJAjuu3dZ14NgE0vM=</td>\n",
       "      <td>3</td>\n",
       "      <td>7</td>\n",
       "      <td>2</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>2</td>\n",
       "      <td>/uQAlrAkaczV+nWCd2sPF2ekvXPRipV7q0l+gbLuxjw=</td>\n",
       "      <td>8eZLFOdGVdXBSqoAv5nsLigeH2BvKXzTQYtUM53I0k4=</td>\n",
       "      <td>0</td>\n",
       "      <td>16</td>\n",
       "      <td>9</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>3</td>\n",
       "      <td>1a6oo/iXKatxQx4eS9zTVD+KlSVaAFbTIqVvwLC1Y0k=</td>\n",
       "      <td>ztCf8thYsS4YN3GcIL/bvoxLm/T5mYBVKOO4C9NiVfQ=</td>\n",
       "      <td>7</td>\n",
       "      <td>11</td>\n",
       "      <td>7</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>4</td>\n",
       "      <td>1a6oo/iXKatxQx4eS9zTVD+KlSVaAFbTIqVvwLC1Y0k=</td>\n",
       "      <td>MKVMpslKcQhMaFEgcEQhEfi5+RZhMYlU3eRDpySrH8Y=</td>\n",
       "      <td>7</td>\n",
       "      <td>11</td>\n",
       "      <td>7</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   id                                          msno  \\\n",
       "0   0  V8ruy7SGk7tDm3zA51DPpn6qutt+vmKMBKa21dp54uM=   \n",
       "1   1  V8ruy7SGk7tDm3zA51DPpn6qutt+vmKMBKa21dp54uM=   \n",
       "2   2  /uQAlrAkaczV+nWCd2sPF2ekvXPRipV7q0l+gbLuxjw=   \n",
       "3   3  1a6oo/iXKatxQx4eS9zTVD+KlSVaAFbTIqVvwLC1Y0k=   \n",
       "4   4  1a6oo/iXKatxQx4eS9zTVD+KlSVaAFbTIqVvwLC1Y0k=   \n",
       "\n",
       "                                        song_id  source_system_tab  \\\n",
       "0  WmHKgKMlp1lQMecNdNvDMkvIycZYHnFwDT72I5sIssc=                  3   \n",
       "1  y/rsZ9DC7FwK5F2PK2D5mj+aOBUJAjuu3dZ14NgE0vM=                  3   \n",
       "2  8eZLFOdGVdXBSqoAv5nsLigeH2BvKXzTQYtUM53I0k4=                  0   \n",
       "3  ztCf8thYsS4YN3GcIL/bvoxLm/T5mYBVKOO4C9NiVfQ=                  7   \n",
       "4  MKVMpslKcQhMaFEgcEQhEfi5+RZhMYlU3eRDpySrH8Y=                  7   \n",
       "\n",
       "   source_screen_name  source_type  \n",
       "0                   7            2  \n",
       "1                   7            2  \n",
       "2                  16            9  \n",
       "3                  11            7  \n",
       "4                  11            7  "
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "test_data = pd.read_csv(dpath+\"test_data.csv\")\n",
    "test_data.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(2556790, 6)"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "test_data.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. Processing the songs data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "songs = pd.read_csv(dpath+\"songs.csv\")\n",
    "songs.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "songs.info()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "3.1 Clean songs and add the mult_genre feature"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "def songs_cleaner(input_file,output_file): \n",
    "    \"\"\"\n",
    "    function:\n",
    "        clean songs data and write songs data into output_file\n",
    "    params:\n",
    "        input_file: input file path\n",
    "        output_file: output file path\n",
    "    \"\"\"\n",
    "    #encoding='UTF-8' is required here: songs.csv contains non-ASCII artist/composer\n",
    "    #names, so the platform default encoding would fail or mis-decode some rows\n",
    "    fin = open(dpath+input_file,\"r+\",encoding='UTF-8')\n",
    "    fout = open(dpath+output_file,\"w+\")\n",
    "    ocolnames = [\"song_id\",\"song_length\",\"genre_ids\",\"language\",\"mult_genre\"]\n",
    "    fout.write(\",\".join(ocolnames) + \"\\n\")\n",
    "    start = 0\n",
    "    for line in fin:\n",
    "        mult_genre = \"0\"\n",
    "        if start == 0:\n",
    "            start+=1\n",
    "            continue\n",
    "        cols = line.strip().split(\",\")\n",
    "        if cols[2] == \"\":\n",
    "            #sentinel id for a missing genre\n",
    "            cols[2] = \"1234\"\n",
    "        if \"|\" in cols[2]:\n",
    "            #keep the first genre id (not just the first character) and flag multi-genre\n",
    "            cols[2] = cols[2].split(\"|\")[0]\n",
    "            mult_genre = \"1\"\n",
    "        #output columns, dropping cols[3]-cols[5] (artist_name, composer, lyricist)\n",
    "        outcols = [cols[0],cols[1],cols[2],cols[6],mult_genre]\n",
    "        fout.write(\",\".join(outcols)+\"\\n\")\n",
    "    fin.close()\n",
    "    fout.close()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "3.2 Group genre_ids: bucket rare categories into small_1, small_2, small_3"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "def merge_category(input_file,output_file):\n",
    "    \"\"\"\n",
    "    process genre_ids: merge small categories into small_1,small_2,small_3\n",
    "    params:\n",
    "        input_file: input file path\n",
    "        output_file: output file path\n",
    "    \"\"\"\n",
    "    data = pd.read_csv(dpath+input_file)\n",
    "    small_list1 = []\n",
    "    small_list2 = []\n",
    "    small_list3 = []\n",
    "    feature_count = data[\"genre_ids\"].value_counts()\n",
    "    for i in range(len(feature_count.values)):\n",
    "        if feature_count.values[i]<500:\n",
    "            small_list1.append(feature_count.index[i])\n",
    "        if feature_count.values[i]>=500 and feature_count.values[i]<2000:\n",
    "            small_list2.append(feature_count.index[i])\n",
    "        if feature_count.values[i]>=2000 and feature_count.values[i]<10000:\n",
    "            small_list3.append(feature_count.index[i])\n",
    "    #the else branch keeps unmatched values; assign back to data[\"genre_ids\"], otherwise the DataFrame is left unchanged\n",
    "    data[\"genre_ids\"] = data.genre_ids.apply(lambda x:\"small_1\" if x in small_list1 else x)\n",
    "    data[\"genre_ids\"] = data.genre_ids.apply(lambda x:\"small_2\" if x in small_list2 else x)\n",
    "    data[\"genre_ids\"] = data.genre_ids.apply(lambda x:\"small_3\" if x in small_list3 else x)\n",
    "    data.to_csv(dpath+output_file,index=False)"
   ]
  },
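  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The frequency-based bucketing above can be sketched on toy data (hypothetical ids and a cutoff of 2 instead of 500/2000/10000):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#toy sketch: values rarer than the cutoff collapse into one bucket\n",
    "s = pd.Series([\"459\"]*3 + [\"947\"])\n",
    "counts = s.value_counts()\n",
    "print(s.apply(lambda x: \"small_1\" if counts[x] < 2 else x).tolist())\n",
    "#['459', '459', '459', 'small_1']"
   ]
  },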
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "3.3 LabelEncode the categorical columns of songs_clean; StandardScale the continuous one"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "def songs_encode(input_file):\n",
    "    \"\"\"\n",
    "     LabelEncoder to genre_ids and language, StandardScaler to song_length\n",
    "     args:\n",
    "         input_file: input file path\n",
    "    \"\"\"\n",
    "    songs_clean = pd.read_csv(dpath+input_file)\n",
    "    #standardize song_length\n",
    "    ss = StandardScaler()\n",
    "    #StandardScaler rejects 1-D arrays, so reshape to 2-D (note how values are indexed when written out)\n",
    "    song_length = np.array(songs_clean[\"song_length\"]).reshape(-1,1)\n",
    "    song_length = ss.fit_transform(song_length)\n",
    "    song_length = np.around(song_length,decimals=5)\n",
    "    #encode genre_ids and language\n",
    "    le = preprocessing.LabelEncoder()\n",
    "    colnames = [\"genre_ids\",\"language\"]\n",
    "    for colname in colnames:\n",
    "        if colname == \"genre_ids\":\n",
    "            genre_ids = le.fit_transform(songs_clean[colname].astype(str))\n",
    "        if colname == \"language\":\n",
    "            language = le.fit_transform(songs_clean[colname].astype(str))\n",
    "    return song_length,genre_ids,language"
   ]
  },
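  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small sketch of the reshape requirement (toy song lengths in milliseconds): StandardScaler expects a 2-D array of shape (n_samples, n_features), so a single column is reshaped to (-1, 1), and each standardized value is read back with [i][0]:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#toy column of 3 lengths, reshaped to 3 rows x 1 feature\n",
    "x = np.array([200000.0, 240000.0, 280000.0]).reshape(-1,1)\n",
    "z = StandardScaler().fit_transform(x)\n",
    "print(z.mean(), z.std())  #~0.0 and 1.0 after standardization\n",
    "print(z[0][0])            #scalar for the first row"
   ]
  },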
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "3.4 Write the cleaned songs feature columns to a file, producing songs_data.csv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_songs_data(input_file,output_file):\n",
    "    \"\"\"\n",
    "    write processed feature into output_file and generate songs_data.csv\n",
    "    args:\n",
    "        input_file: input file path\n",
    "        output_file: output file path\n",
    "    \"\"\"\n",
    "    ocolnames = [\"song_id\",\"song_length\",\"genre_ids\",\"language\",\"mult_genre\"]\n",
    "    fin = open(dpath+input_file,\"r+\")\n",
    "    fout = open(dpath+output_file,\"w+\")\n",
    "    fout.write(\",\".join(ocolnames)+\"\\n\")\n",
    "    i = 0\n",
    "    start = 0\n",
    "    song_length,genre_ids,language = songs_encode(\"songs_clean.csv\")\n",
    "    for line in fin:\n",
    "        if start == 0:\n",
    "            start+=1\n",
    "            continue\n",
    "        cols = line.strip().split(\",\")\n",
    "        #song_length was reshaped to 2-D, so song_length[i][0] extracts the scalar\n",
    "        cols[1] = str(song_length[i][0])\n",
    "        cols[2] = str(genre_ids[i])\n",
    "        cols[3] = str(language[i])\n",
    "        fout.write(\",\".join(cols)+\"\\n\")\n",
    "        i += 1\n",
    "        #songs has 2296320 data rows; stop here to drop the trailing blank line\n",
    "        if i == 2296320:\n",
    "            break\n",
    "    fin.close()\n",
    "    fout.close()    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {
    "collapsed": true
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "D:\\Program Files\\Anaconda\\lib\\site-packages\\sklearn\\utils\\validation.py:595: DataConversionWarning: Data with input dtype int64 was converted to float64 by StandardScaler.\n",
      "  warnings.warn(msg, DataConversionWarning)\n",
      "D:\\Program Files\\Anaconda\\lib\\site-packages\\sklearn\\utils\\validation.py:595: DataConversionWarning: Data with input dtype int64 was converted to float64 by StandardScaler.\n",
      "  warnings.warn(msg, DataConversionWarning)\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Wall time: 2min\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "songs_cleaner(\"songs.csv\",\"songs_clean.csv\")\n",
    "merge_category(\"songs_clean.csv\",\"songs_clean.csv\")\n",
    "generate_songs_data(\"songs_clean.csv\",\"songs_data.csv\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>song_id</th>\n",
       "      <th>song_length</th>\n",
       "      <th>genre_ids</th>\n",
       "      <th>language</th>\n",
       "      <th>mult_genre</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>CXoTN1eb7AI+DntdU1vbcwGRV4SCIDxZu+YD8JP8r4E=</td>\n",
       "      <td>0.00402</td>\n",
       "      <td>25</td>\n",
       "      <td>4</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>o0kFgae9QtnYgRkVPqLJwa05zIhRlUjfF7O1tDw0ZDU=</td>\n",
       "      <td>-0.30865</td>\n",
       "      <td>22</td>\n",
       "      <td>5</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>DwVvVurfpuz+XPuFvucclVQEyPqcpUkHR0ne1RQzPs0=</td>\n",
       "      <td>-0.09454</td>\n",
       "      <td>25</td>\n",
       "      <td>5</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>dKMBWoZyScdxSkihKG+Vf47nc18N9q4m58+b4e7dSSE=</td>\n",
       "      <td>0.16506</td>\n",
       "      <td>25</td>\n",
       "      <td>4</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>W3bqWd3T+VeHFzHAUfARgW9AvVRaF4N5Yzm4Mr6Eo/o=</td>\n",
       "      <td>-0.66287</td>\n",
       "      <td>28</td>\n",
       "      <td>8</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                        song_id  song_length  genre_ids  \\\n",
       "0  CXoTN1eb7AI+DntdU1vbcwGRV4SCIDxZu+YD8JP8r4E=      0.00402         25   \n",
       "1  o0kFgae9QtnYgRkVPqLJwa05zIhRlUjfF7O1tDw0ZDU=     -0.30865         22   \n",
       "2  DwVvVurfpuz+XPuFvucclVQEyPqcpUkHR0ne1RQzPs0=     -0.09454         25   \n",
       "3  dKMBWoZyScdxSkihKG+Vf47nc18N9q4m58+b4e7dSSE=      0.16506         25   \n",
       "4  W3bqWd3T+VeHFzHAUfARgW9AvVRaF4N5Yzm4Mr6Eo/o=     -0.66287         28   \n",
       "\n",
       "   language  mult_genre  \n",
       "0         4           0  \n",
       "1         5           0  \n",
       "2         5           0  \n",
       "3         4           0  \n",
       "4         8           0  "
      ]
     },
     "execution_count": 27,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "songs_data = pd.read_csv(dpath+\"songs_data.csv\")\n",
    "songs_data.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(2296320, 5)"
      ]
     },
     "execution_count": 28,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "songs_data.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "len(songs_data[\"genre_ids\"].unique())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. Processing the members data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "members = pd.read_csv(dpath + \"members.csv\")\n",
    "members.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "4.1 Clean the features  \n",
    "Discretize bd, registration_init_time, and expiration_date"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [],
   "source": [
    "def members_encode(members_file):\n",
    "    \"\"\"\n",
    "    process feature: bd,registration_init_time,expiration_date\n",
    "    args:\n",
    "        members_file: members file path\n",
    "    return:\n",
    "        processed feature: bd,registration_init_time,expiration_date\n",
    "    \"\"\"\n",
    "    members = pd.read_csv(dpath+members_file)\n",
    "    bd = members[\"bd\"].apply(lambda x : 0 if x<0 or x>100 else x)\n",
    "    bd = pd.cut(bd.values,bins=[-1,6,12,18,22,25,30,35,40,50,60,100],labels=False)\n",
    "    # pd.cut expects an array-like input, hence the .values\n",
    "    registration_init_time = pd.cut(members[\"registration_init_time\"].values,\\\n",
    "                        bins=range(20040000,20190000,10000),labels=False)\n",
    "    expiration_date = members[\"expiration_date\"].apply(lambda\\\n",
    "                  x : 20170930 if x==19700101 else x)\n",
    "    expiration_date = pd.cut(expiration_date.values,\\\n",
    "                        bins=range(20040000,20220000,10000),labels=False)\n",
    "    return bd, registration_init_time, expiration_date"
   ]
  },
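  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick self-contained sketch (toy `bd` values, not the real dataset) of how the `pd.cut` call above turns ages into ordinal bucket codes; implausible ages are clipped to 0 first, as in `members_encode`:\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# clip implausible ages to 0, then bucket into ordinal codes (labels=False)\n",
    "bd = pd.Series([5, 21, 33, -3, 150]).apply(lambda x: 0 if x < 0 or x > 100 else x)\n",
    "codes = pd.cut(bd.values, bins=[-1, 6, 12, 18, 22, 25, 30, 35, 40, 50, 60, 100], labels=False)\n",
    "print(list(codes))  # [0, 3, 6, 0, 0]\n",
    "```"
   ]
  },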
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "4.2 Write the cleaned features to members_data.csv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_members_data(input_file,output_file):\n",
    "    \"\"\"\n",
    "    write processed feature into output file\n",
    "    args:\n",
    "        input_file: input file path\n",
    "        output_file: output file path\n",
    "    \"\"\"\n",
    "    ocolnames = [\"msno\",\"city\",\"bd\",\"gender\",\"registered_via\",\\\n",
    "                 \"registration_init_time\",\"expiration_date\"]\n",
    "    fin = open(dpath+input_file,\"r+\")\n",
    "    fout = open(dpath+output_file,\"w+\")\n",
    "    fout.write(\",\".join(ocolnames)+\"\\n\")\n",
    "    bd,registration_init_time,expiration_date = members_encode(input_file)\n",
    "    i = 0\n",
    "    start = 0\n",
    "    for line in fin:\n",
    "        if start == 0:\n",
    "            start+=1\n",
    "            continue\n",
    "        cols = line.strip().split(\",\")\n",
    "        msno = cols[0]\n",
    "        city = cols[1]\n",
    "        registered_via = cols[4]\n",
    "        # encode gender: female=0, male=1, unknown=2\n",
    "        if cols[3] == \"female\":\n",
    "            gender = \"0\"\n",
    "        elif cols[3] == \"male\":\n",
    "            gender = \"1\"\n",
    "        else:\n",
    "            gender = \"2\"\n",
    "        # pd.cut returns NaN for dates outside the bins; fall back to the last bucket\n",
    "        if np.isnan(expiration_date[i]):\n",
    "            expiration_date[i] = 16\n",
    "        outcols = [msno,city,str(bd[i]),gender,registered_via,\\\n",
    "                  str(registration_init_time[i]),str(expiration_date[i])]\n",
    "        fout.write(\",\".join(outcols)+\"\\n\")\n",
    "        i += 1\n",
    "        # the file has 34403 data rows; stop before the trailing blank line\n",
    "        if i == 34403:\n",
    "            break\n",
    "    fin.close()\n",
    "    fout.close()    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Wall time: 878 ms\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "generate_members_data(\"members.csv\",\"members_data.csv\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>msno</th>\n",
       "      <th>city</th>\n",
       "      <th>bd</th>\n",
       "      <th>gender</th>\n",
       "      <th>registered_via</th>\n",
       "      <th>registration_init_time</th>\n",
       "      <th>expiration_date</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>XQxgAYj3klVKjR3oxPPXYYFp4soD4TuBghkhMTD4oTw=</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>2</td>\n",
       "      <td>7</td>\n",
       "      <td>7</td>\n",
       "      <td>13</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>UizsfmJb9mV54qE9hCYyU07Va97c0lCRLEQX3ae+ztM=</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>2</td>\n",
       "      <td>7</td>\n",
       "      <td>11</td>\n",
       "      <td>13</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>D8nEhsIOBSoE6VthTaqDX8U6lqjJ7dLdr72mOyLya2A=</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>2</td>\n",
       "      <td>4</td>\n",
       "      <td>12</td>\n",
       "      <td>13</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>mCuD+tZ1hERA/o5GPqk38e041J8ZsBaLcu7nGoIIvhI=</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>2</td>\n",
       "      <td>9</td>\n",
       "      <td>11</td>\n",
       "      <td>11</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>q4HRBfVSssAFS9iRfxWrohxuk9kCYMKjHOEagUMV6rQ=</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>2</td>\n",
       "      <td>4</td>\n",
       "      <td>13</td>\n",
       "      <td>13</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                           msno  city  bd  gender  \\\n",
       "0  XQxgAYj3klVKjR3oxPPXYYFp4soD4TuBghkhMTD4oTw=     1   0       2   \n",
       "1  UizsfmJb9mV54qE9hCYyU07Va97c0lCRLEQX3ae+ztM=     1   0       2   \n",
       "2  D8nEhsIOBSoE6VthTaqDX8U6lqjJ7dLdr72mOyLya2A=     1   0       2   \n",
       "3  mCuD+tZ1hERA/o5GPqk38e041J8ZsBaLcu7nGoIIvhI=     1   0       2   \n",
       "4  q4HRBfVSssAFS9iRfxWrohxuk9kCYMKjHOEagUMV6rQ=     1   0       2   \n",
       "\n",
       "   registered_via  registration_init_time  expiration_date  \n",
       "0               7                       7               13  \n",
       "1               7                      11               13  \n",
       "2               4                      12               13  \n",
       "3               9                      11               11  \n",
       "4               4                      13               13  "
      ]
     },
     "execution_count": 35,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "members_data = pd.read_csv(dpath+\"members_data.csv\")\n",
    "members_data.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(34403, 7)"
      ]
     },
     "execution_count": 36,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "members_data.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5. Generate the raw-feature training and test files train_merge and test_merge"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "5.1 Store songs_data and members_data as dictionaries  \n",
    "songs_dict:  \n",
    "{\"song_id\": [\"song_length\",\"genre_ids\",\"language\",\"mult_genre\"], ...}   \n",
    "members_dict:  \n",
    "{\"msno\": [\"city\",\"bd\",\"gender\",\"registered_via\",\"registration_init_time\",\"expiration_date\"], ...}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_info_dict(input_file,output_file):\n",
    "    \"\"\"\n",
    "    get members_dict and songs_dict\n",
    "    args:\n",
    "        input_file: input file path\n",
    "        output_file: output file path\n",
    "    \"\"\"\n",
    "    fin = open(dpath+input_file,\"r+\")\n",
    "    info_dict = dict()\n",
    "    start = 0\n",
    "    for line in fin:\n",
    "        if start == 0:\n",
    "            start+=1\n",
    "            continue\n",
    "        cols = line.strip().split(\",\")\n",
    "        # later rows with the same key overwrite earlier ones\n",
    "        info_dict[cols[0]] = cols[1:]\n",
    "    fin.close()\n",
    "    pickle.dump(info_dict,open(dpath+output_file,\"wb\"))"
   ]
  },
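  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The loop above is equivalent to indexing the CSV by its first column; a minimal pandas sketch of the same idea on a toy frame (hypothetical ids, not the real data):\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# map each key to its remaining columns as a list of strings, like generate_info_dict\n",
    "df = pd.DataFrame({'song_id': ['a', 'b'], 'genre_ids': [25, 22], 'language': [4, 5]})\n",
    "info_dict = {k: [str(v) for v in row]\n",
    "             for k, row in zip(df['song_id'], df.drop(columns='song_id').values)}\n",
    "print(info_dict['a'])  # ['25', '4']\n",
    "```"
   ]
  },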
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "5.2 Merge the train, songs, and members data to generate the raw-feature training file train_merge.csv    \n",
    "The train set is mapped to songs and members features via song_id and msno"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "dpath = \"./data/\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_train_merge(input_file,output_file):\n",
    "    \"\"\"\n",
    "    merge train, songs and members feature, generate train_merge.csv\n",
    "    args:\n",
    "        input_file: input file path\n",
    "        output_file: output file path\n",
    "    \"\"\"\n",
    "    members_dict = pickle.load(open(dpath+\"members_dict.pkl\",\"rb\"))\n",
    "    songs_dict = pickle.load(open(dpath+\"songs_dict.pkl\",\"rb\"))\n",
    "    fin = open(dpath+input_file,\"r+\")\n",
    "    fout = open(dpath+output_file,\"w+\")\n",
    "    # write the header; the order of ocolnames must match outcols below\n",
    "    train_cols = [\"msno\",\"song_id\",\"source_system_tab\",\"source_screen_name\",\"source_type\",\"target\"]\n",
    "    members_cols = [\"city\",\"bd\",\"gender\",\"registered_via\",\"registration_init_time\",\"expiration_date\"]\n",
    "    songs_cols = [\"song_length\",\"genre_ids\",\"language\",\"mult_genre\"]\n",
    "    ocolnames = train_cols[:5]+members_cols+songs_cols+[\"target\"]\n",
    "    fout.write(\",\".join(ocolnames)+\"\\n\")\n",
    "    outcols = []\n",
    "    start = 0\n",
    "    for line in fin:\n",
    "        if start==0:\n",
    "            start += 1\n",
    "            continue\n",
    "        cols = line.strip().split(\",\")\n",
    "        # rows whose msno or song_id is missing from the dicts are dropped\n",
    "        if cols[0] in members_dict and cols[1] in songs_dict:\n",
    "            # merge the three feature groups; cols[5] (target) is a str, so wrap it in a list\n",
    "            outcols = cols[:5]+members_dict[cols[0]]+songs_dict[cols[1]]+[cols[5]]\n",
    "        else:\n",
    "            continue\n",
    "        fout.write(\",\".join(outcols)+\"\\n\")\n",
    "    fin.close()\n",
    "    fout.close()"
   ]
  },
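  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The same join can be written with `pandas.merge`; a minimal sketch on toy frames (an inner join drops rows whose msno or song_id has no match, just like the `continue` branch above):\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "train = pd.DataFrame({'msno': ['u1', 'u2'], 'song_id': ['s1', 's9'], 'target': [1, 0]})\n",
    "members = pd.DataFrame({'msno': ['u1', 'u2'], 'city': [1, 13]})\n",
    "songs = pd.DataFrame({'song_id': ['s1'], 'language': [4]})\n",
    "# inner joins keep only rows matched in both members and songs\n",
    "merged = train.merge(members, on='msno', how='inner').merge(songs, on='song_id', how='inner')\n",
    "print(merged.shape)  # (1, 5)\n",
    "```"
   ]
  },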
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Wall time: 8.62 s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "generate_info_dict(\"songs_data.csv\",\"songs_dict.pkl\")\n",
    "generate_info_dict(\"members_data.csv\",\"members_dict.pkl\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 57,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Wall time: 35.2 s\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "generate_train_merge(\"train_data.csv\",\"train_merge.csv\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 58,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>msno</th>\n",
       "      <th>song_id</th>\n",
       "      <th>source_system_tab</th>\n",
       "      <th>source_screen_name</th>\n",
       "      <th>source_type</th>\n",
       "      <th>city</th>\n",
       "      <th>bd</th>\n",
       "      <th>gender</th>\n",
       "      <th>registered_via</th>\n",
       "      <th>registration_init_time</th>\n",
       "      <th>expiration_date</th>\n",
       "      <th>song_length</th>\n",
       "      <th>genre_ids</th>\n",
       "      <th>language</th>\n",
       "      <th>mult_genre</th>\n",
       "      <th>target</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>FGtllVqz18RPiwJj/edr2gV78zirAiY/9SmYvia+kCg=</td>\n",
       "      <td>BBzumQNXUHKdEBOB7mAJuzok+IJA1c2Ryg/yzTF6tik=</td>\n",
       "      <td>1</td>\n",
       "      <td>6</td>\n",
       "      <td>5</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>2</td>\n",
       "      <td>7</td>\n",
       "      <td>8</td>\n",
       "      <td>13</td>\n",
       "      <td>-0.25183</td>\n",
       "      <td>17</td>\n",
       "      <td>8</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>Xumu+NIjS6QYVxDS4/t3SawvJ7viT9hPKXmf0RtLNx8=</td>\n",
       "      <td>bhp/MpSNoqoxOIB+/l8WPqu6jldth4DIpCm3ayXnJqM=</td>\n",
       "      <td>3</td>\n",
       "      <td>7</td>\n",
       "      <td>3</td>\n",
       "      <td>13</td>\n",
       "      <td>4</td>\n",
       "      <td>0</td>\n",
       "      <td>9</td>\n",
       "      <td>7</td>\n",
       "      <td>13</td>\n",
       "      <td>0.23361</td>\n",
       "      <td>6</td>\n",
       "      <td>8</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>Xumu+NIjS6QYVxDS4/t3SawvJ7viT9hPKXmf0RtLNx8=</td>\n",
       "      <td>JNWfrrC7zNN7BdMpsISKa4Mw+xVJYNnxXh3/Epw7QgY=</td>\n",
       "      <td>3</td>\n",
       "      <td>7</td>\n",
       "      <td>3</td>\n",
       "      <td>13</td>\n",
       "      <td>4</td>\n",
       "      <td>0</td>\n",
       "      <td>9</td>\n",
       "      <td>7</td>\n",
       "      <td>13</td>\n",
       "      <td>-0.13422</td>\n",
       "      <td>6</td>\n",
       "      <td>8</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>Xumu+NIjS6QYVxDS4/t3SawvJ7viT9hPKXmf0RtLNx8=</td>\n",
       "      <td>2A87tzfnJTSWqD7gIZHisolhe4DMdzkbd6LzO1KHjNs=</td>\n",
       "      <td>3</td>\n",
       "      <td>7</td>\n",
       "      <td>3</td>\n",
       "      <td>13</td>\n",
       "      <td>4</td>\n",
       "      <td>0</td>\n",
       "      <td>9</td>\n",
       "      <td>7</td>\n",
       "      <td>13</td>\n",
       "      <td>0.05294</td>\n",
       "      <td>40</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>FGtllVqz18RPiwJj/edr2gV78zirAiY/9SmYvia+kCg=</td>\n",
       "      <td>3qm6XTZ6MOCU11x8FIVbAGH5l5uMkT3/ZalWG1oo2Gc=</td>\n",
       "      <td>1</td>\n",
       "      <td>6</td>\n",
       "      <td>5</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>2</td>\n",
       "      <td>7</td>\n",
       "      <td>8</td>\n",
       "      <td>13</td>\n",
       "      <td>-0.36784</td>\n",
       "      <td>1</td>\n",
       "      <td>8</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                           msno  \\\n",
       "0  FGtllVqz18RPiwJj/edr2gV78zirAiY/9SmYvia+kCg=   \n",
       "1  Xumu+NIjS6QYVxDS4/t3SawvJ7viT9hPKXmf0RtLNx8=   \n",
       "2  Xumu+NIjS6QYVxDS4/t3SawvJ7viT9hPKXmf0RtLNx8=   \n",
       "3  Xumu+NIjS6QYVxDS4/t3SawvJ7viT9hPKXmf0RtLNx8=   \n",
       "4  FGtllVqz18RPiwJj/edr2gV78zirAiY/9SmYvia+kCg=   \n",
       "\n",
       "                                        song_id  source_system_tab  \\\n",
       "0  BBzumQNXUHKdEBOB7mAJuzok+IJA1c2Ryg/yzTF6tik=                  1   \n",
       "1  bhp/MpSNoqoxOIB+/l8WPqu6jldth4DIpCm3ayXnJqM=                  3   \n",
       "2  JNWfrrC7zNN7BdMpsISKa4Mw+xVJYNnxXh3/Epw7QgY=                  3   \n",
       "3  2A87tzfnJTSWqD7gIZHisolhe4DMdzkbd6LzO1KHjNs=                  3   \n",
       "4  3qm6XTZ6MOCU11x8FIVbAGH5l5uMkT3/ZalWG1oo2Gc=                  1   \n",
       "\n",
       "   source_screen_name  source_type  city  bd  gender  registered_via  \\\n",
       "0                   6            5     1   0       2               7   \n",
       "1                   7            3    13   4       0               9   \n",
       "2                   7            3    13   4       0               9   \n",
       "3                   7            3    13   4       0               9   \n",
       "4                   6            5     1   0       2               7   \n",
       "\n",
       "   registration_init_time  expiration_date  song_length  genre_ids  language  \\\n",
       "0                       8               13     -0.25183         17         8   \n",
       "1                       7               13      0.23361          6         8   \n",
       "2                       7               13     -0.13422          6         8   \n",
       "3                       7               13      0.05294         40         0   \n",
       "4                       8               13     -0.36784          1         8   \n",
       "\n",
       "   mult_genre  target  \n",
       "0           0       1  \n",
       "1           0       1  \n",
       "2           0       1  \n",
       "3           0       1  \n",
       "4           0       1  "
      ]
     },
     "execution_count": 58,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "train_merge = pd.read_csv(dpath+\"train_merge.csv\")\n",
    "train_merge.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(7377403, 16)"
      ]
     },
     "execution_count": 59,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "train_merge.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 60,
   "metadata": {
    "collapsed": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "msno                      0\n",
       "song_id                   0\n",
       "source_system_tab         0\n",
       "source_screen_name        0\n",
       "source_type               0\n",
       "city                      0\n",
       "bd                        0\n",
       "gender                    0\n",
       "registered_via            0\n",
       "registration_init_time    0\n",
       "expiration_date           0\n",
       "song_length               0\n",
       "genre_ids                 0\n",
       "language                  0\n",
       "mult_genre                0\n",
       "target                    0\n",
       "dtype: int64"
      ]
     },
     "execution_count": 60,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "train_merge.isnull().sum()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "5.3 Merge the test, songs, and members data to generate the raw-feature test file test_merge.csv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 69,
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_test_merge(input_file,output_file):\n",
    "    \"\"\"\n",
    "    merge test, songs and members feature, generate test_merge.csv\n",
    "    args:\n",
    "        input_file: input file path\n",
    "        output_file: output file path\n",
    "    \"\"\"\n",
    "    songs_dict = pickle.load(open(dpath+\"songs_dict.pkl\",\"rb\"))\n",
    "    members_dict = pickle.load(open(dpath+\"members_dict.pkl\",\"rb\"))\n",
    "    # default feature values for msno/song_id not seen in the dicts\n",
    "    songs_dict[-1] = [\"-0.12657\",\"25\",\"8\",\"0\"]\n",
    "    members_dict[-1] = [\"1\",\"0\",\"2\",\"4\",\"12\",\"13\"]\n",
    "    fin = open(dpath+input_file,\"r+\")\n",
    "    fout = open(dpath+output_file,\"w+\")\n",
    "    # the order of ocolnames must match outcols below\n",
    "    # test has an extra \"id\" column and no \"target\"\n",
    "    test_cols = [\"id\",\"msno\",\"song_id\",\"source_system_tab\",\"source_screen_name\",\"source_type\"]\n",
    "    members_cols = [\"city\",\"bd\",\"gender\",\"registered_via\",\"registration_init_time\",\"expiration_date\"]\n",
    "    songs_cols = [\"song_length\",\"genre_ids\",\"language\",\"mult_genre\"]\n",
    "    ocolnames = test_cols+members_cols+songs_cols\n",
    "    fout.write(\",\".join(ocolnames)+\"\\n\")\n",
    "    outcols = []\n",
    "    start = 0\n",
    "    for line in fin:\n",
    "        # skip the header line\n",
    "        if start == 0:\n",
    "            start += 1\n",
    "            continue        \n",
    "        cols = line.strip().split(\",\")\n",
    "        # indices shift by one because test has a leading id column;\n",
    "        # ids seen only in test fall back to the default entries below\n",
    "        if cols[1] in members_dict and cols[2] in songs_dict:\n",
    "            # merge the three feature groups and write them out\n",
    "            outcols = cols+members_dict[cols[1]]+songs_dict[cols[2]]\n",
    "        elif cols[1] in members_dict and cols[2] not in songs_dict:\n",
    "            outcols = cols+members_dict[cols[1]]+songs_dict[-1]\n",
    "        elif cols[1] not in members_dict and cols[2] in songs_dict:\n",
    "            outcols = cols+members_dict[-1]+songs_dict[cols[2]]\n",
    "        else:\n",
    "            outcols = cols+members_dict[-1]+songs_dict[-1]\n",
    "        fout.write(\",\".join(outcols)+\"\\n\")\n",
    "    fin.close()\n",
    "    fout.close()"
   ]
  },
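  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The four-way branch above can be collapsed with `dict.get`, using the -1 entry as the fallback; a minimal sketch with toy dicts:\n",
    "```python\n",
    "members_dict = {'u1': ['1', '0'], -1: ['1', '0']}\n",
    "songs_dict = {'s1': ['0.1', '25'], -1: ['-0.12657', '25']}\n",
    "row = ['0', 'u1', 's9']  # id, msno, unseen song_id\n",
    "# fall back to the -1 defaults when an id is missing\n",
    "outcols = row + members_dict.get(row[1], members_dict[-1]) + songs_dict.get(row[2], songs_dict[-1])\n",
    "print(outcols)  # ['0', 'u1', 's9', '1', '0', '-0.12657', '25']\n",
    "```"
   ]
  },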
  {
   "cell_type": "code",
   "execution_count": 70,
   "metadata": {},
   "outputs": [],
   "source": [
    "generate_test_merge(\"test_data.csv\",\"test_merge.csv\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 71,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>msno</th>\n",
       "      <th>song_id</th>\n",
       "      <th>source_system_tab</th>\n",
       "      <th>source_screen_name</th>\n",
       "      <th>source_type</th>\n",
       "      <th>city</th>\n",
       "      <th>bd</th>\n",
       "      <th>gender</th>\n",
       "      <th>registered_via</th>\n",
       "      <th>registration_init_time</th>\n",
       "      <th>expiration_date</th>\n",
       "      <th>song_length</th>\n",
       "      <th>genre_ids</th>\n",
       "      <th>language</th>\n",
       "      <th>mult_genre</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>0</td>\n",
       "      <td>V8ruy7SGk7tDm3zA51DPpn6qutt+vmKMBKa21dp54uM=</td>\n",
       "      <td>WmHKgKMlp1lQMecNdNvDMkvIycZYHnFwDT72I5sIssc=</td>\n",
       "      <td>3</td>\n",
       "      <td>7</td>\n",
       "      <td>2</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>2</td>\n",
       "      <td>7</td>\n",
       "      <td>12</td>\n",
       "      <td>13</td>\n",
       "      <td>-0.14208</td>\n",
       "      <td>24</td>\n",
       "      <td>4</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>1</td>\n",
       "      <td>V8ruy7SGk7tDm3zA51DPpn6qutt+vmKMBKa21dp54uM=</td>\n",
       "      <td>y/rsZ9DC7FwK5F2PK2D5mj+aOBUJAjuu3dZ14NgE0vM=</td>\n",
       "      <td>3</td>\n",
       "      <td>7</td>\n",
       "      <td>2</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>2</td>\n",
       "      <td>7</td>\n",
       "      <td>12</td>\n",
       "      <td>13</td>\n",
       "      <td>0.45662</td>\n",
       "      <td>25</td>\n",
       "      <td>4</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>2</td>\n",
       "      <td>/uQAlrAkaczV+nWCd2sPF2ekvXPRipV7q0l+gbLuxjw=</td>\n",
       "      <td>8eZLFOdGVdXBSqoAv5nsLigeH2BvKXzTQYtUM53I0k4=</td>\n",
       "      <td>0</td>\n",
       "      <td>16</td>\n",
       "      <td>9</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>2</td>\n",
       "      <td>4</td>\n",
       "      <td>12</td>\n",
       "      <td>12</td>\n",
       "      <td>0.42822</td>\n",
       "      <td>12</td>\n",
       "      <td>2</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>3</td>\n",
       "      <td>1a6oo/iXKatxQx4eS9zTVD+KlSVaAFbTIqVvwLC1Y0k=</td>\n",
       "      <td>ztCf8thYsS4YN3GcIL/bvoxLm/T5mYBVKOO4C9NiVfQ=</td>\n",
       "      <td>7</td>\n",
       "      <td>11</td>\n",
       "      <td>7</td>\n",
       "      <td>3</td>\n",
       "      <td>5</td>\n",
       "      <td>1</td>\n",
       "      <td>9</td>\n",
       "      <td>3</td>\n",
       "      <td>13</td>\n",
       "      <td>0.23750</td>\n",
       "      <td>25</td>\n",
       "      <td>8</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>4</td>\n",
       "      <td>1a6oo/iXKatxQx4eS9zTVD+KlSVaAFbTIqVvwLC1Y0k=</td>\n",
       "      <td>MKVMpslKcQhMaFEgcEQhEfi5+RZhMYlU3eRDpySrH8Y=</td>\n",
       "      <td>7</td>\n",
       "      <td>11</td>\n",
       "      <td>7</td>\n",
       "      <td>3</td>\n",
       "      <td>5</td>\n",
       "      <td>1</td>\n",
       "      <td>9</td>\n",
       "      <td>3</td>\n",
       "      <td>13</td>\n",
       "      <td>-0.30702</td>\n",
       "      <td>32</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   id                                          msno  \\\n",
       "0   0  V8ruy7SGk7tDm3zA51DPpn6qutt+vmKMBKa21dp54uM=   \n",
       "1   1  V8ruy7SGk7tDm3zA51DPpn6qutt+vmKMBKa21dp54uM=   \n",
       "2   2  /uQAlrAkaczV+nWCd2sPF2ekvXPRipV7q0l+gbLuxjw=   \n",
       "3   3  1a6oo/iXKatxQx4eS9zTVD+KlSVaAFbTIqVvwLC1Y0k=   \n",
       "4   4  1a6oo/iXKatxQx4eS9zTVD+KlSVaAFbTIqVvwLC1Y0k=   \n",
       "\n",
       "                                        song_id  source_system_tab  \\\n",
       "0  WmHKgKMlp1lQMecNdNvDMkvIycZYHnFwDT72I5sIssc=                  3   \n",
       "1  y/rsZ9DC7FwK5F2PK2D5mj+aOBUJAjuu3dZ14NgE0vM=                  3   \n",
       "2  8eZLFOdGVdXBSqoAv5nsLigeH2BvKXzTQYtUM53I0k4=                  0   \n",
       "3  ztCf8thYsS4YN3GcIL/bvoxLm/T5mYBVKOO4C9NiVfQ=                  7   \n",
       "4  MKVMpslKcQhMaFEgcEQhEfi5+RZhMYlU3eRDpySrH8Y=                  7   \n",
       "\n",
       "   source_screen_name  source_type  city  bd  gender  registered_via  \\\n",
       "0                   7            2     1   0       2               7   \n",
       "1                   7            2     1   0       2               7   \n",
       "2                  16            9     1   0       2               4   \n",
       "3                  11            7     3   5       1               9   \n",
       "4                  11            7     3   5       1               9   \n",
       "\n",
       "   registration_init_time  expiration_date  song_length  genre_ids  language  \\\n",
       "0                      12               13     -0.14208         24         4   \n",
       "1                      12               13      0.45662         25         4   \n",
       "2                      12               12      0.42822         12         2   \n",
       "3                       3               13      0.23750         25         8   \n",
       "4                       3               13     -0.30702         32         0   \n",
       "\n",
       "   mult_genre  \n",
       "0           0  \n",
       "1           0  \n",
       "2           0  \n",
       "3           0  \n",
       "4           0  "
      ]
     },
     "execution_count": 71,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "test_merge = pd.read_csv(dpath+\"test_merge.csv\")\n",
    "test_merge.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 72,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(2556790, 16)"
      ]
     },
     "execution_count": 72,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "test_merge.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 73,
   "metadata": {
    "collapsed": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "id                        0\n",
       "msno                      0\n",
       "song_id                   0\n",
       "source_system_tab         0\n",
       "source_screen_name        0\n",
       "source_type               0\n",
       "city                      0\n",
       "bd                        0\n",
       "gender                    0\n",
       "registered_via            0\n",
       "registration_init_time    0\n",
       "expiration_date           0\n",
       "song_length               0\n",
       "genre_ids                 0\n",
       "language                  0\n",
       "mult_genre                0\n",
       "dtype: int64"
      ]
     },
     "execution_count": 73,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "test_merge.apply(lambda x:sum(x.isnull()))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Pitfalls encountered"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "'<' not supported between instances of 'str' and 'float'  \n",
    "Fix: cast the offending column with .astype(str) before encoding"
   ]
  },
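  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the error and the fix, on hypothetical data: NaN is a float, so a column mixing strings and NaN trips the comparison inside LabelEncoder.fit:  \n",
    "```python\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "from sklearn.preprocessing import LabelEncoder\n",
    "\n",
    "df = pd.DataFrame({'genre_ids': ['465', '958', np.nan, '465']})\n",
    "le = LabelEncoder()\n",
    "# le.fit_transform(df['genre_ids'])  # TypeError: '<' not supported between 'str' and 'float'\n",
    "df['genre_ids'] = le.fit_transform(df['genre_ids'].astype(str))  # NaN becomes the string 'nan'\n",
    "```"
   ]
  },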
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Use fit_transform for the LabelEncoder on train and plain transform on test; note that train and test do not take exactly the same category values, so transform can fail on categories unseen during fitting"
   ]
  },
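  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since test can contain values never seen in train, one workable sketch (hypothetical values) is to fit the encoder on the concatenated column before transforming each set:  \n",
    "```python\n",
    "import pandas as pd\n",
    "from sklearn.preprocessing import LabelEncoder\n",
    "\n",
    "train = pd.DataFrame({'source_type': ['local-library', 'online-playlist']})\n",
    "test = pd.DataFrame({'source_type': ['radio', 'local-library']})  # 'radio' only in test\n",
    "\n",
    "le = LabelEncoder()\n",
    "# fit on the union of both sets so transform never meets an unknown label\n",
    "le.fit(pd.concat([train['source_type'], test['source_type']]).astype(str))\n",
    "train['source_type'] = le.transform(train['source_type'].astype(str))\n",
    "test['source_type'] = le.transform(test['source_type'].astype(str))\n",
    "```"
   ]
  },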
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "train and test have different columns (train has target, test has id), so process them separately"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Open songs with utf-8 encoding, otherwise reading it raises an error"
   ]
  },
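  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A sketch of the encoding issue, simulated with an in-memory buffer instead of the real songs file; the non-ASCII artist name is hypothetical:  \n",
    "```python\n",
    "import io\n",
    "import pandas as pd\n",
    "\n",
    "# simulate a songs file containing non-ASCII artist names\n",
    "raw = 'song_id,artist_name\\ns1,Jay Chou 周杰倫\\n'.encode('utf-8')\n",
    "\n",
    "# without an explicit encoding, reading such bytes can raise a\n",
    "# UnicodeDecodeError on platforms whose default codec is not utf-8\n",
    "songs = pd.read_csv(io.BytesIO(raw), encoding='utf-8')\n",
    "```"
   ]
  },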
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "StandardScaler does not accept 1-D arrays; reshape the input into a 2-D array, and note how the values are extracted when writing them to a file later"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "After standardization song_length is a 2-D array of shape (-1, 1); extract each value with song_length[i][0]"
   ]
  },
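  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Both notes in one sketch (hypothetical lengths): StandardScaler wants a 2-D input, and returns a (-1, 1) array that is indexed twice to get scalars back:  \n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "\n",
    "song_length = np.array([215.0, 320.0, 198.0, 247.0])\n",
    "\n",
    "ss = StandardScaler()\n",
    "# ss.fit_transform(song_length)  # ValueError: Expected 2D array, got 1D array instead\n",
    "scaled = ss.fit_transform(song_length.reshape(-1, 1))  # shape (4, 1)\n",
    "\n",
    "first = scaled[0][0]  # index twice to recover the scalar when writing rows out\n",
    "```"
   ]
  },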
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "pd.cut has requirements on the input data type: it needs numeric values, so convert string columns first"
   ]
  },
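  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example, pd.cut raises a TypeError on string input, so a column like bd has to be made numeric first (the bin edges here are hypothetical):  \n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "bd = pd.Series(['0', '27', '43', '15'])  # ages that were read in as strings\n",
    "\n",
    "# pd.cut(bd, bins=3) would raise a TypeError on the string dtype\n",
    "ages = pd.to_numeric(bd)\n",
    "bd_bucket = pd.cut(ages, bins=[-1, 17, 35, 100], labels=[0, 1, 2])\n",
    "```"
   ]
  },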
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When writing data to a file, make sure the order of ocolnames matches the later outcols"
   ]
  },
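  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One way to keep them aligned is to drive both the header and the row values from a single column list; out_cols below is a hypothetical stand-in for ocolnames/outcols:  \n",
    "```python\n",
    "import io\n",
    "import pandas as pd\n",
    "\n",
    "df = pd.DataFrame({'target': [1, 0], 'msno': [0, 1], 'song_id': [10, 11]})\n",
    "\n",
    "# one list drives both the header line and the value order, so they cannot drift apart\n",
    "out_cols = ['msno', 'song_id', 'target']\n",
    "buf = io.StringIO()\n",
    "df.to_csv(buf, columns=out_cols, index=False)\n",
    "```"
   ]
  },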
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Handle members and songs that appear only in the training set or only in the test set"
   ]
  }
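  ,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A sketch of one common treatment (hypothetical rows): left-join so no interactions are dropped, then fill the NaN produced by songs (or members) that are missing from the lookup table:  \n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "train = pd.DataFrame({'msno': ['A', 'B'], 'song_id': ['s1', 's2']})\n",
    "songs = pd.DataFrame({'song_id': ['s1'], 'song_length': [215.0]})  # 's2' absent\n",
    "\n",
    "# how='left' keeps every train row; missing songs yield NaN, filled with a default\n",
    "merged = train.merge(songs, on='song_id', how='left')\n",
    "merged['song_length'] = merged['song_length'].fillna(songs['song_length'].mean())\n",
    "```"
   ]
  }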
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
