{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "45672153",
   "metadata": {},
   "source": [
    "# Python机器学习Kaggle案例实战（第21期）第6课书面作业\n",
    "学号：113778\n",
    "## 1 作业\n",
    "\n",
    "FM和FFM模型近年来表现突出，分别在由Criteo和Avazu举办的CTR预测竞赛中夺得冠军：\n",
    "\n",
    "https://www.kaggle.com/c/criteo-display-ad-challenge  \n",
    "https://www.kaggle.com/c/avazu-ctr-prediction\n",
    "\n",
    "从中选择其中一个题目，讲讲你的大致思路，并具体说明，如何使用FM或FFM模型。\n",
    "\n",
    "### 1.1 题目介绍\n",
    "\n",
    "**这里我挑选的是第一题，即：https://www.kaggle.com/c/criteo-display-ad-challenge**"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2a6471c0",
   "metadata": {},
   "source": [
    "Display advertising is a billion dollar effort and one of the central uses of machine learning on the Internet. However, its data and methods are usually kept under lock and key. In this research competition, CriteoLabs is sharing a week’s worth of data for you to develop models predicting ad click-through rate (CTR). Given a user and the page he is visiting, what is the probability that he will click on a given ad?   \n",
    "![](https://storage.googleapis.com/kaggle-competitions/kaggle/3934/media/image007.jpg)\n",
    "The goal of this challenge is to benchmark the most accurate ML algorithms for CTR estimation. All winning models will be released under an open source license. As a participant, you are given a chance to access the traffic logs from Criteo that include various undisclosed features along with the click labels. "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4662bebf",
   "metadata": {},
   "source": [
    "The dataset is as follows:  \n",
    "Label - the target variable, 0 or 1, indicating whether the ad is clicked (1 = clicked, 0 = not). This column exists only in the training set, not in the test set.   \n",
    "I1-I13 - 13 columns of continuous features (mostly count features).  \n",
    "C1-C26 - 26 columns of categorical features. For anonymization, their values have been hashed onto 32 bits.\n",
    "\n",
    "The goal is to predict the Label column of the test set, i.e., whether each ad will be clicked. The evaluation metric is Logarithmic Loss (lower is better).\n",
    "### 1.2 Approach\n",
    "First, the state of the data:\n",
    "1. The dataset is no longer available on the competition page; Kaggle shows \"The data files for this competition are no longer available here. Please visit this link to access the files.\" (and the given link reports that the page does not exist).\n",
    "2. The \"criteo-dataset\" on Kaggle still provides the data: train.txt (10.38 GB) and test.txt (1.36 GB). Huge!\n",
    "3. I ultimately picked a small dataset trimmed down by earlier participants: train.tiny.txt (500 KB) and test.tiny.txt (500 KB). It holds just 1999 samples, which is ideal for learning!\n",
    "\n",
    "**My approach:**\n",
    "1. Explore the dataset first. It contains a large number of missing values, which must be handled; I fill continuous variables with the mean and categorical variables with the mode.\n",
    "2. I did not focus on feature engineering: the point of using FM/FFM is precisely their built-in ability to combine features, so I simply feed all features into the FFM model.\n",
    "3. For the FFM model, after some comparison I settled on the xlearn library, which supports both FM and FFM and is very fast.\n",
    "4. Convert the features to libffm format, feed them into xlearn for training, and output the results."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "525938cc",
   "metadata": {},
   "source": [
    "### 1.3 How to Use an FM or FFM Model\n",
    "\n",
    "Here xlearn is used for FM/FFM modeling. The required input formats differ:\n",
    "for the FM algorithm, the input must be CSV or libsvm; for the FFM algorithm, the input must be in libffm format:\n",
    "\n",
    "```\n",
    "libsvm format:\n",
    "\n",
    "   y index_1:value_1 index_2:value_2 ... index_n:value_n\n",
    "\n",
    "   0   0:0.1   1:0.5   3:0.2   ...\n",
    "   0   0:0.2   2:0.3   5:0.1   ...\n",
    "   1   0:0.2   2:0.3   5:0.1   ...\n",
    "\n",
    "CSV format:\n",
    "\n",
    "   y value_1 value_2 .. value_n\n",
    "\n",
    "   0      0.1     0.2     0.2   ...\n",
    "   1      0.2     0.3     0.1   ...\n",
    "   0      0.1     0.2     0.4   ...\n",
    "\n",
    "libffm format:\n",
    "\n",
    "   y field_1:index_1:value_1 field_2:index_2:value_2   ...\n",
    "\n",
    "   0   0:0:0.1   1:1:0.5   2:3:0.2   ...\n",
    "   0   0:0:0.2   1:2:0.3   2:5:0.1   ...\n",
    "   1   0:0:0.2   1:2:0.3   2:5:0.1   ...\n",
    "```\n",
    "\n",
    "The libffm format is:\n",
    "```\n",
    "<label> <field1>:<feature1>:<value1> <field2>:<feature2>:<value2> ...\n",
    "```\n",
    "Here is an example to illustrate:\n",
    "\n",
    "| Clicked | Advertiser | Publisher |\n",
    "| ------- | ---------- | --------- |\n",
    "| 0       | Nike       | CNN       |\n",
    "| 1       | ESPN       | BBC       |\n",
    "\n",
    "Here we have:  \n",
    "* 2 fields: Advertiser and Publisher\n",
    "* 4 features: Advertiser-Nike, Advertiser-ESPN, Publisher-CNN, Publisher-BBC\n",
    "\n",
    "We need to build two dictionaries, one for the fields and one for the features, for example:\n",
    "\n",
    "DictField[Advertiser] -> 0  \n",
    "DictField[Publisher] -> 1  \n",
    "\n",
    "DictFeature[Advertiser-Nike] -> 0  \n",
    "DictFeature[Publisher-CNN] -> 1  \n",
    "DictFeature[Advertiser-ESPN] -> 2  \n",
    "DictFeature[Publisher-BBC] -> 3  \n",
    "\n",
    "Then you can convert the data into FFM format:  \n",
    "\n",
    "0 0:0:1 1:1:1  \n",
    "1 0:2:1 1:3:1  \n",
    "\n",
    "Note that the features here are categorical, so the values are all 1.  \n",
    "The first FFM entry above is 0:0:1; matched against <field1>:<feature1>:<value1>,  \n",
    "it reads as field Advertiser, feature Advertiser-Nike, value 1.  \n",
    "The value is the value of the corresponding feature. For a categorical feature, one-hot encoding makes the value 0 or 1 (think gender or education level), and it must be read together with the feature index: in 0:0:1, the middle 0 says the feature is [Advertiser-Nike], and since this sample's advertiser really is Nike, we write 1. For a numerical feature, the actual value is written instead.\n",
    "\n",
    "\n",
    "Therefore, there is no need to one-hot encode the categorical features ourselves: we can convert everything to libffm format directly and feed the libffm-encoded data into the FFM model for training.  \n",
    "\n",
    "The code for this assignment follows:"
   ]
  },
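  {
   "cell_type": "markdown",
   "id": "ffm-toy-sketch",
   "metadata": {},
   "source": [
    "As a sanity check, the dictionary construction above can be sketched in a few lines of Python (a minimal illustration of the toy Advertiser/Publisher example only, not the converter used for the real dataset):\n",
    "```python\n",
    "# Toy sketch: encode the two-row Advertiser/Publisher table into libffm lines.\n",
    "rows = [('0', 'Nike', 'CNN'), ('1', 'ESPN', 'BBC')]  # (label, advertiser, publisher)\n",
    "fields = {'Advertiser': 0, 'Publisher': 1}           # DictField\n",
    "features = {}                                        # DictFeature, built on the fly\n",
    "\n",
    "def feat(name):\n",
    "    # Assign each unseen feature the next running index.\n",
    "    if name not in features:\n",
    "        features[name] = len(features)\n",
    "    return features[name]\n",
    "\n",
    "for label, adv, pub in rows:\n",
    "    line = '{} {}:{}:1 {}:{}:1'.format(\n",
    "        label,\n",
    "        fields['Advertiser'], feat('Advertiser-' + adv),\n",
    "        fields['Publisher'], feat('Publisher-' + pub))\n",
    "    print(line)\n",
    "# Prints:\n",
    "# 0 0:0:1 1:1:1\n",
    "# 1 0:2:1 1:3:1\n",
    "```"
   ]
  },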
  {
   "cell_type": "markdown",
   "id": "97329e50",
   "metadata": {},
   "source": [
    "### 1.4 Source Code\n",
    "\n",
    "First, read in the dataset:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "401ff0e1",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "id": "a46d98ed",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>Id</th>\n",
       "      <th>Label</th>\n",
       "      <th>I1</th>\n",
       "      <th>I2</th>\n",
       "      <th>I3</th>\n",
       "      <th>I4</th>\n",
       "      <th>I5</th>\n",
       "      <th>I6</th>\n",
       "      <th>I7</th>\n",
       "      <th>I8</th>\n",
       "      <th>...</th>\n",
       "      <th>C17</th>\n",
       "      <th>C18</th>\n",
       "      <th>C19</th>\n",
       "      <th>C20</th>\n",
       "      <th>C21</th>\n",
       "      <th>C22</th>\n",
       "      <th>C23</th>\n",
       "      <th>C24</th>\n",
       "      <th>C25</th>\n",
       "      <th>C26</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>10000000</td>\n",
       "      <td>0</td>\n",
       "      <td>1.0</td>\n",
       "      <td>1</td>\n",
       "      <td>5.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>1382.0</td>\n",
       "      <td>4.0</td>\n",
       "      <td>15.0</td>\n",
       "      <td>2.0</td>\n",
       "      <td>...</td>\n",
       "      <td>e5ba7672</td>\n",
       "      <td>f54016b9</td>\n",
       "      <td>21ddcdc9</td>\n",
       "      <td>b1252a9d</td>\n",
       "      <td>07b5194c</td>\n",
       "      <td>NaN</td>\n",
       "      <td>3a171ecb</td>\n",
       "      <td>c5c50484</td>\n",
       "      <td>e8b83407</td>\n",
       "      <td>9727dd16</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>10000001</td>\n",
       "      <td>0</td>\n",
       "      <td>2.0</td>\n",
       "      <td>0</td>\n",
       "      <td>44.0</td>\n",
       "      <td>1.0</td>\n",
       "      <td>102.0</td>\n",
       "      <td>8.0</td>\n",
       "      <td>2.0</td>\n",
       "      <td>2.0</td>\n",
       "      <td>...</td>\n",
       "      <td>07c540c4</td>\n",
       "      <td>b04e4670</td>\n",
       "      <td>21ddcdc9</td>\n",
       "      <td>5840adea</td>\n",
       "      <td>60f6221e</td>\n",
       "      <td>NaN</td>\n",
       "      <td>3a171ecb</td>\n",
       "      <td>43f13e8b</td>\n",
       "      <td>e8b83407</td>\n",
       "      <td>731c3655</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>10000002</td>\n",
       "      <td>0</td>\n",
       "      <td>2.0</td>\n",
       "      <td>0</td>\n",
       "      <td>1.0</td>\n",
       "      <td>14.0</td>\n",
       "      <td>767.0</td>\n",
       "      <td>89.0</td>\n",
       "      <td>4.0</td>\n",
       "      <td>2.0</td>\n",
       "      <td>...</td>\n",
       "      <td>8efede7f</td>\n",
       "      <td>3412118d</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>e587c466</td>\n",
       "      <td>ad3062eb</td>\n",
       "      <td>3a171ecb</td>\n",
       "      <td>3b183c5c</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>10000003</td>\n",
       "      <td>0</td>\n",
       "      <td>NaN</td>\n",
       "      <td>893</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>4392.0</td>\n",
       "      <td>NaN</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>...</td>\n",
       "      <td>1e88c74f</td>\n",
       "      <td>74ef3502</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>6b3a5ca6</td>\n",
       "      <td>NaN</td>\n",
       "      <td>3a171ecb</td>\n",
       "      <td>9117a34a</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>10000004</td>\n",
       "      <td>0</td>\n",
       "      <td>3.0</td>\n",
       "      <td>-1</td>\n",
       "      <td>NaN</td>\n",
       "      <td>0.0</td>\n",
       "      <td>2.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>3.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>...</td>\n",
       "      <td>1e88c74f</td>\n",
       "      <td>26b3c7a7</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>21c9516a</td>\n",
       "      <td>NaN</td>\n",
       "      <td>32c7478e</td>\n",
       "      <td>b34f3128</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>...</th>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1994</th>\n",
       "      <td>10001994</td>\n",
       "      <td>1</td>\n",
       "      <td>NaN</td>\n",
       "      <td>3</td>\n",
       "      <td>3.0</td>\n",
       "      <td>5.0</td>\n",
       "      <td>23857.0</td>\n",
       "      <td>275.0</td>\n",
       "      <td>2.0</td>\n",
       "      <td>6.0</td>\n",
       "      <td>...</td>\n",
       "      <td>e5ba7672</td>\n",
       "      <td>45e3284c</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>5c7c443c</td>\n",
       "      <td>NaN</td>\n",
       "      <td>32c7478e</td>\n",
       "      <td>8f079aa5</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1995</th>\n",
       "      <td>10001995</td>\n",
       "      <td>0</td>\n",
       "      <td>5.0</td>\n",
       "      <td>60</td>\n",
       "      <td>49.0</td>\n",
       "      <td>26.0</td>\n",
       "      <td>547.0</td>\n",
       "      <td>66.0</td>\n",
       "      <td>5.0</td>\n",
       "      <td>26.0</td>\n",
       "      <td>...</td>\n",
       "      <td>07c540c4</td>\n",
       "      <td>75edcf1f</td>\n",
       "      <td>21ddcdc9</td>\n",
       "      <td>b1252a9d</td>\n",
       "      <td>3dd38d65</td>\n",
       "      <td>ad3062eb</td>\n",
       "      <td>3a171ecb</td>\n",
       "      <td>c2fe6ca4</td>\n",
       "      <td>010f6491</td>\n",
       "      <td>0015d4de</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1996</th>\n",
       "      <td>10001996</td>\n",
       "      <td>0</td>\n",
       "      <td>NaN</td>\n",
       "      <td>2</td>\n",
       "      <td>3.0</td>\n",
       "      <td>NaN</td>\n",
       "      <td>4403.0</td>\n",
       "      <td>120.0</td>\n",
       "      <td>2.0</td>\n",
       "      <td>9.0</td>\n",
       "      <td>...</td>\n",
       "      <td>07c540c4</td>\n",
       "      <td>30d1165e</td>\n",
       "      <td>21ddcdc9</td>\n",
       "      <td>5840adea</td>\n",
       "      <td>9ad47d25</td>\n",
       "      <td>NaN</td>\n",
       "      <td>32c7478e</td>\n",
       "      <td>abda10be</td>\n",
       "      <td>010f6491</td>\n",
       "      <td>14886693</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1997</th>\n",
       "      <td>10001997</td>\n",
       "      <td>0</td>\n",
       "      <td>NaN</td>\n",
       "      <td>60</td>\n",
       "      <td>3.0</td>\n",
       "      <td>NaN</td>\n",
       "      <td>121222.0</td>\n",
       "      <td>NaN</td>\n",
       "      <td>NaN</td>\n",
       "      <td>0.0</td>\n",
       "      <td>...</td>\n",
       "      <td>07c540c4</td>\n",
       "      <td>9e4517be</td>\n",
       "      <td>3014a4b1</td>\n",
       "      <td>5840adea</td>\n",
       "      <td>572bdde8</td>\n",
       "      <td>NaN</td>\n",
       "      <td>3a171ecb</td>\n",
       "      <td>9fa3e01a</td>\n",
       "      <td>001f3601</td>\n",
       "      <td>d9bcfc08</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1998</th>\n",
       "      <td>10001998</td>\n",
       "      <td>0</td>\n",
       "      <td>NaN</td>\n",
       "      <td>241</td>\n",
       "      <td>177.0</td>\n",
       "      <td>35.0</td>\n",
       "      <td>2237.0</td>\n",
       "      <td>51.0</td>\n",
       "      <td>12.0</td>\n",
       "      <td>41.0</td>\n",
       "      <td>...</td>\n",
       "      <td>e5ba7672</td>\n",
       "      <td>c21c3e4c</td>\n",
       "      <td>1d04f4a4</td>\n",
       "      <td>a458ea53</td>\n",
       "      <td>90da9c54</td>\n",
       "      <td>NaN</td>\n",
       "      <td>32c7478e</td>\n",
       "      <td>20c8320e</td>\n",
       "      <td>9b3e8820</td>\n",
       "      <td>16edf87e</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>1999 rows × 41 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "            Id  Label   I1   I2     I3    I4        I5     I6    I7    I8  \\\n",
       "0     10000000      0  1.0    1    5.0   0.0    1382.0    4.0  15.0   2.0   \n",
       "1     10000001      0  2.0    0   44.0   1.0     102.0    8.0   2.0   2.0   \n",
       "2     10000002      0  2.0    0    1.0  14.0     767.0   89.0   4.0   2.0   \n",
       "3     10000003      0  NaN  893    NaN   NaN    4392.0    NaN   0.0   0.0   \n",
       "4     10000004      0  3.0   -1    NaN   0.0       2.0    0.0   3.0   0.0   \n",
       "...        ...    ...  ...  ...    ...   ...       ...    ...   ...   ...   \n",
       "1994  10001994      1  NaN    3    3.0   5.0   23857.0  275.0   2.0   6.0   \n",
       "1995  10001995      0  5.0   60   49.0  26.0     547.0   66.0   5.0  26.0   \n",
       "1996  10001996      0  NaN    2    3.0   NaN    4403.0  120.0   2.0   9.0   \n",
       "1997  10001997      0  NaN   60    3.0   NaN  121222.0    NaN   NaN   0.0   \n",
       "1998  10001998      0  NaN  241  177.0  35.0    2237.0   51.0  12.0  41.0   \n",
       "\n",
       "      ...       C17       C18       C19       C20       C21       C22  \\\n",
       "0     ...  e5ba7672  f54016b9  21ddcdc9  b1252a9d  07b5194c       NaN   \n",
       "1     ...  07c540c4  b04e4670  21ddcdc9  5840adea  60f6221e       NaN   \n",
       "2     ...  8efede7f  3412118d       NaN       NaN  e587c466  ad3062eb   \n",
       "3     ...  1e88c74f  74ef3502       NaN       NaN  6b3a5ca6       NaN   \n",
       "4     ...  1e88c74f  26b3c7a7       NaN       NaN  21c9516a       NaN   \n",
       "...   ...       ...       ...       ...       ...       ...       ...   \n",
       "1994  ...  e5ba7672  45e3284c       NaN       NaN  5c7c443c       NaN   \n",
       "1995  ...  07c540c4  75edcf1f  21ddcdc9  b1252a9d  3dd38d65  ad3062eb   \n",
       "1996  ...  07c540c4  30d1165e  21ddcdc9  5840adea  9ad47d25       NaN   \n",
       "1997  ...  07c540c4  9e4517be  3014a4b1  5840adea  572bdde8       NaN   \n",
       "1998  ...  e5ba7672  c21c3e4c  1d04f4a4  a458ea53  90da9c54       NaN   \n",
       "\n",
       "           C23       C24       C25       C26  \n",
       "0     3a171ecb  c5c50484  e8b83407  9727dd16  \n",
       "1     3a171ecb  43f13e8b  e8b83407  731c3655  \n",
       "2     3a171ecb  3b183c5c       NaN       NaN  \n",
       "3     3a171ecb  9117a34a       NaN       NaN  \n",
       "4     32c7478e  b34f3128       NaN       NaN  \n",
       "...        ...       ...       ...       ...  \n",
       "1994  32c7478e  8f079aa5       NaN       NaN  \n",
       "1995  3a171ecb  c2fe6ca4  010f6491  0015d4de  \n",
       "1996  32c7478e  abda10be  010f6491  14886693  \n",
       "1997  3a171ecb  9fa3e01a  001f3601  d9bcfc08  \n",
       "1998  32c7478e  20c8320e  9b3e8820  16edf87e  \n",
       "\n",
       "[1999 rows x 41 columns]"
      ]
     },
     "execution_count": 34,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dt=pd.read_csv('train.tiny.csv')\n",
    "dt"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "a415b068",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<class 'pandas.core.frame.DataFrame'>\n",
      "RangeIndex: 1999 entries, 0 to 1998\n",
      "Data columns (total 41 columns):\n",
      " #   Column  Non-Null Count  Dtype  \n",
      "---  ------  --------------  -----  \n",
      " 0   Id      1999 non-null   int64  \n",
      " 1   Label   1999 non-null   int64  \n",
      " 2   I1      1110 non-null   float64\n",
      " 3   I2      1999 non-null   int64  \n",
      " 4   I3      1550 non-null   float64\n",
      " 5   I4      1579 non-null   float64\n",
      " 6   I5      1906 non-null   float64\n",
      " 7   I6      1506 non-null   float64\n",
      " 8   I7      1906 non-null   float64\n",
      " 9   I8      1997 non-null   float64\n",
      " 10  I9      1906 non-null   float64\n",
      " 11  I10     1110 non-null   float64\n",
      " 12  I11     1906 non-null   float64\n",
      " 13  I12     445 non-null    float64\n",
      " 14  I13     1579 non-null   float64\n",
      " 15  C1      1999 non-null   object \n",
      " 16  C2      1999 non-null   object \n",
      " 17  C3      1933 non-null   object \n",
      " 18  C4      1933 non-null   object \n",
      " 19  C5      1999 non-null   object \n",
      " 20  C6      1748 non-null   object \n",
      " 21  C7      1999 non-null   object \n",
      " 22  C8      1999 non-null   object \n",
      " 23  C9      1999 non-null   object \n",
      " 24  C10     1999 non-null   object \n",
      " 25  C11     1999 non-null   object \n",
      " 26  C12     1933 non-null   object \n",
      " 27  C13     1999 non-null   object \n",
      " 28  C14     1999 non-null   object \n",
      " 29  C15     1999 non-null   object \n",
      " 30  C16     1933 non-null   object \n",
      " 31  C17     1999 non-null   object \n",
      " 32  C18     1999 non-null   object \n",
      " 33  C19     1034 non-null   object \n",
      " 34  C20     1034 non-null   object \n",
      " 35  C21     1933 non-null   object \n",
      " 36  C22     368 non-null    object \n",
      " 37  C23     1999 non-null   object \n",
      " 38  C24     1933 non-null   object \n",
      " 39  C25     1034 non-null   object \n",
      " 40  C26     1034 non-null   object \n",
      "dtypes: float64(12), int64(3), object(26)\n",
      "memory usage: 640.4+ KB\n"
     ]
    }
   ],
   "source": [
    "dt.info()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "63e2c690",
   "metadata": {},
   "source": [
    "We can see there are quite a few missing values.  \n",
    "Handle them as follows:\n",
    "1. Missing values in continuous variables: fill with the mean.\n",
    "2. Missing values in categorical variables: fill with the mode."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "id": "a5164c76",
   "metadata": {},
   "outputs": [],
   "source": [
    "O_index= [ 'I1', 'I2', 'I3', 'I4', 'I5', 'I6', 'I7', 'I8', 'I9', 'I10', 'I11', 'I12', 'I13']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "id": "ce3ddc54",
   "metadata": {},
   "outputs": [],
   "source": [
    "C_index=['C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8', \n",
    "         'C9', 'C10', 'C11', 'C12', 'C13', 'C14', 'C15', \n",
    "         'C16', 'C17', 'C18', 'C19', 'C20', 'C21','C22',\n",
    "         'C23', 'C24', 'C25', 'C26']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "id": "43b3f6a0",
   "metadata": {},
   "outputs": [],
   "source": [
    "for col_name in  C_index:\n",
    "    dt[col_name] = dt[col_name].fillna(dt.loc[dt[col_name].isnull() == False ,col_name].mode()[0])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "id": "0a810aeb",
   "metadata": {},
   "outputs": [],
   "source": [
    "for col_name in  O_index:\n",
    "    dt[col_name] = dt[col_name].fillna(dt.loc[dt[col_name].isnull() == False ,col_name].mean())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "id": "c8e4d02e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<class 'pandas.core.frame.DataFrame'>\n",
      "RangeIndex: 1999 entries, 0 to 1998\n",
      "Data columns (total 41 columns):\n",
      " #   Column  Non-Null Count  Dtype  \n",
      "---  ------  --------------  -----  \n",
      " 0   Id      1999 non-null   int64  \n",
      " 1   Label   1999 non-null   int64  \n",
      " 2   I1      1999 non-null   float64\n",
      " 3   I2      1999 non-null   int64  \n",
      " 4   I3      1999 non-null   float64\n",
      " 5   I4      1999 non-null   float64\n",
      " 6   I5      1999 non-null   float64\n",
      " 7   I6      1999 non-null   float64\n",
      " 8   I7      1999 non-null   float64\n",
      " 9   I8      1999 non-null   float64\n",
      " 10  I9      1999 non-null   float64\n",
      " 11  I10     1999 non-null   float64\n",
      " 12  I11     1999 non-null   float64\n",
      " 13  I12     1999 non-null   float64\n",
      " 14  I13     1999 non-null   float64\n",
      " 15  C1      1999 non-null   object \n",
      " 16  C2      1999 non-null   object \n",
      " 17  C3      1999 non-null   object \n",
      " 18  C4      1999 non-null   object \n",
      " 19  C5      1999 non-null   object \n",
      " 20  C6      1999 non-null   object \n",
      " 21  C7      1999 non-null   object \n",
      " 22  C8      1999 non-null   object \n",
      " 23  C9      1999 non-null   object \n",
      " 24  C10     1999 non-null   object \n",
      " 25  C11     1999 non-null   object \n",
      " 26  C12     1999 non-null   object \n",
      " 27  C13     1999 non-null   object \n",
      " 28  C14     1999 non-null   object \n",
      " 29  C15     1999 non-null   object \n",
      " 30  C16     1999 non-null   object \n",
      " 31  C17     1999 non-null   object \n",
      " 32  C18     1999 non-null   object \n",
      " 33  C19     1999 non-null   object \n",
      " 34  C20     1999 non-null   object \n",
      " 35  C21     1999 non-null   object \n",
      " 36  C22     1999 non-null   object \n",
      " 37  C23     1999 non-null   object \n",
      " 38  C24     1999 non-null   object \n",
      " 39  C25     1999 non-null   object \n",
      " 40  C26     1999 non-null   object \n",
      "dtypes: float64(12), int64(3), object(26)\n",
      "memory usage: 640.4+ KB\n"
     ]
    }
   ],
   "source": [
    "dt.info()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "92fb4d54",
   "metadata": {},
   "source": [
    "There are no missing values left.  \n",
    "Next, look at the Id variable: its values are all distinct, so it can be left out of the prediction."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "id": "e9081eba",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "1999"
      ]
     },
     "execution_count": 46,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dt['Id'].nunique()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7b6cb197",
   "metadata": {},
   "source": [
    "The FFMFormatPandas class is adapted from a Kaggle contribution; it converts a pandas DataFrame into libffm format and returns a pandas.Series."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "id": "de50e2e3",
   "metadata": {},
   "outputs": [],
   "source": [
    "class FFMFormatPandas:\n",
    "    def __init__(self):\n",
    "        self.field_index_ = None\n",
    "        self.feature_index_ = None\n",
    "        self.y = None\n",
    "\n",
    "    def fit(self, df, y=None):\n",
    "        self.y = y\n",
    "        df_ffm = df[df.columns.difference([self.y])]\n",
    "        if self.field_index_ is None:\n",
    "            # One field per column; the label column is excluded.\n",
    "            self.field_index_ = {col: i for i, col in enumerate(df_ffm)}\n",
    "\n",
    "        if self.feature_index_ is not None:\n",
    "            last_idx = max(list(self.feature_index_.values()))\n",
    "        else:\n",
    "            self.feature_index_ = dict()\n",
    "            last_idx = 0\n",
    "\n",
    "        # Give every (column, value) pair a feature index, plus one index per\n",
    "        # column that numeric features reuse. Iterating df_ffm (not df) keeps\n",
    "        # the label column out of the feature dictionary.\n",
    "        for col in df_ffm.columns:\n",
    "            vals = df_ffm[col].unique()\n",
    "            for val in vals:\n",
    "                if pd.isnull(val):\n",
    "                    continue\n",
    "                name = '{}_{}'.format(col, val)\n",
    "                if name not in self.feature_index_:\n",
    "                    self.feature_index_[name] = last_idx\n",
    "                    last_idx += 1\n",
    "            self.feature_index_[col] = last_idx\n",
    "            last_idx += 1\n",
    "        return self\n",
    "\n",
    "    def fit_transform(self, df, y=None):\n",
    "        self.fit(df, y)\n",
    "        return self.transform(df)\n",
    "\n",
    "    def transform_row_(self, row, t):\n",
    "        ffm = []\n",
    "        # Each libffm line starts with the label (0 when none is given).\n",
    "        if self.y is not None:\n",
    "            ffm.append(str(row.loc[row.index == self.y].iloc[0]))\n",
    "        else:\n",
    "            ffm.append(str(0))\n",
    "\n",
    "        for col, val in row.loc[row.index != self.y].to_dict().items():\n",
    "            col_type = t[col]\n",
    "            name = '{}_{}'.format(col, val)\n",
    "            if col_type.kind == 'O':\n",
    "                # Categorical: one-hot style, the value is always 1.\n",
    "                ffm.append('{}:{}:1'.format(self.field_index_[col], self.feature_index_[name]))\n",
    "            elif col_type.kind in ('i', 'f'):\n",
    "                # Numeric (int or float): keep the actual value. Handling only\n",
    "                # 'i' here would silently drop the float columns I1-I13.\n",
    "                ffm.append('{}:{}:{}'.format(self.field_index_[col], self.feature_index_[col], val))\n",
    "        return ' '.join(ffm)\n",
    "\n",
    "    def transform(self, df):\n",
    "        t = df.dtypes.to_dict()\n",
    "        return pd.Series({idx: self.transform_row_(row, t) for idx, row in df.iterrows()})"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "id": "f1352e8b",
   "metadata": {},
   "outputs": [],
   "source": [
    "ffm_train = FFMFormatPandas()\n",
    "ffm_train_data = ffm_train.fit_transform(dt.drop(columns=['Id']), y='Label')\n",
    "ffm_train_data.to_csv('ffm_data.txt',index=False,header=False)\n",
    "ffm_train_data[:1600].to_csv('ffm_data_train.txt',index=False,header=False)\n",
    "ffm_train_data[1600:].to_csv('ffm_data_test.txt',index=False,header=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "2f84993c",
   "metadata": {},
   "outputs": [],
   "source": [
    "import xlearn as xl\n",
    "\n",
    "# Training task\n",
    "ffm_model = xl.create_ffm() # Use field-aware factorization machine\n",
    "ffm_model.setTrain(\"./ffm_data_train.txt\")  # Training data\n",
    "ffm_model.setValidate(\"./ffm_data_test.txt\")  # Validation data\n",
    "\n",
    "# param:\n",
    "#  0. binary classification\n",
    "#  1. learning rate: 0.2\n",
    "#  2. regular lambda: 0.002\n",
    "#  3. evaluation metric: accuracy\n",
    "param = {'task':'binary', 'lr':0.2, 'lambda':0.002, 'metric':'acc'}\n",
    "\n",
    "# Start to train\n",
    "# The trained model will be stored in model.out\n",
    "ffm_model.fit(param, './model.out')\n",
    "\n",
    "# Prediction task\n",
    "ffm_model.setTest(\"./ffm_data_test.txt\")  # Test data\n",
    "ffm_model.setSigmoid()  # Convert output to 0-1\n",
    "\n",
    "# Start to predict\n",
    "# The output result will be stored in output.txt\n",
    "ffm_model.predict(\"./model.out\", \"./output.txt\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "688da884",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Out of folds logloss is 0.5070\n"
     ]
    }
   ],
   "source": [
    "from sklearn.metrics import log_loss\n",
    "\n",
    "df_test=pd.read_csv('ffm_data_test.txt',sep=' ', header=None)\n",
    "df_pred=pd.read_csv('output.txt',sep=' ', header=None)\n",
    "y_test=df_test[0]\n",
    "y_pred=df_pred[0]\n",
    "\n",
    "print('Out of folds logloss is {:.4f}'.format(log_loss(y_test, y_pred)))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "35a89694",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
