{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Introduction\n",
    "\n",
    "问题描述\n",
    "采用Wide and Deep模型，对Criteo提供的Kaggle竞赛数据进行CTR预估。\n",
    "解题提示\n",
    "1、任务说明： \n",
    "采用Wide and Deep模型，对Criteo提供的Kaggle竞赛数据进行CTR预估。 \n",
    "\n",
    "2、数据描述： \n",
    "数据共包含11天的数据，其中10天为训练数据train，1天为测试数据test。 \n",
    "（1） 文件说明 \n",
    "• train.csv: 训练数据。 \n",
    "• eval.csv：测试数据 \n",
    "（2） 字段说明：字段已进行脱敏处理 \n",
    "• I1-I13: 整数型特征 \n",
    "• C1-C26：类别型特征，已进行Hash编码 \n",
    "• clicked：是否被点击 \n",
    "其他参考资料： \n",
    "1. TensorFlow Wide And Deep 模型详解与应用：https://cloud.tencent.com/developer/article/1143316 \n",
    "中文版：https://www.helplib.com/GitHub/article_153816 \n",
    "2. Wide and Deep模型分类例子：https://github.com/tensorflow/models/tree/master/official/wide_deep \n",
    "https://github.com/gutouyu/ML_CIA/tree/master/Wide&Deep \n",
    "3. 一个包含多个深度模型的用于CTR的工具包：DeepCTR \n",
    "https://github.com/shenweichen/DeepCTR \n",
    "中文版：https://zhuanlan.zhihu.com/p/53231955 "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Imports and constants"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Using TensorFlow version 1.10.0\n",
      "\n",
      "Feature columns are:  ['I1', 'I2', 'I3', 'I4', 'I5', 'I6', 'I7', 'I8', 'I9', 'I10', 'I11', 'I12', 'I13', 'C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8', 'C9', 'C10', 'C11', 'C12', 'C13', 'C14', 'C15', 'C16', 'C17', 'C18', 'C19', 'C20', 'C21', 'C22', 'C23', 'C24', 'C25', 'C26'] \n",
      "\n"
     ]
    }
   ],
   "source": [
    "from __future__ import absolute_import\n",
    "from __future__ import division\n",
    "from __future__ import print_function\n",
    "\n",
    "import time\n",
    "\n",
    "import tensorflow as tf\n",
    "\n",
    "tf.logging.set_verbosity(tf.logging.INFO) # Set to INFO for tracking training, default is WARN. ERROR for least messages\n",
    "\n",
    "print(\"Using TensorFlow version %s\\n\" % (tf.__version__))\n",
    "\n",
    "\n",
    "CONTINUOUS_COLUMNS =  [\"I\"+str(i) for i in range(1,14)] # 1-13 inclusive\n",
    "CATEGORICAL_COLUMNS = [\"C\"+str(i) for i in range(1,27)] # 1-26 inclusive\n",
    "LABEL_COLUMN = [\"clicked\"]\n",
    "\n",
    "TRAIN_DATA_COLUMNS = LABEL_COLUMN + CONTINUOUS_COLUMNS + CATEGORICAL_COLUMNS\n",
    "# TEST_DATA_COLUMNS = CONTINUOUS_COLUMNS + CATEGORICAL_COLUMNS\n",
    "\n",
    "FEATURE_COLUMNS = CONTINUOUS_COLUMNS + CATEGORICAL_COLUMNS\n",
    "\n",
    "print('Feature columns are: ', FEATURE_COLUMNS, '\\n')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Input file parsing"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "input function configured\n"
     ]
    }
   ],
   "source": [
    "BATCH_SIZE=400\n",
    "\n",
    "def generate_input_fn(filename, column_headers=TRAIN_DATA_COLUMNS, batch_size=BATCH_SIZE):\n",
    "    def _input_fn():\n",
    "        filename_queue = tf.train.string_input_producer([filename])\n",
    "        reader = tf.TextLineReader()\n",
    "        # Reads out batch_size number of lines\n",
    "        key, value = reader.read_up_to(filename_queue, num_records=batch_size)\n",
    "        \n",
    "        # 1 int label, 13 ints, 26 strings\n",
    "        cont_defaults = [ [0] for i in range(1,14) ]\n",
    "        cate_defaults = [ [\" \"] for i in range(1,27) ]\n",
    "        label_defaults = [ [0] ]\n",
    "        # The label is the first column of the data.\n",
    "        record_defaults = label_defaults + cont_defaults + cate_defaults\n",
    "\n",
    "        # Decode CSV data that was just read out. \n",
    "        # Note that this does NOT return a dict, \n",
    "        # so we will need to zip it up with our headers\n",
    "        columns = tf.decode_csv(value, record_defaults=record_defaults)\n",
    "        \n",
    "        # all_columns is a dictionary that maps from column names to tensors of the data.\n",
    "        all_columns = dict(zip(column_headers, columns))\n",
    "        \n",
    "        # Pop and save our labels \n",
    "        # dict.pop() returns the popped array of values; exactly what we need!\n",
    "        labels = all_columns.pop(LABEL_COLUMN[0])\n",
    "        \n",
    "        # the remaining columns are our features\n",
    "        features = all_columns \n",
    "\n",
    "        # Sparse categorical features must be represented with an additional dimension. \n",
    "        # There is no additional work needed for the Continuous columns; they are the unaltered columns.\n",
    "        # See docs for tf.SparseTensor for more info\n",
    "        for feature_name in CATEGORICAL_COLUMNS:\n",
    "            features[feature_name] = tf.expand_dims(features[feature_name], -1)\n",
    "\n",
    "        return features, labels\n",
    "\n",
    "    return _input_fn\n",
    "\n",
    "print('input function configured')"
   ]
  },
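  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The zip/pop bookkeeping inside `_input_fn` can be checked without TensorFlow. The sketch below mimics it in plain Python on one made-up row (the row values are hypothetical; only the column layout matches the data):\n",
    "\n",
    "```python\n",
    "# Plain-Python sketch of the header/label bookkeeping done in _input_fn.\n",
    "label_col = ['clicked']\n",
    "cont_cols = ['I' + str(i) for i in range(1, 14)]\n",
    "cate_cols = ['C' + str(i) for i in range(1, 27)]\n",
    "headers = label_col + cont_cols + cate_cols\n",
    "\n",
    "row = [1] + [0] * 13 + ['h' + str(i) for i in range(26)]  # 1 label, 13 ints, 26 strings\n",
    "\n",
    "all_columns = dict(zip(headers, row))   # column name -> value\n",
    "labels = all_columns.pop(label_col[0])  # dict.pop() removes and returns the label\n",
    "features = all_columns                  # the 39 remaining columns are the features\n",
    "```\n",
    "\n",
    "After the pop, `labels` is 1 and `features` holds the 39 I/C columns, exactly the `(features, labels)` pair the estimator expects."
   ]
  },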
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Create Feature Columns"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Wide/Sparse&deep/continuous columns configured\n"
     ]
    }
   ],
   "source": [
    "#wide放离散和交叉特征,提高记忆能力，deep放连续和离散交叉特征，增强泛化能力\n",
    "def build_feature_cols(sparse_columns=CATEGORICAL_COLUMNS,continuous_columns=CONTINUOUS_COLUMNS):\n",
    "    \n",
    "    wide_columns = []# Sparse base columns.\n",
    "    KEY_DICT = {'C2':529,'C5':259,'C6':14,'C8':542,'C9':3,'C14':26,'C17':10,'C20':4,'C22':15,'C23':15,'C25':69}\n",
    "    deep_columns = []# Continuous base columns.\n",
    "    boundaries_dict = {'I1':[2,247],'I4':[2,5,192],'I10':[1,2,3,4,5,6,7],'I11':[1,2,121],'I12':[1,147]}\n",
    "    bucketized = []\n",
    "    crossed_columns = []\n",
    "\n",
    "    for name in continuous_columns:\n",
    "        column = tf.contrib.layers.real_valued_column(name)\n",
    "        deep_columns.append(column) #连续只放deep\n",
    "        v = boundaries_dict.get(name,[])\n",
    "        if(len(v)>0):\n",
    "            bucketized.append(tf.contrib.layers.bucketized_column(column,boundaries=v))\n",
    "    \n",
    "    for name in sparse_columns:\n",
    "        v = KEY_DICT.get(name,0)\n",
    "        prefix = name[1:] + \"-\"\n",
    "        col = tf.contrib.layers.sparse_column_with_keys(name,[prefix+str(i) for i in range(0,int(v))]) if v > 0 else tf.contrib.layers.sparse_column_with_hash_bucket(name, hash_bucket_size=1000)\n",
    "        wide_columns.append(col)\n",
    "        deep_columns.append(tf.feature_column.embedding_column(col, dimension=8))\n",
    "    \n",
    "    for name in bucketized:\n",
    "        for column in wide_columns:\n",
    "            crossed_columns.append(tf.contrib.layers.crossed_column([name, column], hash_bucket_size=100000))#交叉特征只放wide\n",
    "            \n",
    "    return wide_columns+crossed_columns,deep_columns\n",
    "            \n",
    "wide_features,deep_features = build_feature_cols()\n",
    "print('Wide/Sparse&deep/continuous columns configured')"
   ]
  },
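  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Conceptually, `sparse_column_with_hash_bucket` maps each string value to `hash(value) % bucket_size`, and `crossed_column` hashes the *combination* of its input values into one shared bucket space. A minimal plain-Python sketch of that mechanic (TensorFlow's internal fingerprint hashes differ, and the example values here are made up; this only illustrates the idea):\n",
    "\n",
    "```python\n",
    "def hash_bucket(value, bucket_size):\n",
    "    # Stand-in for sparse_column_with_hash_bucket: one value -> one bucket id.\n",
    "    return hash(value) % bucket_size\n",
    "\n",
    "def crossed(values, bucket_size):\n",
    "    # Stand-in for crossed_column: the joined combination gets its own id,\n",
    "    # so a (bucketized-I, C) pair can be memorized by the wide/linear part.\n",
    "    return hash('_X_'.join(str(v) for v in values)) % bucket_size\n",
    "\n",
    "c3_id = hash_bucket('68fd1e64', 1000)        # a single categorical value\n",
    "cross_id = crossed([2, '68fd1e64'], 100000)  # I-bucket 2 crossed with that value\n",
    "```\n",
    "\n",
    "Because the cross has its own id, the linear side can learn a weight for the specific co-occurrence, which neither input feature captures alone; that is why the crossed columns go to the wide side."
   ]
  },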
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "def build_model(model_type, model_dir, wide_columns=wide_features, deep_columns=deep_features):\n",
    "    runconfig = tf.contrib.learn.RunConfig(save_checkpoints_secs=120,save_checkpoints_steps = None,gpu_memory_fraction = 0.8)\n",
    "    if model_type == 'WIDE':\n",
    "        return tf.contrib.learn.LinearClassifier(config=runconfig,model_dir=model_dir,feature_columns=wide_columns)\n",
    "    if model_type == 'DEEP':\n",
    "        return tf.contrib.learn.DNNClassifier(config=runconfig,model_dir=model_dir,feature_columns=deep_columns,hidden_units=[100, 70, 50, 25])\n",
    "\n",
    "    return tf.contrib.learn.DNNLinearCombinedClassifier(config=runconfig,\n",
    "                                                            model_dir=model_dir,linear_feature_columns=wide_columns,\n",
    "                                                            dnn_feature_columns=deep_columns,dnn_hidden_units=[100, 70, 50, 25])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "def create_model_dir(model_type):\n",
    "    # Returns something like models/model_WIDE_AND_DEEP_1493043407\n",
    "    return 'models/model_' + model_type + '_' + str(int(time.time()))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "CPU times: user 4 µs, sys: 0 ns, total: 4 µs\n",
      "Wall time: 6.68 µs\n"
     ]
    }
   ],
   "source": [
    "%%time\n",
    "#Fit the model (train it)\n",
    "\n",
    "train_file = \"data/train.csv\"\n",
    "eval_file  = \"data/eval.csv\"\n",
    "\n",
    "# This can be found with wc -l train.csv\n",
    "train_sample_size = 800000\n",
    "train_steps = train_sample_size/BATCH_SIZE # 8000/40 = 200\n",
    "\n",
    "eval_sample_size = 200000 # this can be found with a 'wc -l eval.csv'\n",
    "eval_steps = eval_sample_size/BATCH_SIZE # 2000/40 = 50\n",
    "\n",
    "def train_and_eval(model):\n",
    "    estimator = build_model(model_type=model, model_dir=create_model_dir(model))\n",
    "    m = estimator.fit(input_fn=generate_input_fn(train_file), steps=train_steps)\n",
    "    print('fit done')\n",
    "    \n",
    "    results = m.evaluate(input_fn=generate_input_fn(eval_file), steps=eval_steps)\n",
    "    print(results)"
   ]
  },
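  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The step counts above come from one pass over each file: steps = rows / batch_size. A quick sanity check of that arithmetic:\n",
    "\n",
    "```python\n",
    "BATCH_SIZE = 400\n",
    "train_steps = 800000 // BATCH_SIZE  # one epoch over train.csv\n",
    "eval_steps = 200000 // BATCH_SIZE   # one pass over eval.csv\n",
    "print(train_steps, eval_steps)  # -> 2000 500\n",
    "```\n",
    "\n",
    "These match the training logs below, which stop at global step 2000 and report `Evaluation [500/500]`."
   ]
  },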
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From <ipython-input-8-61d08d488249>:2: RunConfig.__init__ (from tensorflow.contrib.learn.python.learn.estimators.run_config) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "When switching to tf.estimator.Estimator, use tf.estimator.RunConfig instead.\n",
      "WARNING:tensorflow:From /home/chenxiangkong/.conda/envs/tensorflow/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/estimators/linear.py:469: multi_class_head (from tensorflow.contrib.learn.python.learn.estimators.head) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please switch to tf.contrib.estimator.*_head.\n",
      "WARNING:tensorflow:From /home/chenxiangkong/.conda/envs/tensorflow/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py:1179: BaseEstimator.__init__ (from tensorflow.contrib.learn.python.learn.estimators.estimator) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please replace uses of any Estimator from tf.contrib.learn with an Estimator from tf.estimator.*\n",
      "INFO:tensorflow:Using config: {'_task_type': None, '_task_id': 0, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f85bfc74278>, '_master': '', '_num_ps_replicas': 0, '_num_worker_replicas': 0, '_environment': 'local', '_is_chief': True, '_evaluation_master': '', '_train_distribute': None, '_device_fn': None, '_tf_config': gpu_options {\n",
      "  per_process_gpu_memory_fraction: 0.8\n",
      "}\n",
      ", '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_secs': 120, '_log_step_count_steps': 100, '_session_config': None, '_save_checkpoints_steps': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_model_dir': 'models/model_WIDE_1552725279'}\n",
      "WARNING:tensorflow:From /home/chenxiangkong/.conda/envs/tensorflow/lib/python3.6/site-packages/tensorflow/contrib/layers/python/layers/feature_column.py:2388: calling sparse_feature_cross (from tensorflow.contrib.layers.python.ops.sparse_feature_cross_op) with hash_key=None is deprecated and will be removed after 2016-11-20.\n",
      "Instructions for updating:\n",
      "The default behavior of sparse_feature_cross is changing, the default\n",
      "value for hash_key will change to SPARSE_FEATURE_CROSS_DEFAULT_HASH_KEY.\n",
      "From that point on sparse_feature_cross will always use FingerprintCat64\n",
      "to concatenate the feature fingerprints. And the underlying\n",
      "_sparse_feature_cross_op.sparse_feature_cross operation will be marked\n",
      "as deprecated.\n",
      "WARNING:tensorflow:From /home/chenxiangkong/.conda/envs/tensorflow/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/estimators/head.py:800: calling expand_dims (from tensorflow.python.ops.array_ops) with dim is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use the `axis` argument instead\n",
      "WARNING:tensorflow:Casting <dtype: 'int32'> labels to bool.\n",
      "WARNING:tensorflow:Casting <dtype: 'int32'> labels to bool.\n",
      "WARNING:tensorflow:Trapezoidal rule is known to produce incorrect PR-AUCs; please switch to \"careful_interpolation\" instead.\n",
      "WARNING:tensorflow:Trapezoidal rule is known to produce incorrect PR-AUCs; please switch to \"careful_interpolation\" instead.\n",
      "WARNING:tensorflow:From /home/chenxiangkong/.conda/envs/tensorflow/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/estimators/head.py:678: ModelFnOps.__new__ (from tensorflow.contrib.learn.python.learn.estimators.model_fn) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "When switching to tf.estimator.Estimator, use tf.estimator.EstimatorSpec. You can use the `estimator_spec` method to create an equivalent one.\n",
      "INFO:tensorflow:Create CheckpointSaverHook.\n",
      "INFO:tensorflow:Graph was finalized.\n",
      "INFO:tensorflow:Running local_init_op.\n",
      "INFO:tensorflow:Done running local_init_op.\n",
      "INFO:tensorflow:Saving checkpoints for 0 into models/model_WIDE_1552725279/model.ckpt.\n",
      "INFO:tensorflow:loss = 0.6931474, step = 1\n",
      "INFO:tensorflow:global_step/sec: 8.20619\n",
      "INFO:tensorflow:loss = 0.4209091, step = 101 (12.189 sec)\n",
      "INFO:tensorflow:global_step/sec: 17.3414\n",
      "INFO:tensorflow:loss = 0.51556814, step = 201 (5.765 sec)\n",
      "INFO:tensorflow:global_step/sec: 17.3692\n",
      "INFO:tensorflow:loss = 0.45395932, step = 301 (5.758 sec)\n",
      "INFO:tensorflow:global_step/sec: 16.29\n",
      "INFO:tensorflow:loss = 0.4938274, step = 401 (6.139 sec)\n",
      "INFO:tensorflow:global_step/sec: 17.7832\n",
      "INFO:tensorflow:loss = 0.5642735, step = 501 (5.623 sec)\n",
      "INFO:tensorflow:global_step/sec: 18.0429\n",
      "INFO:tensorflow:loss = 0.4611702, step = 601 (5.542 sec)\n",
      "INFO:tensorflow:global_step/sec: 18.0889\n",
      "INFO:tensorflow:loss = 0.4785087, step = 701 (5.531 sec)\n",
      "INFO:tensorflow:global_step/sec: 17.6358\n",
      "INFO:tensorflow:loss = 0.55219704, step = 801 (5.668 sec)\n",
      "INFO:tensorflow:global_step/sec: 18.0789\n",
      "INFO:tensorflow:loss = 0.481985, step = 901 (5.531 sec)\n",
      "INFO:tensorflow:global_step/sec: 17.7269\n",
      "INFO:tensorflow:loss = 0.48716503, step = 1001 (5.644 sec)\n",
      "INFO:tensorflow:global_step/sec: 17.7663\n",
      "INFO:tensorflow:loss = 0.5498551, step = 1101 (5.628 sec)\n",
      "INFO:tensorflow:global_step/sec: 17.8905\n",
      "INFO:tensorflow:loss = 0.5041975, step = 1201 (5.588 sec)\n",
      "INFO:tensorflow:global_step/sec: 17.9402\n",
      "INFO:tensorflow:loss = 0.46469554, step = 1301 (5.574 sec)\n",
      "INFO:tensorflow:global_step/sec: 17.4808\n",
      "INFO:tensorflow:loss = 0.42220384, step = 1401 (5.721 sec)\n",
      "INFO:tensorflow:global_step/sec: 17.0675\n",
      "INFO:tensorflow:loss = 0.48441207, step = 1501 (5.859 sec)\n",
      "INFO:tensorflow:global_step/sec: 17.1937\n",
      "INFO:tensorflow:loss = 0.45562592, step = 1601 (5.816 sec)\n",
      "INFO:tensorflow:global_step/sec: 17.0655\n",
      "INFO:tensorflow:loss = 0.5044998, step = 1701 (5.860 sec)\n",
      "INFO:tensorflow:global_step/sec: 17.3473\n",
      "INFO:tensorflow:loss = 0.5015487, step = 1801 (5.764 sec)\n",
      "INFO:tensorflow:Saving checkpoints for 1870 into models/model_WIDE_1552725279/model.ckpt.\n",
      "INFO:tensorflow:global_step/sec: 12.3402\n",
      "INFO:tensorflow:loss = 0.48442113, step = 1901 (8.104 sec)\n",
      "INFO:tensorflow:Saving checkpoints for 2000 into models/model_WIDE_1552725279/model.ckpt.\n",
      "INFO:tensorflow:Loss for final step: 0.47655532.\n",
      "fit done\n",
      "WARNING:tensorflow:Casting <dtype: 'int32'> labels to bool.\n",
      "WARNING:tensorflow:Casting <dtype: 'int32'> labels to bool.\n",
      "WARNING:tensorflow:Trapezoidal rule is known to produce incorrect PR-AUCs; please switch to \"careful_interpolation\" instead.\n",
      "WARNING:tensorflow:Trapezoidal rule is known to produce incorrect PR-AUCs; please switch to \"careful_interpolation\" instead.\n",
      "INFO:tensorflow:Starting evaluation at 2019-03-16-08:37:43\n",
      "INFO:tensorflow:Graph was finalized.\n",
      "INFO:tensorflow:Restoring parameters from models/model_WIDE_1552725279/model.ckpt-2000\n",
      "INFO:tensorflow:Running local_init_op.\n",
      "INFO:tensorflow:Done running local_init_op.\n",
      "INFO:tensorflow:Evaluation [50/500]\n",
      "INFO:tensorflow:Evaluation [100/500]\n",
      "INFO:tensorflow:Evaluation [150/500]\n",
      "INFO:tensorflow:Evaluation [200/500]\n",
      "INFO:tensorflow:Evaluation [250/500]\n",
      "INFO:tensorflow:Evaluation [300/500]\n",
      "INFO:tensorflow:Evaluation [350/500]\n",
      "INFO:tensorflow:Evaluation [400/500]\n",
      "INFO:tensorflow:Evaluation [450/500]\n",
      "INFO:tensorflow:Evaluation [500/500]\n",
      "INFO:tensorflow:Finished evaluation at 2019-03-16-08:38:13\n",
      "INFO:tensorflow:Saving dict for global step 2000: accuracy = 0.77439, accuracy/baseline_label_mean = 0.251165, accuracy/threshold_0.500000_mean = 0.77439, auc = 0.75493586, auc_precision_recall = 0.5205868, global_step = 2000, labels/actual_label_mean = 0.251165, labels/prediction_mean = 0.27191108, loss = 0.48296872, precision/positive_threshold_0.500000_mean = 0.60703665, recall/positive_threshold_0.500000_mean = 0.2885155\n",
      "{'loss': 0.48296872, 'accuracy': 0.77439, 'labels/prediction_mean': 0.27191108, 'labels/actual_label_mean': 0.251165, 'accuracy/baseline_label_mean': 0.251165, 'auc': 0.75493586, 'auc_precision_recall': 0.5205868, 'accuracy/threshold_0.500000_mean': 0.77439, 'precision/positive_threshold_0.500000_mean': 0.60703665, 'recall/positive_threshold_0.500000_mean': 0.2885155, 'global_step': 2000}\n"
     ]
    }
   ],
   "source": [
    "train_and_eval('WIDE')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "如上测试结果可见：只用Wide模型时，accuracy = 0.77439,loss = 0.48296872, auc = 0.75493586"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Using config: {'_task_type': None, '_task_id': 0, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f8569e290f0>, '_master': '', '_num_ps_replicas': 0, '_num_worker_replicas': 0, '_environment': 'local', '_is_chief': True, '_evaluation_master': '', '_train_distribute': None, '_device_fn': None, '_tf_config': gpu_options {\n",
      "  per_process_gpu_memory_fraction: 0.8\n",
      "}\n",
      ", '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_secs': 120, '_log_step_count_steps': 100, '_session_config': None, '_save_checkpoints_steps': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_model_dir': 'models/model_DEEP_1552725883'}\n",
      "WARNING:tensorflow:Casting <dtype: 'int32'> labels to bool.\n",
      "WARNING:tensorflow:Casting <dtype: 'int32'> labels to bool.\n",
      "WARNING:tensorflow:Trapezoidal rule is known to produce incorrect PR-AUCs; please switch to \"careful_interpolation\" instead.\n",
      "WARNING:tensorflow:Trapezoidal rule is known to produce incorrect PR-AUCs; please switch to \"careful_interpolation\" instead.\n",
      "INFO:tensorflow:Create CheckpointSaverHook.\n",
      "INFO:tensorflow:Graph was finalized.\n",
      "INFO:tensorflow:Running local_init_op.\n",
      "INFO:tensorflow:Done running local_init_op.\n",
      "INFO:tensorflow:Saving checkpoints for 0 into models/model_DEEP_1552725883/model.ckpt.\n",
      "INFO:tensorflow:loss = 56.53908, step = 1\n",
      "INFO:tensorflow:global_step/sec: 30.9415\n",
      "INFO:tensorflow:loss = 0.8102412, step = 101 (3.233 sec)\n",
      "INFO:tensorflow:global_step/sec: 51.0143\n",
      "INFO:tensorflow:loss = 1.4849195, step = 201 (1.962 sec)\n",
      "INFO:tensorflow:global_step/sec: 50.2772\n",
      "INFO:tensorflow:loss = 0.5485319, step = 301 (1.988 sec)\n",
      "INFO:tensorflow:global_step/sec: 49.1155\n",
      "INFO:tensorflow:loss = 0.8300238, step = 401 (2.038 sec)\n",
      "INFO:tensorflow:global_step/sec: 50.5307\n",
      "INFO:tensorflow:loss = 0.6256228, step = 501 (1.976 sec)\n",
      "INFO:tensorflow:global_step/sec: 48.9149\n",
      "INFO:tensorflow:loss = 0.53558207, step = 601 (2.046 sec)\n",
      "INFO:tensorflow:global_step/sec: 48.664\n",
      "INFO:tensorflow:loss = 0.54645765, step = 701 (2.053 sec)\n",
      "INFO:tensorflow:global_step/sec: 50.3301\n",
      "INFO:tensorflow:loss = 0.5813713, step = 801 (1.987 sec)\n",
      "INFO:tensorflow:global_step/sec: 49.2293\n",
      "INFO:tensorflow:loss = 0.54277104, step = 901 (2.034 sec)\n",
      "INFO:tensorflow:global_step/sec: 47.8076\n",
      "INFO:tensorflow:loss = 0.5346265, step = 1001 (2.091 sec)\n",
      "INFO:tensorflow:global_step/sec: 50.2592\n",
      "INFO:tensorflow:loss = 0.9726578, step = 1101 (1.987 sec)\n",
      "INFO:tensorflow:global_step/sec: 48.7756\n",
      "INFO:tensorflow:loss = 0.65699774, step = 1201 (2.052 sec)\n",
      "INFO:tensorflow:global_step/sec: 49.4954\n",
      "INFO:tensorflow:loss = 0.555435, step = 1301 (2.021 sec)\n",
      "INFO:tensorflow:global_step/sec: 51.0005\n",
      "INFO:tensorflow:loss = 0.54739153, step = 1401 (1.960 sec)\n",
      "INFO:tensorflow:global_step/sec: 52.1602\n",
      "INFO:tensorflow:loss = 0.5587342, step = 1501 (1.915 sec)\n",
      "INFO:tensorflow:global_step/sec: 51.4341\n",
      "INFO:tensorflow:loss = 0.48812798, step = 1601 (1.945 sec)\n",
      "INFO:tensorflow:global_step/sec: 50.9778\n",
      "INFO:tensorflow:loss = 0.55093056, step = 1701 (1.962 sec)\n",
      "INFO:tensorflow:global_step/sec: 51.8186\n",
      "INFO:tensorflow:loss = 0.6144836, step = 1801 (1.931 sec)\n",
      "INFO:tensorflow:global_step/sec: 50.1602\n",
      "INFO:tensorflow:loss = 0.5464162, step = 1901 (1.993 sec)\n",
      "INFO:tensorflow:Saving checkpoints for 2000 into models/model_DEEP_1552725883/model.ckpt.\n",
      "INFO:tensorflow:Loss for final step: 0.5566521.\n",
      "fit done\n",
      "WARNING:tensorflow:Casting <dtype: 'int32'> labels to bool.\n",
      "WARNING:tensorflow:Casting <dtype: 'int32'> labels to bool.\n",
      "WARNING:tensorflow:Trapezoidal rule is known to produce incorrect PR-AUCs; please switch to \"careful_interpolation\" instead.\n",
      "WARNING:tensorflow:Trapezoidal rule is known to produce incorrect PR-AUCs; please switch to \"careful_interpolation\" instead.\n",
      "INFO:tensorflow:Starting evaluation at 2019-03-16-08:45:37\n",
      "INFO:tensorflow:Graph was finalized.\n",
      "INFO:tensorflow:Restoring parameters from models/model_DEEP_1552725883/model.ckpt-2000\n",
      "INFO:tensorflow:Running local_init_op.\n",
      "INFO:tensorflow:Done running local_init_op.\n",
      "INFO:tensorflow:Evaluation [50/500]\n",
      "INFO:tensorflow:Evaluation [100/500]\n",
      "INFO:tensorflow:Evaluation [150/500]\n",
      "INFO:tensorflow:Evaluation [200/500]\n",
      "INFO:tensorflow:Evaluation [250/500]\n",
      "INFO:tensorflow:Evaluation [300/500]\n",
      "INFO:tensorflow:Evaluation [350/500]\n",
      "INFO:tensorflow:Evaluation [400/500]\n",
      "INFO:tensorflow:Evaluation [450/500]\n",
      "INFO:tensorflow:Evaluation [500/500]\n",
      "INFO:tensorflow:Finished evaluation at 2019-03-16-08:45:48\n",
      "INFO:tensorflow:Saving dict for global step 2000: accuracy = 0.753665, accuracy/baseline_label_mean = 0.251165, accuracy/threshold_0.500000_mean = 0.753665, auc = 0.65248513, auc_precision_recall = 0.39684814, global_step = 2000, labels/actual_label_mean = 0.251165, labels/prediction_mean = 0.2507197, loss = 0.54124963, precision/positive_threshold_0.500000_mean = 0.58178127, recall/positive_threshold_0.500000_mean = 0.06840125\n",
      "{'loss': 0.54124963, 'accuracy': 0.753665, 'labels/prediction_mean': 0.2507197, 'labels/actual_label_mean': 0.251165, 'accuracy/baseline_label_mean': 0.251165, 'auc': 0.65248513, 'auc_precision_recall': 0.39684814, 'accuracy/threshold_0.500000_mean': 0.753665, 'precision/positive_threshold_0.500000_mean': 0.58178127, 'recall/positive_threshold_0.500000_mean': 0.06840125, 'global_step': 2000}\n"
     ]
    }
   ],
   "source": [
    "train_and_eval('DEEP')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "如上测试结果可见：只用Deep模型时，accuracy = 0.753665,loss = 0.54124963, auc = 0.65248513"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From <ipython-input-14-dccde5b1900f>:10: calling DNNLinearCombinedClassifier.__init__ (from tensorflow.contrib.learn.python.learn.estimators.dnn_linear_combined) with fix_global_step_increment_bug=False is deprecated and will be removed after 2017-04-15.\n",
      "Instructions for updating:\n",
      "Please set fix_global_step_increment_bug=True and update training steps in your pipeline. See pydoc for details.\n",
      "INFO:tensorflow:Using config: {'_task_type': None, '_task_id': 0, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f8569fd0c18>, '_master': '', '_num_ps_replicas': 0, '_num_worker_replicas': 0, '_environment': 'local', '_is_chief': True, '_evaluation_master': '', '_train_distribute': None, '_device_fn': None, '_tf_config': gpu_options {\n",
      "  per_process_gpu_memory_fraction: 0.8\n",
      "}\n",
      ", '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_secs': 120, '_log_step_count_steps': 100, '_session_config': None, '_save_checkpoints_steps': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_model_dir': 'models/model_WIDE-DEEP_1552726073'}\n",
      "WARNING:tensorflow:Casting <dtype: 'int32'> labels to bool.\n",
      "WARNING:tensorflow:Casting <dtype: 'int32'> labels to bool.\n",
      "WARNING:tensorflow:Trapezoidal rule is known to produce incorrect PR-AUCs; please switch to \"careful_interpolation\" instead.\n",
      "WARNING:tensorflow:Trapezoidal rule is known to produce incorrect PR-AUCs; please switch to \"careful_interpolation\" instead.\n",
      "INFO:tensorflow:Create CheckpointSaverHook.\n",
      "INFO:tensorflow:Graph was finalized.\n",
      "INFO:tensorflow:Running local_init_op.\n",
      "INFO:tensorflow:Done running local_init_op.\n",
      "INFO:tensorflow:Saving checkpoints for 0 into models/model_WIDE-DEEP_1552726073/model.ckpt.\n",
      "INFO:tensorflow:loss = 108.4896, step = 2\n",
      "INFO:tensorflow:global_step/sec: 5.08391\n",
      "INFO:tensorflow:loss = 0.915145, step = 202 (31.934 sec)\n",
      "INFO:tensorflow:global_step/sec: 8.47793\n",
      "INFO:tensorflow:global_step/sec: 33.2486\n",
      "INFO:tensorflow:loss = 0.6467489, step = 402 (6.242 sec)\n",
      "INFO:tensorflow:global_step/sec: 31.3159\n",
      "INFO:tensorflow:global_step/sec: 31.2026\n",
      "INFO:tensorflow:loss = 0.4597013, step = 602 (6.355 sec)\n",
      "INFO:tensorflow:global_step/sec: 31.3878\n",
      "INFO:tensorflow:global_step/sec: 32.9091\n",
      "INFO:tensorflow:loss = 0.8782328, step = 802 (6.182 sec)\n",
      "INFO:tensorflow:global_step/sec: 32.1063\n",
      "INFO:tensorflow:global_step/sec: 32.3526\n",
      "INFO:tensorflow:loss = 0.564208, step = 1002 (6.159 sec)\n",
      "INFO:tensorflow:global_step/sec: 33.0698\n",
      "INFO:tensorflow:global_step/sec: 31.3466\n",
      "INFO:tensorflow:loss = 0.5135518, step = 1202 (6.351 sec)\n",
      "INFO:tensorflow:global_step/sec: 30.9538\n",
      "INFO:tensorflow:global_step/sec: 32.6906\n",
      "INFO:tensorflow:loss = 0.48183575, step = 1402 (6.245 sec)\n",
      "INFO:tensorflow:global_step/sec: 32.2375\n",
      "INFO:tensorflow:global_step/sec: 31.6638\n",
      "INFO:tensorflow:loss = 0.5498546, step = 1602 (6.255 sec)\n",
      "INFO:tensorflow:global_step/sec: 30.637\n",
      "INFO:tensorflow:global_step/sec: 33.5458\n",
      "INFO:tensorflow:loss = 0.4790377, step = 1802 (6.143 sec)\n",
      "INFO:tensorflow:global_step/sec: 32.5111\n",
      "INFO:tensorflow:global_step/sec: 31.226\n",
      "INFO:tensorflow:loss = 0.480802, step = 2002 (6.390 sec)\n",
      "INFO:tensorflow:Saving checkpoints for 2002 into models/model_WIDE-DEEP_1552726073/model.ckpt.\n",
      "INFO:tensorflow:Loss for final step: 0.480802.\n",
      "fit done\n",
      "WARNING:tensorflow:Casting <dtype: 'int32'> labels to bool.\n",
      "WARNING:tensorflow:Casting <dtype: 'int32'> labels to bool.\n",
      "WARNING:tensorflow:Trapezoidal rule is known to produce incorrect PR-AUCs; please switch to \"careful_interpolation\" instead.\n",
      "WARNING:tensorflow:Trapezoidal rule is known to produce incorrect PR-AUCs; please switch to \"careful_interpolation\" instead.\n",
      "INFO:tensorflow:Starting evaluation at 2019-03-16-08:50:30\n",
      "INFO:tensorflow:Graph was finalized.\n",
      "INFO:tensorflow:Restoring parameters from models/model_WIDE-DEEP_1552726073/model.ckpt-2002\n",
      "INFO:tensorflow:Running local_init_op.\n",
      "INFO:tensorflow:Done running local_init_op.\n",
      "INFO:tensorflow:Evaluation [50/500]\n",
      "INFO:tensorflow:Evaluation [100/500]\n",
      "INFO:tensorflow:Evaluation [150/500]\n",
      "INFO:tensorflow:Evaluation [200/500]\n",
      "INFO:tensorflow:Evaluation [250/500]\n",
      "INFO:tensorflow:Evaluation [300/500]\n",
      "INFO:tensorflow:Evaluation [350/500]\n",
      "INFO:tensorflow:Evaluation [400/500]\n",
      "INFO:tensorflow:Evaluation [450/500]\n",
      "INFO:tensorflow:Evaluation [500/500]\n",
      "INFO:tensorflow:Finished evaluation at 2019-03-16-08:51:02\n",
      "INFO:tensorflow:Saving dict for global step 2002: accuracy = 0.773885, accuracy/baseline_label_mean = 0.251165, accuracy/threshold_0.500000_mean = 0.773885, auc = 0.7512264, auc_precision_recall = 0.51615435, global_step = 2002, labels/actual_label_mean = 0.251165, labels/prediction_mean = 0.24222104, loss = 0.48987985, precision/positive_threshold_0.500000_mean = 0.64224875, recall/positive_threshold_0.500000_mean = 0.2251508\n",
      "{'loss': 0.48987985, 'accuracy': 0.773885, 'labels/prediction_mean': 0.24222104, 'labels/actual_label_mean': 0.251165, 'accuracy/baseline_label_mean': 0.251165, 'auc': 0.7512264, 'auc_precision_recall': 0.51615435, 'accuracy/threshold_0.500000_mean': 0.773885, 'precision/positive_threshold_0.500000_mean': 0.64224875, 'recall/positive_threshold_0.500000_mean': 0.2251508, 'global_step': 2002}\n"
     ]
    }
   ],
   "source": [
    "train_and_eval('WIDE-DEEP')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "两者都用时模型在eval.csv上的性能:accuracy = 0.773885,loss = 0.48987985, auc = 0.7512264"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "总结：\n",
    "1）新增交叉特征，比不做交叉，模型性能更好。原因是某些特征单独对点击影响不大，但交叉组合对是否点击却有很大影响；\n",
    "2）wide放离散和交叉特征,提高记忆能力，deep放连续和离散交叉特征，增强泛化能力，2者同时使用能提升模型的记忆和泛化能力；"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
