{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Otto Product Classification with Logistic Regression"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We use data from Kaggle's 2015 Otto Group Product Classification Challenge as the example, tuning parameters in three ways: LogisticRegression with default parameters, LogisticRegression + GridSearchCV, and LogisticRegressionCV. In practice, either LogisticRegression + GridSearchCV or LogisticRegressionCV alone is sufficient.\n",
    "\n",
    "The Otto dataset, provided by the well-known e-commerce company Otto, poses a multi-class product classification problem with 9 classes. Each sample has 93 numeric features (integers counting how often certain events occurred, already anonymized). Competition page: https://www.kaggle.com/c/otto-group-product-classification-challenge/data\n",
    "\n",
    "\n",
    "1st place: https://www.kaggle.com/c/otto-group-product-classification-challenge/discussion/14335  \n",
    "2nd place: http://blog.kaggle.com/2015/06/09/otto-product-classification-winners-interview-2nd-place-alexander-guschin/"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# First, import the required modules\n",
    "import pandas as pd \n",
    "import numpy as np\n",
    "\n",
    "from sklearn.model_selection import GridSearchCV\n",
    "\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Load the Data & Data Exploration"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style>\n",
       "    .dataframe thead tr:only-child th {\n",
       "        text-align: right;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: left;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>feat_1</th>\n",
       "      <th>feat_2</th>\n",
       "      <th>feat_3</th>\n",
       "      <th>feat_4</th>\n",
       "      <th>feat_5</th>\n",
       "      <th>feat_6</th>\n",
       "      <th>feat_7</th>\n",
       "      <th>feat_8</th>\n",
       "      <th>feat_9</th>\n",
       "      <th>...</th>\n",
       "      <th>feat_85</th>\n",
       "      <th>feat_86</th>\n",
       "      <th>feat_87</th>\n",
       "      <th>feat_88</th>\n",
       "      <th>feat_89</th>\n",
       "      <th>feat_90</th>\n",
       "      <th>feat_91</th>\n",
       "      <th>feat_92</th>\n",
       "      <th>feat_93</th>\n",
       "      <th>target</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>1</td>\n",
       "      <td>0.016393</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>...</td>\n",
       "      <td>0.018182</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>Class_1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>2</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.013158</td>\n",
       "      <td>0.0</td>\n",
       "      <td>...</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>Class_1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>3</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.013158</td>\n",
       "      <td>0.0</td>\n",
       "      <td>...</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>Class_1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>4</td>\n",
       "      <td>0.016393</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.014286</td>\n",
       "      <td>0.315789</td>\n",
       "      <td>0.1</td>\n",
       "      <td>0.131579</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>...</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.015385</td>\n",
       "      <td>0.029851</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>Class_1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>5</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>...</td>\n",
       "      <td>0.018182</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.007692</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>Class_1</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>5 rows × 95 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "   id    feat_1  feat_2  feat_3    feat_4    feat_5  feat_6    feat_7  \\\n",
       "0   1  0.016393     0.0     0.0  0.000000  0.000000     0.0  0.000000   \n",
       "1   2  0.000000     0.0     0.0  0.000000  0.000000     0.0  0.000000   \n",
       "2   3  0.000000     0.0     0.0  0.000000  0.000000     0.0  0.000000   \n",
       "3   4  0.016393     0.0     0.0  0.014286  0.315789     0.1  0.131579   \n",
       "4   5  0.000000     0.0     0.0  0.000000  0.000000     0.0  0.000000   \n",
       "\n",
       "     feat_8  feat_9   ...      feat_85   feat_86   feat_87  feat_88  feat_89  \\\n",
       "0  0.000000     0.0   ...     0.018182  0.000000  0.000000      0.0      0.0   \n",
       "1  0.013158     0.0   ...     0.000000  0.000000  0.000000      0.0      0.0   \n",
       "2  0.013158     0.0   ...     0.000000  0.000000  0.000000      0.0      0.0   \n",
       "3  0.000000     0.0   ...     0.000000  0.015385  0.029851      0.0      0.0   \n",
       "4  0.000000     0.0   ...     0.018182  0.000000  0.000000      0.0      0.0   \n",
       "\n",
       "    feat_90  feat_91  feat_92  feat_93   target  \n",
       "0  0.000000      0.0      0.0      0.0  Class_1  \n",
       "1  0.000000      0.0      0.0      0.0  Class_1  \n",
       "2  0.000000      0.0      0.0      0.0  Class_1  \n",
       "3  0.000000      0.0      0.0      0.0  Class_1  \n",
       "4  0.007692      0.0      0.0      0.0  Class_1  \n",
       "\n",
       "[5 rows x 95 columns]"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Load the data\n",
    "# Try the log(x+1) and tf-idf feature sets yourself and compare the results across features\n",
    "# path to where the data lies\n",
    "dpath = './data/'\n",
    "train = pd.read_csv(dpath +\"Otto_FE_train_org.csv\")\n",
    "train.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prepare the Data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "y_train = train['target']   \n",
    "X_train = train.drop([\"id\", \"target\"], axis=1)\n",
    "\n",
    "# Save the feature names for later use (visualization)\n",
    "feat_names = X_train.columns \n",
    "\n",
    "# Most sklearn estimators also support sparse input, which makes training much faster\n",
    "# To check whether an estimator supports sparse data, see whether its fit signature accepts X: {array-like, sparse matrix}\n",
    "# You can use timeit to compare training time on dense vs. sparse data yourself\n",
    "from scipy.sparse import csr_matrix\n",
    "X_train = csr_matrix(X_train)"
   ]
  },
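  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The comment above suggests timing dense vs. sparse input yourself. Below is a minimal sketch on a small synthetic dataset (not the Otto data), so the absolute timings are illustrative only:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: time LogisticRegression.fit on dense vs. sparse input\n",
    "# (synthetic data with roughly 90% zeros; real timings depend on the dataset)\n",
    "import time\n",
    "import numpy as np\n",
    "from scipy.sparse import csr_matrix\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "\n",
    "rng = np.random.RandomState(0)\n",
    "X_dense = rng.rand(2000, 93) * (rng.rand(2000, 93) < 0.1)  # sparse-like dense array\n",
    "y_demo = rng.randint(0, 9, size=2000)\n",
    "X_sp = csr_matrix(X_dense)\n",
    "\n",
    "for name, X in [('dense', X_dense), ('sparse', X_sp)]:\n",
    "    t0 = time.time()\n",
    "    LogisticRegression(solver='liblinear').fit(X, y_demo)\n",
    "    print('%s fit time: %.3fs' % (name, time.time() - t0))"
   ]
  },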
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Logistic Regression with Default Parameters"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "from sklearn.linear_model import LogisticRegression\n",
    "lr = LogisticRegression()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "logloss of each fold is:  [ 0.79764023  0.79738555  0.79737262]\n",
      "cv logloss is: 0.797466131032\n"
     ]
    }
   ],
   "source": [
    "# Cross-validation is used to evaluate model performance and to tune parameters (model selection)\n",
    "# For classification tasks, cross-validation defaults to StratifiedKFold\n",
    "# The dataset is fairly large, so use 3-fold cross-validation\n",
    "from sklearn.model_selection import cross_val_score\n",
    "loss = cross_val_score(lr, X_train, y_train, cv=3, scoring='neg_log_loss')\n",
    "#%timeit loss_sparse = cross_val_score(lr, X_train_sparse, y_train, cv=3, scoring='neg_log_loss')\n",
    "print('logloss of each fold is: ', -loss)\n",
    "print('cv logloss is:', -loss.mean())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Regularized Logistic Regression and Parameter Tuning"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The hyperparameters of logistic regression to tune are C (the regularization coefficient; candidate values are usually spaced uniformly on a log scale) and the penalty type penalty (L2/L1).\n",
    "The objective function is: J = C * sum(logloss(f(xi), yi)) + penalty\n",
    "\n",
    "Within the sklearn framework, parameter tuning follows the same steps for every estimator:\n",
    "1. Define the parameter search range\n",
    "2. Create an estimator instance (set its parameters)\n",
    "3. Create a GridSearchCV instance (set its parameters)\n",
    "4. Call GridSearchCV's fit method"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "# To compare GridSearchCV and LogisticRegressionCV fairly, both use the same cross-validation splits\n",
    "from sklearn.model_selection import StratifiedKFold\n",
    "fold = StratifiedKFold(n_splits=3, shuffle=True, random_state=777)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 65,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "GridSearchCV(cv=StratifiedKFold(n_splits=3, random_state=777, shuffle=True),\n",
       "       error_score='raise',\n",
       "       estimator=LogisticRegression(C=1.0, class_weight='balanced', dual=False,\n",
       "          fit_intercept=True, intercept_scaling=1, max_iter=100,\n",
       "          multi_class='ovr', n_jobs=1, penalty='l1', random_state=None,\n",
       "          solver='liblinear', tol=0.0001, verbose=0, warm_start=False),\n",
       "       fit_params=None, iid=True, n_jobs=4,\n",
       "       param_grid={'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000]},\n",
       "       pre_dispatch='2*n_jobs', refit=True, return_train_score='warn',\n",
       "       scoring='neg_log_loss', verbose=0)"
      ]
     },
     "execution_count": 65,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sklearn.model_selection import GridSearchCV\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "\n",
    "# Parameters to tune\n",
    "# Try tuning the L1 and L2 penalties separately, each paired with a suitable solver\n",
    "#tuned_parameters = {'penalty':['l1','l2'],\n",
    "#                   'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000]\n",
    "#                   }\n",
    "penaltys = ['l1','l2']\n",
    "Cs = [0.001, 0.01, 0.1, 1, 10, 100, 1000]\n",
    "tuned_parameters = dict(penalty = penaltys, C = Cs)\n",
    "\n",
    "lr_penalty= LogisticRegression(solver='liblinear')\n",
    "grid= GridSearchCV(lr_penalty, tuned_parameters,cv=fold, scoring='neg_log_loss',n_jobs = 4,)\n",
    "grid.fit(X_train,y_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 66,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.729202819153\n",
      "{'C': 1000}\n"
     ]
    }
   ],
   "source": [
    "# examine the best model\n",
    "print(-grid.best_score_)\n",
    "print(grid.best_params_)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "metadata": {},
   "outputs": [
    {
     "ename": "ValueError",
     "evalue": "cannot reshape array of size 7 into shape (7,2)",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mValueError\u001b[0m                                Traceback (most recent call last)",
      "\u001b[0;32m<ipython-input-64-6396f745ef61>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m      8\u001b[0m \u001b[0mn_Cs\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mlen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mCs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      9\u001b[0m \u001b[0mnumber_penaltys\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mlen\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mpenaltys\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 10\u001b[0;31m \u001b[0mtest_scores\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mnp\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0marray\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtest_means\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mreshape\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mn_Cs\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0mnumber_penaltys\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     11\u001b[0m \u001b[0mtrain_scores\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mnp\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0marray\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtrain_means\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mreshape\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mn_Cs\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0mnumber_penaltys\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     12\u001b[0m \u001b[0mtest_stds\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mnp\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0marray\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtest_stds\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mreshape\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mn_Cs\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0mnumber_penaltys\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;31mValueError\u001b[0m: cannot reshape array of size 7 into shape (7,2)"
     ]
    }
   ],
   "source": [
    "# Plot the CV error curves\n",
    "test_means = grid.cv_results_[ 'mean_test_score' ]\n",
    "test_stds = grid.cv_results_[ 'std_test_score' ]\n",
    "train_means = grid.cv_results_[ 'mean_train_score' ]\n",
    "train_stds = grid.cv_results_[ 'std_train_score' ]\n",
    "\n",
    "# plot results\n",
    "n_Cs = len(Cs)\n",
    "number_penaltys = len(penaltys)\n",
    "test_scores = np.array(test_means).reshape(n_Cs,number_penaltys)\n",
    "train_scores = np.array(train_means).reshape(n_Cs,number_penaltys)\n",
    "test_stds = np.array(test_stds).reshape(n_Cs,number_penaltys)\n",
    "train_stds = np.array(train_stds).reshape(n_Cs,number_penaltys)\n",
    "\n",
    "x_axis = np.log10(Cs)\n",
    "for i, value in enumerate(penaltys):\n",
    "    #pyplot.plot(log(Cs), test_scores[i], label= 'penalty:'   + str(value))\n",
    "    plt.errorbar(x_axis, -test_scores[:,i], yerr=test_stds[:,i] ,label = penaltys[i] +' Test')\n",
    "    #plt.errorbar(x_axis, -train_scores[:,i], yerr=train_stds[:,i] ,label = penaltys[i] +' Train')\n",
    "    \n",
    "plt.legend()\n",
    "plt.xlabel( 'log(C)' )                                                                                                      \n",
    "plt.ylabel( 'logloss' )\n",
    "plt.savefig('LogisticGridSearchCV_C.png' )\n",
    "\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The figure above shows the logloss on the training and test folds for the L1 and L2 penalties across different values of the regularization parameter C. On the training folds, larger C (weaker regularization) performs better, while on the test folds performance peaks around C=100 (with the L1 penalty)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {
    "collapsed": true,
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "lr_best = grid.best_estimator_"
   ]
  },
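  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With refit=True (the default), GridSearchCV refits the best parameter combination on the full training set, so lr_best is ready to use for prediction. A minimal sketch of the calls (scored on the training data only to illustrate the API, since Otto's test labels are not public):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: use the refit best model for prediction\n",
    "# (scoring on the training data here only illustrates the API;\n",
    "#  a proper evaluation would use held-out data)\n",
    "from sklearn.metrics import log_loss\n",
    "\n",
    "proba = lr_best.predict_proba(X_train)\n",
    "print('logloss of best model on the training set:', log_loss(y_train, proba))"
   ]
  },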
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Regularized Logistic Regression with LogisticRegressionCV\n",
    "(omitted)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### L1 Regularization"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 61,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "LogisticRegressionCV(Cs=[0.001, 0.01, 0.1, 1, 10, 100, 1000],\n",
       "           class_weight='balanced',\n",
       "           cv=StratifiedKFold(n_splits=3, random_state=777, shuffle=True),\n",
       "           dual=False, fit_intercept=True, intercept_scaling=1.0,\n",
       "           max_iter=100, multi_class='ovr', n_jobs=4, penalty='l1',\n",
       "           random_state=None, refit=True, scoring='neg_log_loss',\n",
       "           solver='liblinear', tol=0.0001, verbose=0)"
      ]
     },
     "execution_count": 61,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sklearn.linear_model import LogisticRegressionCV\n",
    "\n",
    "Cs = [1e-3, 1e-2, 1e-1, 1, 10, 100, 1000]\n",
    "#nCs = 9  #Cs values are chosen in a logarithmic scale between 1e-4 and 1e4.\n",
    "\n",
    "# Many samples (60k+), high dimensionality (93), L1 penalty --> the saga solver is an option (new in version 0.19)\n",
    "# LogisticRegressionCV is faster than GridSearchCV\n",
    "lrcv_L1 = LogisticRegressionCV(Cs=Cs, cv = fold, scoring='neg_log_loss', penalty='l1', solver='liblinear', multi_class='ovr',n_jobs=4)\n",
    "lrcv_L1.fit(X_train, y_train)    "
   ]
  },
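  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In one-vs-rest mode, LogisticRegressionCV fits one binary problem per class, and with refit=True it records the C selected for each class in its C_ attribute. A quick way to inspect what was chosen:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Inspect the per-class C selected by LogisticRegressionCV\n",
    "# (one entry per class in one-vs-rest mode)\n",
    "print(lrcv_L1.C_)"
   ]
  },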
  {
   "cell_type": "code",
   "execution_count": 62,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAZIAAAEKCAYAAAA4t9PUAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMS4wLCBo\ndHRwOi8vbWF0cGxvdGxpYi5vcmcvpW3flQAAIABJREFUeJzt3XucVXW9x//Xe4bhfhMZFbkICAiD\nKNqIlalppqCJVFbazcpzzNKfFb9KPd5OaCcvZZ1zso5YdvulppZK3rAUJcvboCg3kQFFB1RGUO7M\nMDOf3x97TW1xYK571uyZ9/Px2A/2+q71XfuzMnm7bt+vIgIzM7OWKki7ADMzy28OEjMzaxUHiZmZ\ntYqDxMzMWsVBYmZmreIgMTOzVnGQmJlZqzhIzMysVRwkZmbWKt3SLqA9DB48OEaOHJl2GWZmeWXB\nggVvRURxY9t1iSAZOXIkZWVlaZdhZpZXJK1uyna+tGVmZq3iIDEzs1ZxkJiZWas4SMzMrFUcJGZm\n1ioOEjMza5WcBomkqZKWSyqXdFED68+VtEjSQkmPSypJ2kdK2p60L5T0f1l93pf0KZf0P5KUy2Mw\nM7M9y1mQSCoEbgCmASXAmfVBkeWWiJgUEZOBa4Hrs9atjIjJyefcrPafA+cAY5PP1Fwdw0NL3uCW\np17N1e7NzDqFXJ6RTAHKI2JVRFQDtwGnZW8QEZuyFvsAe5xAXtIQoH9EPBGZyeZ/C8xo27L/5Y/P\nVnDlvUt5Y+OOXP2EmVney2WQDAVey1quSNreRdJ5klaSOSO5IGvVKEnPSXpM0tFZ+6xobJ9t5ZKT\nS6itC6598MVc/YSZWd7LZZA0dO/iPWccEXFDRBwIXAhcmjS/DoyIiMOAmcAtkvo3dZ8Aks6RVCap\nrLKyskUHMGLv3px99Cj+9Nwannv17Rbtw8yss8tlkFQAw7OWhwFr97D9bSSXqSKiKiLWJ98XACuB\ncck+hzVlnxExOyJKI6K0uLjRMcd267zjxlDcrwez7l1K5mqamZlly2WQPAOMlTRKUnfgDGBO9gaS\nxmYtngKsSNqLk5v1SBpN5qb6qoh4Hdgs6f3J01pfBO7J4THQt0c3vnPSQTz36jvcs3BPOWhm1jXl\nLEgiogY4H5gLLANuj4glkmZJmp5sdr6kJZIWkrmEdVbSfgzwgqTngTuBcyNiQ7Lua8AvgHIyZyoP\n5OoY6p1++DAmDR3A1Q+8yLbqmlz/nJlZXlFXuFxTWloarR1GvuyVDZz+f09wwfFjmHniQW1UmZlZ\nxyVpQUSUNrad32xvotKRgzj10P25cf4qKt7elnY5ZmYdhoOkGS6aNh4Jrn7AjwObmdVzkDTD0IG9\n+OoxB3LvC6/z9MsbGu9gZtYFOEia6dxjD2TIgJ7MuncJdXWd//6SmVljHCTN1Kt7IRdNG8/iNZu4\nc0FF4x3MzDo5B0kLTD90fw4fMZBr5y5n846daZdjZpYqB0kLSOKKUyfy1pYqfjqvPO1yzMxS5SBp\noUOHD+SThw/jV4+/wur1W9Mux8wsNQ6SVvju1IPoVii+f9+ytEsxM0uNg6QV9u3fk/OOG8NDS9/k\n7+VvpV2OmVkqHCStdPaHRjFsr17M+vNSamrr0i7HzKzdOUhaqWdRIZecPIHlb27m1mdea7yDmVkn\n4yBpA1MP3o8jRw3i+oeWs3GbHwc2s67FQdIGJHH5qSW8s30n//3wirTLMTNrVw6SNjJx/wGcccQI\nfvvEK5Sv25J2OWZm7cZB0ob+3xPH0auokKvuW5p2KWZm7cZB0oYG9+3BBR8Zy6PLK5m3fF3a5ZiZ\ntYucBomkqZKWSyqXdFED68+VtEjSQkmPSypJ2j8qaUGyboGk47P6PJrsc2Hy2SeXx9BcZ31wJKMG\n9+Gqe5ey048Dm1kXkLMgkVQI3ABMA0
qAM+uDIsstETEpIiYD1wLXJ+1vAadGxCQy87j/bpd+n4uI\nycmnQ/2nf/duBVx6ygRWVm7ld0+sTrscM7Ocy+UZyRSgPCJWRUQ1cBtwWvYGEbEpa7EPEEn7cxGx\nNmlfAvSU1COHtbap48fvw9FjB/OTv77Ehq3VaZdjZpZTuQySoUD2G3oVSdu7SDpP0koyZyQXNLCf\nTwLPRURVVtuvkstal0lSQz8u6RxJZZLKKisrW34ULSCJyz9WwtbqWq7/y/J2/W0zs/aWyyBp6C/4\n90wpGBE3RMSBwIXApe/agTQRuAb4albz55JLXkcnny809OMRMTsiSiOitLi4uIWH0HJj9+3H548c\nwS1PvcqLb2xqvIOZWZ7KZZBUAMOzlocBa3ezLWQufc2oX5A0DLgL+GJErKxvj4g1yZ+bgVvIXELr\nkL710XH071XErD8vJcLT8ppZ55TLIHkGGCtplKTuwBnAnOwNJI3NWjwFWJG0DwTuAy6OiL9nbd9N\n0uDkexHwMWBxDo+hVQb27s63ThjHP1au56Glb6ZdjplZTuQsSCKiBjgfmAssA26PiCWSZkmanmx2\nvqQlkhYCM8k8oUXSbwxw2S6P+fYA5kp6AVgIrAFuytUxtIXPHTmCsfv05b/uX0ZVTW3a5ZiZtTl1\nhUsupaWlUVZWltrvz3+pki/e/DQXTRvPuccemFodZmbNIWlBRJQ2tp3fbG8Hx4wr5oQJ+/DTR8pZ\nt3lH2uWYmbUpB0k7ueSUEqpqavnhXD8ObGadi4OknYwa3IcvfXAkdyyoYPGajWmXY2bWZhwk7ej/\n+chYBvXuzvf+vMSPA5tZp+EgaUf9exbx7ZMO4plX3ua+Ra+nXY6ZWZtwkLSzT5cOZ8KQ/vzg/hfZ\nsdOPA5tZ/nOQtLPCAnHFqSWseWc7s+evSrscM7NWc5Ck4P2j92bawfvx80dX8sZGPw5sZvnNQZKS\n/zh5ArURXPPgi2mXYmbWKg6SlAwf1Jt/P3oUdz23hmdffTvtcszMWsxBkqKvf3gM+/Trwff+vJS6\nOj8ObGb5yUGSoj49uvHdqeN5/rV3uHvhmrTLMTNrEQdJyj5x2FAOHTaAax58ka1VNWmXY2bWbA6S\nlBUUiMtPncibm6r4v8dWNt7BzKyDcZB0AO87YC9Om7w/N85fxWsbtqVdjplZszhIOogLp46nQHD1\nA34c2MzyS06DRNJUScsllUu6qIH150palMyA+Likkqx1Fyf9lks6qan7zFf7D+zF144dw32LXuep\nVevTLsfMrMlyFiSSCoEbgGlACXBmdlAkbomISRExGbgWuD7pW0JmjveJwFTgZ5IKm7jPvHXOMaPZ\nf0BPZt27lFo/DmxmeSKXZyRTgPKIWBUR1cBtwGnZG0TEpqzFPkD9356nAbdFRFVEvAyUJ/trdJ/5\nrFf3Qi46eQJL1m7ijrLX0i7HzKxJchkkQ4Hsvw0rkrZ3kXSepJVkzkguaKRvk/aZz049ZAilB+zF\nDx9azuYdO9Mux8ysUbkMEjXQ9p7rNRFxQ0QcCFwIXNpI3ybtE0DSOZLKJJVVVlY2seT0SeKKUyey\nfms1P32kPO1yzMwalcsgqQCGZy0PA9buYfvbgBmN9G3yPiNidkSURkRpcXFxM0tP16RhAzj98GHc\n/PeXefmtrWmXY2a2R7kMkmeAsZJGSepO5ub5nOwNJI3NWjwFWJF8nwOcIamHpFHAWODppuyzs/jO\n1IPoXljA9+9blnYpZmZ7lLMgiYga4HxgLrAMuD0ilkiaJWl6stn5kpZIWgjMBM5K+i4BbgeWAg8C\n50VE7e72matjSNM+/Xpy3vFj+OuyN3l8xVtpl2NmtluK6PyPmZaWlkZZWVnaZTTbjp21nPjj+fQs\nKuD+C46mW6HfHzWz9iNpQUSUNrad/2bqwHoWFfIfJ0/gpTe3cOvTr6ZdjplZgxwkHdxJE/flA6P3\n5k
d/eYl3tlWnXY6Z2Xs4SDo4SVx+agmbtu/kJ39d0XgHM7N25iDJAxOG9OfMKSP43ZOrKV+3Oe1y\nzMzexUGSJ2Z+dBy9uxcy695ldIUHJMwsfzhI8sTefXvwjY+MZf5Llcxbvi7tcszM/slBkke++IGR\njB7ch6vuXUZ1TV3a5ZiZAQ6SvNK9WwGXfayEVW9t5bdPvJJ2OWZmgIMk7xw3fh+OHVfMfz+8gvVb\nqtIux8zMQZKPLvvYBLZV13L9X15KuxQzMwdJPhqzTz++8P4DuPXpV1n2+qbGO5iZ5ZCDJE9964Rx\nDOhVxKw/L/XjwGaWKgdJnhrQu4iZHx3HE6vWM3fJm2mXY2ZdmIMkj505ZQQH7duP/7p/GTt21qZd\njpl1UQ6SPNatsIDLTy3h1Q3buPnvL6ddjpl1UQ6SPHfUmMF8tGRfbniknHWbdqRdjpl1QQ6STuCS\nkydQXVvHdXOXp12KmXVBOQ0SSVMlLZdULumiBtbPlLRU0guSHpZ0QNJ+nKSFWZ8dkmYk634t6eWs\ndZNzeQz5YOTgPnzlqFHc+WwFL1S8k3Y5ZtbF5CxIJBUCNwDTgBLgTEklu2z2HFAaEYcAdwLXAkTE\nvIiYHBGTgeOBbcBDWf2+U78+Ihbm6hjyyfnHj2HvPt39OLCZtbtcnpFMAcojYlVEVAO3Aadlb5AE\nxrZk8UlgWAP7OR14IGs7a0C/nkV856SDKFv9Nn9+4fW0yzGzLiSXQTIUeC1ruSJp252zgQcaaD8D\nuHWXtu8nl8N+LKlHQzuTdI6kMklllZWVzak7b53+vuFM3L8/V9+/jO3VfhzYzNpHLoNEDbQ1eM1F\n0ueBUuC6XdqHAJOAuVnNFwPjgSOAQcCFDe0zImZHRGlElBYXFze/+jxUWCCuOHUiazfuYPb8VWmX\nY2ZdRC6DpAIYnrU8DFi760aSTgAuAaZHxK7D2X4auCsidtY3RMTrkVEF/IrMJTRLTBk1iFMOGcLP\nHytn7Tvb0y7HzLqAXAbJM8BYSaMkdSdziWpO9gaSDgNuJBMiDU37dya7XNZKzlKQJGAGsDgHtee1\ni6eNJwKuefDFtEsxsy4gZ0ESETXA+WQuSy0Dbo+IJZJmSZqebHYd0Be4I3mU959BI2kkmTOax3bZ\n9e8lLQIWAYOBq3J1DPlq2F69OeeY0dyzcC0LVr+ddjlm1smpKzwqWlpaGmVlZWmX0a62VtVw/I8e\nZb/+Pbnr60dRUNDQLSszs92TtCAiShvbrtlnJJIKJPVvWVnWXvr06MZF08bzfMVG7npuTdrlmFkn\n1qQgkXSLpP6S+gBLgeWSvpPb0qy1Tjt0KJOHD+SaB19ka1VN2uWYWSfV1DOSkojYRObm9v3ACOAL\nOavK2kRBgbji1BLWba7iZ4+Wp12OmXVSTQ2SIklFZILknuRx3M5/c6UTOGzEXnz8sKHc9LeXeW2D\nBwcws7bX1CC5EXgF6APMTwZX9GTheeLCqeMplJh179K0SzGzTqhJQRIR/xMRQyPi5ORlwNXAcTmu\nzdrIfgN68o0TxvKXpW/y8DJPy2tmbaupN9u/kdxsl6RfSnqWzKi8lie+ctQoxu7TlyvmLPE4XGbW\nppp6aesryc32E4Fi4MvA1Tmrytpc924FXDnjYCre3u4b72bWppoaJPVvs50M/CoinqfhQRmtA3v/\n6L35xGFDufGxVays3JJ2OWbWSTQ1SBZIeohMkMyV1A+oy11ZlisXnzyBHkUFXHHPEk+AZWZtoqlB\ncjZwEXBEMsFUdzKXtyzPFPfrwXdPOojHy9/iXk+AZWZtoKlPbdWRGQb+Ukk/BD4YES/ktDLLmc8e\neQCThg7gynuXsnnHzsY7mJntQVOf2roa+AaZ4VGWAhdI+kEuC7PcKSwQV844mMotVfzkryvSLsfM\n8lxTL22dDHw0Im6OiJuBqcApuSvLcm3y8IF8dsoIfv2PV1i61u+W
mlnLNWf034FZ3we0dSHW/r57\n0ngG9irisnsWU1fnG+9m1jJNDZIfAM9J+rWk3wALgP/KXVnWHgb0LuLikyewYPXb3LmgIu1yzCxP\nNfVm+63A+4E/JZ8PRMRtjfWTNFXScknlki5qYP1MSUslvSDp4WQMr/p1tcmsibvOnDhK0lOSVkj6\nQzKNr7XQJw8fypSRg/jBA8t4e2t12uWYWR7aY5BIOrz+AwwBKoDXgP2Ttj31LQRuAKYBJcCZkkp2\n2ew5oDQiDgHuBK7NWrc9IiYnn+lZ7dcAP46IscDbZB5NthaSMjfeN+2o4dq5nuPdzJqvWyPrf7SH\ndcGex9uaApRHxCoASbcBp5F56iuzg4h5Wds/CXx+T8VIUvKbn02afgP8J/DzPfWzPTtov36c/aFR\nzJ6/ik+VDufwEXulXZKZ5ZE9BklEtGaE36Fkzl7qVQBH7mH7s4EHspZ7SioDaoCrI+JuYG/gnYio\nn+6vIvkda6VvfGQscxau5dK7FjPn/KPoVtjsWZjNrItq7IwEAEmfaKB5I7AoItbtrlsDbQ0+GiTp\n80ApcGxW84iIWCtpNPCIpEU0PAfK7vZ5DnAOwIgRI3ZTotXr06MbV5xawtd+/yy/e3I1Xz5qVNol\nmVmeaM4QKb8APpd8bgJmAn+XtLspdyuA4VnLw4C1u24k6QTgEmB6RFTVt0fE2uTPVcCjwGHAW8BA\nSfUB2OA+k36zI6I0IkqLi4ubeJhd29SD9+PYccX86KGXWLdpR9rlmFmeaGqQ1AETIuKTEfFJMjfP\nq8hcqrpwN32eAcYmT1l1B84A5mRvIOkwMrMvTs8+s5G0l6QeyffBwFHA0siMMjgPOD3Z9CzgniYe\ngzVCEt+bPpHq2jquum9Z2uWYWZ5oapCMjIjsqfXWAeMiYgPQ4GBNyX2M84G5wDLg9ohYImmWpPqn\nsK4D+gJ37PKY7wSgTNLzZILj6oiov0l/ITBTUjmZeya/bOIxWBOMHNyHrx17IHOeX8vfy99Kuxwz\nywNqylDikn4GjADuSJpOJ3Mj/TvAva28KZ9zpaWlUVZWlnYZeWPHzlpO+sl8CgvEA984mh7dCtMu\nycxSIGlBRJQ2tl1Tz0jOA34FTCZzr+I3wHkRsbWjh4g1X8+iQv5z+kRWVW7lF397Oe1yzKyDa+qb\n7QE8DjwC/BWYH54VqVM77qB9mHbwfvzvIyt4bcO2tMsxsw6sqcPIfxp4mswlrU8DT0k6fc+9LN9d\n9rESCiS+9+eljW9sZl1WUy9tXUJmdsSzIuKLZN5avyx3ZVlHsP/AXnzzhLH8ddmb/GXpm413MLMu\nqalBUrDLi4frm9HX8tiXjxrFuH378p9zlrC9ujbtcsysA2pqGDwoaa6kL0n6EnAfcH/uyrKOoqiw\ngKtmTGLNO9v56TzPpmhm79XUm+3fAWYDhwCHArMjYncvIlonM2XUID55+DBmz19F+botaZdjZh1M\nky9PRcQfI2JmRHwrIu7KZVHW8Vx88nh6FRVy+T2L8QN7ZpatsflINkva1MBnsyRP9N2FDO7bg+9M\nHc8/Vq5nzvMNDm9mZl3UHoMkIvpFRP8GPv0ion97FWkdw2enjOCQYQO46r5lbNrR4Mg4ZtYF+ckr\na7LCAnHVjIN5a0sV1z/0UtrlmFkH4SCxZjlk2EA+f+QB/PaJV1i8ZmPa5ZhZB+AgsWb79okHMahP\ndy69ezF1db7xbtbVOUis2Qb0LuI/Tp7Awtfe4Q9lrzXewcw6NQeJtcjHDxvKlFGDuObBF9mwtTrt\ncswsRQ4SaxEpc+N9y44arnngxbTLMbMUOUisxcbt24+zjx7FH8peY8HqDWmXY2YpyWmQSJoqabmk\nckkXNbB+pqSlkl6Q9LCkA5L2yZKekLQkWfeZrD6/lvRyMjXvQkmTc3kMtmcXHD+W/Qf05JK7FlNT\nW5d2OWaWgpwFiaRC4AZgGlAC
nCmpZJfNngNKI+IQ4E7g2qR9G/DFiJgITAV+ImlgVr/vRMTk5LMw\nV8dgjevToxuXn1rCi29s5jdPrE67HDNLQS7PSKYA5RGxKiKqgduA07I3iIh5EVE//d6TwLCk/aWI\nWJF8XwusA4pzWKu1wkkT9+PDBxVz/UPLeWPjjrTLMbN2lssgGQpkPxtakbTtztnAA7s2SpoCdAdW\nZjV/P7nk9WNJPdqiWGs5SXxv+kR21gVX3efZFM26mlwGiRpoa/DtNUmfB0qB63ZpHwL8DvhyRNRf\ngL8YGA8cAQwCGhzOXtI5ksoklVVWVrbsCKzJDti7D+d9eAz3vvA6f1vh/73NupJcBkkFMDxreRjw\nnmFjJZ1AZirf6RFRldXen8wEWpdGxJP17RHxemRUAb8icwntPSJidkSURkRpcbGvirWHrx47mpF7\n9+bye5ZQVePZFM26ilwGyTPAWEmjJHUHzgDmZG8g6TDgRjIhsi6rvTtwF/DbiLhjlz5Dkj8FzAAW\n5/AYrBl6FhUy67SDefmtrcx+bFXa5ZhZO8lZkEREDXA+MBdYBtweEUskzZI0PdnsOqAvcEfyKG99\n0HwaOAb4UgOP+f5e0iJgETAYuCpXx2DNd8y4Yk6ZNISfzivn1fXbGu9gZnlPXWG2u9LS0igrK0u7\njC7jjY07+MiPHmXKqEHc/KUjyJw8mlm+kbQgIkob285vtlub229AT7710XHMW17JQ0vfTLscM8sx\nB4nlxFkfHMn4/fox689L2VZdk3Y5ZpZDDhLLiaLCAq6ccTBr3tnO/z5SnnY5ZpZDDhLLmSNGDuL0\n9w3jpvmrWPHm5rTLMbMccZBYTl08bTx9enTjsnsW0xUe7DDrihwkllN79+3Bd6cexJOrNnDPwve8\nj2pmnYCDxHLujCNGcOjwgVx13zI2bt+Zdjlm1sYcJJZzhQXiqtMOZsPWKq5/aHna5ZhZG3OQWLuY\nNGwAX3j/AfzuydUsqtiYdjlm1oYcJNZuZp54EIP69ODSuxdRW+cb72adhYPE2s2AXkVcesoEnq/Y\nyG3PvJp2OWbWRhwk1q5Om7w/7x89iGsfXM5bW6oa72BmHZ6DxNqVJK6acTBbq2q4+oEX0y7HzNqA\ng8Ta3Zh9+vFvR4/mzgUVPP3yhrTLMbNWcpBYKi74yBj2H9CTy+5ezM7ausY7mFmH5SCxVPTu3o0r\npk9k+Zub+c0/Xkm7HDNrhZwGiaSpkpZLKpd0UQPrZ0paKukFSQ9LOiBr3VmSViSfs7La3ydpUbLP\n/5FnTcpbJ5bsy/Hj9+HHf3mJ1zduT7scM2uhnAWJpELgBmAaUAKcKalkl82eA0oj4hDgTuDapO8g\n4ArgSGAKcIWkvZI+PwfOAcYmn6m5OgbLLUn856kTqakLrrp3WdrlmFkL5fKMZApQHhGrIqIauA04\nLXuDiJgXEfUTez8JDEu+nwT8JSI2RMTbwF+AqZKGAP0j4onIDCX7W2BGDo/BcmzE3r05/7gx3Lfo\ndR57qTLtcsysBXIZJEOB17KWK5K23TkbeKCRvkOT703dp+WBc44dzajBfbjinsXs2Fmbdjlm1ky5\nDJKG7l00OC6GpM8DpcB1jfRtzj7PkVQmqayy0v+l25H16FbIrNMm8sr6bdz42Kq0yzGzZsplkFQA\nw7OWhwHvmZBC0gnAJcD0iKhqpG8F/7r8tdt9AkTE7IgojYjS4uLiFh+EtY+jxxbzsUOGcMOj5axe\nvzXtcsysGXIZJM8AYyWNktQdOAOYk72BpMOAG8mEyLqsVXOBEyXtldxkPxGYGxGvA5slvT95WuuL\nwD05PAZrR5d9rITuhQVcfs8Sz6ZolkdyFiQRUQOcTyYUlgG3R8QSSbMkTU82uw7oC9whaaGkOUnf\nDcCVZMLoGWBW0gbwNeAXQDmwkn/dV7E8t2//nnzzhLE89lIlc5e8kXY5ZtZE6gr/5VdaWhplZW
Vp\nl2FNUFNbx8f+93E2bt/JX2ceS58e3dIuyazLkrQgIkob285vtluH0q2wgKtmHMzrG3fwPw+vSLsc\nM2sCB4l1OKUjB/Hp0mH88vGXWf7G5rTLMbNGOEisQ7po2gT69uzGZXcv9o13sw7OQWId0qA+3blw\n6niefmUDf3p2TdrlmNkeOEisw/pM6XAmDx/IDx5YxsZtO9Mux8x2w0FiHVZBQWY2xQ1bq/nhQ8vT\nLsfMdsNBYh3awUMH8MUPjOT/e2o1L1S8k3Y5ZtYAB4l1eDNPHMfgvj249O7F1Nb5xrtZR+MgsQ6v\nf88iLj1lAi9UbOSWp19Nuxwz24WDxPLC9EP35wOj9+baB1+kcnNV4x3MrN04SCwvSOLKGRPZsbOW\nHzzg2RTNOhIHieWNMfv049+PHs2fnl3DzD8sZMWbfuvdrCPwiHiWVy74yFiqaur4/VOr+dNzazhp\n4r58/cNjOHT4wLRLM+uyPPqv5aX1W6r49T9e4Tf/eIVNO2r40JjBfP24A/nA6L3JTFVjZq3V1NF/\nHSSW1zbv2Mnvn3qVX/ztZd7aUsVhIwby9Q+P4SPj96GgwIFi1hoOkiwOks5vx85a7lhQwY2PraTi\n7e0ctG8/vn7cgZwyaQjdCn0r0KwlHCRZHCRdx87aOv78/Fp+/uhKVqzbwohBvfnqsaP55OHD6FlU\nmHZ5ZnmlQ0xsJWmqpOWSyiVd1MD6YyQ9K6lG0ulZ7cclU+/Wf3ZImpGs+7Wkl7PWTc7lMVh+KSos\n4BOHD2PuN4/hxi+8j716F3HJXYs55tp53DR/FVuratIu0azTydkZiaRC4CXgo0AFmbnXz4yIpVnb\njAT6A98G5kTEnQ3sZxCZ+dmHRcQ2Sb8G7m1o293xGUnXFRH8vXw9P3u0nH+sXM+AXkV86YMj+dIH\nR7JXn+5pl2fWoTX1jCSXj/9OAcojYlVS0G3AacA/gyQiXknW1e1hP6cDD0TEttyVap2VJD40djAf\nGjuY5159m589upL/fngFN/1tFZ87cgT/dvRo9u3fM+0yzfJaLi9tDQVey1quSNqa6wzg1l3avi/p\nBUk/ltSjpQVa13LYiL246YulzP3mMZxYsi+/fPxljr5mHhf/aRGr129NuzyzvJXLIGno2ctmXUeT\nNASYBMzNar4YGA8cAQwCLtxN33MklUkqq6ysbM7PWid30H79+MkZh/Hot4/jU6XD+OOCCo774aNc\ncOtzvPjGprTLM8s7uQySCmB41vIwYG0z9/Fp4K6I+Of0eBHxemRUAb8icwntPSJidkSURkRpcXFx\nM3/WuoIRe/fm+x+fxOMXHse/Hz2ah5e9ydSf/I1/+80zLFj9dtrlmeWNXAbJM8BYSaMkdSdziWpO\nM/dxJrtc1krOUlDm9eUZwOKL77bkAAAKj0lEQVQ2qNW6sH369+Tikyfw94uO51snjKNs9dt88uf/\n4IzZT/C3FZV0hUfkzVojp++RSDoZ+AlQCNwcEd+XNAsoi4g5ko4A7gL2AnYAb0TExKTvSODvwPCI\nqMva5yNAMZlLZwuBcyNiy57q8FNb1hxbq2q49elXuelvq3hzUxWThg7gvOMO5MSS/fy2vHUpfiEx\ni4PEWqKqppa7nl3Dzx9byer12xizT1++duyBTJ+8P0V+W966AAdJFgeJtUZNbR33L36Dn80r58U3\nNjN0YC++euxoPl063G/LW6fmIMniILG2EBHMW76OG+atZMHqtxnctztf+dAoPv/+A+jfsyjt8sza\nnIMki4PE2lJE8PTLG7jh0ZXMf6mSfj27cdYHRvLlo0ayd1+/1mSdh4Mki4PEcmVRxUZ+9mg5Dy55\ngx7dCjjjiBGcc8xo9h/YK+3SzFrNQZLFQWK5Vr5uC//32Erufm4NEnz8sKGce+yBjC7um3ZpZi3m\nIMniILH2suad7dw0fxW3Pv0q1bV1nHzwEL724QM5eOiAtE
szazYHSRYHibW3t7ZUcfPjL/O7J1az\nuaqGY8cVc95xY5gyalDapZk1mYMki4PE0rJpx05+98Rqbn78ZdZvreaIkXvx9ePG8OFxxZ5b3jo8\nB0kWB4mlbXt1LbeXvcaNj61k7cYdlAzpz9ePO5BpBw+h0G/LWwflIMniILGOorqmjnsWZt6WX1W5\nlYG9ixjQq4heRYX06l5Ir6JCencvpGfyZ6a927/auxfSO2vbBvt0L6Rnt0IP52Kt1hEmtjKzXXTv\nVsCnSofzicOH8dCSN5i/4i22V9ewrbqW7Ttr2V5dy8btO9meLNe3V9fsae63hvUsKqB3EkINBc97\ngig7pLLDK2v7+u16FRXm9TAxEUFEZl6LiEj+hCDTvvt+Wd93mRXj3eve+3u7X5e90Pb7H9iriG45\n/mflIDFLQWGBmDZpCNMmDWnS9jW1deyoqWNbdQ07quvYtrMmEza7BE728o6dtWyrrmF7dR3bk+23\nVdfyzrZqXt+YvU2mT3MvThQV6l0hU1igf/0ltoe/pOt/p8F11K/fzV/0yfc97v+f6xveR1fz15nH\nMmaf3D6G7iAxywPdCgvoW1hA3x65+Vc2IqiqqcuETX0g/TOUat4VOPXrtu3yva4uQJlhuSX9c2Y7\n7domEMpqz1pONmhwXbIPGmxv4v6Tzg3v+937z/avo3n3ul0vHr57nXa7blfZD17oXe271tFwnz3V\nVdwOoy04SMwMKXN20bOokL3SLsbyTv5e5DQzsw7BQWJmZq3iIDEzs1bJaZBImippuaRySRc1sP4Y\nSc9KqpF0+i7raiUtTD5zstpHSXpK0gpJf0jmgzczs5TkLEgkFQI3ANOAEuBMSSW7bPYq8CXglgZ2\nsT0iJief6Vnt1wA/joixwNvA2W1evJmZNVkuz0imAOURsSoiqoHbgNOyN4iIVyLiBaBJb1sp87zb\n8cCdSdNvgBltV7KZmTVXLoNkKPBa1nJF0tZUPSWVSXpSUn1Y7A28ExE1je1T0jlJ/7LKysrm1m5m\nZk2Uy/dIGnr9pjnvlY6IiLWSRgOPSFoEbGrqPiNiNjAbMmNtNeN3zcysGXIZJBXA8KzlYcDapnaO\niLXJn6skPQocBvwRGCipW3JW0qR9Lliw4C1Jq5tRe7bBwFst7NvRdJZj6SzHAT6WjqqzHEtrj+OA\npmyUyyB5BhgraRSwBjgD+GxTOkraC9gWEVWSBgNHAddGREiaB5xO5p7LWcA9je0vIopbeAxIKmvK\n6Jf5oLMcS2c5DvCxdFSd5Vja6zhydo8kOWM4H5gLLANuj4glkmZJmg4g6QhJFcCngBslLUm6TwDK\nJD0PzAOujoilyboLgZmSysncM/llro7BzMwal9OxtiLifuD+Xdouz/r+DJnLU7v2+wcwaTf7XEXm\niTAzM+sA/GZ742anXUAb6izH0lmOA3wsHVVnOZZ2OY4uMUOimZnljs9IzMysVRwkTSDpSkkvJON+\nPSRp/7RrailJ10l6MTmeuyQNTLumlpD0KUlLJNVJysunaxobiy5fSLpZ0jpJi9OupTUkDZc0T9Ky\n5P9b30i7ppaS1FPS05KeT47lezn9PV/aapyk/hGxKfl+AVASEeemXFaLSDoReCQiaiRdAxARF6Zc\nVrNJmkBmaJ0bgW9HRFnKJTVLMhbdS8BHybxz9QxwZtbTiXlD0jHAFuC3EXFw2vW0lKQhwJCIeFZS\nP2ABMCNP/5kI6BMRWyQVAY8D34iIJ3Pxez4jaYL6EEn0oXlv6HcoEfFQ1hAzT9LAU3P5ICKWRcTy\ntOtohUbHossXETEf2JB2Ha0VEa9HxLPJ981kXltozrBOHUZkbEkWi5JPzv7ecpA0kaTvS3oN+Bxw\neWPb54mvAA+kXUQX1dqx6CyHJI0kM5rGU+lW0nKSCiUtBNYBf4mInB2LgyQh6a+SFjfwOQ0gIi6J\niOHA78m8aNlhNXYsyT
aXADVkjqdDaspx5LHWjkVnOSKpL5nhmL65y9WIvBIRtRExmcxVhymScnbZ\nMacvJOaTiDihiZveAtwHXJHDclqlsWORdBbwMeAj0YFvkjXjn0k+atVYdJYbyf2EPwK/j4g/pV1P\nW4iId5LxCqcCOXkgwmckTSBpbNbidODFtGppLUlTyQwzMz0itqVdTxf2z7Hoklk+zwDmNNLHcii5\nQf1LYFlEXJ92Pa0hqbj+iUxJvYATyOHfW35qqwkk/RE4iMxTQquBcyNiTbpVtUwyRlkPYH3S9GQ+\nPoEm6ePA/wLFwDvAwog4Kd2qmkfSycBPgELg5oj4fsoltYikW4EPkxlp9k3giojIuzHwJH0I+Buw\niH9NtvcfyVBPeUXSIWQm/iskc8Jwe0TMytnvOUjMzKw1fGnLzMxaxUFiZmat4iAxM7NWcZCYmVmr\nOEjMzKxVHCRmbUDSlsa32mP/OyWNTr73lXSjpJXJyK3zJR0pqXvy3S8SW4fiIDFLmaSJQGEyjTTA\nL8gMgjg2IiYCXwIGJ4M7Pgx8JpVCzXbDQWLWhpRxXTIm2CJJn0naCyT9LDnDuFfS/ZJOT7p9Drgn\n2e5A4Ejg0oioA0hGCL4v2fbuZHuzDsOnyGZt6xPAZOBQMm96PyNpPnAUMBKYBOxDZojym5M+RwG3\nJt8nknlLv3Y3+18MHJGTys1ayGckZm3rQ8CtycirbwKPkfmL/0PAHRFRFxFvAPOy+gwBKpuy8yRg\nqpOJl8w6BAeJWdtqaHj4PbUDbAd6Jt+XAIdK2tO/mz2AHS2ozSwnHCRmbWs+8JlkUqFi4BjgaTJT\nnX4yuVeyL5lBDustA8YARMRKoAz4XjIaLZLG1s/BImlvoDIidrbXAZk1xkFi1rbuAl4AngceAb6b\nXMr6I5k5SBaTmWf+KWBj0uc+3h0s/wbsB5RLWgTcxL/mKjkOyLvRaK1z8+i/Zu1EUt+I2JKcVTwN\nHBURbyTzRcxLlnd3k71+H38CLs7z+eqtk/FTW2bt595ksqHuwJXJmQoRsV3SFWTmbH91d52TCbDu\ndohYR+MzEjMzaxXfIzEzs1ZxkJiZWas4SMzMrFUcJGZm1ioOEjMzaxUHiZmZtcr/DxpUGTWUl12N\nAAAAAElFTkSuQmCC\n",
      "text/plain": [
       "<matplotlib.figure.Figure at 0x10deb5810>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# scores_: dict keyed by class; each value is the grid of scores obtained\n",
    "# during cross-validation, with shape (n_folds, len(Cs))\n",
    "Cs = [1e-3, 1e-2, 1e-1, 1, 10, 100, 1000]\n",
    "n_Cs = len(Cs)\n",
    "n_classes = 9\n",
    "scores = np.zeros((n_classes, n_Cs))\n",
    "\n",
    "for j in range(n_classes):\n",
    "    scores[j, :] = np.mean(lrcv_L1.scores_['Class_' + str(j + 1)], axis=0)\n",
    "\n",
    "logloss_mean = -np.mean(scores, axis=0)\n",
    "plt.plot(np.log10(Cs), logloss_mean.reshape(n_Cs, 1))\n",
    "plt.xlabel('log10(C)')\n",
    "plt.ylabel('logloss')\n",
    "plt.show()\n",
    "\n",
    "#print('C is:', lrcv_L1.C_)  # for multiclass problems there is one C per per-class classifier\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "metadata": {
    "scrolled": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([ 0.35148217,  0.26520595,  0.18372371,  0.16013526,  0.15810339,\n",
       "        0.15829237,  0.15834754])"
      ]
     },
     "execution_count": 63,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "logloss_mean"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This score does not seem to match the score obtained with GridSearchCV :("
   ]
  },
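  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One likely reason for the mismatch: under one-vs-rest fitting, `LogisticRegressionCV` stores a separate binary score for each class, and averaging those per-class binary log-losses is not the same quantity as the multiclass log-loss that `GridSearchCV` computes with `scoring='neg_log_loss'`. A minimal sketch of the difference (the probabilities below are made up purely for illustration):"
   ]
  },

```python
import numpy as np
from sklearn.metrics import log_loss

# Toy 3-class predicted probabilities for 4 samples (rows sum to 1);
# the numbers are made up purely for illustration
proba = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.2, 0.3, 0.5],
                  [0.6, 0.3, 0.1]])
y = np.array([0, 1, 2, 0])

# Multiclass log-loss, the quantity GridSearchCV computes with
# scoring='neg_log_loss'
multiclass_ll = log_loss(y, proba, labels=[0, 1, 2])

# Mean of the per-class one-vs-rest binary log-losses, which is what
# averaging the per-class entries of scores_ amounts to under OvR fitting
ovr_lls = [log_loss((y == k).astype(int), proba[:, k], labels=[0, 1])
           for k in range(3)]
mean_ovr_ll = float(np.mean(ovr_lls))

print(multiclass_ll, mean_ovr_ll)  # two different quantities
```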
  {
   "cell_type": "code",
   "execution_count": 52,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(10, 0.13541984939939911)\n"
     ]
    }
   ],
   "source": [
    "best_idx = np.argmin(logloss_mean)  # index of the best C in Cs\n",
    "best_score = np.min(logloss_mean)\n",
    "print(Cs[best_idx], best_score)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Save the model for later testing"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import pickle  # cPickle is Python 2 only; on the Python 3 kernel use pickle\n",
    "\n",
    "with open(\"Otto_L1_org.pkl\", 'wb') as f:\n",
    "    pickle.dump(grid.best_estimator_, f)"
   ]
  },
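  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For the later testing step, the pickled estimator can be loaded back with `pickle.load`. A minimal round-trip sketch on a toy model (`make_classification` stands in for the Otto data, and the file name `Otto_L1_org_demo.pkl` is just a placeholder):"
   ]
  },

```python
import pickle

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Fit a small stand-in model (make_classification stands in for the Otto data)
X, y = make_classification(n_samples=200, n_features=10, n_informative=5,
                           n_classes=3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Round-trip through pickle, mirroring the dump above
# (Otto_L1_org_demo.pkl is a placeholder file name)
with open("Otto_L1_org_demo.pkl", "wb") as f:
    pickle.dump(clf, f)
with open("Otto_L1_org_demo.pkl", "rb") as f:
    restored = pickle.load(f)

# The restored estimator predicts identically to the original
print(np.allclose(clf.predict_proba(X), restored.predict_proba(X)))
```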
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The penalty was not strong enough, so the coefficients did not come out sparse."
   ]
  }
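,
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The sparsity claim can be checked directly by counting zero coefficients. A sketch comparing a strong and a weak L1 penalty on synthetic data (not the Otto features):"
   ]
  }

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

def n_zero_coefs(C):
    # liblinear supports the L1 penalty for logistic regression
    clf = LogisticRegression(penalty='l1', solver='liblinear', C=C).fit(X, y)
    return int(np.sum(clf.coef_ == 0))

# A small C (strong penalty) zeroes out many coefficients; a large C
# (weak penalty, as above) typically leaves them all nonzero
n_zero_small = n_zero_coefs(0.01)
n_zero_large = n_zero_coefs(100)
print(n_zero_small, n_zero_large)
```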
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
