{
 "metadata": {
  "celltoolbar": "Raw Cell Format",
  "name": "",
  "signature": "sha256:04a14290b7b1a41ebb715140c44975a5e982107ef21b87ad4fa9d71473cb71bf"
 },
 "nbformat": 3,
 "nbformat_minor": 0,
 "worksheets": [
  {
   "cells": [
    {
     "cell_type": "heading",
     "level": 1,
     "metadata": {},
     "source": [
      "Image Processing"
     ]
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "En esta secci\u00f3n trabajaremos con clasificaci\u00f3n de im\u00e1genes. Cada instancia a clasificar es una imagen con el rostro de una persona. El objetivo es asignar el nombre de la persona con el rostro correcto. Para eso utilizaremos un dataset de *sklearn.datasets* que contiene im\u00e1genes de rostros con sus correpondientes etiquetas. Cada imagen se representa como un vector de pixeles.\n",
      "\n",
      "Utilizar la funci\u00f3n *fetch_lfw_people* para importar los datos de las personas de las que se tiene al menos 50 im\u00e1genes de su rostro. Inspeccionar su contenido (data, target y target_names), renderear el rostro de la primera instancia del dataset:"
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "import pylab as pl\n",
      "from sklearn.cross_validation import train_test_split\n",
      "from sklearn.datasets import fetch_lfw_people\n",
      "import time\n",
      "\n",
      "start_time = time.clock()\n",
      "\n",
      "lfw_people = fetch_lfw_people(min_faces_per_person=70, resize=0.4)\n",
      "\n",
      "n_samples, h, w = lfw_people.images.shape\n",
      "\n",
      "X = lfw_people.data\n",
      "n_features = X.shape[1]\n",
      "\n",
      "y = lfw_people.target\n",
      "y_nombres = lfw_people.target_names\n",
      "target_names = lfw_people.target_names\n",
      "n_classes = target_names.shape[0]\n",
      "\n",
      "pl.figure(figsize=(1.8 * 4, 2.4 * 3))\n",
      "pl.subplots_adjust(bottom=0, left=.01, right=.99, top=.90, hspace=.35)\n",
      "pl.subplot(3, 4, 1)\n",
      "pl.imshow(X[0].reshape((h, w)), cmap=pl.cm.gray)\n",
      "pl.xticks(())\n",
      "pl.yticks(())\n",
      "\n",
      "pl.show()"
     ],
     "language": "python",
     "metadata": {},
     "outputs": [],
     "prompt_number": 8
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Particionar los datos en dos conjuntos disjuntos de entrenamiento y testeo:"
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "from sklearn.cross_validation import train_test_split\n",
      "\n",
      "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)"
     ],
     "language": "python",
     "metadata": {},
     "outputs": [],
     "prompt_number": 9
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Extraer atributos de las im\u00e1genes para ser utilizados en el modelo de clasificaci\u00f3n. Para esto, investigar las clases de\n",
      "*Principal Component Analysis (PCA)* del paquete *sklearn.decomposition*:"
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "from sklearn.decomposition import RandomizedPCA, PCA, KernelPCA\n",
      "\n",
      "pca = PCA(n_components=0.9, whiten=True).fit(X_train)\n",
      "#pca = RandomizedPCA().fit(X_train)\n",
      "X_train_pca = pca.transform(X_train)\n",
      "X_test_pca = pca.transform(X_test)\n"
     ],
     "language": "python",
     "metadata": {},
     "outputs": [],
     "prompt_number": 10
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "**PREGUNTA: Explique el m\u00e9todo de extracci\u00f3n de atributos y justifique su elecci\u00f3n.**"
     ]
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "**RESPUESTA:**\n",
      "\n",
      "PCA se utiliza para descomponer un conjunto de datos multivariado en un conjunto de componentes ortogonales sucesivas. Su objetivo es reducir la dimensionalidad de un conjunto de datos, teniendo como resultado la proyecci\u00f3n seg\u00fan la cual los datos queden mejor representados en t\u00e9rminos de m\u00ednimos cuadrados. \n",
      "\n",
      "De entre las principales clases para extraer attributos analizamos las siguientes:\n",
      "* PCA                \n",
      "* RandomizedPCA      \n",
      "* KernelPCA\t\t     \n",
      "* SparsePCA\n",
      "\n",
      "** MORE TRANSLATIONS !!! **\n",
      "-----------\n",
      "\n",
      "**PCA**\n",
      "\n",
      "Reducci\u00f3n de dimensionalidad lineal utilizando descomposici\u00f3n en valores singulares (SVD) de los datos y manteniendo solo los vectores singulares m\u00e1s significativos para proyectar los datos a un espacio de dimensi\u00f3n inferior. Solo funciona para matrices densas y no es escalable para grandes datos dimensionales.\n",
      "\n",
      "Parameters:\n",
      "\n",
      "- n_components : int, None or string\n",
      "Number of components to keep.\n",
      "if n_components is not set all components are kept:\n",
      "n_components == min(n_samples, n_features)\n",
      "if n_components == \u2018mle\u2019, Minka\u2019s MLE is used to guess the dimension if 0 < n_components < 1, select the number of components such that the amount of variance that needs to be explained is greater than the percentage specified by n_components\n",
      "\n",
      "- whiten : bool, optional\n",
      "When True (False by default) the components_ vectors are divided by n_samples times singular values to ensure uncorrelated outputs with unit component-wise variances. Whitening will remove some information from the transformed signal (the relative variance scales of the components) but can sometime improve the predictive accuracy of the downstream estimators by making there data respect some hard-wired assumptions.\n",
      "\n",
      "**RandomizedPCA**\n",
      "\n",
      "Reducci\u00f3n de dimensionalidad lineal utilizando descomposici\u00f3n en valores singulares (SVD) **aproximados** de los datos y manteniendo solo los vectores singulares m\u00e1s significativos para proyectar los datos a un espacio de dimensi\u00f3n inferior. \n",
      "\n",
      "Parameters:\n",
      "\n",
      "- n_components : int, optional\n",
      "Maximum number of components to keep. When not given or None, this is set to n_features (the second dimension of the training data).\n",
      "- iterated_power : int, optional\n",
      "Number of iterations for the power method. 3 by default.\n",
      "- whiten : bool, optional\n",
      "When True (False by default) the components_ vectors are divided by the singular values to ensure uncorrelated outputs with unit component-wise variances.\n",
      "Whitening will remove some information from the transformed signal (the relative variance scales of the components) but can sometime improve the predictive accuracy of the downstream estimators by making their data respect some hard-wired assumptions.\n",
      "\n",
      "**KernelPCA**\n",
      "\n",
      "En lugar de realizar la reducci\u00f3n de dimensionalidad linealmente, se hace mediante el uso de funciones kernel no lineales.\n",
      "\n",
      "Parameters:\n",
      "\n",
      "- kernel: \u201clinear\u201d | \u201cpoly\u201d | \u201crbf\u201d | \u201csigmoid\u201d | \u201ccosine\u201d | \u201cprecomputed\u201d :\n",
      "Kernel. Default: \u201clinear\u201d\n",
      "- degree : int, default=3\n",
      "Degree for poly kernels. Ignored by other kernels.\n",
      "- gamma : float, optional\n",
      "Kernel coefficient for rbf and poly kernels. Default: 1/n_features. Ignored by other kernels.\n",
      "- coef0 : float, optional\n",
      "Independent term in poly and sigmoid kernels. Ignored by other kernels.\n",
      "- kernel_params : mapping of string to any, optional\n",
      "Parameters (keyword arguments) and values for kernel passed as callable object. Ignored by other kernels.\n",
      "- alpha: int :\n",
      "Hyperparameter of the ridge regression that learns the inverse transform (when fit_inverse_transform=True). Default: 1.0\n",
      "- fit_inverse_transform: bool :\n",
      "Learn the inverse transform for non-precomputed kernels. (i.e. learn to find the pre-image of a point) Default: False\n",
      "- eigen_solver: string [\u2018auto\u2019|\u2019dense\u2019|\u2019arpack\u2019] :\n",
      "Select eigensolver to use. If n_components is much less than the number of training samples, arpack may be more efficient than the dense eigensolver.\n",
      "- tol: float :\n",
      "convergence tolerance for arpack. Default: 0 (optimal value will be chosen by arpack)\n",
      "- max_iter : int\n",
      "maximum number of iterations for arpack Default: None (optimal value will be chosen by arpack)\n",
      "\n",
      "**SparsePCA**\n",
      "\n",
      "Busca el conjunto de componentes dispersos que pueden reconstruir de manera \u00f3ptima los datos. El nivel de dispersi\u00f3n es controlable por el coeficiente de penalizaci\u00f3n L1, dado por el par\u00e1metro alfa.\n",
      "\n",
      "- n_components : int,\n",
      "Number of sparse atoms to extract.\n",
      "- alpha : float,\n",
      "Sparsity controlling parameter. Higher values lead to sparser components.\n",
      "- ridge_alpha : float,\n",
      "Amount of ridge shrinkage to apply in order to improve conditioning when calling the transform method.\n",
      "- max_iter : int,\n",
      "Maximum number of iterations to perform.\n",
      "- tol : float,\n",
      "Tolerance for the stopping condition.\n",
      "- method : {\u2018lars\u2019, \u2018cd\u2019}\n",
      "lars: uses the least angle regression method to solve the lasso problem (linear_model.lars_path) cd: uses the coordinate descent method to compute the Lasso solution (linear_model.Lasso). Lars will be faster if the estimated components are sparse.\n",
      "- n_jobs : int,\n",
      "Number of parallel jobs to run.\n",
      "\n",
      "Hay una obvia conveniencia de utilizar RandomizedPCA frente a PCA para arreglos densos o largos conjuntos de datos.\n",
      "SparcePCA demora mucho y es rotundamente descartado por eso.\n",
      "\n",
      "Se realizaron pruebas para determinar el mejor algorimo siendo el ganador el algorimo PCA con parametros component=0.9 y whiten=True."
     ]
    },
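     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
       "As a quick sanity check, the fitted PCA can report how many components the n_components=0.9 variance threshold kept (a sketch assuming the pca and n_features variables defined above):"
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
       "# Sketch: report how many components the 0.9 variance threshold kept.\n",
       "# Assumes pca was fitted above with n_components=0.9.\n",
       "print(\"Components kept: %d of %d features\" % (pca.components_.shape[0], n_features))\n",
       "print(\"Total explained variance: %.3f\" % pca.explained_variance_ratio_.sum())"
      ],
      "language": "python",
      "metadata": {},
      "outputs": []
     },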
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Elija dos algoritmos de aprendizaje, entrene e intente obtener los mejores modelos de clasificaci\u00f3n posibles:"
     ]
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Como algorimos de aprendizaje elegimos \u00c1rboles de decision y el algoritmo SVC, a continuaci\u00f3n los parametros de cada uno:\n",
      "\n",
      "**\u00c1rbol de decisi\u00f3n:** \n",
      "\n",
      "+ **criterion: ** String, opcional (por defecto=\u201dgini\u201d). Funci\u00f3n para medir la calidad de una partici\u00f3n de ejemplos. Opciones: \"gini\" o \"entropy\".\n",
      "+ **splitter: ** String, opcional (por defecto=\u201dbest\u201d). Estrategia utilizada para separar ejemplos en cada nodo. Opciones: \"best\" para elegir la mejor partici\u00f3n, o \"random\" para elegir la mejor partici\u00f3n aleatoria. \n",
      "+ **max_features** (int, float, string o None, opcional (por defecto=None))\n",
      "Cantidad de atributos a considerar cuando se est\u00e1 buscando la mejor partici\u00f3n.\n",
      "    + Si es int: M\u00e1xima cantidad que se considera en cada partici\u00f3n.\n",
      "    + Si es float: Porcentaje de atributos que se consideran en cada partici\u00f3n.\n",
      "    + Si \"auto\" o \"sqrt\": Se considera la ra\u00edz cuadrada del total de atributos posibles.\n",
      "    + Si \"log2\": Se consideran log2(total de atributos posibles) atributos.\n",
      "    + Si None: Se considera el total de atributos disponibles.\n",
      "\n",
      "+ **max_depth: ** Int o None, opcional (por defecto=None). Profundidad m\u00e1xima del \u00e1rbol. Si no se pone nada se contin\u00faa el algoritmo hasta que los conjuntos de ejemplos no se puedan subdividir (sean puros), o que todos los conjuntos finales tengan menos elementos que el valor del par\u00e1metro min_samples_split. Se ignora si el par\u00e1metro max_samples_leaf no vale None.\n",
      "\n",
      "+ **min_samples_split: ** Int, opcional (por defecto=2). M\u00ednima cantidad de ejemplos requeridos para dividir el un nodo (vale 2 por defecto)\n",
      "+ **min_samples_leaf** : Int, opcional (por defecto=1). M\u00ednima cantidad de ejemplos requeridos para estar en un nodo hoja (vale 1 por defecto).\n",
      "\n",
      "+ **max_leaf_nodes** : int o none, opcional (por defecto=none). Hace crecer un arbol con max_leaf_nodes nodos en la forma de primero el mejor (los mejores nodos se definen como los que logran una reducci\u00f3n relativa de la impureza). Si none es especificado entonces se puede obtener un numero ilimitados de nodos hoja. Si no se especifica 'none' entonces el parametro max_depth ser\u00e1 ignorado.\n",
      "\n",
      "+ **random_state: ** int, instancia de RandomState o None, opcional (por defecto=None). Especifica la forma en que se generan los n\u00fameros aleatorios. Si se le pasa un valor entero \u00e9ste ser\u00e1 la semilla para la generaci\u00f3n de los n\u00fameros, si se le pasa una instancia de RandomState \u00e9sta ser\u00e1 la generadora de los n\u00fameros, y si es \"None\" el generador es la instancia de RandomState usada por np.random.\n",
      "\n",
      "\n",
      "** SVC: **\n",
      "\n",
      "+ **C: ** Float, opcional (por defecto=1.0): Par\u00e1metro de penalizaci\u00f3n del t\u00e9rmino de error.\n",
      "+ **kernel: ** String, opcional (por defecto=\u2019rbf\u2019): Especifica el tipo de n\u00facleo que ser\u00e1 usado por el algoritmo. Puede ser \u2018linear\u2019, \u2018poly\u2019, \u2018rbf\u2019, \u2018sigmoid\u2019, \u2018precomputed\u2019 o el llamado a una funcion que se utiliza para precomputar la matriz del kernel.\n",
      "+ **degree: ** Int, optional (por defecto =3). Grado de la funci\u00f3n polinomial n\u00facleo, solo se toma en cuenta si el n\u00facleo es \"poly\", en otro caso se ignora.\n",
      "+ **gamma: ** Float, opcional (por defecto =0.0). Coeficiente del n\u00facleo para \u2018rbf\u2019, \u2018poly\u2019 y \u2018sigmoid\u2019. Si  gamma vale 0.0 entonces se toma 1/(cantidad de atributos).\n",
      "+ **coef0 : ** Float, opcional (por defecto=0.0). T\u00e9rmino independiente de la funci\u00f3n n\u00facleo. Solo se toma en cuenta en \u2018poly\u2019 y \u2018sigmoid\u2019.\n",
      "+ **probability: ** Boolean, opcional (por defecto=False):\n",
      "Indica si se deben permitir las estimaciones de probabilidad. Si desea utilizarse debe ser activado previo al entrenamiento, y provocar\u00e1 su relentecimiento.\n",
      "+ **shrinking: ** Boolean, optional (por defecto=True). Indica si se debe habilitar el \"shrinking\" (contracci\u00f3n), una t\u00e9cnica de optimizaci\u00f3n que reduce el conjunto de entrenamiento. \n",
      "+ **tol: ** Float, opcional (por defecto=1e-3)\n",
      "Tolerancia para el criterio de detenci\u00f3n del algoritmo.\n",
      "+ **cache_size: ** Float, optional. Tama\u00f1o del cach\u00e9 del n\u00facleo (en MB).\n",
      "+ **class_weight: ** {dict, \u2018auto\u2019}, opcional. Ajusta el par\u00e1metro C de la clase i como class_weight[i]*C para SVC. Si no se da, se supone que todas las clases tienen peso uno. El modo 'auto' utiliza los valores de 'y' para ajustar autom\u00e1ticamente los pesos inversamente proporcionales a las frecuencias de clase.\n",
      "\n",
      "+ **verbose: ** Bool, por defecto: False. Activa o desactiva la salida detallada.\n",
      "+ **max_iter: ** int, opcional (por defecto =-1)\n",
      "L\u00edmite m\u00e1ximo en iteraciones para la salida. Si vale -1 no tendr\u00e1 l\u00edmite.\n",
      "+ **random_state: ** Int, RandomState, o None (por defecto)\n",
      "Igual al par\u00e1metro random_state del algoritmo de \u00e1rbol de decisi\u00f3n."
     ]
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Imprima los mejores resultados de precision, recall y accuracy para los algoritmos seleccionados:"
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "from sklearn import grid_search\n",
      "from sklearn.pipeline import Pipeline\n",
      "from operator import itemgetter\n",
      "from scipy.stats import randint as sp_randint\n",
      "\n",
      "# imprimir_performance\n",
      "def imprimir_performance( X, y, clf ):\n",
      "    from sklearn import metrics\n",
      "\n",
      "    y_predicted = clf.predict(X)\n",
      "\n",
      "    print \"Performance para: %s\" % clf\n",
      "    print\n",
      "    print metrics.classification_report(y, y_predicted)\n",
      "    print\n",
      "    print \"Accuracy\"\n",
      "    print metrics.accuracy_score(y, y_predicted)\n",
      "    print\n",
      "    \n",
      "# report\n",
      "def report(grid_scores, n_top=3):\n",
      "    top_scores = sorted(grid_scores, key=itemgetter(1), reverse=True)[:n_top]\n",
      "    for i, score in enumerate(top_scores):\n",
      "        print(\"Modelo rankeado en la posici\u00f3n: {0}\".format(i + 1))\n",
      "        print(\"Puntuaci\u00f3n: {0:.3f} (std: {1:.3f})\".format(\n",
      "              score.mean_validation_score,\n",
      "              np.std(score.cv_validation_scores)))\n",
      "        print(\"Par\u00e1metros: {0}\".format(score.parameters))\n",
      "        print(\"\")\n",
      "\n",
      "# imprimir_performance detallada\n",
      "def imprimir_performance_detallada(clf, n_top=3):\n",
      "    print \"Ranking mejores puntuaciones\"\n",
      "    print \"---------------------------- \\n\"\n",
      "    \n",
      "    report(clf.grid_scores_)\n",
      "\n",
      "    print \"Impresion de performance para el modelo mejor evaluado\"\n",
      "    print \"------------------------------------------------------ \\n\"\n",
      "    imprimir_performance(X_test_pca, y_test, clf)"
     ],
     "language": "python",
     "metadata": {},
     "outputs": [],
     "prompt_number": 11
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "from sklearn import svm, tree\n",
      "from sklearn.naive_bayes import GaussianNB\n",
      "from sklearn import grid_search\n",
      "import numpy as np\n",
      "\n",
      "# Nota: Aca agregar los grids con varios parametros para probar. Por ahora queda el fit nomas\n",
      "print \"***************\"\n",
      "print \"* Pruebas DT  *\"\n",
      "print \"***************\\n\"\n",
      "\n",
      "parameters_dt = {'criterion':['gini','entropy'], \n",
      "              'max_depth':[4,5,None],\n",
      "              'min_samples_split':[2,3], \n",
      "              'min_samples_leaf':[2,3],\n",
      "              'splitter':['best','random'],\n",
      "              'max_features':[0.6, 0.9, None]}\n",
      "\n",
      "dt = tree.DecisionTreeClassifier()\n",
      "clf_dt = grid_search.GridSearchCV(dt, parameters_dt)\n",
      "clf_dt.fit(X_train_pca, y_train)\n",
      "imprimir_performance_detallada(clf_dt)\n",
      "print \"\\n\\n\"\n",
      "\n",
      "print \"***************\"\n",
      "print \"* Pruebas SVC *\"\n",
      "print \"***************\\n\"\n",
      "\n",
      "parameters_svc = {'kernel':['rbf','sigmoid','linear','poly'],\n",
      "              'gamma':[0,0.1, 0.001],\n",
      "              'tol':[0.001,0.01,0.1],\n",
      "              'C':[4],\n",
      "              'degree': [2,3]}\n",
      "svc = svm.SVC()\n",
      "clf_svc = grid_search.GridSearchCV(svc, parameters_svc)\n",
      "clf_svc = clf_svc.fit(X_train_pca, y_train)\n",
      "imprimir_performance_detallada(clf_svc)\n",
      "print \"\\n\\n\"\n",
      "\n",
      "nb = GaussianNB()\n",
      "nb = nb.fit(X_train_pca, y_train)"
     ],
     "language": "python",
     "metadata": {},
     "outputs": [
      {
       "output_type": "stream",
       "stream": "stdout",
       "text": [
        "***************\n",
        "* Pruebas DT  *\n",
        "***************\n",
        "\n",
        "Ranking mejores puntuaciones"
       ]
      },
      {
       "output_type": "stream",
       "stream": "stdout",
       "text": [
        "\n",
        "---------------------------- \n",
        "\n",
        "Modelo rankeado en la posici\u00f3n: 1\n",
        "Puntuaci\u00f3n: 0.446 (std: 0.017)\n",
        "Par\u00e1metros: {'splitter': 'best', 'min_samples_leaf': 3, 'min_samples_split': 2, 'criterion': 'gini', 'max_features': 0.6, 'max_depth': 4}\n",
        "\n",
        "Modelo rankeado en la posici\u00f3n: 2\n",
        "Puntuaci\u00f3n: 0.445 (std: 0.019)\n",
        "Par\u00e1metros: {'splitter': 'random', 'min_samples_leaf': 2, 'min_samples_split': 3, 'criterion': 'entropy', 'max_features': 0.9, 'max_depth': 5}\n",
        "\n",
        "Modelo rankeado en la posici\u00f3n: 3\n",
        "Puntuaci\u00f3n: 0.444 (std: 0.037)\n",
        "Par\u00e1metros: {'splitter': 'random', 'min_samples_leaf': 2, 'min_samples_split': 2, 'criterion': 'entropy', 'max_features': None, 'max_depth': 5}\n",
        "\n",
        "Impresion de performance para el modelo mejor evaluado\n",
        "------------------------------------------------------ \n",
        "\n",
        "Performance para: GridSearchCV(cv=None,\n",
        "       estimator=DecisionTreeClassifier(compute_importances=None, criterion='gini',\n",
        "            max_depth=None, max_features=None, max_leaf_nodes=None,\n",
        "            min_density=None, min_samples_leaf=1, min_samples_split=2,\n",
        "            random_state=None, splitter='best'),\n",
        "       fit_params={}, iid=True, loss_func=None, n_jobs=1,\n",
        "       param_grid={'splitter': ['best', 'random'], 'min_samples_leaf': [2, 3], 'max_features': [0.6, 0.9, None], 'criterion': ['gini', 'entropy'], 'min_samples_split': [2, 3], 'max_depth': [4, 5, None]},\n",
        "       pre_dispatch='2*n_jobs', refit=True, score_func=None, scoring=None,\n",
        "       verbose=0)\n",
        "\n",
        "             precision    recall  f1-score   support\n",
        "\n",
        "          0       0.06      0.13      0.08        15\n",
        "          1       0.31      0.47      0.38        59\n",
        "          2       0.00      0.00      0.00        38\n",
        "          3       0.48      0.64      0.55       131\n",
        "          4       0.00      0.00      0.00        32\n",
        "          5       0.22      0.11      0.15        18\n",
        "          6       0.23      0.10      0.14        29\n",
        "\n",
        "avg / total       0.29      0.37      0.32       322\n",
        "\n",
        "\n",
        "Accuracy\n",
        "0.369565217391\n",
        "\n",
        "\n",
        "\n",
        "\n",
        "***************\n",
        "* Pruebas SVC *\n",
        "***************\n",
        "\n",
        "Ranking mejores puntuaciones"
       ]
      },
      {
       "output_type": "stream",
       "stream": "stdout",
       "text": [
        "\n",
        "---------------------------- \n",
        "\n",
        "Modelo rankeado en la posici\u00f3n: 1\n",
        "Puntuaci\u00f3n: 0.802 (std: 0.010)\n",
        "Par\u00e1metros: {'kernel': 'rbf', 'C': 4, 'tol': 0.1, 'gamma': 0, 'degree': 2}\n",
        "\n",
        "Modelo rankeado en la posici\u00f3n: 2\n",
        "Puntuaci\u00f3n: 0.802 (std: 0.010)\n",
        "Par\u00e1metros: {'kernel': 'rbf', 'C': 4, 'tol': 0.1, 'gamma': 0, 'degree': 3}\n",
        "\n",
        "Modelo rankeado en la posici\u00f3n: 3\n",
        "Puntuaci\u00f3n: 0.800 (std: 0.009)\n",
        "Par\u00e1metros: {'kernel': 'rbf', 'C': 4, 'tol': 0.001, 'gamma': 0, 'degree': 2}\n",
        "\n",
        "Impresion de performance para el modelo mejor evaluado\n",
        "------------------------------------------------------ \n",
        "\n",
        "Performance para: GridSearchCV(cv=None,\n",
        "       estimator=SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0, degree=3, gamma=0.0,\n",
        "  kernel='rbf', max_iter=-1, probability=False, random_state=None,\n",
        "  shrinking=True, tol=0.001, verbose=False),\n",
        "       fit_params={}, iid=True, loss_func=None, n_jobs=1,\n",
        "       param_grid={'degree': [2, 3], 'kernel': ['rbf', 'sigmoid', 'linear', 'poly'], 'C': [4], 'tol': [0.001, 0.01, 0.1], 'gamma': [0, 0.1, 0.001]},\n",
        "       pre_dispatch='2*n_jobs', refit=True, score_func=None, scoring=None,\n",
        "       verbose=0)\n",
        "\n",
        "             precision    recall  f1-score   support\n",
        "\n",
        "          0       1.00      0.47      0.64        15\n",
        "          1       0.80      0.81      0.81        59\n",
        "          2       0.94      0.82      0.87        38\n",
        "          3       0.80      0.96      0.87       131\n",
        "          4       1.00      0.66      0.79        32\n",
        "          5       0.92      0.67      0.77        18\n",
        "          6       0.80      0.83      0.81        29\n",
        "\n",
        "avg / total       0.85      0.84      0.83       322\n",
        "\n",
        "\n",
        "Accuracy\n",
        "0.835403726708\n",
        "\n",
        "\n",
        "\n",
        "\n"
       ]
      }
     ],
     "prompt_number": 12
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "imprimir_performance(X_test_pca, y_test, nb)"
     ],
     "language": "python",
     "metadata": {},
     "outputs": [
      {
       "output_type": "stream",
       "stream": "stdout",
       "text": [
        "Performance para: GaussianNB()\n",
        "\n",
        "             precision    recall  f1-score   support\n",
        "\n",
        "          0       0.69      0.60      0.64        15\n",
        "          1       0.78      0.71      0.74        59\n",
        "          2       0.96      0.61      0.74        38\n",
        "          3       0.74      0.92      0.82       131\n",
        "          4       0.76      0.59      0.67        32\n",
        "          5       0.81      0.72      0.76        18\n",
        "          6       0.65      0.59      0.62        29\n",
        "\n",
        "avg / total       0.77      0.76      0.75       322\n",
        "\n",
        "\n",
        "Accuracy\n",
        "0.757763975155\n",
        "\n"
       ]
      }
     ],
     "prompt_number": 13
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "**PREGUNTA: Analice los resultados obtenidos.**"
     ]
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "**RESPUESTA:**\n",
      "\n",
      "SparcePCA demora mucho."
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [
      "print(\"--- %s Tiempo ejecucion, segundos ---\" % ((time.clock() - start_time)))"
     ],
     "language": "python",
     "metadata": {},
     "outputs": [
      {
       "output_type": "stream",
       "stream": "stdout",
       "text": [
        "--- 37.5093105517 Tiempo ejecucion, segundos ---\n"
       ]
      }
     ],
     "prompt_number": 14
    },
    {
     "cell_type": "heading",
     "level": 1,
     "metadata": {},
     "source": [
      "Text Processing"
     ]
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "En esta secci\u00f3n trabajaremos en un problema que involucra el uso de textos como atributos en un clasificador.\n",
      "Se posee un dataset que contiene datos de pel\u00edculas de cine, donde cada instancia es una pel\u00edcula y sus atributos son:\n",
      "\n",
      "- T\u00edtulo de la pel\u00edcula\n",
      "- G\u00e9nero\n",
      "- Director\n",
      "- Elenco\n",
      "- Argumento\n",
      "\n",
      "El objetivo de esta secci\u00f3n es poder predecir el g\u00e9nero de la pel\u00edcula en funci\u00f3n del resto de los atributos (t\u00edtulo, director, elenco y argumento)."
     ]
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Importe el dataset desde el archivo *movies.csv* provisto junto con este notebook (recuerde las herramientas y t\u00e9cnicas utilizadas en el laboratorio parte 1):"
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [],
     "language": "python",
     "metadata": {},
     "outputs": [],
     "prompt_number": 14
    },
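     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
       "A minimal sketch of one possible import, assuming *movies.csv* sits next to this notebook; since the exact column names are not specified here, the code only loads and inspects the file:"
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
       "# Sketch: load the dataset and inspect its columns and size.\n",
       "import pandas as pd\n",
       "\n",
       "movies = pd.read_csv('movies.csv')\n",
       "print(movies.columns)\n",
       "print(movies.shape)"
      ],
      "language": "python",
      "metadata": {},
      "outputs": []
     },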
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Imprima las primeras 10 instancias del dataset:"
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [],
     "language": "python",
     "metadata": {},
     "outputs": [],
     "prompt_number": 14
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Entrene un clasificador que aprenda a predecir el g\u00e9nero de la pel\u00edcula utilizando \u00fanicamente el atributo \"argumento\". Para esto es necesario primero transformar el atributo de texto a atributos num\u00e9ricos. El paquete *sklearn.feature_extraction.text* contiene clases que permiten transformar atributos de texto en num\u00e9ricos, realice esa transformaci\u00f3n:"
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [],
     "language": "python",
     "metadata": {},
     "outputs": [],
     "prompt_number": 14
    },
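     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
       "One common transformation is TF-IDF weighting with *TfidfVectorizer*; below is a sketch on placeholder strings (the real plot column would replace them):"
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
       "# Sketch: turn raw text into a sparse TF-IDF feature matrix.\n",
       "from sklearn.feature_extraction.text import TfidfVectorizer\n",
       "\n",
       "plots = ['a spy saves the world', 'two friends on a road trip']  # placeholder data\n",
       "vectorizer = TfidfVectorizer(stop_words='english')\n",
       "X_text = vectorizer.fit_transform(plots)\n",
       "print(X_text.shape)"
      ],
      "language": "python",
      "metadata": {},
      "outputs": []
     },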
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Elija dos algoritmos de aprendizaje autom\u00e1tico, entrene ambos modelos e intente obtener los mejores resultados posibles. Tener en cuenta que una pel\u00edcula puede tener m\u00e1s de un genero, por lo cual el clasificador debe de poder devolver m\u00e1s de una etiqueta. Puede ver algunas referencias en la documentacion de [scikit-learn](http://scikit-learn.org/stable/modules/multiclass.html):"
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [],
     "language": "python",
     "metadata": {},
     "outputs": [],
     "prompt_number": 14
    },
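     {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
       "A sketch of the one-vs-rest approach to multi-label output, on placeholder labels; MultiLabelBinarizer turns the genre lists into a binary indicator matrix with one column per genre:"
      ]
     },
     {
      "cell_type": "code",
      "collapsed": false,
      "input": [
       "# Sketch: one binary classifier per genre via one-vs-rest.\n",
       "from sklearn.preprocessing import MultiLabelBinarizer\n",
       "from sklearn.multiclass import OneVsRestClassifier\n",
       "from sklearn.svm import LinearSVC\n",
       "\n",
       "genres = [['drama'], ['comedy', 'romance']]  # placeholder labels\n",
       "Y = MultiLabelBinarizer().fit_transform(genres)  # shape: (n_movies, n_genres)\n",
       "clf = OneVsRestClassifier(LinearSVC())\n",
       "# clf.fit(text_features, Y) would then train one classifier per genre column"
      ],
      "language": "python",
      "metadata": {},
      "outputs": []
     },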
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Imprima los mejores resultados de precision, recall y accuracy para los algoritmos seleccionados:"
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [],
     "language": "python",
     "metadata": {},
     "outputs": [],
     "prompt_number": 14
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Entrene nuevamente ambos algoritmos, pero ahora utilice todos los atributos disponibles (t\u00edtulo, director, elenco y argumento). Intente obtener los mejores resultados posibles. Tenga en cuenta que algunos de los atributos puede tener los valores incompletos para alguanas instancias:"
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [],
     "language": "python",
     "metadata": {},
     "outputs": [],
     "prompt_number": 14
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "**PREGUNTA: Justifique las transformaciones que realiz\u00f3 a cada uno de los atributos.**"
     ]
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "**RESPUESTA:**"
     ]
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "Imprima los mejores resultados de precision, recall y accuracy para los algoritmos seleccionados:"
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [],
     "language": "python",
     "metadata": {},
     "outputs": [],
     "prompt_number": 14
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "**PREGUNTA: Con cu\u00e1l de los dos conjuntos de atributos de partida (solo argumento o t\u00edtulo, director elenco y argumento) obtuvo los mejores resultados? Analice los resultados obtenidos.**"
     ]
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "**RESPUESTA:**"
     ]
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "**PREGUNTA: Escriba las conclusiones generales que haya obtenido de la tarea.**"
     ]
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "**RESPUESTA:**"
     ]
    },
    {
     "cell_type": "markdown",
     "metadata": {},
     "source": [
      "\n",
      "\n",
      "\n",
      "\n",
      "** Fuentes **\n",
      "+ http://scikit-learn.org/0.12/auto_examples/applications/face_recognition.html\n",
      "+ http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html\n",
      "\n",
      "+ http://scipy-lectures.github.io/advanced/scikit-learn/\n",
      "+ http://sebastianraschka.com/Articles/2014_pca_step_by_step.html\n",
      "\n",
      "http://www.math.unipd.it/~aiolli/corsi/1314/aa/user_guide-0.12-git.pdf"
     ]
    },
    {
     "cell_type": "code",
     "collapsed": false,
     "input": [],
     "language": "python",
     "metadata": {},
     "outputs": [],
     "prompt_number": 14
    }
   ],
   "metadata": {}
  }
 ]
}