{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### 1. What are the main motivations for reducing a dataset's dimensionality? What are the main drawbacks?\n",
     "Motivations:\n",
     "\n",
     "1. To speed up the training of a learning algorithm.\n",
     "\n",
     "2. To reduce the features to 2 or 3 dimensions so that a high-dimensional training set can be plotted, letting us visually spot important patterns such as clusters.\n",
     "\n",
     "3. To save space (memory and disk).\n",
     "\n",
     "\n",
     "Drawbacks:\n",
     "\n",
     "1. Some information is lost, which may degrade the performance of downstream algorithms.\n",
     "\n",
     "2. It can be computationally expensive.\n",
     "\n",
     "3. It adds complexity to your Machine Learning pipelines.\n",
     "\n",
     "4. The transformed features are often hard to interpret."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### 2. What is the curse of dimensionality?\n",
     "\n",
     "The curse of dimensionality refers to the fact that many problems which do not arise in low-dimensional space become serious in high-dimensional space. In particular, randomly sampled high-dimensional points tend to be very far from one another, so training instances are sparse, distances become less meaningful, overfitting becomes more likely, and far more training data is needed to generalize well."
   ]
  },
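  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick numerical sketch (synthetic random points, not a formal argument): the average distance between two random points in a unit hypercube grows with the dimensionality, which is one symptom of the curse.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def avg_distance(d, n=1000, seed=42):\n",
    "    # Average Euclidean distance between n random point pairs\n",
    "    # in the d-dimensional unit hypercube.\n",
    "    rng = np.random.RandomState(seed)\n",
    "    a = rng.rand(n, d)\n",
    "    b = rng.rand(n, d)\n",
    "    return np.linalg.norm(a - b, axis=1).mean()\n",
    "\n",
    "print(avg_distance(2))     # roughly 0.5\n",
    "print(avg_distance(1000))  # roughly 13: points are far apart\n",
    "```"
   ]
  },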
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### 3. Once a dataset's dimensionality has been reduced, is it possible to reverse the operation? If so, how? If not, why?\n",
     "\n",
     "It cannot be reversed perfectly, because some information is lost during dimensionality reduction.\n",
     "\n",
     "However, some algorithms (such as PCA, via its inverse transformation) can reconstruct a dataset that is fairly similar to the original."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### 4. Can PCA be used to reduce the dimensionality of a highly nonlinear dataset?\n",
     "\n",
     "Yes, it can significantly reduce the dimensionality of most datasets, even highly nonlinear ones, by getting rid of useless dimensions. However, if there are no useless dimensions (as in a Swiss-roll dataset), PCA will lose too much information: you want to unroll the Swiss roll, not squash it."
   ]
  },
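  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the Swiss-roll case, using scikit-learn's `make_swiss_roll` (the `gamma` value is an illustrative assumption): linear PCA can only project (squash) the roll, while kernel PCA with an RBF kernel can perform a nonlinear reduction.\n",
    "\n",
    "```python\n",
    "from sklearn.datasets import make_swiss_roll\n",
    "from sklearn.decomposition import PCA, KernelPCA\n",
    "\n",
    "X, _ = make_swiss_roll(n_samples=500, random_state=0)\n",
    "\n",
    "# Linear PCA: projects the 3D roll onto a 2D plane (squashes it).\n",
    "lin_pca = PCA(n_components=2)\n",
    "X_lin = lin_pca.fit_transform(X)\n",
    "\n",
    "# Kernel PCA with an RBF kernel: a nonlinear 2D reduction.\n",
    "rbf_pca = KernelPCA(n_components=2, kernel='rbf', gamma=0.04)\n",
    "X_rbf = rbf_pca.fit_transform(X)\n",
    "print(X_lin.shape, X_rbf.shape)\n",
    "```"
   ]
  },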
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### 5. Suppose you run PCA on a 1,000-dimensional dataset, setting the explained variance ratio to 95%. How many dimensions will the resulting dataset have?\n",
     "\n",
     "It depends on the dataset. In the extreme, if the data points lie almost perfectly along a line, a single dimension may already preserve 95% of the variance; if they are scattered completely at random across all 1,000 dimensions, roughly 950 dimensions will be needed. So the answer can be anywhere from 1 to about 950."
   ]
  },
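  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sketch (synthetic data standing in for a real dataset): passing a float between 0 and 1 as `n_components` to scikit-learn's `PCA` keeps just enough components to preserve that fraction of the variance.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.decomposition import PCA\n",
    "\n",
    "rng = np.random.RandomState(42)\n",
    "# Synthetic 1,000-dimensional data whose variance is concentrated\n",
    "# in the first dimensions (an assumption for illustration).\n",
    "X = rng.randn(500, 1000) * np.linspace(10, 0.01, 1000)\n",
    "\n",
    "pca = PCA(n_components=0.95)  # keep 95% of the variance\n",
    "X_reduced = pca.fit_transform(X)\n",
    "print(X_reduced.shape[1])  # number of dimensions actually kept\n",
    "```"
   ]
  },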
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### 6. In what cases would you use regular PCA, incremental PCA, randomized PCA, or kernel PCA?\n",
     "\n",
     "Regular PCA is the default, but it only works if the dataset fits in memory.\n",
     "\n",
     "Incremental PCA:\n",
     "\n",
     "1. Useful when the dataset is too large to fit in memory, though it is slower than regular PCA on data that does fit.\n",
     "\n",
     "2. Also well suited to online tasks: incremental PCA can be applied on the fly each time a new instance arrives.\n",
     "\n",
     "Randomized PCA: quickly finds an approximation of the first d principal components. Its computational complexity depends on d rather than on the full dimensionality n, so it is much faster than regular PCA when d is much smaller than n.\n",
     "\n",
     "Kernel PCA: useful for nonlinear datasets."
   ]
  },
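  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A small sketch of the two memory-related variants (synthetic data; in practice the batches would come from disk or a stream): randomized PCA via the `svd_solver` parameter, and incremental PCA fed mini-batches with `partial_fit`.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.decomposition import PCA, IncrementalPCA\n",
    "\n",
    "rng = np.random.RandomState(0)\n",
    "X = rng.randn(1000, 50)\n",
    "\n",
    "# Randomized PCA: approximate the first d components quickly.\n",
    "rnd_pca = PCA(n_components=10, svd_solver='randomized', random_state=0)\n",
    "X_rnd = rnd_pca.fit_transform(X)\n",
    "\n",
    "# Incremental PCA: feed the data in mini-batches, as you would\n",
    "# when the full dataset does not fit in memory.\n",
    "inc_pca = IncrementalPCA(n_components=10)\n",
    "for batch in np.array_split(X, 10):\n",
    "    inc_pca.partial_fit(batch)\n",
    "X_inc = inc_pca.transform(X)\n",
    "print(X_rnd.shape, X_inc.shape)\n",
    "```"
   ]
  },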
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### 7. How can you evaluate the performance of a dimensionality reduction algorithm on your dataset?\n",
     "\n",
     "Intuitively, a dimensionality reduction algorithm performs well if it eliminates a lot of dimensions from the dataset without losing too much information. One way to measure this is to apply the reverse transformation and measure the reconstruction error. However, not all dimensionality reduction algorithms provide a reverse transformation.\n",
     "\n",
     "Alternatively, if you are using dimensionality reduction as a preprocessing step before another Machine Learning algorithm (e.g., a Random Forest classifier), you can simply measure the performance of that second algorithm; if dimensionality reduction did not lose too much information, the algorithm should perform just as well as when using the original dataset."
   ]
  },
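  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the reconstruction-error approach, assuming PCA (which does provide `inverse_transform`) and synthetic data:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.decomposition import PCA\n",
    "\n",
    "rng = np.random.RandomState(0)\n",
    "X = rng.randn(200, 20)\n",
    "\n",
    "pca = PCA(n_components=5).fit(X)\n",
    "X_reduced = pca.transform(X)\n",
    "X_recovered = pca.inverse_transform(X_reduced)\n",
    "\n",
    "# Mean squared distance between original and reconstructed points:\n",
    "# the lower it is, the less information the reduction lost.\n",
    "reconstruction_error = np.mean(np.sum((X - X_recovered) ** 2, axis=1))\n",
    "print(reconstruction_error)\n",
    "```"
   ]
  },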
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### 8. Does it make any sense to chain two different dimensionality reduction algorithms?\n",
     "\n",
     "Absolutely.\n",
     "\n",
     "For example, you can first use PCA to quickly get rid of a large number of useless dimensions, then apply LLE for a second, slower reduction. Chaining PCA and LLE will likely yield roughly the same performance as using LLE alone, but in a fraction of the time."
   ]
  },
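  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The chaining idea can be sketched with a scikit-learn `Pipeline` (synthetic data; the component counts are illustrative assumptions):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.pipeline import Pipeline\n",
    "from sklearn.decomposition import PCA\n",
    "from sklearn.manifold import LocallyLinearEmbedding\n",
    "\n",
    "rng = np.random.RandomState(0)\n",
    "X = rng.randn(300, 100)\n",
    "\n",
    "# PCA cheaply strips most of the useless dimensions first,\n",
    "# then LLE performs the slower nonlinear reduction.\n",
    "pca_lle = Pipeline([\n",
    "    ('pca', PCA(n_components=20, random_state=0)),\n",
    "    ('lle', LocallyLinearEmbedding(n_components=2, random_state=0)),\n",
    "])\n",
    "X_reduced = pca_lle.fit_transform(X)\n",
    "print(X_reduced.shape)\n",
    "```"
   ]
  },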
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
