{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Chapter 3: Dimensionality Reduction Techniques\n",
    "\n",
    "This notebook provides practical \"recipes\" for using dimensionality reduction techniques in scikit-learn. Each recipe includes explanations, code examples, visualizations, best practices, and common pitfalls."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Principal Component Analysis (PCA)\n",
    "\n",
    "PCA is an *unsupervised* dimensionality reduction technique that transforms high-dimensional data into a new coordinate system where the axes (principal components) are ordered by the amount of variance they explain.\n",
    "\n",
    "### Key Concepts:\n",
    "- PCA finds directions of maximum variance in the data\n",
    "- Components are orthogonal to each other\n",
    "- First component explains the most variance, followed by second, etc.\n",
    "- Requires standardized data for optimal results\n",
    "\n",
    "### Getting ready\n",
    "To begin, we will load our toy dataset from scikit-learn. Version 1.5 of scikit-learn contains 6 datasets that are commonly used to illustrate various ML steps and features in the library. In this case, we will be using the Wine dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load libraries\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "from sklearn.datasets import load_wine\n",
    "import warnings\n",
    "\n",
    "# Set random seed for reproducibility and suppress warnings\n",
    "np.random.seed(2024)\n",
    "warnings.simplefilter(action='ignore', category=FutureWarning)\n",
    "\n",
    "#Load dataset\n",
    "wine = load_wine()\n",
    "df_wine = pd.DataFrame(data=wine.data, columns=wine.feature_names)\n",
    "target_wine = wine.target\n",
    "display(df_wine.head(10))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### How to do it..."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will load some additional libraries from scikit-learn as well as Matplotlib which is a commonly used Python library for data visualization. You’ll also notice that we are using the `Pipeline()` class to string together the data scaling preprocessing step with PCA. This will be a regular convention in this book so it’s best to get comfortable with it!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load libraries\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.decomposition import PCA\n",
    "from sklearn.pipeline import Pipeline\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "# Create a pipeline for PCA\n",
    "pca_pipeline = Pipeline([\n",
    "    ('scaler', StandardScaler()),  # Always scale before PCA\n",
    "    ('pca', PCA(n_components=2))   # Reduce to 2 dimensions\n",
    "])\n",
    "\n",
    "# Fit and transform the data\n",
    "X_pca = pca_pipeline.fit_transform(df_wine)\n",
    "\n",
    "# Visualize the transformed data\n",
    "plt.figure(figsize=(10, 8))\n",
    "shapes = ['o', '^', 'D']\n",
    "colors = ['r', 'g', 'b']\n",
    "\n",
    "# Plot the scatter points\n",
    "for i, (shape, color) in enumerate(zip(shapes, colors)):\n",
    "    plt.scatter(X_pca[target_wine == i, 0], X_pca[target_wine == i, 1], \n",
    "                c=color, marker=shape, label=wine.target_names[i])\n",
    "\n",
    "# Get the PCA components and plot as vectors\n",
    "pca = pca_pipeline.named_steps['pca']\n",
    "origin = np.zeros(2)  # Origin point for vectors\n",
    "arrow_colors = ['black', 'orange']\n",
    "\n",
    "# Scale the components by their explained variance ratio for better visualization\n",
    "scaling = 3\n",
    "for i, (component, ratio) in enumerate(zip(pca.components_, pca.explained_variance_ratio_)):\n",
    "    plt.arrow(origin[0], origin[1],\n",
    "              component[0] * scaling, component[1] * scaling,\n",
    "              color=arrow_colors[i],\n",
    "              width=0.02, head_width=0.2, head_length=0.2,\n",
    "              label=f'PC{i+1} ({ratio:.1%} variance)')\n",
    "\n",
    "plt.xlabel('First Principal Component')\n",
    "plt.ylabel('Second Principal Component')\n",
    "plt.title('Wine Dataset - First Two Principal Components')\n",
    "plt.legend(title=\"Classes\")\n",
    "plt.grid(True, alpha=0.3)\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## How it works...\n",
    "\n",
    "PCA works by identifying the directions (principal components) in which the data varies the most. These components are linear combinations of the original features and are orthogonal (i.e., at right angles) to each other. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Get the explained variance ratio\n",
    "pca = pca_pipeline.named_steps['pca']\n",
    "explained_variance_ratio = pca.explained_variance_ratio_\n",
    "\n",
    "# Calculate cumulative variance\n",
    "cumulative_variance = np.sum(explained_variance_ratio)\n",
    "\n",
    "# Plot barchart\n",
    "fig, ax = plt.subplots()\n",
    "\n",
    "x = np.arange(1, len(explained_variance_ratio) + 1)\n",
    "y = explained_variance_ratio\n",
    "bars = ax.bar(x, y)\n",
    "\n",
    "# Add percentage labels on bars\n",
    "for bar in bars:\n",
    "    height = bar.get_height()\n",
    "    ax.text(bar.get_x() + bar.get_width()/2., height,\n",
    "            f'{height:.2%}',\n",
    "            ha='center', va='bottom')\n",
    "\n",
    "# Add cumulative variance text in upper right\n",
    "ax.text(0.95, 0.95, f'Total Variance\\nExplained: {cumulative_variance:.2%}',\n",
    "        transform=ax.transAxes,\n",
    "        ha='right', va='top',\n",
    "        bbox=dict(facecolor='white', alpha=0.8, edgecolor='none'))\n",
    "\n",
    "ax.set_xlabel('Principal Component')\n",
    "ax.set_ylabel('Explained Variance Ratio')\n",
    "ax.set_title('Explained Variance Ratio by Principal Component')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1.\tTop Row: Original Standardized Features. The first two scatter plots display pairs of original standardized features before PCA: \"Alcohol vs. Malic Acid\" on the left and \"Flavanoids vs. Proanthocyanins\" on the right. Each data point is color-coded by wine class (class_0, class_1, class_2), with distinct markers for each class. The distribution shows overlap and separation among classes in the raw feature space.\n",
    "2.\tBottom Row: PCA-Transformed Features. The bottom row shows the same features transformed using PCA, where data is reoriented along the first two principal components (PC1 and PC2). The arrows indicate the directions of PC1 and PC2, which capture the maximum variance in the data. PCA helps to better separate the wine classes by projecting the features into a lower-dimensional space with improved class distinction.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Choose two features: 'alcohol' and 'malic_acid'\n",
    "features_1 = ['alcohol', 'malic_acid']\n",
    "df_wine_subset_1 = df_wine[features_1]\n",
    "df_wine_subset_1 = pd.DataFrame(StandardScaler().fit_transform(df_wine_subset_1), columns=features_1)\n",
    "\n",
    "# Perform PCA on the two features\n",
    "pca_2d_1 = PCA(n_components=2)\n",
    "X_pca_2d_1 = pca_2d_1.fit_transform(df_wine_subset_1)\n",
    "\n",
    "# Choose two different features: 'flavanoids' and 'proanthocyanins'\n",
    "features_2 = ['flavanoids', 'proanthocyanins']\n",
    "df_wine_subset_2 = df_wine[features_2]\n",
    "df_wine_subset_2 = pd.DataFrame(StandardScaler().fit_transform(df_wine_subset_2), columns=features_2)\n",
    "\n",
    "# Perform PCA on the two different features\n",
    "pca_2d_2 = PCA(n_components=2)\n",
    "X_pca_2d_2 = pca_2d_2.fit_transform(df_wine_subset_2)\n",
    "\n",
    "# Create a figure with four subplots in a 2x2 grid\n",
    "fig, axes = plt.subplots(2, 2, figsize=(14, 10))\n",
    "\n",
    "# Original data plot 1 (top left)\n",
    "for i, (shape, color) in enumerate(zip(shapes, colors)):\n",
    "    mask = target_wine == i\n",
    "    axes[0,0].scatter(df_wine_subset_1['alcohol'][mask],\n",
    "                     df_wine_subset_1['malic_acid'][mask],\n",
    "                     marker=shape, c=color, edgecolor='k', s=100,\n",
    "                     label=wine.target_names[i])\n",
    "axes[0,0].set_xlabel('Alcohol')\n",
    "axes[0,0].set_ylabel('Malic Acid')\n",
    "axes[0,0].set_title('Original Data (Scaled) - Alcohol vs Malic Acid')\n",
    "axes[0,0].legend()\n",
    "\n",
    "# Original data plot 2 (top right)\n",
    "for i, (shape, color) in enumerate(zip(shapes, colors)):\n",
    "    mask = target_wine == i\n",
    "    axes[0,1].scatter(df_wine_subset_2['flavanoids'][mask],\n",
    "                     df_wine_subset_2['proanthocyanins'][mask],\n",
    "                     marker=shape, c=color, edgecolor='k', s=100,\n",
    "                     label=wine.target_names[i])\n",
    "axes[0,1].set_xlabel('Flavanoids')\n",
    "axes[0,1].set_ylabel('Proanthocyanins')\n",
    "axes[0,1].set_title('Original Data (Scaled) - Flavanoids vs Proanthocyanins')\n",
    "axes[0,1].legend()\n",
    "\n",
    "# PCA plot 1 (bottom left)\n",
    "for i, (shape, color) in enumerate(zip(shapes, colors)):\n",
    "    mask = target_wine == i\n",
    "    axes[1,0].scatter(X_pca_2d_1[mask, 0], \n",
    "                     X_pca_2d_1[mask, 1],\n",
    "                     marker=shape, c=color, edgecolor='k', s=100,\n",
    "                     label=wine.target_names[i])\n",
    "axes[1,0].set_xlabel('First Principal Component')\n",
    "axes[1,0].set_ylabel('Second Principal Component')\n",
    "axes[1,0].set_title('PCA transformed Alcohol and Malic Acid')\n",
    "\n",
    "# Plot the principal components as vectors for the first PCA plot\n",
    "origin_1 = np.zeros(2)\n",
    "components_1 = np.eye(2)\n",
    "arrow_colors_1 = ['black', 'orange']\n",
    "scaling = 3\n",
    "for i, component in enumerate(components_1):\n",
    "    axes[1,0].arrow(origin_1[0], origin_1[1], component[0] * scaling, component[1] * scaling,\n",
    "                    color=arrow_colors_1[i], width=0.02, head_width=0.1, head_length=0.1)\n",
    "    axes[1,0].plot([], [], color=arrow_colors_1[i], label=f'PC{i+1}')\n",
    "\n",
    "axes[1,0].legend()\n",
    "\n",
    "# PCA plot 2 (bottom right)\n",
    "for i, (shape, color) in enumerate(zip(shapes, colors)):\n",
    "    mask = target_wine == i\n",
    "    axes[1,1].scatter(X_pca_2d_2[mask, 0],\n",
    "                     X_pca_2d_2[mask, 1],\n",
    "                     marker=shape, c=color, edgecolor='k', s=100,\n",
    "                     label=wine.target_names[i])\n",
    "axes[1,1].set_xlabel('First Principal Component')\n",
    "axes[1,1].set_ylabel('Second Principal Component')\n",
    "axes[1,1].set_title('PCA transformed Flavanoids and Proanthocyanins')\n",
    "\n",
    "# Plot the principal components as vectors for the second PCA plot\n",
    "origin_2 = np.zeros(2)\n",
    "components_2 = np.eye(2)\n",
    "for i, component in enumerate(components_2):\n",
    "    axes[1,1].arrow(origin_2[0], origin_2[1], component[0] * scaling, component[1] * scaling,\n",
    "                    color=arrow_colors_1[i], width=0.02, head_width=0.1, head_length=0.1)\n",
    "    axes[1,1].plot([], [], color=arrow_colors_1[i], label=f'PC{i+1}')\n",
    "\n",
    "axes[1,1].legend()\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Best Practices for PCA:\n",
    "1. Always scale your data before applying PCA\n",
    "2. Check explained variance ratio to determine number of components\n",
    "3. Use PCA for:\n",
    "   - Dimensionality reduction\n",
    "   - Feature extraction\n",
    "   - Data visualization\n",
    "   \n",
    "### Common Pitfalls:\n",
    "- Not scaling data before PCA\n",
    "- Using PCA when interpretability is important\n",
    "- Keeping too few or too many components"
   ]
  },
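  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick illustration of the second best practice, `PCA` accepts a fractional `n_components` (e.g., `0.95`), in which case it keeps just enough components to explain that share of the variance. The cell below is a minimal sketch using the Wine data loaded earlier; the 95% threshold is an arbitrary choice for illustration."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Let PCA choose the number of components needed to explain >= 95% of the variance\n",
    "pca_95_pipeline = Pipeline([\n",
    "    ('scaler', StandardScaler()),\n",
    "    ('pca', PCA(n_components=0.95))  # a float in (0, 1) acts as a variance threshold\n",
    "])\n",
    "pca_95_pipeline.fit(df_wine)\n",
    "\n",
    "pca_95 = pca_95_pipeline.named_steps['pca']\n",
    "print(f'Components kept for 95% variance: {pca_95.n_components_} of {df_wine.shape[1]}')\n",
    "print(f'Variance explained: {pca_95.explained_variance_ratio_.sum():.2%}')"
   ]
  },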
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Linear Discriminant Analysis (LDA)\n",
    "\n",
    "LDA is a *supervised* dimensionality reduction technique that finds linear combinations of features that best separate classes.\n",
    "\n",
    "### Key Concepts:\n",
    "- Maximizes class separability\n",
    "- Can be used for both dimensionality reduction and classification\n",
    "- Takes class labels into account (supervised)\n",
    "\n",
    "### Getting ready\n",
    "We will use the same Wine dataset used previously, so we do not have to load it again."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### How to do it...\n",
    "\n",
    "As we saw with PCA, LDA only requires loading a single scikit-learn class to perform it on your dataset. We will also be using the `Pipeline()` class to string together our scaling prior to applying LDA."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load libraries\n",
    "from sklearn.discriminant_analysis import LinearDiscriminantAnalysis\n",
    "\n",
    "# Split wine dataset by features and target\n",
    "X_wine, y_wine = wine.data, wine.target\n",
    "\n",
    "# Create LDA pipeline for wine dataset\n",
    "lda_pipeline_wine = Pipeline([\n",
    "    ('scaler', StandardScaler()),\n",
    "    ('lda', LinearDiscriminantAnalysis(n_components=2))  # min(n_features, n_classes - 1) for wine dataset is 2\n",
    "])\n",
    "\n",
    "# Fit and transform the wine data\n",
    "X_lda_wine = lda_pipeline_wine.fit_transform(X_wine, y_wine)\n",
    "\n",
    "# Visualize LDA transformation for wine dataset\n",
    "plt.figure(figsize=(10, 8))\n",
    "\n",
    "# Define markers and colors for each class\n",
    "shapes = ['o', '^', 'D']\n",
    "colors = ['r', 'g', 'b']\n",
    "\n",
    "# Plot each class with different marker and color\n",
    "for i, (shape, color) in enumerate(zip(shapes, colors)):\n",
    "    mask = y_wine == i\n",
    "    plt.scatter(X_lda_wine[mask, 0], X_lda_wine[mask, 1],\n",
    "               c=color, marker=shape, edgecolor='black',\n",
    "               label=wine.target_names[i])\n",
    "\n",
    "plt.xlabel('First LDA Component')\n",
    "plt.ylabel('Second LDA Component')\n",
    "plt.title('Wine Dataset - LDA Components')\n",
    "plt.legend()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## How it works...\n",
    "\n",
    "LDA is a supervised technique that seeks to find a linear combination of features that best separates two or more classes. While both PCA and LDA are used for dimensionality reduction, they have distinct objectives and methodologies. When visualized, both PCA and LDA can be similar in appearance depending on the dataset it’s applied to."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create side-by-side plots\n",
    "plt.figure(figsize=(14, 6))\n",
    "\n",
    "plt.subplot(121)\n",
    "shapes = ['o', '^', 'D'] \n",
    "colors = ['r', 'g', 'b']\n",
    "\n",
    "for i, (shape, color) in enumerate(zip(shapes, colors)):\n",
    "    mask = y_wine == i\n",
    "    plt.scatter(X_pca[mask, 0], X_pca[mask, 1],\n",
    "               c=color, marker=shape, edgecolor='black',\n",
    "               label=wine.target_names[i])\n",
    "\n",
    "plt.xlabel('First Principal Component')\n",
    "plt.ylabel('Second Principal Component')\n",
    "plt.title('Wine Dataset - PCA Components')\n",
    "plt.legend()\n",
    "\n",
    "plt.subplot(122)\n",
    "for i, (shape, color) in enumerate(zip(shapes, colors)):\n",
    "    mask = y_wine == i\n",
    "    plt.scatter(X_lda_wine[mask, 0], X_lda_wine[mask, 1],\n",
    "               c=color, marker=shape, edgecolor='black',\n",
    "               label=wine.target_names[i])\n",
    "\n",
    "plt.xlabel('First LDA Component')\n",
    "plt.ylabel('Second LDA Component')\n",
    "plt.title('Wine Dataset - LDA Components')\n",
    "plt.legend()\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Best Practices for LDA:\n",
    "1. Scale features before applying LDA\n",
    "2. Use when class separation is important\n",
    "3. Check assumptions (normal distribution, homoscedasticity)\n",
    "\n",
    "### Common Pitfalls:\n",
    "- Using LDA with highly imbalanced classes\n",
    "- Applying to non-normally distributed data\n",
    "- Using when classes have very different covariance structures"
   ]
  },
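  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The class-imbalance pitfall and the equal-covariance assumption can both be probed informally before trusting LDA. The cell below is a rough sketch using the Wine data loaded earlier: it prints the class counts and, as an informal stand-in for a formal homoscedasticity test, compares the average per-feature variance within each class."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Informal assumption checks before applying LDA\n",
    "# 1) Class balance: LDA estimates class priors from these counts by default\n",
    "class_counts = np.bincount(y_wine)\n",
    "print('Class counts:', dict(zip(wine.target_names, class_counts)))\n",
    "\n",
    "# 2) Rough homoscedasticity check: average per-feature variance within each class\n",
    "X_wine_scaled = StandardScaler().fit_transform(X_wine)\n",
    "for i, name in enumerate(wine.target_names):\n",
    "    class_var = X_wine_scaled[y_wine == i].var(axis=0).mean()\n",
    "    print(f'{name}: mean within-class variance = {class_var:.2f}')"
   ]
  },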
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## t-SNE for Data Visualization\n",
    "\n",
    "t-SNE is a non-linear dimensionality reduction technique particularly well-suited for visualization of high-dimensional data.\n",
    "\n",
    "### Key Concepts:\n",
    "- Preserves local structure of the data\n",
    "- Non-linear transformation\n",
    "- Particularly good for visualization\n",
    "\n",
    "### Getting ready\n",
    "For our t-SNE demonstration, we'll be using a different dataset: another \"famous\" machine learning dataset called **MNIST** which consists of images of handwritten digits 0-9. From UCI Machine Learning Repository: *\"We used preprocessing programs made available by NIST to extract normalized bitmaps of handwritten digits from a preprinted form. From a total of 43 people, 30 contributed to the training set and different 13 to the test set. 32x32 bitmaps are divided into nonoverlapping blocks of 4x4 and the number of on pixels are counted in each block. This generates an input matrix of 8x8 where each element is an integer in the range 0..16. This reduces dimensionality and gives invariance to small distortions.\"*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load libraries\n",
    "from sklearn.datasets import load_digits\n",
    "\n",
    "# Load dataset\n",
    "digits = load_digits()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### How to do it...\n",
    "\n",
    "Again, we will use the Pipeline() class to sequentially apply data scaling prior to t-SNE."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load libraries\n",
    "from sklearn.manifold import TSNE\n",
    "\n",
    "# Create t-SNE pipeline\n",
    "tsne_pipeline = Pipeline([\n",
    "    ('scaler', StandardScaler()),\n",
    "    ('tsne', TSNE(n_components=2, random_state=2024))\n",
    "])\n",
    "\n",
    "# Fit and transform the digits data\n",
    "X_tsne = tsne_pipeline.fit_transform(digits.data)\n",
    "# Visualize t-SNE results\n",
    "plt.figure(figsize=(10, 8))\n",
    "scatter = plt.scatter(X_tsne[:, 0], X_tsne[:, 1], c=digits.target, cmap='Paired', label=digits.target)\n",
    "plt.xlabel('t-SNE Component 1')\n",
    "plt.ylabel('t-SNE Component 2')\n",
    "plt.title('Digits Dataset - t-SNE Visualization')\n",
    "\n",
    "# Create legend\n",
    "legend_elements = [plt.Line2D([0], [0], marker='o', color='w', \n",
    "                             markerfacecolor=plt.cm.Paired(i/9), \n",
    "                             label=str(i), markersize=10)\n",
    "                  for i in range(10)]\n",
    "plt.legend(handles=legend_elements, title='Digit', loc='center left', bbox_to_anchor=(1, 0.5))\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Best Practices for t-SNE:\n",
    "1. Scale your data before applying t-SNE\n",
    "2. Experiment with perplexity parameter\n",
    "3. Use primarily for visualization\n",
    "\n",
    "### Common Pitfalls:\n",
    "- Using t-SNE for dimensionality reduction in a pipeline\n",
    "- Over-interpreting global structure\n",
    "- Not tuning perplexity parameter"
   ]
  },
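  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To illustrate the second best practice, the cell below re-runs t-SNE at a few perplexity values on the digits data loaded earlier. This is a sketch: the 500-sample subset keeps runtime manageable, and the perplexity values 5, 30, and 50 are arbitrary choices meant only to show how the embedding changes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: how the t-SNE embedding changes with perplexity\n",
    "X_digits_subset = StandardScaler().fit_transform(digits.data[:500])\n",
    "y_digits_subset = digits.target[:500]\n",
    "\n",
    "fig, axes = plt.subplots(1, 3, figsize=(15, 5))\n",
    "for ax, perplexity in zip(axes, [5, 30, 50]):\n",
    "    embedding = TSNE(n_components=2, perplexity=perplexity,\n",
    "                     random_state=2024).fit_transform(X_digits_subset)\n",
    "    ax.scatter(embedding[:, 0], embedding[:, 1],\n",
    "               c=y_digits_subset, cmap='Paired', s=10)\n",
    "    ax.set_title(f'perplexity = {perplexity}')\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ]
  },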
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Practical Exercises on Dimensionality Reduction\n",
    "\n",
    "### Exercise 1: PCA with Logistic Regression\n",
    "Compare classification performance with and without PCA"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load libraries\n",
    "YOUR CODE HERE\n",
    "\n",
    "# Split data\n",
    "YOUR CODE HERE\n",
    "\n",
    "# Pipeline without PCA\n",
    "YOUR CODE HERE\n",
    "\n",
    "# Pipeline with PCA\n",
    "YOUR CODE HERE\n",
    "\n",
    "# Fit and evaluate both pipelines\n",
    "YOUR CODE HERE\n",
    "\n",
    "# Print results\n",
    "YOUR CODE HERE"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Exercise 2: t-SNE for Clustering Visualization\n",
    "Visualize how well t-SNE preserves cluster structure"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load libraries\n",
    "YOUR CODE HERE\n",
    "\n",
    "# Apply K-means clustering\n",
    "YOUR CODE HERE\n",
    "\n",
    "# Create side-by-side plots\n",
    "YOUR CODE HERE\n",
    "\n",
    "# Plot using true labels\n",
    "YOUR CODE HERE\n",
    "\n",
    "# Plot using cluster labels\n",
    "YOUR CODE HERE"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.13.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
