{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "collapsed_sections": [
        "wImcsGl6Z13G"
      ]
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "# 🎨 PyGWalker: Turn Your Pandas DataFrame into an Interactive UI for Visual Analysis\n",
        "\n",
        "<div align=\"center\">\n",
        "\n",
        "[![PyPI version](https://badge.fury.io/py/pygwalker.svg)](https://badge.fury.io/py/pygwalker)\n",
        "[![GitHub stars](https://img.shields.io/github/stars/Kanaries/pygwalker?style=social)](https://github.com/Kanaries/pygwalker)\n",
        "[![Downloads](https://img.shields.io/pypi/dm/pygwalker)](https://pypi.org/project/pygwalker/)\n",
        "\n",
        "**Transform your DataFrame into a Tableau-style interface with just one line of code!**\n",
        "\n",
        "[Documentation](https://docs.kanaries.net/pygwalker) | [GitHub](https://github.com/Kanaries/pygwalker) | [Discord Community](https://discord.gg/Z4ngFWXz2U)\n",
        "\n",
        "</div>\n",
        "\n",
        "---\n",
        "\n",
        "**📊 Tutorial Info:**\n",
        "- ⏱️ **Estimated Time**: 30-45 minutes  \n",
        "- 📈 **Level**: Beginner to Intermediate  \n",
        "- 🐍 **Prerequisites**: Basic Python and pandas knowledge\n",
        "\n",
        "---"
      ],
      "metadata": {
        "id": "MBfSYAjuhzK_"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## 📋 Table of Contents\n",
        "\n",
        "1. [👋 Welcome & Introduction](#scrollTo=PEKrSxuJiH9Y&uniqifier=1)\n",
        "2. [🛠️ Setup & Installation](#scrollTo=RMnj1t4xRayv&line=2&uniqifier=1)\n",
        "3. [🚀 Quick Start](#scrollTo=9O-R8S2qTJ4h&uniqifier=1)\n",
        "4. [🎯 Core Features](#scrollTo=-8GysBxbUBgc&uniqifier=1)\n",
        "   - Loading Data\n",
        "   - Visualization Types\n",
        "   - Advanced Features\n",
        "5. [💡 Practical Use Cases](#scrollTo=W08-T0cBXIJa&uniqifier=1)\n",
        "   - Sales Analysis\n",
        "   - Customer Segmentation\n",
        "   - Data Quality Checks\n",
        "   - A/B Testing\n",
        "6. [💎 Best Practices](#scrollTo=o21glSewYWdT&uniqifier=1)\n",
        "7. [🐛 Troubleshooting](#scrollTo=Mr1jc3ExZ2Rs&uniqifier=1)\n",
        "8. [📚 Additional Resources](#scrollTo=MsxtC8eObowb&uniqifier=1)\n",
        "\n",
        "---\n",
        "\n",
        "## ✅ Your Progress Tracker\n",
        "\n",
        "Track your learning journey:\n",
        "\n",
        "- [ ] Completed Setup & Installation\n",
        "- [ ] Completed Quick Start\n",
        "- [ ] Completed Core Features\n",
        "- [ ] Completed Practical Use Cases\n",
        "- [ ] Completed Best Practices\n",
        "\n",
        "Mark them as you go! 🎉\n",
        "\n",
        "---"
      ],
      "metadata": {
        "id": "VitloDwmh94o"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "<a id=\"welcome\"></a>\n",
        "## 👋 Welcome!\n",
        "\n",
        "Hey there, data explorer! 🚀\n",
        "\n",
        "If you've ever wanted the power of tools like Tableau or Power BI but inside your Python environment, you're in the right place. **PyGWalker** (Python binding of Graphic Walker) is here to make your data exploration journey smooth, intuitive, and honestly... pretty fun!\n",
        "\n",
        "This tutorial will walk you through everything you need to know to become a PyGWalker pro. Whether you're a beginner or an experienced data scientist, you'll find something valuable here.\n",
        "\n",
        "### 🎯 What You'll Learn\n",
        "\n",
        "- ✅ How to get started in minutes\n",
        "- ✅ Creating stunning visualizations with drag-and-drop\n",
        "- ✅ Advanced features and customization\n",
        "- ✅ Real-world use cases and best practices\n",
        "- ✅ Performance optimization tips\n",
        "- ✅ Troubleshooting common issues\n",
        "\n",
        "### 📚 Prerequisites\n",
        "\n",
        "**Required:**\n",
        "- 🐍 Basic Python knowledge\n",
        "- 📊 Familiarity with pandas DataFrames\n",
        "\n",
        "**Helpful (but not required):**\n",
        "- 📈 Basic data visualization concepts\n",
        "- 📊 Understanding of statistical terms (mean, median, etc.)\n",
        "\n",
        "**Don't worry if you're new!** We explain everything as we go. 🎓\n",
        "\n",
        "Let's dive in! 🏊‍♂️\n",
        "\n",
        "---"
      ],
      "metadata": {
        "id": "PEKrSxuJiH9Y"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## 🤔 What is PyGWalker?\n",
        "\n",
        "**PyGWalker** (pronounced \"Pig Walker\" 🐷) is a Python library that turns your pandas DataFrame into an interactive, Tableau-style user interface for visual exploration.\n",
        "\n",
        "### Why PyGWalker is Awesome\n",
        "\n",
        "**🎯 One-Line Magic**\n",
        "- Seriously. One line of code. That's all it takes to get a full visual analysis interface.\n",
        "\n",
        "**🖱️ Drag-and-Drop Interface**\n",
        "- No need to memorize complex plotting syntax\n",
        "- Drag columns to create charts instantly\n",
        "- Switch between visualization types with a click\n",
        "\n",
        "**🚀 Lightning Fast**\n",
        "- Built for performance with large datasets\n",
        "- Real-time interaction with your data\n",
        "- Smooth experience in Jupyter and Google Colab\n",
        "\n",
        "**🎨 Rich Visualization Options**\n",
        "- Scatter plots, bar charts, line charts, heatmaps, and more\n",
        "- Automatic chart recommendations based on your data\n",
        "- Customizable colors, scales, and styling\n",
        "\n",
        "**🔍 Exploratory Data Analysis (EDA) Supercharged**\n",
        "- Filter, aggregate, and drill down into your data\n",
        "- Discover patterns and insights visually\n",
        "- Perfect for the initial data exploration phase\n",
        "\n",
        "---"
      ],
      "metadata": {
        "id": "_UvjB9ZimjAY"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## 🆚 PyGWalker vs Traditional Tools\n",
        "\n",
        "| Feature | PyGWalker | Matplotlib/Seaborn | Tableau/Power BI |\n",
        "|---------|-----------|-------------------|------------------|\n",
        "| **Code Required** | 1 line | Many lines | No code |\n",
        "| **Interactive** | ✅ Yes | ❌ No | ✅ Yes |\n",
        "| **Python Integration** | ✅ Native | ✅ Native | ❌ Limited |\n",
        "| **Learning Curve** | 🟢 Easy | 🟡 Medium | 🟡 Medium |\n",
        "| **Cost** | 🟢 Free | 🟢 Free | 🔴 Paid (mostly) |\n",
        "| **Jupyter/Colab** | ✅ Perfect | ✅ Good | ❌ No |\n",
        "| **Drag-and-Drop** | ✅ Yes | ❌ No | ✅ Yes |\n",
        "\n",
        "### When to Use PyGWalker:\n",
        "\n",
        "✅ **Perfect for:**\n",
        "- Quick exploratory data analysis (EDA)\n",
        "- Interactive presentations in notebooks\n",
        "- When you want visual insights without writing plotting code\n",
        "- Sharing interactive analysis with non-technical stakeholders\n",
        "- Teaching data analysis concepts\n",
        "\n",
        "⚠️ **Maybe not ideal for:**\n",
        "- Production dashboards (use Plotly Dash or Streamlit)\n",
        "- Highly customized, publication-ready static plots (use Matplotlib)\n",
        "- Automated reporting pipelines"
      ],
      "metadata": {
        "id": "Pw1DgZRBmi0o"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## 🎯 What We'll Build Today\n",
        "\n",
        "Throughout this tutorial, we'll explore PyGWalker using real-world datasets. Here's a sneak peek:\n",
        "\n",
        "1. **Quick Start**: Get your first visualization in under 2 minutes ⏱️\n",
        "2. **Core Features**: Master the drag-and-drop interface 🎨\n",
        "3. **Advanced Techniques**: Calculated fields, filters, and aggregations 🔧\n",
        "4. **Real Use Cases**: Sales analysis, customer segmentation, and more 📊\n",
        "5. **Pro Tips**: Best practices and performance optimization 💡\n",
        "\n",
        "By the end, you'll be able to:\n",
        "- ✅ Quickly explore any dataset visually\n",
        "- ✅ Create insightful visualizations without complex code\n",
        "- ✅ Impress your team with interactive data stories\n",
        "- ✅ Speed up your data analysis workflow significantly\n",
        "\n",
        "Ready? Let's get started! 🎉\n",
        "----"
      ],
      "metadata": {
        "id": "_2-6wKFURa08"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "## 🛠️ Setup & Installation\n",
        "\n",
        "Let's get PyGWalker installed and ready to roll! This should take less than a minute. ⏱️"
      ],
      "metadata": {
        "id": "RMnj1t4xRayv"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Install PyGWalker\n",
        "!pip install pygwalker -q\n",
        "\n",
        "print(\"✅ PyGWalker installed successfully!\")"
      ],
      "metadata": {
        "id": "X4Cl9m3AS_yh"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Import necessary libraries\n",
        "import pandas as pd\n",
        "import pygwalker as pyg\n",
        "import warnings\n",
        "warnings.filterwarnings('ignore')\n",
        "\n",
        "# Check PyGWalker version\n",
        "print(f\"🐷 PyGWalker version: {pyg.__version__}\")\n",
        "print(\"✅ All imports successful!\")"
      ],
      "metadata": {
        "id": "Zo0XO0BXTEK7"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 📦 What We Just Installed\n",
        "\n",
        "**PyGWalker** comes with everything you need:\n",
        "- Interactive visualization interface\n",
        "- Automatic chart type recommendations\n",
        "- Data processing capabilities\n",
        "- Export functionality\n",
        "\n",
        "**Compatibility:**\n",
        "- ✅ Google Colab (where we are now!)\n",
        "- ✅ Jupyter Notebook\n",
        "- ✅ Jupyter Lab\n",
        "- ✅ VS Code notebooks\n",
        "- ✅ Any IPython environment\n",
        "\n",
        "**Note:** PyGWalker works best with pandas DataFrames, so make sure your data is in that format!"
      ],
      "metadata": {
        "id": "X2pub7G2Rawh"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "## 🚀 Quick Start: Your First Visualization in 30 Seconds!\n",
        "\n",
        "Let's jump right in with a fun dataset: **Palmer Penguins** 🐧\n",
        "\n",
        "This dataset contains measurements of 3 penguin species from islands in Antarctica. It's perfect for learning because:\n",
        "- 🎯 Small and easy to understand\n",
        "- 📊 Mix of numerical and categorical data\n",
        "- 🐧 Penguins are adorable!\n",
        "\n",
        "Let's load it and create our first interactive visualization!"
      ],
      "metadata": {
        "id": "9O-R8S2qTJ4h"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Load the Palmer Penguins dataset\n",
        "url = \"https://raw.githubusercontent.com/mwaskom/seaborn-data/master/penguins.csv\"\n",
        "df = pd.read_csv(url)\n",
        "\n",
        "print(\"🐧 Dataset loaded successfully!\")\n",
        "print(f\"📊 Shape: {df.shape[0]} rows × {df.shape[1]} columns\")\n",
        "print(\"\\n\" + \"=\"*50)\n",
        "print(\"First look at our penguin friends:\")\n",
        "print(\"=\"*50)\n",
        "df.head()"
      ],
      "metadata": {
        "id": "PhkakdNpR8fk"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🔍 Understanding Our Dataset\n",
        "\n",
        "Let's see what we're working with:"
      ],
      "metadata": {
        "id": "6QeA2kNQRauY"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Dataset overview\n",
        "print(\"📋 Dataset Information:\")\n",
        "print(\"=\"*50)\n",
        "df.info()\n",
        "\n",
        "print(\"\\n📊 Basic Statistics:\")\n",
        "print(\"=\"*50)\n",
        "df.describe()\n",
        "\n",
        "print(\"\\n🐧 Penguin Species in our dataset:\")\n",
        "print(\"=\"*50)\n",
        "print(df['species'].value_counts())"
      ],
      "metadata": {
        "id": "5MyIEqBtTkvi"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "**Our penguin dataset includes:**\n",
        "- 🏝️ **species**: Penguin species (Adelie, Chinstrap, Gentoo)\n",
        "- 🏝️ **island**: Island where observed (Torgersen, Biscoe, Dream)\n",
        "- 📏 **bill_length_mm**: Length of the bill in millimeters\n",
        "- 📏 **bill_depth_mm**: Depth of the bill in millimeters\n",
        "- 🦅 **flipper_length_mm**: Length of the flipper in millimeters\n",
        "- ⚖️ **body_mass_g**: Body mass in grams\n",
        "- ⚧️ **sex**: Penguin sex (Male, Female)\n",
        "\n",
        "Now, let's see the magic! ✨"
      ],
      "metadata": {
        "id": "xA0Nc_EnRasO"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "## 🎨 The Magic Moment: One Line of Code!\n",
        "\n",
        "Here it comes... the moment you've been waiting for!\n",
        "\n",
        "Watch how **ONE single line** transforms our DataFrame into a full-fledged interactive visualization tool:"
      ],
      "metadata": {
        "id": "k9kD-nkKTp7z"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# 🪄 THE MAGIC LINE 🪄\n",
        "pyg.walk(df)"
      ],
      "metadata": {
        "id": "eTahv3MjRZ8w"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🎉 Congratulations! You Did It!\n",
        "\n",
        "**What just happened?**\n",
        "\n",
        "With that single line of code, you now have:\n",
        "- ✅ An interactive drag-and-drop interface\n",
        "- ✅ Multiple chart types at your fingertips\n",
        "- ✅ Automatic data type detection\n",
        "- ✅ Real-time filtering and aggregation\n",
        "- ✅ The power of Tableau... in your notebook!\n",
        "\n",
        "### 🎮 How to Use the Interface:\n",
        "\n",
        "**Left Panel - Fields:**\n",
        "- Drag any field (column) to the shelves on the right\n",
        "\n",
        "**Main Canvas - Visualization Area:**\n",
        "- See your charts come to life in real-time\n",
        "\n",
        "**Top Bar - Controls:**\n",
        "- 📊 Change chart types (bar, line, scatter, etc.)\n",
        "- 🎨 Customize colors and styling\n",
        "- 💾 Export your visualizations\n",
        "- ⚙️ Access advanced settings\n",
        "\n",
        "### 💡 Try These Quick Experiments:\n",
        "\n",
        "1. **Scatter Plot**:\n",
        "   - Drag `bill_length_mm` to X-axis\n",
        "   - Drag `bill_depth_mm` to Y-axis\n",
        "   - Drag `species` to Color\n",
        "   - 🎉 See how different species cluster!\n",
        "\n",
        "2. **Bar Chart**:\n",
        "   - Drag `species` to X-axis\n",
        "   - Drag `body_mass_g` to Y-axis\n",
        "   - The interface will auto-aggregate (mean by default)\n",
        "   - 🐧 Compare penguin sizes!\n",
        "\n",
        "3. **Distribution**:\n",
        "   - Drag `flipper_length_mm` to X-axis\n",
        "   - Change chart type to histogram\n",
        "   - Drag `species` to Color\n",
        "   - 📊 See the distribution patterns!\n",
        "\n",
        "**Take a few minutes to play around!** There's no wrong way to explore. 🎪"
      ],
      "metadata": {
        "id": "hYIbya8RT4FM"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🤓 Pro Tip: Understanding the Interface\n",
        "\n",
        "PyGWalker's interface is divided into key areas:\n",
        "\n",
        "**Encoding Shelves** (where you drag fields):\n",
        "- **Dimensions** 📏: Categorical/discrete data (species, island, sex)\n",
        "- **Measures** 📊: Numerical/continuous data (bill_length, body_mass)\n",
        "\n",
        "**Marks Shelf**:\n",
        "- **Color**: Add visual distinction\n",
        "- **Size**: Vary point/bar sizes\n",
        "- **Opacity**: Control transparency\n",
        "- **Shape**: Different marker shapes\n",
        "\n",
        "**Filters**:\n",
        "- Click the filter icon on any field to narrow down your data\n",
        "\n",
        "The interface automatically suggests the best visualization based on what you drag. Smart, right? 🧠"
      ],
      "metadata": {
        "id": "1FkBJpdST5kr"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "# 🎯 Core Features: Mastering PyGWalker\n",
        "\n",
        "Now that you've seen the magic, let's dive deeper into what makes PyGWalker so powerful!\n",
        "\n",
        "We'll explore:\n",
        "- 📁 Loading different data sources\n",
        "- 📊 Creating various visualization types\n",
        "- 🔧 Advanced features and customization\n",
        "- 🎨 Making your visualizations pop!"
      ],
      "metadata": {
        "id": "-8GysBxbUBgc"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## 📁 Part 1: Loading Different Data Sources\n",
        "\n",
        "PyGWalker is flexible! You can feed it data from multiple sources. Let's see how:"
      ],
      "metadata": {
        "id": "oK901jFKUHOH"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Method 1: From a CSV file (what we just did!)\n",
        "df_from_url = pd.read_csv(\"https://raw.githubusercontent.com/mwaskom/seaborn-data/master/penguins.csv\")\n",
        "print(\"✅ Method 1: Loaded from URL\")\n",
        "\n",
        "# Method 2: From a local CSV (if you have one)\n",
        "# df_from_local = pd.read_csv('your_file.csv')\n",
        "\n",
        "# Method 3: From a dictionary\n",
        "data_dict = {\n",
        "    'species': ['Adelie', 'Gentoo', 'Chinstrap'],\n",
        "    'avg_mass': [3700, 5076, 3733],\n",
        "    'count': [152, 124, 68]\n",
        "}\n",
        "df_from_dict = pd.DataFrame(data_dict)\n",
        "print(\"✅ Method 2: Created from dictionary\")\n",
        "\n",
        "# Method 4: From Excel (requires openpyxl)\n",
        "# df_from_excel = pd.read_excel('your_file.xlsx')\n",
        "\n",
        "# Method 5: From SQL, APIs, web scraping... anything that becomes a DataFrame!\n",
        "\n",
        "print(\"\\n🎯 The key point: If it's a pandas DataFrame, PyGWalker can visualize it!\")"
      ],
      "metadata": {
        "id": "UARakUVoUO75"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 💡 Quick Tip: Data Preparation\n",
        "\n",
        "PyGWalker works best when your data is clean! Before using `pyg.walk()`, consider:\n",
        "\n",
        "✅ **Good practices:**\n",
        "- Handle missing values (we'll see this in action!)\n",
        "- Use meaningful column names\n",
        "- Ensure proper data types (dates as datetime, numbers as numeric)\n",
        "- Keep reasonable dataset sizes (< 100k rows for best performance)\n",
        "\n",
        "⚠️ **PyGWalker will still work** with messy data, but clean data = better insights!"
      ],
      "metadata": {
        "id": "t4FpuA3tUStE"
      }
    },
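    {
      "cell_type": "code",
      "source": [
        "# 🧹 The checklist above as a small pandas sketch\n",
        "# (column names match our penguin data; adapt them for your own dataset)\n",
        "prep = df.copy()\n",
        "\n",
        "# 1. Handle missing values (dropna() here; fillna() if you'd rather impute)\n",
        "prep = prep.dropna()\n",
        "\n",
        "# 2. Ensure proper data types\n",
        "prep['species'] = prep['species'].astype('category')\n",
        "\n",
        "# 3. Use meaningful column names\n",
        "prep = prep.rename(columns={'body_mass_g': 'body_mass_grams'})\n",
        "\n",
        "# 4. Sample very large frames before visualizing\n",
        "if len(prep) > 100_000:\n",
        "    prep = prep.sample(50_000, random_state=42)\n",
        "\n",
        "print(f\"✅ Prepared: {prep.shape[0]} rows × {prep.shape[1]} columns\")"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },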
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "## 📊 Part 2: Visualization Types & When to Use Them\n",
        "\n",
        "PyGWalker supports a wide variety of chart types. Let's explore the most useful ones with our penguin friends! 🐧"
      ],
      "metadata": {
        "id": "YKVsGrRxUHMC"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 1️⃣ Scatter Plots: Finding Relationships\n",
        "\n",
        "**Best for:** Exploring relationships between two numerical variables\n",
        "\n",
        "**Let's investigate:** Do penguins with longer bills also have deeper bills?"
      ],
      "metadata": {
        "id": "DRlXacmXUHJm"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Let's create a clean version of our dataset for visualization\n",
        "df_clean = df.dropna()  # Remove rows with missing values\n",
        "\n",
        "print(f\"🧹 Cleaned dataset: {df_clean.shape[0]} rows (removed {df.shape[0] - df_clean.shape[0]} rows with missing data)\")\n",
        "print(\"\\n🎨 Now let's visualize! Run the cell below:\")"
      ],
      "metadata": {
        "id": "jJPlITgeUeRH"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Scatter plot exploration\n",
        "pyg.walk(df_clean, hide_data_source_config=True)"
      ],
      "metadata": {
        "id": "L3cI2ryFUg7d"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🎯 Try This - Scatter Plot Exercise:\n",
        "\n",
        "**Step-by-step instructions:**\n",
        "\n",
        "\n",
        "- 💡 If \"Aggregation\" is enabled, disable it. (cube symbol in the top bar)\n",
        "\n",
        "\n",
        "1. **Create the basic scatter:**\n",
        "   - Drag `bill_length_mm` to **X-axis**\n",
        "   - Drag `bill_depth_mm` to **Y-axis**\n",
        "   - You should see a scatter plot!\n",
        "\n",
        "2. **Add species distinction:**\n",
        "   - Drag `species` to **Color** in the Marks shelf\n",
        "   - Wow! See how each species forms its own cluster? 🎨\n",
        "\n",
        "3. **Add more context:**\n",
        "   - Drag `sex` to **Shape**\n",
        "   - Drag `body_mass_g` to **Size**\n",
        "   - Now you can see size, species, and sex all at once!\n",
        "\n",
        "4. **Insights to look for:**\n",
        "   - 🔍 Gentoo penguins have longer but shallower bills\n",
        "   - 🔍 Adelie penguins cluster in the upper-left\n",
        "   - 🔍 Each species has distinct bill characteristics!\n",
        "\n",
        "**Pro move:** Try changing the chart type using the dropdown at the top. PyGWalker suggests the best type automatically! 🤖"
      ],
      "metadata": {
        "id": "XSpDmh_uUHHV"
      }
    },
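    {
      "cell_type": "code",
      "source": [
        "# 🔍 Optional: cross-check the scatter-plot clusters with plain pandas\n",
        "# (these per-species means should line up with the clusters you just saw)\n",
        "print(df_clean.groupby('species')[['bill_length_mm', 'bill_depth_mm']].mean().round(1))"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },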
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "### 2️⃣ Bar Charts: Comparing Categories\n",
        "\n",
        "**Best for:** Comparing values across different groups\n",
        "\n",
        "**Let's investigate:** Which penguin species is the heaviest on average?"
      ],
      "metadata": {
        "id": "uNo9M2pHVHDk"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Bar chart exploration\n",
        "pyg.walk(df_clean, hide_data_source_config=True)"
      ],
      "metadata": {
        "id": "zaR-Iw6rVIrE"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🎯 Try This - Bar Chart Exercise:\n",
        "\n",
        "**Creating a comparative bar chart:**\n",
        "\n",
        "1. **Basic bar chart:**\n",
        "   - Drag `species` to **X-axis**\n",
        "   - Drag `body_mass_g` to **Y-axis**\n",
        "   - PyGWalker automatically calculates the **average** (mean)!\n",
        "\n",
        "2. **Compare by island:**\n",
        "   - Drag `island` to **Color**\n",
        "   - Now you can see how species weights vary by location 🏝️\n",
        "\n",
        "3. **Change aggregation:**\n",
        "   - Click on `body_mass_g` in the Y-axis\n",
        "   - Try different aggregations: Sum, Count, Median, Min, Max\n",
        "   - Each tells a different story!\n",
        "\n",
        "4. **Flip it:**\n",
        "   - Try swapping X and Y axes (drag them to opposite positions)\n",
        "   - Horizontal bars can be easier to read with long labels\n",
        "\n",
        "**Did you notice?** Gentoo penguins are significantly heavier! 💪🐧"
      ],
      "metadata": {
        "id": "1GMxDewDVKF6"
      }
    },
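    {
      "cell_type": "code",
      "source": [
        "# 📊 Optional: what the bar chart computes, written out in pandas\n",
        "# (try matching these numbers against the chart in the UI)\n",
        "print(\"Mean body mass by species:\")\n",
        "print(df_clean.groupby('species')['body_mass_g'].mean().round(0))\n",
        "\n",
        "print(\"\\nOther aggregations are one method away:\")\n",
        "print(df_clean.groupby('species')['body_mass_g'].agg(['median', 'min', 'max', 'count']))"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },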
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "### 3️⃣ Line Charts: Trends Over... Wait! 🤔\n",
        "\n",
        "**Interesting discovery:** Our penguin dataset doesn't have a time dimension!\n",
        "\n",
        "But that's okay - let's create one to demonstrate line charts:"
      ],
      "metadata": {
        "id": "unqRdFtaVG9Z"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Let's create a time-series dataset for demonstration\n",
        "import numpy as np\n",
        "\n",
        "# Simulate penguin population monitoring over months\n",
        "months = pd.date_range('2023-01-01', periods=12, freq='M')\n",
        "penguin_trends = pd.DataFrame({\n",
        "    'month': months,\n",
        "    'Adelie_count': np.random.randint(45, 55, 12),\n",
        "    'Gentoo_count': np.random.randint(35, 45, 12),\n",
        "    'Chinstrap_count': np.random.randint(20, 30, 12)\n",
        "})\n",
        "\n",
        "# Reshape for PyGWalker\n",
        "penguin_trends_long = penguin_trends.melt(\n",
        "    id_vars=['month'],\n",
        "    var_name='species',\n",
        "    value_name='count'\n",
        ")\n",
        "\n",
        "print(\"📈 Time-series data created!\")\n",
        "penguin_trends_long.head(10)"
      ],
      "metadata": {
        "id": "wLVYHVIkVO9-"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Line chart exploration\n",
        "pyg.walk(penguin_trends_long, hide_data_source_config=True)"
      ],
      "metadata": {
        "id": "uZEI3AYxVSfH"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🎯 Try This - Line Chart Exercise:\n",
        "\n",
        "1. **Create the trend line:**\n",
        "   - Drag `month` to **X-axis**\n",
        "   - Drag `count` to **Y-axis**\n",
        "   - Change chart type to **Line** 📈\n",
        "\n",
        "2. **Compare species:**\n",
        "   - Drag `species` to **Color**\n",
        "   - Now you see all three species trends!\n",
        "\n",
        "3. **Add markers:**\n",
        "   - In the marks settings, enable data points\n",
        "   - Makes it easier to see individual observations\n",
        "\n",
        "**Use case:** Line charts are perfect for time-series data, trends, and sequential patterns!"
      ],
      "metadata": {
        "id": "rUT6czOXVG4i"
      }
    },
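    {
      "cell_type": "code",
      "source": [
        "# 📈 Optional follow-up: smooth each species with a 3-month rolling average\n",
        "# (a common next step once a line chart looks noisy; purely illustrative here)\n",
        "smoothed = penguin_trends_long.sort_values('month').copy()\n",
        "smoothed['count_3mo_avg'] = (\n",
        "    smoothed.groupby('species')['count']\n",
        "    .transform(lambda s: s.rolling(3, min_periods=1).mean())\n",
        ")\n",
        "smoothed.head(6)"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },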
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "### 4️⃣ Histograms & Distributions: Understanding Spread\n",
        "\n",
        "**Best for:** Seeing the distribution and frequency of values\n",
        "\n",
        "**Let's investigate:** How are flipper lengths distributed across our penguins?"
      ],
      "metadata": {
        "id": "m2ORLshAVG2c"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Back to our main penguin dataset\n",
        "pyg.walk(df_clean, hide_data_source_config=True)"
      ],
      "metadata": {
        "id": "IRbb-HEXSExG"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🎯 Try This - Histogram Exercise:\n",
        "\n",
        "1. **Create a histogram:**\n",
        "   - Drag `flipper_length_mm` to **X-axis**\n",
        "   - Change chart type to **Histogram** or **Bar**\n",
        "   - PyGWalker automatically bins the data!\n",
        "\n",
        "2. **See by species:**\n",
        "   - Drag `species` to **Color**\n",
        "   - Choose \"Stack\" or \"Dodge\" layout\n",
        "   - See how each species has different flipper sizes! 🦅\n",
        "\n",
        "3. **Adjust bins:**\n",
        "   - Click on the X-axis settings\n",
        "   - Try different bin sizes (narrower = more detail)\n",
        "\n",
        "**Insight:** You'll notice three distinct peaks - one for each species! This is called a \"trimodal distribution.\" 🎯"
      ],
      "metadata": {
        "id": "6uSvOvlPVizw"
      }
    },
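    {
      "cell_type": "code",
      "source": [
        "# 📐 Optional: histogram binning under the hood with pd.cut\n",
        "# (PyGWalker does something similar when it auto-bins a measure)\n",
        "bins = pd.cut(df_clean['flipper_length_mm'], bins=10)\n",
        "print(bins.value_counts().sort_index())"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },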
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "### 5️⃣ Heatmaps & Tables: Dense Data Views\n",
        "\n",
        "**Best for:** Showing values across two categorical dimensions"
      ],
      "metadata": {
        "id": "edDktKbiVix8"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Create a summary table for heatmap\n",
        "summary_df = df_clean.groupby(['species', 'island']).agg({\n",
        "    'body_mass_g': 'mean',\n",
        "    'bill_length_mm': 'mean',\n",
        "    'flipper_length_mm': 'mean'\n",
        "}).reset_index().round(1)\n",
        "\n",
        "print(\"📊 Summary statistics by species and island:\")\n",
        "summary_df"
      ],
      "metadata": {
        "id": "G5AOwJeFVoRA"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Heatmap exploration\n",
        "pyg.walk(summary_df, hide_data_source_config=True)"
      ],
      "metadata": {
        "id": "s3RUjiyDVvFZ"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🎯 Try This - Heatmap Exercise:\n",
        "\n",
        "1. **Create a heatmap:**\n",
        "   - Drag `species` to **X-axis**\n",
        "   - Drag `island` to **Y-axis**\n",
        "   - Drag `body_mass_g` to **Color**\n",
        "   - Change chart type to **Heatmap** or **Square**\n",
        "\n",
        "2. **Insights at a glance:**\n",
        "   - Dark colors = higher values\n",
        "   - Light colors = lower values\n",
        "   - Empty cells = no data (e.g., no Chinstrap on Biscoe)\n",
        "\n",
        "**Use case:** Heatmaps are perfect for correlation matrices, confusion matrices, or any 2D categorical comparison!"
      ],
      "metadata": {
        "id": "O4Iw30vaVit3"
      }
    },
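    {
      "cell_type": "code",
      "source": [
        "# 🔥 Optional: the same heatmap as a pivoted table\n",
        "# (rows = island, columns = species, values = mean body mass)\n",
        "pivot = summary_df.pivot(index='island', columns='species', values='body_mass_g')\n",
        "print(pivot)\n",
        "print(\"\\nNaN cells are the empty squares in the heatmap (no data for that combo).\")"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },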
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "## 🔧 Part 3: Advanced Features\n",
        "\n",
        "Now let's level up! 🚀 These features will make you a PyGWalker power user."
      ],
      "metadata": {
        "id": "HhcXxRSqVird"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### ⚡ Feature 1: Filters - Focus on What Matters\n",
        "\n",
        "Filters help you drill down into specific subsets of your data.\n",
        "\n",
        "**Let's explore:** Male vs Female penguins by species"
      ],
      "metadata": {
        "id": "ji36B_CaVipV"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Full dataset for filtering demo\n",
        "pyg.walk(df_clean, hide_data_source_config=True)"
      ],
      "metadata": {
        "id": "1fTUUI7OV27B"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🎯 Try This - Filtering Exercise:\n",
        "\n",
        "1. **Add a filter:**\n",
        "   - Find the **Filters** section (usually top-right)\n",
        "   - Click \"Add Filter\"\n",
        "   - Select `sex` and choose only \"Male\"\n",
        "   - Watch your visualization update instantly! ⚡\n",
        "\n",
        "2. **Multiple filters:**\n",
        "   - Add another filter for `island`\n",
        "   - Select only \"Biscoe\"\n",
        "   - Now you're looking at male penguins from Biscoe only!\n",
        "\n",
        "3. **Dynamic filtering:**\n",
        "   - Try selecting different values\n",
        "   - Remove filters to go back to full data\n",
        "   - Use range filters for numeric fields (e.g., body_mass_g > 4000)\n",
        "\n",
        "**Pro tip:** Filters don't change your DataFrame - they just change what's displayed! Your original data stays safe. 🛡️"
      ],
      "metadata": {
        "id": "X3AiYe5DVimc"
      }
    },
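    {
      "cell_type": "markdown",
      "source": [
        "**Under the hood:** a UI filter is equivalent to a boolean mask in pandas, and it really does leave `df_clean` untouched. A minimal sketch of the \"male penguins on Biscoe\" filter from the exercise:"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# The UI filter from the exercise, expressed as pandas boolean indexing\n",
        "subset = df_clean[(df_clean['sex'] == 'Male') & (df_clean['island'] == 'Biscoe')]\n",
        "print(f\"{len(df_clean)} rows total -> {len(subset)} rows after filtering; df_clean is unchanged\")"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },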
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "### ⚡ Feature 2: Aggregations - Summarize Like a Pro\n",
        "\n",
        "PyGWalker automatically aggregates data when needed. Let's master this!"
      ],
      "metadata": {
        "id": "MPMD9cAVV6RW"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "pyg.walk(df_clean, hide_data_source_config=True)"
      ],
      "metadata": {
        "id": "A44TX3HIV943"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🎯 Try This - Aggregation Exercise:\n",
        "\n",
        "1. **Understanding auto-aggregation:**\n",
        "   - Drag `species` to X-axis\n",
        "   - Drag `body_mass_g` to Y-axis\n",
        "   - Notice it shows **average** by default\n",
        "\n",
        "2. **Change aggregation type:**\n",
        "   - Click on `body_mass_g` in the Y-axis shelf\n",
        "   - Try these:\n",
        "     - **Count**: How many penguins per species?\n",
        "     - **Sum**: Total mass of all penguins per species\n",
        "     - **Median**: Middle value (less affected by outliers)\n",
        "     - **Min/Max**: Smallest/largest penguin per species\n",
        "     - **Std Dev**: How varied are the weights?\n",
        "\n",
        "3. **Multiple measures:**\n",
        "   - You can add multiple fields to Y-axis!\n",
        "   - Try adding both `body_mass_g` and `flipper_length_mm`\n",
        "   - Compare two metrics side by side\n",
        "\n",
        "**Real-world use:** Aggregations are crucial for sales reports, KPI dashboards, and summary statistics! 📊"
      ],
      "metadata": {
        "id": "aesvBSpTV6PM"
      }
    },
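    {
      "cell_type": "markdown",
      "source": [
        "**For comparison:** every aggregation you clicked through above has a direct pandas `groupby` equivalent (a sketch on the same `df_clean`):"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# The UI aggregations as a single pandas groupby\n",
        "df_clean.groupby('species')['body_mass_g'].agg(['count', 'sum', 'mean', 'median', 'min', 'max', 'std']).round(1)"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },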
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "### ⚡ Feature 3: Sorting & Ranking\n",
        "\n",
        "Make patterns jump out by ordering your data!"
      ],
      "metadata": {
        "id": "29c2e3XkV6NH"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🎯 Try This - Sorting Exercise:\n",
        "\n",
        "1. **Sort a bar chart:**\n",
        "   - Create: `species` on X, `body_mass_g` (mean) on Y\n",
        "   - Click on the axis or bar\n",
        "   - Look for sort options (ascending/descending)\n",
        "   - Watch bars rearrange! 📊\n",
        "\n",
        "2. **Sort by multiple fields:**\n",
        "   - Some chart types allow nested sorting\n",
        "   - Great for complex categorical data\n",
        "\n",
        "**Use case:** Rankings, top N analysis, identifying outliers"
      ],
      "metadata": {
        "id": "stbFmVaoV6Ky"
      }
    },
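    {
      "cell_type": "markdown",
      "source": [
        "**Same ranking in pandas** (a quick sketch, handy for sanity-checking the sorted bars):"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Mean body mass per species, sorted descending - mirrors the sorted bar chart\n",
        "df_clean.groupby('species')['body_mass_g'].mean().sort_values(ascending=False).round(1)"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },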
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "### ⚡ Feature 4: Calculated Fields (Power User Feature! 🔥)\n",
        "\n",
        "Create new metrics on-the-fly without modifying your DataFrame!\n",
        "\n",
        "**Example:** Let's calculate Body Mass Index (sort of) for penguins"
      ],
      "metadata": {
        "id": "n-cYO2XqWIwY"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 💡 Calculated Field Example:\n",
        "\n",
        "While PyGWalker has calculated field capabilities, the exact implementation varies by version.\n",
        "\n",
        "**Alternative approach** - Create in pandas first:"
      ],
      "metadata": {
        "id": "qd5VCW6oWItS"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Add calculated fields to our DataFrame\n",
        "df_enhanced = df_clean.copy()\n",
        "\n",
        "# Bill ratio: length to depth\n",
        "df_enhanced['bill_ratio'] = (df_enhanced['bill_length_mm'] / df_enhanced['bill_depth_mm']).round(2)\n",
        "\n",
        "# Mass category\n",
        "df_enhanced['size_category'] = pd.cut(\n",
        "    df_enhanced['body_mass_g'],\n",
        "    bins=[0, 3500, 4500, 6500],\n",
        "    labels=['Small', 'Medium', 'Large']\n",
        ")\n",
        "\n",
        "# Flipper to mass ratio (efficiency!)\n",
        "df_enhanced['flipper_mass_ratio'] = (df_enhanced['flipper_length_mm'] / df_enhanced['body_mass_g'] * 1000).round(2)\n",
        "\n",
        "print(\"✨ Enhanced dataset with calculated fields!\")\n",
        "df_enhanced[['species', 'bill_ratio', 'size_category', 'flipper_mass_ratio']].head()"
      ],
      "metadata": {
        "id": "yKxQeYNyWMrU"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Explore the enhanced dataset\n",
        "pyg.walk(df_enhanced, hide_data_source_config=True)"
      ],
      "metadata": {
        "id": "GwysISBkWQU3"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🎯 Try This - Calculated Fields Exercise:\n",
        "\n",
        "1. **Explore bill ratio:**\n",
        "   - Drag `bill_ratio` to X-axis\n",
        "   - Drag `species` to Color\n",
        "   - Which species has the longest bills relative to depth?\n",
        "\n",
        "2. **Use size categories:**\n",
        "   - Drag `size_category` to X-axis\n",
        "   - Drag `species` to Color\n",
        "   - Create a stacked bar chart\n",
        "   - See the size distribution per species!\n",
        "\n",
        "3. **Efficiency analysis:**\n",
        "   - Create scatter: `body_mass_g` vs `flipper_mass_ratio`\n",
        "   - Color by `species`\n",
        "   - Higher ratio = more flipper per unit of body mass (more \"efficient\" flippers!)\n",
        "\n",
        "**Real-world use:** Calculated fields are essential for:\n",
        "- KPIs (conversion rates, profit margins)\n",
        "- Normalized metrics (per capita, percentages)\n",
        "- Custom business logic"
      ],
      "metadata": {
        "id": "bKcj7zJKWIrM"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "## 🎨 Part 4: Customization & Styling\n",
        "\n",
        "Make your visualizations beautiful and professional! ✨"
      ],
      "metadata": {
        "id": "HRU7Dg9uWIo9"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# PyGWalker with custom configuration\n",
        "pyg.walk(\n",
        "    df_enhanced,\n",
        "    hide_data_source_config=True,\n",
        "    spec=\"./config.json\",  # Save/load your chart configurations (optional)\n",
        "    kernel_computation=True  # Better performance for large datasets\n",
        ")"
      ],
      "metadata": {
        "id": "JxZnRVPvWUeR"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🎨 Customization Options:\n",
        "\n",
        "**Visual Styling:**\n",
        "- 🌗 **Theme**: Switch between light and dark modes\n",
        "- 🎨 **Colors**: Click on color legends to customize palettes\n",
        "- 📏 **Axes**: Customize labels, scales (linear, log), and ranges\n",
        "- 📊 **Titles**: Add descriptive titles to your charts\n",
        "\n",
        "**Interface Options:**\n",
        "- `hide_data_source_config=True`: Cleaner interface, hides data source panel\n",
        "- `appearance='light'` or `appearance='dark'`: Theme preference (older versions used the `dark` parameter)\n",
        "- `kernel_computation=True`: Offload calculations to Python kernel (faster!)\n",
        "\n",
        "**Saving Your Work:**\n",
        "- 💾 **Export charts**: Use the export button for PNG/SVG\n",
        "- 📋 **Save configuration**: Export your chart setup as JSON\n",
        "- 🔄 **Load configuration**: Reuse your favorite chart setups\n",
        "\n",
        "**Pro tip:** You can save your PyGWalker configuration and load it later for consistent visualizations! 🎯"
      ],
      "metadata": {
        "id": "T6h6ooBIWWnK"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "## 🎓 Quick Recap: Core Features\n",
        "\n",
        "You've learned A LOT! Let's recap: 🎉\n",
        "\n",
        "✅ **Data Loading**: URLs, files, dictionaries → any DataFrame works!\n",
        "\n",
        "✅ **Chart Types**:\n",
        "- 📊 Scatter plots → relationships\n",
        "- 📊 Bar charts → comparisons\n",
        "- 📈 Line charts → trends\n",
        "- 📊 Histograms → distributions\n",
        "- 🔥 Heatmaps → 2D patterns\n",
        "\n",
        "✅ **Advanced Features**:\n",
        "- 🔍 Filters → focus on subsets\n",
        "- 📊 Aggregations → summarize data\n",
        "- 🔄 Sorting → reveal patterns\n",
        "- ⚡ Calculated fields → custom metrics\n",
        "\n",
        "✅ **Customization**:\n",
        "- 🎨 Themes and colors\n",
        "- 💾 Export and save\n",
        "- ⚙️ Performance options\n",
        "\n",
        "**You're now a PyGWalker intermediate user!** 🎊\n",
        "\n",
        "Next up, we'll dive into real-world use cases and best practices. Ready? 🚀"
      ],
      "metadata": {
        "id": "8UGLif49WWie"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "# 💡 Practical Use Cases: Real-World Applications\n",
        "\n",
        "Time to see PyGWalker in action with scenarios you'll actually face! 🌍\n",
        "\n",
        "We'll explore:\n",
        "1. 📈 **Sales Analysis**: Revenue trends and performance\n",
        "2. 👥 **Customer Segmentation**: Understanding your audience\n",
        "3. 🔍 **Data Quality Checks**: Finding problems in your data\n",
        "4. 🧪 **A/B Testing Results**: Comparing experiments\n",
        "\n",
        "Each example includes a realistic dataset and step-by-step analysis. Let's go! 🚀"
      ],
      "metadata": {
        "id": "W08-T0cBXIJa"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "## 📈 Use Case 1: Sales Analysis\n",
        "\n",
        "**Scenario:** You're a data analyst at an e-commerce company. Your manager wants insights on:\n",
        "- Which products are selling best?\n",
        "- Are sales trending up or down?\n",
        "- Which regions are performing well?\n",
        "\n",
        "Let's create a realistic sales dataset and analyze it!"
      ],
      "metadata": {
        "id": "2cCOD9c-XO-0"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Create a realistic sales dataset\n",
        "import pandas as pd\n",
        "import numpy as np\n",
        "from datetime import datetime, timedelta\n",
        "\n",
        "# Set seed for reproducibility\n",
        "np.random.seed(42)\n",
        "\n",
        "# Generate dates for the last 12 months\n",
        "date_range = pd.date_range(end=datetime.now(), periods=365, freq='D')\n",
        "\n",
        "# Product categories\n",
        "products = ['Laptop', 'Phone', 'Tablet', 'Headphones', 'Smartwatch', 'Camera']\n",
        "regions = ['North America', 'Europe', 'Asia', 'South America']\n",
        "channels = ['Online', 'Retail']\n",
        "\n",
        "# Generate sales data\n",
        "n_records = 1000\n",
        "sales_data = pd.DataFrame({\n",
        "    'date': np.random.choice(date_range, n_records),\n",
        "    'product': np.random.choice(products, n_records),\n",
        "    'region': np.random.choice(regions, n_records),\n",
        "    'channel': np.random.choice(channels, n_records),\n",
        "    'units_sold': np.random.randint(1, 50, n_records),\n",
        "    'unit_price': np.random.uniform(50, 2000, n_records).round(2),\n",
        "})\n",
        "\n",
        "# Calculate revenue\n",
        "sales_data['revenue'] = (sales_data['units_sold'] * sales_data['unit_price']).round(2)\n",
        "\n",
        "# Add some seasonality (higher sales in Nov-Dec)\n",
        "sales_data.loc[sales_data['date'].dt.month.isin([11, 12]), 'revenue'] *= 1.5\n",
        "sales_data['revenue'] = sales_data['revenue'].round(2)\n",
        "\n",
        "# Sort by date\n",
        "sales_data = sales_data.sort_values('date').reset_index(drop=True)\n",
        "\n",
        "print(\"💰 Sales dataset created!\")\n",
        "print(f\"📊 Records: {len(sales_data):,}\")\n",
        "print(f\"💵 Total Revenue: ${sales_data['revenue'].sum():,.2f}\")\n",
        "print(f\"📅 Date Range: {sales_data['date'].min().date()} to {sales_data['date'].max().date()}\")\n",
        "print(\"\\n\" + \"=\"*60)\n",
        "sales_data.head(10)"
      ],
      "metadata": {
        "id": "KCkf7bjHXRLp"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Quick overview of sales data\n",
        "print(\"📊 Sales Summary Statistics:\\n\")\n",
        "print(sales_data.describe())\n",
        "print(\"\\n\" + \"=\"*60)\n",
        "print(\"🎯 Sales by Product:\\n\")\n",
        "print(sales_data.groupby('product')['revenue'].agg(['sum', 'mean', 'count']).sort_values('sum', ascending=False))"
      ],
      "metadata": {
        "id": "7zC7YDbyXci6"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Let's analyze! 🚀\n",
        "pyg.walk(sales_data, hide_data_source_config=True)"
      ],
      "metadata": {
        "id": "V720eGNIXeSl"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🎯 Sales Analysis Exercises:\n",
        "\n",
        "**Exercise 1: Revenue by Product** 📊\n",
        "1. Drag `product` to **X-axis**\n",
        "2. Drag `revenue` to **Y-axis** (will auto-aggregate to SUM)\n",
        "3. Sort descending to see top performers\n",
        "4. **Question**: Which product generates the most revenue?\n",
        "\n",
        "**Exercise 2: Sales Trends Over Time** 📈\n",
        "1. Drag `date` to **X-axis**\n",
        "2. Drag `revenue` to **Y-axis**\n",
        "3. Change to **Line chart**\n",
        "4. Drag `product` to **Color**\n",
        "5. **Question**: Do you see the holiday season spike? (Nov-Dec)\n",
        "\n",
        "**Exercise 3: Regional Performance** 🌍\n",
        "1. Create a bar chart: `region` vs `revenue`\n",
        "2. Drag `channel` to **Color**\n",
        "3. Use \"Dodge\" layout to compare side-by-side\n",
        "4. **Question**: Which region prefers online vs retail?\n",
        "\n",
        "**Exercise 4: Profitability Analysis** 💰\n",
        "1. Scatter plot: `units_sold` (X) vs `revenue` (Y)\n",
        "2. Add `product` to **Color**\n",
        "3. Add `unit_price` to **Size**\n",
        "4. **Question**: Which products are high-volume vs high-value?\n",
        "\n",
        "**Exercise 5: Monthly Trends** 📅\n",
        "1. Create a calculated field for month (or use date aggregation)\n",
        "2. Line chart: month vs revenue\n",
        "3. **Insight**: Identify seasonal patterns for inventory planning!\n",
        "\n",
        "### 💡 Business Insights You Might Discover:\n",
        "\n",
        "- 🏆 **Top performers**: Identify which products to stock more\n",
        "- 📉 **Declining products**: Spot products needing promotion\n",
        "- 🌍 **Regional preferences**: Tailor marketing by region\n",
        "- 📅 **Seasonality**: Plan inventory and campaigns\n",
        "- 💻 **Channel effectiveness**: Optimize sales channels\n",
        "\n",
        "**Pro tip**: Save these visualizations and share them with your team! Export as PNG for presentations. 📸"
      ],
      "metadata": {
        "id": "He7n8oMNXO8q"
      }
    },
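    {
      "cell_type": "markdown",
      "source": [
        "**Helper for Exercise 5:** one way to precompute a month field in pandas (an assumed approach - binning the date inside the UI also works):"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Add a month column, then re-run pyg.walk(sales_data, ...) to chart month vs revenue\n",
        "sales_data['month'] = sales_data['date'].dt.to_period('M').astype(str)\n",
        "sales_data[['date', 'month', 'revenue']].head()"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },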
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "## 👥 Use Case 2: Customer Segmentation\n",
        "\n",
        "**Scenario:** You work for a subscription service. Marketing wants to understand customer segments for targeted campaigns.\n",
        "\n",
        "**Goals:**\n",
        "- Who are our most valuable customers?\n",
        "- What behaviors distinguish different segments?\n",
        "- How can we reduce churn?\n",
        "\n",
        "Let's create customer data and segment it!"
      ],
      "metadata": {
        "id": "YEVWmhJiXO6X"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Create customer dataset\n",
        "np.random.seed(123)\n",
        "\n",
        "n_customers = 500\n",
        "\n",
        "customer_data = pd.DataFrame({\n",
        "    'customer_id': range(1, n_customers + 1),\n",
        "    'age': np.random.randint(18, 70, n_customers),\n",
        "    'subscription_months': np.random.randint(1, 36, n_customers),\n",
        "    'monthly_spend': np.random.uniform(10, 200, n_customers).round(2),\n",
        "    'login_frequency': np.random.randint(1, 30, n_customers),\n",
        "    'support_tickets': np.random.randint(0, 10, n_customers),\n",
        "    'referrals': np.random.randint(0, 5, n_customers),\n",
        "})\n",
        "\n",
        "# Calculate lifetime value\n",
        "customer_data['lifetime_value'] = (\n",
        "    customer_data['subscription_months'] * customer_data['monthly_spend']\n",
        ").round(2)\n",
        "\n",
        "# Create engagement score\n",
        "customer_data['engagement_score'] = (\n",
        "    (customer_data['login_frequency'] * 2) +\n",
        "    (customer_data['referrals'] * 10) -\n",
        "    (customer_data['support_tickets'] * 3)\n",
        ")\n",
        "\n",
        "# Create segments based on lifetime value\n",
        "customer_data['value_segment'] = pd.cut(\n",
        "    customer_data['lifetime_value'],\n",
        "    bins=[0, 500, 2000, 10000],\n",
        "    labels=['Low Value', 'Medium Value', 'High Value']\n",
        ")\n",
        "\n",
        "# Create age groups\n",
        "customer_data['age_group'] = pd.cut(\n",
        "    customer_data['age'],\n",
        "    bins=[0, 25, 35, 50, 100],\n",
        "    labels=['18-25', '26-35', '36-50', '50+']\n",
        ")\n",
        "\n",
        "# Churn prediction (synthetic)\n",
        "customer_data['churn_risk'] = np.where(\n",
        "    (customer_data['engagement_score'] < 20) & (customer_data['subscription_months'] < 6),\n",
        "    'High Risk',\n",
        "    np.where(\n",
        "        (customer_data['engagement_score'] < 40) & (customer_data['subscription_months'] < 12),\n",
        "        'Medium Risk',\n",
        "        'Low Risk'\n",
        "    )\n",
        ")\n",
        "\n",
        "print(\"👥 Customer dataset created!\")\n",
        "print(f\"📊 Total Customers: {len(customer_data):,}\")\n",
        "print(f\"💰 Average Lifetime Value: ${customer_data['lifetime_value'].mean():,.2f}\")\n",
        "print(f\"⚠️ High Churn Risk: {(customer_data['churn_risk'] == 'High Risk').sum()} customers\")\n",
        "print(\"\\n\" + \"=\"*60)\n",
        "customer_data.head(10)"
      ],
      "metadata": {
        "id": "MXp6luH1Xtwh"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Customer segments overview\n",
        "print(\"📊 Customer Segmentation Overview:\\n\")\n",
        "print(\"By Value Segment:\")\n",
        "print(customer_data['value_segment'].value_counts())\n",
        "print(\"\\nBy Churn Risk:\")\n",
        "print(customer_data['churn_risk'].value_counts())\n",
        "print(\"\\nBy Age Group:\")\n",
        "print(customer_data['age_group'].value_counts())"
      ],
      "metadata": {
        "id": "4O0S8FKoXwKA"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Analyze customer segments! 🎯\n",
        "pyg.walk(customer_data, hide_data_source_config=True)"
      ],
      "metadata": {
        "id": "YPWqaw4wXwE4"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🎯 Customer Segmentation Exercises:\n",
        "\n",
        "**Exercise 1: Value Segment Distribution** 💎\n",
        "1. Bar chart: `value_segment` (X) vs `customer_id` count (Y)\n",
        "2. Drag `churn_risk` to **Color**\n",
        "3. Use stacked bars\n",
        "4. **Question**: Are high-value customers at risk of churning?\n",
        "\n",
        "**Exercise 2: Engagement Analysis** 📊\n",
        "1. Scatter plot: `login_frequency` (X) vs `monthly_spend` (Y)\n",
        "2. Add `value_segment` to **Color**\n",
        "3. Add `subscription_months` to **Size**\n",
        "4. **Question**: Do engaged users spend more?\n",
        "\n",
        "**Exercise 3: Age Demographics** 👤\n",
        "1. Bar chart: `age_group` (X) vs `lifetime_value` average (Y)\n",
        "2. Which age group has highest LTV?\n",
        "3. Add filter: show only \"High Value\" customers\n",
        "4. **Insight**: Target similar demographics in marketing!\n",
        "\n",
        "**Exercise 4: Churn Risk Factors** ⚠️\n",
        "1. Box plot or violin plot: `churn_risk` (X) vs `engagement_score` (Y)\n",
        "2. Add another view: `churn_risk` vs `support_tickets`\n",
        "3. **Question**: What predicts churn? Low engagement? Many issues?\n",
        "\n",
        "**Exercise 5: Referral Champions** 🏆\n",
        "1. Filter: `referrals` >= 2\n",
        "2. Scatter: `subscription_months` vs `lifetime_value`\n",
        "3. Color by `age_group`\n",
        "4. **Insight**: Identify your brand advocates for referral programs!\n",
        "\n",
        "### 💡 Marketing Actions Based on Insights:\n",
        "\n",
        "**High-Value + High Churn Risk** 🚨\n",
        "- Immediate personal outreach\n",
        "- Exclusive perks or discounts\n",
        "- Address support issues proactively\n",
        "\n",
        "**Medium Value + High Engagement** 🌟\n",
        "- Upsell opportunities\n",
        "- Premium features\n",
        "- Referral incentives\n",
        "\n",
        "**Low Value + Young Demographic** 🎯\n",
        "- Growth potential\n",
        "- Educational content\n",
        "- Community building\n",
        "\n",
        "**Low Engagement + Any Value** 📧\n",
        "- Re-engagement campaigns\n",
        "- Product education\n",
        "- Feature highlights\n",
        "\n",
        "**Real-world impact**: Campaigns targeted with segmentation like this routinely outperform one-size-fits-all marketing on ROI! 🎯"
      ],
      "metadata": {
        "id": "7Z_0TzAQXO4Z"
      }
    },
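    {
      "cell_type": "markdown",
      "source": [
        "**Cross-check for Exercise 3:** average lifetime value per age group, straight from pandas (a sketch on `customer_data`; `observed=True` skips empty categorical bins):"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Average lifetime value per age group - mirrors the Exercise 3 bar chart\n",
        "customer_data.groupby('age_group', observed=True)['lifetime_value'].mean().round(2)"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },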
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "## 🔍 Use Case 3: Data Quality Checks\n",
        "\n",
        "**Scenario:** You've received a new dataset from a vendor. Before analysis, you need to check data quality!\n",
        "\n",
        "**What to look for:**\n",
        "- Missing values\n",
        "- Outliers and anomalies\n",
        "- Inconsistent data\n",
        "- Distribution issues\n",
        "\n",
        "PyGWalker is PERFECT for visual data quality checks! 🔍"
      ],
      "metadata": {
        "id": "HbjhBrpfXOzG"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Create a \"messy\" dataset for quality checking\n",
        "np.random.seed(456)\n",
        "\n",
        "n_records = 300\n",
        "\n",
        "messy_data = pd.DataFrame({\n",
        "    'transaction_id': range(1, n_records + 1),\n",
        "    'amount': np.random.uniform(10, 1000, n_records),\n",
        "    'category': np.random.choice(['Electronics', 'Clothing', 'Food', 'Other', None], n_records),\n",
        "    'quantity': np.random.randint(1, 20, n_records),\n",
        "    'customer_age': np.random.randint(15, 80, n_records),\n",
        "    'rating': np.random.choice([1, 2, 3, 4, 5, None], n_records),\n",
        "})\n",
        "\n",
        "# Introduce data quality issues\n",
        "\n",
        "# 1. Extra missing values (~10% of rows in category, ~15% in rating, on top of any Nones sampled above)\n",
        "messy_data.loc[np.random.choice(messy_data.index, 30), 'category'] = None\n",
        "messy_data.loc[np.random.choice(messy_data.index, 45), 'rating'] = None\n",
        "\n",
        "# 2. Outliers in amount (some crazy high values)\n",
        "messy_data.loc[np.random.choice(messy_data.index, 5), 'amount'] = np.random.uniform(5000, 10000, 5)\n",
        "\n",
        "# 3. Impossible values (negative amounts, ages > 100)\n",
        "messy_data.loc[np.random.choice(messy_data.index, 3), 'amount'] = -np.random.uniform(10, 100, 3)\n",
        "messy_data.loc[np.random.choice(messy_data.index, 4), 'customer_age'] = np.random.randint(101, 150, 4)\n",
        "\n",
        "# 4. Duplicates\n",
        "duplicate_rows = messy_data.sample(10)\n",
        "messy_data = pd.concat([messy_data, duplicate_rows], ignore_index=True)\n",
        "\n",
        "print(\"🔍 Messy dataset created (intentionally flawed!):\")\n",
        "print(f\"📊 Total Records: {len(messy_data):,}\")\n",
        "print(f\"❌ Missing values: {messy_data.isnull().sum().sum()}\")\n",
        "print(f\"🔄 Duplicate rows: {messy_data.duplicated().sum()}\")\n",
        "print(f\"⚠️ Negative amounts: {(messy_data['amount'] < 0).sum()}\")\n",
        "print(f\"⚠️ Invalid ages: {(messy_data['customer_age'] > 100).sum()}\")\n",
        "print(\"\\n\" + \"=\"*60)\n",
        "messy_data.head(15)"
      ],
      "metadata": {
        "id": "NUhBKRU0YAAO"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Check missing values\n",
        "print(\"📊 Missing Values Report:\\n\")\n",
        "missing_report = pd.DataFrame({\n",
        "    'Column': messy_data.columns,\n",
        "    'Missing': messy_data.isnull().sum(),\n",
        "    'Percentage': (messy_data.isnull().sum() / len(messy_data) * 100).round(2)\n",
        "})\n",
        "print(missing_report)"
      ],
      "metadata": {
        "id": "-ieJA2eiX_3r"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Visual data quality check! 🔍\n",
        "pyg.walk(messy_data, hide_data_source_config=True)"
      ],
      "metadata": {
        "id": "2RHUe7TnX_lt"
      },
      "execution_count": null,
      "outputs": []
    },
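    {
      "cell_type": "markdown",
      "source": [
        "**Optional helper for the exercises below:** Exercises 2 and 5 ask for `is_missing_category` and `has_issues` flag columns. Here's one way to build them in pandas (column names taken from the exercise text; `assign` returns a copy, so `messy_data` itself is untouched):"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Flag missing categories and impossible values, then inspect the flagged rows\n",
        "flagged = messy_data.assign(\n",
        "    is_missing_category=messy_data['category'].isna(),\n",
        "    has_issues=(messy_data['customer_age'] > 100) | (messy_data['amount'] < 0),\n",
        ")\n",
        "flagged[flagged['has_issues']].head()"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },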
    {
      "cell_type": "markdown",
      "source": [
        "### 🎯 Data Quality Check Exercises:\n",
        "\n",
        "**Exercise 1: Spot Outliers in Amount** 💰\n",
        "1. **Histogram**: Drag `amount` to X-axis\n",
        "2. **Question**: See those bars way out on the right? Those are outliers!\n",
        "3. **Box plot**: Change chart type to see whiskers and outliers clearly\n",
        "4. **Action**: Investigate transactions > $5,000\n",
        "\n",
        "**Exercise 2: Find Missing Values Patterns** ❌\n",
        "1. Create a calculated field: `is_missing_category` (or filter by null)\n",
        "2. Compare missing vs non-missing records\n",
        "3. **Question**: Is missingness random or systematic?\n",
        "4. Bar chart: `category` counts - see how many nulls exist\n",
        "\n",
        "**Exercise 3: Detect Impossible Values** 🚨\n",
        "1. **Scatter plot**: `transaction_id` (X) vs `customer_age` (Y)\n",
        "2. **Question**: See any points above 100? Those are errors!\n",
        "3. Repeat for `amount` - look for negative values\n",
        "4. **Action**: Create filters to isolate problematic records\n",
        "\n",
        "**Exercise 4: Check Distributions** 📊\n",
        "1. **Histogram**: `rating` distribution\n",
        "2. **Question**: Is the distribution reasonable? Too many nulls?\n",
        "3. Compare across `category`\n",
        "4. **Insight**: Some categories might have more missing ratings\n",
        "\n",
        "**Exercise 5: Identify Patterns in Quality Issues** 🔍\n",
        "1. Create a flag: `has_issues` = TRUE if (age > 100 OR amount < 0)\n",
        "2. Analyze: Do issues cluster in certain categories?\n",
        "3. **Insight**: Data quality issues might be systematic!\n",
        "\n",
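        "Exercise 5's flag can be computed in pandas before launching the walker — a minimal sketch on a tiny stand-in frame (swap `demo` for `messy_data` in your notebook):\n",
        "\n",
        "```python\n",
        "import pandas as pd\n",
        "\n",
        "# Tiny stand-in for messy_data (hypothetical values)\n",
        "demo = pd.DataFrame({'customer_age': [34, 150, 28], 'amount': [19.99, 50.0, -5.0]})\n",
        "\n",
        "# Flag records with impossible values, then color/filter by the flag in PyGWalker\n",
        "demo['has_issues'] = (demo['customer_age'] > 100) | (demo['amount'] < 0)\n",
        "print(demo['has_issues'].sum())  # 2 flagged records\n",
        "```\n",
        "\n",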
        "### 🛠️ Data Cleaning Actions:\n",
        "\n",
        "After visual inspection, here's what to fix:"
      ],
      "metadata": {
        "id": "JvlvfINRXOwz"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Clean the messy data based on insights\n",
        "messy_data_cleaned = messy_data.copy()\n",
        "\n",
        "# 1. Remove duplicates\n",
        "messy_data_cleaned = messy_data_cleaned.drop_duplicates()\n",
        "\n",
        "# 2. Fix impossible values\n",
        "messy_data_cleaned = messy_data_cleaned[\n",
        "    (messy_data_cleaned['amount'] >= 0) &\n",
        "    (messy_data_cleaned['customer_age'] <= 100)\n",
        "]\n",
        "\n",
        "# 3. Handle outliers (cap at 99th percentile)\n",
        "amount_99th = messy_data_cleaned['amount'].quantile(0.99)\n",
        "messy_data_cleaned.loc[messy_data_cleaned['amount'] > amount_99th, 'amount'] = amount_99th\n",
        "\n",
        "# 4. Fill missing categories\n",
        "messy_data_cleaned['category'] = messy_data_cleaned['category'].fillna('Unknown')\n",
        "\n",
        "print(\"✅ Data cleaned!\")\n",
        "print(f\"📊 Records before: {len(messy_data):,} → after: {len(messy_data_cleaned):,}\")\n",
        "print(f\"✨ Removed: {len(messy_data) - len(messy_data_cleaned):,} problematic records\")\n",
        "print(f\"❌ Missing values: {messy_data_cleaned.isnull().sum().sum()}\")"
      ],
      "metadata": {
        "id": "jkQX9wTTYI9a"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Compare before and after! 📊\n",
        "print(\"Let's visualize the cleaned data:\")\n",
        "pyg.walk(messy_data_cleaned, hide_data_source_config=True)"
      ],
      "metadata": {
        "id": "cNoZ1HABYLtU"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 💡 Data Quality Best Practices:\n",
        "\n",
        "**Always Check Before Analysis:** ✅\n",
        "- 📊 **Distributions**: Histograms reveal outliers and skewness\n",
        "- ❌ **Missing values**: Identify patterns, not just counts\n",
        "- 🔢 **Range checks**: Min/max should make business sense\n",
        "- 🔄 **Duplicates**: Visual patterns can reveal duplicate records\n",
        "- 📈 **Trends**: Unexpected spikes might indicate data issues\n",
        "\n",
        "**PyGWalker for QA:**\n",
        "- ⚡ Faster than writing multiple plotting commands\n",
        "- 👁️ Interactive exploration helps spot subtle issues\n",
        "- 🎯 Visual patterns are easier to spot than statistics\n",
        "- 📸 Export problematic charts for documentation\n",
        "\n",
        "**Real-world impact**: Catching data quality issues early saves hours (or days!) of debugging later! 🎯"
      ],
      "metadata": {
        "id": "rGGp9XuxYOR6"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "## 🧪 Use Case 4: A/B Testing Results\n",
        "\n",
        "**Scenario:** Your product team ran an A/B test on a new feature. You need to analyze if variant B performs better than variant A.\n",
        "\n",
        "**Metrics to compare:**\n",
        "- Conversion rate\n",
        "- Average order value\n",
        "- User engagement\n",
        "- Statistical significance\n",
        "\n",
        "Let's analyze test results! 🧪"
      ],
      "metadata": {
        "id": "HWwPcZDmYOLT"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Create A/B test dataset\n",
        "np.random.seed(789)\n",
        "\n",
        "n_users = 1000\n",
        "\n",
        "# Variant B performs slightly better (simulate this)\n",
        "variant_a_users = n_users // 2\n",
        "variant_b_users = n_users - variant_a_users\n",
        "\n",
        "ab_test_data = pd.DataFrame({\n",
        "    'user_id': range(1, n_users + 1),\n",
        "    'variant': ['A'] * variant_a_users + ['B'] * variant_b_users,\n",
        "    'converted': (\n",
        "        list(np.random.choice([0, 1], variant_a_users, p=[0.75, 0.25])) +  # A: 25% conversion\n",
        "        list(np.random.choice([0, 1], variant_b_users, p=[0.65, 0.35]))    # B: 35% conversion\n",
        "    ),\n",
        "    'time_on_page': np.concatenate([\n",
        "        np.random.uniform(30, 180, variant_a_users),   # A: average 105 seconds\n",
        "        np.random.uniform(45, 210, variant_b_users)     # B: average 127.5 seconds\n",
        "    ]),\n",
        "    'pages_viewed': np.concatenate([\n",
        "        np.random.randint(1, 8, variant_a_users),      # A: fewer pages\n",
        "        np.random.randint(2, 10, variant_b_users)       # B: more pages\n",
        "    ]),\n",
        "})\n",
        "\n",
        "# Add order value (only for converted users)\n",
        "ab_test_data['order_value'] = 0.0  # float dtype, so assigning order values won't trigger upcast warnings\n",
        "ab_test_data.loc[ab_test_data['converted'] == 1, 'order_value'] = np.random.uniform(20, 200, ab_test_data['converted'].sum())\n",
        "\n",
        "# Round numeric columns\n",
        "ab_test_data['time_on_page'] = ab_test_data['time_on_page'].round(1)\n",
        "ab_test_data['order_value'] = ab_test_data['order_value'].round(2)\n",
        "\n",
        "# Add day of test\n",
        "ab_test_data['test_day'] = np.random.randint(1, 15, n_users)\n",
        "\n",
        "print(\"🧪 A/B Test dataset created!\")\n",
        "print(f\"👥 Total Users: {len(ab_test_data):,}\")\n",
        "print(f\"📊 Variant A: {variant_a_users:,} users\")\n",
        "print(f\"📊 Variant B: {variant_b_users:,} users\")\n",
        "print(\"\\n\" + \"=\"*60)\n",
        "ab_test_data.head(10)"
      ],
      "metadata": {
        "id": "JKH_MPhIWIYo"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Quick A/B test summary\n",
        "print(\"📊 A/B Test Results Summary:\\n\")\n",
        "summary = ab_test_data.groupby('variant').agg({\n",
        "    'converted': ['sum', 'mean'],\n",
        "    'time_on_page': 'mean',\n",
        "    'pages_viewed': 'mean',\n",
        "    'order_value': 'mean'\n",
        "}).round(3)\n",
        "\n",
        "summary.columns = ['Total Conversions', 'Conversion Rate', 'Avg Time (sec)', 'Avg Pages', 'Avg Order Value']\n",
        "print(summary)\n",
        "\n",
        "# Calculate lift\n",
        "conv_rate_a = ab_test_data[ab_test_data['variant'] == 'A']['converted'].mean()\n",
        "conv_rate_b = ab_test_data[ab_test_data['variant'] == 'B']['converted'].mean()\n",
        "lift = ((conv_rate_b - conv_rate_a) / conv_rate_a * 100)\n",
        "\n",
        "print(f\"\\n🚀 Lift (B vs A): {lift:.1f}%\")\n",
        "print(f\"{'🎉 Variant B wins!' if lift > 0 else '📉 Variant A is better'}\")"
      ],
      "metadata": {
        "id": "2VAgL48hWIWi"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Analyze the A/B test! 🎯\n",
        "pyg.walk(ab_test_data, hide_data_source_config=True)"
      ],
      "metadata": {
        "id": "8rxTHSADVZZc"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🎯 A/B Testing Analysis Exercises:\n",
        "\n",
        "**Exercise 1: Conversion Rate Comparison** 📊\n",
        "1. Bar chart: `variant` (X) vs `converted` (Y, mean aggregation)\n",
        "2. **Question**: Which variant has higher conversion rate?\n",
        "3. Change to actual counts (sum) to see volume\n",
        "4. **Insight**: B converts at ~35% vs A at ~25% = 40% lift! 🚀\n",
        "\n",
        "**Exercise 2: Engagement Metrics** ⏱️\n",
        "1. Box plot: `variant` (X) vs `time_on_page` (Y)\n",
        "2. See the distribution differences\n",
        "3. Repeat for `pages_viewed`\n",
        "4. **Question**: Is B more engaging overall?\n",
        "\n",
        "**Exercise 3: Order Value Analysis** 💰\n",
        "1. Filter: `converted` = 1 (only converted users)\n",
        "2. Box plot: `variant` vs `order_value`\n",
        "3. **Question**: Do B users spend more per order?\n",
        "4. **Important**: If the distributions look similar, B's extra revenue comes from MORE conversions, not bigger orders\n",
        "\n",
        "**Exercise 4: Time-Series Check** 📅\n",
        "1. Line chart: `test_day` (X) vs `converted` mean (Y)\n",
        "2. Color by `variant`\n",
        "3. **Question**: Is performance consistent across days?\n",
        "4. **Watch for**: Novelty effects or contamination\n",
        "\n",
        "**Exercise 5: Segment Analysis** 🎯\n",
        "1. Create bins for `time_on_page` (low, medium, high engagement)\n",
        "2. Compare conversion by engagement level and variant\n",
        "3. **Question**: Does B work better for certain user types?\n",
        "4. **Advanced**: Look for interaction effects\n",
        "\n",
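        "Exercise 5's engagement bins can be added with `pd.cut` before launching the walker — a sketch with illustrative cut points (not prescribed by the tutorial):\n",
        "\n",
        "```python\n",
        "import pandas as pd\n",
        "\n",
        "# Illustrative thresholds: <=90s low, <=150s medium, else high\n",
        "times = pd.Series([40.0, 95.0, 200.0, 150.0, 60.0])\n",
        "bins = pd.cut(times, bins=[0, 90, 150, float('inf')], labels=['low', 'medium', 'high'])\n",
        "print(bins.tolist())  # ['low', 'medium', 'high', 'medium', 'low']\n",
        "```\n",
        "\n",
        "Apply the same `pd.cut` to `ab_test_data['time_on_page']` to create an engagement column, then drag it into PyGWalker.\n",
        "\n",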
        "### 📈 Statistical Considerations:\n",
        "\n",
        "**What PyGWalker Shows:**\n",
        "- ✅ Descriptive statistics visually\n",
        "- ✅ Distribution shapes and outliers\n",
        "- ✅ Trends over time\n",
        "- ✅ Segment-level differences\n",
        "\n",
        "**What You Still Need:**\n",
        "- 📊 Statistical significance tests (t-test, chi-square)\n",
        "- 📊 Confidence intervals\n",
        "- 📊 Power analysis\n",
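        "\n",
        "A 95% confidence interval for the lift takes only a few lines — a sketch using the normal approximation (the counts below are illustrative; substitute your own from `ab_test_data`):\n",
        "\n",
        "```python\n",
        "import math\n",
        "\n",
        "# Illustrative counts: conversions / users per variant\n",
        "conv_a, n_a = 125, 500\n",
        "conv_b, n_b = 175, 500\n",
        "\n",
        "p_a, p_b = conv_a / n_a, conv_b / n_b\n",
        "se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)\n",
        "diff = p_b - p_a\n",
        "lo, hi = diff - 1.96 * se, diff + 1.96 * se\n",
        "print(f\"Difference: {diff:.3f}, 95% CI: [{lo:.3f}, {hi:.3f}]\")\n",
        "```\n",
        "\n",
        "If the interval excludes zero, the result agrees with a significant chi-square test at the 5% level.\n",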
        "\n",
        "**Pro tip:** Use PyGWalker for exploratory analysis, then confirm with statistical tests in Python!"
      ],
      "metadata": {
        "id": "D5ihKMAQYWit"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Quick statistical test (bonus!)\n",
        "from scipy import stats\n",
        "\n",
        "# Chi-square test for conversion rate\n",
        "contingency_table = pd.crosstab(ab_test_data['variant'], ab_test_data['converted'])\n",
        "chi2, p_value, dof, expected = stats.chi2_contingency(contingency_table)\n",
        "\n",
        "print(\"📊 Statistical Significance Test (Chi-Square):\")\n",
        "print(\"=\"*60)\n",
        "print(f\"Chi-square statistic: {chi2:.4f}\")\n",
        "print(f\"P-value: {p_value:.4f}\")\n",
        "print(f\"\\n{'✅ Statistically significant (p < 0.05)!' if p_value < 0.05 else '❌ Not statistically significant (p >= 0.05)'}\")\n",
        "print(\"\\nConclusion:\")\n",
        "if p_value < 0.05 and conv_rate_b > conv_rate_a:\n",
        "    print(\"🎉 Variant B is significantly better! Ship it! 🚀\")\n",
        "elif p_value < 0.05:\n",
        "    print(\"📉 Variant A is significantly better. Keep the original.\")\n",
        "else:\n",
        "    print(\"🤷 No significant difference. Need more data or run longer.\")"
      ],
      "metadata": {
        "id": "mPDCkrIZYaUh"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 💡 A/B Testing Best Practices:\n",
        "\n",
        "**Before the Test:** 📋\n",
        "- Define success metrics clearly\n",
        "- Calculate required sample size\n",
        "- Ensure random assignment\n",
        "- Set test duration\n",
        "\n",
        "**During Analysis with PyGWalker:** 🔍\n",
        "- ✅ Check for outliers (can skew results)\n",
        "- ✅ Verify randomization (variants should look similar demographically)\n",
        "- ✅ Look for time-based patterns (novelty effects)\n",
        "- ✅ Segment analysis (does it work for everyone?)\n",
        "\n",
        "**Making the Decision:** ✅\n",
        "1. Visual exploration (PyGWalker) ✨\n",
        "2. Statistical tests (scipy/statsmodels)\n",
        "3. Business context (cost, feasibility)\n",
        "4. Segment analysis (any negative impacts?)\n",
        "\n",
        "**Real-world impact**: Proper A/B analysis can increase revenue by 10-30% through optimized features! 📈\n",
        "\n",
        "---\n",
        "\n",
        "**You've completed all 4 real-world use cases!** 🎊\n",
        "- ✅ Sales analysis for business insights\n",
        "- ✅ Customer segmentation for targeted marketing\n",
        "- ✅ Data quality checks for reliable analysis\n",
        "- ✅ A/B testing for product decisions\n",
        "\n",
        "Next up: Best practices and pro tips! 🚀"
      ],
      "metadata": {
        "id": "0_G_c7Q-YWf6"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "# 💎 Best Practices & Pro Tips\n",
        "\n",
        "You've learned the fundamentals and seen real-world applications. Now let's level up with advanced techniques and best practices! 🚀\n",
        "\n",
        "In this section:\n",
        "- ⚡ Performance optimization for large datasets\n",
        "- 🎯 Workflow recommendations\n",
        "- 🐛 Troubleshooting common issues\n",
        "- 🔥 Pro tips from power users\n",
        "- 📚 Additional resources"
      ],
      "metadata": {
        "id": "o21glSewYWdT"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "## ⚡ Performance Optimization\n",
        "\n",
        "PyGWalker is fast, but with large datasets, a few tweaks can make it even faster! ⚡"
      ],
      "metadata": {
        "id": "vRD2t9XEYWae"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 📊 How Big is Too Big?\n",
        "\n",
        "**Performance Guidelines:**\n",
        "\n",
        "| Dataset Size | Performance | Recommendations |\n",
        "|-------------|-------------|-----------------|\n",
        "| < 10K rows | 🟢 Excellent | Use as-is, no optimization needed |\n",
        "| 10K - 100K | 🟡 Good | Consider sampling for exploration |\n",
        "| 100K - 1M | 🟠 Moderate | Use sampling + kernel calc |\n",
        "| > 1M rows | 🔴 Slow | Aggregate first or use database |\n",
        "\n",
        "**Rule of thumb**: If your DataFrame takes >2 seconds to display, it's time to optimize!"
      ],
      "metadata": {
        "id": "VjoA9lqPYWXo"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🎯 Optimization Technique 1: Smart Sampling\n",
        "\n",
        "For initial exploration, you don't always need ALL the data!"
      ],
      "metadata": {
        "id": "h0AELWmMYWUu"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Create a large dataset for demonstration\n",
        "import pandas as pd\n",
        "import numpy as np\n",
        "\n",
        "np.random.seed(42)\n",
        "large_dataset = pd.DataFrame({\n",
        "    'date': pd.date_range('2020-01-01', periods=500000, freq='min'),\n",
        "    'user_id': np.random.randint(1, 10000, 500000),\n",
        "    'event_type': np.random.choice(['click', 'view', 'purchase', 'cart'], 500000),\n",
        "    'value': np.random.uniform(0, 100, 500000),\n",
        "    'session_duration': np.random.randint(10, 3600, 500000)\n",
        "})\n",
        "\n",
        "print(f\"📊 Large dataset created: {len(large_dataset):,} rows\")\n",
        "print(f\"💾 Memory usage: {large_dataset.memory_usage(deep=True).sum() / 1024**2:.2f} MB\")"
      ],
      "metadata": {
        "id": "eUoSWJDVYVjy"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# ❌ Bad Practice: Using entire large dataset\n",
        "# pyg.walk(large_dataset)  # This will be slow!\n",
        "\n",
        "# ✅ Good Practice: Sample for exploration\n",
        "sample_size = 10000\n",
        "df_sample = large_dataset.sample(n=sample_size, random_state=42)\n",
        "\n",
        "print(f\"✅ Sampled {sample_size:,} rows for exploration\")\n",
        "print(f\"📊 That's {(sample_size/len(large_dataset)*100):.1f}% of the data\")\n",
        "print(f\"⚡ Speed improvement: ~{len(large_dataset)//sample_size}x faster!\")"
      ],
      "metadata": {
        "id": "rOhbmg58YwDY"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Fast exploration with sampled data\n",
        "pyg.walk(df_sample, hide_data_source_config=True, kernel_computation=True)"
      ],
      "metadata": {
        "id": "Jzb1Q91fY0YI"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🎯 Optimization Technique 2: Pre-Aggregation\n",
        "\n",
        "If you're analyzing trends, aggregate BEFORE visualizing!"
      ],
      "metadata": {
        "id": "heforJCCZANf"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# ❌ Bad Practice: Visualizing 500K raw records for time trends\n",
        "\n",
        "# ✅ Good Practice: Aggregate first!\n",
        "daily_summary = large_dataset.groupby([\n",
        "    large_dataset['date'].dt.date,\n",
        "    'event_type'\n",
        "]).agg({\n",
        "    'user_id': 'nunique',  # Unique users\n",
        "    'value': ['sum', 'mean'],\n",
        "    'session_duration': 'mean'\n",
        "}).reset_index()\n",
        "\n",
        "daily_summary.columns = ['date', 'event_type', 'unique_users', 'total_value', 'avg_value', 'avg_duration']\n",
        "\n",
        "print(f\"✅ Aggregated from {len(large_dataset):,} → {len(daily_summary):,} rows\")\n",
        "print(f\"⚡ That's a {len(large_dataset)//len(daily_summary)}x reduction!\")\n",
        "print(\"\\nNow this will be lightning fast! ⚡\")\n",
        "daily_summary.head()"
      ],
      "metadata": {
        "id": "xoJbqYHVY_8j"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# Super fast visualization of aggregated data\n",
        "pyg.walk(daily_summary, hide_data_source_config=True, kernel_computation=True)"
      ],
      "metadata": {
        "id": "0LclEJoxY154"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🎯 Optimization Technique 3: Data Type Optimization\n",
        "\n",
        "Smaller data types = less memory = faster performance!"
      ],
      "metadata": {
        "id": "geREAPlvZEau"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Check current memory usage\n",
        "print(\"📊 Memory Usage by Column (BEFORE optimization):\")\n",
        "print(\"=\"*60)\n",
        "memory_before = large_dataset.memory_usage(deep=True)\n",
        "print(memory_before)\n",
        "print(f\"\\n💾 Total: {memory_before.sum() / 1024**2:.2f} MB\")"
      ],
      "metadata": {
        "id": "kWNq2eRUZHx5"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# ✅ Optimize data types\n",
        "large_dataset_optimized = large_dataset.copy()\n",
        "\n",
        "# Convert object to category (huge savings!)\n",
        "large_dataset_optimized['event_type'] = large_dataset_optimized['event_type'].astype('category')\n",
        "\n",
        "# Use smaller int types\n",
        "large_dataset_optimized['user_id'] = large_dataset_optimized['user_id'].astype('int32')\n",
        "large_dataset_optimized['session_duration'] = large_dataset_optimized['session_duration'].astype('int16')\n",
        "\n",
        "# Use float32 instead of float64\n",
        "large_dataset_optimized['value'] = large_dataset_optimized['value'].astype('float32')\n",
        "\n",
        "print(\"📊 Memory Usage by Column (AFTER optimization):\")\n",
        "print(\"=\"*60)\n",
        "memory_after = large_dataset_optimized.memory_usage(deep=True)\n",
        "print(memory_after)\n",
        "print(f\"\\n💾 Total: {memory_after.sum() / 1024**2:.2f} MB\")\n",
        "\n",
        "savings = (1 - memory_after.sum() / memory_before.sum()) * 100\n",
        "print(f\"\\n🎉 Memory savings: {savings:.1f}%!\")"
      ],
      "metadata": {
        "id": "Y4c7Xwt8ZHr3"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🎯 Optimization Technique 4: Use Kernel Calculation\n",
        "\n",
        "PyGWalker can offload calculations to your Python kernel for better performance!"
      ],
      "metadata": {
        "id": "rS-YrUVqZEWI"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# ✅ Best Practice: Enable kernel calculations\n",
        "pyg.walk(\n",
        "    df_sample,\n",
        "    kernel_computation=True,  # 🔥 This is the magic parameter!\n",
        "    hide_data_source_config=True\n",
        ")"
      ],
      "metadata": {
        "id": "ks0Q9GeoZWON"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 📋 Performance Optimization Checklist\n",
        "\n",
        "Before using PyGWalker on large datasets:\n",
        "\n",
        "✅ **Step 1**: Check dataset size\n",
        "- If > 100K rows, consider optimization\n",
        "\n",
        "✅ **Step 2**: Sample for exploration\n",
        "- Use `.sample()` for initial analysis\n",
        "- 10K-50K rows is usually plenty\n",
        "\n",
        "✅ **Step 3**: Aggregate when possible\n",
        "- Daily/weekly summaries for time-series\n",
        "- Group by categories for comparisons\n",
        "\n",
        "✅ **Step 4**: Optimize data types\n",
        "- Use `category` for text with few unique values\n",
        "- Use smaller numeric types (int32, float32)\n",
        "\n",
        "✅ **Step 5**: Enable kernel calculation\n",
        "- Set `kernel_computation=True`\n",
        "\n",
        "✅ **Step 6**: Clean up first\n",
        "- Remove unnecessary columns\n",
        "- Drop duplicates\n",
        "- Handle missing values\n",
        "\n",
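        "For step 2, a stratified sample preserves category proportions better than a plain `.sample()` when some groups are rare — a sketch on synthetic data (column names are illustrative):\n",
        "\n",
        "```python\n",
        "import numpy as np\n",
        "import pandas as pd\n",
        "\n",
        "rng = np.random.default_rng(0)\n",
        "df = pd.DataFrame({\n",
        "    'event_type': rng.choice(['click', 'view', 'purchase'], 1000, p=[0.6, 0.35, 0.05]),\n",
        "    'value': rng.uniform(0, 100, 1000),\n",
        "})\n",
        "\n",
        "# Sample 10% within each category so rare event types stay represented\n",
        "stratified = df.groupby('event_type').sample(frac=0.1, random_state=42)\n",
        "print(len(stratified))  # ~100 rows, proportions preserved\n",
        "```\n",
        "\n",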
        "**Pro tip**: Profile your code with `%%time` to measure improvements! ⏱️"
      ],
      "metadata": {
        "id": "3izOiLxtZaUC"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "## 🎯 Workflow Best Practices\n",
        "\n",
        "How to integrate PyGWalker into your data science workflow efficiently!"
      ],
      "metadata": {
        "id": "VdQC3qpPZaMe"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 📊 The Recommended Workflow\n",
        "\n",
        "**Phase 1: Initial Exploration** 🔍"
      ],
      "metadata": {
        "id": "vcuncb6MZarx"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# 1. Load your data\n",
        "df = pd.read_csv(\"your_data.csv\")\n",
        "\n",
        "# 2. Quick overview\n",
        "print(f\"Shape: {df.shape}\")\n",
        "print(f\"\\nData types:\\n{df.dtypes}\")\n",
        "print(f\"\\nMissing values:\\n{df.isnull().sum()}\")\n",
        "\n",
        "# 3. Basic statistics\n",
        "df.describe()"
      ],
      "metadata": {
        "id": "PweX8TJPZgSl"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "# 4. PyGWalker for visual exploration ⭐\n",
        "# Spend 10-15 minutes exploring interactively\n",
        "pyg.walk(df, hide_data_source_config=True)"
      ],
      "metadata": {
        "id": "UIdswBaBZnKQ"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "**Phase 2: Deep Dive Analysis** 🎯\n",
        "\n",
        "After initial exploration, you'll have questions. Answer them systematically:"
      ],
      "metadata": {
        "id": "tmHcHf5-ZEQ0"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Example: Based on PyGWalker exploration, you noticed something interesting\n",
        "# Now create focused analysis\n",
        "\n",
        "# 5. Clean and prepare data based on insights\n",
        "df_clean = df.dropna(subset=['important_column'])\n",
        "df_clean = df_clean[df_clean['value'] > 0]\n",
        "\n",
        "# 6. Create calculated fields you identified as useful\n",
        "df_clean['new_metric'] = df_clean['a'] / df_clean['b']\n",
        "\n",
        "# 7. Explore the refined dataset\n",
        "pyg.walk(df_clean, hide_data_source_config=True)"
      ],
      "metadata": {
        "id": "hajxTkkOZt8c"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "**Phase 3: Documentation & Sharing** 📝"
      ],
      "metadata": {
        "id": "Ji12xYTNZyWh"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# 8. Export key visualizations\n",
        "# Use PyGWalker's export button to save charts as PNG/SVG\n",
        "\n",
        "# 9. Document insights in markdown cells\n",
        "\"\"\"\n",
        "Key Findings:\n",
        "- Insight 1: [description]\n",
        "- Insight 2: [description]\n",
        "- Recommendation: [action items]\n",
        "\"\"\"\n",
        "\n",
        "# 10. Create final summary statistics\n",
        "final_summary = df_clean.groupby('category').agg({\n",
        "    'metric1': 'mean',\n",
        "    'metric2': 'sum'\n",
        "})\n",
        "print(final_summary)"
      ],
      "metadata": {
        "id": "JYZXbARtZDmI"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 💡 PyGWalker in Different Workflows\n",
        "\n",
        "**For Data Scientists:** 🧪\n",
        "- ✅ Use PyGWalker for EDA before modeling\n",
        "- ✅ Visualize feature distributions\n",
        "- ✅ Spot outliers that might affect models\n",
        "- ✅ Understand feature relationships\n",
        "\n",
        "**For Analysts:** 📊\n",
        "- ✅ Quick ad-hoc analysis\n",
        "- ✅ Create presentation-ready charts\n",
        "- ✅ Interactive dashboards in notebooks\n",
        "- ✅ Self-service analytics\n",
        "\n",
        "**For Data Engineers:** 🔧\n",
        "- ✅ Data quality validation\n",
        "- ✅ Pipeline monitoring\n",
        "- ✅ Quick sanity checks\n",
        "- ✅ Distribution verification\n",
        "\n",
        "**For Business Users:** 💼\n",
        "- ✅ Explore data without coding (mostly!)\n",
        "- ✅ Answer business questions quickly\n",
        "- ✅ Drag-and-drop simplicity\n",
        "- ✅ Share insights with stakeholders"
      ],
      "metadata": {
        "id": "okiN9oBRZ0T4"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "## 🐛 Troubleshooting & Common Issues\n",
        "\n",
        "Running into problems? Here are solutions to the most common issues! 🔧"
      ],
      "metadata": {
        "id": "Mr1jc3ExZ2Rs"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### ❌ Issue 1: PyGWalker Widget Not Displaying\n",
        "\n",
        "**Symptoms:**\n",
        "- Blank output cell\n",
        "- No interactive interface appears\n",
        "- Just see `<pygwalker.walker.Walker object at 0x...>`\n",
        "\n",
        "**Solutions:**"
      ],
      "metadata": {
        "id": "wImcsGl6Z13G"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# ✅ Solution 1: Make sure you're in a supported environment\n",
        "import sys\n",
        "print(f\"Python version: {sys.version}\")\n",
        "print(f\"Environment: {'Google Colab' if 'google.colab' in sys.modules else 'Other'}\")\n",
        "\n",
        "# ✅ Solution 2: Update PyGWalker to latest version\n",
        "# !pip install --upgrade pygwalker\n",
        "\n",
        "# ✅ Solution 3: Restart runtime and try again\n",
        "# In Colab: Runtime > Restart runtime"
      ],
      "metadata": {
        "id": "q6rx7TZhZ7WR"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### ❌ Issue 2: Slow Performance / Browser Freezing\n",
        "\n",
        "**Symptoms:**\n",
        "- Interface takes forever to load\n",
        "- Browser becomes unresponsive\n",
        "- Lag when dragging fields\n",
        "\n",
        "**Solutions:**"
      ],
      "metadata": {
        "id": "cu06HSfmZ77p"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# ✅ Solution 1: Sample your data (pick ONE solution per run)\n",
        "df_sample = df.sample(min(10000, len(df)), random_state=42)\n",
        "pyg.walk(df_sample)\n",
        "\n",
        "# ✅ Solution 2: Use kernel calculation\n",
        "pyg.walk(df, kernel_computation=True)\n",
        "\n",
        "# ✅ Solution 3: Drop unnecessary columns\n",
        "df_slim = df[['col1', 'col2', 'col3']]  # Only columns you need\n",
        "pyg.walk(df_slim)"
      ],
      "metadata": {
        "id": "WxTxYjg4Z-Dn"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### ❌ Issue 3: Charts Look Wrong / Unexpected Aggregations\n",
        "\n",
        "**Symptoms:**\n",
        "- Numbers don't match expectations\n",
        "- Chart shows \"sum\" when you want \"count\"\n",
        "- Weird groupings\n",
        "\n",
        "**Solutions:**\n",
        "\n",
        "💡 **Understand auto-aggregation:**\n",
        "- When you drag a measure (numeric) to an axis with dimensions, PyGWalker aggregates!\n",
        "- Default is usually **SUM** or **MEAN**\n",
        "- Click on the field in the shelf to change aggregation type\n",
        "\n",
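        "In pandas terms, dragging `converted` onto an axis against `variant` is roughly a groupby — which aggregation is active changes the numbers completely:\n",
        "\n",
        "```python\n",
        "import pandas as pd\n",
        "\n",
        "df = pd.DataFrame({'variant': ['A', 'A', 'B'], 'converted': [1, 0, 1]})\n",
        "print(df.groupby('variant')['converted'].sum())   # totals: A=1, B=1\n",
        "print(df.groupby('variant')['converted'].mean())  # rates: A=0.5, B=1.0\n",
        "```\n",
        "\n",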
        "💡 **Check data types:**\n",
        "- Text fields stored as numbers? Convert them!\n",
        "- Dates recognized as strings? Parse them!"
      ],
      "metadata": {
        "id": "FYYgtxhfZ-1N"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# ✅ Fix data types\n",
        "df['date_column'] = pd.to_datetime(df['date_column'])\n",
        "df['category_column'] = df['category_column'].astype('category')\n",
        "df['numeric_column'] = pd.to_numeric(df['numeric_column'], errors='coerce')"
      ],
      "metadata": {
        "id": "GD58AdYmaBND"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### ❌ Issue 4: Missing Values Causing Problems\n",
        "\n",
        "**Symptoms:**\n",
        "- Filters not working as expected\n",
        "- Aggregations returning NaN\n",
        "- Charts missing data points\n",
        "\n",
        "**Solutions:**"
      ],
      "metadata": {
        "id": "rnlbFBiVaB9E"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# ✅ Option 1: Drop missing values\n",
        "df_clean = df.dropna()\n",
        "\n",
        "# ✅ Option 2: Fill missing values\n",
        "df['column'] = df['column'].fillna(0)  # or mean, median, etc.\n",
        "\n",
        "# ✅ Option 3: Create \"Missing\" category\n",
        "df['column'] = df['column'].fillna('Unknown')"
      ],
      "metadata": {
        "id": "bTWUy7SnaEiG"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### ❌ Issue 5: Can't Export or Save Visualizations\n",
        "\n",
        "**Symptoms:**\n",
        "- Export button not working\n",
        "- Can't save chart configurations\n",
        "\n",
        "**Solutions:**\n",
        "\n",
        "💡 **Export as image:**\n",
        "1. Look for the download/export icon (usually top-right)\n",
        "2. Choose PNG or SVG format\n",
        "3. Save to your local machine\n",
        "\n",
        "💡 **Save configuration:**"
      ],
      "metadata": {
        "id": "6lwlQQ4raB6I"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# ✅ Save your chart setup as JSON\n",
        "pyg.walk(df, spec=\"./my_chart_config.json\")\n",
        "\n",
        "# ✅ Load it later\n",
        "pyg.walk(df, spec=\"./my_chart_config.json\")"
      ],
      "metadata": {
        "id": "Y62voZ8haP_2"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### ❌ Issue 6: Colors/Themes Not Applying\n",
        "\n",
        "**Symptoms:**\n",
        "- Dark mode not working\n",
        "- Custom colors not showing\n",
        "\n",
        "**Solutions:**"
      ],
      "metadata": {
        "id": "H8yLGGvkaB3N"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# ✅ Explicitly set theme\n",
        "pyg.walk(df,appearance='light')  # or 'dark'\n",
        "\n",
        "# ✅ For custom styling, modify after rendering\n",
        "# (Advanced: requires CSS knowledge)"
      ],
      "metadata": {
        "id": "Hzl6HMpqaTMP"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🆘 Still Having Issues?\n",
        "\n",
        "**Debug Checklist:** ✅\n",
        "1. ✅ Updated to latest PyGWalker version?\n",
        "2. ✅ Restarted your notebook kernel?\n",
        "3. ✅ Checked DataFrame has data? (`df.head()`)\n",
        "4. ✅ Verified data types? (`df.dtypes`)\n",
        "5. ✅ Tried with a simple example first?\n",
        "6. ✅ Checked GitHub issues for similar problems?\n",
        "\n",
        "**Get Help:**\n",
        "- 📚 [Official Documentation](https://docs.kanaries.net/pygwalker)\n",
        "- 💬 [Discord Community](https://discord.gg/Z4ngFWXz2U)\n",
        "- 🐛 [GitHub Issues](https://github.com/Kanaries/pygwalker/issues)\n",
        "- 📧 Email support (check docs for contact)"
      ],
      "metadata": {
        "id": "X78hxZNiaByN"
      }
    },
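    {
      "cell_type": "markdown",
      "source": [
        "Items 1, 3, and 4 of the checklist can be verified in one cell (a minimal sketch; assumes `df` is the DataFrame you passed to `pyg.walk`):"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "import pygwalker as pyg\n",
        "\n",
        "print(pyg.__version__)  # 1. running the latest version?\n",
        "print(df.shape)         # 3. DataFrame actually has data?\n",
        "print(df.dtypes)        # 4. data types as expected?"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },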
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "## 🔥 Pro Tips from Power Users\n",
        "\n",
        "Advanced techniques that will make you a PyGWalker master! 🎯"
      ],
      "metadata": {
        "id": "oK_JNYGraV6Q"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 💎 Pro Tip 1: Save & Reuse Chart Configurations\n",
        "\n",
        "Create once, reuse everywhere!"
      ],
      "metadata": {
        "id": "Rlo3mqPxaBu7"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# ✅ Save your perfect chart setup\n",
        "pyg.walk(df, spec=\"./sales_dashboard.json\")\n",
        "\n",
        "# Later, load it with new data (same structure)\n",
        "df_new = pd.read_csv(\"next_month_data.csv\")\n",
        "pyg.walk(df_new, spec=\"./sales_dashboard.json\")\n",
        "\n",
        "# 🎉 Instant dashboard with new data!"
      ],
      "metadata": {
        "id": "GVMookxHaZ9S"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 💎 Pro Tip 2: Combine with Other Libraries\n",
        "\n",
        "PyGWalker plays nicely with the Python ecosystem!"
      ],
      "metadata": {
        "id": "lu7zgyqgacmC"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Example: Use pandas for heavy preprocessing, PyGWalker for visualization\n",
        "import pandas as pd\n",
        "\n",
        "# Complex aggregation in pandas\n",
        "summary = df.groupby(['category', 'region']).agg({\n",
        "    'revenue': ['sum', 'mean'],\n",
        "    'units': 'sum',\n",
        "    'customers': 'nunique'\n",
        "}).reset_index()\n",
        "\n",
        "summary.columns = ['category', 'region', 'total_revenue', 'avg_revenue', 'total_units', 'unique_customers']\n",
        "\n",
        "# Beautiful visualization in PyGWalker\n",
        "pyg.walk(summary)"
      ],
      "metadata": {
        "id": "GlU89glTab1A"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 💎 Pro Tip 3: Use for Jupyter Presentations\n",
        "\n",
        "Create interactive presentations with RISE + PyGWalker!"
      ],
      "metadata": {
        "id": "4qtEbuM6afUC"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Install RISE for slideshows\n",
        "# !pip install RISE\n",
        "\n",
        "# Then use PyGWalker in your slides for interactive demos\n",
        "# Your audience can explore data in real-time! 🎪"
      ],
      "metadata": {
        "id": "q2FImEMYajTs"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 💎 Pro Tip 4: Quick Data Quality Dashboard\n",
        "\n",
        "Create a reusable data quality checker!"
      ],
      "metadata": {
        "id": "4npbJo4ZahQV"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def data_quality_report(df):\n",
        "    \"\"\"\n",
        "    Create a comprehensive data quality report with PyGWalker\n",
        "    \"\"\"\n",
        "    import pandas as pd\n",
        "\n",
        "    # Create quality metrics DataFrame\n",
        "    quality_df = pd.DataFrame({\n",
        "        'column': df.columns,\n",
        "        'dtype': df.dtypes.astype(str),\n",
        "        'missing_count': df.isnull().sum(),\n",
        "        'missing_pct': (df.isnull().sum() / len(df) * 100).round(2),\n",
        "        'unique_values': [df[col].nunique() for col in df.columns],\n",
        "        'sample_value': [str(df[col].iloc[0]) if len(df) > 0 else '' for col in df.columns]\n",
        "    })\n",
        "\n",
        "    print(\"📊 Data Quality Report:\")\n",
        "    print(\"=\"*60)\n",
        "    print(quality_df.to_string())\n",
        "\n",
        "    # Visualize with PyGWalker\n",
        "    return pyg.walk(quality_df, hide_data_source_config=True)\n",
        "\n",
        "# Use it on any DataFrame!\n",
        "# data_quality_report(your_df)"
      ],
      "metadata": {
        "id": "fATUQBZzahDu"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 💎 Pro Tip 5: Create Custom Analysis Templates\n",
        "\n",
        "Build reusable analysis workflows!"
      ],
      "metadata": {
        "id": "v3bgpdVVag6l"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "def customer_analysis(df, customer_col, value_col, date_col):\n",
        "    \"\"\"\n",
        "    Standardized customer analysis with PyGWalker\n",
        "    \"\"\"\n",
        "    # Create summary\n",
        "    summary = df.groupby(customer_col).agg({\n",
        "        value_col: ['sum', 'mean', 'count'],\n",
        "        date_col: ['min', 'max']\n",
        "    }).reset_index()\n",
        "\n",
        "    summary.columns = [customer_col, 'total_value', 'avg_value', 'transactions', 'first_purchase', 'last_purchase']\n",
        "\n",
        "    # Calculate additional metrics\n",
        "    summary['customer_tenure_days'] = (summary['last_purchase'] - summary['first_purchase']).dt.days\n",
        "    summary['value_segment'] = pd.qcut(summary['total_value'], q=3, labels=['Low', 'Medium', 'High'])\n",
        "\n",
        "    return pyg.walk(summary, hide_data_source_config=True)\n",
        "\n",
        "# One function, works with any customer dataset! 🎯"
      ],
      "metadata": {
        "id": "BUIMRRhsagvm"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 💎 Pro Tip 6: Keyboard Shortcuts\n",
        "\n",
        "Speed up your workflow! ⚡\n",
        "\n",
        "**Common shortcuts** (may vary by version):\n",
        "- `Ctrl/Cmd + Z`: Undo last action\n",
        "- `Ctrl/Cmd + C`: Copy chart\n",
        "- `ESC`: Clear selection\n",
        "- Drag field while holding `Shift`: Duplicate field\n",
        "\n",
        "**Pro move**: Hover over buttons for tooltips! 💡"
      ],
      "metadata": {
        "id": "ulIQyAfOagmY"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 💎 Pro Tip 7: Mobile-Friendly Dashboards\n",
        "\n",
        "PyGWalker visualizations work on mobile browsers!"
      ],
      "metadata": {
        "id": "1zgmqk-Waypg"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# ✅ For better mobile experience\n",
        "pyg.walk(df, hide_data_source_config=True)  # Cleaner interface\n",
        "\n",
        "# Share your Colab notebook link with stakeholders\n",
        "# They can view (and interact!) on their phones! 📱"
      ],
      "metadata": {
        "id": "eAUCba7tagYQ"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "## 📊 PyGWalker vs Other Tools: Deep Dive\n",
        "\n",
        "Let's see how PyGWalker compares to popular alternatives:"
      ],
      "metadata": {
        "id": "4B_sJR-Oaf6u"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🆚 PyGWalker vs Matplotlib/Seaborn\n",
        "\n",
        "**Matplotlib/Seaborn:**"
      ],
      "metadata": {
        "id": "bjcrNLUzaBl7"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Traditional approach - multiple lines of code\n",
        "import matplotlib.pyplot as plt\n",
        "import seaborn as sns\n",
        "\n",
        "fig, axes = plt.subplots(2, 2, figsize=(12, 10))\n",
        "\n",
        "# Plot 1: Scatter\n",
        "axes[0, 0].scatter(df['x'], df['y'])\n",
        "axes[0, 0].set_title('X vs Y')\n",
        "\n",
        "# Plot 2: Histogram\n",
        "axes[0, 1].hist(df['value'], bins=20)\n",
        "axes[0, 1].set_title('Value Distribution')\n",
        "\n",
        "# Plot 3: Box plot\n",
        "sns.boxplot(data=df, x='category', y='value', ax=axes[1, 0])\n",
        "axes[1, 0].set_title('Value by Category')\n",
        "\n",
        "# Plot 4: Line chart\n",
        "df.groupby('date')['value'].mean().plot(ax=axes[1, 1])\n",
        "axes[1, 1].set_title('Trend Over Time')\n",
        "\n",
        "plt.tight_layout()\n",
        "plt.show()"
      ],
      "metadata": {
        "id": "rM1wcyyla52X"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "**PyGWalker approach:**"
      ],
      "metadata": {
        "id": "5XDtyKnSa3MR"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# One line! 🎉\n",
        "pyg.walk(df)\n",
        "# Then drag and drop to create all 4 visualizations interactively!"
      ],
      "metadata": {
        "id": "9WvI9RkkbOu-"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "**Verdict:** 🏆\n",
        "\n",
        "| Aspect | Matplotlib/Seaborn | PyGWalker |\n",
        "|--------|-------------------|-----------|\n",
        "| **Code Required** | Many lines | 1 line |\n",
        "| **Flexibility** | 🟢 Extreme | 🟡 High |\n",
        "| **Speed (to insight)** | 🔴 Slow | 🟢 Fast |\n",
        "| **Interactivity** | 🔴 None | 🟢 Full |\n",
        "| **Learning Curve** | 🔴 Steep | 🟢 Easy |\n",
        "| **Publication Quality** | 🟢 Excellent | 🟡 Good |\n",
        "| **Best For** | Final charts | Exploration |\n",
        "\n",
        "**Use Matplotlib/Seaborn when**: You need pixel-perfect, publication-ready static charts\n",
        "\n",
        "**Use PyGWalker when**: You're exploring data and want insights fast! ⚡"
      ],
      "metadata": {
        "id": "ngPHWNYYa3-Z"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🆚 PyGWalker vs Plotly\n",
        "\n",
        "**Plotly:**"
      ],
      "metadata": {
        "id": "37ZZgkfya33d"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Plotly - still requires code for each chart\n",
        "import plotly.express as px\n",
        "\n",
        "fig = px.scatter(df, x='x', y='y', color='category', size='value')\n",
        "fig.show()\n",
        "\n",
        "# Different chart? New code!\n",
        "fig = px.bar(df, x='category', y='value')\n",
        "fig.show()"
      ],
      "metadata": {
        "id": "qmDQ4xvEbYAe"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "**PyGWalker:**"
      ],
      "metadata": {
        "id": "f-VUfCeFa3yZ"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Switch between chart types with clicks!\n",
        "pyg.walk(df)"
      ],
      "metadata": {
        "id": "9RNGAK7qbawr"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "**Verdict:** 🏆\n",
        "\n",
        "| Aspect | Plotly | PyGWalker |\n",
        "|--------|--------|-----------|\n",
        "| **Interactivity** | 🟢 Excellent | 🟢 Excellent |\n",
        "| **Code Required** | 🟡 Moderate | 🟢 Minimal |\n",
        "| **Chart Types** | 🟢 Extensive | 🟡 Good |\n",
        "| **Ease of Use** | 🟡 Medium | 🟢 Easy |\n",
        "| **Customization** | 🟢 Very High | 🟡 Moderate |\n",
        "| **Dashboard Building** | 🟢 Dash/Streamlit | 🟡 Notebook only |\n",
        "\n",
        "**Use Plotly when**: Building production dashboards or need specific chart types\n",
        "\n",
        "**Use PyGWalker when**: Rapid exploration in notebooks! 🚀"
      ],
      "metadata": {
        "id": "B9Vq21W1bgWK"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🆚 PyGWalker vs Tableau/Power BI\n",
        "\n",
        "**Tableau/Power BI:**\n",
        "- 💰 Expensive (hundreds/thousands per year)\n",
        "- 🖥️ Separate desktop application\n",
        "- ❌ Not integrated with Python\n",
        "- ✅ Enterprise features (collaboration, permissions)\n",
        "- ✅ Polished UI\n",
        "\n",
        "**PyGWalker:**\n",
        "- 🆓 Free and open source\n",
        "- 📓 Lives in your notebook\n",
        "- 🐍 Native Python integration\n",
        "- ❌ Limited collaboration features\n",
        "- ✅ Simple and effective\n",
        "\n",
        "**Verdict:** 🏆\n",
        "\n",
        "Use Tableau/Power BI when you need enterprise-wide BI solution\n",
        "\n",
        "Use PyGWalker when you want \"Tableau-like\" exploration IN Python! 🎯"
      ],
      "metadata": {
        "id": "x7fRnwx2a3su"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🆚 PyGWalker vs pandas.plot()\n",
        "\n",
        "**pandas.plot():**"
      ],
      "metadata": {
        "id": "pGvNi51Fa3hv"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Quick but limited\n",
        "df['value'].plot(kind='hist')\n",
        "df.groupby('category')['value'].mean().plot(kind='bar')"
      ],
      "metadata": {
        "id": "Qcxx1es1brBI"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "**PyGWalker:**"
      ],
      "metadata": {
        "id": "E3w3EPwFbo3g"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Quick AND powerful\n",
        "pyg.walk(df)"
      ],
      "metadata": {
        "id": "JMXRhzJdbty6"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "**Verdict:** 🏆\n",
        "\n",
        "pandas.plot() is great for quick checks, but PyGWalker is better for serious exploration!\n",
        "\n",
        "**Pro tip**: Use both! pandas.plot() for ultra-quick checks, PyGWalker for deeper dives. 🎯"
      ],
      "metadata": {
        "id": "afnuc1Rjboz2"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "## 📚 Additional Resources\n",
        "\n",
        "Keep learning and stay updated! 📖"
      ],
      "metadata": {
        "id": "MsxtC8eObowb"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 📖 Official Documentation & Learning\n",
        "\n",
        "**Essential Links:**\n",
        "- 📘 [Official Documentation](https://docs.kanaries.net/pygwalker) - Complete guide\n",
        "- 🎥 [Video Tutorials](https://www.youtube.com/@kanaries_data) - Watch and learn\n",
        "- 💻 [GitHub Repository](https://github.com/Kanaries/pygwalker) - Source code & issues\n",
        "- 📝 [Release Notes](https://github.com/Kanaries/pygwalker/releases) - What's new\n",
        "- 🎓 [Example Gallery](https://docs.kanaries.net/pygwalker/examples) - Inspiration\n",
        "\n",
        "**Community:**\n",
        "- 💬 [Discord Server](https://discord.gg/Z4ngFWXz2U) - Get help, share tips\n",
        "- 🐦 [Twitter/X](https://twitter.com/kanaries_data) - Latest updates\n",
        "- 📧 [Newsletter](https://kanaries.net) - Monthly insights\n",
        "\n",
        "### 🎯 Related Tools from Kanaries\n",
        "\n",
        "PyGWalker is part of the Kanaries ecosystem:\n",
        "\n",
        "- 🎨 **Graphic Walker** - Web-based version (JavaScript)\n",
        "- 🚀 **RATH** - Automated data analysis & insights\n",
        "- 📊 **Kanaries Cloud** - Hosted analytics platform\n",
        "\n",
        "### 📚 Recommended Learning Path\n",
        "\n",
        "**Beginner** (You are here! 🎉):\n",
        "1. ✅ Complete this tutorial\n",
        "2. ✅ Practice with your own datasets\n",
        "3. ✅ Join the Discord community\n",
        "\n",
        "**Intermediate**:\n",
        "1. 📊 Explore advanced configurations\n",
        "2. 🔧 Integrate into your workflow\n",
        "3. 💡 Contribute examples to the community\n",
        "\n",
        "**Advanced**:\n",
        "1. 🚀 Optimize for large datasets\n",
        "2. 🎨 Customize with themes\n",
        "3. 🤝 Contribute to the project!\n",
        "\n",
        "### 🎓 Practice Datasets\n",
        "\n",
        "Want more practice? Try these datasets:\n",
        "\n",
        "**Built-in (via seaborn-data):**\n",
        "- 🐧 Penguins (what we used!)\n",
        "- 💎 Diamonds\n",
        "- 🚢 Titanic\n",
        "- 🚕 Taxis\n",
        "- 🌸 Iris\n",
        "\n",
        "**External:**\n",
        "- 📊 [Kaggle Datasets](https://www.kaggle.com/datasets)\n",
        "- 🏛️ [UCI ML Repository](https://archive.ics.uci.edu/ml/index.php)\n",
        "- 🌐 [data.gov](https://data.gov)\n",
        "- 📈 [Our World in Data](https://ourworldindata.org)"
      ],
      "metadata": {
        "id": "73VozURlbosl"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Quick access to seaborn datasets\n",
        "datasets = ['penguins', 'diamonds', 'titanic', 'taxis', 'iris', 'tips', 'flights']\n",
        "\n",
        "print(\"📊 Available seaborn datasets:\")\n",
        "for dataset in datasets:\n",
        "    url = f\"https://raw.githubusercontent.com/mwaskom/seaborn-data/master/{dataset}.csv\"\n",
        "    print(f\"  • {dataset}: {url}\")\n",
        "\n",
        "# Try any of them!\n",
        "# df = pd.read_csv(url)\n",
        "# pyg.walk(df)"
      ],
      "metadata": {
        "id": "P3CZ_ND7b38V"
      },
      "execution_count": null,
      "outputs": []
    },
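    {
      "cell_type": "markdown",
      "source": [
        "For example, to try the `tips` dataset (assumes an internet connection):"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# ✅ Load one dataset from the list above and explore it\n",
        "df_tips = pd.read_csv(\"https://raw.githubusercontent.com/mwaskom/seaborn-data/master/tips.csv\")\n",
        "pyg.walk(df_tips)"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },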
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "## 🤝 Contributing to PyGWalker\n",
        "\n",
        "Want to give back? Here's how! ❤️"
      ],
      "metadata": {
        "id": "MiFF4AT9boli"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🎯 Ways to Contribute\n",
        "\n",
        "**Even if you're not a developer:**\n",
        "\n",
        "1. **⭐ Star the Repository**\n",
        "   - Go to [GitHub](https://github.com/Kanaries/pygwalker)\n",
        "   - Click the ⭐ star button\n",
        "   - Helps the project grow!\n",
        "\n",
        "2. **📝 Share Your Use Cases**\n",
        "   - Write blog posts\n",
        "   - Create video tutorials\n",
        "   - Share on social media\n",
        "   - Tag [@kanaries_data](https://twitter.com/kanaries_data)\n",
        "\n",
        "3. **🐛 Report Bugs**\n",
        "   - Found an issue? [Report it!](https://github.com/Kanaries/pygwalker/issues)\n",
        "   - Include: Python version, code snippet, error message\n",
        "   - Screenshots help a lot!\n",
        "\n",
        "4. **💡 Suggest Features**\n",
        "   - Have an idea? [Open a discussion](https://github.com/Kanaries/pygwalker/discussions)\n",
        "   - Explain the use case\n",
        "   - Why would it help others?\n",
        "\n",
        "5. **📚 Improve Documentation**\n",
        "   - Fix typos\n",
        "   - Add examples\n",
        "   - Clarify confusing sections\n",
        "   - Translate to other languages\n",
        "\n",
        "**If you ARE a developer:**\n",
        "\n",
        "6. **💻 Contribute Code**\n",
        "   - Check [good first issues](https://github.com/Kanaries/pygwalker/labels/good%20first%20issue)\n",
        "   - Fork, code, submit PR\n",
        "   - Follow the contributing guidelines\n",
        "\n",
        "7. **🧪 Add Tests**\n",
        "   - Improve test coverage\n",
        "   - Add edge case tests\n",
        "   - Performance benchmarks\n",
        "\n",
        "### 📋 Contributing Guidelines\n",
        "\n",
        "Before submitting (like this tutorial!):\n",
        "\n",
        "✅ **1. Check Existing Issues/PRs**\n",
        "- Avoid duplicates!\n",
        "\n",
        "✅ **2. Open an Issue First**\n",
        "- Describe what you want to add\n",
        "- Get feedback before spending time\n",
        "\n",
        "✅ **3. Follow the Style**\n",
        "- Match existing code/doc style\n",
        "- Use clear, descriptive names\n",
        "- Add comments where helpful\n",
        "\n",
        "✅ **4. Test Thoroughly**\n",
        "- Test in different environments\n",
        "- Check for edge cases\n",
        "- Include examples\n",
        "\n",
        "✅ **5. Write Clear PR Description**\n",
        "- What does it do?\n",
        "- Why is it useful?\n",
        "- How to test it?\n",
        "- Screenshots if visual\n",
        "\n",
        "### 🎉 Recognition\n",
        "\n",
        "Contributors get:\n",
        "- ✅ Name in contributors list\n",
        "- ✅ GitHub profile contribution\n",
        "- ✅ Satisfaction of helping thousands! 🌍\n",
        "- ✅ Resume/portfolio material\n",
        "- ✅ Experience with open source\n",
        "\n",
        "**This tutorial** is an example of community contribution! 🙌"
      ],
      "metadata": {
        "id": "sSqude8lboAk"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "## 🎊 Conclusion: You're Now a PyGWalker Pro!\n",
        "\n",
        "**Congratulations!** 🎉 You've completed the comprehensive PyGWalker tutorial!\n",
        "\n",
        "### ✅ What You've Mastered:\n",
        "\n",
        "**Fundamentals:**\n",
        "- ✅ Installation and setup\n",
        "- ✅ Basic visualizations with drag-and-drop\n",
        "- ✅ Understanding the interface\n",
        "- ✅ Chart types and when to use them\n",
        "\n",
        "**Advanced Techniques:**\n",
        "- ✅ Filters and aggregations\n",
        "- ✅ Calculated fields\n",
        "- ✅ Customization and styling\n",
        "- ✅ Performance optimization\n",
        "\n",
        "**Real-World Applications:**\n",
        "- ✅ Sales analysis\n",
        "- ✅ Customer segmentation\n",
        "- ✅ Data quality checks\n",
        "- ✅ A/B testing\n",
        "\n",
        "**Best Practices:**\n",
        "- ✅ Workflow integration\n",
        "- ✅ Troubleshooting common issues\n",
        "- ✅ Pro tips and tricks\n",
        "- ✅ Tool comparison\n",
        "\n",
        "### 🚀 What's Next?\n",
        "\n",
        "**Immediate Actions:**\n",
        "1. 🎯 **Practice** - Use PyGWalker on your own datasets\n",
        "2. ⭐ **Star the repo** - Show your support!\n",
        "3. 💬 **Join Discord** - Connect with the community\n",
        "4. 📝 **Share** - Teach others what you learned\n",
        "\n",
        "**This Week:**\n",
        "1. 📊 Integrate PyGWalker into your workflow\n",
        "2. 🔍 Explore at least 3 different datasets\n",
        "3. 💡 Share one insight you discovered\n",
        "\n",
        "**This Month:**\n",
        "1. 🤝 Help someone learn PyGWalker\n",
        "2. 🐛 Report a bug or suggest a feature\n",
        "3. 📝 Write a blog post or create a video\n",
        "\n",
        "### 💡 Remember:\n",
        "\n",
        "> **\"The best way to learn data analysis is to analyze data!\"**\n",
        "\n",
        "PyGWalker makes that process:\n",
        "- ⚡ Faster\n",
        "- 🎨 More intuitive\n",
        "- 😊 More enjoyable\n",
        "\n",
        "**Don't wait for the perfect dataset** - start exploring now! Every dataset has a story to tell. 📖\n",
        "\n",
        "### 🙏 Thank You!\n",
        "\n",
        "Thank you for completing this tutorial! We hope PyGWalker becomes an essential part of your data toolkit.\n",
        "\n",
        "**Questions? Ideas? Feedback?**\n",
        "- 💬 Discord: [Join here](https://discord.gg/Z4ngFWXz2U)\n",
        "- 🐛 Issues: [GitHub](https://github.com/Kanaries/pygwalker/issues)\n",
        "- 📧 Email: Check official docs\n",
        "\n",
        "**Happy exploring!** 🎉🐧📊\n",
        "\n",
        "---\n",
        "\n",
        "**Made with ❤️ by the PyGWalker Community**\n",
        "\n",
        "*This tutorial was created as a community contribution. Star the repo and contribute your own examples!*\n",
        "\n",
        "**Version**: PyGWalker 0.4.x+  \n",
        "**Last Updated**: 2024  \n",
        "**License**: Apache-2.0  \n",
        "\n",
        "🔗 **Share this tutorial**: Help others discover PyGWalker!\n",
        "\n",
        "#DataScience #Python #DataVisualization #PyGWalker #OpenSource"
      ],
      "metadata": {
        "id": "Tvs_hO-lcL1C"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "## 📎 Appendix: Quick Reference\n",
        "\n",
        "### 🎯 Common PyGWalker Patterns\n",
        "\n",
        "**Basic Usage:**\n",
        "```python\n",
        "import pygwalker as pyg\n",
        "pyg.walk(df)"
      ],
      "metadata": {
        "id": "xxpi8gA6cT-3"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "**With Options:**\n",
        "```python\n",
        "pyg.walk(\n",
        "    df,\n",
        "    hide_data_source_config=True,  # Cleaner UI\n",
        "   appearance='light',  # or 'dark'\n",
        "    kernel_computation=True,  # Better performance\n",
        "    spec=\"./config.json\"  # Save/load config\n",
        ")\n",
        "```"
      ],
      "metadata": {
        "id": "WJn8AfMEdhNt"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "**Performance Optimization:**\n",
        "```python\n",
        "# Sample large datasets\n",
        "df_sample = df.sample(10000)\n",
        "pyg.walk(df_sample, kernel_computation=True)\n",
        "```"
      ],
      "metadata": {
        "id": "J-cO9-Kkdn_o"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "### 🎨 Chart Type Quick Guide\n",
        "\n",
        "| Use Case | Chart Type | Fields |\n",
        "|----------|-----------|--------|\n",
        "| Correlation | Scatter | X: numeric, Y: numeric, Color: category |\n",
        "| Comparison | Bar | X: category, Y: numeric (aggregated) |\n",
        "| Trend | Line | X: date/time, Y: numeric, Color: category |\n",
        "| Distribution | Histogram | X: numeric (binned) |\n",
        "| Part-to-whole | Pie | Angle: category, Value: numeric |\n",
        "| Relationship | Heatmap | X: category, Y: category, Color: numeric |\n",
        "\n",
        "### ⌨️ Keyboard Shortcuts\n",
        "\n",
        "- `Ctrl/Cmd + Z`: Undo\n",
        "- `ESC`: Clear selection\n",
        "- `Delete`: Remove field from shelf\n",
        "\n",
        "### 🔗 Essential Links\n",
        "\n",
        "- 📚 Docs: https://docs.kanaries.net/pygwalker\n",
        "- 💻 GitHub: https://github.com/Kanaries/pygwalker\n",
        "- 💬 Discord: https://discord.gg/Z4ngFWXz2U\n",
        "- 🐦 Twitter: https://twitter.com/kanaries_data\n",
        "\n",
        "---\n",
        "\n",
        "**End of Tutorial** 🎓\n",
        "\n",
        "Happy Data Exploring! 🚀📊🐧"
      ],
      "metadata": {
        "id": "o1r5NkbYcf34"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "---\n",
        "\n",
        "## 🙏 Acknowledgments\n",
        "\n",
        "This tutorial was created by **[Leonardo Braga](https://www.linkedin.com/in/leonardo-borges1)**\n",
        "as a community contribution to PyGWalker.\n",
        "\n",
        "💼 Data Science & AI Student | 🇧🇷 Brasília, Brazil  \n",
        "💻 [GitHub](https://github.com/Leo-bsb) | 📧 leoborgesprofissional@gmail.com\n",
        "\n",
        "*Passionate about open-source, data science, and making analytics accessible to everyone.*\n",
        "\n",
        "---\n",
        "\n",
        "### 🌟 Found this helpful?\n",
        "\n",
        "- ⭐ Star [PyGWalker on GitHub](https://github.com/Kanaries/pygwalker)\n",
        "- 🤝 Contribute your own tutorials\n",
        "- 💬 Join the [Discord community](https://discord.gg/Z4ngFWXz2U)\n",
        "- 🐦 Follow [@kanaries_data](https://twitter.com/kanaries_data)\n",
        "\n",
        "**Questions or feedback?** Open an issue or reach out directly!\n",
        "\n",
        "---\n",
        "\n",
        "**License:** This tutorial follows PyGWalker's Apache-2.0 License  \n",
        "**Last Updated:** 11/06/2025   \n",
        "**PyGWalker Version:** 0.4.x+\n",
        "\n",
        "*Made with ❤️ for the data community*"
      ],
      "metadata": {
        "id": "AGYdtHz1e5XY"
      }
    }
  ]
}