{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "6fe4fc93",
   "metadata": {},
   "source": [
    "# Apriori Algorithm Example with Binary Matrix\n",
    "\n",
    "The Apriori algorithm finds frequent itemsets — groups of items that often occur together — in a transaction dataset. Let's walk through an example to see how this works using a binary matrix:\n",
    "\n",
    "## Given Transaction Data:\n",
    "\n",
    "- Transaction 1: {Apple, Banana}\n",
    "- Transaction 2: {Apple, Orange}\n",
    "- Transaction 3: {Apple, Banana, Orange}\n",
    "- Transaction 4: {Banana, Grape}\n",
    "\n",
    "## Step 1: Convert Transactions to Binary Matrix\n",
    "\n",
    "Using `TransactionEncoder`, the transactions are converted into a binary matrix:\n",
    "\n",
    "\n",
    "|       | Apple | Banana | Orange | Grape |\n",
    "|-------|-------|--------|--------|-------|\n",
    "| Trans1|   1   |   1    |   0    |   0   |\n",
    "| Trans2|   1   |   0    |   1    |   0   |\n",
    "| Trans3|   1   |   1    |   1    |   0   |\n",
    "| Trans4|   0   |   1    |   0    |   1   |\n",
    "\n",
    "\n",
    "In this matrix, 1 indicates that the item is present in the transaction, and 0 indicates its absence.\n",
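    "\n",
    "As a rough illustration, the one-hot encoding that `TransactionEncoder` performs can be sketched in plain Python (the variable names here are just for this example):\n",
    "\n",
    "```python\n",
    "transactions = [\n",
    "    {'Apple', 'Banana'},\n",
    "    {'Apple', 'Orange'},\n",
    "    {'Apple', 'Banana', 'Orange'},\n",
    "    {'Banana', 'Grape'},\n",
    "]\n",
    "items = ['Apple', 'Banana', 'Orange', 'Grape']  # column order as in the table\n",
    "\n",
    "# One row per transaction: 1 if the item is present, 0 otherwise\n",
    "matrix = [[1 if item in t else 0 for item in items] for t in transactions]\n",
    "```\n",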
    "\n",
    "\n",
    "## Step 2: Apply Apriori Algorithm\n",
    "\n",
    "### Generate Candidate Itemsets\n",
    "\n",
    "- Initially, every single item is a candidate 1-itemset: `{Apple}`, `{Banana}`, `{Orange}`, `{Grape}`.\n",
    "\n",
    "\n",
    "### Calculate Support\n",
    "\n",
    "- Next, calculate the support for each candidate itemset. For example, the support for `{Apple}` is 3/4 = 0.75, since it appears in 3 of the 4 transactions (1, 2, and 3).\n",
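    "\n",
    "For a quick check, per-item support can be computed directly from the matrix columns (a minimal sketch using the matrix from Step 1):\n",
    "\n",
    "```python\n",
    "matrix = [\n",
    "    [1, 1, 0, 0],  # Trans1\n",
    "    [1, 0, 1, 0],  # Trans2\n",
    "    [1, 1, 1, 0],  # Trans3\n",
    "    [0, 1, 0, 1],  # Trans4\n",
    "]\n",
    "items = ['Apple', 'Banana', 'Orange', 'Grape']\n",
    "\n",
    "# Support = (transactions containing the item) / (total transactions)\n",
    "support = {item: sum(row[i] for row in matrix) / len(matrix)\n",
    "           for i, item in enumerate(items)}\n",
    "```\n",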
    "\n",
    "\n",
    "### Pruning Step\n",
    "\n",
    "- Assume a minimum support threshold of 0.5. All itemsets with support below 0.5 are discarded. Here `{Grape}` (support 1/4 = 0.25) is pruned, while `{Apple}` (0.75), `{Banana}` (0.75), and `{Orange}` (0.5) are retained.\n",
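    "\n",
    "The pruning step is just a filter over the support values (a sketch reusing the numbers above):\n",
    "\n",
    "```python\n",
    "support = {'Apple': 0.75, 'Banana': 0.75, 'Orange': 0.5, 'Grape': 0.25}\n",
    "min_support = 0.5\n",
    "\n",
    "# Keep itemsets meeting the threshold; exactly 0.5 is kept, below 0.5 is discarded\n",
    "frequent = {item: s for item, s in support.items() if s >= min_support}\n",
    "```\n",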
    "\n",
    "\n",
    "### Generate Larger Itemsets\n",
    "\n",
    "- Combine the retained itemsets to form larger candidates, e.g. `{Apple, Banana}`, `{Apple, Orange}`, and `{Banana, Orange}`. The Apriori property makes this safe: any superset of an infrequent itemset is itself infrequent, so pruned items never need to be considered again.\n",
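    "\n",
    "Candidate 2-itemsets can be generated by pairing the retained items, for example with `itertools.combinations`:\n",
    "\n",
    "```python\n",
    "from itertools import combinations\n",
    "\n",
    "retained = ['Apple', 'Banana', 'Orange']  # frequent 1-itemsets after pruning\n",
    "\n",
    "# Every pair of retained items becomes a candidate 2-itemset\n",
    "candidates = [set(pair) for pair in combinations(retained, 2)]\n",
    "```\n",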
    "\n",
    "\n",
    "### Repeat Process\n",
    "\n",
    "- For each new itemset generated, repeat the support calculation and pruning steps. Continue this process until no larger frequent itemsets can be formed.\n",
    "\n",
    "\n",
    "### Result\n",
    "\n",
    "- Finally, the algorithm outputs all itemsets that meet the minimum support requirement. These are the frequent itemsets, indicating combinations of items that often appear together in the data.\n",
    "\n",
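    "Putting the steps together, the whole loop can be sketched in plain Python (a simplified illustration, not the optimized implementation `mlxtend` uses):\n",
    "\n",
    "```python\n",
    "from itertools import combinations\n",
    "\n",
    "transactions = [\n",
    "    {'Apple', 'Banana'},\n",
    "    {'Apple', 'Orange'},\n",
    "    {'Apple', 'Banana', 'Orange'},\n",
    "    {'Banana', 'Grape'},\n",
    "]\n",
    "min_support = 0.5\n",
    "\n",
    "def support(itemset):\n",
    "    # Fraction of transactions that contain every item in the itemset\n",
    "    return sum(itemset <= t for t in transactions) / len(transactions)\n",
    "\n",
    "items = sorted(set().union(*transactions))\n",
    "frequent = {}  # frozenset -> support\n",
    "\n",
    "# Level 1: frequent single items\n",
    "current = [frozenset([i]) for i in items if support(frozenset([i])) >= min_support]\n",
    "\n",
    "k = 2\n",
    "while current:\n",
    "    for s in current:\n",
    "        frequent[s] = support(s)\n",
    "    # Join frequent (k-1)-itemsets to form candidate k-itemsets, then prune\n",
    "    candidates = {a | b for a, b in combinations(current, 2) if len(a | b) == k}\n",
    "    current = [c for c in candidates if support(c) >= min_support]\n",
    "    k += 1\n",
    "```\n",
    "\n",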
    "In practice, this whole process can be automated with the `apriori` function from the `mlxtend` library, which takes the binary matrix as a DataFrame, performs the steps described above, and returns the frequent itemsets with their supports.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "81cb8327",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Requires mlxtend (pip install mlxtend); mirrors the worked example above\n",
    "import pandas as pd\n",
    "from mlxtend.preprocessing import TransactionEncoder\n",
    "from mlxtend.frequent_patterns import apriori\n",
    "\n",
    "transactions = [\n",
    "    ['Apple', 'Banana'],\n",
    "    ['Apple', 'Orange'],\n",
    "    ['Apple', 'Banana', 'Orange'],\n",
    "    ['Banana', 'Grape'],\n",
    "]\n",
    "\n",
    "# Step 1: one-hot encode the transactions into a binary matrix\n",
    "te = TransactionEncoder()\n",
    "df = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)\n",
    "\n",
    "# Step 2: mine frequent itemsets with minimum support 0.5\n",
    "frequent_itemsets = apriori(df, min_support=0.5, use_colnames=True)\n",
    "frequent_itemsets"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.12"
  },
  "toc": {
   "base_numbering": 1,
   "nav_menu": {},
   "number_sections": true,
   "sideBar": true,
   "skip_h1_title": false,
   "title_cell": "Table of Contents",
   "title_sidebar": "Contents",
   "toc_cell": false,
   "toc_position": {},
   "toc_section_display": true,
   "toc_window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
