{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# TopoBench: Learnable Topology Lifting Tutorial\n",
    "\n",
    "This tutorial demonstrates how TopoBench enables **learnable topology lifting** - the ability to learn and infer graph structure during the training process rather than using fixed, predefined topologies.\n",
    "\n",
    "## Overview\n",
    "\n",
    "TopoBench provides a unified pipeline that decouples topology learning from the main model architecture. This allows you to:\n",
    "- Learn optimal graph structures automatically\n",
    "- Integrate custom topology learning modules easily  \n",
    "- Combine different lifting strategies with various backbone models\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Configuration Inspection\n",
    "\n",
    "Let's start by examining the configuration for learnable topology lifting:\n",
    "\n",
    "\n",
    "```python\n",
    "import rootutils\n",
    "rootutils.setup_root(__file__, indicator=\".project-root\", pythonpath=True)\n",
    "import hydra\n",
    "from omegaconf import OmegaConf\n",
    "\n",
    "# Load the configurations\n",
    "config_file = \"run.yaml\"\n",
    "with hydra.initialize(\n",
    "    version_base=\"1.3\",\n",
    "    config_path=\"../configs\",\n",
    "    job_name=\"run\"\n",
    "):\n",
    "    print('Current config file: ', config_file)\n",
    "    configs = hydra.compose(\n",
    "        config_name=\"run.yaml\",\n",
     "        overrides=[\"dataset=graph/cocitation_cora\", \"model=graph/gcn_dgm\"],\n",
    "        return_hydra_config=True, \n",
    "    )\n",
    "\n",
     "# First, let's look at the unresolved configuration\n",
     "unresolved_config = OmegaConf.to_container(configs.model.feature_encoder, resolve=False)\n",
     "print(OmegaConf.to_yaml(unresolved_config))\n",
    "```\n",
    "\n",
    "\n",
    "**Output:**\n",
    "```yaml\n",
    "_target_: topobench.nn.encoders.${model.feature_encoder.encoder_name}\n",
    "encoder_name: DGMStructureFeatureEncoder\n",
    "in_channels: ${infer_in_channels:${dataset},${oc.select:transforms,null}}\n",
    "out_channels: 64\n",
    "proj_dropout: 0.0\n",
    "loss:\n",
    "  _target_: topobench.loss.model.DGMLoss\n",
    "  loss_weight: 10\n",
    "```\n",
    "\n",
    "```python\n",
     "# Now let's resolve the configuration to see the final values\n",
    "resolved_config = OmegaConf.to_container(configs.model.feature_encoder, resolve=True)\n",
    "print(\"✅ Resolved Configuration:\")\n",
    "print(OmegaConf.to_yaml(resolved_config))\n",
    "```\n",
    "\n",
    "**Output:**\n",
    "```yaml\n",
    "_target_: topobench.nn.encoders.DGMStructureFeatureEncoder\n",
    "encoder_name: DGMStructureFeatureEncoder\n",
    "in_channels: [computed_based_on_dataset]\n",
    "out_channels: 64\n",
    "proj_dropout: 0.0\n",
    "loss:\n",
    "  _target_: topobench.loss.model.DGMLoss\n",
    "  loss_weight: 10\n",
    "```\n",
    "\n",
     "Here we can see that the feature encoder carries its own loss term:\n",
    "\n",
    "```yaml\n",
    "loss:\n",
    "  _target_: topobench.loss.model.DGMLoss\n",
    "  loss_weight: 10\n",
    "```\n",
    "\n",
     "Let's now take a step back and understand the TopoBench pipeline!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 🏗️ The TopoBench Pipeline\n",
    "\n",
    "TopoBench uses a unified pipeline for all models that separates topology learning from the main model computation:\n",
    "\n",
    "```python\n",
    "def model_step(self, batch: Data) -> dict:\n",
    "    \"\"\"Perform a single model step on a batch of data.\n",
    "\n",
    "    Parameters\n",
    "    ----------\n",
    "    batch : torch_geometric.data.Data\n",
    "        Batch object containing the batched data.\n",
    "\n",
    "    Returns\n",
    "    -------\n",
    "    dict\n",
    "        Dictionary containing the model output and the loss.\n",
    "    \"\"\"\n",
    "    # Allow batch object to know the phase of the training\n",
    "    batch[\"model_state\"] = self.state_str\n",
    "\n",
    "    # 🔍 Feature Encoder - This is where topology learning happens!\n",
    "    batch = self.feature_encoder(batch)\n",
    "\n",
    "    # 🧠 Domain model (your main architecture)\n",
    "    model_out = self.forward(batch)\n",
    "\n",
    "    # 📊 Readout\n",
    "    if self.readout is not None:\n",
    "        model_out = self.readout(model_out=model_out, batch=batch)\n",
    "\n",
    "    # 📉 Loss computation (includes topology learning loss)\n",
    "    model_out = self.process_outputs(model_out=model_out, batch=batch)\n",
    "    model_out = self.loss(model_out=model_out, batch=batch)\n",
    "    \n",
    "    # 📈 Metrics\n",
    "    self.evaluator.update(model_out)\n",
    "\n",
    "    return model_out\n",
    "```\n",
    "\n",
    "### 🎯 Key Insight\n",
    "The **feature encoder step** is where topology learning is decoupled from the main model. This allows the pipeline to learn optimal graph structures before they're passed to the backbone model.\n",
    "\n",
    "## 🔬 Deep Dive: DGM Structure Feature Encoder\n",
    "\n",
    "Let's examine how the `DGMStructureFeatureEncoder` implements learnable topology lifting:\n",
    "\n",
    "```python\n",
    "class DGMStructureFeatureEncoder(AbstractFeatureEncoder):\n",
    "    \n",
    "    def forward(self, data: torch_geometric.data.Data) -> torch_geometric.data.Data:\n",
    "        \"\"\"Forward pass that learns and updates graph structure.\n",
    "\n",
    "        The method applies BaseEncoders to features of selected dimensions\n",
     "        and infers a new graph topology using the Differentiable Graph Module (DGM).\n",
    "\n",
    "        Parameters\n",
    "        ----------\n",
    "        data : torch_geometric.data.Data\n",
    "            Input data object with x_{i} features for each dimension i.\n",
    "\n",
    "        Returns\n",
    "        -------\n",
    "        torch_geometric.data.Data\n",
    "            Output data object with learned topology and updated features.\n",
    "        \"\"\"\n",
    "        # Ensure x_0 exists (node features)\n",
    "        if not hasattr(data, \"x_0\"):\n",
    "            data.x_0 = data.x\n",
    "\n",
    "        # Process each topological dimension\n",
    "        for i in self.dimensions:\n",
    "            if hasattr(data, f\"x_{i}\") and hasattr(self, f\"encoder_{i}\"):\n",
    "                batch = getattr(data, f\"batch_{i}\")\n",
    "                \n",
    "                # 🚀 The magic happens here: DGM inference\n",
    "                x_, x_aux, edges_dgm, logprobs = getattr(self, f\"encoder_{i}\")(\n",
    "                    data[f\"x_{i}\"], batch\n",
    "                )\n",
    "                \n",
    "                # Update features and structure\n",
    "                data[f\"x_{i}\"] = x_\n",
    "                data[f\"x_aux_{i}\"] = x_aux\n",
    "                \n",
    "                # 🔥 Key: Replace original edges with learned edges!\n",
    "                data[\"edges_index\"] = edges_dgm\n",
    "                data[f\"logprobs_{i}\"] = logprobs\n",
    "                \n",
    "        return data\n",
    "```\n",
    "\n",
    "### 🔑 Critical Operations\n",
    "\n",
    "The encoder performs two crucial operations:\n",
    "\n",
    "1. **Feature Learning**: Updates node features (`x_`) and auxiliary features (`x_aux`)\n",
    "2. **Topology Learning**: **Replaces** the original edge structure with learned edges:\n",
    "   ```python\n",
     "   data[\"edges_index\"] = edges_dgm      # The learned topology replaces the original one\n",
    "   data[f\"logprobs_{i}\"] = logprobs     # Variables needed to compute the loss\n",
    "   ```\n",
    "\n",
    "## 🎯 Loss Integration\n",
    "\n",
    "The learned topology is optimized through the loss function in the main pipeline:\n",
    "\n",
    "```python\n",
    "# Loss computation includes both task loss and topology learning loss\n",
    "model_out = self.process_outputs(model_out=model_out, batch=batch)\n",
    "model_out = self.loss(model_out=model_out, batch=batch)  # batch contains learned edges + logprobs\n",
    "```\n",
    "\n",
    "This enables end-to-end learning where the topology is optimized jointly with the main task objective.\n",
    "\n",
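     "As a minimal illustration of this joint objective (the function and variable names below are illustrative, not the TopoBench API):\n",
     "\n",
     "```python\n",
     "# Sketch: the total loss is the task objective plus a weighted\n",
     "# topology-learning term, mirroring the loss_weight seen in DGMLoss above.\n",
     "def combine_losses(task_loss, topology_loss, loss_weight=10.0):\n",
     "    \"\"\"Return the joint objective optimized end to end.\"\"\"\n",
     "    return task_loss + loss_weight * topology_loss\n",
     "\n",
     "total = combine_losses(task_loss=0.7, topology_loss=0.05)\n",
     "```\n",
     "\n",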
    "## 🚀 Running the Example\n",
    "\n",
    "Execute the learnable topology lifting pipeline:\n",
    "\n",
    "```bash\n",
    "python -m topobench dataset=graph/cocitation_cora model=graph/gcn_dgm\n",
    "```\n",
    "\n",
    "This command:\n",
    "- Uses the **Cora citation network** dataset\n",
    "- Applies **GCN backbone** with **DGM topology learning**\n",
    "- Learns optimal graph structure during training\n",
    "\n",
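     "Since the topology loss is part of the composed config, its weight can also be overridden from the command line (assuming the `model.feature_encoder.loss.loss_weight` path shown in the configuration above):\n",
     "\n",
     "```bash\n",
     "python -m topobench dataset=graph/cocitation_cora model=graph/gcn_dgm model.feature_encoder.loss.loss_weight=5\n",
     "```\n",
     "\n",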
    "## 🛠️ Integrating Your Own Learnable Module\n",
    "\n",
     "Want to add your own topology learning approach? Follow these four simple steps:\n",
    "\n",
    "### Step 1: Create Your Feature Encoder\n",
    "\n",
    "Create `/topobench/nn/encoders/my_custom_encoder.py`:\n",
    "\n",
    "```python\n",
    "from topobench.nn.encoders.base import AbstractFeatureEncoder\n",
    "\n",
    "class MyCustomTopologyEncoder(AbstractFeatureEncoder):\n",
    "    def forward(self, data):\n",
    "        # Your custom topology learning logic here\n",
    "        learned_edges = self.infer_topology(data.x_0)\n",
    "        data[\"edges_index\"] = learned_edges\n",
    "        return data\n",
    "```\n",
    "\n",
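     "If you don't have a topology module yet, `infer_topology` could start as something as simple as a k-nearest-neighbour rule over node features. A self-contained sketch (`knn_edges` is a hypothetical helper using plain Python lists; a real encoder would operate on tensors):\n",
     "\n",
     "```python\n",
     "def knn_edges(features, k=2):\n",
     "    \"\"\"Connect each node to its k nearest neighbours (Euclidean).\n",
     "\n",
     "    features: list of equal-length float lists (one per node).\n",
     "    Returns a directed edge list as (source, target) pairs.\n",
     "    \"\"\"\n",
     "    def dist(a, b):\n",
     "        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5\n",
     "\n",
     "    edges = []\n",
     "    for i, fi in enumerate(features):\n",
     "        # Sort the other nodes by distance and keep the k closest\n",
     "        neighbours = sorted(\n",
     "            (j for j in range(len(features)) if j != i),\n",
     "            key=lambda j: dist(fi, features[j]),\n",
     "        )[:k]\n",
     "        edges.extend((i, j) for j in neighbours)\n",
     "    return edges\n",
     "```\n",
     "\n",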
    "### Step 2: Create Your Loss Function  \n",
    "\n",
    "Create `/topobench/loss/model/my_custom_loss.py`:\n",
    "\n",
    "```python\n",
    "class MyCustomTopologyLoss:\n",
    "    def __init__(self, loss_weight=1.0, regularization=0.01):\n",
    "        self.loss_weight = loss_weight\n",
    "        self.regularization = regularization\n",
    "    \n",
    "    def __call__(self, model_out, batch):\n",
    "        # Your custom topology learning loss\n",
    "        topology_loss = self.compute_topology_loss(batch)\n",
    "        model_out[\"loss\"] += self.loss_weight * topology_loss\n",
    "        return model_out\n",
    "```\n",
    "\n",
    "### Step 3: Create Model Configuration\n",
    "\n",
     "Create `/configs/model/graph/my_custom_model.yaml` and point its `feature_encoder` section at your new modules:\n",
    "\n",
    "```yaml\n",
    "_target_: topobench.nn.encoders.MyCustomTopologyEncoder\n",
    "encoder_name: MyCustomTopologyEncoder\n",
    "in_channels:\n",
    "  - ${infer_in_channels:${dataset},${oc.select:transforms,null}}\n",
    "out_channels: 64\n",
    "proj_dropout: 0.0\n",
    "loss:\n",
    "  _target_: topobench.loss.model.MyCustomTopologyLoss\n",
    "  loss_weight: 5.0\n",
    "  regularization: 0.01\n",
    "```\n",
    "\n",
    "### Step 4: Run Your Model\n",
    "\n",
    "```bash\n",
    "python -m topobench dataset=graph/your_dataset model=graph/my_custom_model\n",
    "```\n",
    "\n",
    "## 🎉 Summary\n",
    "\n",
    "TopoBench's learnable topology lifting provides:\n",
    "\n",
    "- **🔄 Unified Pipeline**: Consistent interface for all topology learning approaches\n",
    "- **🎯 Decoupled Design**: Topology learning separated from backbone models  \n",
     "- **🔧 Easy Integration**: Simple four-step process to add custom modules\n",
    "- **📈 End-to-End Learning**: Joint optimization of structure and task objectives\n",
    "\n",
    "The key insight is that topology learning happens in the **feature encoder stage**, allowing any backbone model to benefit from learned graph structures without modification.\n",
    "\n",
    "---\n",
    "\n",
    "**Next Steps**: Try experimenting with different datasets and backbone models to see how learnable topology lifting improves performance on your specific tasks!"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "topobench",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.3"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
