{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. **Minimal Unit Alignment Module (MUAM)** implementation\n",
    "\n",
    "Following the paper, **MUAM** uses multi-head cross-attention to measure the similarity between text tokens and image patches and to produce alignment scores.\n",
    "\n",
    "**Explanation**:\n",
    "\n",
    "1. `MultiHeadCrossAttention`: wraps PyTorch's `nn.MultiheadAttention` to implement multi-head cross-attention.\n",
    "2. `MUAM`: aligns the text with the image via cross-attention, computes a token-patch similarity matrix, and maps it to alignment scores through a linear layer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "\n",
    "class MultiHeadCrossAttention(nn.Module):\n",
    "    def __init__(self, embed_dim, num_heads):\n",
    "        super(MultiHeadCrossAttention, self).__init__()\n",
    "        # batch_first=True so inputs are [batch, seq_len, embed_dim]\n",
    "        self.multihead_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)\n",
    "        self.norm = nn.LayerNorm(embed_dim)\n",
    "        self.mlp = nn.Sequential(\n",
    "            nn.Linear(embed_dim, embed_dim),\n",
    "            nn.ReLU(),\n",
    "            nn.Linear(embed_dim, embed_dim)\n",
    "        )\n",
    "\n",
    "    def forward(self, text_embeds, image_embeds):\n",
    "        # Cross-attention: text tokens attend to image patches\n",
    "        attn_output, _ = self.multihead_attn(text_embeds, image_embeds, image_embeds)\n",
    "        text_embeds = text_embeds + attn_output  # residual connection\n",
    "        text_embeds = self.norm(text_embeds)  # LayerNorm\n",
    "        text_embeds = self.mlp(text_embeds)  # feed-forward MLP\n",
    "        return text_embeds\n",
    "\n",
    "class MUAM(nn.Module):\n",
    "    def __init__(self, embed_dim, num_heads):\n",
    "        super(MUAM, self).__init__()\n",
    "        self.cross_attention = MultiHeadCrossAttention(embed_dim, num_heads)\n",
    "        self.fc = nn.Linear(embed_dim, 1)\n",
    "\n",
    "    def forward(self, text_embeds, image_embeds):\n",
    "        # Align the text features with the image features\n",
    "        aligned_text_embeds = self.cross_attention(text_embeds, image_embeds)\n",
    "\n",
    "        # Token-patch similarity, normalized over the image patches\n",
    "        similarity_matrix = torch.matmul(aligned_text_embeds, image_embeds.transpose(-1, -2))\n",
    "        alignment_weights = F.softmax(similarity_matrix, dim=-1)\n",
    "\n",
    "        # Weighted sum over image patches, then score each text token;\n",
    "        # output shape is [batch_size, num_text_tokens]\n",
    "        aligned_visual = torch.matmul(alignment_weights, image_embeds)\n",
    "        alignment_scores = self.fc(aligned_visual).squeeze(-1)\n",
    "        return alignment_scores"
   ]
  },
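  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The alignment step above can be sketched on random tensors. This is a minimal self-contained toy example (the `embed_dim`, sequence lengths, and batch size are illustrative, not from the paper): text tokens attend to image patches via `nn.MultiheadAttention` with `batch_first=True`, and the token-patch similarity matrix is normalized over the patches."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F\n",
    "\n",
    "torch.manual_seed(0)\n",
    "embed_dim, num_heads = 64, 4\n",
    "text = torch.randn(2, 5, embed_dim)   # [batch, text tokens, dim]\n",
    "image = torch.randn(2, 7, embed_dim)  # [batch, image patches, dim]\n",
    "\n",
    "# batch_first=True so inputs are [batch, seq_len, embed_dim]\n",
    "attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)\n",
    "aligned, _ = attn(text, image, image)  # text attends to image patches\n",
    "\n",
    "# Token-patch similarity, normalized over the 7 image patches\n",
    "sim = torch.matmul(aligned, image.transpose(-1, -2))  # [2, 5, 7]\n",
    "weights = F.softmax(sim, dim=-1)  # each row sums to 1\n",
    "\n",
    "print(aligned.shape, sim.shape)"
   ]
  },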
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. **Compositional Structure Fusion Module (CSFM)** implementation\n",
    "\n",
    "**CSFM** is built on a graph attention network (GAT). It captures the structural dependencies within the text and the image, and computes a macro-level fusion score between the two modalities."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class GraphAttentionNetwork(nn.Module):\n",
    "    def __init__(self, in_features, out_features, dropout=0.1, alpha=0.2):\n",
    "        super(GraphAttentionNetwork, self).__init__()\n",
    "        # Simplified GAT layer using dot-product attention between nodes\n",
    "        self.fc = nn.Linear(in_features, out_features)\n",
    "        self.leakyrelu = nn.LeakyReLU(alpha)\n",
    "        self.dropout = nn.Dropout(dropout)\n",
    "\n",
    "    def forward(self, node_features, adj):\n",
    "        # Linear projection of the node features\n",
    "        h = self.fc(node_features)\n",
    "\n",
    "        # Attention scores between all node pairs\n",
    "        attn_scores = torch.matmul(h, h.transpose(-2, -1))  # similarity matrix\n",
    "        attn_scores = self.leakyrelu(attn_scores)\n",
    "\n",
    "        # Mask non-neighbors with a large negative value so they get\n",
    "        # (near-)zero weight after the softmax\n",
    "        attn_scores = attn_scores.masked_fill(adj == 0, -1e9)\n",
    "\n",
    "        # Normalize and aggregate neighbor features\n",
    "        attn_weights = F.softmax(attn_scores, dim=-1)\n",
    "        attn_weights = self.dropout(attn_weights)\n",
    "        h_prime = torch.matmul(attn_weights, h)\n",
    "\n",
    "        return h_prime\n",
    "\n",
    "class CSFM(nn.Module):\n",
    "    def __init__(self, embed_dim, graph_out_dim):\n",
    "        super(CSFM, self).__init__()\n",
    "        self.text_gat = GraphAttentionNetwork(embed_dim, graph_out_dim)\n",
    "        self.image_gat = GraphAttentionNetwork(embed_dim, graph_out_dim)\n",
    "        self.fc = nn.Linear(graph_out_dim, 1)\n",
    "\n",
    "    def forward(self, text_features, image_features, text_adj, image_adj):\n",
    "        # Structural representations from the graph attention networks\n",
    "        text_structure = self.text_gat(text_features, text_adj)\n",
    "        image_structure = self.image_gat(image_features, image_adj)\n",
    "\n",
    "        # Cross-modal structure similarity, normalized over image nodes\n",
    "        fusion_matrix = torch.matmul(text_structure, image_structure.transpose(-1, -2))\n",
    "        fusion_weights = F.softmax(fusion_matrix, dim=-1)\n",
    "\n",
    "        # Fuse image structure into each text node, then score it;\n",
    "        # output shape is [..., num_text_nodes]\n",
    "        fused_structure = torch.matmul(fusion_weights, image_structure)\n",
    "        fusion_scores = self.fc(fused_structure).squeeze(-1)\n",
    "\n",
    "        return fusion_scores"
   ]
  },
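  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A key detail of the GAT forward pass is how non-neighbors are excluded. A minimal self-contained sketch (the node count and feature size are illustrative): masking non-edges with `-inf` before the softmax gives them exactly zero attention weight, whereas leaving their raw scores at zero would still let them receive weight."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn.functional as F\n",
    "\n",
    "torch.manual_seed(0)\n",
    "h = torch.randn(4, 8)  # 4 nodes with 8-dim features\n",
    "adj = torch.tensor([[1, 1, 0, 0],\n",
    "                    [1, 1, 1, 0],\n",
    "                    [0, 1, 1, 1],\n",
    "                    [0, 0, 1, 1]])\n",
    "\n",
    "scores = torch.matmul(h, h.transpose(0, 1))           # dot-product similarity\n",
    "scores = scores.masked_fill(adj == 0, float('-inf'))  # exclude non-neighbors\n",
    "weights = F.softmax(scores, dim=1)\n",
    "\n",
    "print(weights[0])  # entries 2 and 3 are exactly zero"
   ]
  },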
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Explanation**:\n",
    "\n",
    "1. `GraphAttentionNetwork`: a basic graph attention network that captures the relationships between nodes (here, text tokens and image patches are treated as graph nodes).\n",
    "2. `CSFM`: applies a graph attention network to the text and to the image separately to obtain their structural representations, then computes the fusion score between the two modalities."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. **Integrating MUAM and CSFM into the overall model**\n",
    "\n",
    "We can now combine the `MUAM` and `CSFM` modules into a single model that handles the cross-modal interaction and alignment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class CMHICLModel(nn.Module):\n",
    "    def __init__(self, embed_dim, num_heads, graph_out_dim):\n",
    "        super(CMHICLModel, self).__init__()\n",
    "        self.muam = MUAM(embed_dim, num_heads)\n",
    "        self.csfm = CSFM(embed_dim, graph_out_dim)\n",
    "\n",
    "    def forward(self, text_embeds, image_embeds, text_adj, image_adj):\n",
    "        # Minimal-unit alignment scores (MUAM)\n",
    "        alignment_scores = self.muam(text_embeds, image_embeds)\n",
    "\n",
    "        # Compositional structure fusion scores (CSFM)\n",
    "        fusion_scores = self.csfm(text_embeds, image_embeds, text_adj, image_adj)\n",
    "\n",
    "        # Final output: concatenation of the alignment and fusion scores\n",
    "        output = torch.cat([alignment_scores, fusion_scores], dim=-1)\n",
    "        return output"
   ]
  },
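  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the output shapes concrete, the concatenated scores can feed a downstream classifier. This is a self-contained sketch with hypothetical shapes: the batch of 2, the 5-token length, and the two-class head for sarcastic vs. non-sarcastic are illustrative assumptions, not taken from the paper."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "\n",
    "torch.manual_seed(0)\n",
    "# Stand-ins for the per-token scores produced by MUAM and CSFM\n",
    "alignment_scores = torch.randn(2, 5)  # [batch, text tokens]\n",
    "fusion_scores = torch.randn(2, 5)     # [batch, text nodes]\n",
    "\n",
    "combined = torch.cat([alignment_scores, fusion_scores], dim=-1)  # [2, 10]\n",
    "\n",
    "# Hypothetical binary head for sarcastic vs. non-sarcastic\n",
    "classifier = nn.Linear(combined.size(-1), 2)\n",
    "logits = classifier(combined)\n",
    "print(logits.shape)"
   ]
  },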
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Model summary**:\n",
    "\n",
    "1. `CMHICLModel` combines **MUAM** and **CSFM** to align and fuse the text and image modalities at the micro and macro levels, respectively.\n",
    "2. The model takes the text and image embeddings as input, together with their adjacency matrices (used by the graph attention networks to model structural dependencies).\n",
    "\n",
    "With this implementation we can reproduce the paper's two proposed modules (**MUAM** and **CSFM**) and apply them to multimodal sarcasm detection."
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
