{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "949e8e0e",
   "metadata": {},
   "source": [
    "\n",
    "# 7-8，DIEN网络"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "86d35e35",
   "metadata": {},
   "source": [
    "阿里妈妈在CTR预估领域有3篇比较有名的文章。\n",
    "\n",
    "2017年的深度兴趣网络, DIN(DeepInterestNetwork)。 \n",
    "\n",
    "2018年的深度兴趣演化网络, DIEN(DeepInterestEvolutionNetWork)。\n",
    "\n",
    "2019年的深度会话兴趣网络, DSIN(DeepSessionInterestNetWork)。\n",
    "\n",
    "这3篇文章的主要思想和相互关系用一句话分别概括如下：\n",
    "\n",
    "第1篇DIN说，用户的行为日志中只有一部分和当前候选广告有关。可以利用Attention机制从用户行为日志中建模出和当前候选广告相关的用户兴趣表示。我们试过涨点了嘻嘻嘻。\n",
    "\n",
    "第2篇DIEN说，用户最近的行为可能比较远的行为更加重要。可以用循环神经网络GRU建模用户兴趣随时间的演化。我们试过也涨点了嘿嘿嘿。\n",
    "\n",
    "第3篇DSIN说，用户在同一次会话中的行为高度相关，在不同会话间的行为则相对独立。可以把用户行为日志按照时间间隔分割成会话并用SelfAttention机制建模它们之间的相互作用。我们试过又涨点了哈哈哈。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c5af6364",
   "metadata": {},
   "source": [
    "参考材料：\n",
    "\n",
    "* DIEN论文： https://arxiv.org/pdf/1809.03672.pdf \n",
    "\n",
    "* DIN+DIEN，机器学习唯一指定涨点技Attention： https://zhuanlan.zhihu.com/p/431131396\n",
    "\n",
    "* 从DIN到DIEN看阿里CTR算法的进化脉络： https://zhuanlan.zhihu.com/p/78365283\n",
    "\n",
    "* 代码实现参考： https://github.com/GitHub-HongweiZhang/prediction-flow\n",
    "\n",
    "上一篇文章我们介绍了DIN, 本篇文章我们介绍DIEN。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ef1be357",
   "metadata": {},
   "source": [
    "DIEN这篇文章的主要创新之处有3点：\n",
    "\n",
    "* 一是引入GRU来从用户行为日志序列中自然地抽取每个行为日志对应的用户兴趣表示(兴趣抽取层)。\n",
    "\n",
    "* 二是设计了一个辅助loss层，通过做一个辅助任务(区分真实的用户历史点击行为和负采样的非用户点击行为)来强化用户兴趣表示的学习。\n",
    "\n",
    "* 三是将注意力机制和GRU结构结合起来(AUGRU: Attention UPdate GRU)，来建模用户兴趣的时间演化得到最终的用户表示(兴趣演化层)。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7dd12a20",
   "metadata": {},
   "source": [
    "其中引入辅助Loss的技巧是神经网络涨点非常通用的一种高级技巧，值得我们学习。\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "29f3fb51",
   "metadata": {},
   "source": [
    "<br>\n",
    "\n",
    "<font color=\"red\">\n",
    " \n",
    "公众号 **算法美食屋** 回复关键词：**pytorch**， 获取本项目源码和所用数据集百度云盘下载链接。\n",
    "    \n",
    "</font> \n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ae1f76cb",
   "metadata": {},
   "source": [
    "## 一，DIEN原理解析"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f9ffc518",
   "metadata": {},
   "source": [
    "DIEN的主要出发点是，用户最近的行为可能比较远的行为更加重要。可以用循环神经网络GRU建模用户兴趣随时间的演化。\n",
    "\n",
    "DIEN选择的是不容易梯度消失且较快的GRU。\n",
    "\n",
    "![](https://tva1.sinaimg.cn/large/e6c9d24egy1h3x1brptqij20k10b8jsp.jpg)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5a7c1015",
   "metadata": {},
   "source": [
    "### 1, 兴趣抽取层"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0cef24e5",
   "metadata": {},
   "source": [
    "图中的 $b(t)$ 是用户的行为序列，而 $e(t)$是对应的embedding。随着自然发生的顺序， $e(t)$被输入GRU中，这就是兴趣抽取层。\n",
    "\n",
    "也是DIEN的第一条创新：引入GRU来从用户行为日志序列中自然地抽取每个行为日志对应的用户兴趣表示(兴趣抽取层)。\n",
    "\n"
   ]
  },
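  {
   "cell_type": "markdown",
   "id": "3f9a61b0",
   "metadata": {},
   "source": [
    "As a minimal sketch (with made-up batch and embedding sizes), the interest extraction layer is just a plain `nn.GRU` run over the behavior embeddings $e(t)$:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "# hypothetical sizes: 2 users, 4 behaviors each, embedding dim 8\n",
    "emb_dim = 8\n",
    "e_t = torch.randn(2, 4, emb_dim)  # e(t): behavior embeddings\n",
    "\n",
    "# the interest extraction layer: a plain GRU over e(t)\n",
    "gru = nn.GRU(input_size=emb_dim, hidden_size=emb_dim, batch_first=True)\n",
    "h_t, _ = gru(e_t)  # h(t): one interest vector per logged behavior\n",
    "\n",
    "print(h_t.shape)  # torch.Size([2, 4, 8])\n",
    "```\n",
    "\n",
    "Each $h(t)$ then feeds both the auxiliary loss and the interest evolution layer described next."
   ]
  },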
  {
   "cell_type": "markdown",
   "id": "e5cabf40",
   "metadata": {},
   "source": [
    "### 2，辅助loss "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b56e181c",
   "metadata": {},
   "source": [
    "如果忽略上面的AUGRU环节，GRU中的隐状态 $h(t)$就应该成为用户的行为序列最后的表示。\n",
    "\n",
    "如果直接就这样做，也不是不可以，但是$h(t)$学习到的东西可能不是我们想要的用户兴趣表示，或者说$h(t)$很难学习到有意义的信息。\n",
    "\n",
    "因为$h(t)$ 的迭代经过了很多步，然后还要和其他特征做拼接，然后还要经过MLP，最后才得到输出去计算Loss。\n",
    "\n",
    "这样的结果就是最后来了一个正样本或负样本，反向传播很难归因到 $h(t)$ 上。\n",
    "\n",
    "基于此DIEN给出了第二个要点：使用辅助Loss来强化$h(t)$的学习。\n",
    "\n",
    "我们来看看这个辅助Loss是怎么做的？这里设计了一个辅助任务，使用$h(t)$来区分真实的用户历史点击行为和负采样的非用户点击行为。\n",
    "\n",
    "由于$h(t)$ 代表着 t 时刻的用户兴趣表示，我们可以用它来预测 t+1时刻的广告用户是否点击。\n",
    "\n",
    "因为用户行为日志中都是用户点击过的广告(正样本, $e(t)$)，所以我们可以从全部的广告中给用户采样同样数量的用户没有点击过的广告作为负样本$e'(t)$。\n",
    "\n",
    "结合$h(t)$和 $e(t)$, $e'(t)$作为输入, 我们可以做一个二分类的辅助任务。\n",
    "\n",
    "这个辅助任务给$h(t)$在每个t时刻都提供了一个监督信号，使得$h(t)$能够更好地成为用户兴趣的抽取表示。\n",
    "\n",
    "真实应用场合下，你把开始的输入和最后的要求告诉网络，它就能给你一个好的结果的情况非常少。\n",
    "\n",
    "大多数时候是需要你去控制每一步的输入输出，每一步的loss才能防止网络各种偷懒作弊。\n",
    "\n",
    "辅助loss能够使得网络更受控制，向我们需要的方向发展，非常建议大家在实际业务中多试试辅助loss。\n",
    "\n"
   ]
  },
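  {
   "cell_type": "markdown",
   "id": "7c3d90aa",
   "metadata": {},
   "source": [
    "The auxiliary task can be sketched in a few lines. This is a simplified sketch that scores each (state, ad) pair with an inner product; the paper and the full implementation later in this notebook use a small MLP (the AuxiliaryNet) as the discriminator instead:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "import torch.nn.functional as F\n",
    "\n",
    "# hypothetical shapes: batch 2, sequence length 5, embedding dim 8\n",
    "B, T, H = 2, 5, 8\n",
    "h = torch.randn(B, T - 1, H)      # h(t) for t = 1 .. T-1\n",
    "e_pos = torch.randn(B, T - 1, H)  # e(t+1): clicked ads (positives)\n",
    "e_neg = torch.randn(B, T - 1, H)  # e'(t+1): sampled non-clicks (negatives)\n",
    "\n",
    "# score each (state, ad) pair, then apply binary cross entropy:\n",
    "# h(t) should score the real next click higher than the sampled non-click\n",
    "pos_logit = (h * e_pos).sum(-1)\n",
    "neg_logit = (h * e_neg).sum(-1)\n",
    "aux_loss = F.binary_cross_entropy_with_logits(\n",
    "    torch.cat([pos_logit, neg_logit]),\n",
    "    torch.cat([torch.ones_like(pos_logit), torch.zeros_like(neg_logit)]))\n",
    "print(aux_loss)\n",
    "```\n",
    "\n",
    "In DIEN this term is added to the main CTR loss with a weight hyperparameter."
   ]
  },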
  {
   "cell_type": "markdown",
   "id": "d0e6f486",
   "metadata": {},
   "source": [
    "### 3，兴趣演化层\n",
    "\n",
    "通过兴趣抽取层和辅助loss，我们得到了每个t时刻用户的一般兴趣表示。\n",
    "\n",
    "注意这个兴趣表示是一般性的，还没有和我们的候选广告做Attention关联。\n",
    "\n",
    "在DIN中，我们通过Attention机制构建了和候选广告相关的用户兴趣表示。\n",
    "\n",
    "而在DIEN中，我们希望建立的是和和候选广告相关，并且和时间演化相关的用户兴趣表示。\n",
    "\n",
    "DIEN通过结合Attention机制和GRU结构来做到这一点，这就是第三点创新AUGRU : Attention UPdate Gate GRU。\n",
    "\n",
    "下面我们进行详细讲解。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c6c0c97d",
   "metadata": {},
   "source": [
    "一般地，各种RNN序列模型层(SimpleRNN,GRU,LSTM等)可以用函数表示如下:\n",
    "\n",
    "$$h_t = f(h_{t-1},i_t)$$\n",
    "\n",
    "这个公式的含义是：t时刻循环神经网络的输出向量$h_t$由t-1时刻的输出向量$h_{t-1}$和t时刻的输入$i_t$变换而来。\n",
    "\n",
    "为了结合Attention机制和GRU结构，我们需要设计这样的一个有三种输入的序列模型\n",
    "\n",
    "$$h_t = g(h_{t-1},i_t, a_t)$$\n",
    "\n",
    "这里的$a_t$是 t时刻的用户兴趣表示输入 $i_t$和候选广告计算出的attention 得分，是个标量。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "98d78bd1",
   "metadata": {},
   "source": [
    "我们先看看 GRU的 具体函数形式： \n",
    "\n",
    "$$\n",
    "\\begin{align}\n",
    "u_t &= \\sigma(W^u i_t + U^u h_{t-1} + b^u) \\tag{1} \\\\\n",
    "r_t &= \\sigma(W^r i_t + U^r h_{t-1} + b^r) \\tag{2} \\\\\n",
    "n_t &= \\tanh(W^n i_t + r_t \\circ U^n h_{t-1} + b^n) \\tag{3} \\\\\n",
    "h_t &= h_{t-1} - u_t \t\\circ h_{t-1} + u_t \\circ n_t \\tag{4} \\\\\n",
    "\\end{align}\n",
    "$$\n"
   ]
  },
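  {
   "cell_type": "markdown",
   "id": "5d2c77e1",
   "metadata": {},
   "source": [
    "Equations (1) through (4) translate directly into tensor code. Here is a sketch with random tensors standing in for the learned parameters, just to make the shapes and the residual-style update in (4) concrete:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "torch.manual_seed(0)\n",
    "H, D = 4, 3                       # hidden size, input size\n",
    "i_t = torch.randn(1, D)           # input at step t\n",
    "h_prev = torch.randn(1, H)        # h_{t-1}\n",
    "\n",
    "# random tensors standing in for the learned parameters W, U, b\n",
    "Wu, Wr, Wn = (torch.randn(H, D) for _ in range(3))\n",
    "Uu, Ur, Un = (torch.randn(H, H) for _ in range(3))\n",
    "bu, br, bn = (torch.zeros(H) for _ in range(3))\n",
    "\n",
    "u_t = torch.sigmoid(i_t @ Wu.T + h_prev @ Uu.T + bu)       # (1) update gate\n",
    "r_t = torch.sigmoid(i_t @ Wr.T + h_prev @ Ur.T + br)       # (2) reset gate\n",
    "n_t = torch.tanh(i_t @ Wn.T + r_t * (h_prev @ Un.T) + bn)  # (3) candidate\n",
    "h_t = h_prev - u_t * h_prev + u_t * n_t                    # (4) update\n",
    "\n",
    "# (4) is the usual convex combination written in residual x + g(x) form\n",
    "assert torch.allclose(h_t, (1 - u_t) * h_prev + u_t * n_t)\n",
    "```\n"
   ]
  },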
  {
   "cell_type": "markdown",
   "id": "af28e848",
   "metadata": {},
   "source": [
    "公式中的小圈表示哈达玛积，也就是两个向量逐位相乘。\n",
    "\n",
    "其中(1)式和(2)式计算的是更新门$u_t$和重置门$r_t$，是两个长度和$h_t$相同的向量。\n",
    "\n",
    "更新门用于控制每一步$h_t$被更新的比例，更新门越大，$h_t$更新幅度越大。\n",
    "\n",
    "重置门用于控制更新候选向量$n_t$中前一步的状态$h_{t-1}$被重新放入的比例，重置门越大，更新候选向量中$h_{t-1}$被重新放进来的比例越大。\n",
    "\n",
    "注意到(4)式 实际上和ResNet的残差结构是相似的，都是 f(x) = x + g(x) 的形式，可以有效地防止长序列学习反向传播过程中梯度消失问题。\n",
    "\n",
    "如何在GRU的基础上把attention得分融入进来呢？有以下一些非常自然的想法：\n",
    "\n",
    "* 1， 用$a_t$缩放输入$i_t$, 这就是AIGRU: Attention Input GRU。其含义是相关性高的在输入端进行放大。\n",
    "\n",
    "* 2， 用$a_t$代替GRU的更新门，这就是AGRU: Attention based GRU。其含义是用直接用相关性作为更新幅度。\n",
    "\n",
    "* 3， 用$a_t$缩放GRU的更新门$u_t$，这就是AUGRU:  Attention Update Gate GRU。其含义是用用相关性缩放更新幅度。"
   ]
  },
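  {
   "cell_type": "markdown",
   "id": "92ab4ce6",
   "metadata": {},
   "source": [
    "The three variants differ only in where the scalar score $a_t$ enters a single step. A schematic one-step sketch, using `nn.GRUCell` as the plain step (the batched implementations follow below):\n",
    "\n",
    "```python\n",
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "torch.manual_seed(0)\n",
    "H = 4\n",
    "cell = nn.GRUCell(H, H)           # plain step: h_t = f(h_{t-1}, i_t)\n",
    "h = torch.zeros(1, H)\n",
    "i_t = torch.randn(1, H)\n",
    "a_t = torch.tensor(0.3)           # attention score for this step (scalar)\n",
    "\n",
    "# AIGRU: scale the input, leave the cell untouched\n",
    "h_aigru = cell(a_t * i_t, h)\n",
    "\n",
    "# AGRU: a_t replaces the update gate (approximation: the true AGRU uses\n",
    "# the raw candidate n_t; here the cell output stands in for it)\n",
    "n_t = cell(i_t, h)\n",
    "h_agru = (1 - a_t) * h + a_t * n_t\n",
    "\n",
    "# AUGRU scales the update gate u_t itself, which requires re-implementing\n",
    "# the cell internals, as done in the AttentionUpdateGateGRUCell code below\n",
    "print(h_aigru.shape, h_agru.shape)\n",
    "```\n"
   ]
  },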
  {
   "cell_type": "markdown",
   "id": "eb903e31",
   "metadata": {},
   "source": [
    "AIGRU实际上并没有改变GRU的结构，只是改变了其输入，这种方式对Attention的使用比较含蓄，我把每个历史广告的相关性强弱通过输入告诉GRU，GRU你就给我好好学吧，希望你把相关性强的广告多长点到脑子里。但是这种方式效果不是很理想，即使是相关性为0的历史广告，也会对进行更新。\n",
    "\n",
    "AGRU是改变了GRU的结构的，并且对Attention的使用非常激进，完全删掉了GRU原有的的更新门，GRU你的脑子归Attention管了，遇到相关性高的广告，一定大大地记上一笔。不过AGRU也有一个缺陷，那就是Attention得分实际上是个标量，无法反应不同维度的差异。\n",
    "\n",
    "AUGRU也是改变了GRU的结构的，并且对Attention的使用比较折衷，让Attention缩放GRU原有的更新幅度。GRU我给你找了个搭档Attention，你更新前先问问它，你两一起决定该迈多大的步子吧。\n",
    "\n",
    "DIEN论文中通过对比实验发现AUGRU的效果最好。\n",
    "\n",
    "我们看看AUGRU的核心实现代码。基本上和公式是一致的，应用了F.linear函数来实现矩阵乘法和加偏置。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "950dd47b",
   "metadata": {
    "lines_to_next_cell": 2
   },
   "outputs": [],
   "source": [
    "import torch \n",
    "from torch import nn \n",
    "\n",
    "class AttentionUpdateGateGRUCell(nn.Module):\n",
    "    def __init__(self, input_size, hidden_size, bias=True):\n",
    "        super().__init__()\n",
    "        self.input_size = input_size\n",
    "        self.hidden_size = hidden_size\n",
    "        self.bias = bias\n",
    "        # (Wu|Wr|Wn)\n",
    "        self.weight_ih = nn.Parameter(\n",
    "            torch.Tensor(3 * hidden_size, input_size))\n",
    "        # (Uu|Ur|Un)\n",
    "        self.weight_hh = nn.Parameter(\n",
    "            torch.Tensor(3 * hidden_size, hidden_size))\n",
    "        if bias:\n",
    "            # (b_iu|b_ir|b_in)\n",
    "            self.bias_ih = nn.Parameter(torch.Tensor(3 * hidden_size))\n",
    "            # (b_hu|b_hr|b_hn)\n",
    "            self.bias_hh = nn.Parameter(torch.Tensor(3 * hidden_size))\n",
    "        else:\n",
    "            self.register_parameter('bias_ih', None)\n",
    "            self.register_parameter('bias_hh', None)\n",
    "        self.reset_parameters()\n",
    "\n",
    "    def reset_parameters(self):\n",
    "        stdv = 1.0 / (self.hidden_size)**0.5\n",
    "        for weight in self.parameters():\n",
    "            nn.init.uniform_(weight, -stdv, stdv)\n",
    "            \n",
    "    def forward(self, x, hx, att_score):\n",
    "        gi = F.linear(x, self.weight_ih, self.bias_ih)\n",
    "        gh = F.linear(hx, self.weight_hh, self.bias_hh)\n",
    "        i_r, i_u, i_n = gi.chunk(3, 1)\n",
    "        h_r, h_u, h_n = gh.chunk(3, 1)\n",
    "\n",
    "        resetgate = torch.sigmoid(i_r + h_r)\n",
    "        updategate = torch.sigmoid(i_u + h_u)\n",
    "        newgate = torch.tanh(i_n + resetgate * h_n)\n",
    "\n",
    "        updategate = att_score.view(-1, 1) * updategate\n",
    "        hy = (1-updategate)*hx +  updategate*newgate\n",
    "\n",
    "        return hy\n"
   ]
  },
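  {
   "cell_type": "markdown",
   "id": "c84f1d29",
   "metadata": {},
   "source": [
    "To see the effect of the attention-scaled update gate in isolation, here is a self-contained sketch that repeats just the final two lines of the forward pass with toy values:\n",
    "\n",
    "```python\n",
    "import torch\n",
    "\n",
    "B, H = 2, 4\n",
    "hx = torch.zeros(B, H)                # previous hidden state\n",
    "newgate = torch.ones(B, H)            # pretend candidate state n_t\n",
    "updategate = torch.full((B, H), 0.8)  # raw update gate u_t\n",
    "\n",
    "for score in (0.0, 1.0):\n",
    "    att = torch.full((B,), score)     # attention score a_t per sample\n",
    "    scaled = att.view(-1, 1) * updategate\n",
    "    hy = (1 - scaled) * hx + scaled * newgate\n",
    "    print(score, hy[0, 0].item())\n",
    "# with a_t = 0 the state is left untouched (hy == hx);\n",
    "# with a_t = 1 the full update gate applies\n",
    "```\n"
   ]
  },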
  {
   "cell_type": "markdown",
   "id": "535b9c3e",
   "metadata": {},
   "source": [
    "## 二，DIEN的pytorch实现"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3e6cee47",
   "metadata": {},
   "source": [
    "下面是一个DIEN模型的完整pytorch实现。许多代码和DIN的实现是一样的。\n",
    "\n",
    "这里的AttentionGroup类用来建立候选广告属性，历史广告属性，以及负采样的广告属性的pair关系。\n",
    "\n"
   ]
  },
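  {
   "cell_type": "markdown",
   "id": "e07b55f3",
   "metadata": {},
   "source": [
    "For example, a pair list for a single ad feature might look like this (all feature names here are made up for illustration):\n",
    "\n",
    "```python\n",
    "# each pair ties a candidate-ad feature to its positive-history sequence\n",
    "# feature and, when negative sampling is used, a negative-history feature\n",
    "pairs = [\n",
    "    {'ad': 'movie_id',\n",
    "     'pos_hist': 'hist_movie_id',\n",
    "     'neg_hist': 'neg_hist_movie_id'},\n",
    "]\n",
    "\n",
    "# this is the information AttentionGroup derives from the pairs\n",
    "related = {p['ad'] for p in pairs} | {p['pos_hist'] for p in pairs}\n",
    "neg = {p['neg_hist'] for p in pairs if 'neg_hist' in p}\n",
    "print(sorted(related), sorted(neg))\n",
    "```\n"
   ]
  },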
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "3b7bd9e4",
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.nn.functional as F \n",
    "from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence\n",
    "from collections import OrderedDict\n",
    "\n",
    "class MaxPooling(nn.Module):\n",
    "    def __init__(self, dim):\n",
    "        super(MaxPooling, self).__init__()\n",
    "        self.dim = dim\n",
    "\n",
    "    def forward(self, input):\n",
    "        return torch.max(input, self.dim)[0]\n",
    "\n",
    "\n",
    "class SumPooling(nn.Module):\n",
    "    def __init__(self, dim):\n",
    "        super(SumPooling, self).__init__()\n",
    "        self.dim = dim\n",
    "\n",
    "    def forward(self, input):\n",
    "        return torch.sum(input, self.dim)\n",
    "\n",
    "class Dice(nn.Module):\n",
    "    \"\"\"\n",
    "    The Data Adaptive Activation Function in DIN, a generalization of PReLu.\n",
    "    \"\"\"\n",
    "    def __init__(self, emb_size, dim=2, epsilon=1e-8):\n",
    "        super(Dice, self).__init__()\n",
    "        assert dim == 2 or dim == 3\n",
    "\n",
    "        self.bn = nn.BatchNorm1d(emb_size, eps=epsilon)\n",
    "        self.sigmoid = nn.Sigmoid()\n",
    "        self.dim = dim\n",
    "        \n",
    "        # wrap alpha in nn.Parameter to make it trainable\n",
    "        self.alpha = nn.Parameter(torch.zeros((emb_size,))) if self.dim == 2 else nn.Parameter(\n",
    "            torch.zeros((emb_size, 1)))\n",
    "\n",
    "\n",
    "    def forward(self, x):\n",
    "        assert x.dim() == self.dim\n",
    "        if self.dim == 2:\n",
    "            x_p = self.sigmoid(self.bn(x))\n",
    "            out = self.alpha * (1 - x_p) * x + x_p * x\n",
    "        else:\n",
    "            x = torch.transpose(x, 1, 2)\n",
    "            x_p = self.sigmoid(self.bn(x))\n",
    "            out = self.alpha * (1 - x_p) * x + x_p * x\n",
    "            out = torch.transpose(out, 1, 2)\n",
    "        return out\n",
    "\n",
    "    \n",
    "class Identity(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "    def forward(self, x):\n",
    "        return x\n",
    "    \n",
    "def get_activation_layer(name, hidden_size=None, dice_dim=2):\n",
    "    name = name.lower()\n",
    "    name_dict = {x.lower():x for x in dir(nn) if '__' not in x and 'Z'>=x[0]>='A'}\n",
    "    if name==\"linear\":\n",
    "        return Identity()\n",
    "    elif name==\"dice\":\n",
    "        assert dice_dim\n",
    "        return Dice(hidden_size, dice_dim)\n",
    "    else:\n",
    "        assert name in name_dict, f'activation type {name} not supported!'\n",
    "        return getattr(nn,name_dict[name])()\n",
    "    \n",
    "def init_weights(model):\n",
    "    if isinstance(model, nn.Linear):\n",
    "        if model.weight is not None:\n",
    "            nn.init.kaiming_uniform_(model.weight.data)\n",
    "        if model.bias is not None:\n",
    "            nn.init.normal_(model.bias.data)\n",
    "    elif isinstance(model, (nn.BatchNorm1d,nn.BatchNorm2d,nn.BatchNorm3d)):\n",
    "        if model.weight is not None:\n",
    "            nn.init.normal_(model.weight.data, mean=1, std=0.02)\n",
    "        if model.bias is not None:\n",
    "            nn.init.constant_(model.bias.data, 0)\n",
    "    else:\n",
    "        pass\n",
    "\n",
    "\n",
    "class MLP(nn.Module):\n",
    "    def __init__(self, input_size, hidden_layers,\n",
    "                 dropout=0.0, batchnorm=True, activation='relu'):\n",
    "        super(MLP, self).__init__()\n",
    "        modules = OrderedDict()\n",
    "        previous_size = input_size\n",
    "        for index, hidden_layer in enumerate(hidden_layers):\n",
    "            modules[f\"dense{index}\"] = nn.Linear(previous_size, hidden_layer)\n",
    "            if batchnorm:\n",
    "                modules[f\"batchnorm{index}\"] = nn.BatchNorm1d(hidden_layer)\n",
    "            if activation:\n",
    "                modules[f\"activation{index}\"] = get_activation_layer(activation,hidden_layer,2)\n",
    "            if dropout:\n",
    "                modules[f\"dropout{index}\"] = nn.Dropout(dropout)\n",
    "            previous_size = hidden_layer\n",
    "        self.mlp = nn.Sequential(modules)\n",
    "\n",
    "    def forward(self, x):\n",
    "        return self.mlp(x)\n",
    "\n",
    "\n",
    "class AttentionGRUCell(nn.Module):\n",
    "    def __init__(self, input_size, hidden_size, bias=True):\n",
    "        super().__init__()\n",
    "        self.input_size = input_size\n",
    "        self.hidden_size = hidden_size\n",
    "        self.bias = bias\n",
    "        # (Wr|Wn)\n",
    "        self.weight_ih = nn.Parameter(\n",
    "            torch.Tensor(2 * hidden_size, input_size))\n",
    "        # (Ur|Un)\n",
    "        self.weight_hh = nn.Parameter(\n",
    "            torch.Tensor(2 * hidden_size, hidden_size))\n",
    "        if bias:\n",
    "            # (b_ir|b_in)\n",
    "            self.bias_ih = nn.Parameter(torch.Tensor(2 * hidden_size))\n",
    "            # (b_hr|b_hn)\n",
    "            self.bias_hh = nn.Parameter(torch.Tensor(2 * hidden_size))\n",
    "        else:\n",
    "            self.register_parameter('bias_ih', None)\n",
    "            self.register_parameter('bias_hh', None)\n",
    "        self.reset_parameters()\n",
    "\n",
    "    def reset_parameters(self):\n",
    "        stdv = 1.0 / (self.hidden_size)**0.5\n",
    "        for weight in self.parameters():\n",
    "            nn.init.uniform_(weight, -stdv, stdv)\n",
    "\n",
    "    def forward(self, x, hx, att_score):\n",
    "\n",
    "        gi = F.linear(x, self.weight_ih, self.bias_ih)\n",
    "        gh = F.linear(hx, self.weight_hh, self.bias_hh)\n",
    "        i_r, i_n = gi.chunk(2, 1)\n",
    "        h_r, h_n = gh.chunk(2, 1)\n",
    "\n",
    "        resetgate = torch.sigmoid(i_r + h_r)\n",
    "        newgate = torch.tanh(i_n + resetgate * h_n)\n",
    "        att_score = att_score.view(-1, 1)\n",
    "        hy = (1. - att_score) * hx + att_score * newgate\n",
    "        \n",
    "        return hy\n",
    "\n",
    "\n",
    "class AttentionUpdateGateGRUCell(nn.Module):\n",
    "    def __init__(self, input_size, hidden_size, bias=True):\n",
    "        super().__init__()\n",
    "        self.input_size = input_size\n",
    "        self.hidden_size = hidden_size\n",
    "        self.bias = bias\n",
    "        # (Wu|Wr|Wn)\n",
    "        self.weight_ih = nn.Parameter(\n",
    "            torch.Tensor(3 * hidden_size, input_size))\n",
    "        # (Uu|Ur|Un)\n",
    "        self.weight_hh = nn.Parameter(\n",
    "            torch.Tensor(3 * hidden_size, hidden_size))\n",
    "        if bias:\n",
    "            # (b_iu|b_ir|b_in)\n",
    "            self.bias_ih = nn.Parameter(torch.Tensor(3 * hidden_size))\n",
    "            # (b_hu|b_hr|b_hn)\n",
    "            self.bias_hh = nn.Parameter(torch.Tensor(3 * hidden_size))\n",
    "        else:\n",
    "            self.register_parameter('bias_ih', None)\n",
    "            self.register_parameter('bias_hh', None)\n",
    "        self.reset_parameters()\n",
    "\n",
    "    def reset_parameters(self):\n",
    "        stdv = 1.0 / (self.hidden_size)**0.5\n",
    "        for weight in self.parameters():\n",
    "            nn.init.uniform_(weight, -stdv, stdv)\n",
    "            \n",
    "    def forward(self, x, hx, att_score):\n",
    "        gi = F.linear(x, self.weight_ih, self.bias_ih)\n",
    "        gh = F.linear(hx, self.weight_hh, self.bias_hh)\n",
    "        i_u,i_r, i_n = gi.chunk(3, 1)\n",
    "        h_u,h_r, h_n = gh.chunk(3, 1)\n",
    "\n",
    "        updategate = torch.sigmoid(i_u + h_u)\n",
    "        resetgate = torch.sigmoid(i_r + h_r)\n",
    "        newgate = torch.tanh(i_n + resetgate * h_n)\n",
    "\n",
    "        updategate = att_score.view(-1, 1) * updategate\n",
    "        hy = (1-updategate)*hx +  updategate*newgate\n",
    "\n",
    "        return hy\n",
    "\n",
    "\n",
    "\n",
    "class DynamicGRU(nn.Module):\n",
    "    def __init__(self, input_size, hidden_size, bias=True, gru_type='AGRU'):\n",
    "        super(DynamicGRU, self).__init__()\n",
    "        self.input_size = input_size\n",
    "        self.hidden_size = hidden_size\n",
    "\n",
    "        if gru_type == 'AGRU':\n",
    "            self.rnn = AttentionGRUCell(input_size, hidden_size, bias)\n",
    "        elif gru_type == 'AUGRU':\n",
    "            self.rnn = AttentionUpdateGateGRUCell(\n",
    "                input_size, hidden_size, bias)\n",
    "\n",
    "    def forward(self, x, att_scores, hx=None):\n",
    "        is_packed_input = isinstance(x, nn.utils.rnn.PackedSequence)\n",
    "        if not is_packed_input:\n",
    "            raise NotImplementedError(\n",
    "                \"DynamicGRU only supports packed input\")\n",
    "\n",
    "        is_packed_att_scores = isinstance(att_scores, nn.utils.rnn.PackedSequence)\n",
    "        if not is_packed_att_scores:\n",
    "            raise NotImplementedError(\n",
    "                \"DynamicGRU only supports packed att_scores\")\n",
    "\n",
    "        x, batch_sizes, sorted_indices, unsorted_indices = x\n",
    "        att_scores, _, _, _ = att_scores\n",
    "\n",
    "        max_batch_size = batch_sizes[0]\n",
    "        max_batch_size = int(max_batch_size)\n",
    "\n",
    "        if hx is None:\n",
    "            hx = torch.zeros(\n",
    "                max_batch_size, self.hidden_size,\n",
    "                dtype=x.dtype, device=x.device)\n",
    "\n",
    "        outputs = torch.zeros(\n",
    "            x.size(0), self.hidden_size,\n",
    "            dtype=x.dtype, device=x.device)\n",
    "\n",
    "        begin = 0\n",
    "        for batch in batch_sizes:\n",
    "            new_hx = self.rnn(\n",
    "                x[begin: begin + batch],\n",
    "                hx[0:batch],\n",
    "                att_scores[begin: begin + batch])\n",
    "            outputs[begin: begin + batch] = new_hx\n",
    "            hx = new_hx\n",
    "            begin += batch\n",
    "\n",
    "        return nn.utils.rnn.PackedSequence(\n",
    "            outputs, batch_sizes, sorted_indices, unsorted_indices)\n",
    "    \n",
    "\n",
    "class Attention(nn.Module):\n",
    "    def __init__(\n",
    "            self,\n",
    "            input_size,\n",
    "            hidden_layers,\n",
    "            dropout=0.0,\n",
    "            batchnorm=True,\n",
    "            activation='prelu',\n",
    "            return_scores=False):\n",
    "        \n",
    "        super().__init__()\n",
    "        self.return_scores = return_scores\n",
    "        \n",
    "        self.mlp = MLP(\n",
    "            input_size=input_size * 4,\n",
    "            hidden_layers=hidden_layers,\n",
    "            dropout=dropout,\n",
    "            batchnorm=batchnorm,\n",
    "            activation=activation)\n",
    "        self.fc = nn.Linear(hidden_layers[-1], 1)\n",
    "\n",
    "    def forward(self, query, keys, keys_length):\n",
    "        \"\"\"\n",
    "        Parameters\n",
    "        ----------\n",
    "        query: 2D tensor, [Batch, Hidden]\n",
    "        keys: 3D tensor, [Batch, Time, Hidden]\n",
    "        keys_length: 1D tensor, [Batch]\n",
    "\n",
    "        Returns\n",
    "        -------\n",
    "        outputs: 2D tensor, [Batch, Hidden]\n",
    "        \"\"\"\n",
    "        batch_size, max_length, dim = keys.size()\n",
    "\n",
    "        query = query.unsqueeze(1).expand(-1, max_length, -1)\n",
    "\n",
    "        din_all = torch.cat(\n",
    "            [query, keys, query - keys, query * keys], dim=-1)\n",
    "\n",
    "        din_all = din_all.view(batch_size * max_length, -1)\n",
    "\n",
    "        outputs = self.mlp(din_all)\n",
    "\n",
    "        outputs = self.fc(outputs).view(batch_size, max_length)  # [B, T]\n",
    "\n",
    "        # Scale\n",
    "        outputs = outputs / (dim ** 0.5)\n",
    "\n",
    "        # Mask\n",
    "        mask = (torch.arange(max_length, device=keys_length.device).repeat(\n",
    "            batch_size, 1) < keys_length.view(-1, 1))\n",
    "        outputs[~mask] = -np.inf\n",
    "\n",
    "        # Activation\n",
    "        outputs = F.softmax(outputs, dim=1)  #DIN uses sigmoid,DIEN uses softmax; [B, T]\n",
    "\n",
    "        if not self.return_scores:\n",
    "            # Weighted sum\n",
    "            outputs = torch.matmul(\n",
    "                outputs.unsqueeze(1), keys).squeeze()  # [B, H]\n",
    "        return outputs \n",
    "    \n",
    "class AuxiliaryNet(nn.Module):\n",
    "    def __init__(self, input_size, hidden_layers, activation='sigmoid'):\n",
    "        super().__init__()\n",
    "        modules = OrderedDict()\n",
    "        previous_size = input_size\n",
    "        for index, hidden_layer in enumerate(hidden_layers):\n",
    "            modules[f\"dense{index}\"] = nn.Linear(previous_size, hidden_layer)\n",
    "            if activation:\n",
    "                modules[f\"activation{index}\"] = get_activation_layer(activation)\n",
    "            previous_size = hidden_layer\n",
    "        modules[\"final_layer\"] = nn.Linear(previous_size, 1)\n",
    "        self.mlp = nn.Sequential(modules)\n",
    "\n",
    "    def forward(self, x):\n",
    "        return torch.sigmoid(self.mlp(x))\n",
    "\n",
    "\n",
    "class Interest(nn.Module):\n",
    "    SUPPORTED_GRU_TYPE = ['GRU', 'AIGRU', 'AGRU', 'AUGRU']\n",
    "\n",
    "    def __init__(\n",
    "            self,\n",
    "            input_size,\n",
    "            gru_type='AUGRU',\n",
    "            gru_dropout=0.0,\n",
    "            att_hidden_layers=[80, 40],\n",
    "            att_dropout=0.0,\n",
    "            att_batchnorm=True,\n",
    "            att_activation='prelu',\n",
    "            use_negsampling=False):\n",
    "        super(Interest, self).__init__()\n",
    "        if gru_type not in Interest.SUPPORTED_GRU_TYPE:\n",
    "            raise NotImplementedError(f\"gru_type: {gru_type} is not supported\")\n",
    "\n",
    "        self.gru_type = gru_type\n",
    "        self.use_negsampling = use_negsampling\n",
    "\n",
    "        self.interest_extractor = nn.GRU(\n",
    "            input_size=input_size,\n",
    "            hidden_size=input_size,\n",
    "            batch_first=True,\n",
    "            bidirectional=False)\n",
    "\n",
    "        if self.use_negsampling:\n",
    "            self.auxiliary_net = AuxiliaryNet(\n",
    "                input_size * 2, hidden_layers=[100, 50])\n",
    "\n",
    "        if gru_type == 'GRU':\n",
    "            self.attention = Attention(\n",
    "                input_size=input_size,\n",
    "                hidden_layers=att_hidden_layers,\n",
    "                dropout=att_dropout,\n",
    "                batchnorm=att_batchnorm,\n",
    "                activation=att_activation)\n",
    "            \n",
    "            self.interest_evolution = nn.GRU(\n",
    "                input_size=input_size,\n",
    "                hidden_size=input_size,\n",
    "                batch_first=True,\n",
    "                bidirectional=False)\n",
    "                \n",
    "        elif gru_type == 'AIGRU':\n",
    "            self.attention = Attention(\n",
    "                input_size=input_size,\n",
    "                hidden_layers=att_hidden_layers,\n",
    "                dropout=att_dropout,\n",
    "                batchnorm=att_batchnorm,\n",
    "                activation=att_activation,\n",
    "                return_scores=True)\n",
    "\n",
    "            self.interest_evolution = nn.GRU(\n",
    "                input_size=input_size,\n",
    "                hidden_size=input_size,\n",
    "                batch_first=True,\n",
    "                bidirectional=False)\n",
    "            \n",
    "        elif gru_type == 'AGRU' or gru_type == 'AUGRU':\n",
    "            self.attention = Attention(\n",
    "                input_size=input_size,\n",
    "                hidden_layers=att_hidden_layers,\n",
    "                dropout=att_dropout,\n",
    "                batchnorm=att_batchnorm,\n",
    "                activation=att_activation,\n",
    "                return_scores=True)\n",
    "\n",
    "            self.interest_evolution = DynamicGRU(\n",
    "                input_size=input_size,\n",
    "                hidden_size=input_size,\n",
    "                gru_type=gru_type)\n",
    "\n",
    "    @staticmethod\n",
    "    def get_last_state(states, keys_length):\n",
    "        # states [B, T, H]\n",
    "        batch_size, max_seq_length, hidden_size = states.size()\n",
    "\n",
    "        mask = (torch.arange(max_seq_length, device=keys_length.device).repeat(\n",
    "            batch_size, 1) == (keys_length.view(-1, 1) - 1))\n",
    "\n",
    "        return states[mask]\n",
    "\n",
    "    def cal_auxiliary_loss(\n",
    "            self, states, click_seq, noclick_seq, keys_length):\n",
    "        # states [B, T, H]\n",
    "        # click_seq [B, T, H]\n",
    "        # noclick_seq [B, T, H]\n",
    "        # keys_length [B]\n",
    "        batch_size, max_seq_length, embedding_size = states.size()\n",
    "\n",
    "        mask = (torch.arange(max_seq_length, device=states.device).repeat(\n",
    "            batch_size, 1) < keys_length.view(-1, 1)).float()\n",
    "\n",
    "        click_input = torch.cat([states, click_seq], dim=-1)\n",
    "        noclick_input = torch.cat([states, noclick_seq], dim=-1)\n",
    "        embedding_size = embedding_size * 2\n",
    "\n",
    "        click_p = self.auxiliary_net(\n",
    "            click_input.view(\n",
    "                batch_size * max_seq_length, embedding_size)).view(\n",
    "                    batch_size, max_seq_length)[mask > 0].view(-1, 1)\n",
    "        click_target = torch.ones(\n",
    "            click_p.size(), dtype=torch.float, device=click_p.device)\n",
    "\n",
    "        noclick_p = self.auxiliary_net(\n",
    "            noclick_input.view(\n",
    "                batch_size * max_seq_length, embedding_size)).view(\n",
    "                    batch_size, max_seq_length)[mask > 0].view(-1, 1)\n",
    "        noclick_target = torch.zeros(\n",
    "            noclick_p.size(), dtype=torch.float, device=noclick_p.device)\n",
    "\n",
    "        loss = F.binary_cross_entropy(\n",
    "            torch.cat([click_p, noclick_p], dim=0),\n",
    "            torch.cat([click_target, noclick_target], dim=0))\n",
    "\n",
    "        return loss\n",
    "\n",
    "    def forward(self, query, keys, keys_length, neg_keys=None):\n",
    "        \"\"\"\n",
    "        Parameters\n",
    "        ----------\n",
    "        query: 2D tensor, [Batch, Hidden]\n",
    "        keys: 3D tensor, [Batch, Time, Hidden]\n",
    "        keys_length: 1D tensor, [Batch]\n",
    "        neg_keys: 3D tensor, [Batch, Time, Hidden]\n",
    "\n",
    "        Returns\n",
    "        -------\n",
    "        outputs: 2D tensor, [Batch, Hidden]\n",
    "        \"\"\"\n",
    "        batch_size, max_length, dim = keys.size()\n",
    "\n",
    "        packed_keys = pack_padded_sequence(\n",
    "            keys,\n",
    "            lengths=keys_length.squeeze().cpu(),\n",
    "            batch_first=True,\n",
    "            enforce_sorted=False)\n",
    "\n",
    "        packed_interests, _ = self.interest_extractor(packed_keys)\n",
    "\n",
    "        aloss = None\n",
    "        if (self.gru_type != 'GRU') or self.use_negsampling:\n",
    "            interests, _ = pad_packed_sequence(\n",
    "                packed_interests,\n",
    "                batch_first=True,\n",
    "                padding_value=0.0,\n",
    "                total_length=max_length)\n",
    "\n",
    "            if self.use_negsampling:\n",
    "                aloss = self.cal_auxiliary_loss(\n",
    "                    interests[:, :-1, :],\n",
    "                    keys[:, 1:, :],\n",
    "                    neg_keys[:, 1:, :],\n",
    "                    keys_length - 1)\n",
    "\n",
    "        if self.gru_type == 'GRU':\n",
    "            packed_interests, _ = self.interest_evolution(packed_interests)\n",
    "\n",
    "            interests, _ = pad_packed_sequence(\n",
    "                packed_interests,\n",
    "                batch_first=True,\n",
    "                padding_value=0.0,\n",
    "                total_length=max_length)\n",
    "\n",
    "            outputs = self.attention(query, interests, keys_length)\n",
    "\n",
    "        elif self.gru_type == 'AIGRU':\n",
    "            # attention\n",
    "            scores = self.attention(query, interests, keys_length)\n",
    "            interests = interests * scores.unsqueeze(-1)\n",
    "\n",
    "            packed_interests = pack_padded_sequence(\n",
    "                interests,\n",
    "                lengths=keys_length.squeeze().cpu(),\n",
    "                batch_first=True,\n",
    "                enforce_sorted=False)\n",
    "            _, outputs = self.interest_evolution(packed_interests)\n",
    "            outputs = outputs.squeeze()\n",
    "\n",
    "        elif self.gru_type == 'AGRU' or self.gru_type == 'AUGRU':\n",
    "            # attention\n",
    "            scores = self.attention(query, interests, keys_length)\n",
    "\n",
    "            packed_interests = pack_padded_sequence(\n",
    "                interests,\n",
    "                lengths=keys_length.squeeze().cpu(),\n",
    "                batch_first=True,\n",
    "                enforce_sorted=False)\n",
    "\n",
    "            packed_scores = pack_padded_sequence(\n",
    "                scores,\n",
    "                lengths=keys_length.squeeze().cpu(),\n",
    "                batch_first=True,\n",
    "                enforce_sorted=False)\n",
    "\n",
    "            outputs, _ = pad_packed_sequence(\n",
    "                self.interest_evolution(\n",
    "                    packed_interests, packed_scores), batch_first=True)\n",
    "            # pick last state\n",
    "            outputs = Interest.get_last_state(\n",
    "                outputs, keys_length.view(-1))\n",
    "\n",
    "        return outputs, aloss\n",
    "    \n",
    "class AttentionGroup(object):\n",
    "    def __init__(self, name, pairs,\n",
    "                 hidden_layers, activation='dice', att_dropout=0.0,\n",
    "                 gru_type='AUGRU', gru_dropout=0.0):\n",
    "        self.name = name\n",
    "        self.pairs = pairs\n",
    "        self.hidden_layers = hidden_layers\n",
    "        self.activation = activation\n",
    "        self.att_dropout = att_dropout\n",
    "        self.gru_type = gru_type\n",
    "        self.gru_dropout = gru_dropout\n",
    "\n",
    "        self.related_feature_names = set()\n",
    "        self.neg_feature_names = set()\n",
    "        for pair in pairs:\n",
    "            self.related_feature_names.add(pair['ad'])\n",
    "            self.related_feature_names.add(pair['pos_hist'])\n",
    "            if 'neg_hist' in pair:\n",
    "                self.related_feature_names.add(pair['neg_hist'])\n",
    "                self.neg_feature_names.add(pair['neg_hist'])\n",
    "\n",
    "    def is_attention_feature(self, feature_name):\n",
    "        return feature_name in self.related_feature_names\n",
    "\n",
    "    def is_neg_sampling_feature(self, feature_name):\n",
    "        return feature_name in self.neg_feature_names\n",
    "\n",
    "    @property\n",
    "    def pairs_count(self):\n",
    "        return len(self.pairs)\n",
    "    \n",
    "class DIEN(nn.Module):\n",
    "    def __init__(self, num_features,cat_features,seq_features, \n",
    "                 cat_nums,embedding_size, attention_groups,\n",
    "                 mlp_hidden_layers, mlp_activation='prelu', mlp_dropout=0.0,\n",
    "                 use_negsampling = False,\n",
    "                 d_out = 1\n",
    "                 ):\n",
    "        super().__init__()\n",
    "        self.num_features = num_features\n",
    "        self.cat_features = cat_features\n",
    "        self.seq_features = seq_features\n",
    "        self.cat_nums = cat_nums \n",
    "        self.embedding_size = embedding_size\n",
    "        \n",
    "        self.attention_groups = attention_groups\n",
    "        \n",
    "        self.mlp_hidden_layers = mlp_hidden_layers\n",
    "        self.mlp_activation = mlp_activation\n",
    "        self.mlp_dropout = mlp_dropout\n",
    "        \n",
    "        self.d_out = d_out\n",
    "        self.use_negsampling = use_negsampling\n",
    "        \n",
    "        #embedding\n",
    "        self.embeddings = OrderedDict()\n",
    "        for feature in self.cat_features+self.seq_features:\n",
    "            self.embeddings[feature] = nn.Embedding(\n",
    "                self.cat_nums[feature], self.embedding_size, padding_idx=0)\n",
    "            self.add_module(f\"embedding:{feature}\",self.embeddings[feature])\n",
    "\n",
    "        self.sequence_poolings = OrderedDict()\n",
    "        self.attention_poolings = OrderedDict()\n",
    "        total_embedding_sizes = 0\n",
    "        for feature in self.cat_features:\n",
    "            total_embedding_sizes += self.embedding_size\n",
    "        for feature in self.seq_features:\n",
    "            if not self.is_neg_sampling_feature(feature):\n",
    "                total_embedding_sizes += self.embedding_size\n",
    "        \n",
    "        #sequence_pooling\n",
    "        for feature in self.seq_features:\n",
    "            if not self.is_attention_feature(feature):\n",
    "                self.sequence_poolings[feature] = MaxPooling(1)\n",
    "                self.add_module(f\"pooling:{feature}\",self.sequence_poolings[feature])\n",
    "\n",
    "        #attention_pooling\n",
    "        for attention_group in self.attention_groups:\n",
    "            self.attention_poolings[attention_group.name] = (\n",
    "                self.create_attention_fn(attention_group))\n",
    "            self.add_module(f\"attention_pooling:{attention_group.name}\",\n",
    "                self.attention_poolings[attention_group.name])\n",
    "\n",
    "        total_input_size = total_embedding_sizes+len(self.num_features)\n",
    "        \n",
    "        self.mlp = MLP(\n",
    "            total_input_size,\n",
    "            mlp_hidden_layers,\n",
    "            dropout=mlp_dropout, batchnorm=True, activation=mlp_activation)\n",
    "        \n",
    "        self.final_layer = nn.Linear(mlp_hidden_layers[-1], self.d_out)\n",
    "        self.apply(init_weights)\n",
    "        \n",
    "        \n",
    "    def forward(self, x):\n",
    "        final_layer_inputs = list()\n",
    "\n",
    "        # linear\n",
    "        number_inputs = list()\n",
    "        for feature in self.num_features:\n",
    "            number_inputs.append(x[feature].view(-1, 1))\n",
    "\n",
    "        embeddings = OrderedDict()\n",
    "        for feature in self.cat_features:\n",
    "            embeddings[feature] = self.embeddings[feature](x[feature])\n",
    "\n",
    "        for feature in self.seq_features:\n",
    "            if not self.is_attention_feature(feature):\n",
    "                embeddings[feature] = self.sequence_poolings[feature](\n",
    "                    self.embeddings[feature](x[feature]))\n",
    "\n",
    "        auxiliary_losses = []\n",
    "        for attention_group in self.attention_groups:\n",
    "            query = torch.cat(\n",
    "                [embeddings[pair['ad']]\n",
    "                 for pair in attention_group.pairs],\n",
    "                dim=-1)\n",
    "            pos_hist = torch.cat(\n",
    "                [self.embeddings[pair['pos_hist']](\n",
    "                    x[pair['pos_hist']]) for pair in attention_group.pairs],\n",
    "                dim=-1)\n",
    "            \n",
    "            #keys_length: number of non-padding ids in each user history sequence\n",
    "            keys_length = torch.min(torch.cat(\n",
    "                [torch.sum(x[pair['pos_hist']]>0,axis=1).view(-1, 1)\n",
    "                 for pair in attention_group.pairs],\n",
    "                dim=-1), dim=-1)[0]\n",
    "    \n",
    "            neg_hist = None\n",
    "            if self.use_negsampling:\n",
    "                neg_hist = torch.cat(\n",
    "                    [self.embeddings[pair['neg_hist']](\n",
    "                        x[pair['neg_hist']])\n",
    "                     for pair in attention_group.pairs],\n",
    "                    dim=-1)\n",
    "                \n",
    "            embeddings[attention_group.name], tmp_loss = (\n",
    "                self.attention_poolings[attention_group.name](\n",
    "                    query, pos_hist, keys_length, neg_hist))\n",
    "            if tmp_loss is not None:\n",
    "                auxiliary_losses.append(tmp_loss)\n",
    "\n",
    "        emb_concat = torch.cat(number_inputs + [\n",
    "            emb for emb in embeddings.values()], dim=-1)\n",
    "\n",
    "        final_layer_inputs = self.mlp(emb_concat)\n",
    "\n",
    "        output = self.final_layer(final_layer_inputs)\n",
    "        \n",
    "        auxiliary_avg_loss = None\n",
    "        if auxiliary_losses:\n",
    "            auxiliary_avg_loss = sum(auxiliary_losses) / len(auxiliary_losses)\n",
    "            \n",
    "        if self.d_out==1:\n",
    "            output = output.squeeze(-1)  # keeps the batch dim even when batch_size == 1\n",
    "            \n",
    "        return output, auxiliary_avg_loss\n",
    "\n",
    "    def create_attention_fn(self, attention_group):\n",
    "        return Interest(\n",
    "            attention_group.pairs_count * self.embedding_size,\n",
    "            gru_type=attention_group.gru_type,\n",
    "            gru_dropout=attention_group.gru_dropout,\n",
    "            att_hidden_layers=attention_group.hidden_layers,\n",
    "            att_dropout=attention_group.att_dropout,\n",
    "            att_activation=attention_group.activation,\n",
    "            use_negsampling=self.use_negsampling)\n",
    "    \n",
    "    def is_attention_feature(self, feature):\n",
    "        for group in self.attention_groups:\n",
    "            if group.is_attention_feature(feature):\n",
    "                return True\n",
    "        return False\n",
    "\n",
    "    def is_neg_sampling_feature(self, feature):\n",
    "        for group in self.attention_groups:\n",
    "            if group.is_neg_sampling_feature(feature):\n",
    "                return True\n",
    "        return False\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "298364a1",
   "metadata": {},
   "outputs": [],
   "source": []
  },
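  {
   "cell_type": "markdown",
   "id": "b7a2c1d0",
   "metadata": {},
   "source": [
    "To make the `AUGRU` branch above more concrete, here is a minimal single-step AUGRU cell sketch (the names are illustrative, not the exact `AttentionUpdateGateGRUCell` this notebook uses): the attention score simply rescales the GRU update gate, so a history step with a low attention score barely changes the hidden state.\n",
    "\n",
    "```python\n",
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "class AUGRUCellSketch(nn.Module):\n",
    "    # Minimal AUGRU cell: the attention score a_t rescales the update gate u_t.\n",
    "    def __init__(self, input_size, hidden_size):\n",
    "        super().__init__()\n",
    "        self.gates = nn.Linear(input_size + hidden_size, 2 * hidden_size)\n",
    "        self.candidate = nn.Linear(input_size + hidden_size, hidden_size)\n",
    "\n",
    "    def forward(self, x, h, att_score):\n",
    "        u, r = self.gates(torch.cat([x, h], dim=-1)).chunk(2, dim=-1)\n",
    "        u, r = torch.sigmoid(u), torch.sigmoid(r)\n",
    "        h_tilde = torch.tanh(self.candidate(torch.cat([x, r * h], dim=-1)))\n",
    "        u = att_score.view(-1, 1) * u  # attention scales the update gate\n",
    "        return (1.0 - u) * h + u * h_tilde\n",
    "\n",
    "cell = AUGRUCellSketch(8, 16)\n",
    "h = cell(torch.randn(4, 8), torch.zeros(4, 16), torch.rand(4))\n",
    "print(h.shape)  # torch.Size([4, 16])\n",
    "```"
   ]
  },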
  {
   "cell_type": "markdown",
   "id": "6ce7fd9e",
   "metadata": {},
   "source": [
    "## 3. A Complete Example on the MovieLens Dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "47364ceb",
   "metadata": {},
   "source": [
    "Below is a complete DIEN example on the MovieLens rating dataset: given the movies a user has rated highly in the past, we predict whether the user will give a good rating to a candidate movie.\n",
    "\n",
    "The dataset is small, so it can be trained on a CPU. 😁"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e11dd6e2",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "id": "03d8d160",
   "metadata": {},
   "source": [
    "### 1. Prepare the Data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "e09e2ac7",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np \n",
    "import pandas as pd \n",
    "from sklearn.base import BaseEstimator, TransformerMixin\n",
    "from sklearn.preprocessing import QuantileTransformer\n",
    "from sklearn.pipeline import Pipeline, FeatureUnion \n",
    "from sklearn.impute import SimpleImputer \n",
    "from collections import Counter\n",
    "\n",
    "class CategoryEncoder(BaseEstimator, TransformerMixin):\n",
    "    \n",
    "    def __init__(self, min_cnt=5, word2idx=None, idx2word=None):\n",
    "        super().__init__() \n",
    "        self.min_cnt = min_cnt\n",
    "        self.word2idx = word2idx if word2idx else dict()\n",
    "        self.idx2word = idx2word if idx2word else dict()\n",
    "\n",
    "    def fit(self, x, y=None):\n",
    "        if not self.word2idx:\n",
    "            counter = Counter(np.asarray(x).ravel())\n",
    "\n",
    "            selected_terms = sorted(\n",
    "                list(filter(lambda x: counter[x] >= self.min_cnt, counter)))\n",
    "\n",
    "            self.word2idx = dict(\n",
    "                zip(selected_terms, range(1, len(selected_terms) + 1)))\n",
    "            self.word2idx['__PAD__'] = 0\n",
    "            if '__UNKNOWN__' not in self.word2idx:\n",
    "                self.word2idx['__UNKNOWN__'] = len(self.word2idx)\n",
    "\n",
    "        if not self.idx2word:\n",
    "            self.idx2word = {\n",
    "                index: word for word, index in self.word2idx.items()}\n",
    "\n",
    "        return self\n",
    "\n",
    "    def transform(self, x):\n",
    "        transformed_x = list()\n",
    "        for term in np.asarray(x).ravel():\n",
    "            try:\n",
    "                transformed_x.append(self.word2idx[term])\n",
    "            except KeyError:\n",
    "                transformed_x.append(self.word2idx['__UNKNOWN__'])\n",
    "\n",
    "        return np.asarray(transformed_x, dtype=np.int64)\n",
    "\n",
    "    def dimension(self):\n",
    "        return len(self.word2idx)\n",
    "\n",
    "class SequenceEncoder(BaseEstimator, TransformerMixin):\n",
    "    def __init__(self, sep=' ', min_cnt=5, max_len=None,\n",
    "                 word2idx=None, idx2word=None):\n",
    "        super().__init__() \n",
    "        self.sep = sep\n",
    "        self.min_cnt = min_cnt\n",
    "        self.max_len = max_len\n",
    "\n",
    "        self.word2idx = word2idx if word2idx else dict()\n",
    "        self.idx2word = idx2word if idx2word else dict()\n",
    "\n",
    "    def fit(self, x, y=None):\n",
    "        if not self.word2idx:\n",
    "            counter = Counter()\n",
    "\n",
    "            max_len = 0\n",
    "            for sequence in np.array(x).ravel():\n",
    "                words = sequence.split(self.sep)\n",
    "                counter.update(words)\n",
    "                max_len = max(max_len, len(words))\n",
    "\n",
    "            if self.max_len is None:\n",
    "                self.max_len = max_len\n",
    "\n",
    "            # drop rare words\n",
    "            words = sorted(\n",
    "                list(filter(lambda x: counter[x] >= self.min_cnt, counter)))\n",
    "\n",
    "            self.word2idx = dict(zip(words, range(1, len(words) + 1)))\n",
    "            self.word2idx['__PAD__'] = 0\n",
    "            if '__UNKNOWN__' not in self.word2idx:\n",
    "                self.word2idx['__UNKNOWN__'] = len(self.word2idx)\n",
    "\n",
    "        if not self.idx2word:\n",
    "            self.idx2word = {\n",
    "                index: word for word, index in self.word2idx.items()}\n",
    "\n",
    "        if not self.max_len:\n",
    "            max_len = 0\n",
    "            for sequence in np.array(x).ravel():\n",
    "                words = sequence.split(self.sep)\n",
    "                max_len = max(max_len, len(words))\n",
    "            self.max_len = max_len\n",
    "\n",
    "        return self\n",
    "\n",
    "    def transform(self, x):\n",
    "        transformed_x = list()\n",
    "\n",
    "        for sequence in np.asarray(x).ravel():\n",
    "            words = list()\n",
    "            for word in sequence.split(self.sep):\n",
    "                try:\n",
    "                    words.append(self.word2idx[word])\n",
    "                except KeyError:\n",
    "                    words.append(self.word2idx['__UNKNOWN__'])\n",
    "\n",
    "            transformed_x.append(\n",
    "                np.asarray(words[0:self.max_len], dtype=np.int64))\n",
    "\n",
    "        return np.asarray(transformed_x, dtype=object)\n",
    "    \n",
    "    def dimension(self):\n",
    "        return len(self.word2idx)\n",
    "\n",
    "    def max_length(self):\n",
    "        return self.max_len\n",
    "    "
   ]
  },
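  {
   "cell_type": "markdown",
   "id": "c3d4e5f6",
   "metadata": {},
   "source": [
    "As a quick sanity check of the encoder contract above (index 0 is reserved for `__PAD__`, known terms are numbered from 1, and rare or unseen terms fall back to `__UNKNOWN__`), here is a standalone sketch of the same vocabulary-building logic; `build_vocab` is an illustrative helper, not one of the notebook's classes.\n",
    "\n",
    "```python\n",
    "from collections import Counter\n",
    "import numpy as np\n",
    "\n",
    "def build_vocab(terms, min_cnt=2):\n",
    "    counter = Counter(terms)\n",
    "    kept = sorted(t for t in counter if counter[t] >= min_cnt)\n",
    "    word2idx = dict(zip(kept, range(1, len(kept) + 1)))\n",
    "    word2idx['__PAD__'] = 0\n",
    "    word2idx['__UNKNOWN__'] = len(word2idx)\n",
    "    return word2idx\n",
    "\n",
    "word2idx = build_vocab(['a', 'a', 'b', 'b', 'c'], min_cnt=2)\n",
    "encoded = np.asarray([word2idx.get(t, word2idx['__UNKNOWN__'])\n",
    "                      for t in ['a', 'b', 'c']], dtype=np.int64)\n",
    "print(encoded.tolist())  # [1, 2, 3] -- 'c' is too rare and maps to __UNKNOWN__\n",
    "```"
   ]
  },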
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a07aff3d",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "7c2549a7",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "preprocess number features...\n",
      "preprocess category features...\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 393.16it/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "preprocess sequence features...\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 67.73it/s]\n"
     ]
    }
   ],
   "source": [
    "from sklearn.preprocessing import QuantileTransformer\n",
    "from sklearn.pipeline import Pipeline \n",
    "from sklearn.impute import SimpleImputer \n",
    "from tqdm import tqdm \n",
    "\n",
    "dftrain = pd.read_csv(\"./eat_pytorch_datasets/ml_1m/train.csv\")\n",
    "dfval = pd.read_csv(\"./eat_pytorch_datasets/ml_1m/test.csv\")\n",
    "\n",
    "for col in [\"movieId\",\"histHighRatedMovieIds\",\"negHistMovieIds\",\"genres\"]:\n",
    "    dftrain[col] = dftrain[col].astype(str)\n",
    "    dfval[col] = dfval[col].astype(str)\n",
    "\n",
    "num_features = ['age']\n",
    "cat_features = ['gender', 'movieId', 'occupation', 'zipCode']\n",
    "seq_features = ['genres', 'histHighRatedMovieIds', 'negHistMovieIds']\n",
    "\n",
    "num_pipe = Pipeline(steps = [('impute',SimpleImputer()),('quantile',QuantileTransformer())])\n",
    "\n",
    "encoders = {}\n",
    "\n",
    "print(\"preprocess number features...\")\n",
    "dftrain[num_features] = num_pipe.fit_transform(dftrain[num_features]).astype(np.float32)\n",
    "dfval[num_features] = num_pipe.transform(dfval[num_features]).astype(np.float32)\n",
    "\n",
    "print(\"preprocess category features...\")\n",
    "for col in tqdm(cat_features):\n",
    "    encoders[col] = CategoryEncoder(min_cnt=5)\n",
    "    dftrain[col]  = encoders[col].fit_transform(dftrain[col])\n",
    "    dfval[col] =  encoders[col].transform(dfval[col])\n",
    "    \n",
    "print(\"preprocess sequence features...\")\n",
    "for col in tqdm(seq_features):\n",
    "    encoders[col] = SequenceEncoder(sep=\"|\",min_cnt=5)\n",
    "    dftrain[col]  = encoders[col].fit_transform(dftrain[col])\n",
    "    dfval[col] =  encoders[col].transform(dfval[col])\n",
    "    \n",
    "from collections import OrderedDict\n",
    "from itertools import chain\n",
    "from torch.utils.data import Dataset,DataLoader \n",
    "\n",
    "class Df2Dataset(Dataset):\n",
    "    def __init__(self, dfdata, num_features, cat_features,\n",
    "                 seq_features, encoders, label_col=\"label\"):\n",
    "        self.dfdata = dfdata\n",
    "        self.num_features = num_features\n",
    "        self.cat_features = cat_features \n",
    "        self.seq_features = seq_features\n",
    "        self.encoders = encoders\n",
    "        self.label_col = label_col\n",
    "        self.size = len(self.dfdata)\n",
    "\n",
    "    def __len__(self):\n",
    "        return self.size\n",
    "\n",
    "    @staticmethod\n",
    "    def pad_sequence(sequence,max_length):\n",
    "        #zero is special index for padding\n",
    "        padded_seq = np.zeros(max_length, np.int64)\n",
    "        padded_seq[0: sequence.shape[0]] = sequence\n",
    "        return padded_seq\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        record = OrderedDict()\n",
    "        for col in self.num_features:\n",
    "            record[col] = self.dfdata[col].iloc[idx].astype(np.float32)\n",
    "            \n",
    "        for col in self.cat_features:\n",
    "            record[col] = self.dfdata[col].iloc[idx].astype(np.int64)\n",
    "            \n",
    "        for col in self.seq_features:\n",
    "            seq = self.dfdata[col].iloc[idx]\n",
    "            max_length = self.encoders[col].max_length()\n",
    "            record[col] = Df2Dataset.pad_sequence(seq,max_length)\n",
    "\n",
    "        if self.label_col is not None:\n",
    "            record['label'] = self.dfdata[self.label_col].iloc[idx].astype(np.float32)\n",
    "        return record\n",
    "\n",
    "    def get_num_batches(self, batch_size):\n",
    "        return int(np.ceil(self.size / batch_size))\n",
    "    \n",
    "ds_train = Df2Dataset(dftrain, num_features, cat_features, seq_features, encoders)\n",
    "ds_val = Df2Dataset(dfval,num_features, cat_features, seq_features, encoders)\n",
    "dl_train = DataLoader(ds_train, batch_size=128,shuffle=True)\n",
    "dl_val = DataLoader(ds_val,batch_size=128,shuffle=False)\n",
    "\n",
    "cat_nums = {k:v.dimension() for k,v in encoders.items()} \n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "1f78510e",
   "metadata": {},
   "outputs": [],
   "source": [
    "for batch in dl_train:\n",
    "    break "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "73039199",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'gender': 4, 'movieId': 280, 'occupation': 23, 'zipCode': 124, 'genres': 20, 'histHighRatedMovieIds': 1791, 'negHistMovieIds': 3868}\n"
     ]
    }
   ],
   "source": [
    "print(cat_nums)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "69bbbfc3",
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "id": "51a9ce12",
   "metadata": {},
   "source": [
    "### 2. Define the Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "id": "5483c9d7",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "--------------------------------------------------------------------------\n",
      "Layer (type)                            Output Shape              Param #\n",
      "==========================================================================\n",
      "Embedding-1                                 [-1, 16]                   64\n",
      "Embedding-2                                 [-1, 16]                4,480\n",
      "Embedding-3                                 [-1, 16]                  368\n",
      "Embedding-4                                 [-1, 16]                1,984\n",
      "Embedding-5                              [-1, 6, 16]                  320\n",
      "MaxPooling-6                                [-1, 16]                    0\n",
      "Embedding-7                             [-1, 10, 16]               28,656\n",
      "Embedding-8                             [-1, 10, 16]               61,888\n",
      "GRU-9                                       [-1, 16]                1,632\n",
      "Linear-10                                  [-1, 100]                3,300\n",
      "Sigmoid-11                                 [-1, 100]                    0\n",
      "Linear-12                                   [-1, 50]                5,050\n",
      "Sigmoid-13                                  [-1, 50]                    0\n",
      "Linear-14                                    [-1, 1]                   51\n",
      "Linear-15                                  [-1, 100]                3,300\n",
      "Sigmoid-16                                 [-1, 100]                    0\n",
      "Linear-17                                   [-1, 50]                5,050\n",
      "Sigmoid-18                                  [-1, 50]                    0\n",
      "Linear-19                                    [-1, 1]                   51\n",
      "Linear-20                                   [-1, 16]                1,040\n",
      "BatchNorm1d-21                              [-1, 16]                   32\n",
      "BatchNorm1d-22                              [-1, 16]                   32\n",
      "Sigmoid-23                                  [-1, 16]                    0\n",
      "Dropout-24                                  [-1, 16]                    0\n",
      "Linear-25                                    [-1, 8]                  136\n",
      "BatchNorm1d-26                               [-1, 8]                   16\n",
      "BatchNorm1d-27                               [-1, 8]                   16\n",
      "Sigmoid-28                                   [-1, 8]                    0\n",
      "Dropout-29                                   [-1, 8]                    0\n",
      "Linear-30                                    [-1, 1]                    9\n",
      "AttentionUpdateGateGRUCell-31               [-1, 16]                1,632\n",
      "AttentionUpdateGateGRUCell-32               [-1, 16]                1,632\n",
      "AttentionUpdateGateGRUCell-33               [-1, 16]                1,632\n",
      "AttentionUpdateGateGRUCell-34               [-1, 16]                1,632\n",
      "AttentionUpdateGateGRUCell-35               [-1, 16]                1,632\n",
      "AttentionUpdateGateGRUCell-36               [-1, 16]                1,632\n",
      "AttentionUpdateGateGRUCell-37               [-1, 16]                1,632\n",
      "AttentionUpdateGateGRUCell-38               [-1, 16]                1,632\n",
      "AttentionUpdateGateGRUCell-39               [-1, 16]                1,632\n",
      "AttentionUpdateGateGRUCell-40               [-1, 16]                1,632\n",
      "Linear-41                                   [-1, 32]                3,136\n",
      "BatchNorm1d-42                              [-1, 32]                   64\n",
      "PReLU-43                                    [-1, 32]                    1\n",
      "Dropout-44                                  [-1, 32]                    0\n",
      "Linear-45                                   [-1, 16]                  528\n",
      "BatchNorm1d-46                              [-1, 16]                   32\n",
      "PReLU-47                                    [-1, 16]                    1\n",
      "Dropout-48                                  [-1, 16]                    0\n",
      "Linear-49                                    [-1, 1]                   17\n",
      "==========================================================================\n",
      "Total params: 137,574\n",
      "Trainable params: 137,574\n",
      "Non-trainable params: 0\n",
      "--------------------------------------------------------------------------\n",
      "Input size (MB): 0.000801\n",
      "Forward/backward pass size (MB): 0.012115\n",
      "Params size (MB): 0.524803\n",
      "Estimated Total Size (MB): 0.537720\n",
      "--------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "def create_net():\n",
    "    augru_attention_groups_with_neg = [\n",
    "    AttentionGroup(\n",
    "        name='group1',\n",
    "        pairs=[{'ad': 'movieId', 'pos_hist': 'histHighRatedMovieIds', 'neg_hist': 'negHistMovieIds'}],\n",
    "        hidden_layers=[16, 8], att_dropout=0.1, gru_type='AUGRU')\n",
    "    ]\n",
    "\n",
    "    net = DIEN(num_features=num_features,\n",
    "           cat_features=cat_features,\n",
    "           seq_features=seq_features,\n",
    "           cat_nums = cat_nums,\n",
    "           embedding_size=16,\n",
    "           attention_groups=augru_attention_groups_with_neg,\n",
    "           mlp_hidden_layers=[32,16],\n",
    "           mlp_activation=\"prelu\",\n",
    "           mlp_dropout=0.25,\n",
    "           use_negsampling=True,\n",
    "           d_out=1\n",
    "           )\n",
    "    \n",
    "    return net \n",
    "\n",
    "net = create_net() \n",
    "\n",
    "out,aloss = net(batch)\n",
    "\n",
    "from torchkeras.summary import summary \n",
    "summary(net,input_data=batch);\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "10909969",
   "metadata": {},
   "source": [
    "### 3. Train the Model"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9b16c8af-d31c-49e4-8038-d8736187df49",
   "metadata": {},
   "source": [
    "We use our beloved torchkeras library to implement an elegant training loop."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "id": "653ab7c6",
   "metadata": {},
   "outputs": [],
   "source": [
    "from torchkeras import KerasModel \n",
    "\n",
    "class StepRunner:\n",
    "    def __init__(self, net, loss_fn, accelerator=None, stage = \"train\", metrics_dict = None, \n",
    "                 optimizer = None, lr_scheduler = None\n",
    "                 ):\n",
    "        self.net,self.loss_fn,self.metrics_dict,self.stage = net,loss_fn,metrics_dict,stage\n",
    "        self.optimizer,self.lr_scheduler = optimizer,lr_scheduler\n",
    "        self.accelerator = accelerator\n",
    "        if self.stage=='train':\n",
    "            self.net.train() \n",
    "        else:\n",
    "            self.net.eval()\n",
    "    \n",
    "    def __call__(self, batch):\n",
    "        #loss\n",
    "        with self.accelerator.autocast():\n",
    "            preds, aux_loss = self.net(batch)\n",
    "            labels = batch['label']\n",
    "            loss = self.loss_fn(preds,labels)\n",
    "            if aux_loss is not None:\n",
    "                loss = loss + aux_loss\n",
    "\n",
    "        #backward()\n",
    "        if self.stage==\"train\" and self.optimizer is not None:\n",
    "            self.accelerator.backward(loss)\n",
    "            if self.accelerator.sync_gradients:\n",
    "                self.accelerator.clip_grad_norm_(self.net.parameters(), 1.0)\n",
    "            self.optimizer.step()\n",
    "            if self.lr_scheduler is not None:\n",
    "                self.lr_scheduler.step()\n",
    "            self.optimizer.zero_grad()\n",
    "            \n",
    "        all_loss = self.accelerator.gather(loss).sum()\n",
    "        all_preds = self.accelerator.gather(preds)\n",
    "        all_labels = self.accelerator.gather(labels)\n",
    "        \n",
    "        #losses (or plain metrics that can be averaged)\n",
    "        step_losses = {self.stage+\"_loss\":all_loss.item()}\n",
    "        \n",
    "        #metrics (stateful metrics)\n",
    "        step_metrics = {self.stage+\"_\"+name:metric_fn(all_preds, all_labels).item() \n",
    "                        for name,metric_fn in self.metrics_dict.items()}\n",
    "        \n",
    "        if self.stage==\"train\":\n",
    "            if self.optimizer is not None:\n",
    "                step_metrics['lr'] = self.optimizer.state_dict()['param_groups'][0]['lr']\n",
    "            else:\n",
    "                step_metrics['lr'] = 0.0\n",
    "        return step_losses,step_metrics\n",
    "    \n",
    "\n",
    "KerasModel.StepRunner = StepRunner \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 56,
   "id": "25c23e40",
   "metadata": {},
   "outputs": [],
   "source": [
    "from torchkeras.metrics import AUC\n",
    "loss_fn = nn.BCEWithLogitsLoss()\n",
    "\n",
    "metrics_dict = {\"auc\":AUC()}\n",
    "optimizer = torch.optim.Adam(net.parameters(), lr=0.002, weight_decay=0.001) \n",
    "\n",
    "model = KerasModel(net,\n",
    "                   loss_fn = loss_fn,\n",
    "                   metrics_dict= metrics_dict,\n",
    "                   optimizer = optimizer,\n",
    "                  )    \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 57,
   "id": "551bf220",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[0;31m<<<<<< 🐌 cpu is used >>>>>>\u001b[0m\n"
     ]
    },
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAiEAAAGJCAYAAABcsOOZAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjYuMiwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8o6BhiAAAACXBIWXMAAA9hAAAPYQGoP6dpAAB1HUlEQVR4nO3deVxUVf8H8M8wwLAvyiqgoKi5YqKiprlvlalkadrjVpmapZmV+uReUpmGmkv+0mwxcwmrJ8tSUjNzKfddURQXQFwAQdlmzu+P64wMMwMzA8Nl+bxfr3nBnDlz51xm4H45y/cohBACREREROXMTu4GEBERUfXEIISIiIhkwSCEiIiIZMEghIiIiGTBIISIiIhkwSCEiIiIZMEghIiIiGTBIISIiIhkwSCEiIiIZMEghGQ1a9YsKBQK3Lx5U+6mlJtLly5BoVBgzZo1cjeFbGjcuHHo0aOH3M0oVyNGjICbm5vczdAzePBgPPfcc3I3g0xgEELV0rx58/DDDz/I3YwqLT09HaNHj4avry9cXV3RpUsXHDp0yOznazQaLF++HC1atICzszNq1qyJrl274ujRoyafs3btWigUCpMXwk8//RSNGjWCSqVCUFAQJk2ahOzsbKN1L1y4gCFDhsDPzw/Ozs6oX78+/vvf/5rV9sTERHz++eeYNm2aWfWzs7OhVqvNqltVaTQarFmzBk8//TRCQkLg6uqKpk2b4r333kNOTo5BfYVCYfT2wQcf6NV755138P333xf7uSH52MvdACI5zJs3DwMHDkT//v3lbkqVpNFo8OSTT+Lo0aN466234OPjg2XLlqFz5844ePAg6tevX+IxRo0ahbVr12LYsGEYP348srOzcfjwYdy4ccNo/aysLLz99ttwdXU1+vg777yDjz76CAMHDsSECRNw6tQpLFmyBCdPnsRvv/2mV/fIkSPo3LkzgoKC8Oabb6JmzZpISkrClStXzDr/RYsWISwsDF26dDFZZ9u2bVixYgX++OMPpKenQ6lUIiwsTNe+gIAAs16rqrh37x5GjhyJtm3bYsyYMfDz88PevXsxc+ZMxMfH448//oBCodB7To8ePTBs2DC9skcffdTgfqtWrbBgwQJ89dVXNj8PspAgktHMmTMFAJGWllaur+vq6iqGDx9erq+plZiYKACIL774QpbXLw/r168XAMTGjRt1ZTdu3BBeXl7i+eefN/v5cXFxZr/mO++8Ixo2bCiGDh0qXF1d9R67fv26sLe3F//5z3/0ypcsWSIAiJ9++klXplarRdOmTUVUVJS4d++e2a+vlZeXJ3x8fMS7775r9PGsrCzxzDPPCIVCIfr06SOWLFkifv75Z7FhwwYxY8YMUb9+feHl5SU2bdpk8WvLbfjw4QY/e3Pl5uaKPXv2GJTPnj1bABDbtm3TKwcgXn31VbOO/fHHHwtXV1dx9+5dq9pGtsMghGSlDUJOnz4tnn32WeHu7i5q1KghXn/9dXH//n2D+l9//bVo2bKlcHJyEt7e3mLQoEEiKSlJr865c+dEdHS08Pf3FyqVSgQFBYlBgwaJ9PR0IYT0x6vozVRAkpKSIpRKpZg1a5bBY2fOnBEAxJIlS4QQQty6dUu8+eabomnTpsLV1VW4u7uL3r17iyNHjug9z5ogJDc3V0yfPl20bNlSeHh4CBcXF9GhQwfxxx9/6NXbsWOHACB27Nhh1mtqf+4+Pj7CyclJNGjQQEybNs3sdpny7LPPCn9/f6FWq/XKR48eLVxcXEROTk6xz4+KihJt2rQRQkhBQVZWVrH1z507JxwdHcWWLVuMXgi///57AUBs2bJFrzwtLU0AEEOGDNGV/frrrwKA+OWXX4QQQmRnZ4uCgoLiT7iQP/74QwAQO3fuNHgsPz9fdO7cWdSuXVscOHDA6PPz8/PFhx9+KBwdHcXPP/9s8Pjp06fFM888I7y9vYVKpRKRkZHixx9/1KvzxRdfCABi165d
YvTo0aJGjRrC3d1d/Oc//xG3b982OObSpUtF48aNhaOjowgMDBTjxo0Td+7cMai3b98+0adPH+Hl5SVcXFxEs2bNRGxsrO5x7c/+6tWrol+/fsLV1VX4+PiIN99806KfYWHHjh0TAMTixYv1yrVByL1794z+rSjs6NGjFge1VD44J4QqhOeeew45OTmIiYnBE088gcWLF2P06NF6dd5//30MGzYM9evXx8KFCzFx4kTEx8fj8ccfR3p6OgAgLy8PvXr1wr59+/Daa69h6dKlGD16NC5evKir8/XXX0OlUqFjx474+uuv8fXXX+OVV14x2i5/f3906tQJGzZsMHhs/fr1UCqVePbZZwEAFy9exA8//ICnnnoKCxcuxFtvvYXjx4+jU6dOuH79eql+PpmZmfj888/RuXNnfPjhh5g1axbS0tLQq1cvHDlyxKpjHjt2DFFRUfjjjz/w8ssvY9GiRejfvz/+97//6erk5+fj5s2bZt00Go3ueYcPH0bLli1hZ6f/J6ZNmza4d+8ezp07V+y5HjhwAK1bt8a0adPg6ekJNzc31K1b1+j7AAATJ05Ely5d8MQTTxh9PDc3FwDg7OysV+7i4gIAOHjwoK5s+/btAACVSoVWrVrB1dUVLi4uGDx4MG7fvm2y3Vp///03FAqFwbAAAMTExODs2bPYt28fWrduDUAautLOS9FoNEhPT8fbb7+N2NhYjBo1Cnfv3tU9/+TJk2jbti1Onz6NKVOmYMGCBXB1dUX//v2xefNmg9cbP348Tp8+jVmzZmHYsGFYu3Yt+vfvDyGErs6sWbPw6quvolatWliwYAGeeeYZfPbZZ+jZsyfy8/N19bZt24bHH38cp06dwoQJE7BgwQJ06dIFP//8s95rqtVq9OrVCzVr1sTHH3+MTp06YcGCBVi5cmWJPztjUlJSAAA+Pj4Gj61Zswaurq5wdnZG48aN8e233xo9RuPGjeHs7Iw9e/ZY1QayIbmjIKretD0hTz/9tF75uHHjBABx9OhRIYQQly5dEkqlUrz//vt69Y4fPy7s7e115YcPHzYYBjDGkuGYzz77TAAQx48f1ytv3Lix6Nq1q+5+Tk6OwX/+iYmJQqVSiTlz5uiVwcKekIKCApGbm6tXdufOHeHv7y9GjRqlK7OkJ+Txxx8X7u7u4vLly3p1NRqNwfHMuSUmJuqe5+rqqtcurS1btggAYuvWrSbP9dChQwKAqFmzpvD39xfLli0Ta9euFW3atBEKhUL8+uuvevV//vlnYW9vL06ePCmEMD4kcPDgQQFAzJ07V69869atAoBwc3PTlT399NO61x86dKjYtGmTmD59urC3txft27fX+/kY88ILL4iaNWsalGdkZAgPDw/xww8/6MpWrlwpvL29BQDRpEkTXY+NVsuWLcXKlSt197t16yaaNWum15Ok0WhE+/btRf369XVl2p6QyMhIkZeXpyv/6KOPBABdz8mNGzeEo6Oj6Nmzp95n99NPPxUAxOrVq4UQ0ucvLCxM1KlTx6CHpPDPY/jw4QKA3uddCCEeffRRERkZWezPzZTu3bsLDw8Pg9dt3769iI2NFT/++KNYvny5aNq0qQAgli1bZvQ4DRo0EH369LGqDWQ7DEJIVtog5LffftMrP336tAAgYmJihBBCLFy4UCgUCnH+/HmRlpamd2vUqJHo3r27EEKIixcvCgDipZdeEtnZ2SZf15IgJC0tTdjb2+uN8R8/flwAEJ999pnR5xQUFIibN2+KtLQ00bx5c9G/f3/dY6WdE6JWq8WtW7dEWlqaePLJJ0WLFi10j5kbhNy4cUMAEBMmTCj2tW7fvi22bdtm1q1wl7idnZ0YO3aswfHi4+MFALF582aTr/nnn3/qApt9+/bpyu/evSt8fHzEY489pivLzc0V9evXF+PHj9eVmZqXEBUVJdzc3MTq1atFYmKi+OWXX0SdOnWEg4ODUCqVunpdu3YVAETv3r31nh8TE2N0bkJRffr0EeHh
4QblGzduFKGhobqL9sGDB4VCoRAvvfSS2Lx5s5gzZ46oUaOGXhAye/Zs3VDRrVu3hEKhEHPnzjX4HdDOm7h69aoQ4mEQUvTzeffuXWFvby9eeeUVIYQQ3377rd7QU+Gfq4eHh3jmmWeEEEL8888/AoD45JNPij13bRBy48YNvfLXX39deHt7F/tcY95///1iA4uibW7atKnw8vIyOpcnKipKtG7d2uI2kG1xdQxVCEVXS9SrVw92dna4dOkSAOD8+fMQQphcVeHg4AAACAsLw6RJk7Bw4UKsXbsWHTt2xNNPP40XXngBnp6eVrXNx8cH3bp1w4YNGzB37lwA0lCMvb09oqOjdfU0Gg0WLVqEZcuWITExUW/JZc2aNa167cK+/PJLLFiwAGfOnNHrJg8LC7P4WBcvXgQANG3atNh63t7e6N69u8XHd3Z21g2BFKZdall0WKTocwHpvKKionTlbm5u6Nu3L7755hsUFBTA3t4en3zyCW7evInZs2eX2Kbvv/8egwYNwqhRowAASqUSkyZNwq5du3D27FmD13/++ef1nj9kyBBMnToVf//9d4k/E1FouEPr4MGD6NSpk26Fh3Z47f/+7/8AAP3794dardY7F39/f/z1118AgISEBAghMH36dEyfPt3o6964cQNBQUG6+0V/X9zc3BAYGKj7vbp8+TIAoGHDhnr1HB0dUbduXd3jFy5cAFDy5wUAnJyc4Ovrq1fm7e2NO3fulPjcwtavX493330XL774IsaOHVtifUdHR4wfPx5jxozBwYMH0aFDB73HhRAGq2tIfgxCqEIq+sdCo9FAoVDg119/hVKpNKhfOC/EggULMGLECPz444/4/fff8frrryMmJgb79u1DcHCwVe0ZPHgwRo4ciSNHjqBFixbYsGEDunXrpjdOPW/ePEyfPh2jRo3C3LlzUaNGDdjZ2WHixIl68yWs8c0332DEiBHo378/3nrrLfj5+UGpVCImJkZ3gQAMf25a1uagyMvLM2seBAD4+vrq3pvAwEAkJycb1NGW1apVy+RxtI/5+/sbPObn54f8/HzdHIr33nsP48aNQ2ZmJjIzMwFIS3WFELh06RJcXFzg5+cHAAgKCsJff/2F8+fPIyUlBfXr10dAQABq1aqFBg0alPj62uOUdDGtWbOm0Tq3bt3SO+9Lly7p5oVotWnTRu/+lStXdAGs9jM0efJk9OrVy+hrh4eHF9u28mDs99NS27Ztw7Bhw/Dkk09ixYoVZj8vJCQEAIx+Zu/cuWPW0nAqXwxCqEI4f/683n/0CQkJ0Gg0CA0NBSD1jAghEBYWpnfBMKVZs2Zo1qwZ3n33Xfz999947LHHsGLFCrz33nsATF+sTenfvz9eeeUVrF+/HgBw7tw5TJ06Va/Opk2b0KVLF6xatUqvPD093eikOkts2rQJdevWRVxcnF7bZ86cqVfP29tb95qFaf+j1apbty4A4MSJE8W+7t9//11srovCEhMTde9XixYtsHv3bmg0Gr3Jqfv374eLi0ux72GtWrUQEBCAa9euGTx2/fp1ODk5wd3dHUlJScjKysJHH32Ejz76yKBuWFgY+vXrZ5CUrn79+rqL0alTp5CcnIwRI0boHo+MjMT//d//Gby+dnJx0f/yi3rkkUewdu1aZGRk6PW+eXh4ICMjQ3c/ICBAL4AEHvZQAVKv0ddff40ZM2YAePieOTg4mN07df78eb33LysrC8nJyboJvHXq1AEAnD17Vnd8QAo+ExMTda9Tr149ANLnxZqeMUvs378fAwYMQKtWrbBhwwbY25t/mdL+/Iq+RwUFBbhy5QqefvrpMm0rlR5Xx1CFsHTpUr37S5YsAQD06dMHABAdHQ2lUonZs2cbdHULIXDr1i0A0sqKgoICvcebNWsGOzs7veEBV1dXgwt1cby8vNCrVy9s2LAB3333HRwdHQ0SnSmVSoO2bdy40ejF1FLa/y4LH3///v3Yu3evXr06depAqVTizz//1Ctf
tmyZ3n1fX188/vjjWL16NZKSkvQeK/waERER2LZtm1m3wsm1Bg4ciNTUVMTFxenKbt68iY0bN6Jv375QqVS68gsXLhhcjAcNGoQrV65g27Ztes//8ccf0bVrV9jZ2cHPzw+bN282uHXp0gVOTk7YvHmzQaBYmEajwdtvvw0XFxeMGTNGV96vXz+oVCp88cUXej1Yn3/+OQCUmIq9Xbt2EELorbgBgEaNGmH//v26+wMGDMDmzZuxdOlSXL58Gb/88gvmzZsHANi9ezd69uwJb29vvPDCCwCknpjOnTvjs88+M9rLlJaWZlC2cuVKvaG75cuXo6CgQPd71b17dzg6OmLx4sV67/uqVauQkZGBJ598EgDQsmVLhIWFITY21uD3xtjQk7VOnz6NJ598EqGhofj5559NDtsZO9e7d+8iNjYWPj4+iIyM1Hvs1KlTyMnJQfv27cusrVRG5JiIQqSlnZjarFkz0bdvX7F06VLxwgsvGORuEOLhxMD27duLjz76SCxfvly8/fbbon79+mL+/PlCCCE2b94sgoKCxMSJE8WyZcvE4sWLRevWrYWDg4PYu3ev7lhPPPGEcHV1FQsWLBDr1q3TmwBpyjfffCMACHd3d9G3b1+Dx2fMmCEAiBEjRoiVK1eK1157TdSoUUPUrVtXdOrUSVfPmompq1ev1q0i+uyzz8SUKVOEl5eXaNKkiahTp45e3cGDBwt7e3sxadIksXTpUtGnTx8RGRlp8JpHjhwRbm5uombNmmLq1Kli5cqVYtq0aSIiIsLsdplSUFAg2rZtK9zc3MTs2bPF0qVLRZMmTYS7u7s4c+aMXt06deoYnENKSooIDAwU7u7uYubMmWLhwoWiQYMGwtnZ2SDvSlGmJqa+/vrrYvTo0WLZsmVi0aJFIioqSigUCvHVV18Z1J0zZ44AIHr06CGWLl0qRo8eLRQKhVmJ1nJzc3U/08KuXr0q7O3txaFDh3RlY8eO1U3CdXFxEfPnzxcAhJ2dnXjuuecMkvidPHlSeHt7i5o1a4opU6aIlStXirlz54onnnhCNG/eXFdPOzG1WbNmomPHjmLJkiVi/Pjxws7OTnTo0EFvRYv2d7Bnz57i008/Fa+99ppQKpWidevWeitrtm7dKhwcHESdOnXErFmzxGeffSbeeOMN0bNnzxJ/9trXKE5mZqYICQkRdnZ24oMPPhBff/213u3vv//WO15ERIR49913xcqVK8Xs2bNFnTp1hEKhEN98843BsT/++GPh4uIiMjMzi20DlT8GISQr7R+nU6dOiYEDBwp3d3fh7e0txo8fbzQB0ffffy86dOggXF1dhaurq3jkkUfEq6++Ks6ePSuEkFbHjBo1StSrV084OTmJGjVqiC5duojt27frHefMmTPi8ccfF87OzsUmKyssMzNTV9/YH7qcnBzx5ptvisDAQOHs7Cwee+wxsXfvXtGpU6dSByEajUbMmzdP1KlTR6hUKvHoo4+Kn3/+WQwfPtzgAp6WliaeeeYZ4eLiIry9vcUrr7wiTpw4YfQ1T5w4IQYMGCC8vLyEk5OTaNiwoZg+fbrZ7SrO7du3xYsvvihq1qwpXFxcRKdOncQ///xjUM9YECKEEBcuXBADBgwQHh4ewtnZWXTt2tVkgq/CTF0Iv/jiCxEREaFLJNetWzeDZG9aGo1GLFmyRDRo0EA4ODiIkJAQ8e677+pdlIvz+uuvG10hM3z4cBEVFaW33PrChQti9+7d4s6dO+L+/fti7969usR6xly4cEEMGzZMBAQECAcHBxEUFCSeeuopvQyrRZOVeXt7Czc3NzF06FBx69Ytg2N++umn4pFHHhEODg7C399fjB071miysr/++kv06NFDuLu7C1dXV9G8eXNdsj7t+VkbhGh/L0zdCv+O/v7776JHjx66n4GXl5fo2bOniI+PN3rsqKgo8cILLxT7+iQPhRBl2JdGRES4ePEiHnnkEfz666/o1q2brvzmzZuIjIxE06ZNsW7d
Onh4eBg8V61WY/PmzRg4cKDVr79mzRqMHDkS//zzD1q1amX1caqCI0eOoGXLljh06BBatGghd3OoCM4JISIqY3Xr1sWLL75osKOrj48Ptm3bhnPnzqF+/fqYO3cu9u3bh6SkJJw4cQIrVqxAREQExowZYzBXh6zzwQcfYODAgQxAKij2hBDJyJwlsJ6ensXm1aDK5+7du5g/fz4+//xzvUmm7u7uGDp0KGbMmIHAwECrj8+eEKosuESXSEbmLIH94osv9JaQUuXn7u6OOXPmYPbs2UhISEBKSgo8PDzQqFEjODo6yt08onLDnhAiGd25c8dgKWdRTZo0KdV/xUREFRWDECIiIpIFJ6YSERGRLDgnxAiNRoPr16/D3d2dGx4RERFZQAiBu3fvolatWnrbNhjDIMSI69ev6zZCIiIiIstduXKlxE1DZQ9Cli5divnz5yMlJQURERFYsmSJwU6ShcXGxmL58uVISkqCj48PBg4ciJiYGDg5OVl9zKLc3d0BSD9AY8mEiIiIyLjMzEyEhITorqXFkTUIWb9+PSZNmoQVK1YgKioKsbGx6NWrF86ePavbNruwb7/9FlOmTMHq1avRvn17nDt3DiNGjIBCocDChQutOqYx2iEYDw8PBiFERERWMGc6g6yrY6KiotC6dWt8+umnAKS5GCEhIXjttdcwZcoUg/rjx4/H6dOnER8fryt78803sX//fvz1119WHdOYzMxMeHp6IiMjg0EIERGRBSy5hsq2OiYvLw8HDx5E9+7dHzbGzg7du3c32J5cq3379jh48CAOHDgAQNqf4ZdffsETTzxh9TEBIDc3F5mZmXo3IiIisi3ZhmNu3rwJtVoNf39/vXJ/f3+cOXPG6HOGDBmCmzdvokOHDhBCoKCgAGPGjMG0adOsPiYAxMTEYPbs2aU8IyIiIrKE7BNTLbFz507MmzcPy5YtQ1RUFBISEjBhwgTMnTsX06dPt/q4U6dOxaRJk3T3tZNqiqMNgtRqtdWvS/JxcHCAUqmUuxlERNWabEGIj48PlEolUlNT9cpTU1MREBBg9DnTp0/Hf/7zH7z00ksAgGbNmiE7OxujR4/Gf//7X6uOCQAqlQoqlcrstufl5SE5ORn37t0z+zlUsSgUCgQHB8PNzU3uphARVVuyBSGOjo6IjIxEfHw8+vfvD0CaRBofH4/x48cbfc69e/cMEp9o/5sVQlh1TEtpNBokJiZCqVSiVq1acHR0ZEKzSkYIgbS0NFy9ehX169dnjwgRkUxkHY6ZNGkShg8fjlatWqFNmzaIjY1FdnY2Ro4cCQAYNmwYgoKCEBMTAwDo27cvFi5ciEcffVQ3HDN9+nT07dtXdyEp6ZillZeXp1tx4+LiUibHpPLn6+uLS5cuIT8/n0EIEVVLajWwezeQnAwEBgIdOwLl/edQ1iBk0KBBSEtLw4wZM5CSkoIWLVpg69atuomlSUlJej0f7777LhQKBd59911cu3YNvr6+6Nu3L95//32zj1lWSkpFSxUbe6+IqDqLiwMmTACuXn1YFhwMLFoEREeXXzu4i64Rxa1xzsnJQWJiIsLCwvSytFLlwveRiKqruDhg4ECg6NVf+7/Zpk2lC0QqRZ4QIiIiKhtqNbBzJ7BunfTV1MJNtVrqATHW/aAtmzjR9PPLGoMQmZj7gamoQkNDERsbK3cziIiqvbg4IDQU6NIFGDJE+hoaKpUX9eOP+kMwRQkBXLkizRUpD5UqT0hVIddYXOfOndGiRYsyCR7++ecfuLq6lr5RRERkNVNDK9euSeUbNgC1awNbtgC//AL8+695x01OLvu2GsMgpJyV9IEp7VhcaQghoFarYW9f8sfC19e3HFpERESmmDO0MmgQoNFYfuzAwNK1zVwcjilD2dmmbzk55n1gJkwAsrJKPq6lRowYgV27dmHRokVQKBRQKBRYs2YNFAoFfv31V0RGRkKlUuGvv/7ChQsX0K9fP/j7+8PNzQ2tW7fG9u3b9Y5XdDhG
oVDg888/x4ABA+Di4oL69evjp59+MqttarUaL774IsLCwuDs7IyGDRti0aJFenU6d+6MiRMn6pX1798fI0aM0N3Pzc3FO++8g5CQEKhUKoSHh2PVqlUW/ZyIiCqL3buLH1oBpADExUX6J/eLL6T6wcEPJ6EWpVAAISHSct3ywCCkDLm5mb4980zJHxghpMc7dNAvDw01PJ6lFi1ahHbt2uHll19GcnIykpOTdanpp0yZgg8++ACnT59G8+bNkZWVhSeeeALx8fE4fPgwevfujb59+yIpKanY15g9ezaee+45HDt2DE888QSGDh2K27dvl9g2jUaD4OBgbNy4EadOncKMGTMwbdo0bNiwwaJzHDZsGNatW4fFixfj9OnT+Oyzz5gRlYiqLHOHTFasADZuBEaMAIKCpKF/wDAQ0d6PjS2/fCEcjilH5n5g8vLK/rU9PT3h6OgIFxcXXQp77aZ+c+bMQY8ePXR1a9SogYiICN39uXPnYvPmzfjpp5+KzTw7YsQIPP/88wCAefPmYfHixThw4AB69+5dbNscHBz0NhAMCwvD3r17sWHDBjz33HNmnd+5c+ewYcMGbNu2TbeLct26dc16LhFRZWTukEnRrdCio6Whf2NzE2Njy3dKAIOQMlR0GKUwpRLYt8+843zyif79S5esbpJZWrVqpXc/KysLs2bNwpYtW5CcnIyCggLcv3+/xJ6Q5s2b6753dXWFh4cHbty4YVYbli5ditWrVyMpKQn3799HXl4eWrRoYfY5HDlyBEqlEp06dTL7OURElVnHjlLgYKqHXaGQHjc2tBIdDfTrV80zplY1JS0W0X5grl0zPi9E+4F58I+82cctraKrXCZPnoxt27bh448/Rnh4OJydnTFw4EDkldBF4+DgoHdfoVBAY8aMqO+++w6TJ0/GggUL0K5dO7i7u2P+/PnYv3+/ro6dnR2K5tXLz8/Xfe/s7Fzi6xARVSVKJTB8OFAoabiOOUMrSiXQubOtWmcezgkpR0qlvGNxjo6OUJuRkGTPnj0YMWIEBgwYgGbNmiEgIACXbNgds2fPHrRv3x7jxo3Do48+ivDwcFy4cEGvjq+vL5ILjWep1WqcOHFCd79Zs2bQaDTYtWuXzdpJRGQtW+SGSkoCli+Xvi/6z2pwsLyrLc3FIKScacfigoL0y8vjAxMaGor9+/fj0qVLuHnzpsleivr16yMuLg5HjhzB0aNHMWTIELN6NKxVv359/Pvvv/jtt99w7tw5TJ8+Hf/8849ena5du2LLli3YsmULzpw5g7FjxyI9PV3v3IYPH45Ro0bhhx9+QGJiInbu3Gnx5FYiorJmSTIxc+XmAs8+C9y+DbRqBdy4AezYAXz7rfQ1MbHiByAAgxBZREdL8zzK+wMzefJkKJVKNG7cGL6+vibneCxcuBDe3t5o3749+vbti169eqFly5Y2a9crr7yC6OhoDBo0CFFRUbh16xbGjRunV2fUqFEYPnw4hg0bhk6dOqFu3bro0qWLXp3ly5dj4MCBGDduHB555BG8/PLLyLZmPTMRURnR5oYqOm9DmxvK2kBk8mTgwAHA21ta+eLiIg2tPP+89LWybA7ODeyM4AZ2VR/fRyKyNbVa6vEoaeJoYqJlQUNeHtCnD/DHH8DPPwNPPlkmzS0z3MCOiIhIZubkhrJmnxZHR+D336VbRQtALMUghGxuzJgxcHNzM3obM2aM3M0jIrKJgwfNq2duDqnCk1mVSqBQeqdKi0t0yebmzJmDyZMnG32spK46IqLKJjUVmDULWLnSvPoP8kcWSwhpOa6Hh5RLSqUqVRMrDAYhZHN+fn7w8/OTuxlERGVCrTad5GvHDuDppx8mr3RyklayFDf7cu5cacVkgwam63z2GbB2rfQ6I0YAbdqU2enIisMxREREZippuW1kJODsDLRuDezaJQUOgOncUI6OUuDSrBkwZ44UsAD6eUU++wx4/XWp/IMPqk4AArAnhIiIyCza5bZFezWuXpXK
tbme9u4FwsIAuwf/5he3T0uLFsC4ccBvvwEzZ0ppG4YNk5KQFZ3U2qYN8OabtjzD8sclukZwiW7Vx/eRiLSKG14pXKe45baAtFGcqeW2xb2GEMCGDVKgkppq+vgKReXIgsolukRERGYwN5vpjz8WH4AAxS+31e7TYiyZmEIBDBoEnDwpJR8rzsSJZZPyvaJgEEJERNVSSdlMY2Mfll2+bN4xzV1ua8zx48CdO6YftzavSEXGIEQmaiGw884drEtNxc47d6CuBKNioaGhiC38W0lEVEmp1dLwh7E/vUJIt2nTHvY6tGhh3nEDA61vk7kBTGkCnYqGE1NlEJeWhgkJCbiqnQYNIFilwqLwcET7+srYMiKi6qGkbKYAcP++VK9zZ+Dxx6XJpNeuGQ9ctCnYO3a0vk3mBjClCXQqGvaElLO4tDQMPHlSLwABgGu5uRh48iTi0tJkahkRUfVhaa+DUgksWiR9b2q5bWxs6TaO69hRCmSKHr/w64SElC7QqWgYhJQBIQSy1eoSb5kFBXj9/HkYG3jRlk1ISEBmQYFZx7NkYdPKlStRq1YtaDQavfJ+/fph1KhRuHDhAvr16wd/f3+4ubmhdevW2L59u9U/k4ULF6JZs2ZwdXVFSEgIxo0bhyxt9h4As2bNQosi/ZuxsbEIDQ3VK1u9ejWaNGkClUqFwMBAjB8/3uo2EVHlUzhfxs6dJU/KLK6+RgP89BOQnW1dr0N0tLQ6JShIv05wcNmsWimPQKei4XBMGbin0cCtDGYKCQBXc3Ph+ddfZtXP6tgRrmZ+Gp999lm89tpr2LFjB7p16wYAuH37NrZu3YpffvkFWVlZeOKJJ/D+++9DpVLhq6++Qt++fXH27FnUrl3b4nOxs7PD4sWLERYWhosXL2LcuHF4++23sWzZMrOPsXz5ckyaNAkffPAB+vTpg4yMDOzZs8fithBR5RQXZzy/xqJFxi/4puovWADk5AAffgicOgUsXizl5rBmeCU6GujXr+QlvdbSBjqm8opU9OW5lmIQUk14e3ujT58++Pbbb3VByKZNm+Dj44MuXbrAzs4OERERuvpz587F5s2b8dNPP1nV+zBx4kTd96GhoXjvvfcwZswYi4KQ9957D2+++SYmTJigK2vdurXFbSGiysdUYjDtypWiPQ/FJRIbNOjhfQ8PoKDgYa/DwIFSwFH4eSX1OmiX29qKrQOdioRBSBlwsbNDlhmDdH+mp+OJ48dLrPdLs2Z43MvLrNe1xNChQ/Hyyy9j2bJlUKlUWLt2LQYPHgw7OztkZWVh1qxZ2LJlC5KTk1FQUID79+8jKSnJotfQ2r59O2JiYnDmzBlkZmaioKAAOTk5uHfvHlxcXEp8/o0bN3D9+nVdwERE1UdJK1cUCilfRr9+UllmJjB+fPH7s9jZAe+9J/WAeHpKZRW518HWgU5FwSCkDCgUCrOGRXrWqIFglQrXcnONzgtRQFol07NGDShNzUwqhb59+0IIgS1btqB169bYvXs3PvnkEwDA5MmTsW3bNnz88ccIDw+Hs7MzBg4ciLy8PItf59KlS3jqqacwduxYvP/++6hRowb++usvvPjii8jLy4OLiwvs7OwM5rTk5+frvnd2di7dyRJRpVXSyhVtvgxX14d7rZREowHatXsYgGhVp16HiohBSDlSKhRYFB6OgSdPQgHoBSLakCM2PNwmAQgAODk5ITo6GmvXrkVCQgIaNmyIli1bAgD27NmDESNGYMCAAQCArKwsXLp0yarXOXjwIDQaDRYsWAC7B701GzZs0Kvj6+uLlJQUCCGgeHC+R44c0T3u7u6O0NBQxMfHo0uXLla1g4gqJ3NXrpgbgJR03OrS61ARcXVMOYv29cWmJk0QpFLplQerVNjUpInN84QMHToUW7ZswerVqzF06FBdef369REXF4cjR47g6NGjGDJkiMFKGnOFh4cjPz8fS5YswcWLF/H1119jxYoVenU6d+6MtLQ0fPTRR7hw4QKWLl2K
X3/9Va/OrFmzsGDBAixevBjnz5/HoUOHsGTJEqvaRESVh7krV779FkhLkzZ/K8vjUvlhECKDaF9fXGrbFjsiIvBto0bYERGBxLZtyyVRWdeuXVGjRg2cPXsWQ4YM0ZUvXLgQ3t7eaN++Pfr27YtevXrpekksFRERgYULF+LDDz9E06ZNsXbtWsTExOjVadSoEZYtW4alS5ciIiICBw4cwOTJk/XqDB8+HLGxsVi2bBmaNGmCp556CufPn7eqTURUOdy/D3z5ZfF1tPkynnsO8PEBunWrfvk1qgruomsEd9Gt+vg+ElU8CQnSapWjRx+uWDG1csXU6hjAvPpkO9xFl4iIbM7SRGLF+eEHIDJSCkB8fYFt24Dvvzc/MZitE4mRbXBiKlls7dq1eOWVV4w+VqdOHZw8ebKcW0RE5c3SRGKAFKQYW4Vy/ToweLA00fSxx4D16x8GE5asXOFKl8qHQQhZ7Omnn0ZUVJTRxxwcHMq5NURU3ixNJKZ9TnFBy6JFwLlzwAcfAIX/jFi6coUrXSoXBiFkMXd3d7i7u8vdDCKSgSWJxLQ9EMVlM9UGLSY6V6mK45wQK3E+b+XG94/IOuYmEtNup5WQIAUYxf3KTZxYuvkkVHkxCLGQdrjh3r17MreESkObCVbJwWIii5ibSExbLzYWuHnTdL2iQQtVLxyOsZBSqYSXlxdu3LgBAHBxcdFl/KTKQaPRIC0tDS4uLrC3568AkSXMTfilrWdup6O5wQ1VLfwLbIWAgAAA0AUiVPnY2dmhdu3aDCCJCjG1eiUrS5o02rKlVBYcbHpIRqGQHtcmBnv2WcCczbOZzbR6qhBByNKlSzF//nykpKQgIiICS5YsQZs2bYzW7dy5M3bt2mVQ/sQTT2DLli0AgBEjRuDLIin3evXqha1bt5ZJexUKBQIDA+Hn56e36RpVHo6Ojrp9bYjI+OqVoCCgSxfg118BJyfgwgVApZJWshSXGCw29uGkVG3Qcu2a8V6RokELVS+yByHr16/HpEmTsGLFCkRFRSE2Nha9evXC2bNn4efnZ1A/Li5Ob2fXW7duISIiAs8++6xevd69e+OLL77Q3VcV2aulLCiVSs4pIKJKr7glt998I31fvz5w6RLQsOHDxGDGltzGxuovz1UqHwYtprKfFg5aqHqRPW17VFQUWrdujU8//RSANF4fEhKC1157DVOmTCnx+bGxsZgxYwaSk5Ph6uoKQOoJSU9Pxw8//GBVmyxJOUtEVJmp1UBoaPErXmrUkBKKFf1fztTwjTHGelpCQgyDFqr8LLmGytoTkpeXh4MHD2Lq1Km6Mjs7O3Tv3h179+416xirVq3C4MGDdQGI1s6dO+Hn5wdvb2907doV7733HmrWrGn0GLm5ucgttCd0ZmamFWdDRFT5lLTkFgBu3wb27jVMAmZJYjBmMyVjZA1Cbt68CbVaDX9/f71yf39/nDlzpsTnHzhwACdOnMCqVav0ynv37o3o6GiEhYXhwoULmDZtGvr06YO9e/caHT6JiYnB7NmzS3cyRESVkKVLbkuD2UypKNnnhJTGqlWr0KxZM4NJrIMHD9Z936xZMzRv3hz16tXDzp070a1bN4PjTJ06FZMmTdLdz8zMREhIiO0aTkRUAeTlAb/8Yl5drl4hW5B1eYCPjw+USiVSU1P1ylNTU3XLYE3Jzs7Gd999hxdffLHE16lbty58fHyQkJBg9HGVSgUPDw+9GxFRVZaUBDz+uDTx1NHx4STRohQKae4GV6+QLcgahDg6OiIyMhLx8fG6Mo1Gg/j4eLRr167Y527cuBG5ubl44YUXSnydq1ev4tatWwhkKE9EhC1bgEcfBfbvB7y8gDfekMqLBiJcvUK2JnuihEmTJuH//u//8OWXX+L06dMYO3YssrOzMXLkSADAsGHD9Cauaq1atQr9+/c3mGyalZWFt956C/v27cOlS5cQHx+Pfv36ITw8HL169SqXcyIikptaDezcCaxbJ31Vq4GCAmDq
VOCpp6TJpq1aAYcOSTvXbtok5QUpLDjY+I64RGVF9jkhgwYNQlpaGmbMmIGUlBS0aNECW7du1U1WTUpKMkgqdfbsWfz111/4/fffDY6nVCpx7NgxfPnll0hPT0etWrXQs2dPzJ071ya5QoiIykNpl8MGBgLe3sCpU9L9114D5s9/uOyWq1dIDrLnCamImCeEiCoSY0FFcLCUBKxoL4WpxGPaRGFOTsCXXwLPPWf7dlP1ZMk1VPbhGCIiMk0bVBTN5XHtmlQeF/ewTK2WghVj/1oKIQUiXl7AM8/YtMlEZpN9OIaIiIwzJ6iYOBE4fx5ITweOHy8+8ZgQQEqKNOTCfB1UETAIISIqZ+bO7ygpm6kQwJUrwKxZQE6O+a9fFonHiMoCgxAionJkzvwOIYBp04CvvzbvmJ07SxvLZWYChfbtNInZCqii4MRUIzgxlYhsoaRJo99//zAQeewx4O+/zTvujh1SIKLdjO7aNeNDOAqFFPAkJnLVC9kOJ6YSEVUwJc3vAIDXX5fqAcCUKcDatUCtWuZnM1UqpR4V7WNF6wJMPEYVC4MQIqJyYM5utdeuSfUAoG9fYMgQYMkS6b65QUV0NBOPUeXBIISIqBxYu1utNUFFdDRw6ZI0TPPtt9LXxEQGIFTxcGIqEVE5MHcyqLF61mQzVSq5DJcqPgYhRETloGNHqffC1JCMdtKoqd1qGVRQVcThGCKicqCdNKpQcNIokRaDECIiG8vPl5KKcdIokT4OxxAR2di0acCqVdKSW+5WS/QQgxAiIhuKiwM+/lj6/v596SvndxBJOBxDRGQj584BI0ZI37/5JodbiIpiEEJEZAPZ2cAzzwB370rDLTExcreIqOJhEEJEVMaEAMaOBU6cAPz9gfXrAQcHuVtFVPEwCCEiKmMbNkg74CqVUgDCXWuJjOPEVCKiUlKr9Ve7PPUU8MorQL16QKdOcreOqOJiEEJEVApxcdLuuIUzoQYHS4nJBgyQr11ElQGHY4iIrBQXBwwcaJiK/do1qXzzZnnaRVRZMAghIrKCWi31gAhh+Ji2bOJEqR4RGccghIjICrt3m96MDpACkStXpHpEZByDECIiCxUUABs3mlc3Odm2bSGqzDgxlYiokKIrXYzt67JmDbBsmXnH4/JcItPYE0JE9EBcHBAaCnTpAgwZIn0NDQW++AI4fPhhvSFDgMaNAQ8PQKEwfiyFAggJkYIYIjKOQQgREUyvdLl6FRg1CujbF9BopDIXFykb6hdfSPeLBiLa+7Gx3B2XqDgMQoio2itupYvWjRv68zsUCmlDuk2bgKAg/brBwVI5N6wjKh7nhBBRtVfSShcAyM8Hzp83DDiio4F+/UqeR0JEhhiEEFG1Z+4KFlP1lEqgc+cyaw5RtcHhGCKq9sxdwcKVLkRli0EIEVV7HTtK8zi40oWofDEIIaJq7eZNYPZs4JNPpPtc6UJUfhiEEFG1dfs20KMHMHeuNLGUK12IyhcnphJRtZSRAfTqBRw5Avj7A2PHAo88wpUuROWJQQgRVSrmpFUvyd27QJ8+wL//AjVrAtu3SwEIwJUuROWJQQgRVRpxcVJSscI5PYKDgUWLTA+XFA1aWraUsp/u3Qt4e0sBSNOm5dN+ItLHIISIKgVtWvWiWU2vXZPKjc3bMBa0ODkBOTnSvi+//w60aGHzphORCZyYSkQVXnFp1bVlEydK9bRM7QWTkyN9nTIFaNXKJs0lIjMxCCGiCq+ktOpCAFeuAM89B3zwAbB+vTTR1NReMAoFsHy5ftBCROWPwzFEVOGZm1Y9Lk66lUQbtOzezUmoRHJiEEJEFZ656dKHD5d6N/bvlzabK4m5wQ0R2UaFGI5ZunQpQkND4eTkhKioKBw4cMBk3c6dO0OhUBjcnnzySV0dIQRmzJiBwMBAODs7o3v37jhvzl8kIqqQwsIAu2L+WmnTqq9aBXz9NbBypXnH5V4wRPKSPQhZv349Jk2ahJkzZ+LQoUOIiIhAr169cOPG
DaP14+LikJycrLudOHECSqUSzz77rK7ORx99hMWLF2PFihXYv38/XF1d0atXL+RoZ6QRUaWydSug0Ujfm5NWnXvBEFUSQmZt2rQRr776qu6+Wq0WtWrVEjExMWY9/5NPPhHu7u4iKytLCCGERqMRAQEBYv78+bo66enpQqVSiXXr1pl1zIyMDAFAZGRkWHAmRGRLmzcLsWyZEMHBQkizOqRbSIgQ339vWP/774VQKKRb4fraMmPPIaLSs+QaKmtPSF5eHg4ePIju3bvryuzs7NC9e3fs3bvXrGOsWrUKgwcPhqurKwAgMTERKSkpesf09PREVFSUyWPm5uYiMzNT70ZEFUv//tKKl0uXgB07gG+/lb4mJhpPVBYdzb1giCo6WSem3rx5E2q1Gv7+/nrl/v7+OHPmTInPP3DgAE6cOIFVq1bpylJSUnTHKHpM7WNFxcTEYPbs2ZY2n4hs6MQJKffHmjVS4KBlSVr16GjuBUNUkck+J6Q0Vq1ahWbNmqFNmzalOs7UqVORkZGhu125cqWMWkhE1sjIkAKI+HjgrbdKdyxt0PL889JXBiBEFYesQYiPjw+USiVSU1P1ylNTUxEQEFDsc7Ozs/Hdd9/hxRdf1CvXPs+SY6pUKnh4eOjdiEgeQgAjR0pLbENCgCVL5G4REdmKrEGIo6MjIiMjER8fryvTaDSIj49Hu3btin3uxo0bkZubixdeeEGvPCwsDAEBAXrHzMzMxP79+0s8JhHJ7+OPgc2bAQcHYONGwMdH7hYRka3Inqxs0qRJGD58OFq1aoU2bdogNjYW2dnZGDlyJABg2LBhCAoKQkxMjN7zVq1ahf79+6NmzZp65QqFAhMnTsR7772H+vXrIywsDNOnT0etWrXQv3//8jotIrLCzp3Sni6AtDNuVJSszSEiG5M9CBk0aBDS0tIwY8YMpKSkoEWLFti6datuYmlSUhLsimQpOnv2LP766y/8/vvvRo/59ttvIzs7G6NHj0Z6ejo6dOiArVu3wsnJyebnQ0TmU6sfThp1cADGjZPygfznP8CYMXK3johsTSGEqS2eqq/MzEx4enoiIyOD80OIbCQuTtoZt/DGdA4O0gqW06cBFxf52kZE1rPkGlqpV8cQUeUUFwcMHGi4M25BgbSx3Nat8rSLiMoXgxAiKldqtdQDYqwPVls2caJUj4iqNgYhRFSudu827AEpTAipN2T37vJrExHJg0EIEZWra9fMq5ecbNt2EJH8GIQQUbnZvx8wd4eEwEDbtoWI5Cf7El0iqjoKL7ktuk/LggXA5MnS9wqF8Tkh2seCg6XnElHVxp4QIioTcXFAaCjQpQswZIj0NTRUKgek+0olMGIEsGqVFGwoFPrH0N6PjeUeL0TVAXtCiKjUtEtui/ZuXL0qlW/aJG1Il5AgBSYA4OlpmCckOFgKQKKjy6vlRCQnJiszgsnKiMynVkuBhakVL9rhlcREw96N4oZviKhysuQayp4QIioVS5bcdu6s/5hSaVhGRNUH54QQUamYu5SWS26JqCgGIURUKuYupeWSWyIqikEIEZVKx47SJFNTFAogJIRLbonIEIMQIioVpRJYvVr6nktuicgSDEKIyCqXLgFZWdL30dHA998DQUH6dYKDHy7PJSIqiqtjiMhily8DnTpJQcYvv0jDMdHRQL9+XHJLROZjEEJEFrl2DejaFUhKApydgZych3NCuOSWiCzB4RgiMltKihSAXLwI1K0LxMcD/v5yt4qIKiv2hBCRUUWzmTZqBHTvDpw7B9SuDfzxh+EcECIiSzAIISIDcXGG+7o4OAD5+UCtWlIPSJ068rWPiKoGBiFEpMfUZnT5+dLXd94BwsPLv11EVPVwTghRNaFWAzt3AuvWSV/VauN1JkwwDEC0FArg44+NP5eIyFIMQoiqgbg4aafbLl2AIUOkr6GhUnlhu3aZvxkdEVFpcTiGqIozNbxy7ZpU/skn0tLanTuB338375jcjI6IygKDEKIqrLjh
FSGk4ZV33gFycy07LjejI6KywOEYoips9+6Sh1dyc4HISOD996X6QUGGe8BocTM6IipLVgUhGRkZuH37tkH57du3kZmZWepGEVHZMHfY5M03gWnTgA4dgMWLpTJuRkdEtmZVEDJ48GB89913BuUbNmzA4MGDS90oIio9tRrYt8+8uoWHV6KjpU3nuBkdEdmaQghTi/FMq1GjBvbs2YNGjRrplZ85cwaPPfYYbt26VWYNlENmZiY8PT2RkZEBDw8PuZtDZDG1WloBU9IqFoVCCi4SEw17N4pmTOVmdERkDkuuoVZNTM3NzUVBQYFBeX5+Pu7fv2/NIYnIAiUFCEol0K4dcOQIMHgw8PnnUnnhfzlKGl7hZnREZGtWDce0adMGK1euNChfsWIFIiMjS90oIjLNVM6POXOAkycf1ps5Ezh9Gli5ksMrRFQxWTUcs2fPHnTv3h2tW7dGt27dAADx8fH4559/8Pvvv6NjJZ86z+EYqqhM5fzQql9fCjyM9WxweIWIyoMl11CrghAAOHLkCObPn48jR47A2dkZzZs3x9SpU1G/fn2rGl2RMAihikitlno8ilty6+YmBRlubuXWLCIiPTafEwIALVq0wNq1a619OhFZqKScHwCQlQX8+y/nchBR5WBVEJKUlFTs47Vr17aqMURkmrk5P5hSnYgqC6uCkNDQUChMpVQEoOYWm0RlSgjzU6UzpToRVRZWBSGHDx/Wu5+fn4/Dhw9j4cKFeP/998ukYUTVialJo3l5wGefAatXA3/+Ka1ouXbN+MRUbc6PSj4vnOR25AgwdSoQEwO0aCF3a6iKsyoIiYiIMChr1aoVatWqhfnz5yOaa/6IzBYXJ20yV3i+R3CwlN/jhx+AhASpbM0aYNEiaXWMQmFZzg8is33/PbB1K9C6NYMQsrky3cCuYcOG+Oeff8rykERVmnbJbdEJp1evAh9/LAUgfn7A8uXA2LFMqU7l4H//0/9KZENWLdEtukmdEALJycmYNWsWzpw5gyNHjpRV+2TBJbpUHsxZcuvhAVy+DHh5GT6XOT+ozKWmAgEB+vf9/ORrD1VKNl+i6+XlZTAxVQiBkJAQoxvbEZEhc5bcZmZKQ/RFl9wypTrZxG+/Gd7/z3/kaQtVC1YFITt27NC7b2dnB19fX4SHh8Pe3urUI0TVCpfcVl9qIbA7PR3JeXkIdHRERy8vKItZcVheNFu2AEol7NRqaJRKYMsW2DEIIRuyKmLo1KkTAODUqVNISkpCXl4e7ty5g3PnzgEAnn76abOPtXTpUsyfPx8pKSmIiIjAkiVL0KZNG5P109PT8d///hdxcXG4ffs26tSpg9jYWDzxxBMAgFmzZmH27Nl6z2nYsCHOnDlj6WkS2RSX3FZPcWlpmJCQgKu5ubqyYJUKi8LDEe3ra9sXv3ZNGmIx4o/bt9H655/h/iDFgp1ajcyff8a/27eja40axo/n7284QYnIAlYFIRcvXkR0dDSOHTsGhUIB7bQS7RCNuXlC1q9fj0mTJmHFihWIiopCbGwsevXqhbNnz8LPyDhkXl4eevToAT8/P2zatAlBQUG4fPkyvIoMmDdp0gTbt29/eJLsnaEKqGNHaUKpqSEZLrmteuLS0jDw5EkUnYh3LTcXA0+exKYmTWwbiAwbBvzxh9GHugLQFOmNcbt3D1179DB9vG7dgEJ/a4ksZdXqmAkTJiA0NBQ3btyAi4sLTpw4gT///BOtWrXCzp07zT7OwoUL8fLLL2PkyJFo3LgxVqxYARcXF6xevdpo/dWrV+P27dv44Ycf8NhjjyE0NBSdOnUyWDJsb2+PgIAA3c3Hx8ea0ySyKaUSmDfP+GNcclv1qIXAhIQEgwAEgK5sYkIC1NZt52WeMWMMZzkXYlfktYve1+PlBbzyStm0i6otq4KQvXv3Ys6cOfDx8YGdnR2USiU6dOiAmJgYvP7662YdIy8vDwcPHkT37t0fNsbODt27d8fevXuNPuenn35Cu3bt8Oqr
r8Lf3x9NmzbFvHnzDHpezp8/j1q1aqFu3boYOnRoiWnmc3NzkZmZqXcjKg/aXpCinXVcclv17E5P1xuCKUoAuJKbi93p6bZrxLPPAmfPAgMGSPctnYeirT9ggHScZ58t2/ZRtWNVEKJWq+Hu7g4A8PHxwfXr1wEAderUwdmzZ806xs2bN6FWq+Hv769X7u/vj5SUFKPPuXjxIjZt2gS1Wo1ffvkF06dPx4IFC/Dee+/p6kRFRWHNmjXYunUrli9fjsTERHTs2BF379412ZaYmBh4enrqbiEhIWadA1Fp3LsHfPKJ9P3nnwM7dgDffit9TUxkAFLVJOfllWk9q/n5SQlq1q+H8PSE2syuNo1SCXh6AuvXS8/n0l0qA1ZNlmjatCmOHj2KsLAwREVF4aOPPoKjoyNWrlyJunXrlnUbdTQaDfz8/LBy5UoolUpERkbi2rVrmD9/PmbOnAkA6NOnj65+8+bNERUVhTp16mDDhg148cUXjR536tSpmDRpku5+ZmYmAxGyuf/7PyAtDQgLA4YONewNoaol0NGxTOuVpLgVOFkFBVjTvj2+/vZbzJw1C30OHEBxfSICwJ0uXVBz7VoGH1SmrPqz9+677yI7OxsAMGfOHDz11FPo2LEjatasifXr15t1DB8fHyiVSqQWmamdmpqKgMLJcgoJDAyEg4MDlIUi90aNGiElJQV5eXlwNPLL6+XlhQYNGiBBm/vaCJVKBZVKZVa7icpCbi4wf770/ZQpDECqg45eXqhpb49bBQVGH1dAWiXTsZg5G+YytQLnv7VrI+H+fXyenIwMtRpwdsaJRx5Bz4MHYV/MggK1Ugnvtm0ZgJRCRV2WLTer/vT16tVL9314eDjOnDmD27dvw9vbu9jddQtzdHREZGQk4uPj0b9/fwBST0d8fDzGjx9v9DmPPfYYvv32W2g0GtjZSSNJ586dQ2BgoNEABACysrJw4cIF/Idr3akCWbdOWi0ZFAQMHy53a6g8bLt9GxkmAhBA6m2IDQ8v9YXJ1Aqcq7m5GHv+vO5+A2dnTAgOxitHjsCuhBWNSrUaip9/BubOLVXbquuFWNZl2RVcmf3/VcPUOvJiTJo0CcOHD0erVq3Qpk0bxMbGIjs7GyNHjgQADBs2DEFBQYiJiQEAjB07Fp9++ikmTJiA1157DefPn8e8efP0JsNOnjwZffv2RZ06dXD9+nXMnDkTSqUSzz//fNmcKFU7tkiRPmSIdFyVSrpR1bbjzh0MOHkSBQDaeXjgSk4OrhaZ+2EHINTJqVSvU9wKHC2VQoGNTZrgyZo1YZeaChw7pve4RqGAnRC6r4DUS4MjR6QcI0Xm8ZnL2gtxZQ9cZF+WXcHJ2gk8aNAgpKWlYcaMGUhJSUGLFi2wdetW3WTVpKQkXY8HAISEhOC3337DG2+8gebNmyMoKAgTJkzAO++8o6tz9epVPP/887h16xZ8fX3RoUMH7Nu3D77V+E0m65na4XbRotJNHHV0BExMUaIqZk9GBvoeP44cjQZ9a9bEpiZNoFQo9C6sS69dw6abN/HS2bM40LIl7O2s21u0pBU4AJArBNyVStgpFAZp2oVSCbWbG06++CIeWbUKiqwsKAr3kvz2m5RrxELWXogrew9CScuyFZCWZffz8alUgVVZsmoDu6qOG9gR8HCH26K/Idq/FdYsodVopBvngFQP/2ZmotvRo8hUq9HD2xs/NW0KJyPdaKl5eWh04ADuFBTgo7p18Vbt2la93rrUVAw5fbrEet82aoTn/f2BQYOkD7IQ0m3AAGDFCmnux40bUl6RzZuhUSggACT27YvwH3+0qE1qIRC6b5/J4Eg7FyaxbVu9C7GpwEVbozL0IOy8cwddjh4tsd6OiAh09vYuhxaVD0uuodaF20RVnFot9YAYC9G1ZRMnSvUs8cMPQMOGwDfflLaFVNEdy8pCz2PHkKlW43FPT/xgIgABAH9HRyyoVw8AMOPSJSTcu2fVa1q0AqegANi6VYqKjS29
LbSUN9fdHUoh4Bsfj+1paRa1ydz8KJH//ouRZ85g9qVL+CI5GWPOnSu3xG5qIbDzzh2sS03Fzjt3SjyuOfXPZGfjwytXzHp9my/LrsD4/xiRESXtcCsEcOWKVM/c3WyFAN5/H7h4UcrzRFVH0XkLvg4O6H70KO4UFKCthwd+btYMLiVMJBoREIBvUlPxR3o6Xjl3DtsjIsye6K/V3M0NDgoF8k1cRPVW4GRlAXXrSmvEtb0fxjz3HJw6dcI/L7wA5aVLGHHoEH7v0AGNXV3NatN1My+wR7OzcfTBqsuSFE7sVtoeBEuHfIqr/1TNmvjh5k0sv34dOy1IOldWy7IrIwYhREbYYofb334DDh0CXFykXhaqGoxdlOwAaAA86uaGX5s1g7sZ428KhQIrGzZE03/+wR/p6ViTkoKRFuxemKfRYNCpU7oARAHo9SRowxndChx3d+Dff82aZa3w90fz335Dz0OHcC0rC08dP459LVvCr4SL56G7d/FRCRmrtabVrg0XpRKXc3KwNzMTJ8wISK4ZCXAsmchq6VyV4uo/c/IkPJRKZGo3AATwVI0a+PvuXdzKzzc5WdhLqUQHT88Sz9UWKsKkXwYhREUUFAAHD5pX19xrhBCANrHvmDEAtzOqGkxdlDQPvr4eFAQvBwezj1fP2RlzQkPx9sWLePPCBfSpUQMBZiyfEkJg9Nmz2H7nDlzt7DArNBSLrl0z+G89tuh/9xYs81LZ2eH75s3R9tAhXMjJQf8TJ7CteXP8c/euwUXsem4u/puYiC9TUopdqQM87J2ZExamuwCaO5diYkICjmZlYai/P5q7umLzzZtm92pYOmnUnL1/MtVqBDo4YHStWngpMBDBTk66z0jRoFArXa3Gy+fOYUWDBlBZOSFZez6WBBQVZdIvJ6YawYmpVZupJbdCSHP03n0XOHeu5OMEBkpDMub8Hd+1Sxq2cXSUUrLXqlXq0yCZWTvhsiQFGg2iDh3CoawsPOfri/VNmpT4nFmJiZh9+TKUAP7XrBn61Kxps/9yz2Rno93hw0gvKICznR3uazS6x4IcHdHB0xP/u3UL9x6UD/Hzw+NeXhj74JfKWO9M0R4H7c/2Wm6uySCm6EU92NHRYNmzsde4r1YjJS8PW27dwmvFJLHUClGpYAcgo6AA6WZMAtvevDm6FUlZYeyCH6JSobuXF75MTYUGQFsPD8Q1aYJAK9bsWzOkZMtJv5ZcQxmEGMEgpOoyteT29deleXnaHhAfH6BvX2DNGum+sd+SGjWAP/8EzLhGoGdPYNs2qRdk+fJSnwZZwNKLsbn1bbny4fDdu2h98CDUAH5s2hRPF9N19kVyMkY9mGS0skEDvFwOEe7sxETMuny52DrtPDywsF49tH0w1GDqQmzQO/OA9kIJGA9cvm3UCI52dvg2NRX/u3kTJc08sVco4KJQILNQ0GQLupVHRZj6XP1++zYGnTqF9IICBDk6YnPTpmjt4WH259DSgMJWwXNhDEJKiUFI1VTSklshADc34M03gUmTAA8P40FLYCBgZydlPPX2Bn7+GWjf3vTrnjwJNG0q9ZicPy/NA6TyUZaTDrX10/Pzse3OHSy9dg27MjJKbIOpi1JJply4gA+vXEGQoyNOtWkDDyPzSn6/fRtPHj+OAiHw39q18Z4N9+7SKukiBgA17e2R0r69Qb6TshgyMBa4/HzzJvqeOGH2OagUCnjZ2yM1P7/Eup/Uq4f2np44kZWFF83oIrUm6Dx/7x76nTiB0/fuwcnODq8EBuL7mzdL/NyaE1D4OThgUXg4ruXl4UpuLg5lZuJPM3aKL82yYQYhpcQgpOpRq4HQUNMrXhQKKaA4ccJwnoex4Zv0dOCpp4B9+wBnZ2DjRuDJJ02//u7d0hzAN94oqzOiklj6H2JJ9Yf4+SEpNxd/Z2TAkpXZ1v4xv69Wo9k//+BCTg7G1qqFZQ0a6D1+NCsLHQ8fxl21Gi/4++OrRx6xeDWNNco794U5gYu5+VHm
162LlwID4WlvDw1Q7JBP0R6BkoaIStuDkFlQgKGnT+PnW7eMPm7sc7vl5k08ZUHwZQlrg2fAsmsoJ6ZStWDOktvbt6Wls0WDEKXScBluzZrA9u3As88Cv/4KXL9e/Ot37CjdSJ+t5i2YO+nwyRo1kCcEstVqjD9/vthJh2tv3NCVPeLigt7e3vjmxg2TKx9KuyGds1KJ/2vYEF2PHsXy69cxyNcXAlJOCfsH7b+rVqOLlxdWNWxYLgEIYH5Oi7LKfaFUKEoMZsxd4trK3V03UVgJYFF4uNFJowYriR60w5L6lvKwt8f3TZrAZ88e3DUy90T7ev85fRrzk5JwIScHaWb05ABAfWdntHRzQ20nJ+So1VhS0h8slN+yYQYhVC3YYsmtqyvw449SEPL00w/LC/eceHsDPXqUfq+ZysLS5ZG2mp1vboIsp927LTruhKAgTAgORpizMwBpZ1xbXZQAoIu3N14MCMCqlBR0P3YMBUU6roMdHRHXpAkcS7GqwlIWJUQrJx29vBCsUpXYS1E0IIz29cWmJk2Mfg6NzVWxtL6l/s7IMBqAFHZPo8G+u3ctOu7KBg10gZxaCGy+dcvin5WtMAihasHcpbQWpGUAADg46Acga9ZImVQLTxVwcwO++EKaj1KVWRJU2HpTL3MTZFkqysNDF4AAtr8oAcDjXl5YlZJiEIAAUp6MP9LTy3VJpbUXfFsqTS9FtK8v+vn4mB08W1rfEub2Hr0eFIQRAQEIdXJC83//tei9sHWPjqU4J8QIzgmpetRqwNcXuHPH+OMKhbRKJjHR+l6LjRuB554zfXxr9pqRU1kkfTI2jl3a2fnFtUsIgf/duoW3L1zA2fv3SzzHH5o2RQ9vb+x7sMdLSUzNc7DlsJKtVzJYo6SVK3Lt62LpCpyKxpr5Nta+F7b8WXFiaikxCKl6Ll6UltLm5Bg+VpoN6bTMmfha2iCntGw1VGLOagk3pRL9atZEhlqNS/fv44QZe6MYu+Cbatcn9epBDWDe5cs4ZkamzfKedGitirwBWkW94FeELKDWsvZzaO17YaufFSemEhXx009SANKoEXD3rmGekNjY0vVS2GKvmbJki6GS+2o1jmZlYd2NGyVuH5+lVutN7DTHmxcuYFhAALp5eaHJg2yYxtp1NTcXz546pbvvrlTi1aAgNHB2xosP8mfIPenQWuU9CdQSthyWKA1zJrJWVNZ+Dq19LyrCz4pBCFULEycC4eFSb0jt2sYzppaGLSa+lhVL5l+Yk5p62OnTmJWYiFP37lm0VPV5Pz909fLCtbw8zLp0qcT6h7KycOhBRks/e3vc1WiKTQGuADCjTh1MCA6G94MVEJ729hVm0qE1KuIk0MIqwkWsqrH2c1hZ3wsOxxjB4Riy1M6dQJcuJdfbsaN8e0LMmVMQ6OiILc2a4XZBAXbeuYO5Zm44BgC+Dg4Ic3LCATNm62uHDMzpcvZzcMDE4GDsSE/H7owMvdTg5rxGYbbKmFoeKuowEdleRfocWopzQkqJQUjVkJsrZT+dOhUICrLta2nnhFy7ZjzFu1xzQsydU2CpScHBmBgcjGCVyuKkT4Blk+lyNRq8f/ky5paQJhwoXYKliqqiTgIlMsWSa2j5LS4nKmczZgBLlwJdu0pBgi0plcCiRdL3Rf9Z0d6PjS3/SanmzhVwVyrR2MUFEa6uZtXvW7MmQpycoFAodOPYwMMLo5apcWxtl3NQkc26glUqg4uqys4OXc1c7inXsIQtWfKzIqps2BNiBHtCKr/du4FOnaReiR9+APr1K5/XNbbXTEhI6Se+WsvS1RWl6f63Zoa+uV3OHJao3N3zVL1wOKaUGITIx9g+LZb2Hty9C0RESEMfI0cCq1fbpq2mlMU5lIU8jQYTz5/H8mJmw5Z2qKQoW14oOSxBVDkwCCklBiHyMNaLEBwsDXNY0ovw0kvAqlXSHI2jR6XdcKub5NxcPHvyJPYU2i3T1JI/YxfvipoDoqK2i4geYhBSSgxC
yl9cnJTWvOin0dJEYj/9JA29KBTSipXHHy/zplZ4f2dkYODJk0jOy4OnUolvGjVC3oOlt7YYKilvFbVdRCRhEFJKDELKV2myjRYe+ggIAGbOlO6/9Rbw0Uc2b7psjF2I7QAsv34dExMSkC8Emri4YHPTpqjv4mLyObx4E1FZY8ZUqlSszTZqbPgmKAgYMgSYO9dmzZWdsSGJIEdHNHBxwY70dADAs76+WN2wIdzsH/6KV9ZkRkRUdTEIIdlZkm100yagYUPg7Flps7ii/XjXrwPr1gHPPFO5Noszl8nsp3l5uJaXBwWAD+vWxeSQECjYy0FEFRyDEJJdQIB59by8gL59pSEYhcJ4UjAhpMcmTpTmhsi1WZwtFJdSXcvHwQGTGIAQUSXBZGUkO7sSPoUKhZRro2FDKfeHk5PxAESr8PBNVbI7Pb3EjeLS8vOx+8GQDBFRRccghGTXqRMwYID0fXHZRuvWBeLjgf/7P/OOK8dmcbZ0vQLvqEpEZA0GISSLixeBtLSH9+PigO+/N9zjJTjYcHlucLB5rxEYWPp2lhe1ENh55w7WpaZi5507UBfq6snTaLA2NRWzExPNOlZVTF1ORFUTl+gawSW6trVnD9C/P/DII8D27UDhLTHMyTZaUTeLs5ax1S7BKhXeCwvDtdxcLL12zaxekOqQupyIKj4u0aUKwVhA8d13wKhRQF4ecO8ekJEB+Pk9fI5SWfJW99rN4gYONJygKudmcdYwtdrlam4uRpw5o7sf4OiIV2vVQpBKhRfPngVgPPtp0Y3iiIgqMgYhZBPGcnh4eADaLOIDBgBffw2YuWmrgehoaZjGWJr3kjaLszRpl62SfJmz2sVBocDKBg3wvL8/VA9m8Hra2xvtOWHqciKqbBiEUJkzlYJdG4D07y8FECWtiilJdLS0DNeSzeJMDX0sMnEBt7S+JcxZ7ZIvBEKdnHQBCCBt7d7Px4fZT4mo0mMQQmVKrZZ6J4qbaXTwYPGPW8Kc4Rstk4m+cnMx8ORJg43cLK1vqcs5OWbVM7bahdlPiagq4OoYKlMlpWAH5MnhUdzQh7bstfPncen+fVy8fx+nsrMx7ty5YutPTEjQW8ViLiEEfrx5E+9cvGhWfa52IaKqij0hVKYsScFenkoa+hCQ8nCE7d9v1vEEgCu5udidnm7QI1HcHJJz9+5hQkICtt6+DQBQAlCbeA3tapeOXl5mtYmIqLJhEEJlytzcHOWdw8PcBF5KACo7OwghcN+MXo7Vycmo7+KCoAfrjE3NIfkgLAzHs7Ox8OpV5AsBR4UCk0NC0MTVFS+cPg2Aq12IqPphnhAjmCfEenl50h4v9+8bf1yOHB6J9+9j6OnT2KudGVuMHRER6OztjZ137qDL0aNmHV8BoKOnJxq6uODz5ORiV7sAQJ8aNbAoPBz1XVwAGA9cQrjahYgqKeYJIVkIAbz+evEBCFB+OTxy1Gp8dOUKYpKSkKPRFFu36NBHRy8vBKtUuJabazSoUEBaKtvY2Rl/372LPzMy8GdGRrGvoQSwqUkT9PPx0dtgjqtdiKi64sRUKjPTpwOffSYFG2++aZhe3VgK9tIyle58y61baPLPP5h56RJyNBp09fLConr1oMDDoQ4tY0MfSoUCi8LD9R4vWn9Vw4bYExmJy23bYkytWiW3FYCXvb3RHW61q12e9/dHZ29vBiBEVC2wJ4TKxCefAO+/L32/YgUwejTw4YeW5fCwlLFhjABHRwQ7OuLfrCwAQC1HR3wSHo5nfX2hUCgQ7ORkdqKvaF9fbGrSpMT6tZ2c8LinJ1Zcv15im7m5HBHRQ7L3hCxduhShoaFwcnJCVFQUDhw4UGz99PR0vPrqqwgMDIRKpUKDBg3wyy+/lOqYVDrbtgGTJknfz5snBSDAwxwezz8vfS3rAGTgyZMGK15S8vLwb1YW7AC8FRKCM23a4Dk/P13vQ7SvLy61bYsdERH4tlEj7IiIQGLbtibnXphb39xltFxuS0T0kKw9
IevXr8ekSZOwYsUKREVFITY2Fr169cLZs2fhV3hDkQfy8vLQo0cP+Pn5YdOmTQgKCsLly5fhVWgJo6XHpNLr3Bn4z3+kPWCmTLH965mT7tzP0RExdesaHdawNNGXOfXNmUPC5bZERPpkXR0TFRWF1q1b49NPPwUAaDQahISE4LXXXsMUI1ezFStWYP78+Thz5gwcHBzK5JjGcHWMccXtcKud91naVOzmMHflinalS3nR9s4AxpfbljbDKhFRZWDJNVS24Zi8vDwcPHgQ3bt3f9gYOzt0794de/fuNfqcn376Ce3atcOrr74Kf39/NG3aFPPmzYNarbb6mACQm5uLzMxMvRvpi4sDQkOBLl2AIUOkr15e0kRTQAo+yiIAMTXRVPvYllu38EZCglnHKu/5F9o5JNqcIVrBKhUDECIiI2Qbjrl58ybUajX8/f31yv39/XGm0BbmhV28eBF//PEHhg4dil9++QUJCQkYN24c8vPzMXPmTKuOCQAxMTGYPXt26U+qijK1IV1WFvDss8D335fNihdTib5m1amD1Px8rLx+HZdL2PCtMDnmX3C5LRGR+SrV6hiNRgM/Pz+sXLkSSqUSkZGRuHbtGubPn4+ZM2dafdypU6diknZmJaSupJCQkLJocqVX0oZ0CgUwcaK0m23RiafFpS8vytRmcVdzc/HSuXO6+9729hju74/v0tKQmpdXIedfcHM5IiLzyBaE+Pj4QKlUIjU1Va88NTUVAQEBRp8TGBgIBwcHKAtd7Ro1aoSUlBTk5eVZdUwAUKlUUBXpQidJSRvSCfFwQ7rCu9ma6tVYZGQprFoITDh/vtiJpo4KBT5r0ACD/PzgrFSio5cXBp48CQWY7pyIqLKSbU6Io6MjIiMjER8fryvTaDSIj49Hu3btjD7nscceQ0JCAjSFsl+eO3cOgYGBcHR0tOqYVDxrNqQztXz2Wm4uBp48iS9TUrA7PR3Lr13D+HPnEPnvv7hawvyNPCEQ6uQE5wcBKOdfEBFVfrIOx0yaNAnDhw9Hq1at0KZNG8TGxiI7OxsjR44EAAwbNgxBQUGIiYkBAIwdOxaffvopJkyYgNdeew3nz5/HvHnz8Prrr5t9TLKMpRvSFbd8Vls2opj5OcUpOtGU8y+IiCo3WYOQQYMGIS0tDTNmzEBKSgpatGiBrVu36iaWJiUlwa7QkouQkBD89ttveOONN9C8eXMEBQVhwoQJeOedd8w+JkmKW26blwds3gw895xUHhxsekhGuyFdx47S/d3p6QY9IMb4Ozgg0t0dTV1dYQfggytXSnyOsYmmnH9BRFR5cRddI6p6npC4OGmyaeHAIjj44cZyb70FJCQAP/4IPP30w9UxgP4EVW2HQ+H9YNalpmLIg63pi/Nto0Z4/kFgqBYCofv2lZjoK7FtW/ZyEBFVcJUiTwjJQxtQFO3ZuHpVKh8wQApA/P2B/HzpsehoKdAICtJ/jrEN6a6ZuYS2cK+GOZvFcaIpEVHVw54QI6pqT4haLSUcK261CwBMnSrd3N0Nn29qCCdbrcZbFy5geQmbuBXXq2FsRU2Iic3liIioYrLkGlqp8oRQ6ZS03FarZ0/DAAQAYCeAiHSgUR7g6AjYeQFQYE9GBoafPo0LOTkAgD41amDr7dsALFs+y4mmRETVC4OQasSa5bZaxnopghwd0drdHT/eugUBqddidcOG6F6jhsk8ISX1anCiKRFR9cEgpBrRW25rJ4Bm6UDNPOCWI3DcC9AoDOvBdDbTa3l5uHbrFgBguL8/FtWvD0976SPFXg0iIioJg5BqRLfctm4a8GoC4FdoEukNFbA0HCGJvrrltkDxeT+0fBwcsOqRRwwCDPZqEBFRcRiEVCNKJfD8sjTMdztp+KBPLjDrJAZnNcF94Y2r2bm4mpuLbXfulJj342Z+PnanpzPgICIiizAIqUbUQuArzwRADcO1sHYABPCx+0nM/8vyYxfNZkpERFQSBiHVyO70dKRqcg0DEC3Fw9Us
nkolglUquNjZ4Z+srBKPbSybKRERUXEYhFQRaiGKnQR6Iy8PMw5dB5TFHOSBVQ0bYtSD2anmZjPt6OVVJudBRETVB4OQKsDUcthF4eEIUqnw6bVr2HDjBvKU5uWlq+vkpPtem8104MmTUMCyvB9ERETFYRBSyZlaPns1NxfPnCwyAfWUGxQhOYB7gUW9GtG+vtjUpIlVeT+IiIhMYRBSiZmzfBYA/uPnh3rHg/HpTA88+WEavnK3vFeDeT+IiKisMQipxHanp5e4fBYARgUGonNjD0zsDTg5+eLpTOt6NZj3g4iIyhKDkErM3GWx2nqentJ99moQEVFFwCCkEjN3Wey5fY4QTwOFYwz2ahARkdzs5G4AWa+9pyec7Ey/hQoAuKHCrGgvHDtWbs0iIiIyC4OQSmzKxYvI0WiMPqYAIASAT8Mx6FkFIiLKtWlEREQlYhBSSX169So+uXoVADAxKAjBKpXe4zU1KmBmEzju90VMjBwtJCIiKh7nhFRC/7t5ExMSEgAA88LCMLVOHXwcHq6baOqndMRrnb1w86QCE94CwsJkbjAREZERDEIqmYN372LwqVPQAHgpMBBTateWHtAogKPeQDKw5R/g9EmgZk1g2jRZm0tERGQSg5BK5HJODp46fhz3NBr09PbGsvr1oVAoEBcHTJgAPBid0enfH+CWLkREVFFxTkglkZ6fjyePHUNKXh6aubpiY5MmcLCzQ1wcMHCgYQACAKtXA3Fx5d9WIiIic7AnpIIqvCuuj4MDYi5fxsl791DL0RFbmjWDh7091GqpB0QUk7d94kSgXz9AacbuuUREROWJQUgFZGxXXABwsrPDlmbNEPJgl9vdu433gGgJAVy5ItXr3NmGDSYiIrICh2MqGO2uuMb2hMnRaHAxJ0d3PznZvGOaW4+IiKg8MQipQEraFVcBYGJCAtQPxl8CA807rrn1iIiIyhODkAqkpF1xBYArubnYnZ4OAOjYEQgONn08hQIICZHqERERVTQMQioQS3fFVSqBqVON19FuVhcby0mpRERUMTEIqUDM3RVXW08I4KefpLIiWdsRHAxs2gRER5dlC4mIiMoOV8dUIB08PeFiZ4d7xWxKF6xSoeODDGTr1wO//SYFIIcPA6mp0iTUwEBpCIY9IEREVJExCKlAll2/XmwAAgCx4eFQPhhruXoVsLeXUrM3aiTdiIiIKgsOx1QQO+7cwaQHm9KN8Pc32BU3WKXCpiZNEO3rqyubPBk4ehR4551ybSoREVGZYE9IBXA5JwfPnToFNYChfn5Y/cgj0AC6jKmBjo7o6OWl6wEprHHjcm8uERFRmWAQIrN7ajUGnDiBm/n5aOnmhv9r2BAKhQJKAJ29vQ3q5+cDL70kpWN/9NFyby4REVGZ4XCMjIQQGH32LA5nZcHHwQGbmzaFcwmzSWNjga++Anr3BgolTyUiIqp0GITIKPbqVay9cQNKABsbN0btB3vCmHLpEjBrlvT9Bx8AJVQnIiKq0BiEyCT+zh1MvnABALAwPNzo0EthQgDjxwP37gGPPw6MGFEOjSQiIrIhBiEySLx/H4NOnoQGwDB/f7wWFFTic+LigC1bAAcHYMWKhxlRiYiIKitOTC0HaiF0K1287e3xzoULuFVQgFbu7ljRoAEUJUQUmZnA669L37/zDvOBEBFR1cAgxMbi0tIwISHBYGM6D6UScU2amJyIqlYDu3dLGVB37QKuXwfq1ZMSkxEREVUFDEJsKC4tDQNPnoQw8limWo1/7t5FiJHZpXFxwIQJUkZULS8v4IUXAGdnmzWXiIioXFWIOSFLly5FaGgonJycEBUVhQMHDpisu2bNGigUCr2bU5EL+YgRIwzq9O7d29anoUctBCYkJBgNQAApDfvEhASohX6NuDhg4ED9AAQAMjKAOXOkx4mIiKoC2YOQ9evXY9KkSZg5cyYOHTqEiIgI9OrVCzdu3DD5HA8PDyQnJ+tuly9fNqjTu3dvvTrr1q2z5WkY2J2e
bjAEU5gAcCU3F7vT03VlarXUAyKMRC7asokTpXpERESVnexByMKFC/Hyyy9j5MiRaNy4MVasWAEXFxesXr3a5HMUCgUCAgJ0N39/f4M6KpVKr453CUtgy1pyXp7F9XbvNuwBKUwI4MoVqR4REVFlJ2sQkpeXh4MHD6J79+66Mjs7O3Tv3h179+41+bysrCzUqVMHISEh6NevH06ePGlQZ+fOnfDz80PDhg0xduxY3Lp1y+TxcnNzkZmZqXcrrUBHR4vrJSebd2xz6xEREVVksgYhN2/ehFqtNujJ8Pf3R0pKitHnNGzYEKtXr8aPP/6Ib775BhqNBu3bt8fVQl0IvXv3xldffYX4+Hh8+OGH2LVrF/r06QO1iXGMmJgYeHp66m4hISGlPreOXl4IVqlgavGtAkCISoWOXl66ssBA845tbj0iIqKKTCGEsRkI5eP69esICgrC33//jXbt2unK3377bezatQv79+8v8Rj5+flo1KgRnn/+ecydO9donYsXL6JevXrYvn07unXrZvB4bm4ucgvN38jMzERISAgyMjLg4eFhxZlJtKtjAOhNUNUGJpuaNEG0r6+uXK0GQkNND8koFEBwMJCYCJSwxQwREZEsMjMz4enpadY1VNaeEB8fHyiVSqSmpuqVp6amIiAgwKxjODg44NFHH0VCQoLJOnXr1oWPj4/JOiqVCh4eHnq3shDt64tNTZogSKXSKw9WqQwCEEAKLGbMMH4sbT6z2FgGIEREVDXIGoQ4OjoiMjIS8fHxujKNRoP4+Hi9npHiqNVqHD9+HIHFjFFcvXoVt27dKraOrUT7+uJS27bYERGBbxs1wo6ICCS2bWsQgGgdPy59LTqlJDgY2LQJiI62cYOJiIjKiezJyiZNmoThw4ejVatWaNOmDWJjY5GdnY2RI0cCAIYNG4agoCDExMQAAObMmYO2bdsiPDwc6enpmD9/Pi5fvoyXXnoJgDRpdfbs2XjmmWcQEBCACxcu4O2330Z4eDh69eolyzkqFYoSN6gDpNUv2kmn//ufFIgkJ0tzQDp2ZA8IERFVLbIHIYMGDUJaWhpmzJiBlJQUtGjRAlu3btVNVk1KSoKd3cMOmzt37uDll19GSkoKvL29ERkZib///huNGzcGACiVShw7dgxffvkl0tPTUatWLfTs2RNz586FqsiwSEWjUAAbNwInTwKNG3OTOiIiqtpknZhaUVkyqYaIiIgeqjQTU+mhn34Crl2TuxVERETlh0FIBXDzJjBkCFC3rjQUQ0REVB0wCKkAPvkEyM4GmjaV5oIQERFVBwxCZHb7NrBkifT99OmcjEpERNUHgxCZLVoE3L0LNG8OPP203K0hIiIqPwxCZJSRIQUhgNQLYsd3g4iIqhFe9mS0ZIkUiDRuzEyoRERU/TAIkZEQgLMz8O677AUhIqLqh8nKjCjPZGU3bgA1azIlOxERVQ2WXENlT9te3fn5yd0CIiIieXAQQAZxccCePXK3goiISF4MQsrZvXvAuHFAhw7Azz/L3RoiIiL5MAgpZ//3f0BqKlCnDtCrl9ytISIikg/nhJQDtRrYvRu4fBmYM0cqmzYNcHCQt11ERERyYhBiY3FxwIQJwNWrD8uUSsDTU742ERERVQQMQmwoLg4YOFDKB1KYWg08/7zUE8IkZUREVF1xToiNqNVSD0hxWVgmTpTqERERVUcMQmxk9279IZiihACuXJHqERERVUcMQmwkObls6xEREVU1DEJsJDCwbOsRERFVNQxCbKRjRyA4GFAojD+uUAAhIVI9IiKi6ohBiI0olcCiRdL3RQMR7f3YWG5cR0RE1ReDEBuKjgY2bQKCgvTLg4Olci7PJSKi6ox5QmwsOhro109aBZOcLM0B6diRPSBEREQMQsqBUgl07ix3K4iIiCoWDscQERGRLBiEEBERkSwYhBAREZEsGIQQERGRLBiEEBERkSwYhBAREZEsuETXCCEEACAzM1PmlhAREVUu2mun9lpaHAYh
Rty9excAEBISInNLiIiIKqe7d+/C09Oz2DoKYU6oUs1oNBpcv34d7u7uUJjYgS4zMxMhISG4cuUKPDw8yrmF8uF587yrg+p63kD1PXeed9mdtxACd+/eRa1atWBnV/ysD/aEGGFnZ4fg4GCz6np4eFSrD6wWz7t64XlXP9X13HneZaOkHhAtTkwlIiIiWTAIISIiIlkwCLGSSqXCzJkzoVKp5G5KueJ587yrg+p63kD1PXeetzznzYmpREREJAv2hBAREZEsGIQQERGRLBiEEBERkSwYhBAREZEsGIRYYenSpQgNDYWTkxOioqJw4MABuZtkc7NmzYJCodC7PfLII3I3q8z9+eef6Nu3L2rVqgWFQoEffvhB73EhBGbMmIHAwEA4Ozuje/fuOH/+vDyNLUMlnfeIESMM3v/evXvL09gyFBMTg9atW8Pd3R1+fn7o378/zp49q1cnJycHr776KmrWrAk3Nzc888wzSE1NlanFZcOc8+7cubPBez5mzBiZWlw2li9fjubNm+sSc7Vr1w6//vqr7vGq+F4DJZ+3nO81gxALrV+/HpMmTcLMmTNx6NAhREREoFevXrhx44bcTbO5Jk2aIDk5WXf766+/5G5SmcvOzkZERASWLl1q9PGPPvoIixcvxooVK7B//364urqiV69eyMnJKeeWlq2SzhsAevfurff+r1u3rhxbaBu7du3Cq6++in379mHbtm3Iz89Hz549kZ2dravzxhtv4H//+x82btyIXbt24fr164iOjpax1aVnznkDwMsvv6z3nn/00UcytbhsBAcH44MPPsDBgwfx77//omvXrujXrx9OnjwJoGq+10DJ5w3I+F4LskibNm3Eq6++qruvVqtFrVq1RExMjIytsr2ZM2eKiIgIuZtRrgCIzZs36+5rNBoREBAg5s+frytLT08XKpVKrFu3ToYW2kbR8xZCiOHDh4t+/frJ0p7ydOPGDQFA7Nq1Swghvb8ODg5i48aNujqnT58WAMTevXvlamaZK3reQgjRqVMnMWHCBPkaVU68vb3F559/Xm3eay3teQsh73vNnhAL5OXl4eDBg+jevbuuzM7ODt27d8fevXtlbFn5OH/+PGrVqoW6deti6NChSEpKkrtJ5SoxMREpKSl677+npyeioqKqxfu/c+dO+Pn5oWHDhhg7dixu3bold5PKXEZGBgCgRo0aAICDBw8iPz9f7z1/5JFHULt27Sr1nhc9b621a9fCx8cHTZs2xdSpU3Hv3j05mmcTarUa3333HbKzs9GuXbtq814XPW8tud5rbmBngZs3b0KtVsPf31+v3N/fH2fOnJGpVeUjKioKa9asQcOGDZGcnIzZs2ejY8eOOHHiBNzd3eVuXrlISUkBAKPvv/axqqp3796Ijo5GWFgYLly4gGnTpqFPnz7Yu3cvlEql3M0rExqNBhMnTsRjjz2Gpk2bApDec0dHR3h5eenVrUrvubHzBoAhQ4agTp06qFWrFo4dO4Z33nkHZ8+eRVxcnIytLb3jx4+jXbt2yMnJgZubGzZv3ozGjRvjyJEjVfq9NnXegLzvNYMQMkufPn103zdv3hxRUVGoU6cONmzYgBdffFHGllF5GDx4sO77Zs2aoXnz5qhXrx527tyJbt26ydiysvPqq6/ixIkTVXKuU3FMnffo0aN13zdr1gyBgYHo1q0bLly4gHr16pV3M8tMw4YNceTIEWRkZGDTpk0YPnw4du3aJXezbM7UeTdu3FjW95rDMRbw8fGBUqk0mC2dmpqKgIAAmVolDy8vLzRo0AAJCQlyN6XcaN9jvv9A3bp14ePjU2Xe//Hjx+Pnn3/Gjh07EBwcrCsPCAhAXl4e0tPT9epXlffc1HkbExUVBQCV/j13dHREeHg4IiMjERMTg4iICCxatKjKv9emztuY8nyvGYRYwNHREZGRkYiPj9eVaTQaxMfH642tVQdZWVm4cOECAgMD5W5KuQkLC0NAQIDe+5+ZmYn9+/dXu/f/
6tWruHXrVqV//4UQGD9+PDZv3ow//vgDYWFheo9HRkbCwcFB7z0/e/YskpKSKvV7XtJ5G3PkyBEAqPTveVEajQa5ublV9r02RXvexpTrey3LdNhK7LvvvhMqlUqsWbNGnDp1SowePVp4eXmJlJQUuZtmU2+++abYuXOnSExMFHv27BHdu3cXPj4+4saNG3I3rUzdvXtXHD58WBw+fFgAEAsXLhSHDx8Wly9fFkII8cEHHwgvLy/x448/imPHjol+/fqJsLAwcf/+fZlbXjrFnffdu3fF5MmTxd69e0ViYqLYvn27aNmypahfv77IycmRu+mlMnbsWOHp6Sl27twpkpOTdbd79+7p6owZM0bUrl1b/PHHH+Lff/8V7dq1E+3atZOx1aVX0nknJCSIOXPmiH///VckJiaKH3/8UdStW1c8/vjjMre8dKZMmSJ27dolEhMTxbFjx8SUKVOEQqEQv//+uxCiar7XQhR/3nK/1wxCrLBkyRJRu3Zt4ejoKNq0aSP27dsnd5NsbtCgQSIwMFA4OjqKoKAgMWjQIJGQkCB3s8rcjh07BACD2/Dhw4UQ0jLd6dOnC39/f6FSqUS3bt3E2bNn5W10GSjuvO/duyd69uwpfH19hYODg6hTp454+eWXq0TgbeycAYgvvvhCV+f+/fti3LhxwtvbW7i4uIgBAwaI5ORk+RpdBko676SkJPH444+LGjVqCJVKJcLDw8Vbb70lMjIy5G14KY0aNUrUqVNHODo6Cl9fX9GtWzddACJE1XyvhSj+vOV+rxVCCGH7/hYiIiIifZwTQkRERLJgEEJERESyYBBCREREsmAQQkRERLJgEEJERESyYBBCREREsmAQQkRERLJgEEJERESyYBBCRNXCzp07oVAoDDYoIyL5MAghIiIiWTAIISIiIlkwCCGicqHRaBATE4OwsDA4OzsjIiICmzZtAvBwqGTLli1o3rw5nJyc0LZtW5w4cULvGN9//z2aNGkClUqF0NBQLFiwQO/x3NxcvPPOOwgJCYFKpUJ4eDhWrVqlV+fgwYNo1aoVXFxc0L59e5w9e9a2J05EJjEIIaJyERMTg6+++gorVqzAyZMn8cYbb+CFF17Arl27dHXeeustLFiwAP/88w98fX3Rt29f5OfnA5CCh+eeew6DBw/G8ePHMWvWLEyfPh1r1qzRPX/YsGFYt24dFi9ejNOnT+Ozzz6Dm5ubXjv++9//YsGCBfj3339hb2+PUaNGlcv5E5ER5bJXLxFVazk5OcLFxUX8/fffeuUvvviieP7558WOHTsEAPHdd9/pHrt165ZwdnYW69evF0IIMWTIENGjRw+957/11luicePGQgghzp49KwCIbdu2GW2D9jW2b9+uK9uyZYsAIO7fv18m50lElmFPCBHZXEJCAu7du4cePXrAzc1Nd/vqq69w4cIFXb127drpvq9RowYaNmyI06dPAwBOnz6Nxx57TO+4jz32GM6fPw+1Wo0jR45AqVSiU6dOxbalefPmuu8DAwMBADdu3Cj1ORKR5ezlbgARVX1ZWVkAgC1btiAoKEjvMZVKpReIWMvZ2dmseg4ODrrvFQoFAGm+ChGVP/aEEJHNNW7cGCqVCklJSQgPD9e7hYSE6Ort27dP9/2dO3dw7tw5NGrUCADQqFEj7NmzR++4e/bsQYMGDaBUKtGsWTNoNBq9OSZEVLGxJ4SIbM7d3R2TJ0/GG2+8AY1Ggw4dOiAjIwN79uyBh4cH6tSpAwCYM2cOatasCX9/f/z3v/+Fj48P+vfvDwB488030bp1a8ydOxeDBg3C3r178emnn2LZsmUAgNDQUAwfPhyjRo3C4sWLERERgcuXL+PGjRt47rnn5Dp1IioGgxAiKhdz586Fr68vYmJicPHiRXh5eaFly5aYNm2abjjkgw8+wIQJE3D+/Hm0aNEC//vf/+Do6AgAaNmyJTZs2IAZM2Zg7ty5CAwMxJw5czBixAjdayxfvhzTpk3DuHHjcOvW
LdSuXRvTpk2T43SJyAwKIYSQuxFEVL3t3LkTXbp0wZ07d+Dl5SV3c4ionHBOCBEREcmCQQgRERHJgsMxREREJAv2hBAREZEsGIQQERGRLBiEEBERkSwYhBAREZEsGIQQERGRLBiEEBERkSwYhBAREZEsGIQQERGRLP4fN8MRT+q4mD4AAAAASUVORK5CYII=\n",
      "text/plain": [
       "<Figure size 600x400 with 1 Axes>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "\n",
       "<style>\n",
       "    /* background: */\n",
       "    progress::-webkit-progress-bar {background-color: #CDCDCD; width: 100%;}\n",
       "    progress {background-color: #CDCDCD;}\n",
       "\n",
       "    /* value: */\n",
       "    progress::-webkit-progress-value {background-color: #00BFFF  !important;}\n",
       "    progress::-moz-progress-bar {background-color: #00BFFF  !important;}\n",
       "    progress {color: #00BFFF ;}\n",
       "\n",
       "    /* optional */\n",
       "    .progress-bar-interrupted, .progress-bar-interrupted::-webkit-progress-bar {\n",
       "        background: #000000;\n",
       "    }\n",
       "</style>\n"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/html": [
       "\n",
       "    <div>\n",
       "      <progress value='35' class='progress-bar-interrupted' max='100' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      35.00% [35/100] [00:43<01:20]\n",
       "      <br>\n",
       "      ████████████████████100.00% [10/10] [val_loss=0.7237, val_auc=0.6365]\n",
       "    </div>\n",
       "    "
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[0;31m<<<<<< val_auc without improvement in 10 epoch,early stopping >>>>>> \n",
      "\u001b[0m\n"
     ]
    }
   ],
   "source": [
    "dfhistory = model.fit(train_data = dl_train,\n",
    "    val_data = dl_val,\n",
    "    epochs=100,\n",
    "    ckpt_path='checkpoint',\n",
    "    patience=10,\n",
    "    monitor='val_auc',\n",
    "    mode='max',\n",
    "    plot=True,\n",
    "    cpu=True\n",
    ")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9340ed18",
   "metadata": {},
   "source": [
    "### 4. Evaluating the Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 58,
   "id": "8e81c1d8",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "100%|█████████████████████████████████| 10/10 [00:00<00:00, 53.85it/s, val_auc=0.65, val_loss=0.709]\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'val_loss': 0.7085483193397522, 'val_auc': 0.6495699286460876}"
      ]
     },
     "execution_count": 58,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model.evaluate(dl_val)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b1a4dfbe",
   "metadata": {},
   "source": [
    "### 5. Using the Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 62,
   "id": "8db6ff42",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.6495558111992674\n"
     ]
    }
   ],
   "source": [
    "from sklearn.metrics import roc_auc_score\n",
    "model.eval()\n",
    "dl_val = model.accelerator.prepare(dl_val)\n",
    "with torch.no_grad():\n",
    "    result = torch.cat([model.forward(t)[0] for t in dl_val])\n",
    "\n",
    "preds = torch.sigmoid(result)\n",
    "labels = torch.cat([x['label'] for x in dl_val])\n",
    "\n",
    "val_auc = roc_auc_score(labels.cpu().numpy(), preds.cpu().numpy())\n",
    "print(val_auc)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a69ecb80",
   "metadata": {},
   "source": [
    "### 6. Saving the Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "id": "98e0dc20",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "<All keys matched successfully>"
      ]
     },
     "execution_count": 63,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "torch.save(model.net.state_dict(),\"best_dien.pt\")\n",
    "net_clone = create_net()\n",
    "net_clone.load_state_dict(torch.load(\"best_dien.pt\", map_location=\"cpu\"))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "id": "795fa7f3",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0.6495558111992674\n"
     ]
    }
   ],
   "source": [
    "net_clone.eval()\n",
    "labels = torch.tensor([x[\"label\"] for x in ds_val])\n",
    "with torch.no_grad():\n",
    "    preds = torch.cat([net_clone(x)[0] for x in dl_val])\n",
    "val_auc = roc_auc_score(labels.cpu().numpy(), preds.cpu().numpy())\n",
    "print(val_auc)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c7801172",
   "metadata": {},
   "source": [
    "**If this book has been helpful to you and you would like to encourage the author, remember to give this project a star ⭐️ and share it with your friends 😊!** \n",
    "\n",
    "If you would like to discuss the content of this book further with the author, feel free to leave a comment on the WeChat official account \"算法美食屋\". The author's time and energy are limited, and replies will be given as circumstances allow.\n",
    "\n",
    "You can also reply with the keyword **加群** in the official account backend to join the reader discussion group.\n",
    "\n",
    "![算法美食屋logo.png](https://tva1.sinaimg.cn/large/e6c9d24egy1h41m2zugguj20k00b9q46.jpg)\n"
   ]
  }
 ],
 "metadata": {
  "jupytext": {
   "cell_metadata_filter": "-all",
   "formats": "ipynb,md",
   "main_language": "python"
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
