<!DOCTYPE html>

<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" /><meta name="generator" content="Docutils 0.19: https://docutils.sourceforge.io/" />

    <meta http-equiv="x-ua-compatible" content="ie=edge">
    
    <title>3.3. Sequence Modeling &#8212; FunRec 推荐系统 0.0.1 documentation</title>

    <link rel="stylesheet" href="../_static/material-design-lite-1.3.0/material.blue-deep_orange.min.css" type="text/css" />
    <link rel="stylesheet" href="../_static/sphinx_materialdesign_theme.css" type="text/css" />
    <link rel="stylesheet" href="../_static/fontawesome/all.css" type="text/css" />
    <link rel="stylesheet" href="../_static/fonts.css" type="text/css" />
    <link rel="stylesheet" type="text/css" href="../_static/pygments.css" />
    <link rel="stylesheet" type="text/css" href="../_static/basic.css" />
    <link rel="stylesheet" type="text/css" href="../_static/d2l.css" />
    <script data-url_root="../" id="documentation_options" src="../_static/documentation_options.js"></script>
    <script src="../_static/jquery.js"></script>
    <script src="../_static/underscore.js"></script>
    <script src="../_static/_sphinx_javascript_frameworks_compat.js"></script>
    <script src="../_static/doctools.js"></script>
    <script src="../_static/sphinx_highlight.js"></script>
    <script src="../_static/d2l.js"></script>
    <script async="async" src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
    <link rel="index" title="Index" href="../genindex.html" />
    <link rel="search" title="Search" href="../search.html" />
    <link rel="next" title="3.4. Multi-Objective Modeling" href="4.multi_objective/index.html" />
    <link rel="prev" title="3.2.2. Higher-Order Feature Crossing" href="2.feature_crossing/2.higher_order.html" /> 
  </head>
<body>
    <div class="mdl-layout mdl-js-layout mdl-layout--fixed-header mdl-layout--fixed-drawer"><header class="mdl-layout__header mdl-layout__header--waterfall ">
    <div class="mdl-layout__header-row">
        
        <nav class="mdl-navigation breadcrumb">
            <a class="mdl-navigation__link" href="index.html"><span class="section-number">3. </span>Ranking Models</a><i class="material-icons">navigate_next</i>
            <a class="mdl-navigation__link is-active"><span class="section-number">3.3. </span>Sequence Modeling</a>
        </nav>
        <div class="mdl-layout-spacer"></div>
        <nav class="mdl-navigation">
        
<form class="form-inline pull-sm-right" action="../search.html" method="get">
      <div class="mdl-textfield mdl-js-textfield mdl-textfield--expandable mdl-textfield--floating-label mdl-textfield--align-right">
        <label id="quick-search-icon" class="mdl-button mdl-js-button mdl-button--icon"  for="waterfall-exp">
          <i class="material-icons">search</i>
        </label>
        <div class="mdl-textfield__expandable-holder">
          <input class="mdl-textfield__input" type="text" name="q"  id="waterfall-exp" placeholder="Search" />
          <input type="hidden" name="check_keywords" value="yes" />
          <input type="hidden" name="area" value="default" />
        </div>
      </div>
      <div class="mdl-tooltip" data-mdl-for="quick-search-icon">
      Quick search
      </div>
</form>
        
<a id="button-show-source"
    class="mdl-button mdl-js-button mdl-button--icon"
    href="../_sources/chapter_2_ranking/3.sequence.rst.txt" rel="nofollow">
  <i class="material-icons">code</i>
</a>
<div class="mdl-tooltip" data-mdl-for="button-show-source">
Show Source
</div>
        </nav>
    </div>
    <div class="mdl-layout__header-row header-links">
      <div class="mdl-layout-spacer"></div>
      <nav class="mdl-navigation">
          
              <a  class="mdl-navigation__link" href="https://funrec-notebooks.s3.eu-west-3.amazonaws.com/fun-rec.zip">
                  <i class="fas fa-download"></i>
                  Jupyter Notebooks
              </a>
          
              <a  class="mdl-navigation__link" href="https://github.com/datawhalechina/fun-rec">
                  <i class="fab fa-github"></i>
                  GitHub
              </a>
      </nav>
    </div>
</header><header class="mdl-layout__drawer">
    
          <!-- Title -->
      <span class="mdl-layout-title">
          <a class="title" href="../index.html">
              <span class="title-text">
                  FunRec 推荐系统
              </span>
          </a>
      </span>
    
    
      <div class="globaltoc">
        <span class="mdl-layout-title toc">Table Of Contents</span>
        
        
            
            <nav class="mdl-navigation">
                <ul>
<li class="toctree-l1"><a class="reference internal" href="../chapter_preface/index.html">Preface</a></li>
<li class="toctree-l1"><a class="reference internal" href="../chapter_installation/index.html">Installation</a></li>
<li class="toctree-l1"><a class="reference internal" href="../chapter_notation/index.html">Notation</a></li>
</ul>
<ul class="current">
<li class="toctree-l1"><a class="reference internal" href="../chapter_0_introduction/index.html">1. Overview of Recommender Systems</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../chapter_0_introduction/1.intro.html">1.1. What Is a Recommender System?</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_0_introduction/2.outline.html">1.2. Overview of This Book</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../chapter_1_retrieval/index.html">2. Retrieval Models</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../chapter_1_retrieval/1.cf/index.html">2.1. Collaborative Filtering</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../chapter_1_retrieval/1.cf/1.itemcf.html">2.1.1. Item-Based Collaborative Filtering</a></li>
<li class="toctree-l3"><a class="reference internal" href="../chapter_1_retrieval/1.cf/2.usercf.html">2.1.2. User-Based Collaborative Filtering</a></li>
<li class="toctree-l3"><a class="reference internal" href="../chapter_1_retrieval/1.cf/3.mf.html">2.1.3. Matrix Factorization</a></li>
<li class="toctree-l3"><a class="reference internal" href="../chapter_1_retrieval/1.cf/4.summary.html">2.1.4. Summary</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_1_retrieval/2.embedding/index.html">2.2. Embedding-Based Retrieval</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../chapter_1_retrieval/2.embedding/1.i2i.html">2.2.1. I2I Retrieval</a></li>
<li class="toctree-l3"><a class="reference internal" href="../chapter_1_retrieval/2.embedding/2.u2i.html">2.2.2. U2I Retrieval</a></li>
<li class="toctree-l3"><a class="reference internal" href="../chapter_1_retrieval/2.embedding/3.summary.html">2.2.3. Summary</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_1_retrieval/3.sequence/index.html">2.3. Sequential Retrieval</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../chapter_1_retrieval/3.sequence/1.user_interests.html">2.3.1. Deepening User Interest Representations</a></li>
<li class="toctree-l3"><a class="reference internal" href="../chapter_1_retrieval/3.sequence/2.generateive_recall.html">2.3.2. Generative Retrieval Methods</a></li>
<li class="toctree-l3"><a class="reference internal" href="../chapter_1_retrieval/3.sequence/3.summary.html">2.3.3. Summary</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 current"><a class="reference internal" href="index.html">3. Ranking Models</a><ul class="current">
<li class="toctree-l2"><a class="reference internal" href="1.wide_and_deep.html">3.1. Memorization and Generalization</a></li>
<li class="toctree-l2"><a class="reference internal" href="2.feature_crossing/index.html">3.2. Feature Crossing</a><ul>
<li class="toctree-l3"><a class="reference internal" href="2.feature_crossing/1.second_order.html">3.2.1. Second-Order Feature Crossing</a></li>
<li class="toctree-l3"><a class="reference internal" href="2.feature_crossing/2.higher_order.html">3.2.2. Higher-Order Feature Crossing</a></li>
</ul>
</li>
<li class="toctree-l2 current"><a class="current reference internal" href="#">3.3. Sequence Modeling</a></li>
<li class="toctree-l2"><a class="reference internal" href="4.multi_objective/index.html">3.4. Multi-Objective Modeling</a><ul>
<li class="toctree-l3"><a class="reference internal" href="4.multi_objective/1.arch.html">3.4.1. Evolution of Basic Architectures</a></li>
<li class="toctree-l3"><a class="reference internal" href="4.multi_objective/2.dependency_modeling.html">3.4.2. Task Dependency Modeling</a></li>
<li class="toctree-l3"><a class="reference internal" href="4.multi_objective/3.multi_loss_optim.html">3.4.3. Multi-Objective Loss Fusion</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="5.multi_scenario/index.html">3.5. Multi-Scenario Modeling</a><ul>
<li class="toctree-l3"><a class="reference internal" href="5.multi_scenario/1.multi_tower.html">3.5.1. Multi-Tower Architectures</a></li>
<li class="toctree-l3"><a class="reference internal" href="5.multi_scenario/2.dynamic_weight.html">3.5.2. Dynamic Weight Modeling</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../chapter_3_rerank/index.html">4. Re-ranking Models</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../chapter_3_rerank/1.greedy.html">4.1. Greedy Re-ranking</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_3_rerank/2.personalized.html">4.2. Personalized Re-ranking</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_3_rerank/3.summary.html">4.3. Chapter Summary</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../chapter_4_trends/index.html">5. Challenges and Research Hotspots</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../chapter_4_trends/1.debias.html">5.1. Model Debiasing</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_4_trends/2.cold_start.html">5.2. Cold-Start Problem</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_4_trends/3.generative.html">5.3. Generative Recommendation</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_4_trends/4.summary.html">5.4. Chapter Summary</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../chapter_5_projects/index.html">6. Hands-On Project</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../chapter_5_projects/1.understanding.html">6.1. Understanding the Task</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_5_projects/2.baseline.html">6.2. Baseline</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_5_projects/3.analysis.html">6.3. Data Analysis</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_5_projects/4.recall.html">6.4. Multi-Channel Retrieval</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_5_projects/5.feature_engineering.html">6.5. Feature Engineering</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_5_projects/6.ranking.html">6.6. Ranking Model</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../chapter_appendix/index.html">7. Appendix</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../chapter_appendix/word2vec.html">7.1. Word2vec</a></li>
</ul>
</li>
</ul>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../chapter_references/references.html">References</a></li>
</ul>

            </nav>
        
        </div>
    
</header>
        <main class="mdl-layout__content" tabIndex="0">

	<script type="text/javascript" src="../_static/sphinx_materialdesign_theme.js "></script>

    <div class="document">
        <div class="page-content" role="main">
        
  <section id="sequence-modeling">
<span id="id1"></span><h1><span class="section-number">3.3. </span>Sequence Modeling<a class="headerlink" href="#sequence-modeling" title="Permalink to this heading">¶</a></h1>
<p>In the previous section we explored how various feature-crossing models let a model automatically learn complex combinations of features. Whether second-order models such as FM and AFM, or higher-order models such as DCN and xDeepFM, their shared goal is to mine useful information from a static set of features. These models, however, have a common limitation: they mostly treat the user's historical behaviors as an unordered "bag of items", as if the user's interests were a static representation.</p>
<p>But user interests are not static; they show clear <strong>temporal ordering</strong> and <strong>dynamic evolution</strong>. A user who browses a "mouse" and then a "monitor" signals a very different purchase intent from one who browses a "novel" and then a "monitor": the former is likely a hobbyist assembling a PC, while the latter may just be browsing casually after work. Traditional feature-crossing models struggle to capture this time-varying intent hidden in the order of behaviors.</p>
<p>In this section we therefore change perspective: instead of viewing the user's history as a pile of static features, we treat it as a dynamic sequence, and we focus on modeling that behavior sequence to extract the user's dynamic, evolving interests. We will introduce three representative industrial models for sequence modeling, DIN, DIEN, and DSIN, and see how each tackles this core challenge.</p>
<section id="id2">
<h2><span class="section-number">3.3.1. </span>Attention with Local Activation<a class="headerlink" href="#id2" title="Permalink to this heading">¶</a></h2>
<p>On a large e-commerce platform, user interests are <strong>diverse</strong>: over the same period a user may follow digital gadgets, browse sports gear, and buy household goods. In the traditional deep-learning setup (the Embedding&amp;MLP paradigm), the usual practice is to compress the embedding vectors of all of the user's historical behaviors (e.g., clicked item IDs) into a single <strong>fixed-length vector</strong> via a pooling operation to represent that user.</p>
<p>This fixed-length user vector quickly becomes the bottleneck for expressing diverse interests. Imagine that whether the system is about to recommend "running shoes" or a "phone", the user is represented by the very same vector. Asking one vector to carry every interest "equally" is not only difficult but also unfocused for any concrete recommendation task. Crudely enlarging the vector dimension to gain capacity, in turn, brings the risk of parameter explosion and overfitting.</p>
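<p>The bottleneck is easy to see in code: with pooling, every candidate sees the identical user vector. The following is a toy NumPy sketch (illustrative dimensions and variable names, not the book's implementation):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
H, D = 5, 4                                # 5 historical behaviors, embedding dim 4
behavior_embs = rng.normal(size=(H, D))    # embeddings of the user's clicked items

# Embedding&MLP baseline: sum pooling squeezes the whole history into ONE vector
user_vec = behavior_embs.sum(axis=0)       # shape (D,)

# The same user vector is used no matter which candidate is being scored
cand_shoes = rng.normal(size=(D,))
cand_phone = rng.normal(size=(D,))
score_shoes = user_vec @ cand_shoes
score_phone = user_vec @ cand_phone
print(user_vec.shape)                      # (4,)
```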
<p><strong>DIN's core idea: Local Activation</strong></p>
<p>The authors of the Deep Interest Network (DIN) <span id="id3">(<a class="reference internal" href="../chapter_references/references.html#id52" title="Zhou, G., Zhu, X., Song, C., Fan, Y., Zhu, H., Ma, X., … Gai, K. (2018). Deep interest network for click-through rate prediction. Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery &amp; data mining (pp. 1059–1068).">Zhou <em>et al.</em>, 2018</a>)</span> observed that any specific click is usually "activated" by only <strong>part</strong> of the user's historical interests. When a "mechanical keyboard" is recommended to a digital-gadget enthusiast, the decisive signals are most likely his recent views of a "gaming mouse" and a "graphics card", not the "running shoes" he bought last month.</p>
<p>Based on this, DIN takes the following position: <strong>a user's interest representation should not be fixed; it should change dynamically with the current candidate ad.</strong></p>
<figure class="align-default" id="id8">
<span id="din-architecture"></span><a class="reference internal image-reference" href="../_images/din_architecture.png"><img alt="../_images/din_architecture.png" src="../_images/din_architecture.png" style="width: 800px;" /></a>
<figcaption>
<p><span class="caption-number">Fig. 3.3.1 </span><span class="caption-text">DIN model architecture (right) compared with the base model (left)</span><a class="headerlink" href="#id8" title="Permalink to this image">¶</a></p>
</figcaption>
</figure>
<p><strong>Technical implementation: the attention mechanism</strong></p>
<p>To realize local activation, DIN introduces a key module, the <strong>Local Activation Unit</strong>, which is essentially an <strong>attention mechanism</strong>. As the right side of the figure above shows, instead of simply pooling the embeddings of all historical behaviors as the base model does (left of <a class="reference internal" href="#din-architecture"><span class="std std-numref">Fig. 3.3.1</span></a>), DIN computes a weighted sum.</p>
<p>How these weights (the attention scores) are computed embodies DIN's core idea. Concretely, for a given user U and candidate ad A, the user's interest representation <span class="math notranslate nohighlight">\(\boldsymbol{v}_{U}(A)\)</span> is computed as:</p>
<div class="math notranslate nohighlight" id="equation-chapter-2-ranking-3-sequence-0">
<span class="eqno">(3.3.1)<a class="headerlink" href="#equation-chapter-2-ranking-3-sequence-0" title="Permalink to this equation">¶</a></span>\[\boldsymbol{v}_{U}(A)=f(\boldsymbol{v}_{A},\boldsymbol{e}_{1},\boldsymbol{e}_{2},\ldots,\boldsymbol{e}_{H})=\sum_{j=1}^{H}a(\boldsymbol{e}_{j},\boldsymbol{v}_{A})\boldsymbol{e}_{j}=\sum_{j=1}^{H}w_{j}\boldsymbol{e}_{j}\]</div>
<p>where:</p>
<ul class="simple">
<li><p><span class="math notranslate nohighlight">\(\boldsymbol{e}_{1}, \boldsymbol{e}_{2}, \ldots, \boldsymbol{e}_{H}\)</span> is the list of embedding vectors of user U's historical behaviors.</p></li>
<li><p><span class="math notranslate nohighlight">\(\boldsymbol{v}_{A}\)</span> is the embedding vector of candidate ad A.</p></li>
<li><p><span class="math notranslate nohighlight">\(a(\boldsymbol{e}_{j}, \boldsymbol{v}_{A})\)</span> is an activation unit (usually a small feed-forward network) that takes the historical behavior <span class="math notranslate nohighlight">\(\boldsymbol{e}_{j}\)</span> and the candidate ad <span class="math notranslate nohighlight">\(\boldsymbol{v}_{A}\)</span> as input and outputs a weight <span class="math notranslate nohighlight">\(w_{j}\)</span>. This weight is the "relevance", or attention score, of behavior <span class="math notranslate nohighlight">\(\boldsymbol{e}_{j}\)</span> with respect to ad <span class="math notranslate nohighlight">\(\boldsymbol{v}_{A}\)</span>.</p></li>
</ul>
<p>Under this formula, the user's final interest representation <span class="math notranslate nohighlight">\(\boldsymbol{v}_{U}(A)\)</span> is no longer a fixed vector but is tied to candidate ad A: historical behaviors more relevant to A receive higher weights and dominate the resulting interest vector.</p>
<p>A noteworthy detail is that the attention weights <span class="math notranslate nohighlight">\(w_{j}\)</span> computed by DIN are not normalized with a softmax, so <span class="math notranslate nohighlight">\(\sum_{j} w_{j}\)</span> need not equal 1. This is deliberate: it preserves the absolute intensity of the user's interest. For example, if most of a user's historical behaviors are highly relevant to an ad, the weighted sum has a comparatively large norm, and vice versa. The model therefore captures not only the "direction" of an interest but also its "strength".</p>
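<p>The effect of skipping softmax can be checked with a small numerical sketch (a toy NumPy example, not the paper's code): with raw weights, a history that is uniformly highly relevant to the ad produces an interest vector with a larger norm than a weakly relevant one, whereas softmax normalization erases exactly this intensity signal.</p>

```python
import numpy as np

def pooled_interest(keys, weights, use_softmax):
    """Weighted sum of behavior embeddings, optionally softmax-normalized."""
    if use_softmax:
        weights = np.exp(weights) / np.exp(weights).sum()
    return (weights[:, None] * keys).sum(axis=0)

keys = np.ones((4, 3))                      # 4 behaviors, embedding dim 3
strong = np.array([2.0, 2.0, 2.0, 2.0])     # every behavior highly relevant
weak = np.array([0.1, 0.1, 0.1, 0.1])       # every behavior weakly relevant

# Raw weights (DIN's choice): the vector norm reflects interest intensity
raw_gap = (np.linalg.norm(pooled_interest(keys, strong, False))
           - np.linalg.norm(pooled_interest(keys, weak, False)))

# With softmax, both histories collapse to the same interest vector
softmax_same = np.allclose(pooled_interest(keys, strong, True),
                           pooled_interest(keys, weak, True))
```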
<p><strong>Core code</strong></p>
<p>DIN's attention mechanism computes the weights by interacting the candidate ad with the historical behaviors from multiple angles:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="c1"># Core computation of the DIN attention layer</span>
<span class="c1"># query: candidate ad, [batch_size, 1, embedding_dim]</span>
<span class="c1"># keys: historical behavior sequence, [batch_size, seq_len, embedding_dim]</span>

<span class="n">query</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">squeeze</span><span class="p">(</span><span class="n">query</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>  <span class="c1"># [B, H]</span>
<span class="n">length</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">shape</span><span class="p">(</span><span class="n">keys</span><span class="p">)[</span><span class="o">-</span><span class="mi">2</span><span class="p">]</span>
<span class="n">query</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">expand_dims</span><span class="p">(</span><span class="n">query</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>  <span class="c1"># [B, 1, H]</span>

<span class="c1"># Build multi-angle interaction features: query, keys, query-keys, query*keys</span>
<span class="n">att_inputs</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">concat</span><span class="p">([</span>
    <span class="n">tf</span><span class="o">.</span><span class="n">tile</span><span class="p">(</span><span class="n">query</span><span class="p">,</span> <span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="n">length</span><span class="p">,</span> <span class="mi">1</span><span class="p">]),</span>  <span class="c1"># tile query to match the sequence length</span>
    <span class="n">keys</span><span class="p">,</span>                             <span class="c1"># historical behaviors</span>
    <span class="n">query</span> <span class="o">-</span> <span class="n">keys</span><span class="p">,</span>                     <span class="c1"># difference features</span>
    <span class="n">query</span> <span class="o">*</span> <span class="n">keys</span>                      <span class="c1"># element-wise product features</span>
<span class="p">],</span> <span class="n">axis</span><span class="o">=-</span><span class="mi">1</span><span class="p">)</span>  <span class="c1"># [B, L, 4*H]</span>

<span class="c1"># Compute attention scores with a feed-forward network</span>
<span class="n">hidden_layer</span> <span class="o">=</span> <span class="n">ffn_layer</span><span class="p">(</span><span class="n">att_inputs</span><span class="p">)</span>  <span class="c1"># [B, L, hidden_units]</span>
<span class="n">scores</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">1</span><span class="p">)(</span><span class="n">hidden_layer</span><span class="p">)</span>  <span class="c1"># [B, L, 1]</span>

<span class="c1"># Apply the mask and take the weighted sum (note: no softmax normalization)</span>
<span class="n">attention_weights</span> <span class="o">=</span> <span class="n">scores</span> <span class="o">*</span> <span class="n">mask</span>  <span class="c1"># [B, L, 1]</span>
<span class="n">user_interest</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">reduce_sum</span><span class="p">(</span><span class="n">keys</span> <span class="o">*</span> <span class="n">attention_weights</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>  <span class="c1"># [B, H]</span>
</pre></div>
</div>
<p>The key to this design is that the four interaction modes <code class="docutils literal notranslate"><span class="pre">[query,</span> <span class="pre">keys,</span> <span class="pre">query-keys,</span> <span class="pre">query*keys]</span></code> let the model measure the relevance between each historical behavior and the candidate ad from several angles, while softmax normalization is skipped to retain interest-intensity information.</p>
</section>
<section id="id4">
<h2><span class="section-number">3.3.2. </span>Modeling Interest Evolution<a class="headerlink" href="#id4" title="Permalink to this heading">¶</a></h2>
<p>DIN successfully captures the "diversity" and "local activation" of user interests, but a limitation remains: it treats the user's history as an unordered set and ignores the <strong>temporal dependencies</strong> between behaviors. User interests are not only diverse; they also <strong>evolve</strong> continuously.</p>
<p>How can this be solved? The Deep Interest Evolution Network (DIEN) <span id="id5">(<a class="reference internal" href="../chapter_references/references.html#id53" title="Zhou, G., Mou, N., Fan, Y., Pi, Q., Bian, W., Zhou, C., … Gai, K. (2019). Deep interest evolution network for click-through rate prediction. Proceedings of the AAAI conference on artificial intelligence (pp. 5941–5948).">Zhou <em>et al.</em>, 2019</a>)</span> offers an answer. Its premise is simple: knowing what a user liked in the past is not enough; we also need to understand how those interests have been changing in order to better predict what the user will like next.</p>
<figure class="align-default" id="id9">
<span id="dien-architecture"></span><a class="reference internal image-reference" href="../_images/dien.png"><img alt="../_images/dien.png" src="../_images/dien.png" style="width: 800px;" /></a>
<figcaption>
<p><span class="caption-number">Fig. 3.3.2 </span><span class="caption-text">DIEN model architecture</span><a class="headerlink" href="#id9" title="Permalink to this image">¶</a></p>
</figcaption>
</figure>
<p>DIEN's central insight is an interesting one: clicks and purchases are only surface phenomena; what really matters is the latent <strong>interest state</strong> behind them. If you buy a programming book today and a keyboard tomorrow, those behaviors may reflect a growing interest in programming. DIEN sets out to capture this pattern of interest change, and it does so with a two-stage architecture.</p>
<p><strong>Stage 1: the Interest Extractor Layer</strong></p>
<p>This layer's job is to distill, from the user's behavior sequence, the information that truly reflects the <strong>interest state</strong>. DIEN uses a GRU to process the behavior embedding sequence <span class="math notranslate nohighlight">\({\boldsymbol{e}_1, \boldsymbol{e}_2, \dots, \boldsymbol{e}_T}\)</span> in temporal order. In principle, the GRU's hidden state <span class="math notranslate nohighlight">\(\boldsymbol{h}_t\)</span> at time <span class="math notranslate nohighlight">\(t\)</span> should summarize all information up to that moment. But does this hidden state really represent the user's "interest" accurately?</p>
<p>DIEN's trick is neat: if the interest state at time <span class="math notranslate nohighlight">\(t\)</span> reflects the user's true intent, it should be able to predict what the user does next. DIEN therefore adds an <strong>auxiliary loss</strong> that asks the interest state <span class="math notranslate nohighlight">\(\boldsymbol{h}_t\)</span> at time <span class="math notranslate nohighlight">\(t\)</span> to predict the user's actual behavior <span class="math notranslate nohighlight">\(\boldsymbol{e}_{t+1}\)</span> at time <span class="math notranslate nohighlight">\(t+1\)</span>, "forcing" the model to learn more meaningful interest representations. Concretely, the auxiliary loss <span class="math notranslate nohighlight">\(L_{aux}\)</span> is defined as:</p>
<div class="math notranslate nohighlight" id="equation-chapter-2-ranking-3-sequence-1">
<span class="eqno">(3.3.2)<a class="headerlink" href="#equation-chapter-2-ranking-3-sequence-1" title="Permalink to this equation">¶</a></span>\[L_{aux}=-\frac{1}{N}\left(\sum_{i=1}^{N}\sum_{t=1}^{T}\log\sigma(\boldsymbol{h}^i_t,\boldsymbol{e}^i_{b[t+1]})+\log(1-\sigma(\boldsymbol{h}^i_t,\boldsymbol{\hat{e}}^i_{b[t+1]}))\right)\]</div>
<p>where:</p>
<ul class="simple">
<li><p><span class="math notranslate nohighlight">\(\boldsymbol{h}^i_t\)</span> is the interest state of user i at time t (i.e., the GRU hidden state).</p></li>
<li><p><span class="math notranslate nohighlight">\(\boldsymbol{e}^i_{b[t+1]}\)</span> is the embedding of the item user i actually clicked at time t+1 (the positive sample).</p></li>
<li><p><span class="math notranslate nohighlight">\(\boldsymbol{\hat{e}}^i_{b[t+1]}\)</span> is the embedding of an item negatively sampled from the item pool (the negative sample).</p></li>
<li><p><span class="math notranslate nohighlight">\(\sigma(\cdot)\)</span> is the sigmoid function; here it takes the dot product of the two vectors and maps the result into (0,1).</p></li>
</ul>
<p>This auxiliary loss is optimized jointly with the model's final CTR prediction loss <span class="math notranslate nohighlight">\(L_{target}\)</span>: <span class="math notranslate nohighlight">\(L = L_{target} + \alpha L_{aux}\)</span>. This extra supervision signal guides the GRU's learning at every time step, so that the hidden states <span class="math notranslate nohighlight">\(\boldsymbol{h}_t\)</span> it produces express the user's latent interest more precisely.</p>
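<p>Equation (3.3.2) can be written out almost verbatim when the activation is taken as the sigmoid of a dot product. The sketch below is toy NumPy code for a single user with random placeholder embeddings, no padding mask, and an assumed placeholder value for the CTR loss; it computes <em>L<sub>aux</sub></em> and combines it with the target loss using weight alpha:</p>

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
T, D = 6, 8
h = rng.normal(size=(T, D))        # interest states h_t from the GRU
e_pos = rng.normal(size=(T, D))    # e_{b[t]}: items actually clicked (positives)
e_neg = rng.normal(size=(T, D))    # \hat{e}_{b[t]}: negatively sampled items

# h_t predicts the behavior at t+1: pair h[:-1] with e[1:]
pos_logits = np.einsum('td,td->t', h[:-1], e_pos[1:])
neg_logits = np.einsum('td,td->t', h[:-1], e_neg[1:])
l_aux = -np.mean(np.log(sigmoid(pos_logits) + 1e-8)
                 + np.log(1.0 - sigmoid(neg_logits) + 1e-8))

# Joint objective: L = L_target + alpha * L_aux
alpha, l_target = 1.0, 0.3         # placeholder CTR loss value for illustration
l_total = l_target + alpha * l_aux
```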
<p><strong>Core code</strong></p>
<p>The auxiliary loss supervises the learning of the interest states by predicting the next behavior:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="c1"># 兴趣提取层的辅助损失计算</span>
<span class="c1"># interest_states: [batch_size, seq_len, hidden_units]</span>
<span class="c1"># pos_behaviors: [batch_size, seq_len, embedding_dim] 正样本行为</span>
<span class="c1"># neg_behaviors: [batch_size, seq_len, embedding_dim] 负样本行为</span>

<span class="c1"># 用t时刻的兴趣预测t+1时刻的行为</span>
<span class="n">current_interests</span> <span class="o">=</span> <span class="n">interest_states</span><span class="p">[:,</span> <span class="p">:</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="p">:]</span>      <span class="c1"># [B, T-1, H]</span>
<span class="n">next_pos_behaviors</span> <span class="o">=</span> <span class="n">pos_behaviors</span><span class="p">[:,</span> <span class="mi">1</span><span class="p">:,</span> <span class="p">:]</span>        <span class="c1"># [B, T-1, D]</span>
<span class="n">next_neg_behaviors</span> <span class="o">=</span> <span class="n">neg_behaviors</span><span class="p">[:,</span> <span class="mi">1</span><span class="p">:,</span> <span class="p">:]</span>        <span class="c1"># [B, T-1, D]</span>

<span class="c1"># Concatenate interest and behavior, then feed into an MLP for prediction</span>
<span class="n">pos_input</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">concat</span><span class="p">([</span><span class="n">current_interests</span><span class="p">,</span> <span class="n">next_pos_behaviors</span><span class="p">],</span> <span class="n">axis</span><span class="o">=-</span><span class="mi">1</span><span class="p">)</span>
<span class="n">neg_input</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">concat</span><span class="p">([</span><span class="n">current_interests</span><span class="p">,</span> <span class="n">next_neg_behaviors</span><span class="p">],</span> <span class="n">axis</span><span class="o">=-</span><span class="mi">1</span><span class="p">)</span>

<span class="c1"># Predicted probabilities for the positive and negative samples</span>
<span class="n">pos_probs</span> <span class="o">=</span> <span class="n">auxiliary_mlp</span><span class="p">(</span><span class="n">pos_input</span><span class="p">)</span>  <span class="c1"># [B, T-1, 1]</span>
<span class="n">neg_probs</span> <span class="o">=</span> <span class="n">auxiliary_mlp</span><span class="p">(</span><span class="n">neg_input</span><span class="p">)</span>  <span class="c1"># [B, T-1, 1]</span>

<span class="c1"># Binary cross-entropy loss</span>
<span class="n">aux_loss</span> <span class="o">=</span> <span class="o">-</span><span class="n">tf</span><span class="o">.</span><span class="n">reduce_mean</span><span class="p">(</span>
    <span class="n">tf</span><span class="o">.</span><span class="n">math</span><span class="o">.</span><span class="n">log</span><span class="p">(</span><span class="n">pos_probs</span> <span class="o">+</span> <span class="mf">1e-8</span><span class="p">)</span> <span class="o">+</span> <span class="n">tf</span><span class="o">.</span><span class="n">math</span><span class="o">.</span><span class="n">log</span><span class="p">(</span><span class="mi">1</span> <span class="o">-</span> <span class="n">neg_probs</span> <span class="o">+</span> <span class="mf">1e-8</span><span class="p">)</span>
<span class="p">)</span>
</pre></div>
</div>
<p>This design ensures that the GRU's hidden states not only record historical information but can also effectively predict future behavior, and thus learn more meaningful interest representations.</p>
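<p>As a quick numeric illustration of the joint objective <span class="math notranslate nohighlight">\(L = L_{target} + \alpha L_{aux}\)</span>, here is a minimal self-contained sketch with toy probabilities; <code>alpha</code> and <code>target_loss</code> are placeholder values for illustration, not taken from the original implementation:</p>

```python
import numpy as np

def auxiliary_loss(pos_probs, neg_probs, eps=1e-8):
    # Binary cross-entropy: push positive-pair probabilities toward 1
    # and negative-pair probabilities toward 0
    return -np.mean(np.log(pos_probs + eps) + np.log(1 - neg_probs + eps))

# Toy predicted probabilities for (interest, next-behavior) pairs
pos_probs = np.array([0.9, 0.8, 0.95])   # positive samples
neg_probs = np.array([0.1, 0.2, 0.05])   # negative samples

aux = auxiliary_loss(pos_probs, neg_probs)
target_loss = 0.7   # placeholder for the CTR prediction loss L_target
alpha = 0.5         # hyperparameter weighting the auxiliary signal
total_loss = target_loss + alpha * aux
```

<p>The closer the positive pairs are to 1 and the negative pairs to 0, the smaller the auxiliary term, so minimizing <code>total_loss</code> supervises every intermediate hidden state as well as the final CTR prediction.</p>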
<p><strong>Stage 2: Interest Evolving Layer</strong></p>
<p>After the first stage, we obtain an <strong>interest state sequence</strong>
<span class="math notranslate nohighlight">\(\boldsymbol{h}_1, \boldsymbol{h}_2, \dots, \boldsymbol{h}_T\)</span> that better represents the user's intrinsic interests. The goal of the second stage is to model how this interest sequence evolves.</p>
<p>Interest evolution, however, is not always smooth; it is often accompanied by <strong>interest drifting</strong>, where a user may switch rapidly between different points of interest. If a standard GRU were used to model this interest sequence, unrelated historical interests (drift) could interfere with tracking the evolution of the currently dominant interest.</p>
<p>To address this, DIEN again borrows DIN's idea and fuses it with the sequence model, designing the GRU with Attentional Update Gate (AUGRU). The core of AUGRU is to inject an attention mechanism into the GRU's update gate. The attention score <span class="math notranslate nohighlight">\(a_t\)</span> is determined jointly by the interest state <span class="math notranslate nohighlight">\(\boldsymbol{h}_t\)</span> at time <span class="math notranslate nohighlight">\(t\)</span> and the <strong>candidate ad</strong> <span class="math notranslate nohighlight">\(\boldsymbol{e}_a\)</span>:</p>
<div class="math notranslate nohighlight" id="equation-chapter-2-ranking-3-sequence-2">
<span class="eqno">(3.3.3)<a class="headerlink" href="#equation-chapter-2-ranking-3-sequence-2" title="Permalink to this equation">¶</a></span>\[a_t = \frac{\exp(\boldsymbol{h}_t W \boldsymbol{e}_a)}{\sum_{j=1}^T\exp(\boldsymbol{h}_j W \boldsymbol{e}_a)}\]</div>
<p>This attention score <span class="math notranslate nohighlight">\(a_t\)</span> then scales the GRU's original update gate <span class="math notranslate nohighlight">\(\boldsymbol{u}'_t\)</span>:</p>
<div class="math notranslate nohighlight" id="equation-chapter-2-ranking-3-sequence-3">
<span class="eqno">(3.3.4)<a class="headerlink" href="#equation-chapter-2-ranking-3-sequence-3" title="Permalink to this equation">¶</a></span>\[\boldsymbol{\tilde{u}}'_t = a_t \cdot \boldsymbol{u}'_t\]</div>
<p>Finally, the attention-scaled update gate <span class="math notranslate nohighlight">\(\boldsymbol{\tilde{u}}'_t\)</span> is used to update the hidden state:</p>
<div class="math notranslate nohighlight" id="equation-chapter-2-ranking-3-sequence-4">
<span class="eqno">(3.3.5)<a class="headerlink" href="#equation-chapter-2-ranking-3-sequence-4" title="Permalink to this equation">¶</a></span>\[\boldsymbol{h}_{t}' = (1 - \boldsymbol{\tilde{u}}_t') \circ \boldsymbol{h}_{t-1}' + \boldsymbol{\tilde{u}}_t' \circ \boldsymbol{\tilde{h}}_{t}'\]</div>
<p>where <span class="math notranslate nohighlight">\(\circ\)</span> denotes the element-wise product.</p>
<p>In this way, at every step of interest evolution, AUGRU consults the current candidate ad to judge the relevance of each historical interest. Interests more relevant to the candidate ad receive a larger <span class="math notranslate nohighlight">\(a_t\)</span>, carry more weight in the update gate, and therefore propagate more smoothly through the sequence; conversely, the influence of irrelevant interests (drift) is weakened. This lets the model focus on the interest evolution path most relevant to the current recommendation task.</p>
<p><strong>Core Code</strong></p>
<p>The core of AUGRU is scaling the GRU's update gate with the attention score:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="c1"># AUGRU forward pass</span>
<span class="c1"># interest_states: [batch_size, seq_len, hidden_units]</span>
<span class="c1"># target_item_embedding: [batch_size, embedding_dim]</span>

<span class="c1"># 1. Compute bilinear attention scores</span>
<span class="c1"># h_t * W * e_a</span>
<span class="n">h_W</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">tensordot</span><span class="p">(</span><span class="n">interest_states</span><span class="p">,</span> <span class="n">bilinear_weight</span><span class="p">,</span> <span class="n">axes</span><span class="o">=</span><span class="p">[[</span><span class="mi">2</span><span class="p">],</span> <span class="p">[</span><span class="mi">0</span><span class="p">]])</span>
<span class="n">target_expanded</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">expand_dims</span><span class="p">(</span><span class="n">target_item_embedding</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>
<span class="n">attention_scores</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">reduce_sum</span><span class="p">(</span><span class="n">h_W</span> <span class="o">*</span> <span class="n">target_expanded</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span>  <span class="c1"># [B, T]</span>
<span class="n">attention_scores</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">softmax</span><span class="p">(</span><span class="n">attention_scores</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>  <span class="c1"># [B, T]</span>

<span class="c1"># 2. Process the sequence step by step</span>
<span class="n">hidden_state</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">zeros</span><span class="p">([</span><span class="n">batch_size</span><span class="p">,</span> <span class="n">hidden_units</span><span class="p">])</span>
<span class="k">for</span> <span class="n">t</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">seq_len</span><span class="p">):</span>
    <span class="n">current_input</span> <span class="o">=</span> <span class="n">interest_states</span><span class="p">[:,</span> <span class="n">t</span><span class="p">,</span> <span class="p">:]</span>     <span class="c1"># [B, H]</span>
    <span class="n">current_attention</span> <span class="o">=</span> <span class="n">attention_scores</span><span class="p">[:,</span> <span class="n">t</span><span class="p">]</span>   <span class="c1"># [B]</span>

    <span class="c1"># Standard GRU computations</span>
    <span class="n">update_gate</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">sigmoid</span><span class="p">(</span>
        <span class="n">dense_input_update</span><span class="p">(</span><span class="n">current_input</span><span class="p">)</span> <span class="o">+</span> <span class="n">dense_hidden_update</span><span class="p">(</span><span class="n">hidden_state</span><span class="p">)</span>
    <span class="p">)</span>  <span class="c1"># [B, H]</span>

    <span class="n">reset_gate</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">sigmoid</span><span class="p">(</span>
        <span class="n">dense_input_reset</span><span class="p">(</span><span class="n">current_input</span><span class="p">)</span> <span class="o">+</span> <span class="n">dense_hidden_reset</span><span class="p">(</span><span class="n">hidden_state</span><span class="p">)</span>
    <span class="p">)</span>  <span class="c1"># [B, H]</span>

    <span class="n">candidate_state</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">tanh</span><span class="p">(</span>
        <span class="n">dense_input_candidate</span><span class="p">(</span><span class="n">current_input</span><span class="p">)</span> <span class="o">+</span>
        <span class="n">dense_hidden_candidate</span><span class="p">(</span><span class="n">reset_gate</span> <span class="o">*</span> <span class="n">hidden_state</span><span class="p">)</span>
    <span class="p">)</span>  <span class="c1"># [B, H]</span>

    <span class="c1"># Key step: scale the update gate with the attention score</span>
    <span class="n">attention_expanded</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">expand_dims</span><span class="p">(</span><span class="n">current_attention</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>
    <span class="n">attention_expanded</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">tile</span><span class="p">(</span><span class="n">attention_expanded</span><span class="p">,</span> <span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="n">hidden_units</span><span class="p">])</span>
    <span class="n">attention_update_gate</span> <span class="o">=</span> <span class="n">attention_expanded</span> <span class="o">*</span> <span class="n">update_gate</span>  <span class="c1"># [B, H]</span>

    <span class="c1"># Update the hidden state</span>
    <span class="n">hidden_state</span> <span class="o">=</span> <span class="p">(</span><span class="mi">1</span> <span class="o">-</span> <span class="n">attention_update_gate</span><span class="p">)</span> <span class="o">*</span> <span class="n">hidden_state</span> <span class="o">+</span> \
                   <span class="n">attention_update_gate</span> <span class="o">*</span> <span class="n">candidate_state</span>
</pre></div>
</div>
<p>By dynamically adjusting the update gate with attention scores, AUGRU lets interests relevant to the target ad pass through smoothly while suppressing irrelevant interests (drift), thereby capturing the interest evolution path more precisely.</p>
</section>
<section id="id6">
<h2><span class="section-number">3.3.3. </span>From Behavior Sequences to Session Sequences<a class="headerlink" href="#id6" title="Permalink to this heading">¶</a></h2>
<p>From DIN to DIEN, the model's understanding of user interest has moved from "static relevance" to "dynamic evolution". Both, however, treat user behavior as one continuous sequence, whereas in reality user behavior is largely intermittent: within <strong>a single session</strong>, a user usually has one clear, focused intent, while <strong>across sessions</strong> the point of interest can shift dramatically.</p>
<figure class="align-default" id="id10">
<span id="dsin-session-structure"></span><a class="reference internal image-reference" href="../_images/dsin_session.png"><img alt="../_images/dsin_session.png" src="../_images/dsin_session.png" style="width: 350px;" /></a>
<figcaption>
<p><span class="caption-number">Fig. 3.3.3 </span><span class="caption-text">Example of the session structure of user behavior</span><a class="headerlink" href="#id10" title="Permalink to this image">¶</a></p>
</figcaption>
</figure>
<p>As the figure above shows, a user might browse all kinds of trousers within one session and then focus on rings in the next. This pattern of <strong>homogeneity within sessions and heterogeneity across sessions</strong> is very common. If a single RNN is applied directly to such a long sequence with obvious "breaks", the model has to work hard to learn these abrupt interest shifts, and the results are unsatisfactory. The Deep Session Interest Network (DSIN) <span id="id7">(<a class="reference internal" href="../chapter_references/references.html#id54" title="Feng, Y., Lv, F., Shen, W., Wang, M., Sun, F., Zhu, Y., &amp; Yang, K. (2019). Deep session interest network for click-through rate prediction. arXiv preprint arXiv:1905.06482.">Feng <em>et al.</em>, 2019</a>)</span>
takes the <strong>session</strong> as the basic unit for analyzing user behavior and models it with a <strong>hierarchical</strong> approach.</p>
<figure class="align-default" id="id11">
<span id="dsin-architecture"></span><a class="reference internal image-reference" href="../_images/dsin_architecture.png"><img alt="../_images/dsin_architecture.png" src="../_images/dsin_architecture.png" style="width: 400px;" /></a>
<figcaption>
<p><span class="caption-number">Fig. 3.3.4 </span><span class="caption-text">DSIN model architecture</span><a class="headerlink" href="#id11" title="Permalink to this image">¶</a></p>
</figcaption>
</figure>
<p><strong>DSIN's Technical Implementation: Hierarchical Modeling</strong></p>
<p>DSIN's architecture is shown above; its modeling process falls into several clear layers:</p>
<ol class="arabic">
<li><p><strong>Session Division Layer</strong>: the first step and the foundation of DSIN. Based on the time gap between behaviors (for example, when two consecutive behaviors are more than 30 minutes apart), it splits the raw, continuous long behavior sequence <span class="math notranslate nohighlight">\(\mathbf{S}\)</span> into multiple independent <strong>short session sequences</strong> <span class="math notranslate nohighlight">\(\mathbf{Q} = [\mathbf{Q}_1, \mathbf{Q}_2, ..., \mathbf{Q}_K]\)</span>.</p></li>
<li><p><strong>Session Interest Extractor Layer</strong>: this layer extracts one core interest vector for each session <span class="math notranslate nohighlight">\(\mathbf{Q}_k\)</span>. DSIN holds that although behaviors within a session share a focused intent, they differ in importance. Instead of simple pooling, it therefore uses <strong>self-attention</strong> (the core idea of the Transformer). The self-attention network captures the internal relations among all behaviors within the session and aggregates the most important information, producing a condensed interest vector <span class="math notranslate nohighlight">\(\mathbf{I}_k\)</span> for each session <span class="math notranslate nohighlight">\(\mathbf{Q}_k\)</span>.</p></li>
<li><p><strong>Session Interest Interacting Layer</strong>: the previous step yields a higher-level sequence, the <strong>sequence of session interest vectors</strong> <span class="math notranslate nohighlight">\(\mathbf{I}_1, \mathbf{I}_2, ..., \mathbf{I}_K\)</span>, which reflects how the user's interest evolves over a longer time scale. DSIN models this session sequence with a
<strong>bidirectional long short-term memory network (Bi-LSTM)</strong>
to capture the progression and dependencies between sessions. The Bi-LSTM outputs a context-aware session interest sequence <span class="math notranslate nohighlight">\([\mathbf{H}_1, \mathbf{H}_2, ..., \mathbf{H}_K]\)</span>.</p></li>
<li><p><strong>Session Interest Activating Layer</strong>: the final step follows DIN's idea. Given the current <strong>candidate ad</strong> <span class="math notranslate nohighlight">\(\mathbf{X}_I\)</span>, the model uses an attention mechanism to compute the importance of each session interest and takes a weighted sum to obtain the final user interest representation. DSIN applies this activation to the outputs of both the extractor layer and the interacting layer:</p>
<div class="math notranslate nohighlight" id="equation-chapter-2-ranking-3-sequence-5">
<span class="eqno">(3.3.6)<a class="headerlink" href="#equation-chapter-2-ranking-3-sequence-5" title="Permalink to this equation">¶</a></span>\[\mathbf{U}^{I} = \sum_{k=1}^{K} a_{k}^{I} \mathbf{I}_{k} \quad \text{and} \quad \mathbf{U}^{H} = \sum_{k=1}^{K} a_{k}^{H} \mathbf{H}_{k}\]
<p>where <span class="math notranslate nohighlight">\(a_{k}^{I}\)</span> and <span class="math notranslate nohighlight">\(a_{k}^{H}\)</span>
are attention weights computed with respect to the target item being scored. Finally, the two activated vectors <span class="math notranslate nohighlight">\(\mathbf{U}^{I}\)</span> and <span class="math notranslate nohighlight">\(\mathbf{U}^{H}\)</span> are concatenated to form the user's final interest representation.</p>
</li>
</ol>
<p>By introducing the session, an intermediate unit that better matches real user behavior patterns, DSIN decomposes the hard problem of long-sequence modeling into two cleaner sub-problems: <strong>aggregating information within sessions</strong> (via self-attention) and <strong>passing information between sessions</strong> (via Bi-LSTM). This hierarchical modeling lets the model characterize user interest at a finer granularity.</p>
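<p>The Session Division Layer itself does not appear in the core code that follows. A minimal sketch of its time-gap splitting rule, assuming each behavior is a hypothetical <code>(item_id, timestamp)</code> pair with timestamps in seconds (the 30-minute threshold matches the example above):</p>

```python
def divide_sessions(behaviors, gap_seconds=30 * 60):
    """Split a time-ordered behavior sequence into sessions.

    behaviors: list of (item_id, timestamp) pairs, sorted by timestamp.
    A new session starts whenever the gap between two consecutive
    behaviors exceeds gap_seconds (30 minutes by default).
    """
    sessions = []
    current = []
    prev_ts = None
    for item_id, ts in behaviors:
        if prev_ts is not None and ts - prev_ts > gap_seconds:
            sessions.append(current)   # close the previous session
            current = []
        current.append(item_id)
        prev_ts = ts
    if current:
        sessions.append(current)
    return sessions

# Behaviors at 0 min, 5 min, then 50 min -> two sessions
history = [('pants_1', 0), ('pants_2', 300), ('ring_1', 3000)]
print(divide_sessions(history))  # [['pants_1', 'pants_2'], ['ring_1']]
```

<p>Each resulting short sequence would then be padded or truncated to <code>sess_max_len</code> and at most <code>sess_max_count</code> sessions kept, matching the tensor shapes used in the core code below.</p>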
<p><strong>Core Code</strong></p>
<p>DSIN's three core components implement hierarchical session modeling:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="c1"># 1. Session interest extraction: multi-head self-attention aggregates within-session information</span>
<span class="c1"># session_embeddings: [batch_size, sess_max_count, sess_max_len, embedding_dim]</span>

<span class="n">session_interests</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">for</span> <span class="n">sess_idx</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">sess_max_count</span><span class="p">):</span>
    <span class="c1"># Get the embeddings of a single session</span>
    <span class="n">session_emb</span> <span class="o">=</span> <span class="n">session_embeddings</span><span class="p">[:,</span> <span class="n">sess_idx</span><span class="p">,</span> <span class="p">:,</span> <span class="p">:]</span>  <span class="c1"># [B, L, D]</span>

    <span class="c1"># Multi-head self-attention captures relations among items within the session</span>
    <span class="n">attention_output</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">MultiHeadAttention</span><span class="p">(</span>
        <span class="n">num_heads</span><span class="o">=</span><span class="n">att_head_num</span><span class="p">,</span>
        <span class="n">key_dim</span><span class="o">=</span><span class="n">att_embedding_size</span><span class="p">,</span>
        <span class="n">dropout</span><span class="o">=</span><span class="n">dropout_rate</span>
    <span class="p">)(</span><span class="n">session_emb</span><span class="p">,</span> <span class="n">session_emb</span><span class="p">)</span>  <span class="c1"># [B, L, d_model]</span>

    <span class="c1"># Average pooling yields a session-level representation</span>
    <span class="n">session_interest</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">reduce_mean</span><span class="p">(</span><span class="n">attention_output</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>  <span class="c1"># [B, d_model]</span>
    <span class="n">session_interests</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">session_interest</span><span class="p">)</span>

<span class="n">session_interests</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">stack</span><span class="p">(</span><span class="n">session_interests</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>  <span class="c1"># [B, K, d_model]</span>

<span class="c1"># 2. Session interest interaction: a bidirectional LSTM models the temporal relations between sessions</span>
<span class="n">session_interactions</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">Bidirectional</span><span class="p">(</span>
    <span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">LSTM</span><span class="p">(</span>
        <span class="n">d_model</span> <span class="o">//</span> <span class="mi">2</span><span class="p">,</span>
        <span class="n">return_sequences</span><span class="o">=</span><span class="kc">True</span><span class="p">,</span>
        <span class="n">dropout</span><span class="o">=</span><span class="n">dropout_rate</span>
    <span class="p">)</span>
<span class="p">)(</span><span class="n">session_interests</span><span class="p">)</span>  <span class="c1"># [B, K, d_model*2]</span>

<span class="c1"># 3. Session interest activation: activate relevant sessions based on the target item</span>
<span class="c1"># Expand the target item embedding to match the session dimension</span>
<span class="n">target_expanded</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">expand_dims</span><span class="p">(</span><span class="n">target_item_embedding</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>  <span class="c1"># [B, 1, D]</span>
<span class="n">target_repeated</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">tile</span><span class="p">(</span><span class="n">target_expanded</span><span class="p">,</span> <span class="p">[</span><span class="mi">1</span><span class="p">,</span> <span class="n">sess_max_count</span><span class="p">,</span> <span class="mi">1</span><span class="p">])</span>  <span class="c1"># [B, K, D]</span>

<span class="c1"># Concatenate session features with the target item</span>
<span class="n">combined_features</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">concat</span><span class="p">(</span>
    <span class="p">[</span><span class="n">session_interactions</span><span class="p">,</span> <span class="n">target_repeated</span><span class="p">],</span> <span class="n">axis</span><span class="o">=-</span><span class="mi">1</span>
<span class="p">)</span>  <span class="c1"># [B, K, d_model*2 + D]</span>

<span class="c1"># Compute attention weights</span>
<span class="n">attention_scores</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="s1">&#39;tanh&#39;</span><span class="p">)(</span><span class="n">combined_features</span><span class="p">)</span>
<span class="n">attention_weights</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">softmax</span><span class="p">(</span><span class="n">attention_scores</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>  <span class="c1"># [B, K, 1]</span>

<span class="c1"># Weighted aggregation of session features</span>
<span class="n">activated_features</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">reduce_sum</span><span class="p">(</span>
    <span class="n">session_interactions</span> <span class="o">*</span> <span class="n">attention_weights</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span>
<span class="p">)</span>  <span class="c1"># [B, d_model*2]</span>
</pre></div>
</div>
<p>This design embodies the idea of hierarchical modeling: first aggregate within sessions via self-attention, then propagate information across sessions with the Bi-LSTM, and finally activate the relevant sessions with attention, achieving a fine-grained characterization of user behavior.</p>
<p>This section introduced three key sequence models: DIN uses an attention mechanism to address the diversity of user interests, DIEN further models the temporal evolution of interest, and DSIN introduces the session concept for hierarchical modeling. These models embody the core ideas of sequence modeling: dynamism (adjusting the interest representation to the task), sequentiality (exploiting temporal order), and focus (filtering task-relevant information). As the field advances, future sequence-modeling methods will combine more advanced techniques to better understand users' dynamic needs.</p>
<p><strong>Code in Practice</strong></p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span><span class="w"> </span><span class="nn">funrec</span><span class="w"> </span><span class="kn">import</span> <span class="n">compare_models</span>

<span class="n">compare_models</span><span class="p">([</span><span class="s1">&#39;din&#39;</span><span class="p">,</span> <span class="s1">&#39;dien&#39;</span><span class="p">,</span> <span class="s1">&#39;dsin&#39;</span><span class="p">])</span>
</pre></div>
</div>
<div class="output highlight-default notranslate"><div class="highlight"><pre><span></span><span class="o">+--------+--------+--------+------------+</span>
<span class="o">|</span> <span class="n">模型</span>   <span class="o">|</span>    <span class="n">auc</span> <span class="o">|</span>   <span class="n">gauc</span> <span class="o">|</span>   <span class="n">val_user</span> <span class="o">|</span>
<span class="o">+========+========+========+============+</span>
<span class="o">|</span> <span class="n">din</span>    <span class="o">|</span> <span class="mf">0.5555</span> <span class="o">|</span> <span class="mf">0.5337</span> <span class="o">|</span>        <span class="mi">928</span> <span class="o">|</span>
<span class="o">+--------+--------+--------+------------+</span>
<span class="o">|</span> <span class="n">dien</span>   <span class="o">|</span> <span class="mf">0.5852</span> <span class="o">|</span> <span class="mf">0.5587</span> <span class="o">|</span>        <span class="mi">928</span> <span class="o">|</span>
<span class="o">+--------+--------+--------+------------+</span>
<span class="o">|</span> <span class="n">dsin</span>   <span class="o">|</span> <span class="mf">0.5391</span> <span class="o">|</span> <span class="mf">0.5407</span> <span class="o">|</span>         <span class="mi">99</span> <span class="o">|</span>
<span class="o">+--------+--------+--------+------------+</span>
</pre></div>
</div>
</section>
</section>


        </div>
        <div class="side-doc-outline">
            <div class="side-doc-outline--content"> 
<div class="localtoc">
    <p class="caption">
      <span class="caption-text">Table Of Contents</span>
    </p>
    <ul>
<li><a class="reference internal" href="#">3.3. Sequence Modeling</a><ul>
<li><a class="reference internal" href="#id2">3.3.1. Locally Activated Attention Mechanism</a></li>
<li><a class="reference internal" href="#id4">3.3.2. Modeling Interest Evolution</a></li>
<li><a class="reference internal" href="#id6">3.3.3. From Behavior Sequences to Session Sequences</a></li>
</ul>
</li>
</ul>

</div>
            </div>
        </div>

      <div class="clearer"></div>
    </div><div class="pagenation">
     <a id="button-prev" href="2.feature_crossing/2.higher_order.html" class="mdl-button mdl-js-button mdl-js-ripple-effect mdl-button--colored" role="button" accesskey="P">
         <i class="pagenation-arrow-L fas fa-arrow-left fa-lg"></i>
         <div class="pagenation-text">
            <span class="pagenation-direction">Previous</span>
            <div>3.2.2. Higher-Order Feature Crossing</div>
         </div>
     </a>
     <a id="button-next" href="4.multi_objective/index.html" class="mdl-button mdl-js-button mdl-js-ripple-effect mdl-button--colored" role="button" accesskey="N">
         <i class="pagenation-arrow-R fas fa-arrow-right fa-lg"></i>
        <div class="pagenation-text">
            <span class="pagenation-direction">Next</span>
            <div>3.4. Multi-Objective Modeling</div>
        </div>
     </a>
  </div>
        
        </main>
    </div>
  </body>
</html>