<!DOCTYPE html>

<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" /><meta name="generator" content="Docutils 0.19: https://docutils.sourceforge.io/" />

    <meta http-equiv="x-ua-compatible" content="ie=edge">
    
    <title>3.2.2. 高阶特征交叉 &#8212; FunRec 推荐系统 0.0.1 documentation</title>

    <link rel="stylesheet" href="../../_static/material-design-lite-1.3.0/material.blue-deep_orange.min.css" type="text/css" />
    <link rel="stylesheet" href="../../_static/sphinx_materialdesign_theme.css" type="text/css" />
    <link rel="stylesheet" href="../../_static/fontawesome/all.css" type="text/css" />
    <link rel="stylesheet" href="../../_static/fonts.css" type="text/css" />
    <link rel="stylesheet" type="text/css" href="../../_static/pygments.css" />
    <link rel="stylesheet" type="text/css" href="../../_static/basic.css" />
    <link rel="stylesheet" type="text/css" href="../../_static/d2l.css" />
    <script data-url_root="../../" id="documentation_options" src="../../_static/documentation_options.js"></script>
    <script src="../../_static/jquery.js"></script>
    <script src="../../_static/underscore.js"></script>
    <script src="../../_static/_sphinx_javascript_frameworks_compat.js"></script>
    <script src="../../_static/doctools.js"></script>
    <script src="../../_static/sphinx_highlight.js"></script>
    <script src="../../_static/d2l.js"></script>
    <script async="async" src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
    <link rel="index" title="Index" href="../../genindex.html" />
    <link rel="search" title="Search" href="../../search.html" />
    <link rel="next" title="3.3. 序列建模" href="../3.sequence.html" />
    <link rel="prev" title="3.2.1. 二阶特征交叉" href="1.second_order.html" /> 
  </head>
<body>
    <div class="mdl-layout mdl-js-layout mdl-layout--fixed-header mdl-layout--fixed-drawer"><header class="mdl-layout__header mdl-layout__header--waterfall ">
    <div class="mdl-layout__header-row">
        
        <nav class="mdl-navigation breadcrumb">
            <a class="mdl-navigation__link" href="../index.html"><span class="section-number">3. </span>精排模型</a><i class="material-icons">navigate_next</i>
            <a class="mdl-navigation__link" href="index.html"><span class="section-number">3.2. </span>特征交叉</a><i class="material-icons">navigate_next</i>
            <a class="mdl-navigation__link is-active"><span class="section-number">3.2.2. </span>高阶特征交叉</a>
        </nav>
        <div class="mdl-layout-spacer"></div>
        <nav class="mdl-navigation">
        
<form class="form-inline pull-sm-right" action="../../search.html" method="get">
      <div class="mdl-textfield mdl-js-textfield mdl-textfield--expandable mdl-textfield--floating-label mdl-textfield--align-right">
        <label id="quick-search-icon" class="mdl-button mdl-js-button mdl-button--icon"  for="waterfall-exp">
          <i class="material-icons">search</i>
        </label>
        <div class="mdl-textfield__expandable-holder">
          <input class="mdl-textfield__input" type="text" name="q"  id="waterfall-exp" placeholder="Search" />
          <input type="hidden" name="check_keywords" value="yes" />
          <input type="hidden" name="area" value="default" />
        </div>
      </div>
      <div class="mdl-tooltip" data-mdl-for="quick-search-icon">
      Quick search
      </div>
</form>
        
<a id="button-show-source"
    class="mdl-button mdl-js-button mdl-button--icon"
    href="../../_sources/chapter_2_ranking/2.feature_crossing/2.higher_order.rst.txt" rel="nofollow">
  <i class="material-icons">code</i>
</a>
<div class="mdl-tooltip" data-mdl-for="button-show-source">
Show Source
</div>
        </nav>
    </div>
    <div class="mdl-layout__header-row header-links">
      <div class="mdl-layout-spacer"></div>
      <nav class="mdl-navigation">
          
              <a  class="mdl-navigation__link" href="https://funrec-notebooks.s3.eu-west-3.amazonaws.com/fun-rec.zip">
                  <i class="fas fa-download"></i>
                  Jupyter 记事本
              </a>
          
              <a  class="mdl-navigation__link" href="https://github.com/datawhalechina/fun-rec">
                  <i class="fab fa-github"></i>
                  GitHub
              </a>
      </nav>
    </div>
</header><header class="mdl-layout__drawer">
    
          <!-- Title -->
      <span class="mdl-layout-title">
          <a class="title" href="../../index.html">
              <span class="title-text">
                  FunRec 推荐系统
              </span>
          </a>
      </span>
    
    
      <div class="globaltoc">
        <span class="mdl-layout-title toc">Table Of Contents</span>
        
        
            
            <nav class="mdl-navigation">
                <ul>
<li class="toctree-l1"><a class="reference internal" href="../../chapter_preface/index.html">前言</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../chapter_installation/index.html">安装</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../chapter_notation/index.html">符号</a></li>
</ul>
<ul class="current">
<li class="toctree-l1"><a class="reference internal" href="../../chapter_0_introduction/index.html">1. 推荐系统概述</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_0_introduction/1.intro.html">1.1. 推荐系统是什么？</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_0_introduction/2.outline.html">1.2. 本书概览</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../chapter_1_retrieval/index.html">2. 召回模型</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_1_retrieval/1.cf/index.html">2.1. 协同过滤</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../chapter_1_retrieval/1.cf/1.itemcf.html">2.1.1. 基于物品的协同过滤</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../chapter_1_retrieval/1.cf/2.usercf.html">2.1.2. 基于用户的协同过滤</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../chapter_1_retrieval/1.cf/3.mf.html">2.1.3. 矩阵分解</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../chapter_1_retrieval/1.cf/4.summary.html">2.1.4. 总结</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_1_retrieval/2.embedding/index.html">2.2. 向量召回</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../chapter_1_retrieval/2.embedding/1.i2i.html">2.2.1. I2I召回</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../chapter_1_retrieval/2.embedding/2.u2i.html">2.2.2. U2I召回</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../chapter_1_retrieval/2.embedding/3.summary.html">2.2.3. 总结</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_1_retrieval/3.sequence/index.html">2.3. 序列召回</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../chapter_1_retrieval/3.sequence/1.user_interests.html">2.3.1. 深化用户兴趣表示</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../chapter_1_retrieval/3.sequence/2.generateive_recall.html">2.3.2. 生成式召回方法</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../chapter_1_retrieval/3.sequence/3.summary.html">2.3.3. 总结</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 current"><a class="reference internal" href="../index.html">3. 精排模型</a><ul class="current">
<li class="toctree-l2"><a class="reference internal" href="../1.wide_and_deep.html">3.1. 记忆与泛化</a></li>
<li class="toctree-l2 current"><a class="reference internal" href="index.html">3.2. 特征交叉</a><ul class="current">
<li class="toctree-l3"><a class="reference internal" href="1.second_order.html">3.2.1. 二阶特征交叉</a></li>
<li class="toctree-l3 current"><a class="current reference internal" href="#">3.2.2. 高阶特征交叉</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../3.sequence.html">3.3. 序列建模</a></li>
<li class="toctree-l2"><a class="reference internal" href="../4.multi_objective/index.html">3.4. 多目标建模</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../4.multi_objective/1.arch.html">3.4.1. 基础结构演进</a></li>
<li class="toctree-l3"><a class="reference internal" href="../4.multi_objective/2.dependency_modeling.html">3.4.2. 任务依赖建模</a></li>
<li class="toctree-l3"><a class="reference internal" href="../4.multi_objective/3.multi_loss_optim.html">3.4.3. 多目标损失融合</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../5.multi_scenario/index.html">3.5. 多场景建模</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../5.multi_scenario/1.multi_tower.html">3.5.1. 多塔结构</a></li>
<li class="toctree-l3"><a class="reference internal" href="../5.multi_scenario/2.dynamic_weight.html">3.5.2. 动态权重建模</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../chapter_3_rerank/index.html">4. 重排模型</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_3_rerank/1.greedy.html">4.1. 基于贪心的重排</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_3_rerank/2.personalized.html">4.2. 基于个性化的重排</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_3_rerank/3.summary.html">4.3. 本章小结</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../chapter_4_trends/index.html">5. 难点及热点研究</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_4_trends/1.debias.html">5.1. 模型去偏</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_4_trends/2.cold_start.html">5.2. 冷启动问题</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_4_trends/3.generative.html">5.3. 生成式推荐</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_4_trends/4.summary.html">5.4. 本章小结</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../chapter_5_projects/index.html">6. 项目实践</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_5_projects/1.understanding.html">6.1. 赛题理解</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_5_projects/2.baseline.html">6.2. Baseline</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_5_projects/3.analysis.html">6.3. 数据分析</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_5_projects/4.recall.html">6.4. 多路召回</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_5_projects/5.feature_engineering.html">6.5. 特征工程</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_5_projects/6.ranking.html">6.6. 排序模型</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../chapter_appendix/index.html">7. Appendix</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_appendix/word2vec.html">7.1. Word2vec</a></li>
</ul>
</li>
</ul>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../chapter_references/references.html">参考文献</a></li>
</ul>

            </nav>
        
        </div>
    
</header>
        <main class="mdl-layout__content" tabIndex="0">

	<script type="text/javascript" src="../../_static/sphinx_materialdesign_theme.js"></script>

    <div class="document">
        <div class="page-content" role="main">
        
  <section id="higher-order-feature-crossing">
<span id="id1"></span><h1><span class="section-number">3.2.2. </span>高阶特征交叉<a class="headerlink" href="#higher-order-feature-crossing" title="Permalink to this heading">¶</a></h1>
<p>前面我们学了各种二阶特征交叉技术，这些模型能够明确地处理二阶交互，但对于更高阶的特征组合，它们主要靠深度神经网络来学习。深度网络虽然能学到高阶交互，但我们不知道它具体学到了什么，也不清楚这些交互是怎么影响预测的。所以研究者们想：能不能像
FM 处理二阶交叉那样，设计出能够<strong>明确捕捉高阶交叉</strong>的网络结构？</p>
<section id="dcn">
<h2><span class="section-number">3.2.2.1. </span>DCN: 残差连接的高阶交叉<a class="headerlink" href="#dcn" title="Permalink to this heading">¶</a></h2>
<p>为了解决上述问题，Deep &amp; Cross Network (DCN) <span id="id2">(<a class="reference internal" href="../../chapter_references/references.html#id43" title="Wang, R., Fu, B., Fu, G., &amp; Wang, M. (2017). Deep &amp; cross network for ad click predictions. Proceedings of the ADKDD'17 (pp. 1–7).">Wang <em>et al.</em>, 2017</a>)</span>
用Cross Network替代了Wide &amp; Deep模型中的Wide部分。Cross
Network在每一层都会与原始输入特征做交叉，这样就能明确地学到高阶特征交互，减少了手工特征工程的工作。</p>
<figure class="align-default" id="id5">
<span id="dcn-model-structure"></span><a class="reference internal image-reference" href="../../_images/deepcross.png"><img alt="../../_images/deepcross.png" src="../../_images/deepcross.png" style="width: 400px;" /></a>
<figcaption>
<p><span class="caption-number">图3.2.7 </span><span class="caption-text">DCN模型结构</span><a class="headerlink" href="#id5" title="Permalink to this image">¶</a></p>
</figcaption>
</figure>
<p>DCN的整体结构由并行的Cross Network和Deep
Network两部分组成，它们共享相同的Embedding层输入。首先，模型将稀疏的类别特征转换为低维稠密的Embedding向量，并与数值型特征拼接在一起，形成统一的输入向量
<span class="math notranslate nohighlight">\(\mathbf{x}_0\)</span>。</p>
<div class="math notranslate nohighlight" id="equation-chapter-2-ranking-2-feature-crossing-2-higher-order-0">
<span class="eqno">(3.2.27)<a class="headerlink" href="#equation-chapter-2-ranking-2-feature-crossing-2-higher-order-0" title="Permalink to this equation">¶</a></span>\[\mathbf{x}_0 = [\mathbf{x}_{\text{embed}, 1}^T, \ldots, \mathbf{x}_{\text{embed}, k}^T, \mathbf{x}_{\text{dense}}^T]\]</div>
<p>这个初始向量 <span class="math notranslate nohighlight">\(\mathbf{x}_0\)</span> 会被同时送入Cross Network和Deep
Network。</p>
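<p>式 (3.2.27) 的拼接可以用一个最小的 NumPy 示意来表达（特征域数与维度均为假设的玩具取值，并非原文实现）：</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# 假设有 k=3 个类别特征域，每个 Embedding 维度为 4，另有 2 个数值特征
embeds = [rng.normal(size=4) for _ in range(3)]   # x_embed,1 ... x_embed,k
dense = rng.normal(size=2)                        # x_dense

# 按式 (3.2.27) 拼接成统一的输入向量 x_0
x0 = np.concatenate(embeds + [dense])
print(x0.shape)  # (14,)
```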
<p>Cross
Network是DCN的核心创新。它由多个交叉层堆叠而成，其精妙之处在于每一层的计算都保留了与原始输入
<span class="math notranslate nohighlight">\(\mathbf{x}_0\)</span> 的直接交互。第 <span class="math notranslate nohighlight">\(l+1\)</span> 层的计算公式如下：</p>
<div class="math notranslate nohighlight" id="equation-chapter-2-ranking-2-feature-crossing-2-higher-order-1">
<span class="eqno">(3.2.28)<a class="headerlink" href="#equation-chapter-2-ranking-2-feature-crossing-2-higher-order-1" title="Permalink to this equation">¶</a></span>\[\mathbf{x}_{l+1} = \mathbf{x}_0 \mathbf{x}_l^T \mathbf{w}_l + \mathbf{b}_l + \mathbf{x}_l\]</div>
<p>其中<span class="math notranslate nohighlight">\(\mathbf{x}_l, \mathbf{x}_{l+1} \in \mathbb{R}^d\)</span> 分别是第
<span class="math notranslate nohighlight">\(l\)</span> 层和第 <span class="math notranslate nohighlight">\(l+1\)</span>
层的输出列向量，<span class="math notranslate nohighlight">\(\mathbf{x}_0 \in \mathbb{R}^d\)</span> 是Cross
Network的初始输入向量，<span class="math notranslate nohighlight">\(\mathbf{w}_l, \mathbf{b}_l \in \mathbb{R}^d\)</span>
分别是第 <span class="math notranslate nohighlight">\(l\)</span> 层的权重和偏置列向量。</p>
<figure class="align-default" id="id6">
<span id="cross-network-structure"></span><a class="reference internal image-reference" href="../../_images/cross_network.png"><img alt="../../_images/cross_network.png" src="../../_images/cross_network.png" style="width: 300px;" /></a>
<figcaption>
<p><span class="caption-number">图3.2.8 </span><span class="caption-text">Cross Network</span><a class="headerlink" href="#id6" title="Permalink to this image">¶</a></p>
</figcaption>
</figure>
<p>这个结构带有残差连接的形式：每一层都在上一层输出 <span class="math notranslate nohighlight">\(\mathbf{x}_l\)</span>
的基础上，加上交叉项
<span class="math notranslate nohighlight">\(\mathbf{x}_0 \mathbf{x}_l^T \mathbf{w}_l\)</span> 和偏置项
<span class="math notranslate nohighlight">\(\mathbf{b}_l\)</span>。交叉项是关键：它让原始输入 <span class="math notranslate nohighlight">\(\mathbf{x}_0\)</span>
与当前层输入 <span class="math notranslate nohighlight">\(\mathbf{x}_l\)</span>
做特征交叉，层数越深，<strong>特征交叉的阶数就越高</strong>。比如第一层（<span class="math notranslate nohighlight">\(l=0\)</span>）的输出 <span class="math notranslate nohighlight">\(\mathbf{x}_1\)</span>
包含二阶交叉项；第二层（<span class="math notranslate nohighlight">\(l=1\)</span>）时，<span class="math notranslate nohighlight">\(\mathbf{x}_1\)</span>
本身已含二阶信息，再与 <span class="math notranslate nohighlight">\(\mathbf{x}_0\)</span>
交叉，<span class="math notranslate nohighlight">\(\mathbf{x}_2\)</span> 就包含三阶交叉项。因此堆叠
<span class="math notranslate nohighlight">\(L\)</span> 层的Cross
Network能显式建模至多 <span class="math notranslate nohighlight">\(L+1\)</span>
阶的特征交叉，且参数量只与输入维度成线性关系，非常高效。</p>
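<p>上述“层数越深、交叉阶数越高”的性质可以用一个玩具规模的 NumPy 示意来验证（权重与偏置为手工设定的假设取值，并非原文实现）：</p>

```python
import numpy as np

def cross_layer(x0, xl, w, b):
    # x_{l+1} = x_0 * (x_l^T w) + b + x_l（x_l^T w 是标量，逐元素广播相乘）
    return x0 * (xl @ w) + b + xl

x0 = np.array([1.0, 2.0])
w = np.array([1.0, 0.0])   # 玩具权重，仅作演示
b = np.zeros(2)

x1 = cross_layer(x0, x0, w, b)   # 含二阶交叉项
x2 = cross_layer(x0, x1, w, b)   # 含三阶交叉项
print(x1, x2)  # [2. 4.] [4. 8.]
```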
<p>与Cross Network并行的Deep
Network部分是一个标准的全连接神经网络，用于隐式地学习高阶非线性关系，其结构与我们熟悉的DeepFM中的DNN部分类似。最后，模型将Cross
Network的输出 <span class="math notranslate nohighlight">\(\mathbf{x}_{L_1}\)</span> 和Deep Network的输出
<span class="math notranslate nohighlight">\(\mathbf{h}_{L_2}\)</span>
拼接起来，通过一个逻辑回归层得到最终的预测概率。</p>
<div class="math notranslate nohighlight" id="equation-chapter-2-ranking-2-feature-crossing-2-higher-order-2">
<span class="eqno">(3.2.29)<a class="headerlink" href="#equation-chapter-2-ranking-2-feature-crossing-2-higher-order-2" title="Permalink to this equation">¶</a></span>\[\mathbf{p} = \sigma([\mathbf{x}_{L_1}^T, \mathbf{h}_{L_2}^T] \mathbf{w}_{\text{logits}})\]</div>
<p>DCN用Cross
Network明确地学习高阶特征交叉，再配合DNN学习复杂的非线性关系，这样就能更好地处理特征组合问题。</p>
<p><strong>核心代码</strong></p>
<p>DCN的核心在于Cross Network的交叉层计算。每一层都保持与原始输入
<span class="math notranslate nohighlight">\(\mathbf{x}_0\)</span> 的交叉：</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span># Cross Network的交叉层：x_{l+1} = x_0 * (x_l^T * w_l) + b_l + x_l
# 输入 x_0: [batch_size, feature_dim]
# 注意：公式中每一层有独立的权重和偏置，这里用列表 w[i]、b[i] 表示

x_l = x_0  # 初始化为原始输入
for i in range(num_cross_layers):
    # 计算 x_l^T * w_l：每个样本得到一个标量
    xlw = tf.matmul(x_l, w[i])  # [B, 1]

    # 计算 x_0 * (x_l^T * w_l)：交叉项（标量广播到每个维度）
    cross_term = tf.multiply(x_0, xlw)  # [B, D]

    # 残差连接：x_{l+1} = cross_term + b_l + x_l
    x_l = cross_term + b[i] + x_l  # [B, D]
</pre></div>
</div>
<p>这个设计的巧妙之处在于：通过残差连接保留了原始信息，通过与
<span class="math notranslate nohighlight">\(\mathbf{x}_0\)</span>
的持续交叉实现了高阶特征组合，且参数量仅与输入维度线性相关。</p>
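<p>“参数量仅与输入维度线性相关”可以直接算出来：每个交叉层只有 <span class="math notranslate nohighlight">\(\mathbf{w}_l, \mathbf{b}_l \in \mathbb{R}^d\)</span>，共 <span class="math notranslate nohighlight">\(2d\)</span> 个参数。下面的小示意（取值为假设）与同深度的全连接层做了对比：</p>

```python
d, L = 256, 3                   # 输入维度与交叉层数（示例取值）
cross_params = L * 2 * d        # 每层 w_l、b_l 各 d 个参数
dense_params = L * (d * d + d)  # 对比：同深度、同宽度全连接层的参数量
print(cross_params, dense_params)  # 1536 197376
```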
</section>
<section id="xdeepfm">
<h2><span class="section-number">3.2.2.2. </span>xDeepFM: 向量级别的特征交互<a class="headerlink" href="#xdeepfm" title="Permalink to this heading">¶</a></h2>
<p>DCN虽然能明确地学习高阶特征交叉，但它是在 <strong>元素级别(bit-wise)</strong>
上做交叉的。也就是说，Embedding向量中的每个元素都单独和其他特征的元素交互，这样就把Embedding向量拆散了，没有把它当作一个完整的特征来看待。为了解决这个问题，xDeepFM提出了压缩交互网络（Compressed
Interaction Network, CIN） <span id="id3">(<a class="reference internal" href="../../chapter_references/references.html#id51" title="Lian, J., Zhou, X., Zhang, F., Chen, Z., Xie, X., &amp; Sun, G. (2018). Xdeepfm: combining explicit and implicit feature interactions for recommender systems. Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery &amp; data mining (pp. 1754–1763).">Lian <em>et al.</em>, 2018</a>)</span> ，改为在
<strong>向量级别(vector-wise)</strong>
上做特征交互，这样更符合我们的直觉。xDeepFM包含三个部分：线性部分、DNN部分（隐式高阶交叉）和CIN网络（显式高阶交叉），最后把三部分的输出合并得到预测结果。</p>
<figure class="align-default" id="id7">
<span id="xdeepfm-architecture"></span><a class="reference internal image-reference" href="../../_images/xdeepfm.png"><img alt="../../_images/xdeepfm.png" src="../../_images/xdeepfm.png" style="width: 400px;" /></a>
<figcaption>
<p><span class="caption-number">图3.2.9 </span><span class="caption-text">xDeepFM模型架构</span><a class="headerlink" href="#id7" title="Permalink to this image">¶</a></p>
</figcaption>
</figure>
<p>CIN的设计目标是实现向量级别的显式高阶交互，同时控制网络复杂度。它的输入是一个<span class="math notranslate nohighlight">\(m \times D\)</span>的矩阵
<span class="math notranslate nohighlight">\(\mathbf{X}_0\)</span>，其中 <span class="math notranslate nohighlight">\(m\)</span>
是特征域（Field）的数量，<span class="math notranslate nohighlight">\(D\)</span> 是Embedding的维度，矩阵的第
<span class="math notranslate nohighlight">\(i\)</span> 行就是第 <span class="math notranslate nohighlight">\(i\)</span> 个特征域的Embedding向量
<span class="math notranslate nohighlight">\(\mathbf{e}_i\)</span>。</p>
<p>CIN的计算过程在每一层都分为两步。在计算第 <span class="math notranslate nohighlight">\(k\)</span> 层的输出
<span class="math notranslate nohighlight">\(\mathbf{X}_k\)</span> 时，它依赖于上一层的输出 <span class="math notranslate nohighlight">\(\mathbf{X}_{k-1}\)</span>
和最原始的输入 <span class="math notranslate nohighlight">\(\mathbf{X}_0\)</span>。</p>
<p>第一步，模型计算出上一层输出的 <span class="math notranslate nohighlight">\(H_{k-1}\)</span> 个向量与原始输入层的
<span class="math notranslate nohighlight">\(m\)</span>
个向量之间的所有成对交互，生成一个中间结果。具体来说，是通过哈达玛积（Hadamard
product）<span class="math notranslate nohighlight">\(\circ\)</span> 来实现的。这个操作会产生
<span class="math notranslate nohighlight">\(H_{k-1} \times m\)</span> 个交互向量，每个向量的维度仍然是 <span class="math notranslate nohighlight">\(D\)</span>。</p>
<p>第二步，为了生成第 <span class="math notranslate nohighlight">\(k\)</span> 层的第 <span class="math notranslate nohighlight">\(h\)</span> 个新特征向量
<span class="math notranslate nohighlight">\(\mathbf{X}_{h,*}^k\)</span>，模型对上一步产生的所有交互向量进行加权求和。这个过程可以看作是对所有潜在的交叉特征进行一次“压缩”或“提炼”。</p>
<p>综合起来，其核心计算公式如下：</p>
<div class="math notranslate nohighlight" id="equation-chapter-2-ranking-2-feature-crossing-2-higher-order-3">
<span class="eqno">(3.2.30)<a class="headerlink" href="#equation-chapter-2-ranking-2-feature-crossing-2-higher-order-3" title="Permalink to this equation">¶</a></span>\[\mathbf{X}_{h,*}^k = \sum_{i=1}^{H_{k-1}} \sum_{j=1}^{m} \mathbf{W}_{i,j}^{k,h} (\mathbf{X}_{i,*}^{k-1} \circ \mathbf{X}_{j,*}^0)\]</div>
<p>其中：</p>
<ul class="simple">
<li><p><span class="math notranslate nohighlight">\(\mathbf{X}_k \in \mathbb{R}^{H_k \times D}\)</span> 是CIN第 <span class="math notranslate nohighlight">\(k\)</span>
层的输出，可以看作是一个包含了 <span class="math notranslate nohighlight">\(H_k\)</span>
个特征向量的集合，称为“特征图”。<span class="math notranslate nohighlight">\(H_k\)</span> 是第 <span class="math notranslate nohighlight">\(k\)</span>
层特征图的数量。</p></li>
<li><p><span class="math notranslate nohighlight">\(\mathbf{X}_{i,*}^{k-1}\)</span> 是第 <span class="math notranslate nohighlight">\(k-1\)</span> 层输出的第 <span class="math notranslate nohighlight">\(i\)</span>
个 <span class="math notranslate nohighlight">\(D\)</span> 维向量。</p></li>
<li><p><span class="math notranslate nohighlight">\(\mathbf{X}_{j,*}^0\)</span> 是原始输入矩阵的第 <span class="math notranslate nohighlight">\(j\)</span> 个 <span class="math notranslate nohighlight">\(D\)</span>
维向量（即第 <span class="math notranslate nohighlight">\(j\)</span> 个特征域的Embedding）。</p></li>
<li><p><span class="math notranslate nohighlight">\(\circ\)</span> 是哈达玛积，它实现了<strong>向量级别的交互</strong>，保留了
<span class="math notranslate nohighlight">\(D\)</span> 维的向量结构。</p></li>
<li><p><span class="math notranslate nohighlight">\(\mathbf{W}^{k,h} \in \mathbb{R}^{H_{k-1} \times m}\)</span>
是第 <span class="math notranslate nohighlight">\(k\)</span> 层第 <span class="math notranslate nohighlight">\(h\)</span>
个特征图对应的参数矩阵，其元素 <span class="math notranslate nohighlight">\(\mathbf{W}_{i,j}^{k,h}\)</span> 为每一对
<span class="math notranslate nohighlight">\((\mathbf{X}_{i,*}^{k-1}, \mathbf{X}_{j,*}^0)\)</span>
产生的交互向量提供一个权重，通过加权求和的方式，将
<span class="math notranslate nohighlight">\(H_{k-1} \times m\)</span> 个交互向量的信息“压缩”成一个全新的 <span class="math notranslate nohighlight">\(D\)</span>
维向量 <span class="math notranslate nohighlight">\(\mathbf{X}_{h,*}^k\)</span>。</p></li>
</ul>
<p>这个过程清晰地展示了特征交互是如何在向量级别上逐层发生的。第 <span class="math notranslate nohighlight">\(k\)</span>
层的输出 <span class="math notranslate nohighlight">\(\mathbf{X}_k\)</span> 包含了所有 <span class="math notranslate nohighlight">\(k+1\)</span> 阶的特征交互信息。</p>
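<p>式 (3.2.30) 的两步计算可以用 <code>einsum</code> 写成一个紧凑的 NumPy 示意（各维度取值为玩具假设，权重随机初始化，并非原文实现）：</p>

```python
import numpy as np

rng = np.random.default_rng(0)
B, m, D = 2, 4, 8        # batch 大小、特征域数、Embedding 维度
H_prev, H_k = 4, 5       # 上一层与本层的特征图数量

X0 = rng.normal(size=(B, m, D))        # 原始输入 X^0
Xk_1 = rng.normal(size=(B, H_prev, D)) # 上一层输出 X^{k-1}
W = rng.normal(size=(H_k, H_prev, m))  # 每个输出特征图一个 H_{k-1} x m 权重矩阵

# 第一步：哈达玛积，得到 H_{k-1} x m 个 D 维交互向量
Z = np.einsum('bhd,bmd->bhmd', Xk_1, X0)  # [B, H_prev, m, D]
# 第二步：按 W 加权求和，把交互向量“压缩”成 H_k 个新的 D 维向量
Xk = np.einsum('ohm,bhmd->bod', W, Z)     # [B, H_k, D]
print(Xk.shape)  # (2, 5, 8)
```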
<p>在计算出每一层（从第<span class="math notranslate nohighlight">\(1\)</span>层到第<span class="math notranslate nohighlight">\(T\)</span>层）的特征图
<span class="math notranslate nohighlight">\(\mathbf{X}_k\)</span> 后，CIN会对每个特征图 <span class="math notranslate nohighlight">\(\mathbf{X}_k\)</span>
的所有向量（<span class="math notranslate nohighlight">\(H_k\)</span>个）在维度 <span class="math notranslate nohighlight">\(D\)</span> 上进行求和池化（Sum
Pooling），得到一个池化后的向量
<span class="math notranslate nohighlight">\(\mathbf{p}_k \in \mathbb{R}^{H_k}\)</span>。最后，将所有层的池化向量拼接起来，形成CIN部分的最终输出。</p>
<div class="math notranslate nohighlight" id="equation-chapter-2-ranking-2-feature-crossing-2-higher-order-4">
<span class="eqno">(3.2.31)<a class="headerlink" href="#equation-chapter-2-ranking-2-feature-crossing-2-higher-order-4" title="Permalink to this equation">¶</a></span>\[\mathbf{p}^+ = [\mathbf{p}_1, \mathbf{p}_2, \ldots, \mathbf{p}_T]\]</div>
<p>This output <span class="math notranslate nohighlight">\(\mathbf{p}^+\)</span> captures all explicit, vector-wise cross-feature information from order two up to order <span class="math notranslate nohighlight">\(T+1\)</span>.
Finally, xDeepFM combines the outputs of the linear part, the DNN part, and the CIN part, and produces the final prediction through a sigmoid function.</p>
<div class="math notranslate nohighlight" id="equation-chapter-2-ranking-2-feature-crossing-2-higher-order-5">
<span class="eqno">(3.2.32)<a class="headerlink" href="#equation-chapter-2-ranking-2-feature-crossing-2-higher-order-5" title="Permalink to this equation">¶</a></span>\[\hat{y} = \sigma(\mathbf{w}_{\text{linear}}^T \mathbf{a} + \mathbf{w}_{\text{dnn}}^T \mathbf{x}_{\text{dnn}}^k + \mathbf{w}_{\text{cin}}^T \mathbf{p}^+ + \mathbf{b})\]</div>
<p>Here <span class="math notranslate nohighlight">\(\mathbf{a}\)</span>
denotes the raw features, <span class="math notranslate nohighlight">\(\mathbf{x}_{\text{dnn}}^k\)</span>
the output of the DNN, and <span class="math notranslate nohighlight">\(\mathbf{b}\)</span> a learnable parameter.</p>
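<p>As a hedged sketch of how the three parts are combined, with all names and dimensions below hypothetical rather than any library's actual API:</p>

```python
import numpy as np

# Sketch of the xDeepFM output layer: the raw (linear) features, the DNN
# output, and the CIN output p^+ are combined with learnable weight
# vectors and a bias, then squashed by a sigmoid.
rng = np.random.default_rng(1)
a = rng.normal(size=10)          # raw features
x_dnn = rng.normal(size=8)       # last DNN hidden layer
p_plus = rng.normal(size=7)      # concatenated CIN poolings

w_lin = rng.normal(size=10)      # learnable weights
w_dnn = rng.normal(size=8)
w_cin = rng.normal(size=7)
b = 0.1                          # learnable bias

logit = w_lin @ a + w_dnn @ x_dnn + w_cin @ p_plus + b
y_hat = 1.0 / (1.0 + np.exp(-logit))  # sigmoid
print(0.0 < y_hat < 1.0)              # True
```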
<p>Through the CIN network, <strong>xDeepFM unifies explicit vector-wise interaction with implicit bit-wise interaction</strong>, offering a better solution for higher-order feature interaction.</p>
<p><strong>Core Code</strong></p>
<p>The heart of CIN is the vector-wise interaction computation. Each layer crosses the previous layer's output with the original input via the Hadamard product:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="c1"># Vector-wise interaction of a CIN layer</span>
<span class="c1"># inputs: [batch_size, field_num, embed_dim]</span>

<span class="n">cin_layers</span> <span class="o">=</span> <span class="p">[</span><span class="n">inputs</span><span class="p">]</span>  <span class="c1"># X^0: the original input</span>
<span class="n">pooled_outputs</span> <span class="o">=</span> <span class="p">[]</span>

<span class="k">for</span> <span class="n">layer_size</span> <span class="ow">in</span> <span class="n">cin_layer_sizes</span><span class="p">:</span>
    <span class="c1"># Get the previous layer&#39;s output X^{k-1} and the original input X^0</span>
    <span class="n">x_k_minus_1</span> <span class="o">=</span> <span class="n">cin_layers</span><span class="p">[</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span>  <span class="c1"># [B, H_{k-1}, D]</span>
    <span class="n">x_0</span> <span class="o">=</span> <span class="n">cin_layers</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>  <span class="c1"># [B, m, D]</span>

    <span class="c1"># Expand dims so that broadcasting can be used</span>
    <span class="n">x_k_minus_1_expand</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">expand_dims</span><span class="p">(</span><span class="n">x_k_minus_1</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span>  <span class="c1"># [B, H_{k-1}, 1, D]</span>
    <span class="n">x_0_expand</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">expand_dims</span><span class="p">(</span><span class="n">x_0</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>  <span class="c1"># [B, 1, m, D]</span>

    <span class="c1"># Vector-wise Hadamard-product interaction</span>
    <span class="n">z_k</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">multiply</span><span class="p">(</span><span class="n">x_k_minus_1_expand</span><span class="p">,</span> <span class="n">x_0_expand</span><span class="p">)</span>  <span class="c1"># [B, H_{k-1}, m, D]</span>

    <span class="c1"># Compression: a linear map reduces the H_{k-1}*m interaction vectors to H_k</span>
    <span class="n">z_k_reshape</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">z_k</span><span class="p">,</span> <span class="p">[</span><span class="n">batch_size</span><span class="p">,</span> <span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="n">embed_dim</span><span class="p">])</span>  <span class="c1"># [B, H_{k-1}*m, D]</span>
    <span class="n">x_k</span> <span class="o">=</span> <span class="n">dense_layer</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">transpose</span><span class="p">(</span><span class="n">z_k_reshape</span><span class="p">,</span> <span class="p">[</span><span class="mi">0</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">1</span><span class="p">]))</span>  <span class="c1"># [B, D, H_k]</span>
    <span class="n">x_k</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">transpose</span><span class="p">(</span><span class="n">x_k</span><span class="p">,</span> <span class="p">[</span><span class="mi">0</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">1</span><span class="p">])</span>  <span class="c1"># [B, H_k, D]</span>

    <span class="n">cin_layers</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">x_k</span><span class="p">)</span>

    <span class="c1"># Sum pooling: collapse the embedding dimension to a scalar per vector</span>
    <span class="n">pooled_outputs</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">reduce_sum</span><span class="p">(</span><span class="n">x_k</span><span class="p">,</span> <span class="n">axis</span><span class="o">=-</span><span class="mi">1</span><span class="p">))</span>  <span class="c1"># [B, H_k]</span>

<span class="c1"># Concatenate the outputs of all layers</span>
<span class="n">cin_output</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">concat</span><span class="p">(</span><span class="n">pooled_outputs</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>  <span class="c1"># [B, sum(H_k)]</span>
</pre></div>
</div>
<p>By preserving the vector structure during interaction, CIN achieves explicit higher-order feature combinations while avoiding an explosive growth in the number of parameters.</p>
</section>
<section id="autoint">
<h2><span class="section-number">3.2.2.3. </span>AutoInt: Adaptive Interaction via Self-Attention<a class="headerlink" href="#autoint" title="Permalink to this heading">¶</a></h2>
<p>DCN performs bit-wise higher-order interaction with residual connections, and xDeepFM performs vector-wise higher-order interaction with its CIN network, but both share a limitation: the interaction pattern is fixed. Every DCN layer crosses with the original input, and the CIN network likewise interacts vectors in a prescribed way. So, <strong>can we design a more flexible higher-order interaction method that lets the model itself decide which features should interact, and how strongly?</strong></p>
<p>AutoInt (Automatic Feature Interaction) <span id="id4">(<a class="reference internal" href="../../chapter_references/references.html#id44" title="Song, W., Shi, C., Xiao, Z., Duan, Z., Xu, Y., Zhang, M., &amp; Tang, J. (2019). Autoint: automatic feature interaction learning via self-attentive neural networks. Proceedings of the 28th ACM international conference on information and knowledge management (pp. 1161–1170).">Song <em>et al.</em>, 2019</a>)</span>
uses the Transformer&#39;s self-attention mechanism to <strong>let the model automatically learn feature interactions of every order</strong>. Unlike the previous methods, AutoInt imposes no fixed interaction pattern; instead it learns the best feature interaction combinations during training.</p>
<figure class="align-default" id="id8">
<span id="autoint-overview"></span><a class="reference internal image-reference" href="../../_images/autoint_overview.png"><img alt="../../_images/autoint_overview.png" src="../../_images/autoint_overview.png" style="width: 400px;" /></a>
<figcaption>
<p><span class="caption-number">Fig. 3.2.10 </span><span class="caption-text">Overview of the AutoInt model</span><a class="headerlink" href="#id8" title="Permalink to this image">¶</a></p>
</figcaption>
</figure>
<p>The overall architecture of AutoInt is relatively simple: it converts every input feature (categorical or numerical) into an embedding vector of the same dimension,
<span class="math notranslate nohighlight">\(\mathbf{e}_m \in \mathbb{R}^d\)</span>, where <span class="math notranslate nohighlight">\(m\)</span> indexes the <span class="math notranslate nohighlight">\(m\)</span>-th
feature field. These embedding vectors form the input to the self-attention network, analogous to the token embeddings in a Transformer.</p>
<p><strong>Multi-Head Self-Attention</strong></p>
<p>The core of AutoInt is its interaction layer, built from multi-head self-attention. For any two feature embeddings
<span class="math notranslate nohighlight">\(\mathbf{e}_m\)</span> and
<span class="math notranslate nohighlight">\(\mathbf{e}_k\)</span>, the self-attention mechanism computes a relevance score between them. This happens independently in each attention head
<span class="math notranslate nohighlight">\(h\)</span>. Concretely, the relevance score
<span class="math notranslate nohighlight">\(\alpha_{m,k}^{(h)}\)</span> of features <span class="math notranslate nohighlight">\(m\)</span> and
<span class="math notranslate nohighlight">\(k\)</span> in the <span class="math notranslate nohighlight">\(h\)</span>-th head is computed as:</p>
<div class="math notranslate nohighlight" id="equation-chapter-2-ranking-2-feature-crossing-2-higher-order-6">
<span class="eqno">(3.2.33)<a class="headerlink" href="#equation-chapter-2-ranking-2-feature-crossing-2-higher-order-6" title="Permalink to this equation">¶</a></span>\[\alpha_{m,k}^{(h)} = \frac{\exp(\psi^{(h)}(\mathbf{e}_m, \mathbf{e}_k))}{\sum_{l=1}^{M}\exp(\psi^{(h)}(\mathbf{e}_m, \mathbf{e}_l))}\]</div>
<p>Here <span class="math notranslate nohighlight">\(M\)</span> is the total number of feature fields, and
<span class="math notranslate nohighlight">\(\psi^{(h)}(\mathbf{e}_m, \mathbf{e}_k)\)</span>
is a function that measures the similarity of two embedding vectors, typically the inner product of their projections (dot-product attention):</p>
<div class="math notranslate nohighlight" id="equation-chapter-2-ranking-2-feature-crossing-2-higher-order-7">
<span class="eqno">(3.2.34)<a class="headerlink" href="#equation-chapter-2-ranking-2-feature-crossing-2-higher-order-7" title="Permalink to this equation">¶</a></span>\[\psi^{(h)}\left(\mathbf{e}_{\mathbf{m}}, \mathbf{e}_{\mathbf{k}}\right)=\left\langle\mathbf{W}_{\text {Query }}^{(h)} \mathbf{e}_{\mathbf{m}}, \mathbf{W}_{\text {Key }}^{(h)} \mathbf{e}_{\mathbf{k}}\right\rangle\]</div>
<p>where
<span class="math notranslate nohighlight">\(\mathbf{W}_{\text{Query}}^{(h)} \in \mathbb{R}^{d' \times d}\)</span> and
<span class="math notranslate nohighlight">\(\mathbf{W}_{\text{Key}}^{(h)} \in \mathbb{R}^{d' \times d}\)</span>
are learnable projection matrices that map the original embeddings into the query and key spaces, respectively, and <span class="math notranslate nohighlight">\(d'\)</span>
is the projected dimension.</p>
<p>After computing the relevance scores between all feature pairs, the model uses them to take a weighted sum of all features&#39; value vectors, producing for feature
<span class="math notranslate nohighlight">\(\mathbf{e}_m\)</span> a new representation
<span class="math notranslate nohighlight">\(\mathbf{\tilde{e}}_m^{(h)}\)</span> that fuses information from the other features:</p>
<div class="math notranslate nohighlight" id="equation-chapter-2-ranking-2-feature-crossing-2-higher-order-8">
<span class="eqno">(3.2.35)<a class="headerlink" href="#equation-chapter-2-ranking-2-feature-crossing-2-higher-order-8" title="Permalink to this equation">¶</a></span>\[\mathbf{\tilde{e}}_m^{(h)} = \sum_{k=1}^{M} \alpha_{m,k}^{(h)} (\mathbf{W}_{\text{Value}}^{(h)} \mathbf{e}_k)\]</div>
<p>where
<span class="math notranslate nohighlight">\(\mathbf{W}_{\text{Value}}^{(h)} \in \mathbb{R}^{d' \times d}\)</span>
is likewise a learnable projection matrix. The new representation
<span class="math notranslate nohighlight">\(\mathbf{\tilde{e}}_m^{(h)}\)</span>
is essentially a new combined feature learned adaptively.</p>
<figure class="align-default" id="id9">
<span id="autoint-attention"></span><a class="reference internal image-reference" href="../../_images/autoint_attention.png"><img alt="../../_images/autoint_attention.png" src="../../_images/autoint_attention.png" style="width: 350px;" /></a>
<figcaption>
<p><span class="caption-number">Fig. 3.2.11 </span><span class="caption-text">Illustration of the self-attention mechanism</span><a class="headerlink" href="#id9" title="Permalink to this image">¶</a></p>
</figcaption>
</figure>
<p><strong>Multi-Layer Interaction and Higher-Order Feature Learning</strong></p>
<p>The multi-head mechanism lets the model learn different aspects of feature interaction in parallel across different subspaces. The outputs of all
<span class="math notranslate nohighlight">\(H\)</span> heads are concatenated into a richer feature representation:</p>
<div class="math notranslate nohighlight" id="equation-chapter-2-ranking-2-feature-crossing-2-higher-order-9">
<span class="eqno">(3.2.36)<a class="headerlink" href="#equation-chapter-2-ranking-2-feature-crossing-2-higher-order-9" title="Permalink to this equation">¶</a></span>\[\mathbf{\tilde{e}}_m = \mathbf{\tilde{e}}_m^{(1)} \oplus \mathbf{\tilde{e}}_m^{(2)} \oplus \cdots \oplus \mathbf{\tilde{e}}_m^{(H)}\]</div>
<p>where <span class="math notranslate nohighlight">\(\oplus\)</span>
denotes concatenation. To preserve the original information and stabilize training, AutoInt also introduces a residual connection that combines the newly generated interaction features with the original features:</p>
<div class="math notranslate nohighlight" id="equation-chapter-2-ranking-2-feature-crossing-2-higher-order-10">
<span class="eqno">(3.2.37)<a class="headerlink" href="#equation-chapter-2-ranking-2-feature-crossing-2-higher-order-10" title="Permalink to this equation">¶</a></span>\[\mathbf{e}_m^{\text{Res}}= \text{ReLU}(\mathbf{e}_m + \mathbf{W}_{\text{Res}} \mathbf{\tilde{e}}_m)\]</div>
<p>where <span class="math notranslate nohighlight">\(\mathbf{W}_{\text{Res}}\)</span> is a projection matrix used to match dimensions.</p>
<p><strong>The key innovation of AutoInt lies in how it builds higher-order feature interactions</strong>. By stacking several such interaction layers, AutoInt can explicitly construct feature interactions of arbitrary order: the first layer&#39;s output carries second-order interaction information, the second layer&#39;s output carries third-order information, and so on; each layer&#39;s output represents adaptively learned feature combinations one order higher. Unlike
DCN and xDeepFM, the higher-order interactions in AutoInt are not built from a fixed formula but are decided dynamically by the attention weights, which lets the model learn more flexible and effective interaction patterns.</p>
<p>Finally, the feature representations output by all layers are concatenated and fed into a simple logistic regression layer for the final click-through-rate prediction:</p>
<div class="math notranslate nohighlight" id="equation-chapter-2-ranking-2-feature-crossing-2-higher-order-11">
<span class="eqno">(3.2.38)<a class="headerlink" href="#equation-chapter-2-ranking-2-feature-crossing-2-higher-order-11" title="Permalink to this equation">¶</a></span>\[\hat{y}=\sigma\left(\mathbf{w}^{\mathrm{T}}\left(\mathbf{e}_{1}^{\mathbf{Res}} \oplus \mathbf{e}_{2}^{\mathbf{Res}} \oplus \cdots \oplus \mathbf{e}_{\mathbf{M}}^{\text {Res}}\right)+b\right)\]</div>
<p>One benefit of AutoInt is interpretability: by inspecting the attention weight matrix
<span class="math notranslate nohighlight">\(\alpha^{(h)}\)</span>, we can read off directly which feature combinations the model considers important. Using self-attention for higher-order interaction not only increases the model&#39;s expressive power but also provides a more flexible way to learn.</p>
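<p>The final prediction step in the equation above can be sketched as follows (shapes hypothetical): concatenate the residual representations of all fields and apply a logistic regression layer:</p>

```python
import numpy as np

M, d = 5, 4                          # number of fields, embedding dim
rng = np.random.default_rng(3)
e_res = rng.normal(size=(M, d))      # e_m^Res for every field m

z = e_res.reshape(-1)                # concatenation, length M*d
w = rng.normal(size=M * d)           # learnable weights
b = 0.0                              # learnable bias
y_hat = 1.0 / (1.0 + np.exp(-(w @ z + b)))  # sigmoid
print(0.0 < y_hat < 1.0)             # True
```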
<p><strong>Core Code</strong></p>
<p>The heart of AutoInt is the multi-head self-attention mechanism, which adaptively learns the interaction relations between features:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="c1"># Forward pass of a multi-head self-attention layer</span>
<span class="c1"># inputs: [batch_size, num_features, embed_dim]</span>

<span class="n">head_outputs</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">num_heads</span><span class="p">):</span>
    <span class="c1"># Compute the Query, Key and Value matrices</span>
    <span class="n">query</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">einsum</span><span class="p">(</span><span class="s1">&#39;bfe,ea-&gt;bfa&#39;</span><span class="p">,</span> <span class="n">inputs</span><span class="p">,</span> <span class="n">query_weights</span><span class="p">[</span><span class="n">i</span><span class="p">])</span>  <span class="c1"># [B, N, d&#39;]</span>
    <span class="n">key</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">einsum</span><span class="p">(</span><span class="s1">&#39;bfe,ea-&gt;bfa&#39;</span><span class="p">,</span> <span class="n">inputs</span><span class="p">,</span> <span class="n">key_weights</span><span class="p">[</span><span class="n">i</span><span class="p">])</span>      <span class="c1"># [B, N, d&#39;]</span>
    <span class="n">value</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">einsum</span><span class="p">(</span><span class="s1">&#39;bfe,ea-&gt;bfa&#39;</span><span class="p">,</span> <span class="n">inputs</span><span class="p">,</span> <span class="n">value_weights</span><span class="p">[</span><span class="n">i</span><span class="p">])</span>  <span class="c1"># [B, N, d&#39;]</span>

    <span class="c1"># Attention scores: dot product of Query and Key</span>
    <span class="n">attention_score</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">matmul</span><span class="p">(</span><span class="n">query</span><span class="p">,</span> <span class="n">key</span><span class="p">,</span> <span class="n">transpose_b</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>  <span class="c1"># [B, N, N]</span>

    <span class="c1"># Softmax normalization gives the attention weights</span>
    <span class="n">attention_weights</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">softmax</span><span class="p">(</span><span class="n">attention_score</span><span class="p">,</span> <span class="n">axis</span><span class="o">=-</span><span class="mi">1</span><span class="p">)</span>  <span class="c1"># [B, N, N]</span>

    <span class="c1"># Weighted sum: aggregate the Values with the attention weights</span>
    <span class="n">head_output</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">matmul</span><span class="p">(</span><span class="n">attention_weights</span><span class="p">,</span> <span class="n">value</span><span class="p">)</span>  <span class="c1"># [B, N, d&#39;]</span>
    <span class="n">head_outputs</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">head_output</span><span class="p">)</span>

<span class="c1"># Concatenate the outputs of all attention heads</span>
<span class="n">multi_head_output</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">concat</span><span class="p">(</span><span class="n">head_outputs</span><span class="p">,</span> <span class="n">axis</span><span class="o">=-</span><span class="mi">1</span><span class="p">)</span>  <span class="c1"># [B, N, d&#39;*H]</span>

<span class="c1"># Residual connection: preserve the original information</span>
<span class="n">residual_input</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">tensordot</span><span class="p">(</span><span class="n">inputs</span><span class="p">,</span> <span class="n">residual_weights</span><span class="p">,</span> <span class="n">axes</span><span class="o">=</span><span class="p">[[</span><span class="mi">2</span><span class="p">],</span> <span class="p">[</span><span class="mi">0</span><span class="p">]])</span>
<span class="n">output</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">ReLU</span><span class="p">()(</span><span class="n">multi_head_output</span> <span class="o">+</span> <span class="n">residual_input</span><span class="p">)</span>
</pre></div>
</div>
<p>By stacking several such self-attention layers, AutoInt can explicitly build feature interactions of arbitrary order, with interaction patterns learned entirely from the data.</p>
<p><strong>Code in Practice</strong></p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span><span class="w"> </span><span class="nn">funrec</span><span class="w"> </span><span class="kn">import</span> <span class="n">compare_models</span>

<span class="n">compare_models</span><span class="p">([</span><span class="s1">&#39;dcn&#39;</span><span class="p">,</span> <span class="s1">&#39;xdeepfm&#39;</span><span class="p">,</span> <span class="s1">&#39;autoint&#39;</span><span class="p">])</span>
</pre></div>
</div>
<div class="output highlight-default notranslate"><div class="highlight"><pre><span></span><span class="o">+---------+--------+--------+------------+</span>
<span class="o">|</span> <span class="n">model</span>   <span class="o">|</span>    <span class="n">auc</span> <span class="o">|</span>   <span class="n">gauc</span> <span class="o">|</span>   <span class="n">val_user</span> <span class="o">|</span>
<span class="o">+=========+========+========+============+</span>
<span class="o">|</span> <span class="n">dcn</span>     <span class="o">|</span> <span class="mf">0.5968</span> <span class="o">|</span> <span class="mf">0.574</span>  <span class="o">|</span>        <span class="mi">928</span> <span class="o">|</span>
<span class="o">+---------+--------+--------+------------+</span>
<span class="o">|</span> <span class="n">xdeepfm</span> <span class="o">|</span> <span class="mf">0.5964</span> <span class="o">|</span> <span class="mf">0.5741</span> <span class="o">|</span>        <span class="mi">928</span> <span class="o">|</span>
<span class="o">+---------+--------+--------+------------+</span>
<span class="o">|</span> <span class="n">autoint</span> <span class="o">|</span> <span class="mf">0.5988</span> <span class="o">|</span> <span class="mf">0.571</span>  <span class="o">|</span>        <span class="mi">928</span> <span class="o">|</span>
<span class="o">+---------+--------+--------+------------+</span>
</pre></div>
</div>
</section>
</section>


        </div>
        <div class="side-doc-outline">
            <div class="side-doc-outline--content"> 
<div class="localtoc">
    <p class="caption">
      <span class="caption-text">Table Of Contents</span>
    </p>
    <ul>
<li><a class="reference internal" href="#">3.2.2. Higher-Order Feature Crossing</a><ul>
<li><a class="reference internal" href="#dcn">3.2.2.1. DCN: Higher-Order Crossing via Residual Connections</a></li>
<li><a class="reference internal" href="#xdeepfm">3.2.2.2. xDeepFM: Vector-Wise Feature Interaction</a></li>
<li><a class="reference internal" href="#autoint">3.2.2.3. AutoInt: Adaptive Interaction via Self-Attention</a></li>
</ul>
</li>
</ul>

</div>
            </div>
        </div>

      <div class="clearer"></div>
    </div><div class="pagenation">
     <a id="button-prev" href="1.second_order.html" class="mdl-button mdl-js-button mdl-js-ripple-effect mdl-button--colored" role="button" accesskey="P">
         <i class="pagenation-arrow-L fas fa-arrow-left fa-lg"></i>
         <div class="pagenation-text">
            <span class="pagenation-direction">Previous</span>
            <div>3.2.1. Second-Order Feature Crossing</div>
         </div>
     </a>
     <a id="button-next" href="../3.sequence.html" class="mdl-button mdl-js-button mdl-js-ripple-effect mdl-button--colored" role="button" accesskey="N">
         <i class="pagenation-arrow-R fas fa-arrow-right fa-lg"></i>
        <div class="pagenation-text">
            <span class="pagenation-direction">Next</span>
            <div>3.3. Sequence Modeling</div>
        </div>
     </a>
  </div>
        
        </main>
    </div>
  </body>
</html>