<!DOCTYPE html>

<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" /><meta name="generator" content="Docutils 0.19: https://docutils.sourceforge.io/" />

    <meta http-equiv="x-ua-compatible" content="ie=edge">
    
    <title>3.1. Memorization and Generalization &#8212; FunRec Recommender Systems 0.0.1 documentation</title>

    <link rel="stylesheet" href="../_static/material-design-lite-1.3.0/material.blue-deep_orange.min.css" type="text/css" />
    <link rel="stylesheet" href="../_static/sphinx_materialdesign_theme.css" type="text/css" />
    <link rel="stylesheet" href="../_static/fontawesome/all.css" type="text/css" />
    <link rel="stylesheet" href="../_static/fonts.css" type="text/css" />
    <link rel="stylesheet" type="text/css" href="../_static/pygments.css" />
    <link rel="stylesheet" type="text/css" href="../_static/basic.css" />
    <link rel="stylesheet" type="text/css" href="../_static/d2l.css" />
    <script data-url_root="../" id="documentation_options" src="../_static/documentation_options.js"></script>
    <script src="../_static/jquery.js"></script>
    <script src="../_static/underscore.js"></script>
    <script src="../_static/_sphinx_javascript_frameworks_compat.js"></script>
    <script src="../_static/doctools.js"></script>
    <script src="../_static/sphinx_highlight.js"></script>
    <script src="../_static/d2l.js"></script>
    <script async="async" src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
    <link rel="index" title="Index" href="../genindex.html" />
    <link rel="search" title="Search" href="../search.html" />
    <link rel="next" title="3.2. 特征交叉" href="2.feature_crossing/index.html" />
    <link rel="prev" title="3. 精排模型" href="index.html" /> 
  </head>
<body>
    <div class="mdl-layout mdl-js-layout mdl-layout--fixed-header mdl-layout--fixed-drawer"><header class="mdl-layout__header mdl-layout__header--waterfall ">
    <div class="mdl-layout__header-row">
        
        <nav class="mdl-navigation breadcrumb">
            <a class="mdl-navigation__link" href="index.html"><span class="section-number">3. </span>Ranking Models</a><i class="material-icons">navigate_next</i>
            <a class="mdl-navigation__link is-active"><span class="section-number">3.1. </span>Memorization and Generalization</a>
        </nav>
        <div class="mdl-layout-spacer"></div>
        <nav class="mdl-navigation">
        
<form class="form-inline pull-sm-right" action="../search.html" method="get">
      <div class="mdl-textfield mdl-js-textfield mdl-textfield--expandable mdl-textfield--floating-label mdl-textfield--align-right">
        <label id="quick-search-icon" class="mdl-button mdl-js-button mdl-button--icon"  for="waterfall-exp">
          <i class="material-icons">search</i>
        </label>
        <div class="mdl-textfield__expandable-holder">
          <input class="mdl-textfield__input" type="text" name="q"  id="waterfall-exp" placeholder="Search" />
          <input type="hidden" name="check_keywords" value="yes" />
          <input type="hidden" name="area" value="default" />
        </div>
      </div>
      <div class="mdl-tooltip" data-mdl-for="quick-search-icon">
      Quick search
      </div>
</form>
        
<a id="button-show-source"
    class="mdl-button mdl-js-button mdl-button--icon"
    href="../_sources/chapter_2_ranking/1.wide_and_deep.rst.txt" rel="nofollow">
  <i class="material-icons">code</i>
</a>
<div class="mdl-tooltip" data-mdl-for="button-show-source">
Show Source
</div>
        </nav>
    </div>
    <div class="mdl-layout__header-row header-links">
      <div class="mdl-layout-spacer"></div>
      <nav class="mdl-navigation">
          
              <a  class="mdl-navigation__link" href="https://funrec-notebooks.s3.eu-west-3.amazonaws.com/fun-rec.zip">
                  <i class="fas fa-download"></i>
                    Jupyter Notebooks
              </a>
          
              <a  class="mdl-navigation__link" href="https://github.com/datawhalechina/fun-rec">
                  <i class="fab fa-github"></i>
                  GitHub
              </a>
      </nav>
    </div>
</header><header class="mdl-layout__drawer">
    
          <!-- Title -->
      <span class="mdl-layout-title">
          <a class="title" href="../index.html">
              <span class="title-text">
                  FunRec 推荐系统
              </span>
          </a>
      </span>
    
    
      <div class="globaltoc">
        <span class="mdl-layout-title toc">Table Of Contents</span>
        
        
            
            <nav class="mdl-navigation">
                <ul>
<li class="toctree-l1"><a class="reference internal" href="../chapter_preface/index.html">前言</a></li>
<li class="toctree-l1"><a class="reference internal" href="../chapter_installation/index.html">安装</a></li>
<li class="toctree-l1"><a class="reference internal" href="../chapter_notation/index.html">符号</a></li>
</ul>
<ul class="current">
<li class="toctree-l1"><a class="reference internal" href="../chapter_0_introduction/index.html">1. 推荐系统概述</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../chapter_0_introduction/1.intro.html">1.1. 推荐系统是什么？</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_0_introduction/2.outline.html">1.2. 本书概览</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../chapter_1_retrieval/index.html">2. 召回模型</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../chapter_1_retrieval/1.cf/index.html">2.1. 协同过滤</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../chapter_1_retrieval/1.cf/1.itemcf.html">2.1.1. 基于物品的协同过滤</a></li>
<li class="toctree-l3"><a class="reference internal" href="../chapter_1_retrieval/1.cf/2.usercf.html">2.1.2. 基于用户的协同过滤</a></li>
<li class="toctree-l3"><a class="reference internal" href="../chapter_1_retrieval/1.cf/3.mf.html">2.1.3. 矩阵分解</a></li>
<li class="toctree-l3"><a class="reference internal" href="../chapter_1_retrieval/1.cf/4.summary.html">2.1.4. 总结</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_1_retrieval/2.embedding/index.html">2.2. 向量召回</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../chapter_1_retrieval/2.embedding/1.i2i.html">2.2.1. I2I召回</a></li>
<li class="toctree-l3"><a class="reference internal" href="../chapter_1_retrieval/2.embedding/2.u2i.html">2.2.2. U2I召回</a></li>
<li class="toctree-l3"><a class="reference internal" href="../chapter_1_retrieval/2.embedding/3.summary.html">2.2.3. 总结</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_1_retrieval/3.sequence/index.html">2.3. 序列召回</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../chapter_1_retrieval/3.sequence/1.user_interests.html">2.3.1. 深化用户兴趣表示</a></li>
<li class="toctree-l3"><a class="reference internal" href="../chapter_1_retrieval/3.sequence/2.generateive_recall.html">2.3.2. 生成式召回方法</a></li>
<li class="toctree-l3"><a class="reference internal" href="../chapter_1_retrieval/3.sequence/3.summary.html">2.3.3. 总结</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1 current"><a class="reference internal" href="index.html">3. 精排模型</a><ul class="current">
<li class="toctree-l2 current"><a class="current reference internal" href="#">3.1. 记忆与泛化</a></li>
<li class="toctree-l2"><a class="reference internal" href="2.feature_crossing/index.html">3.2. 特征交叉</a><ul>
<li class="toctree-l3"><a class="reference internal" href="2.feature_crossing/1.second_order.html">3.2.1. 二阶特征交叉</a></li>
<li class="toctree-l3"><a class="reference internal" href="2.feature_crossing/2.higher_order.html">3.2.2. 高阶特征交叉</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="3.sequence.html">3.3. 序列建模</a></li>
<li class="toctree-l2"><a class="reference internal" href="4.multi_objective/index.html">3.4. 多目标建模</a><ul>
<li class="toctree-l3"><a class="reference internal" href="4.multi_objective/1.arch.html">3.4.1. 基础结构演进</a></li>
<li class="toctree-l3"><a class="reference internal" href="4.multi_objective/2.dependency_modeling.html">3.4.2. 任务依赖建模</a></li>
<li class="toctree-l3"><a class="reference internal" href="4.multi_objective/3.multi_loss_optim.html">3.4.3. 多目标损失融合</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="5.multi_scenario/index.html">3.5. 多场景建模</a><ul>
<li class="toctree-l3"><a class="reference internal" href="5.multi_scenario/1.multi_tower.html">3.5.1. 多塔结构</a></li>
<li class="toctree-l3"><a class="reference internal" href="5.multi_scenario/2.dynamic_weight.html">3.5.2. 动态权重建模</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../chapter_3_rerank/index.html">4. 重排模型</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../chapter_3_rerank/1.greedy.html">4.1. 基于贪心的重排</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_3_rerank/2.personalized.html">4.2. 基于个性化的重排</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_3_rerank/3.summary.html">4.3. 本章小结</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../chapter_4_trends/index.html">5. 难点及热点研究</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../chapter_4_trends/1.debias.html">5.1. 模型去偏</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_4_trends/2.cold_start.html">5.2. 冷启动问题</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_4_trends/3.generative.html">5.3. 生成式推荐</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_4_trends/4.summary.html">5.4. 本章小结</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../chapter_5_projects/index.html">6. 项目实践</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../chapter_5_projects/1.understanding.html">6.1. 赛题理解</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_5_projects/2.baseline.html">6.2. Baseline</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_5_projects/3.analysis.html">6.3. 数据分析</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_5_projects/4.recall.html">6.4. 多路召回</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_5_projects/5.feature_engineering.html">6.5. 特征工程</a></li>
<li class="toctree-l2"><a class="reference internal" href="../chapter_5_projects/6.ranking.html">6.6. 排序模型</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../chapter_appendix/index.html">7. Appendix</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../chapter_appendix/word2vec.html">7.1. Word2vec</a></li>
</ul>
</li>
</ul>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../chapter_references/references.html">参考文献</a></li>
</ul>

            </nav>
        
        </div>
    
</header>
        <main class="mdl-layout__content" tabIndex="0">

	<script type="text/javascript" src="../_static/sphinx_materialdesign_theme.js"></script>

    <div class="document">
        <div class="page-content" role="main">
        
  <section id="wide-and-deep">
<span id="id1"></span><h1><span class="section-number">3.1. </span>Memorization and Generalization<a class="headerlink" href="#wide-and-deep" title="Permalink to this heading">¶</a></h1>
<p>When building recommendation models, we often pursue two seemingly contradictory goals: <strong>memorization</strong> and <strong>generalization</strong>.</p>
<ul class="simple">
<li><p><strong>Memorization</strong> is the model's ability to learn and remember feature combinations that frequently co-occur in the historical data. For example, the model remembers that "users who bought A usually also buy B". This ability captures explicit, high-frequency associations precisely, producing recommendations closely tied to a user's past behavior.</p></li>
<li><p><strong>Generalization</strong> is the model's ability to learn deeper relationships between features and to handle combinations rarely seen during training. For example, if the model discovers that items A and C belong to the same category and the user likes that category, it can recommend C to users who like A even if they have never seen C before, making recommendations more diverse.</p></li>
</ul>
<p>How can a single model excel at both? This is genuinely difficult. The Wide &amp; Deep model, proposed by Google in 2016, offers an effective answer. The idea is straightforward: since two abilities are needed, design two components and let them cooperate through <strong>joint training</strong>.</p>
<p>The architecture is therefore split into two parts, each with its own responsibility:</p>
<figure class="align-default" id="id5">
<span id="wide-and-deep-model-structure"></span><a class="reference internal image-reference" href="../_images/wide_and_deep.png"><img alt="../_images/wide_and_deep.png" src="../_images/wide_and_deep.png" style="width: 400px;" /></a>
<figcaption>
<p><span class="caption-number">Fig. 3.1.1 </span><span class="caption-text">Wide &amp; Deep model architecture</span><a class="headerlink" href="#id5" title="Permalink to this image">¶</a></p>
</figcaption>
</figure>
<p><strong>A shortcut to memorization: the Wide part</strong></p>
<p>The Wide part is essentially a generalized linear model, such as logistic regression. Its strengths are structural simplicity, stronger interpretability, and the ability to efficiently "memorize" obvious association rules. Its mathematical form is:</p>
<div class="math notranslate nohighlight" id="equation-chapter-2-ranking-1-wide-and-deep-0">
<span class="eqno">(3.1.1)<a class="headerlink" href="#equation-chapter-2-ranking-1-wide-and-deep-0" title="Permalink to this equation">¶</a></span>\[y=\mathbf{w}^T \mathbf{x}+b\]</div>
<p>where <span class="math notranslate nohighlight">\(y\)</span> is the prediction, <span class="math notranslate nohighlight">\(\mathbf{w}\)</span> is the weight vector, <span class="math notranslate nohighlight">\(\mathbf{x}\)</span> is the feature vector, and <span class="math notranslate nohighlight">\(b\)</span> is the bias term.</p>
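<p>As a quick illustration of equation (3.1.1), here is a minimal numpy sketch; the weights and feature values are made up for the example, not taken from the model:</p>

```python
import numpy as np

# Hypothetical Wide-part parameters for y = w^T x + b
w = np.array([0.5, -0.2, 0.1])   # one weight per (raw or cross) feature
x = np.array([1.0, 0.0, 1.0])    # binary feature vector
b = 0.05                         # bias term

y = w @ x + b                    # -> 0.65
```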
<p>The key to the Wide part lies in its input feature vector <span class="math notranslate nohighlight">\(\mathbf{x}\)</span>, which contains not only raw features but, more importantly, a large number of hand-crafted <strong>cross-product features</strong>. A cross-product feature combines several independent features into a new one that captures a specific co-occurrence pattern. For example, in an app-store recommendation scenario we can create the cross-product feature <code class="docutils literal notranslate"><span class="pre">AND(installed_app=photo_editor,</span> <span class="pre">impression_app=filter_pack)</span></code>, which fires when the user has installed a photo-editor app and is now being shown a filter-pack app.</p>
<p>In this way, the Wide part learns strong association rules, such as "photo-editor users are more willing to install filter packs", directly and quickly. This is memorization in its most literal form.</p>
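<p>To make the encoding concrete, the sketch below (with hypothetical vocabulary sizes and ids) shows a common way such a cross-product feature is realized: each pair of category indices maps to a unique slot in a joint vocabulary, and the Wide part keeps one scalar weight per slot:</p>

```python
def cross_index(i: int, j: int, vocab_j: int) -> int:
    """Map the feature pair (i, j) to a unique slot in [0, vocab_i * vocab_j)."""
    return i * vocab_j + j

# hypothetical vocabularies: 1000 installed apps x 500 impression apps
VOCAB_I, VOCAB_J = 1000, 500
slot = cross_index(42, 7, VOCAB_J)     # slot for this specific app pair
weights = [0.0] * (VOCAB_I * VOCAB_J)  # one "memorized" weight per pair
weights[slot] += 0.3                   # training would adjust this entry
```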
<p><strong>Core code</strong></p>
<p>The crux of the Wide part is handling cross features. For each pair of features to be crossed, the model creates a dedicated weight that memorizes their co-occurrence pattern:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="c1"># Iterate over every pair of features to cross</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="nb">len</span><span class="p">(</span><span class="n">cross_feature_columns</span><span class="p">)):</span>
    <span class="k">for</span> <span class="n">j</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">i</span> <span class="o">+</span> <span class="mi">1</span><span class="p">,</span> <span class="nb">len</span><span class="p">(</span><span class="n">cross_feature_columns</span><span class="p">)):</span>
        <span class="n">fc_i</span> <span class="o">=</span> <span class="n">cross_feature_columns</span><span class="p">[</span><span class="n">i</span><span class="p">]</span>
        <span class="n">fc_j</span> <span class="o">=</span> <span class="n">cross_feature_columns</span><span class="p">[</span><span class="n">j</span><span class="p">]</span>

        <span class="c1"># fetch the inputs of the two features</span>
        <span class="n">feat_i</span> <span class="o">=</span> <span class="n">input_layer_dict</span><span class="p">[</span><span class="n">fc_i</span><span class="o">.</span><span class="n">name</span><span class="p">]</span>  <span class="c1"># [B, 1]</span>
        <span class="n">feat_j</span> <span class="o">=</span> <span class="n">input_layer_dict</span><span class="p">[</span><span class="n">fc_j</span><span class="o">.</span><span class="n">name</span><span class="p">]</span>  <span class="c1"># [B, 1]</span>

        <span class="c1"># create an independent weight table for each feature pair</span>
        <span class="n">cross_vocab_size</span> <span class="o">=</span> <span class="n">fc_i</span><span class="o">.</span><span class="n">vocab_size</span> <span class="o">*</span> <span class="n">fc_j</span><span class="o">.</span><span class="n">vocab_size</span>
        <span class="n">cross_embedding</span> <span class="o">=</span> <span class="n">Embedding</span><span class="p">(</span>
            <span class="n">input_dim</span><span class="o">=</span><span class="n">cross_vocab_size</span><span class="p">,</span>
            <span class="n">output_dim</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span>  <span class="c1"># scalar weight that directly memorizes the effect of this pair</span>
            <span class="n">name</span><span class="o">=</span><span class="sa">f</span><span class="s2">&quot;cross_</span><span class="si">{</span><span class="n">fc_i</span><span class="o">.</span><span class="n">name</span><span class="si">}</span><span class="s2">_</span><span class="si">{</span><span class="n">fc_j</span><span class="o">.</span><span class="n">name</span><span class="si">}</span><span class="s2">&quot;</span>
        <span class="p">)</span>

        <span class="c1"># combine the feature pair into a single index and look up the weight</span>
        <span class="n">combined_index</span> <span class="o">=</span> <span class="n">feat_i</span> <span class="o">*</span> <span class="n">fc_j</span><span class="o">.</span><span class="n">vocab_size</span> <span class="o">+</span> <span class="n">feat_j</span>
        <span class="n">cross_weight</span> <span class="o">=</span> <span class="n">cross_embedding</span><span class="p">(</span><span class="n">combined_index</span><span class="p">)</span>  <span class="c1"># table lookup for the weight of this pair</span>
        <span class="n">cross_weights</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">cross_weight</span><span class="p">)</span>

<span class="c1"># sum the weights of all cross features</span>
<span class="n">cross_logits</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">add_n</span><span class="p">(</span><span class="n">cross_weights</span><span class="p">)</span>
</pre></div>
</div>
<p>This design captures the essence of the Wide part: each feature combination gets its own independent weight, and a simple table lookup directly "remembers" co-occurrence patterns from the historical data.</p>
<p><strong>Learning complex relationships: the Deep part</strong></p>
<p>The Deep part is a standard feed-forward neural network (DNN) responsible for the model's generalization ability. Unlike the Wide part, which relies on manual feature engineering, the Deep part automatically learns high-order, nonlinear relationships among features.</p>
<p>Its workflow is as follows. First, high-dimensional sparse categorical features (such as user IDs and item IDs) are mapped to low-dimensional dense vectors through an <strong>embedding layer</strong>. These embedding vectors capture latent semantic information about the features and are the foundation of generalization. For example, the movie IDs of <em>The Wandering Earth</em> and <em>The Three-Body Problem</em> will likely lie closer together in the embedding space than those of <em>The Wandering Earth</em> and <em>Boonie Bears</em>.</p>
<p>These embedding vectors are then concatenated with the remaining numerical features and fed into a multi-layer neural network for forward propagation:</p>
<div class="math notranslate nohighlight" id="equation-chapter-2-ranking-1-wide-and-deep-1">
<span class="eqno">(3.1.2)<a class="headerlink" href="#equation-chapter-2-ranking-1-wide-and-deep-1" title="Permalink to this equation">¶</a></span>\[a^{(l+1)}=f(W^{(l)}a^{(l)}+b^{(l)})\]</div>
<p>where <span class="math notranslate nohighlight">\(a^{(l)}\)</span> is the activation of layer <span class="math notranslate nohighlight">\(l\)</span>, <span class="math notranslate nohighlight">\(W^{(l)}\)</span> and <span class="math notranslate nohighlight">\(b^{(l)}\)</span> are that layer's weights and bias, and <span class="math notranslate nohighlight">\(f\)</span> is the activation function (e.g., ReLU). Through layer-by-layer abstraction, the DNN can uncover complex hidden patterns in the data and make reasonable predictions even for feature combinations it has never seen.</p>
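<p>Equation (3.1.2) can be sketched directly in numpy; the layer sizes below are illustrative only:</p>

```python
import numpy as np

def forward(a, layers, f=lambda z: np.maximum(z, 0.0)):
    """Apply a^{(l+1)} = f(W^{(l)} a^{(l)} + b^{(l)}) layer by layer (ReLU by default)."""
    for W, b in layers:
        a = f(W @ a + b)
    return a

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(64, 128)), np.zeros(64)),  # 128 -> 64
          (rng.normal(size=(32, 64)), np.zeros(32))]   # 64 -> 32
a0 = rng.normal(size=128)  # concatenated embeddings plus numeric features
out = forward(a0, layers)  # shape (32,); every entry >= 0 after the final ReLU
```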
<p><strong>Core code</strong></p>
<p>The Deep part is implemented in two key steps: first map categorical features to dense vectors, then learn high-order feature interactions through a multi-layer neural network:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="c1"># 1. Feature embedding: map sparse categorical features to dense vectors</span>
<span class="n">group_feature_dict</span> <span class="o">=</span> <span class="p">{}</span>
<span class="k">for</span> <span class="n">group_name</span><span class="p">,</span> <span class="n">_</span> <span class="ow">in</span> <span class="n">group_embedding_feature_dict</span><span class="o">.</span><span class="n">items</span><span class="p">():</span>
    <span class="n">group_feature_dict</span><span class="p">[</span><span class="n">group_name</span><span class="p">]</span> <span class="o">=</span> <span class="n">concat_group_embedding</span><span class="p">(</span>
        <span class="n">group_embedding_feature_dict</span><span class="p">,</span> <span class="n">group_name</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">flatten</span><span class="o">=</span><span class="kc">True</span>
    <span class="p">)</span>  <span class="c1"># B x (N * D) - concatenation of all feature embeddings</span>

<span class="c1"># 2. DNN: learn nonlinear feature combinations layer by layer</span>
<span class="n">deep_logits</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">for</span> <span class="n">group_name</span><span class="p">,</span> <span class="n">group_feature</span> <span class="ow">in</span> <span class="n">group_feature_dict</span><span class="o">.</span><span class="n">items</span><span class="p">():</span>
    <span class="c1"># build the multi-layer network</span>
    <span class="n">deep_out</span> <span class="o">=</span> <span class="n">DNNs</span><span class="p">(</span>
        <span class="n">units</span><span class="o">=</span><span class="n">dnn_units</span><span class="p">,</span>  <span class="c1"># e.g. [64, 32]</span>
        <span class="n">activation</span><span class="o">=</span><span class="s2">&quot;relu&quot;</span><span class="p">,</span>  <span class="c1"># ReLU activation</span>
        <span class="n">dropout_rate</span><span class="o">=</span><span class="n">dnn_dropout_rate</span>
    <span class="p">)(</span><span class="n">group_feature</span><span class="p">)</span>

    <span class="c1"># output layer: map deep features to a prediction score</span>
    <span class="n">deep_logit</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="kc">None</span><span class="p">)(</span><span class="n">deep_out</span><span class="p">)</span>
    <span class="n">deep_logits</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">deep_logit</span><span class="p">)</span>
</pre></div>
</div>
<p>This design lets the model learn semantic representations of features automatically, for example mapping features related to "item A" to nearby positions in the vector space, which enables generalized predictions for feature combinations it has never seen.</p>
<p><strong>Combining the two</strong></p>
<p>Through joint training, Wide &amp; Deep fuses the outputs of the two parts into the final prediction. The predicted probability is:</p>
<div class="math notranslate nohighlight" id="equation-chapter-2-ranking-1-wide-and-deep-2">
<span class="eqno">(3.1.3)<a class="headerlink" href="#equation-chapter-2-ranking-1-wide-and-deep-2" title="Permalink to this equation">¶</a></span>\[P(Y=1|\mathbf{x})=\sigma(\mathbf{w}_{wide}^T[\mathbf{x},\phi(\mathbf{x})]+\mathbf{w}_{deep}^T a^{(lf)}+b)\]</div>
<p>Here, <span class="math notranslate nohighlight">\(\sigma\)</span> is the sigmoid function, <span class="math notranslate nohighlight">\([\mathbf{x}, \phi(\mathbf{x})]\)</span> is the input to the Wide part (raw features plus cross features), <span class="math notranslate nohighlight">\(a^{(lf)}\)</span> is the output of the Deep part's final layer, and <span class="math notranslate nohighlight">\(\mathbf{w}_{wide}\)</span>, <span class="math notranslate nohighlight">\(\mathbf{w}_{deep}\)</span>, and <span class="math notranslate nohighlight">\(b\)</span> are the weights and bias of the final prediction layer. During backpropagation, the gradients update all parameters of both the Wide and Deep parts simultaneously.</p>
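<p>Equation (3.1.3) amounts to adding the two logits before a sigmoid. A minimal numpy sketch with made-up dimensions:</p>

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def wide_deep_predict(x_wide, a_lf, w_wide, w_deep, b):
    """P(Y=1|x) = sigmoid(w_wide^T [x, phi(x)] + w_deep^T a^{(lf)} + b)."""
    return sigmoid(w_wide @ x_wide + w_deep @ a_lf + b)

rng = np.random.default_rng(1)
x_wide = rng.integers(0, 2, size=10).astype(float)  # raw + cross features
a_lf = rng.normal(size=32)                          # last Deep-layer activations
p = wide_deep_predict(x_wide, a_lf,
                      rng.normal(size=10), rng.normal(size=32), 0.0)
```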
<p>A noteworthy engineering detail is that, because the two parts consume different types of features, they typically use different optimizers.</p>
<ul class="simple">
<li><p>The <strong>Wide part</strong>'s input features are extremely sparse, so it commonly uses an optimizer such as FTRL with L1 regularization
<span id="id2">(<a class="reference internal" href="../chapter_references/references.html#id46" title="Ferreira, R. N., &amp; Soares, C. (2025). Follow-the-regularized-leader with adversarial constraints. arXiv preprint arXiv:2503.13366.">Ferreira and Soares, 2025</a>)</span>.
L1 regularization produces sparse weights, which amounts to automatic feature selection: the model "remembers" only the important rules.</p></li>
<li><p>The <strong>Deep part</strong>'s parameters are dense, so it is better served by optimizers such as AdaGrad
<span id="id3">(<a class="reference internal" href="../chapter_references/references.html#id69" title="Duchi, J., Hazan, E., &amp; Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of machine learning research, 12(7).">Duchi <em>et al.</em>, 2011</a>)</span> or Adam <span id="id4">(<a class="reference internal" href="../chapter_references/references.html#id70" title="Kingma, D. P., &amp; Ba, J. (2014). Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.">Kingma and Ba, 2014</a>)</span>.</p></li>
</ul>
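<p>The sketch below is not the full FTRL-Proximal update; it only illustrates, with made-up weights, the mechanism described above: an L1 proximal step (soft-thresholding) drives unimportant wide weights exactly to zero, leaving a sparse model:</p>

```python
import numpy as np

def l1_prox_step(w, grad, lr=0.1, l1=0.05):
    """Gradient step followed by soft-thresholding (the L1 proximal operator)."""
    w = w - lr * grad
    return np.sign(w) * np.maximum(np.abs(w) - lr * l1, 0.0)

w_wide = np.array([0.004, 0.8, -0.002])  # two near-useless weights, one useful
w_wide = l1_prox_step(w_wide, np.zeros(3))
# the tiny weights are thresholded to exactly zero; 0.8 shrinks to 0.795
```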
<p><strong>Core code</strong></p>
<p>The core of joint training is fusing the outputs of the Wide and Deep parts:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="c1"># Wide part: linear features + cross features</span>
<span class="n">linear_logit</span> <span class="o">=</span> <span class="n">get_linear_logits</span><span class="p">(</span><span class="n">input_layer_dict</span><span class="p">,</span> <span class="n">feature_columns</span><span class="p">)</span>
<span class="n">cross_logit</span> <span class="o">=</span> <span class="n">get_cross_logits</span><span class="p">(</span><span class="n">input_layer_dict</span><span class="p">,</span> <span class="n">feature_columns</span><span class="p">)</span>

<span class="c1"># Deep part: deep-network outputs for each feature group</span>
<span class="n">deep_logits</span> <span class="o">=</span> <span class="p">[]</span>
<span class="k">for</span> <span class="n">group_name</span><span class="p">,</span> <span class="n">group_feature</span> <span class="ow">in</span> <span class="n">group_feature_dict</span><span class="o">.</span><span class="n">items</span><span class="p">():</span>
    <span class="n">deep_out</span> <span class="o">=</span> <span class="n">DNNs</span><span class="p">(</span><span class="n">units</span><span class="o">=</span><span class="n">dnn_units</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="s2">&quot;relu&quot;</span><span class="p">,</span> <span class="n">dropout_rate</span><span class="o">=</span><span class="n">dnn_dropout_rate</span><span class="p">)(</span>
        <span class="n">group_feature</span>
    <span class="p">)</span>
    <span class="n">deep_logit</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="kc">None</span><span class="p">)(</span><span class="n">deep_out</span><span class="p">)</span>
    <span class="n">deep_logits</span><span class="o">.</span><span class="n">append</span><span class="p">(</span><span class="n">deep_logit</span><span class="p">)</span>

<span class="c1"># Joint training: sum the Wide and Deep outputs</span>
<span class="n">wide_deep_logits</span> <span class="o">=</span> <span class="n">add_tensor_func</span><span class="p">(</span><span class="n">deep_logits</span> <span class="o">+</span> <span class="p">[</span><span class="n">linear_logit</span><span class="p">,</span> <span class="n">cross_logit</span><span class="p">])</span>

<span class="c1"># Final prediction: a sigmoid outputs the click-through probability</span>
<span class="n">output</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">keras</span><span class="o">.</span><span class="n">layers</span><span class="o">.</span><span class="n">Dense</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="n">activation</span><span class="o">=</span><span class="s2">&quot;sigmoid&quot;</span><span class="p">)(</span><span class="n">wide_deep_logits</span><span class="p">)</span>
</pre></div>
</div>
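<p>Note that the snippet above adds the logits and passes the sum through a final sigmoid layer: the Wide and Deep parts are trained jointly under one loss, not ensembled. A quick numeric illustration of the difference, using toy logit values and ignoring the final Dense layer's learned scale and bias:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span>import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

linear_logit, cross_logit, deep_logit = 0.8, -0.3, 1.5

# joint fusion: add the logits, then apply one sigmoid
joint = sigmoid(linear_logit + cross_logit + deep_logit)

# an ensemble would instead average separately produced probabilities
ensemble = np.mean([sigmoid(linear_logit), sigmoid(cross_logit), sigmoid(deep_logit)])

print(round(joint, 4), round(float(ensemble), 4))   # 0.8808 0.6444
</pre></div>
</div>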
<p>The significance of the Wide &amp; Deep model lies not only in a new network architecture but, more importantly, in the idea it introduced: how to combine memorization and generalization. The model has become a baseline for many recommendation systems and has served as an important reference for the subsequent development of ranking models.</p>
<p><strong>Code in Practice</strong></p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span><span class="w"> </span><span class="nn">funrec</span><span class="w"> </span><span class="kn">import</span> <span class="n">run_experiment</span>

<span class="n">run_experiment</span><span class="p">(</span><span class="s1">&#39;wide_deep&#39;</span><span class="p">)</span>
</pre></div>
</div>
<div class="output highlight-default notranslate"><div class="highlight"><pre><span></span><span class="o">+--------+--------+------------+</span>
<span class="o">|</span>    <span class="n">auc</span> <span class="o">|</span>   <span class="n">gauc</span> <span class="o">|</span>   <span class="n">val_user</span> <span class="o">|</span>
<span class="o">+========+========+============+</span>
<span class="o">|</span> <span class="mf">0.6038</span> <span class="o">|</span> <span class="mf">0.5754</span> <span class="o">|</span>        <span class="mi">928</span> <span class="o">|</span>
<span class="o">+--------+--------+------------+</span>
</pre></div>
</div>
</section>


        </div>
        <div class="side-doc-outline">
            <div class="side-doc-outline--content"> 
            </div>
        </div>

      <div class="clearer"></div>
    </div><div class="pagenation">
     <a id="button-prev" href="index.html" class="mdl-button mdl-js-button mdl-js-ripple-effect mdl-button--colored" role="button" accesskey="P">
         <i class="pagenation-arrow-L fas fa-arrow-left fa-lg"></i>
         <div class="pagenation-text">
            <span class="pagenation-direction">Previous</span>
            <div>3. Ranking Models</div>
         </div>
     </a>
     <a id="button-next" href="2.feature_crossing/index.html" class="mdl-button mdl-js-button mdl-js-ripple-effect mdl-button--colored" role="button" accesskey="N">
         <i class="pagenation-arrow-R fas fa-arrow-right fa-lg"></i>
        <div class="pagenation-text">
            <span class="pagenation-direction">Next</span>
            <div>3.2. Feature Crossing</div>
        </div>
     </a>
  </div>
        
        </main>
    </div>
  </body>
</html>