<!DOCTYPE html>

<html lang="en">
  <head>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" /><meta name="generator" content="Docutils 0.19: https://docutils.sourceforge.io/" />

    <meta http-equiv="x-ua-compatible" content="ie=edge">
    
    <title>2.2.1. I2I Recall &#8212; FunRec Recommender Systems 0.0.1 documentation</title>

    <link rel="stylesheet" href="../../_static/material-design-lite-1.3.0/material.blue-deep_orange.min.css" type="text/css" />
    <link rel="stylesheet" href="../../_static/sphinx_materialdesign_theme.css" type="text/css" />
    <link rel="stylesheet" href="../../_static/fontawesome/all.css" type="text/css" />
    <link rel="stylesheet" href="../../_static/fonts.css" type="text/css" />
    <link rel="stylesheet" type="text/css" href="../../_static/pygments.css" />
    <link rel="stylesheet" type="text/css" href="../../_static/basic.css" />
    <link rel="stylesheet" type="text/css" href="../../_static/d2l.css" />
    <script data-url_root="../../" id="documentation_options" src="../../_static/documentation_options.js"></script>
    <script src="../../_static/jquery.js"></script>
    <script src="../../_static/underscore.js"></script>
    <script src="../../_static/_sphinx_javascript_frameworks_compat.js"></script>
    <script src="../../_static/doctools.js"></script>
    <script src="../../_static/sphinx_highlight.js"></script>
    <script src="../../_static/d2l.js"></script>
    <script async="async" src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
    <link rel="index" title="Index" href="../../genindex.html" />
    <link rel="search" title="Search" href="../../search.html" />
    <link rel="next" title="2.2.2. U2I Recall" href="2.u2i.html" />
    <link rel="prev" title="2.2. Embedding-Based Recall" href="index.html" /> 
  </head>
<body>
    <div class="mdl-layout mdl-js-layout mdl-layout--fixed-header mdl-layout--fixed-drawer"><header class="mdl-layout__header mdl-layout__header--waterfall ">
    <div class="mdl-layout__header-row">
        
        <nav class="mdl-navigation breadcrumb">
            <a class="mdl-navigation__link" href="../index.html"><span class="section-number">2. </span>Recall Models</a><i class="material-icons">navigate_next</i>
            <a class="mdl-navigation__link" href="index.html"><span class="section-number">2.2. </span>Embedding-Based Recall</a><i class="material-icons">navigate_next</i>
            <a class="mdl-navigation__link is-active"><span class="section-number">2.2.1. </span>I2I Recall</a>
        </nav>
        <div class="mdl-layout-spacer"></div>
        <nav class="mdl-navigation">
        
<form class="form-inline pull-sm-right" action="../../search.html" method="get">
      <div class="mdl-textfield mdl-js-textfield mdl-textfield--expandable mdl-textfield--floating-label mdl-textfield--align-right">
        <label id="quick-search-icon" class="mdl-button mdl-js-button mdl-button--icon"  for="waterfall-exp">
          <i class="material-icons">search</i>
        </label>
        <div class="mdl-textfield__expandable-holder">
          <input class="mdl-textfield__input" type="text" name="q"  id="waterfall-exp" placeholder="Search" />
          <input type="hidden" name="check_keywords" value="yes" />
          <input type="hidden" name="area" value="default" />
        </div>
      </div>
      <div class="mdl-tooltip" data-mdl-for="quick-search-icon">
      Quick search
      </div>
</form>
        
<a id="button-show-source"
    class="mdl-button mdl-js-button mdl-button--icon"
    href="../../_sources/chapter_1_retrieval/2.embedding/1.i2i.rst.txt" rel="nofollow">
  <i class="material-icons">code</i>
</a>
<div class="mdl-tooltip" data-mdl-for="button-show-source">
Show Source
</div>
        </nav>
    </div>
    <div class="mdl-layout__header-row header-links">
      <div class="mdl-layout-spacer"></div>
      <nav class="mdl-navigation">
          
              <a  class="mdl-navigation__link" href="https://funrec-notebooks.s3.eu-west-3.amazonaws.com/fun-rec.zip">
                  <i class="fas fa-download"></i>
                  Jupyter Notebooks
              </a>
          
              <a  class="mdl-navigation__link" href="https://github.com/datawhalechina/fun-rec">
                  <i class="fab fa-github"></i>
                  GitHub
              </a>
      </nav>
    </div>
</header><header class="mdl-layout__drawer">
    
          <!-- Title -->
      <span class="mdl-layout-title">
          <a class="title" href="../../index.html">
              <span class="title-text">
                  FunRec Recommender Systems
              </span>
          </a>
      </span>
    
    
      <div class="globaltoc">
        <span class="mdl-layout-title toc">Table Of Contents</span>
        
        
            
            <nav class="mdl-navigation">
                <ul>
<li class="toctree-l1"><a class="reference internal" href="../../chapter_preface/index.html">Preface</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../chapter_installation/index.html">Installation</a></li>
<li class="toctree-l1"><a class="reference internal" href="../../chapter_notation/index.html">Notation</a></li>
</ul>
<ul class="current">
<li class="toctree-l1"><a class="reference internal" href="../../chapter_0_introduction/index.html">1. Introduction to Recommender Systems</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_0_introduction/1.intro.html">1.1. What Is a Recommender System?</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_0_introduction/2.outline.html">1.2. Overview of This Book</a></li>
</ul>
</li>
<li class="toctree-l1 current"><a class="reference internal" href="../index.html">2. Recall Models</a><ul class="current">
<li class="toctree-l2"><a class="reference internal" href="../1.cf/index.html">2.1. Collaborative Filtering</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../1.cf/1.itemcf.html">2.1.1. Item-Based Collaborative Filtering</a></li>
<li class="toctree-l3"><a class="reference internal" href="../1.cf/2.usercf.html">2.1.2. User-Based Collaborative Filtering</a></li>
<li class="toctree-l3"><a class="reference internal" href="../1.cf/3.mf.html">2.1.3. Matrix Factorization</a></li>
<li class="toctree-l3"><a class="reference internal" href="../1.cf/4.summary.html">2.1.4. Summary</a></li>
</ul>
</li>
<li class="toctree-l2 current"><a class="reference internal" href="index.html">2.2. Embedding-Based Recall</a><ul class="current">
<li class="toctree-l3 current"><a class="current reference internal" href="#">2.2.1. I2I Recall</a></li>
<li class="toctree-l3"><a class="reference internal" href="2.u2i.html">2.2.2. U2I Recall</a></li>
<li class="toctree-l3"><a class="reference internal" href="3.summary.html">2.2.3. Summary</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../3.sequence/index.html">2.3. Sequential Recall</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../3.sequence/1.user_interests.html">2.3.1. Richer User Interest Representations</a></li>
<li class="toctree-l3"><a class="reference internal" href="../3.sequence/2.generateive_recall.html">2.3.2. Generative Recall Methods</a></li>
<li class="toctree-l3"><a class="reference internal" href="../3.sequence/3.summary.html">2.3.3. Summary</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../chapter_2_ranking/index.html">3. Ranking Models</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_2_ranking/1.wide_and_deep.html">3.1. Memorization and Generalization</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_2_ranking/2.feature_crossing/index.html">3.2. Feature Interaction</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../chapter_2_ranking/2.feature_crossing/1.second_order.html">3.2.1. Second-Order Feature Interaction</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../chapter_2_ranking/2.feature_crossing/2.higher_order.html">3.2.2. Higher-Order Feature Interaction</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_2_ranking/3.sequence.html">3.3. Sequence Modeling</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_2_ranking/4.multi_objective/index.html">3.4. Multi-Objective Modeling</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../chapter_2_ranking/4.multi_objective/1.arch.html">3.4.1. Evolution of Base Architectures</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../chapter_2_ranking/4.multi_objective/2.dependency_modeling.html">3.4.2. Modeling Task Dependencies</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../chapter_2_ranking/4.multi_objective/3.multi_loss_optim.html">3.4.3. Multi-Objective Loss Fusion</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_2_ranking/5.multi_scenario/index.html">3.5. Multi-Scenario Modeling</a><ul>
<li class="toctree-l3"><a class="reference internal" href="../../chapter_2_ranking/5.multi_scenario/1.multi_tower.html">3.5.1. Multi-Tower Architectures</a></li>
<li class="toctree-l3"><a class="reference internal" href="../../chapter_2_ranking/5.multi_scenario/2.dynamic_weight.html">3.5.2. Dynamic Weight Modeling</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../chapter_3_rerank/index.html">4. Re-Ranking Models</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_3_rerank/1.greedy.html">4.1. Greedy Re-Ranking</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_3_rerank/2.personalized.html">4.2. Personalized Re-Ranking</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_3_rerank/3.summary.html">4.3. Chapter Summary</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../chapter_4_trends/index.html">5. Open Challenges and Research Trends</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_4_trends/1.debias.html">5.1. Model Debiasing</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_4_trends/2.cold_start.html">5.2. The Cold-Start Problem</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_4_trends/3.generative.html">5.3. Generative Recommendation</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_4_trends/4.summary.html">5.4. Chapter Summary</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../chapter_5_projects/index.html">6. Hands-On Project</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_5_projects/1.understanding.html">6.1. Understanding the Competition Task</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_5_projects/2.baseline.html">6.2. Baseline</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_5_projects/3.analysis.html">6.3. Data Analysis</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_5_projects/4.recall.html">6.4. Multi-Channel Recall</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_5_projects/5.feature_engineering.html">6.5. Feature Engineering</a></li>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_5_projects/6.ranking.html">6.6. Ranking Models</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="../../chapter_appendix/index.html">7. Appendix</a><ul>
<li class="toctree-l2"><a class="reference internal" href="../../chapter_appendix/word2vec.html">7.1. Word2vec</a></li>
</ul>
</li>
</ul>
<ul>
<li class="toctree-l1"><a class="reference internal" href="../../chapter_references/references.html">References</a></li>
</ul>

            </nav>
        
        </div>
    
</header>
        <main class="mdl-layout__content" tabIndex="0">

	<script type="text/javascript" src="../../_static/sphinx_materialdesign_theme.js"></script>

    <div class="document">
        <div class="page-content" role="main">
        
  <section id="i2i">
<span id="id1"></span><h1><span class="section-number">2.2.1. </span>I2I Recall<a class="headerlink" href="#i2i" title="Permalink to this heading">¶</a></h1>
<p>In recommender systems, I2I (Item-to-Item) recall addresses a core task: given an item, how do we quickly find other items similar to it? This seemingly simple question hides a deep insight: “similarity” is not determined by an item's intrinsic attributes alone, but is defined jointly with user behavior. If two products are frequently bought by the same group of users, or two movies are enjoyed by the same audience, there is likely some association between them.</p>
<p>The inspiration for this idea comes from an important finding in natural language processing. Linguistics has a famous distributional hypothesis
<span id="id2">(<a class="reference internal" href="../../chapter_references/references.html#id16" title="Firth, J. R. (1957). Studies in Linguistic Analysis. Blackwell.">Firth, 1957</a>)</span>: “You shall know a word by the company it keeps.” A word's meaning can be inferred from the words it tends to appear with. Word2Vec builds on exactly this idea: by analyzing word co-occurrence in large text corpora, it learns word vectors that capture semantic similarity. This section first introduces the core ideas of Word2Vec, laying the theoretical groundwork for the I2I recall models that follow.</p>
<p>We will then see that every I2I recall method is, at heart, answering the same question: how to better define and exploit “sequences” to learn item-to-item similarity. From raw user behavior sequences, to sequences enriched with attribute information, to session-level sequences shaped by business goals, each method is a different interpretation and refinement of the notion of a “sequence”.</p>
<section id="word2vec">
<h2><span class="section-number">2.2.1.1. </span>Word2Vec: The Theoretical Foundation of Sequence Modeling<a class="headerlink" href="#word2vec" title="Permalink to this heading">¶</a></h2>
<p>Word2Vec <span id="id3">(<a class="reference internal" href="../../chapter_references/references.html#id15" title="Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., &amp; Dean, J. (2013). Distributed representations of words and phrases and their compositionality. Advances in neural information processing systems, 26.">Mikolov <em>et al.</em>, 2013</a>)</span>
rests on a simple but profound assumption: words that appear in similar contexts tend to have similar meanings. By analyzing word co-occurrence patterns in massive text corpora, we can learn a dense vector representation for each word, such that semantically related words lie close together in the vector space.</p>
<p>Word2Vec comes in two main architectures: <strong>Skip-Gram</strong> and <strong>CBOW</strong> (Continuous Bag of Words). Skip-Gram predicts the surrounding context words from a given center word, while CBOW does the opposite, predicting the center word from its context. In recommender systems, Skip-Gram is the more widely adopted of the two owing to its stronger empirical performance.</p>
<section id="skip-gram">
<h3><span class="section-number">2.2.1.1.1. </span>The Skip-Gram Model in Detail<a class="headerlink" href="#skip-gram" title="Permalink to this heading">¶</a></h3>
<figure class="align-default" id="id16">
<span id="w2v-skip-gram"></span><a class="reference internal image-reference" href="../../_images/w2v_skip_gram.svg"><img alt="../../_images/w2v_skip_gram.svg" src="../../_images/w2v_skip_gram.svg" width="300px" /></a>
<figcaption>
<p><span class="caption-number">Fig. 2.2.1 </span><span class="caption-text">Schematic of the Word2Vec Skip-Gram model</span><a class="headerlink" href="#id16" title="Permalink to this image">¶</a></p>
</figcaption>
</figure>
<p>In the Skip-Gram model, given the center word <span class="math notranslate nohighlight">\(w_t\)</span> at position <span class="math notranslate nohighlight">\(t\)</span> of a text sequence, the objective is to maximize the probability of every word inside its context window. Concretely, for a window of size <span class="math notranslate nohighlight">\(m\)</span>, the model predicts the context words <span class="math notranslate nohighlight">\(w_{t-m}, w_{t-m+1}, \ldots, w_{t-1}, w_{t+1}, \ldots, w_{t+m}\)</span>.</p>
<p>The conditional probability of predicting a context word <span class="math notranslate nohighlight">\(w_{t+j}\)</span> from the center word <span class="math notranslate nohighlight">\(w_t\)</span> is defined as:</p>
<div class="math notranslate nohighlight" id="equation-chapter-1-retrieval-2-embedding-1-i2i-0">
<span class="eqno">(2.2.1)<a class="headerlink" href="#equation-chapter-1-retrieval-2-embedding-1-i2i-0" title="Permalink to this equation">¶</a></span>\[P(w_{t+j} | w_t) = \frac{e^{v_{w_{t+j}}^T v_{w_t}}}{\sum_{k=1}^{|V|} e^{v_{w_k}^T v_{w_t}}}\]</div>
<p>where <span class="math notranslate nohighlight">\(v_{w_i}\)</span> denotes the vector representation of word <span class="math notranslate nohighlight">\(w_i\)</span> and <span class="math notranslate nohighlight">\(V\)</span> is the vocabulary. The softmax ensures that the probabilities over all words sum to 1, while the inner product <span class="math notranslate nohighlight">\(v_{w_{t+j}}^T v_{w_t}\)</span> in the numerator measures the similarity between the center word and the context word.</p>
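<p>As a concrete illustration, the softmax above can be computed directly for a toy vocabulary. The vocabulary size, embedding dimension, and random vectors below are illustrative assumptions, not learned values:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 5, 3                          # toy vocabulary size and embedding dim
word_vecs = rng.normal(size=(V, d))  # v_{w_i}: one vector per vocabulary word

def skip_gram_prob(center, context):
    # P(w_context | w_center) = exp(v_context . v_center) / sum_k exp(v_k . v_center)
    scores = word_vecs @ word_vecs[center]  # inner product with every word
    scores -= scores.max()                  # stabilize the exponentials
    probs = np.exp(scores) / np.exp(scores).sum()
    return probs[context]

# The probabilities over the whole vocabulary sum to 1, as the softmax guarantees
probs = np.array([skip_gram_prob(0, j) for j in range(V)])
```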
</section>
<section id="id4">
<h3><span class="section-number">2.2.1.1.2. </span>Negative Sampling<a class="headerlink" href="#id4" title="Permalink to this heading">¶</a></h3>
<p>Computing the softmax denominator above requires iterating over the entire vocabulary, which is prohibitively expensive in practice. To solve this, Word2Vec adopts negative sampling, which recasts the original multi-class problem as a set of binary classification problems:</p>
<div class="math notranslate nohighlight" id="equation-chapter-1-retrieval-2-embedding-1-i2i-1">
<span class="eqno">(2.2.2)<a class="headerlink" href="#equation-chapter-1-retrieval-2-embedding-1-i2i-1" title="Permalink to this equation">¶</a></span>\[\log \sigma(v_{w_{t+j}}^T v_{w_t}) + \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)} \log \sigma(-v_{w_i}^T v_{w_t})\]</div>
<p>where <span class="math notranslate nohighlight">\(\sigma(x) = \frac{1}{1 + e^{-x}}\)</span> is the sigmoid function, <span class="math notranslate nohighlight">\(k\)</span> is the number of negative samples, and <span class="math notranslate nohighlight">\(P_n(w)\)</span> is the negative-sampling distribution. The intuition is simple: for a genuine word pair we increase the similarity of the two vectors, while for randomly sampled negative pairs we decrease it.</p>
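<p>A minimal sketch of this objective, with illustrative toy embeddings and a uniform noise distribution standing in for <span class="math notranslate nohighlight">\(P_n(w)\)</span> (in practice Word2Vec uses a smoothed unigram distribution):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, k = 100, 16, 5                      # vocabulary size, dim, negatives
emb = rng.normal(scale=0.1, size=(V, d))  # toy embedding table

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_loss(center, context, noise_probs):
    negatives = rng.choice(V, size=k, p=noise_probs)    # w_i ~ P_n(w)
    pos = np.log(sigmoid(emb[context] @ emb[center]))   # pull the true pair together
    neg = np.log(sigmoid(-emb[negatives] @ emb[center])).sum()  # push negatives away
    return -(pos + neg)  # negate: training minimizes this loss

uniform = np.full(V, 1.0 / V)
loss = neg_sampling_loss(center=3, context=7, noise_probs=uniform)
```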
<p>This optimization strategy not only speeds up training dramatically, it also established a technical pattern that later recommendation models reuse. When the idea is transferred to recommendation, “words” become “items” and “sentences” become “user behavior sequences”, but the core sequence-modeling idea stays the same.</p>
</section>
</section>
<section id="item2vec">
<h2><span class="section-number">2.2.1.2. </span>Item2Vec: The Most Direct Transfer<a class="headerlink" href="#item2vec" title="Permalink to this heading">¶</a></h2>
<p>Word2Vec's success in natural language processing naturally raises a question: can this sequence-based learning method be applied directly to recommender systems? Item2Vec answers in the affirmative.</p>
<section id="id5">
<h3><span class="section-number">2.2.1.2.1. </span>Mapping Words to Items<a class="headerlink" href="#id5" title="Permalink to this heading">¶</a></h3>
<p>The core insight of Item2Vec <span id="id6">(<a class="reference internal" href="../../chapter_references/references.html#id18" title="Barkan, O., &amp; Koenigstein, N. (2016). Item2vec: neural item embedding for collaborative filtering. 2016 IEEE 26th international workshop on machine learning for signal processing (MLSP) (pp. 1–6).">Barkan and Koenigstein, 2016</a>)</span>
is the structural similarity between user behavior data and text. In text, a sentence is made of words, and word co-occurrence reflects semantic similarity. Analogously, in a recommender system each user's interaction history can be viewed as a “sentence” whose items are the “words”. If two items are frequently interacted with by the same users, a similarity exists between them.</p>
<p>The mapping can be summarized as:</p>
<ul class="simple">
<li><p><strong>word</strong> → <strong>item</strong></p></li>
<li><p><strong>sentence</strong> → <strong>user interaction sequence</strong></p></li>
<li><p><strong>word co-occurrence</strong> → <strong>items interacted with by the same user</strong></p></li>
</ul>
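<p>In code, this mapping is just a group-by: collecting an interaction log per user yields the “sentences” the model trains on. The log below is illustrative toy data:</p>

```python
from collections import defaultdict

interactions = [  # (user_id, item_id) pairs, e.g. from a click log
    ("u1", "i1"), ("u1", "i2"), ("u2", "i2"), ("u2", "i3"), ("u1", "i4"),
]

sentences = defaultdict(list)
for user, item in interactions:
    sentences[user].append(item)   # each user's items form one "sentence"

corpus = list(sentences.values())  # [['i1', 'i2', 'i4'], ['i2', 'i3']]
```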
</section>
<section id="id7">
<h3><span class="section-number">2.2.1.2.2. </span>Model Implementation<a class="headerlink" href="#id7" title="Permalink to this heading">¶</a></h3>
<p>Item2Vec adopts Word2Vec's Skip-Gram architecture directly, but simplifies sequence construction. Given a dataset <span class="math notranslate nohighlight">\(\mathcal{S} = \{s_1, s_2, \ldots, s_n\}\)</span>, where each <span class="math notranslate nohighlight">\(s_i\)</span> contains all items that user <span class="math notranslate nohighlight">\(i\)</span> has interacted with, Item2Vec treats each user's history as a set rather than a sequence, discarding the temporal order of the interactions.</p>
<p>The optimization objective is the same as Word2Vec's:</p>
<div class="math notranslate nohighlight" id="equation-chapter-1-retrieval-2-embedding-1-i2i-2">
<span class="eqno">(2.2.3)<a class="headerlink" href="#equation-chapter-1-retrieval-2-embedding-1-i2i-2" title="Permalink to this equation">¶</a></span>\[\mathcal{L} = \sum_{s \in \mathcal{S}} \sum_{l_{i} \in s} \sum_{-m \leq j \leq m, j \neq 0} \log P(l_{i+j} | l_{i})\]</div>
<p>where <span class="math notranslate nohighlight">\(l_i\)</span> denotes an item, <span class="math notranslate nohighlight">\(m\)</span> is the context window size, and <span class="math notranslate nohighlight">\(P(l_{i+j} | l_{i})\)</span> takes the same softmax form as in Word2Vec.</p>
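<p>The double sum over positions and window offsets simply enumerates (center, context) training pairs inside each window. A sketch, with a hypothetical three-item sequence:</p>

```python
def training_pairs(sequence, m):
    # Enumerate (l_i, l_{i+j}) for every -m <= j <= m, j != 0, inside bounds
    pairs = []
    for i, center in enumerate(sequence):
        for j in range(max(0, i - m), min(len(sequence), i + m + 1)):
            if j != i:
                pairs.append((center, sequence[j]))
    return pairs

pairs = training_pairs(["i1", "i2", "i3"], m=1)
# [('i1', 'i2'), ('i2', 'i1'), ('i2', 'i3'), ('i3', 'i2')]
```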
<p><strong>Core code</strong></p>
<p>Item2Vec can be implemented by directly calling the Word2Vec model of the gensim library <span id="id8">(<a class="reference internal" href="../../chapter_references/references.html#id100" title="Řehůřek, R., &amp; Sojka, P. (2010 , May). Software Framework for Topic Modelling with Large Corpora. Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks (pp. 45–50). Valletta, Malta: ELRA.">Řehůřek and Sojka, 2010</a>)</span>.
The key step is using the user interaction sequences as the training corpus:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="k">def</span><span class="w"> </span><span class="nf">fit</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">train_hist_movie_id_list</span><span class="p">):</span>
    <span class="c1"># train_hist_movie_id_list: list of user interaction sequences;</span>
    <span class="c1"># each element is one user's sequence of item IDs</span>
    <span class="bp">self</span><span class="o">.</span><span class="n">model</span> <span class="o">=</span> <span class="n">Word2Vec</span><span class="p">(</span>
        <span class="n">train_hist_movie_id_list</span><span class="p">,</span>
        <span class="n">vector_size</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">model_config</span><span class="p">[</span><span class="s2">&quot;EmbDim&quot;</span><span class="p">],</span>      <span class="c1"># embedding dimension</span>
        <span class="n">window</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">model_config</span><span class="p">[</span><span class="s2">&quot;Window&quot;</span><span class="p">],</span>           <span class="c1"># context window size</span>
        <span class="n">min_count</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">model_config</span><span class="p">[</span><span class="s2">&quot;MinCount&quot;</span><span class="p">],</span>      <span class="c1"># minimum item frequency</span>
        <span class="n">workers</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">model_config</span><span class="p">[</span><span class="s2">&quot;Workers&quot;</span><span class="p">],</span>         <span class="c1"># number of worker threads</span>
    <span class="p">)</span>
</pre></div>
</div>
<p>Here <code class="docutils literal notranslate"><span class="pre">train_hist_movie_id_list</span></code> is the dataset <span class="math notranslate nohighlight">\(\mathcal{S}\)</span> described above: each user's interaction history is treated as a “sentence” and each item ID as a “word”. After training, every item has a dense vector representation.</p>
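<p>Given those trained vectors, I2I recall itself is a nearest-neighbour lookup in the embedding space. A sketch with cosine similarity over hand-picked toy vectors (in practice the vectors would come from the trained model, e.g. gensim's <code class="docutils literal notranslate"><span class="pre">model.wv</span></code>):</p>

```python
import numpy as np

item_ids = ["i1", "i2", "i3", "i4"]
vecs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [-1.0, 0.0]])
vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)  # unit-normalize

def most_similar(query_idx, topn=2):
    sims = vecs @ vecs[query_idx]             # cosine similarity to every item
    order = np.argsort(-sims)                 # most similar first
    order = order[order != query_idx][:topn]  # drop the query item itself
    return [item_ids[i] for i in order]

neighbours = most_similar(0)  # items closest to "i1"
```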
<p><strong>Training and evaluation</strong></p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">from</span><span class="w"> </span><span class="nn">funrec</span><span class="w"> </span><span class="kn">import</span> <span class="n">run_experiment</span>

<span class="n">run_experiment</span><span class="p">(</span><span class="s1">&#39;item2vec&#39;</span><span class="p">)</span>
</pre></div>
</div>
<div class="output highlight-default notranslate"><div class="highlight"><pre><span></span><span class="o">+---------------+--------------+-----------+----------+----------------+---------------+</span>
<span class="o">|</span>   <span class="n">hit_rate</span><span class="o">@</span><span class="mi">10</span> <span class="o">|</span>   <span class="n">hit_rate</span><span class="o">@</span><span class="mi">5</span> <span class="o">|</span>   <span class="n">ndcg</span><span class="o">@</span><span class="mi">10</span> <span class="o">|</span>   <span class="n">ndcg</span><span class="o">@</span><span class="mi">5</span> <span class="o">|</span>   <span class="n">precision</span><span class="o">@</span><span class="mi">10</span> <span class="o">|</span>   <span class="n">precision</span><span class="o">@</span><span class="mi">5</span> <span class="o">|</span>
<span class="o">+===============+==============+===========+==========+================+===============+</span>
<span class="o">|</span>        <span class="mf">0.0066</span> <span class="o">|</span>       <span class="mf">0.0033</span> <span class="o">|</span>    <span class="mf">0.0025</span> <span class="o">|</span>   <span class="mf">0.0014</span> <span class="o">|</span>         <span class="mf">0.0007</span> <span class="o">|</span>        <span class="mf">0.0007</span> <span class="o">|</span>
<span class="o">+---------------+--------------+-----------+----------+----------------+---------------+</span>
</pre></div>
</div>
</section>
</section>
<section id="eges">
<h2><span class="section-number">2.2.1.3. </span>EGES: Enriching Sequences with Side Information<a class="headerlink" href="#eges" title="Permalink to this heading">¶</a></h2>
<p>Although Item2Vec demonstrated that sequence modeling is viable for recommendation, its simple design brings clear limitations. First, treating a user's interaction history as an unordered set discards temporal information and may lose important behavioral patterns. Second, newly listed items have no interaction history, so Item2Vec cannot produce meaningful vector representations for them.</p>
<p>EGES (Enhanced Graph Embedding with Side information) <span>(Wang <em>et al.</em>, 2018)</span> was proposed precisely to address these core challenges. It improves on plain sequence modeling with two key innovations: building a finer-grained, session-based item graph that better reflects user behavior patterns, and incorporating item side information to address the cold-start problem.</p>
<section id="id9">
<h3><span class="section-number">2.2.1.3.1. </span>Building the Item Graph<a class="headerlink" href="#id9" title="Permalink to this heading">¶</a></h3>
<p>EGES's first innovation extends the notion of an item sequence from a user's full interaction history to finer-grained session-level sequences. Balancing the complexity of user behavior against computational efficiency, the authors use a one-hour time window and build the item graph only from user behaviors that fall inside the same window.</p>
<p>The construction process is illustrated in the figure: when two items appear consecutively in a user's behavior sequence within the same session (time window), a directed edge is added between them, with a weight equal to the frequency of that item-to-item transition across all users' behavior histories. Compared with the traditional approach of treating an entire user history as one sequence, this session-based graph construction captures a user's consecutive interest transitions within a specific period of time more accurately.</p>
<figure class="align-default" id="id17">
<span id="eges-item-graph"></span><a class="reference internal image-reference" href="../../_images/eges_item_graph.png"><img alt="../../_images/eges_item_graph.png" src="../../_images/eges_item_graph.png" style="width: 500px;" /></a>
<figcaption>
<p><span class="caption-number">Fig. 2.2.2 </span><span class="caption-text">Item graph construction</span><a class="headerlink" href="#id17" title="Permalink to this image">¶</a></p>
</figcaption>
</figure>
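<p>The one-hour session window can be sketched as a simple split over a user's time-stamped behaviors. Timestamps (in seconds) and item IDs below are illustrative:</p>

```python
def split_sessions(events, gap=3600):
    # events: list of (timestamp, item_id) for one user, sorted by time;
    # a gap of more than `gap` seconds between behaviours starts a new session
    sessions, current = [], [events[0][1]]
    for (prev_t, _), (t, item) in zip(events, events[1:]):
        if t - prev_t > gap:
            sessions.append(current)  # gap exceeded: close the session
            current = []
        current.append(item)
    sessions.append(current)
    return sessions

events = [(0, "a"), (600, "b"), (700, "c"), (9000, "d"), (9100, "e")]
result = split_sessions(events)  # [['a', 'b', 'c'], ['d', 'e']]
```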
<p>On the constructed item graph, EGES generates training sequences with weighted random walks. Starting from a node, the transition probability is determined by the edge weights:</p>
<div class="math notranslate nohighlight" id="equation-chapter-1-retrieval-2-embedding-1-i2i-3">
<span class="eqno">(2.2.4)<a class="headerlink" href="#equation-chapter-1-retrieval-2-embedding-1-i2i-3" title="Permalink to this equation">¶</a></span>\[\begin{split}P(v_j|v_i) = \begin{cases}
\frac{M_{ij}}{\sum_{j=1}^{|N_+(v_i)|}M_{ij}} &amp; \text{if } v_j \in N_+(v_i) \\
0 &amp; \text{if } e_{ij} \notin E
\end{cases}\end{split}\]</div>
<p>where <span class="math notranslate nohighlight">\(M_{ij}\)</span> is the weight of the edge from node <span class="math notranslate nohighlight">\(v_i\)</span> to node <span class="math notranslate nohighlight">\(v_j\)</span>, and <span class="math notranslate nohighlight">\(N_+(v_i)\)</span> is the set of neighbors of <span class="math notranslate nohighlight">\(v_i\)</span>. Repeating this random walk yields a large number of item sequences for the subsequent embedding learning.</p>
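<p>Putting the two steps together, a minimal sketch builds the weighted edges <span class="math notranslate nohighlight">\(M_{ij}\)</span> from session sequences and then walks the graph with transition probabilities proportional to those weights (the sessions are toy data):</p>

```python
import random
from collections import defaultdict

sessions = [["a", "b", "c"], ["a", "b"], ["b", "c"]]

weights = defaultdict(lambda: defaultdict(int))
for s in sessions:
    for i, j in zip(s, s[1:]):
        weights[i][j] += 1  # M_ij: frequency of the transition i -> j

def random_walk(start, length, rng):
    walk = [start]
    while len(walk) < length and weights[walk[-1]]:
        items, w = zip(*weights[walk[-1]].items())
        walk.append(rng.choices(items, weights=w)[0])  # P(v_j | v_i) ∝ M_ij
    return walk

walk = random_walk("a", 3, random.Random(0))  # e.g. ['a', 'b', 'c']
```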
</section>
<section id="id10">
<h3><span class="section-number">2.2.1.3.2. </span>Incorporating Side Information to Address Sparsity<a class="headerlink" href="#id10" title="Permalink to this heading">¶</a></h3>
<p>With the item graph and random-walk strategy above, we can learn item embeddings with a Word2Vec-style method. However, a purely behavior-based approach faces a key challenge: for items with few user interactions, there is too little co-occurrence information to learn high-quality embeddings.</p>
<p>To address this sparsity problem, the second innovation of EGES is to incorporate item side information (such as category, brand, and price band) to enrich the item representation.</p>
<p>The core idea of GES (Graph Embedding with Side information, the unweighted precursor of EGES) is to average the item's own embedding with the embeddings of its attributes:</p>
<div class="math notranslate nohighlight" id="equation-chapter-1-retrieval-2-embedding-1-i2i-4">
<span class="eqno">(2.2.5)<a class="headerlink" href="#equation-chapter-1-retrieval-2-embedding-1-i2i-4" title="Permalink to this equation">¶</a></span>\[H_v=\frac{1}{n+1} \sum_{s=0}^n{W_v^s}\]</div>
<p>where <span class="math notranslate nohighlight">\(W_v^s\)</span> is the embedding of the <span class="math notranslate nohighlight">\(s\)</span>-th attribute of item <span class="math notranslate nohighlight">\(v\)</span>, and <span class="math notranslate nohighlight">\(W_v^0\)</span> is the embedding of the item ID. While this effectively mitigates the sparsity problem, it has an obvious limitation: it assumes every type of side information contributes equally to the item representation, which is clearly unrealistic.</p>
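<p>As a concrete illustration of equation (2.2.5), GES aggregation is just a row-wise mean over the <span class="math notranslate nohighlight">\(n+1\)</span> embeddings (the vectors below are made up):</p>

```python
import numpy as np

# Hypothetical embeddings for one item: row 0 is the item-ID embedding W_v^0,
# rows 1..n are side-information embeddings (e.g. category, brand).
W_v = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0]])  # n = 2 attributes, so n + 1 = 3 rows

# GES: equal-weight average over the n + 1 embeddings.
H_v = W_v.mean(axis=0)
```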
<p><strong>The core innovation of EGES</strong> lies in recognizing that different types of side information should carry different importance. For a phone, brand may matter more than price; for daily necessities, price may matter more than brand.</p>
<figure class="align-default" id="id18">
<span id="eges-model"></span><a class="reference internal image-reference" href="../../_images/eges_model.png"><img alt="../../_images/eges_model.png" src="../../_images/eges_model.png" style="width: 400px;" /></a>
<figcaption>
<p><span class="caption-number">Fig. 2.2.3 </span><span class="caption-text">EGES model architecture</span><a class="headerlink" href="#id18" title="Permalink to this image">¶</a></p>
</figcaption>
</figure>
<p>For an item <span class="math notranslate nohighlight">\(v\)</span> with <span class="math notranslate nohighlight">\(n\)</span> types of side information, EGES maintains <span class="math notranslate nohighlight">\(n+1\)</span> embeddings: one for the item ID and <span class="math notranslate nohighlight">\(n\)</span> for its attributes. The final item representation is a weighted aggregation:</p>
<div class="math notranslate nohighlight" id="equation-chapter-1-retrieval-2-embedding-1-i2i-5">
<span class="eqno">(2.2.6)<a class="headerlink" href="#equation-chapter-1-retrieval-2-embedding-1-i2i-5" title="Permalink to this equation">¶</a></span>\[H_v = \frac{\sum_{j=0}^n e^{a_v^j} W_v^j}{\sum_{j=0}^n e^{a_v^j}}\]</div>
<p>where <span class="math notranslate nohighlight">\(a_v^j\)</span> is a learnable weight parameter. The elegance of this design is that the softmax over these weights lets the model learn, for each individual item, how important each type of side information is, rather than fixing equal contributions as GES does.</p>
<p><strong>Core code</strong></p>
<p>The heart of EGES is the item-specific attention layer (ItemSpecificAttentionLayer), which learns a set of feature weights for every item:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="k">def</span><span class="w"> </span><span class="nf">call</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">inputs</span><span class="p">,</span> <span class="n">item_indices</span><span class="p">):</span>
<span class="w">    </span><span class="sd">&quot;&quot;&quot;</span>
<span class="sd">    Args:</span>
<span class="sd">        inputs: feature embeddings [batch_size, n+1, emb_dim]</span>
<span class="sd">        item_indices: item indices [batch_size]</span>
<span class="sd">    &quot;&quot;&quot;</span>
    <span class="c1"># Fetch the weight parameters a_v^j for each item</span>
    <span class="n">batch_attention_weights</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">gather</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">attention_weights</span><span class="p">,</span> <span class="n">item_indices</span><span class="p">)</span>

    <span class="c1"># Compute e^(a_v^j)</span>
    <span class="n">exp_attention</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">exp</span><span class="p">(</span><span class="n">batch_attention_weights</span><span class="p">)</span>  <span class="c1"># [batch_size, n+1]</span>

    <span class="c1"># Normalize the weights: e^(a_v^j) / sum(e^(a_v^j))</span>
    <span class="n">attention_sum</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">reduce_sum</span><span class="p">(</span><span class="n">exp_attention</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">keepdims</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
    <span class="n">normalized_attention</span> <span class="o">=</span> <span class="n">exp_attention</span> <span class="o">/</span> <span class="n">attention_sum</span>

    <span class="c1"># Apply the weights to the feature embeddings</span>
    <span class="n">normalized_attention</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">expand_dims</span><span class="p">(</span><span class="n">normalized_attention</span><span class="p">,</span> <span class="n">axis</span><span class="o">=-</span><span class="mi">1</span><span class="p">)</span>
    <span class="n">weighted_embedding</span> <span class="o">=</span> <span class="n">inputs</span> <span class="o">*</span> <span class="n">normalized_attention</span>  <span class="c1"># [batch_size, n+1, emb_dim]</span>

    <span class="c1"># Sum over features to get the final item representation H_v</span>
    <span class="n">output</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">reduce_sum</span><span class="p">(</span><span class="n">weighted_embedding</span><span class="p">,</span> <span class="n">axis</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span>  <span class="c1"># [batch_size, emb_dim]</span>

    <span class="k">return</span> <span class="n">output</span><span class="p">,</span> <span class="n">normalized_attention</span>
</pre></div>
</div>
<p>Here <code class="docutils literal notranslate"><span class="pre">attention_weights</span></code> is a parameter matrix of shape <span class="math notranslate nohighlight">\(|V| \times (n+1)\)</span>, where <span class="math notranslate nohighlight">\(|V|\)</span> is the total number of items and <span class="math notranslate nohighlight">\(n+1\)</span> is the number of features (the item ID plus <span class="math notranslate nohighlight">\(n\)</span> types of side information). For each item, the model learns a dedicated set of weights and automatically discovers which features matter most for that item. This item-specific attention mechanism is the key advantage of EGES over simple average aggregation.</p>
<p><strong>Handling cold-start items</strong>: for items that are newly listed and have no interaction history at all, EGES offers an effective cold-start solution. Because such items lack behavior data, random walks cannot generate training sequences for them, so they have neither a trained ID embedding nor trained attention weights <span class="math notranslate nohighlight">\(a_v^j\)</span>.</p>
<p>In this case the system falls back to a simple but effective mean-pooling strategy: it averages all of the item's side-information embeddings (category, brand, price band, etc.) to form the item representation. Although this cannot reflect the differing importance of attributes, it makes good use of the item's content features and thus supports similarity-based item retrieval (I2I recall).</p>
</section>
<section id="id11">
<h3><span class="section-number">2.2.1.3.3. </span>Training Optimization<a class="headerlink" href="#id11" title="Permalink to this heading">¶</a></h3>
<p>EGES adopts a negative-sampling strategy similar to Word2Vec, with the per-pair loss:</p>
<div class="math notranslate nohighlight" id="equation-chapter-1-retrieval-2-embedding-1-i2i-6">
<span class="eqno">(2.2.7)<a class="headerlink" href="#equation-chapter-1-retrieval-2-embedding-1-i2i-6" title="Permalink to this equation">¶</a></span>\[L(v,u,y) = -[y\log(\sigma(H_v^TZ_u)) + (1-y)\log(1-\sigma(H_v^TZ_u))]\]</div>
<p>where <span class="math notranslate nohighlight">\(y\)</span> is the label (1 for a positive sample, 0 for a negative), <span class="math notranslate nohighlight">\(H_v\)</span> is the representation of item <span class="math notranslate nohighlight">\(v\)</span>, and <span class="math notranslate nohighlight">\(Z_u\)</span> is the representation of the context node <span class="math notranslate nohighlight">\(u\)</span>.</p>
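<p>Equation (2.2.7) is the standard binary cross-entropy over sampled pairs; a minimal scalar sketch (pure Python, for illustration only):</p>

```python
import math

def eges_pair_loss(H_v, Z_u, y):
    """Loss for one (item, context) pair:
    L = -[y*log(sigma(H_v^T Z_u)) + (1-y)*log(1 - sigma(H_v^T Z_u))]."""
    score = sum(h * z for h, z in zip(H_v, Z_u))   # inner product H_v^T Z_u
    sigma = 1.0 / (1.0 + math.exp(-score))          # sigmoid
    return -(y * math.log(sigma) + (1 - y) * math.log(1.0 - sigma))

# The same similar pair is cheap as a positive, expensive as a negative.
pos_loss = eges_pair_loss([1.0, 0.0], [1.0, 0.0], y=1)
neg_loss = eges_pair_loss([1.0, 0.0], [1.0, 0.0], y=0)
```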
<p>In this way, even a freshly listed item with no user interactions can obtain a meaningful vector representation from its attribute information and be included in the recommendation candidate set.</p>
<p>The production deployment of EGES at Taobao delivered notable results: on a large-scale dataset with billions of training samples, it significantly improved recommendation accuracy over earlier methods while effectively alleviating the cold-start problem for new items.</p>
<p><strong>Training and evaluation</strong></p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="n">run_experiment</span><span class="p">(</span><span class="s1">&#39;eges&#39;</span><span class="p">)</span>
</pre></div>
</div>
<div class="output highlight-default notranslate"><div class="highlight"><pre><span></span><span class="o">+---------------+--------------+-----------+----------+----------------+---------------+</span>
<span class="o">|</span>   <span class="n">hit_rate</span><span class="o">@</span><span class="mi">10</span> <span class="o">|</span>   <span class="n">hit_rate</span><span class="o">@</span><span class="mi">5</span> <span class="o">|</span>   <span class="n">ndcg</span><span class="o">@</span><span class="mi">10</span> <span class="o">|</span>   <span class="n">ndcg</span><span class="o">@</span><span class="mi">5</span> <span class="o">|</span>   <span class="n">precision</span><span class="o">@</span><span class="mi">10</span> <span class="o">|</span>   <span class="n">precision</span><span class="o">@</span><span class="mi">5</span> <span class="o">|</span>
<span class="o">+===============+==============+===========+==========+================+===============+</span>
<span class="o">|</span>        <span class="mf">0.0136</span> <span class="o">|</span>       <span class="mf">0.0061</span> <span class="o">|</span>    <span class="mf">0.0064</span> <span class="o">|</span>   <span class="mf">0.0041</span> <span class="o">|</span>         <span class="mf">0.0014</span> <span class="o">|</span>        <span class="mf">0.0012</span> <span class="o">|</span>
<span class="o">+---------------+--------------+-----------+----------+----------------+---------------+</span>
</pre></div>
</div>
</section>
</section>
<section id="airbnb">
<h2><span class="section-number">2.2.1.4. </span>Airbnb: Embedding Business Objectives into Sequences<a class="headerlink" href="#airbnb" title="Permalink to this heading">¶</a></h2>
<p>As the world's largest short-term rental platform, Airbnb faces challenges different from those of traditional e-commerce. Listings are not standardized commodities, booking behavior is far sparser than clicks and views, and geography is a key factor. More importantly, Airbnb needs not mere similarity but recommendations that actually drive final booking conversions.</p>
<section id="id12">
<h3><span class="section-number">2.2.1.4.1. </span>Business-oriented Sequence Construction<a class="headerlink" href="#id12" title="Permalink to this heading">¶</a></h3>
<p>Airbnb redefined the notion of a "sequence"
<span id="id13">(<a class="reference internal" href="../../chapter_references/references.html#id19" title="Grbovic, M., &amp; Cheng, H. (2018). Real-time personalization using embeddings for search ranking at airbnb. Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery &amp; data mining (pp. 311–320).">Grbovic and Cheng, 2018</a>)</span>, adopting a session-based sequence-construction strategy. Specifically:</p>
<p><strong>Session splitting</strong>: instead of naively concatenating all listings a user has ever interacted with, the system builds sequences from click sessions. Whenever the gap between two consecutive clicks exceeds 30 minutes, a new session begins. This time-window design more accurately captures the user's coherent intent within a particular search scenario.</p>
<p><strong>Differentiated behavior weights</strong>: Airbnb introduced an important business insight: user actions differ greatly in signal strength. A final booking carries a much stronger preference signal than a mere click-through, and therefore deserves a higher weight during training.</p>
</section>
<section id="id14">
<h3><span class="section-number">2.2.1.4.2. </span>Global Context Mechanism<a class="headerlink" href="#id14" title="Permalink to this heading">¶</a></h3>
<p>To strengthen the model's learning of final conversions, Airbnb designed a global context mechanism. In the standard Skip-Gram model, only items inside the sliding window count as context, so the local window cannot fully exploit the strong positive signal of a final booking. Airbnb therefore pairs the booked listing with every clicked listing in the sequence as a positive training pair, no matter how far apart they are in the sequence.</p>
<figure class="align-default" id="id19">
<span id="airbnb-global-context"></span><a class="reference internal image-reference" href="../../_images/airbnb_global_context.png"><img alt="../../_images/airbnb_global_context.png" src="../../_images/airbnb_global_context.png" style="width: 500px;" /></a>
<figcaption>
<p><span class="caption-number">Fig. 2.2.4 </span><span class="caption-text">Global context from the booked listing at Airbnb</span><a class="headerlink" href="#id19" title="Permalink to this image">¶</a></p>
</figcaption>
</figure>
<p>For sessions that end in a booking (booked sessions), Airbnb modifies the objective function by adding a global context term:</p>
<div class="math notranslate nohighlight" id="equation-chapter-1-retrieval-2-embedding-1-i2i-7">
<span class="eqno">(2.2.8)<a class="headerlink" href="#equation-chapter-1-retrieval-2-embedding-1-i2i-7" title="Permalink to this equation">¶</a></span>\[\underset{\theta}{\text{argmax}} \sum_{(l,c) \in \mathcal{D}_p} \log \frac{1}{1 + e^{-v_c^T v_l}} + \sum_{(l,c) \in \mathcal{D}_n} \log \frac{1}{1 + e^{v_c^T v_l}} + \log \frac{1}{1 + e^{-v_{l_b}^T v_l}}\]</div>
<p>In this formula, the first two terms are the standard Skip-Gram objective: the first maximizes the similarity of positive pairs <span class="math notranslate nohighlight">\((l,c)\)</span>, where <span class="math notranslate nohighlight">\(l\)</span> is the target listing and <span class="math notranslate nohighlight">\(c\)</span> a context listing inside the sliding window; the second minimizes the similarity of negative pairs. The key innovation is the third term <span class="math notranslate nohighlight">\(\log \frac{1}{1 + e^{-v_{l_b}^T v_l}}\)</span>, where <span class="math notranslate nohighlight">\(l_b\)</span> denotes the listing the user eventually booked in that session.</p>
<p>Through this global context mechanism, the booked listing provides an extra learning signal for every listing in the sequence, allowing the model to capture the key conversion pattern: which kinds of listings ultimately lead to a booking.</p>
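<p>The pair-generation logic implied by (2.2.8) can be sketched as follows: standard sliding-window positives plus one (listing, booked listing) positive pair for every listing in the session, regardless of distance (function and variable names are illustrative):</p>

```python
def training_pairs(session, booked, window=2):
    """Positive (target, context) pairs for one booked session: standard
    sliding-window pairs plus the booked listing as global context for
    every listing in the session, regardless of distance."""
    pairs = set()
    for i, target in enumerate(session):
        lo, hi = max(0, i - window), min(len(session), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.add((target, session[j]))   # local window context
        if target != booked:
            pairs.add((target, booked))           # global context term
    return pairs

pairs = training_pairs(["A", "B", "C", "D", "E"], booked="E", window=1)
# "E" (the booked listing) pairs with "A" even though it is far outside
# the window of size 1.
```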
</section>
<section id="id15">
<h3><span class="section-number">2.2.1.4.3. </span>Market-aware Negative Sampling<a class="headerlink" href="#id15" title="Permalink to this heading">¶</a></h3>
<p>Airbnb's other innovation is an improved negative-sampling strategy. Conventional methods draw negatives at random from the entire inventory, but Airbnb observed that users almost always book within a single market (a city or region). If negatives come from different locations, the model easily latches onto geography as an "easy" feature and ignores the characteristics of the listings themselves.</p>
<p>Airbnb therefore adds a "same-market negative sampling" term, drawing part of the negatives from the same geographic market as the positive:</p>
<div class="math notranslate nohighlight" id="equation-chapter-1-retrieval-2-embedding-1-i2i-8">
<span class="eqno">(2.2.9)<a class="headerlink" href="#equation-chapter-1-retrieval-2-embedding-1-i2i-8" title="Permalink to this equation">¶</a></span>\[\sum_{(l, l_m^-) \in \mathcal{D_m}} \log \frac{1}{1 + e^{v_{l_m^-}^T v_l}}\]</div>
<p>where <span class="math notranslate nohighlight">\(l_m^-\)</span> denotes a negative sample drawn from the same market. This forces the model to learn the fine distinctions among listings within one region, sharpening the recommendations.</p>
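<p>A sketch of the combined sampling scheme: part of each negative batch comes from the positive's own market, the rest from the full inventory (the 50/50 split and all names here are illustrative assumptions, not Airbnb's actual ratios):</p>

```python
import random

def sample_negatives(positive, market_of, all_listings, k,
                     same_market_ratio=0.5, rng=random):
    """Draw k negatives for a positive listing: a fraction comes from the
    same market as the positive, the rest from the whole inventory."""
    market = market_of[positive]
    same_market = [l for l in all_listings
                   if l != positive and market_of[l] == market]
    n_same = int(k * same_market_ratio)
    # Same-market negatives first (capped by availability)...
    negs = rng.sample(same_market, min(n_same, len(same_market)))
    # ...then fill the rest from the global pool.
    pool = [l for l in all_listings if l != positive and l not in negs]
    negs += rng.sample(pool, k - len(negs))
    return negs

market_of = {"A": "paris", "B": "paris", "C": "tokyo", "D": "tokyo"}
negs = sample_negatives("A", market_of, list(market_of), k=2)
# "B" is the only same-market candidate, so it is always included.
```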
</section>
</section>
</section>


        </div>
        <div class="side-doc-outline">
            <div class="side-doc-outline--content"> 
<div class="localtoc">
    <p class="caption">
      <span class="caption-text">Table Of Contents</span>
    </p>
    <ul>
<li><a class="reference internal" href="#">2.2.1. I2I Retrieval</a><ul>
<li><a class="reference internal" href="#word2vec">2.2.1.1. Word2Vec: Theoretical Foundations of Sequence Modeling</a><ul>
<li><a class="reference internal" href="#skip-gram">2.2.1.1.1. The Skip-Gram Model in Detail</a></li>
<li><a class="reference internal" href="#id4">2.2.1.1.2. Negative Sampling</a></li>
</ul>
</li>
<li><a class="reference internal" href="#item2vec">2.2.1.2. Item2Vec: The Most Direct Transfer</a><ul>
<li><a class="reference internal" href="#id5">2.2.1.2.1. From Words to Items</a></li>
<li><a class="reference internal" href="#id7">2.2.1.2.2. Model Implementation</a></li>
</ul>
</li>
<li><a class="reference internal" href="#eges">2.2.1.3. EGES: Enhancing Sequences with Side Information</a><ul>
<li><a class="reference internal" href="#id9">2.2.1.3.1. Building the Item Graph</a></li>
<li><a class="reference internal" href="#id10">2.2.1.3.2. Incorporating Side Information to Address Sparsity</a></li>
<li><a class="reference internal" href="#id11">2.2.1.3.3. Training Optimization</a></li>
</ul>
</li>
<li><a class="reference internal" href="#airbnb">2.2.1.4. Airbnb: Embedding Business Objectives into Sequences</a><ul>
<li><a class="reference internal" href="#id12">2.2.1.4.1. Business-oriented Sequence Construction</a></li>
<li><a class="reference internal" href="#id14">2.2.1.4.2. Global Context Mechanism</a></li>
<li><a class="reference internal" href="#id15">2.2.1.4.3. Market-aware Negative Sampling</a></li>
</ul>
</li>
</ul>
</li>
</ul>

</div>
            </div>
        </div>

      <div class="clearer"></div>
    </div><div class="pagenation">
     <a id="button-prev" href="index.html" class="mdl-button mdl-js-button mdl-js-ripple-effect mdl-button--colored" role="button" accesskey="P">
         <i class="pagenation-arrow-L fas fa-arrow-left fa-lg"></i>
         <div class="pagenation-text">
            <span class="pagenation-direction">Previous</span>
            <div>2.2. Vector-based Retrieval</div>
         </div>
     </a>
     <a id="button-next" href="2.u2i.html" class="mdl-button mdl-js-button mdl-js-ripple-effect mdl-button--colored" role="button" accesskey="N">
         <i class="pagenation-arrow-R fas fa-arrow-right fa-lg"></i>
        <div class="pagenation-text">
            <span class="pagenation-direction">Next</span>
            <div>2.2.2. U2I Retrieval</div>
        </div>
     </a>
  </div>
        
        </main>
    </div>
  </body>
</html>