{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 单词计数及按频度排序，单机算法"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Problem Definition"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- **Find `100 most` common words from input files**\n",
    "  - What if input files are too large to fit in RAM?\n",
    "  - What if input files are too large to fit on one machine?\n",
    "- `$ sort input | uniq -c | sort -nr | head -100`\n",
    "- https://www.cnblogs.com/Solstice/archive/2013/01/13/2858173.html"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 核心问题：从输入文件中找出100个最常见的单词\n",
    "- 扩展问题：\n",
    "  - 输入文件过大无法放入内存\n",
    "  - 输入文件分布在多台机器上\n",
    "- 命令行解决方案：\n",
    "  - `sort input`：对输入进行排序\n",
    "  - `uniq -c`：统计并输出每个单词的出现次数\n",
    "  - `sort -nr`：按数字倒序排序\n",
    "  - `head -100`：取前100个结果"
   ]
  },
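  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since only the top 100 words are needed, a full sort is not strictly necessary. The sketch below (not part of the course code; the `topK` helper is hypothetical) uses `std::partial_sort`, costing roughly `O(n log k)` comparisons instead of `O(n log n)`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "cpp"
    }
   },
   "outputs": [],
   "source": [
    "// Top-K sketch: keep only the k most frequent words.\n",
    "// Hypothetical helper, not from the original course code.\n",
    "#include <algorithm>\n",
    "#include <cstdio>\n",
    "#include <string>\n",
    "#include <unordered_map>\n",
    "#include <utility>\n",
    "#include <vector>\n",
    "\n",
    "std::vector<std::pair<std::string, int>>\n",
    "topK(const std::unordered_map<std::string, int>& counts, size_t k)\n",
    "{\n",
    "  std::vector<std::pair<std::string, int>> freq(counts.begin(), counts.end());\n",
    "  k = std::min(k, freq.size());\n",
    "  // Only the first k positions end up sorted, in descending count order.\n",
    "  std::partial_sort(freq.begin(), freq.begin() + k, freq.end(),\n",
    "                    [](const auto& lhs, const auto& rhs) {\n",
    "                      return lhs.second > rhs.second;\n",
    "                    });\n",
    "  freq.resize(k);\n",
    "  return freq;\n",
    "}\n",
    "\n",
    "int main()\n",
    "{\n",
    "  std::unordered_map<std::string, int> counts{\n",
    "      {\"atom\", 2}, {\"stone\", 1}, {\"clock\", 5}};\n",
    "  for (const auto& p : topK(counts, 2))\n",
    "    std::printf(\"%d\\t%s\\n\", p.second, p.first.c_str());  // 5 clock, 2 atom\n",
    "}"
   ]
  },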
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Fit in RAM"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "cpp"
    }
   },
   "outputs": [],
   "source": [
    "// https://github.com/chenshuo/recipes/blob/master/topk/word_freq.cc\n",
    "\n",
    "// sort word by frequency, in-memory version.\n",
    "#include <algorithm>\n",
    "#include <iostream>\n",
    "#include <unordered_map>\n",
    "#include <vector>\n",
    "\n",
    "typedef std::unordered_map<std::string, int> WordCount;\n",
    "\n",
    "int main()\n",
    "{\n",
    "  WordCount counts;\n",
    "  std::string word;\n",
    "  while (std::cin >> word)\n",
    "  {\n",
    "    counts[word]++;\n",
    "  }\n",
    "\n",
    "  std::vector<std::pair<int, WordCount::const_iterator>> freq;\n",
    "  freq.reserve(counts.size());\n",
    "  for (auto it = counts.cbegin(); it != counts.cend(); ++it)\n",
    "  {\n",
    "    freq.push_back(make_pair(it->second, it));\n",
    "  }\n",
    "\n",
    "  std::sort(freq.begin(), freq.end(), [](const std::pair<int, WordCount::const_iterator>& lhs,  // const auto& lhs in C++14\n",
    "                                         const std::pair<int, WordCount::const_iterator>& rhs) {\n",
    "    return lhs.first > rhs.first;\n",
    "  });\n",
    "  // printf(\"%zd\\n\", sizeof(freq[0]));\n",
    "  for (auto item : freq)\n",
    "  {\n",
    "    std::cout << item.first << '\\t' << item.second->first << '\\n';\n",
    "  }\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 实现步骤：\n",
    "  - 使用unordered_map统计单词出现次数\n",
    "  - 将统计结果转换为`vector<pair<int, iterator>>`\n",
    "  - 按出现次数降序排序\n",
    "  - 输出结果\n",
    "- 技术细节：\n",
    "  - 使用C++14语法（GCC 4.9+支持）\n",
    "  - 排序时使用lambda表达式`[](const auto& lhs, const auto& rhs){return lhs.first > rhs.first;}`\n",
    "  - 注意：比较函数在元素相等时应返回false"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# MapReductions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- **WordCount**\n",
    "  - TokenizerMapper + HashPartitioner\n",
    "    `(atom, stone, atom) => (atom, 1), (stone, 1), (atom, 1)`\n",
    "  - IntSumReducer\n",
    "    `(atom, 1), (stone, 1), (atom, 1) => (atom, 2), (stone, 1)`\n",
    "\n",
    "- **Sort by Count**\n",
    "  - InverseMapper + TotalOrderPartitioner\n",
    "    `(atom, 2), (stone, 1), (clock, 5) => (5, clock), (2, atom), (1, stone)`\n",
    "  - IdentityReducer"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "WordCount\n",
    "\n",
    "- MapReduce方案：\n",
    "  - TokenizerMapper：将单词映射为(word, 1)键值对\n",
    "  - HashPartitioner：按单词哈希值分配到不同分区\n",
    "  - IntSumReducer：对相同单词的计数求和\n",
    "- 优势：适合分布式处理大规模数据"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "SortByCount \n",
    "\n",
    "- MapReduce方案：\n",
    "  - InverseMapper：将(word, count)转换为(count, word)\n",
    "  - TotalOrderPartitioner：按计数全序排序\n",
    "  - IdentityReducer：直接输出结果\n",
    "- 特点：需要良好的shuffle函数，简单哈希可能不够"
   ]
  },
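  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The two jobs above can be simulated in a few lines of single-process C++. This is a sketch of the data flow only (map, partition by hash, reduce); a real Hadoop job uses the Java classes named above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "cpp"
    }
   },
   "outputs": [],
   "source": [
    "// Single-process simulation of the WordCount MapReduce data flow.\n",
    "// Map: emit (word, 1); partition: bucket by hash; reduce: sum per word.\n",
    "// Sketch only -- not Hadoop code.\n",
    "#include <functional>\n",
    "#include <iostream>\n",
    "#include <map>\n",
    "#include <string>\n",
    "#include <utility>\n",
    "#include <vector>\n",
    "\n",
    "int main()\n",
    "{\n",
    "  const std::vector<std::string> input{\"atom\", \"stone\", \"atom\"};\n",
    "  const size_t kPartitions = 2;\n",
    "\n",
    "  // Map + partition: (atom, stone, atom) => (atom,1), (stone,1), (atom,1)\n",
    "  std::vector<std::vector<std::pair<std::string, int>>> partitions(kPartitions);\n",
    "  for (const auto& word : input)\n",
    "  {\n",
    "    size_t p = std::hash<std::string>()(word) % kPartitions;\n",
    "    partitions[p].push_back({word, 1});\n",
    "  }\n",
    "\n",
    "  // Reduce: sum the values of identical keys in each partition.\n",
    "  std::map<std::string, int> result;\n",
    "  for (const auto& part : partitions)\n",
    "    for (const auto& kv : part)\n",
    "      result[kv.first] += kv.second;\n",
    "\n",
    "  for (const auto& kv : result)\n",
    "    std::cout << kv.first << '\\t' << kv.second << '\\n';  // atom 2 / stone 1\n",
    "}"
   ]
  },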
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Fit on one machine"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 4GB RAM, 10GB input files  \n",
    "- Read input and write to 10 shards (buckets), words with same hash go to same shard  \n",
    "  - `hash(word) % bucket_count`  \n",
    "- Sort each shard by word count  \n",
    "- Merge 10 shards to one output file"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- https://github.com/chenshuo/recipes/blob/master/topk/word_freq_shards.cc"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 处理流程：\n",
    "  - 将输入文件按单词哈希值分到10个分片\n",
    "  - 对每个分片单独排序\n",
    "  - 合并10个分片的结果\n",
    "- 性能考虑：\n",
    "  - 4GB内存处理10GB输入文件\n",
    "  - 每个分片约1GB大小"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 学习：大文件统计与排序"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- https://www.cnblogs.com/baiyanhuang/archive/2012/11/11/2764914.html"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 关键点：\n",
    "  - 分而治之的思想\n",
    "  - 10路归并排序使用堆结构优化\n",
    "  - 实际运行时间约20分钟（15GB文件，1G内存虚拟机）\n",
    "- 优化建议：\n",
    "  - 编译时添加-O2优化选项\n",
    "  - 使用更新的代码版本word_freq_shards.cc"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 单机版代码阅读"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 单机版单词频率统计"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 实现原理: 使用内存直接处理全部数据，通过哈希表统计单词出现频率\n",
    "- 核心数据结构:\n",
    "  - `std::unordered_map<std::string, int>`存储单词及其出现次数\n",
    "  - `std::vector<std::pair<int, WordCount::const_iterator>>`用于排序\n",
    "- 处理流程:\n",
    "  - 从标准输入读取单词并更新计数器\n",
    "  - 将哈希表内容转换为可排序的`vector`\n",
    "  - 使用lambda表达式定义排序规则\n",
    "- 特点:\n",
    "  - 完全在内存中操作\n",
    "  - 适合处理中小规模数据\n",
    "  - 时间复杂度主要取决于排序操作`O(nlogn)`"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 硬盘版单词频率统计"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 实现原理: 采用分片(sharding)技术处理大规模数据\n",
    "- 核心改进:\n",
    "  - 引入`boost::noncopyable`保证线程安全\n",
    "  - 使用`boost::ptr_vector`管理分片数据\n",
    "  - 通过`kMaxSize`控制单分片最大尺寸(10MB)\n",
    "- 优化点:\n",
    "  - 支持字符串优化选项(`STD_STRING`或`_sso_string`)\n",
    "  - 避免单机内存限制\n",
    "  - 适合处理超大规模文本数据\n",
    "- 与单机版区别:\n",
    "  - 需要处理磁盘I/O\n",
    "  - 增加了分片管理开销\n",
    "  - 实现复杂度显著提高"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# word_freq_shards.cc"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "cpp"
    }
   },
   "outputs": [],
   "source": [
    "// https://github.com/chenshuo/recipes/blob/master/topk/word_freq_shards.cc\n",
    "\n",
    "/* sort word by frequency, sharding while counting version.\n",
    "\n",
    "  1. read input file, do counting, if counts > 10M keys, write counts to 10 shard files:\n",
    "       word \\t count\n",
    "  2. assume each shard file fits in memory, read each shard file, accumulate counts, and write to 10 count files:\n",
    "       count \\t word\n",
    "  3. merge 10 count files using heap.\n",
    "\n",
    "Limits: each shard must fit in memory.\n",
    "*/\n",
    "#include <boost/noncopyable.hpp>\n",
    "#include <boost/ptr_container/ptr_vector.hpp>\n",
    "\n",
    "#include <fstream>\n",
    "#include <iostream>\n",
    "#include <unordered_map>\n",
    "\n",
    "#ifdef STD_STRING\n",
    "#warning \"STD STRING\"\n",
    "#include <string>\n",
    "using std::string;\n",
    "#else\n",
    "#include <ext/vstring.h>\n",
    "typedef __gnu_cxx::__sso_string string;\n",
    "#endif\n",
    "\n",
    "const size_t kMaxSize = 10 * 1000 * 1000;\n",
    "\n",
    "class Sharder : boost::noncopyable\n",
    "{\n",
    " public:\n",
    "  explicit Sharder(int nbuckets)\n",
    "    : buckets_(nbuckets)\n",
    "  {\n",
    "    for (int i = 0; i < nbuckets; ++i)\n",
    "    {\n",
    "      char buf[256];\n",
    "      snprintf(buf, sizeof buf, \"shard-%05d-of-%05d\", i, nbuckets);\n",
    "      buckets_.push_back(new std::ofstream(buf));\n",
    "    }\n",
    "    assert(buckets_.size() == static_cast<size_t>(nbuckets));\n",
    "  }\n",
    "\n",
    "  void output(const string& word, int64_t count)\n",
    "  {\n",
    "    size_t idx = std::hash<string>()(word) % buckets_.size();\n",
    "    buckets_[idx] << word << '\\t' << count << '\\n';\n",
    "  }\n",
    "\n",
    " protected:\n",
    "  boost::ptr_vector<std::ofstream> buckets_;\n",
    "};\n",
    "\n",
    "void shard(int nbuckets, int argc, char* argv[])\n",
    "{\n",
    "  Sharder sharder(nbuckets);\n",
    "  for (int i = 1; i < argc; ++i)\n",
    "  {\n",
    "    std::cout << \"  processing input file \" << argv[i] << std::endl;\n",
    "    std::unordered_map<string, int64_t> counts;\n",
    "    std::ifstream in(argv[i]);\n",
    "    while (in && !in.eof())\n",
    "    {\n",
    "      counts.clear();\n",
    "      string word;\n",
    "      while (in >> word)\n",
    "      {\n",
    "        counts[word]++;\n",
    "        if (counts.size() > kMaxSize)\n",
    "        {\n",
    "          std::cout << \"    split\" << std::endl;\n",
    "          break;\n",
    "        }\n",
    "      }\n",
    "\n",
    "      for (const auto& kv : counts)\n",
    "      {\n",
    "        sharder.output(kv.first, kv.second);\n",
    "      }\n",
    "    }\n",
    "  }\n",
    "  std::cout << \"shuffling done\" << std::endl;\n",
    "}\n",
    "\n",
    "// ======= sort_shards =======\n",
    "\n",
    "std::unordered_map<string, int64_t> read_shard(int idx, int nbuckets)\n",
    "{\n",
    "  std::unordered_map<string, int64_t> counts;\n",
    "\n",
    "  char buf[256];\n",
    "  snprintf(buf, sizeof buf, \"shard-%05d-of-%05d\", idx, nbuckets);\n",
    "  std::cout << \"  reading \" << buf << std::endl;\n",
    "  {\n",
    "    std::ifstream in(buf);\n",
    "    string line;\n",
    "\n",
    "    while (getline(in, line))\n",
    "    {\n",
    "      size_t tab = line.find('\\t');\n",
    "      if (tab != string::npos)\n",
    "      {\n",
    "        int64_t count = strtol(line.c_str() + tab, NULL, 10);\n",
    "        if (count > 0)\n",
    "        {\n",
    "          counts[line.substr(0, tab)] += count;\n",
    "        }\n",
    "      }\n",
    "    }\n",
    "  }\n",
    "\n",
    "  ::unlink(buf);\n",
    "  return counts;\n",
    "}\n",
    "\n",
    "void sort_shards(const int nbuckets)\n",
    "{\n",
    "  for (int i = 0; i < nbuckets; ++i)\n",
    "  {\n",
    "    // std::cout << \"  sorting \" << std::endl;\n",
    "    std::vector<std::pair<int64_t, string>> counts;\n",
    "    for (const auto& entry : read_shard(i, nbuckets))\n",
    "    {\n",
    "      counts.push_back(make_pair(entry.second, entry.first));\n",
    "    }\n",
    "    std::sort(counts.begin(), counts.end());\n",
    "\n",
    "    char buf[256];\n",
    "    snprintf(buf, sizeof buf, \"count-%05d-of-%05d\", i, nbuckets);\n",
    "    std::ofstream out(buf);\n",
    "    std::cout << \"  writing \" << buf << std::endl;\n",
    "    for (auto it = counts.rbegin(); it != counts.rend(); ++it)\n",
    "    {\n",
    "      out << it->first << '\\t' << it->second << '\\n';\n",
    "    }\n",
    "  }\n",
    "\n",
    "  std::cout << \"reducing done\" << std::endl;\n",
    "}\n",
    "\n",
    "// ======= merge =======\n",
    "\n",
    "class Source  // copyable\n",
    "{\n",
    " public:\n",
    "  explicit Source(std::istream* in)\n",
    "    : in_(in),\n",
    "      count_(0),\n",
    "      word_()\n",
    "  {\n",
    "  }\n",
    "\n",
    "  bool next()\n",
    "  {\n",
    "    string line;\n",
    "    if (getline(*in_, line))\n",
    "    {\n",
    "      size_t tab = line.find('\\t');\n",
    "      if (tab != string::npos)\n",
    "      {\n",
    "        count_ = strtol(line.c_str(), NULL, 10);\n",
    "        if (count_ > 0)\n",
    "        {\n",
    "          word_ = line.substr(tab+1);\n",
    "          return true;\n",
    "        }\n",
    "      }\n",
    "    }\n",
    "    return false;\n",
    "  }\n",
    "\n",
    "  bool operator<(const Source& rhs) const\n",
    "  {\n",
    "    return count_ < rhs.count_;\n",
    "  }\n",
    "\n",
    "  void outputTo(std::ostream& out) const\n",
    "  {\n",
    "    out << count_ << '\\t' << word_ << '\\n';\n",
    "  }\n",
    "\n",
    " private:\n",
    "  std::istream* in_;\n",
    "  int64_t count_;\n",
    "  string word_;\n",
    "};\n",
    "\n",
    "void merge(const int nbuckets)\n",
    "{\n",
    "  boost::ptr_vector<std::ifstream> inputs;\n",
    "  std::vector<Source> keys;\n",
    "\n",
    "  for (int i = 0; i < nbuckets; ++i)\n",
    "  {\n",
    "    char buf[256];\n",
    "    snprintf(buf, sizeof buf, \"count-%05d-of-%05d\", i, nbuckets);\n",
    "    inputs.push_back(new std::ifstream(buf));\n",
    "    Source rec(&inputs.back());\n",
    "    if (rec.next())\n",
    "    {\n",
    "      keys.push_back(rec);\n",
    "    }\n",
    "    ::unlink(buf);\n",
    "  }\n",
    "\n",
    "  std::ofstream out(\"output\");\n",
    "  std::make_heap(keys.begin(), keys.end());\n",
    "  while (!keys.empty())\n",
    "  {\n",
    "    std::pop_heap(keys.begin(), keys.end());\n",
    "    keys.back().outputTo(out);\n",
    "\n",
    "    if (keys.back().next())\n",
    "    {\n",
    "      std::push_heap(keys.begin(), keys.end());\n",
    "    }\n",
    "    else\n",
    "    {\n",
    "      keys.pop_back();\n",
    "    }\n",
    "  }\n",
    "  std::cout << \"merging done\\n\";\n",
    "}\n",
    "\n",
    "int main(int argc, char* argv[])\n",
    "{\n",
    "  int nbuckets = 10;\n",
    "  shard(nbuckets, argc, argv);\n",
    "  sort_shards(nbuckets);\n",
    "  merge(nbuckets);\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 处理流程：分为三个阶段：\n",
    "  - shard阶段：将输入数据分到多个桶（buckets）中\n",
    "  - combine阶段：对每个桶内部进行合并计算\n",
    "  - merge阶段：将所有桶的结果最终合并\n",
    "- 桶数量：默认设置10个桶（`nbuckets=10`）\n",
    "- 文件命名：使用`\"shard-%05d-of-%05d\"`格式，如`shard-00001-of-00010`"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## `Sharder`类：初始化与输出"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 构造函数：\n",
    "  - 创建指定数量（nbuckets）的输出文件流\n",
    "  - 使用`boost::ptr_vector`管理文件流指针\n",
    "- 核心方法：\n",
    "  - 哈希分桶：通过`std::hash()(word) % buckets_.size()$`\n",
    "    计算单词所属桶\n",
    "  - 输出格式：单词+制表符+计数（如\"a\\t100\\n\"）\n",
    "- 文件管理：\n",
    "  - 每个桶对应独立文件\n",
    "  - 使用断言确保桶数量正确（`assert`验证）"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## `shard`函数：文件处理与计数"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 处理逻辑：\n",
    "  - 逐文件读取输入（`argv`参数指定）\n",
    "  - 维护本地计数`unordered_map<string, int64_t>`\n",
    "- 关键优化：\n",
    "  - 内存控制：设置`counts`最大容量为1000万条目\n",
    "  - 批量写入：当`counts`达到上限时批量输出到对应桶文件\n",
    "- 边界处理：\n",
    "  - 文件结束时强制输出剩余计数\n",
    "  - 使用制表符分隔单词与计数（`tab = line.find('\\t')`）"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 优化措施：减少内存使用"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 设计思想：\n",
    "  - 读写平衡：利用CPU/内存速度优势压缩磁盘I/O\n",
    "  - 局部聚合：在读取阶段完成部分聚合计算（如100个`a`记为`a\\t100`）\n",
    "- 实现要点：\n",
    "  - 避免全量内存处理（防止OOM）\n",
    "  - 减少中间文件体积（通过预聚合）\n",
    "  - 分阶段处理保证可扩展性\n",
    "- 性能权衡：\n",
    "  - 单次遍历无法完成全部计算\n",
    "  - 通过内存缓冲（`counts`）减少磁盘读写次数"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## `combine`函数：合并与排序"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 功能作用：将分片文件中重复的单词计数合并，并按出现次数排序输出。例如单词\"a\"在分片中出现3次（100次、150次、50次），合并后会输出`\"a\\t300\"`。\n",
    "- 实现逻辑：\n",
    "  - 循环处理每个分片（执行次数等于`nbuckets`参数）\n",
    "  - 使用`read_shard`读取分片数据，返回已合并的`<单词,出现次数>`对\n",
    "  - 将结果存入`vector`并按`<出现次数,单词>`格式存储\n",
    "  - 使用`std::sort`默认排序（从小到大）\n",
    "  - 通过`reverse_iterator`实现从大到小输出\n",
    "- 设计技巧：\n",
    "  - 利用pair的默认比较规则（先比较第一个元素，相等时比较第二个）\n",
    "  - 通过逆序迭代器避免自定义比较器\n",
    "  - 输出文件命名为`\"count-%05d-of-%05d\"`格式\n",
    "- 注意事项：\n",
    "  - 处理后会删除中间`shard`文件（可通过注释`unlink`调试）\n",
    "  - 最终每个单词在十个输出文件中只出现一次"
   ]
  },
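  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The pair-comparison trick can be demonstrated in isolation with hypothetical values: `std::sort` on `(count, word)` pairs orders by count first (ties broken by word), and reverse iteration yields descending output with no custom comparator:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "cpp"
    }
   },
   "outputs": [],
   "source": [
    "// Sort (count, word) pairs ascending with pair's built-in operator<,\n",
    "// then print in reverse for descending order -- the sort_shards() trick.\n",
    "#include <algorithm>\n",
    "#include <cstdint>\n",
    "#include <iostream>\n",
    "#include <string>\n",
    "#include <utility>\n",
    "#include <vector>\n",
    "\n",
    "int main()\n",
    "{\n",
    "  std::vector<std::pair<int64_t, std::string>> counts{\n",
    "      {2, \"atom\"}, {5, \"clock\"}, {1, \"stone\"}, {2, \"ant\"}};\n",
    "  // pair compares first (count), then second (word) on ties.\n",
    "  std::sort(counts.begin(), counts.end());\n",
    "  for (auto it = counts.rbegin(); it != counts.rend(); ++it)\n",
    "    std::cout << it->first << '\\t' << it->second << '\\n';\n",
    "  // 5 clock / 2 atom / 2 ant / 1 stone\n",
    "}"
   ]
  },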
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## `read_shard`函数：读取分片文件"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 核心功能：读取单个分片文件并合并相同单词的计数\n",
    "- 实现细节：\n",
    "  - 文件格式：每行`\"单词\\t次数\"`（如`\"a\\t100\"`）\n",
    "  - 使用制表符定位分隔点（`line.find('\\t')`）\n",
    "  - 单词提取：`line.substr(0, tab)`\n",
    "  - 次数转换：`strtol(line.c_str()+tab, NULL, 10)`\n",
    "  - 自动累加：`counts[word] += count`\n",
    "- 调试建议：\n",
    "  - 注释`unlink(buf)`可保留中间文件\n",
    "  - 可查看`shard-xxxxx-of-xxxxx`文件验证格式\n",
    "- 特殊处理：\n",
    "  - 仅处理有效计数（`count > 0`）\n",
    "  - 文件读取使用`while(getline)`逐行处理"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## `merge`函数：多路归并"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 传统方法：采用两两归并方式，对于$2^n$﻿个文件需要n次归并操作。这种方法源于磁带存储时代，适合磁带机读写特性但效率较低。\n",
    "- 现代优化：直接使用多路归并算法，通过堆数据结构实现高效合并，避免产生大量中间文件。\n",
    "- 实现原理：\n",
    "  - 使用`boost::ptr_vector<std::ifstream>`管理输入文件流\n",
    "  - 通过`std::vector<Source>`存储待合并数据\n",
    "  - 每次从堆顶取出最大值写入输出文件\n",
    "- 关键步骤：\n",
    "  - 动态生成中间文件名格式：\"count-%05d-of-%05d\"\n",
    "  - 使用`std::make_heap`建立最大堆\n",
    "  - 循环取出堆顶元素直至堆空"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## `Source`类：定义与成员"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 设计约束：必须实现可拷贝(copyable)，因需要存入STL容器参与堆操作。\n",
    "- 核心成员：\n",
    "  - 文件指针：`std::ifstream* in_`，指向输入文件流但不拥有资源\n",
    "  - 计数字段：`int64_t count_`，存储单词出现次数\n",
    "  - 单词内容：`string word_`，存储单词字符串\n",
    "- 关键方法：\n",
    "  - `operator<`：重载比较运算符实现堆排序依据\n",
    "  - `outputTo`：格式化输出到指定流\n",
    "  - `next`：从文件读取下一条记录（代码未完整展示）\n",
    "- 实现细节：\n",
    "  - 使用`strtol`解析计数字段\n",
    "  - 通过`substr`分离计数与单词内容\n",
    "  - 当`count_<=0`时跳过无效记录"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## `Source`类：方法与比较符"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 比较运算符重载：实现$operator<$按单词出现次数`count_`进行比较，用于后续堆排序时确定优先级（`count_ < rhs.count_`）\n",
    "- 输出方法：`outputTo`将当前单词及其出现次数写入输出流，格式为`\"count_\\tword_\\n\"`\n",
    "- 迭代器设计：采用JAVA风格迭代器，`next()`方法合并了`hasNext`功能\n",
    "  - 返回布尔值表示是否还有数据\n",
    "  - 内部读取并解析`count`文件（数字在前，单词在后）\n",
    "  - 通过制表符定位分隔，`strtol`转换数字，`substr`提取单词\n",
    "  - 包含校验逻辑（$count_ > 0$才处理）"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## `merge`函数：构建堆与处理 "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 初始化阶段：\n",
    "  - 使用`boost::ptr_vector`管理文件流指针（C++11可用`vector`替代）\n",
    "  - 巧妙利用`unlink`删除文件链接但不立即释放空间（Linux文件引用计数机制）\n",
    "- 堆构建：\n",
    "  - 为每个`bucket`创建`Source`对象\n",
    "  - 预读首行数据填充`keys`向量\n",
    "  - `make_heap`在`O(n)`时间内建堆\n",
    "- 多路归并核心：\n",
    "  - 循环处理直到堆为空\n",
    "  - `pop_heap`将最大值移到最后位置（堆排序特性）\n",
    "  - 输出最大值后尝试读取下一行\n",
    "    - 成功：`push_heap`维持堆结构（$O(\\log n)$）\n",
    "    - 失败：`pop_back`缩小堆规模"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 堆操作与多路归并的复杂度"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 时间复杂度：\n",
    "  - 整体为$O(N\\log M)$，其中`N`为总行数，`M`为`bucket`数\n",
    "  - 实际应用中`M`为常数（如`10`），可视为线性复杂度\n",
    "- 算法优势：\n",
    "  - 仅需维护`M`个元素在内存中\n",
    "  - 适用于大规模数据（数百/千个分片）\n",
    "- 实现细节：\n",
    "  - 比较运算符设计决定排序方向（最小堆/最大堆）\n",
    "  - 堆操作每次调整仅影响$O(\\log M)$个元素"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 内存消耗分析"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 分阶段内存使用：\n",
    "  - shard阶段：严格受限（约1000万单词上限）\n",
    "  - combine阶段：依赖单个分片大小（可能成为瓶颈）\n",
    "  - merge阶段：最优（仅需$M+1$行数据内存）\n",
    "- 优化建议：\n",
    "  - 增大分片数量降低combine内存压力\n",
    "  - 对超大分片可采用外部排序\n",
    "- 系统知识：\n",
    "  - 文件删除与空间释放关系（`rm`实际执行`unlink`）\n",
    "  - 通过`/proc`目录处理被进程占用的已删除文件"
   ]
  },
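  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `unlink` trick rests on POSIX semantics: `unlink` removes only the directory entry, and the data blocks are reclaimed when the last open descriptor is closed. A minimal standalone demonstration (the file name here is arbitrary):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "cpp"
    }
   },
   "outputs": [],
   "source": [
    "// POSIX demo: a file stays readable after unlink() as long as a stream\n",
    "// keeps it open; the space is reclaimed only when the stream closes.\n",
    "#include <cassert>\n",
    "#include <cstdio>\n",
    "#include <fstream>\n",
    "#include <string>\n",
    "#include <unistd.h>\n",
    "\n",
    "int main()\n",
    "{\n",
    "  const char* name = \"unlink-demo.txt\";  // arbitrary scratch file\n",
    "  { std::ofstream out(name); out << \"hello\\n\"; }\n",
    "\n",
    "  std::ifstream in(name);    // open first ...\n",
    "  int ret = ::unlink(name);  // ... then drop the directory entry\n",
    "  assert(ret == 0);\n",
    "\n",
    "  std::string line;\n",
    "  std::getline(in, line);    // still readable via the open stream\n",
    "  std::printf(\"read after unlink: %s\\n\", line.c_str());\n",
    "}"
   ]
  },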
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 学习：大文件统计与排序 2"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "优化1：用`char[]`表示文件\n",
    "\n",
    "- 优化尝试：将文件整个读入内存，用`char[]`表示，并用\"指针+长度\"表示`query`（`StringPiece`实现）\n",
    "- 效果评估：\n",
    "  - 确实能节约内存\n",
    "  - 但整体速度反而下降\n",
    "  - 最终放弃该优化方案\n",
    "- 类似方案：使用`mmap`也会出现同样问题（节约内存但速度变慢）"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "优化2：读内存时把换行换成零\n",
    "\n",
    "- 优化思路：在读入内存时将`'\\n'`替换为`'\\0'`，用单个指针表示`query`\n",
    "- 效果评估：\n",
    "  - 速度下降更明显\n",
    "  - 虽然内存更节省但性能更差\n",
    "- 经验总结：\n",
    "  - 不能想当然地进行优化\n",
    "  - 需要实际测试验证效果\n",
    "  - 效果不好时应及时放弃"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 用一台4GiB内存的机器对磁盘上的单个100GB文件排序 (12.8.3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 标准解法：\n",
    "  - 先分块排序\n",
    "  - 然后多路归并成输出文件\n",
    "- 性能对比：\n",
    "  - 传统二路归并需要多次磁盘读写\n",
    "  - 多路归并可显著减少磁盘I/O"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 分块排序的流水线设计"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 分块大小选择：\n",
    "  - 假设磁盘顺序读写速度100MB/s\n",
    "  - 4GB内存选择1GB分块\n",
    "  - 读写一个分块耗时约10秒\n",
    "- 处理流程：\n",
    "  - 读入分块\n",
    "  - 内存排序\n",
    "  - 写出分块"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# N-way merge using heap"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 实现步骤：\n",
    "  - 初始化：`make_heap`建立最小堆（`min-heap`）\n",
    "  - 循环处理：\n",
    "    - `pop_heap`取出堆顶元素（当前最小）\n",
    "    - 写入输出文件\n",
    "    - 从对应输入文件读取下一条记录\n",
    "    - `push_heap`将新记录加入堆\n",
    "  - 终止条件：堆为空时完成归并\n",
    "- 优势：\n",
    "  - 堆大小动态变化（输入文件处理完时堆减小）\n",
    "  - 相比两两归并显著减少磁盘I/O\n",
    "- 代码实现：\n",
    "  - 使用`std::make_heap`、`std::push_heap`、`std::pop_heap`\n",
    "  - 每个堆元素为`std::pair<Record, FILE*>`"
   ]
  },
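  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A self-contained sketch of the N-way merge, merging in-memory sorted runs instead of files; the small `Source` struct here stands in for the slide's `std::pair<Record, FILE*>`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "cpp"
    }
   },
   "outputs": [],
   "source": [
    "// N-way merge of sorted runs using std::make_heap/pop_heap/push_heap.\n",
    "// Runs are in-memory vectors here; in the real program each run is a file.\n",
    "#include <algorithm>\n",
    "#include <iostream>\n",
    "#include <vector>\n",
    "\n",
    "struct Source\n",
    "{\n",
    "  const std::vector<int>* run;\n",
    "  size_t pos;\n",
    "  // std::*_heap build a max-heap, so invert the comparison to get\n",
    "  // the smallest current element on top (a min-heap).\n",
    "  bool operator<(const Source& rhs) const\n",
    "  {\n",
    "    return (*run)[pos] > (*rhs.run)[rhs.pos];\n",
    "  }\n",
    "};\n",
    "\n",
    "int main()\n",
    "{\n",
    "  std::vector<std::vector<int>> runs{{1, 4, 9}, {2, 3}, {5, 6}};\n",
    "  std::vector<Source> heap;\n",
    "  for (const auto& r : runs)\n",
    "    if (!r.empty())\n",
    "      heap.push_back({&r, 0});\n",
    "  std::make_heap(heap.begin(), heap.end());\n",
    "\n",
    "  while (!heap.empty())\n",
    "  {\n",
    "    std::pop_heap(heap.begin(), heap.end());  // smallest moves to back\n",
    "    Source& s = heap.back();\n",
    "    std::cout << (*s.run)[s.pos] << ' ';\n",
    "    if (++s.pos < s.run->size())\n",
    "      std::push_heap(heap.begin(), heap.end());  // refill from same run\n",
    "    else\n",
    "      heap.pop_back();  // run exhausted; heap shrinks\n",
    "  }\n",
    "  std::cout << '\\n';  // 1 2 3 4 5 6 9\n",
    "}"
   ]
  },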
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 多机单词计数算法与代码"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Multiple machines"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- Input files are stored on multiple machines  \n",
    "- Shuffle input words to shards by their hash code  \n",
    "  - Each shard fit on one machine’s RAM  \n",
    "- Sort words by count on each machine  \n",
    "- Merge top K words  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![11.1](./images/11.1.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 处理流程：\n",
    "  - 输入分布：输入文件存储在多个机器上\n",
    "  - 哈希分片：通过单词哈希值将数据分配到不同机器（shard）\n",
    "  - 本地排序：每个分片在单机内存中按词频排序\n",
    "  - 合并结果：使用堆排序合并各机器的top K结果\n",
    "- 类比案例：统计全国高频姓名时，各省先统计本地数据，再按姓氏合并统计"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Code locations\n",
    "\n",
    "- [`muduo/example/wordcount`  ](https://github.com/chenshuo/muduo/tree/master/examples/wordcount)\n",
    "- [`recipes/topk` ](https://github.com/chenshuo/recipes/tree/master/topk)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- iostream over network is usually a bad idea"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## wordcount"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### hash.h"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 实现要点：\n",
    "  - 使用`boost`库的`hash_range`生成字符串哈希值\n",
    "  - 定义`WordCountMap`类型为`unordered_map<string, int64_t>`\n",
    "  - 特殊处理：为兼容旧标准未使用C++11特性"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### hasher.cc"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 核心组件：\n",
    "  - 连接管理：`connectAll()`同步建立所有分片连接\n",
    "  - 流量控制：采用64KB批量发送和拥塞检测机制\n",
    "  - 缓冲区设计：本地累积1000万条记录后触发发送\n",
    "- 关键技术：\n",
    "  - 水位控制：通过高/低水位回调实现网络拥塞检测\n",
    "  - 条件变量：使用`MutexLock`和`Condition`实现跨线程协调\n",
    "  - 性能优化：消息格式为`\"单词\\t次数\\r\\n\"`的紧凑二进制格式\n",
    "- 测试工具：\n",
    "  - 可调节接收速率（默认1MB/s）\n",
    "  - 每接收100KB数据sleep 100ms模拟网络延迟\n",
    "  - 支持动态计算实际吞吐量"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### receiver.cc"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 核心机制：\n",
    "  - 连接计数：通过`senders_`变量跟踪发送端数量\n",
    "  - 消息解析：按`\"\\t\"`分隔单词和计数，`\"\\r\\n\"`分割记录\n",
    "  - 内存管理：使用WordCountMap实时聚合词频\n",
    "- 输出特性：\n",
    "  - 所有发送端断开后自动写入`shard`文件\n",
    "  - 输出格式为`\"单词\\t次数\\n\"`的纯文本\n",
    "  - 注意事项：需为每个`receiver`创建独立工作目录避免文件冲突"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## topk"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### sender.cc"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 功能定位：sender是只发送不接收的服务，当连接建立时会将整个文件内容发送给对方，不处理任何请求。\n",
    "- 数据预处理：\n",
    "  - 使用全局变量`g_wordCounts`存储单词和出现次数的键值对\n",
    "  - 在`read()`函数中完成数据加载和排序，排序采用降序方式`std::greater`\n",
    "- 发送机制：\n",
    "  - 采用迭代器记录发送进度，将当前迭代器保存在TCP连接的`context`中\n",
    "  - 每次构造64KB大小的`buffer`进行批量发送（通过`fillBuffer`函数实现）\n",
    "  - 在`onWriteComplete`回调中继续发送剩余数据，直到全部发送完成关闭连接\n",
    "- 内存限制解决方案：\n",
    "  - 可通过外部排序将数据先存入磁盘文件\n",
    "  - 发送时改为基于文件指针的发送方式（参考书中71-72节文件发送示例）\n",
    "- 关键实现：\n",
    "  - `fillBuffer`函数：从当前迭代器开始填充`buffer`，格式为\"次数\\t单词\\n\"\n",
    "  - `send`函数：发送`buffer`并更新`context`中的迭代器位置\n",
    "  - 连接建立时从`g_wordCounts.begin()`开始发送"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### merger.cc"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 设计特点：\n",
    "  - 使用阻塞IO和`boost::asio`实现\n",
    "  - 采用堆结构进行多路归并（与本地文件版算法相同）\n",
    "  - 只输出top K结果，避免单机磁盘容量问题\n",
    "- 网络适配：\n",
    "  - 将文件输入流替换为`boost::asio::ip::tcp::iostream`\n",
    "  - 保持`Source`类不变，通过`iostream`适配网络输入\n",
    "- 工作流程：\n",
    "  - 建立与各`sender`的连接\n",
    "  - 初始化堆结构\n",
    "  - 循环取出堆顶元素写入输出文件\n",
    "  - 补充新元素到堆中直至处理完top K\n",
    "- 注意事项：\n",
    "  - 网络`iostream`通常不是好主意，仅适用于简单场景\n",
    "  - 错误处理困难（网络错误与文件错误差异大）\n",
    "  - 本例适用是因为merger的读操作是完全主动的"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Distributed Sorting"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- http://sortbenchmark.org/\n",
    "  - http://sortbenchmark.org/2011_06_tritonsort.pdf\n",
    "- CloudRAMSort\n",
    "  - http://pcl.intel-research.net/publications/CloudRAMsort-2012.pdf"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 与top K的区别：\n",
    "  - 分布式排序需要完整排序所有数据\n",
    "  - 采用分区(partition)替代归并(merge)阶段\n",
    "- 核心算法：\n",
    "  - 采样排序：通过数据采样确定分区点(pivot)\n",
    "  - 精确直方图：避免采样误差（参考CloudRAMSort论文）\n",
    "- 实现要点：\n",
    "  - 找出n-1个分区点将数据划分为n个区间\n",
    "  - 确保各区间元素数量均衡\n",
    "  - 各节点独立排序分配到的数据区间\n",
    "- 行业实践：\n",
    "  - 参考sortbenchmark.org年度竞赛\n",
    "  - 2011年TritonSort方案采用采样分区算法\n",
    "  - 国内BAT近年多次获得冠军\n",
    "- 学习建议：\n",
    "  - 优先阅读2011年TritonSort论文了解基础思路\n",
    "  - 深入研究2012年CloudRAMSort论文掌握精确方法\n",
    "  - 关注sortbenchmark.org最新竞赛结果"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Revive 4.4BSD TCP/IP stack"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- [tcpipv2.org](http://tcpipv2.org) (Working in progress)  \n",
    "- Unofficial companion site of *TCP/IP Illustrated, Volume 2: The Implementation*  \n",
    "  - Dedicated to late **W. Richard Stevens**  \n",
    "- [https://github.com/chenshuo/4.4BSD-Lite2.git](https://github.com/chenshuo/4.4BSD-Lite2.git)  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 必读书籍: 网络编程必读Richard Stevens的三套系列著作：\n",
    "  - UNIX环境高级编程(APUE)\n",
    "  - UNIX网络编程(分第一卷和第二卷)\n",
    "  - TCP/IP详解(共三卷，第一卷讲协议，第二卷讲4.4BSD协议栈源码解析)\n",
    "- 版本差异: 4.4BSD协议栈是1993-1994年的实现，距今已有20多年历史"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 项目背景: 为解决阅读TCP/IP详解第二卷时源码无法运行的问题\n",
    "- 网站定位: TCP/IPv2.org是非官方配套网站(v2表示第二卷而非版本2)\n",
    "- 代码仓库: https://github.com/chenshuo/4.4BSD-Lite2.git\n",
    "- 项目意义: 献给已故的TCP/IP专家W. Richard Stevens"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# But 4.4BSD is too old, how about Linux?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- User Mode Linux  \n",
    "  - `make ARCH=um`  \n",
    "- Linux kernel library project (linux-lkl)  \n",
    "  - [https://github.com/lkl/linux](https://github.com/lkl/linux)  \n",
    "- NUSE: Network stack in userspace  \n",
    "  - [http://libos-nuse.github.io/](http://libos-nuse.github.io/)  \n",
    "- FreeBSD 9.1 TCP/IP stack in user space (libuinet)  \n",
    "  - [https://github.com/pkelsey/libuinet](https://github.com/pkelsey/libuinet)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. 4.4BSD的局限性\n",
    "- 协议陈旧: 缺少近20年TCP协议改进，如2016年Google提出的BBR拥塞控制算法\n",
    "\n",
    "2. 学习经典与学习实用的选择\n",
    "- 经典学习: 4.4BSD适合学习基本原理\n",
    "- 实用学习: Linux内核更适合实际应用但学习难度更大\n",
    "\n",
    "3. Linux内核调试的复杂性\n",
    "- 定时器问题: 调试时TCP定时器可能干扰代码执行流程\n",
    "- 理想方案: 将网络协议栈抽离为单线程程序便于调试\n",
    "\n",
    "4. 用户模式Linux与多线程问题\n",
    "- User Mode Linux: 可直接编译内核网络部分到用户态\n",
    "- 调试问题: 信号处理会干扰GDB调试过程\n",
    "\n",
    "5. Linux内核库项目与多线程问题\n",
    "- Linux内核库项目: 将内核作为库使用(基于4.7版本)\n",
    "- 现存问题: 仍使用多线程架构，不便于单步调试\n",
    "\n",
    "6. NUSE与用户空间网络实现\n",
    "- NUSE项目: 将Linux网络栈移植到用户空间\n",
    "- 项目状态: 需要进一步调研其适用性\n",
    "\n",
    "7. FreeBSD 9.1与TCP快速打开\n",
    "- libuinet项目: FreeBSD 9.1协议栈的用户空间实现\n",
    "- 功能缺失: 缺少TCP快速打开等新特性\n",
    "\n",
    "8. 用户态调试网络库的思路\n",
    "- 核心思想: 将网络协议栈作为用户态库调用\n",
    "- 优势: 避免内核环境限制，可自由添加调试信息\n",
    "- 实现路径: 基于现有项目进行简化改造"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Conclusions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- Network programming is easy to learn but difficult to master  \n",
    "  - Follow good examples  \n",
    "- Network programming is a tool, not the purpose  \n",
    "- Most people won’t deal with Sockets API, but they need to understand network programming / TCP  \n",
    "\n",
    "References:  \n",
    "- https://chenshuo-public.s3.amazonaws.com/pdf/appendix.pdf\n",
    "- [http://chenshuo.com/book/#sec2](http://chenshuo.com/book/#sec2)  \n",
    "- [https://chenshuo-public.s3.amazonaws.com/pdf/allinone.pdf](https://chenshuo-public.s3.amazonaws.com/pdf/allinone.pdf)  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- **易学难精**  \n",
    "  网络编程的基础函数大约 10 个 （`socket`, `connect`, `close`, `bind`, `listen`, `accept`, `read`, `write`, `poll/epoll`） ,一个周末可以入门，但难以精通  \n",
    "\n",
    "- **学习方法**  \n",
    "  - 重点学习优秀案例（课程精选约 11–12 个典型案例）  \n",
    "  - 业务逻辑与网络代码区分清晰（网络部分通常仅占几百至一两千行）  \n",
    "\n",
    "- **定位认知**  \n",
    "  - 网络编程是工具而非目的（例如广告架构常需处理网络请求）  \n",
    "  - 多数开发者不需直接操作 **Sockets API**，但需要理解 TCP 协议特性  \n",
    "\n",
    "- **TCP 关键特性**  \n",
    "  - 连接建立可靠，断开无疑义（会在后续详细解释）  \n",
    "  - 需正确理解 TCP 提供的服务保证  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 网络编程学习经验"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- **推荐资料**  \n",
    "  - 作者在 CSDN 高阅读量文章《谈谈网络编程学习经验》  \n",
    "    （收录于 *《Linux 多线程服务端编程》* 附录 A）  \n",
    "  - *《TCP/IP Illustrated vol.2》* 与 *《Linux 多线程服务端编程》*  \n",
    "\n",
    "- **开源实践**  \n",
    "  - 功能按需实现原则（如 `disableReading` 功能在 2014 年由需求驱动添加）  \n",
    "  - 代理服务器开发需 `stop/startRead` 控制流（2015 年 11 月通过 PR 实现）  \n",
    "\n",
    "- **开发建议**  \n",
    "  - 避免业务代码混杂 `read/write` 调用  \n",
    "  - 应采用消息收发或 RPC 调用等高层抽象  "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 课程评价"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- 课程特点：\n",
    "  - 原计划半年完成，实际历时两年（卡点在代理服务器实现）\n",
    "  - 延迟开发带来意外收益（可引用2016年Google论文内容）\n",
    "- 技术交流：\n",
    "  - 推荐邮件沟通（giantchen@gmail.com）\n",
    "  - 慎用公开讨论（易偏离技术主题）\n",
    "  - Issue仅用于代码缺陷报告\n",
    "- 学习建议：\n",
    "  - 关注正确抽象层级\n",
    "  - 理解TCP协议本质而非机械调用API"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "C++17",
   "language": "C++17",
   "name": "xcpp17"
  },
  "language_info": {
   "name": "C++17"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
