{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Agenda"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- Library or service\n",
    "- Concurrent models\n",
    "- Built-in inspection for QPS and latency\n",
    "- Load testing, latency vs. QPS\n",
     "- Load balancing of CPU-bound stateless services\n",
    "- FastCGI\n",
    "- Heartbeat, failover, upgrading"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Line protocol"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![7.1](./images/7.1.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Sudoku is a logic-based number-placement puzzle. The basic rules:\n",
     "\n",
     "- Board: a 9×9 grid, divided into nine 3×3 boxes.\n",
     "- Goal: fill every empty cell with a digit from 1 to 9.\n",
     "- Constraints:\n",
     "  - Each row must contain 1–9 with no repeats.\n",
     "  - Each column must contain 1–9 with no repeats.\n",
     "  - Each 3×3 box must contain 1–9 with no repeats.\n",
     "\n",
     "What makes Sudoku distinctive: it requires only logical deduction, with no guessing or arithmetic (addition, subtraction, etc.). Starting from the given clues, the player derives the unique solution step by step."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Running the Sudoku programs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- https://github.com/chenshuo/muduo/tree/master/examples/sudoku\n",
    "- https://github.com/chenshuo/recipes/tree/master/sudoku"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## sudoku_server_basic / basic version"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```shell\n",
    "bin/sudoku_server_basic\n",
    "```\n",
    "\n",
    "```shell\n",
    "cd recipes/sudoku\n",
     "cat test1 # input test data\n",
    "```\n",
    "\n",
    "```shell\n",
    "telnet localhost 9981\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Sudoku solver as a service"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "- Library or Service\n",
     "  - Crosses a process boundary, with some overhead of course\n",
     "  - Cross-language: frontends in other languages need not reimplement the library\n",
     "  - Independent development and release cycle, bug fixes; the protocol stays stable\n",
     "  - Managed service: run by a dedicated team, or shipped as a binary to users"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "- CPU-bound stateless service\n",
     "  - One representative service type\n",
     "  - Probably the easiest type for load balancing and provisioning\n",
     "    - CPU is the dominant resource: how many cores are needed to serve the traffic"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Concurrent Models"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "- 5. Single event loop, `server_basic.cc`: single-threaded event loop; concurrency without parallelism, so multiple processes are needed to use multiple cores (e.g. 8 processes on an 8-core machine)\n",
     "- 8. Single event loop with thread pool for computing, `server_threadpool.cc`: one event-loop thread for IO plus a thread pool for computation; the pool size can be configured to use multiple cores (e.g. 4 pool threads on a 4-core machine)\n",
     "- 9. Multiple event loops, `server_multiloop.cc`: each client connection is pinned to one event-loop thread, so all requests on a connection are handled by the same thread; thread load can be unbalanced (e.g. 4 threads serving 5 connections necessarily overloads one thread)\n",
     "- 11. Multiple event loops with thread pool for computing, `server_hybrid.cc`: multiple event-loop threads handle network IO in parallel, with a shared thread pool for computation\n",
     "- 8/9/11 utilize multiple cores, 8/11 even for a single client connection: multi-core parallelism within one process; a flood of requests on one connection can saturate the pool (e.g. 1000 requests fill 4 threads); model 9 can share threads unevenly (e.g. two connections pinned to one thread each get \"half a core\")\n",
     "- Different behavior when overloaded\n",
     "  - Memory usage shoots up: with a sane thread count, CPU usage has a hard ceiling (e.g. 12 threads on a 16-core machine top out at 75%); the task queue balloons when IO threads outpace compute threads; pending output responses further increase memory pressure\n",
     "  - How to monitor and protect? Monitor key metrics such as memory usage and queue depth in real time; implement load shedding (e.g. rejecting new requests) to avoid collapse; `server_hybrid.cc` shows a complete protection scheme\n",
     "\n",
     "> Book, section 6.6"
   ]
  },
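  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The queue-depth guard described above can be modeled in a few lines (a minimal illustration; `BoundedQueue`, `kMax_`, and the reject path are hypothetical names for this sketch, not muduo's actual API):\n",
    "\n",
    "```cpp\n",
    "#include <cassert>\n",
    "#include <cstddef>\n",
    "#include <queue>\n",
    "\n",
    "// Minimal model of the overload guard: accept work while the\n",
    "// pending-task queue is below a threshold, otherwise shed load.\n",
    "class BoundedQueue {\n",
    " public:\n",
    "  explicit BoundedQueue(std::size_t kMax) : kMax_(kMax) {}\n",
    "\n",
    "  // Returns true if the task was queued, false if it was rejected.\n",
    "  bool tryPush(int task) {\n",
    "    if (tasks_.size() >= kMax_) {\n",
    "      ++dropped_;            // a real server would answer ServerTooBusy here\n",
    "      return false;\n",
    "    }\n",
    "    tasks_.push(task);\n",
    "    return true;\n",
    "  }\n",
    "\n",
    "  std::size_t dropped() const { return dropped_; }\n",
    "  std::size_t size() const { return tasks_.size(); }\n",
    "\n",
    " private:\n",
    "  std::size_t kMax_;\n",
    "  std::size_t dropped_ = 0;\n",
    "  std::queue<int> tasks_;\n",
    "};\n",
    "\n",
    "int main() {\n",
    "  BoundedQueue q(3);         // tiny threshold for the demo\n",
    "  for (int i = 0; i < 5; ++i) q.tryPush(i);\n",
    "  assert(q.size() == 3);     // queue capped at the threshold\n",
    "  assert(q.dropped() == 2);  // excess requests shed, not buffered\n",
    "  return 0;\n",
    "}\n",
    "```"
   ]
  },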
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Load testing"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "- Local test, `examples/sudoku/batch.cc`: establishes a performance baseline by calling the Sudoku solver directly (no network)\n",
     "- Batch client, `examples/sudoku/batch.cc`: sends all requests at once and measures end-to-end processing time; quantifies network overhead (the delta over the local test)\n",
     "- Load test for maximum capacity, `examples/sudoku/pipeline.cc`: measures maximum server capacity; sends the next request upon receiving a response; pipelining fills the gaps in network transmission (e.g. at RTT = 100ms a pipeline of 10+ is needed to keep the CPU busy); in practice, cross-region VPS tests need the pipeline depth tuned to the network latency\n",
     "  - Number of connections\n",
     "  - Number of in-flight requests, a.k.a. pipeline depth\n",
     "- Stress testing, `recipes/tpc/sudoku_stress.cc`: sends an unbounded pipeline continuously; send-only mode probes the server's breaking point; compares crash thresholds across concurrency models; validates the memory-management mechanisms\n",
     "  - Free running, i.e. pipelines = inf.\n",
     "- Performance testing, `examples/sudoku/loadtest.cc`: latency distribution (P50/P99, etc.) and the QPS-vs-latency trade-off; timed request pulses (e.g. 100 ticks per second × 10 requests each = 1000 RPS); a circular buffer records live metrics"
   ]
  },
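  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The timed-pulse pacing used by `loadtest.cc` (ticks per second × batch size) is easy to sanity-check; a standalone sketch, with the numbers taken from the bullet above:\n",
    "\n",
    "```cpp\n",
    "#include <cassert>\n",
    "\n",
    "// loadtest-style pacing: a timer fires ticksPerSecond times per\n",
    "// second and each tick sends batch requests back to back.\n",
    "constexpr int requestsPerSecond(int ticksPerSecond, int batch) {\n",
    "  return ticksPerSecond * batch;\n",
    "}\n",
    "\n",
    "// Microseconds between requests if the same rate were paced uniformly.\n",
    "constexpr int uniformSpacingUs(int rps) { return 1000000 / rps; }\n",
    "\n",
    "int main() {\n",
    "  // 100 ticks/s x 10 requests per tick = 1000 RPS, as in the notes.\n",
    "  static_assert(requestsPerSecond(100, 10) == 1000);\n",
    "  // Paced uniformly, 1000 RPS would mean one request every 1000us;\n",
    "  // the burst schedule instead sends 10 at once every 10ms, which is\n",
    "  // why its measured latency distribution looks worse.\n",
    "  static_assert(uniformSpacingUs(1000) == 1000);\n",
    "  return 0;\n",
    "}\n",
    "```"
   ]
  },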
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# fastcgi"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "> Uses HTTP/1.1; chunked transfer encoding; persistent (keep-alive) connections"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- https://github.com/chenshuo/muduo/tree/master/examples/fastcgi"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "cpp"
    }
   },
   "outputs": [],
   "source": [
    "// https://github.com/chenshuo/muduo/blob/master/examples/fastcgi/fastcgi_test.cc\n",
    "\n",
    "#include \"examples/fastcgi/fastcgi.h\"\n",
    "#include \"examples/sudoku/sudoku.h\"\n",
    "\n",
    "#include \"muduo/base/Logging.h\"\n",
    "#include \"muduo/net/EventLoop.h\"\n",
    "#include \"muduo/net/TcpServer.h\"\n",
    "\n",
    "using namespace muduo;\n",
    "using namespace muduo::net;\n",
    "\n",
    "const string kPath = \"/sudoku/\";\n",
    "\n",
    "void onRequest(const TcpConnectionPtr& conn,\n",
    "               FastCgiCodec::ParamMap& params,\n",
    "               Buffer* in)\n",
    "{\n",
    "  string uri = params[\"REQUEST_URI\"];\n",
    "  LOG_INFO << conn->name() << \": \" << uri;\n",
    "\n",
    "  for (FastCgiCodec::ParamMap::const_iterator it = params.begin();\n",
    "       it != params.end(); ++it)\n",
    "  {\n",
    "    LOG_DEBUG << it->first << \" = \" << it->second;\n",
    "  }\n",
    "  if (in->readableBytes() > 0)\n",
    "    LOG_DEBUG << \"stdin \" << in->retrieveAllAsString();\n",
    "  Buffer response;\n",
     "  response.append(\"Content-Type: text/plain\\r\\n\\r\\n\");\n",
    "  if (uri.size() == kCells + kPath.size() && uri.find(kPath) == 0)\n",
    "  {\n",
    "    response.append(solveSudoku(uri.substr(kPath.size())));\n",
    "  }\n",
    "  else\n",
    "  {\n",
    "    // FIXME: set http status code 400\n",
    "    response.append(\"bad request\");\n",
    "  }\n",
    "\n",
    "  FastCgiCodec::respond(&response);\n",
    "  conn->send(&response);\n",
    "}\n",
    "\n",
    "void onConnection(const TcpConnectionPtr& conn)\n",
    "{\n",
    "  if (conn->connected())\n",
    "  {\n",
    "    typedef std::shared_ptr<FastCgiCodec> CodecPtr;\n",
    "    CodecPtr codec(new FastCgiCodec(onRequest));\n",
    "    conn->setContext(codec);\n",
    "    conn->setMessageCallback(\n",
    "        std::bind(&FastCgiCodec::onMessage, codec, _1, _2, _3));\n",
    "    conn->setTcpNoDelay(true);\n",
    "  }\n",
    "}\n",
    "\n",
    "int main(int argc, char* argv[])\n",
    "{\n",
    "  int port = 19981;\n",
    "  int threads = 0;\n",
    "  if (argc > 1)\n",
    "    port = atoi(argv[1]);\n",
    "  if (argc > 2)\n",
     "    threads = atoi(argv[2]); // number of IO threads\n",
    "  InetAddress addr(static_cast<uint16_t>(port));\n",
    "  LOG_INFO << \"Sudoku FastCGI listens on \" << addr.toIpPort()\n",
    "           << \" threads \" << threads;\n",
    "  muduo::net::EventLoop loop;\n",
    "  TcpServer server(&loop, addr, \"FastCGI\");\n",
    "  server.setConnectionCallback(onConnection);\n",
    "  server.setThreadNum(threads);\n",
    "  server.start();\n",
    "  loop.loop();\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- A\n",
    "\n",
    "```shell\n",
    "bin/fastcgi_test 19981 8\n",
    "```\n",
    "\n",
    "```shell\n",
    "utop\n",
    "```\n",
    "\n",
    "- B\n",
    "\n",
    "```shell\n",
    "bin/fastcgi_test 19981 8\n",
    "```\n",
    "\n",
    "```shell\n",
    "utop\n",
    "```\n",
    "\n",
     "- Configure and start nginx\n",
     "\n",
     "```nginx\n",
     "upstream muduo_backend {\n",
     "    server 10.0.0.37:19981;\n",
     "    server 10.0.0.49:19981;\n",
     "    #server localhost:19981;\n",
     "    keepalive 32;\n",
     "}\n",
     "```\n",
    "\n",
    "```shell\n",
    "objs/nginx -p /home/schen/muduo/examples/fastcgi\n",
    "```\n",
    "\n",
     "- Test with curl\n",
    "```shell\n",
    "curl http://localhost:10080/sudoku/000000010400000000020000000000050407008000300001090000300400200050100000000806000\n",
    "```\n",
    "\n",
    "```shell\n",
    "netstat -tpna | grep 19981\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# weighttp"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "> ApacheBench (ab) cannot be used here: it does not support chunked transfer encoding\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```shell\n",
    "build/default/weighttp -n 100000 -c 1 -k localhost:10080/sudoku/000000010400000000020000000000050407008000300001090000300400200050100000000806000\n",
    "```\n",
    "\n",
    "```shell\n",
    "build/default/weighttp -n 100000 -c 10 -k localhost:10080/sudoku/000000010400000000020000000000050407008000300001090000300400200050100000000806000\n",
    "```\n",
    "\n",
    "```shell\n",
    "build/default/weighttp -n 500000 -c 100 -k localhost:10080/sudoku/000000010400000000020000000000050407008000300001090000300400200050100000000806000\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Productionize\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- Heartbeat, health reporting\n",
    "- Failover\n",
    "- Upgrading"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Built-in inspection for RPS and latency *"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "- Many counters: total requests, total responses, total solved, etc.\n",
     "- Two circular buffers for per-second request counts and latency: two ring buffers hold the per-second request count and the per-second latency data; each is 60 entries long, covering the last 60 seconds; new data always updates the head (index 0)"
   ]
  },
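  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The 60-slot ring described above can be modeled like this (a simplified stand-in for the `boost::circular_buffer` the server actually uses; slot 0 is always the most recent second):\n",
    "\n",
    "```cpp\n",
    "#include <cassert>\n",
    "#include <cstddef>\n",
    "#include <cstdint>\n",
    "#include <deque>\n",
    "\n",
    "// 60-entry ring of per-second request counts; push_front on each\n",
    "// new second so that index 0 is always the newest second.\n",
    "class PerSecondCounts {\n",
    " public:\n",
    "  void onNewSecond(std::int64_t count) {\n",
    "    counts_.push_front(count);\n",
    "    if (counts_.size() > 60) counts_.pop_back();  // drop data older than 60s\n",
    "  }\n",
    "  std::int64_t latest() const { return counts_.front(); }\n",
    "  std::size_t seconds() const { return counts_.size(); }\n",
    " private:\n",
    "  std::deque<std::int64_t> counts_;\n",
    "};\n",
    "\n",
    "int main() {\n",
    "  PerSecondCounts rps;\n",
    "  for (int s = 0; s < 90; ++s) rps.onNewSecond(1000 + s);\n",
    "  assert(rps.seconds() == 60);   // only the last 60 seconds retained\n",
    "  assert(rps.latest() == 1089);  // head holds the newest second\n",
    "  return 0;\n",
    "}\n",
    "```"
   ]
  },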
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```shell\n",
    "bin/sudoku_solver_hybrid 0 0\n",
    "```\n",
    "\n",
    "- 10.0.0.37:9982\n",
    "\n",
    "![7.2](./images/7.2.png)\n",
    "\n",
    "- 10.0.0.37:9982/proc/overview\n",
    "\n",
    "\n",
    "- 10.0.0.37:9982/sudoku/stats\n",
    "\n",
    "![7.3](./images/7.3.png)\n",
    "\n",
     "- task_queue_size=0: current queue length\n",
     "- total_requests=49151: total requests\n",
     "- total_solved=49151: successfully solved\n",
     "- bad_requests=0: malformed requests\n",
     "- requests_per_second: shows roughly 13000 requests per second\n",
     "- latency_sum_us_per_second: records the per-second sum of latencies\n",
     "- latency_us_avg=27866: average latency of 27.866 milliseconds"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Maximum capacity and scalability testing **"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "| Topic | Core content | Key data / observations | Comparison dimensions |\n",
     "| --- | --- | --- | --- |\n",
     "| How the pipeline test works | Request pacing is controlled by pipeline depth (not a fixed RPS) | Default depth 1 (serial single requests); at 3000 RPS, CPU usage 28% | Contrast with the load test's RPS mode |\n",
     "| Latency difference analysis | Pipeline latency is much lower than the load test's at the same RPS | Pipeline at 3000 RPS: ~300μs average latency; load test at 1000 RPS: ~800μs average latency | Send pattern (uniform vs. batched); staircase latency effect on a single-threaded server |\n",
     "| Server capacity limits | One connection with depth 10 can saturate one core | Single-core capacity: 12000-13000 RPS; with 4 threads, 46000 RPS (100 connections) | Thread utilization; ratio of IO to compute threads |\n",
     "| Latency distribution | Pronounced long tail; the 99th percentile is far below the maximum | p99 1795μs (single thread); p99 845μs (20 connections) | Bimodal histograms; need to monitor both client and server CPU |\n",
     "| Thread pool gains | Computation spread across worker threads | 4 compute + 1 IO thread: 48000 RPS; 8 compute + 1 IO thread: 70000 RPS (8-core limit) | Core-count ceiling; thread-switching overhead |\n",
     "| Key experiment variables | Client: connections × pipeline depth; server: IO threads × compute threads | Best combination: 20 connections + depth 10 (low latency, high throughput) | Methodology for parameter sweeps |\n",
     "\n",
     "> AI-generated summary"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Do this hands-on yourself; just reading it is of little value**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Differences between the pipeline test and the load test:\n",
     "- the load test sets a target RPS (requests per second)\n",
     "- the pipeline test sets no RPS; instead it controls the pipeline depth (and connection count)\n",
     "\n",
     "How the pipeline test works:\n",
     "- defaults: 1 connection, pipeline depth 1 (send one request, wait for the response, then send the next)\n",
     "- workflow: send a request; wait for its response; send the next request; repeat"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```shell\n",
     "bin/sudoku_client_pipeline sudoku17 10.0.0.49\n",
    "```\n",
    "\n",
     "Watch CPU usage on both the client and the server at the same time.\n",
     "\n",
     "Performance figures:\n",
     "- RPS around 3000\n",
     "- server CPU usage 28%\n",
     "- average latency 330 microseconds\n",
     "- p99 latency about 500 microseconds"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```shell\n",
    "bin/sudoku_loadtest sudoku17 10.0.0.49 12000\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Latency: pipeline vs. load test**\n",
     "\n",
     "Counterintuitive observation: the lower-RPS run shows the higher latency\n",
     "- pipeline at 3000 RPS: p99 latency 500 microseconds\n",
     "- load test at 1000 RPS: p99 latency 1348 microseconds\n",
     "\n",
     "Why:\n",
     "- the pipeline spaces requests evenly (one roughly every 300 microseconds)\n",
     "- the load test sends in bursts (10 requests every 10 milliseconds)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**How pipeline latency is measured**\n",
     "\n",
     "Key difference:\n",
     "- the pipeline measures the round-trip time of a single request\n",
     "- the load test measures the cumulative latency of a batch of requests\n",
     "\n",
     "Mathematically:\n",
     "- load-test latencies grow as an arithmetic progression\n",
     "- on a single-threaded server, each request in a batch waits behind the ones queued ahead of it, so latency increases with position in the batch"
   ]
  },
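  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The arithmetic-progression effect can be checked numerically (a standalone sketch; the 100μs service time is an assumed round number, not a measured value):\n",
    "\n",
    "```cpp\n",
    "#include <cassert>\n",
    "\n",
    "// On a single-threaded server, the i-th request of a burst of n\n",
    "// waits behind the i-1 requests ahead of it, so its latency is\n",
    "// i * serviceUs, and the burst's mean latency is (n + 1) / 2 * serviceUs.\n",
    "constexpr long meanBurstLatencyUs(long n, long serviceUs) {\n",
    "  return (n + 1) * serviceUs / 2;\n",
    "}\n",
    "\n",
    "int main() {\n",
    "  // A burst of 10 requests at 100us each: the last one waits 1000us,\n",
    "  // and the average is 550us -- far above the 100us service time.\n",
    "  static_assert(meanBurstLatencyUs(10, 100) == 550);\n",
    "  // A uniformly paced stream (burst size 1) stays near the service time.\n",
    "  static_assert(meanBurstLatencyUs(1, 100) == 100);\n",
    "  return 0;\n",
    "}\n",
    "```"
   ]
  },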
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Tuning the pipeline timer**\n",
     "\n",
     "How to adjust:\n",
     "- the timer period can be changed in the code\n",
     "- changing it may change the test results\n",
     "\n",
     "Caveats:\n",
     "- measuring latency is a subtle problem\n",
     "- different measurement methods can give very different results"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Effect of more pipeline connections on CPU**\n",
     "\n",
     "```shell\n",
     "bin/sudoku_client_pipeline sudoku17 10.0.0.49 10\n",
     "```\n",
     "\n",
     "\n",
     "Performance changes:\n",
     "- with 10 connections, CPU usage approaches 100%\n",
     "- single-core RPS reaches 12000-13000\n",
     "- p99 latency rises to 16 milliseconds\n",
     "\n",
     "Conclusion:\n",
     "- adding connections raises throughput\n",
     "- but it significantly increases latency and CPU load"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Single-threaded server capacity test**\n",
     "\n",
     "```shell\n",
     "bin/sudoku_client_pipeline sudoku17 10.0.0.49 1 10 # 1 connection, pipeline depth 10\n",
     "```\n",
     "\n",
     "Server CPU usage hits 100% while the client sits at 25%: the server is the bottleneck."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**What the client-side numbers imply**\n",
     "\n",
     "- Upper-bound estimate: if the client reaches 12000-13000 RPS at 25% CPU, its theoretical maximum at 100% load is about 50000 RPS\n",
     "- Pitfall: if the client hits its own bottleneck first (e.g. its CPU saturates), you are measuring the client's performance, not the server's\n",
     "- Monitoring: always watch resource usage on both client and server, so the test reflects the server's true capacity"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Effect of pipeline depth on performance**\n",
     "\n",
     "```shell\n",
     "bin/sudoku_client_pipeline sudoku17 10.0.0.49 10 # adds IO overhead\n",
     "```\n",
     "\n",
     "Comparing the two setups:\n",
     "- 1 connection × depth 10: peak RPS 12700-12800\n",
     "- 10 connections × depth 1: about 12000 RPS, slightly lower\n",
     "\n",
     "Why the difference:\n",
     "- IO overhead: the multi-connection setup handles data on more connections, so client CPU usage is slightly higher (about 4% more)\n",
     "- network latency: the test network's 140-microsecond latency affects each setup's throughput differently"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Test results with pipeline depth 4, 6, and 8**\n",
    "\n",
    "```shell\n",
    "bin/sudoku_client_pipeline sudoku17 10.0.0.49 4\n",
    "```\n",
    "\n",
    "```shell\n",
    "bin/sudoku_client_pipeline sudoku17 10.0.0.49 6\n",
    "```\n",
    "\n",
    "```shell\n",
    "bin/sudoku_client_pipeline sudoku17 10.0.0.49 8\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Pipeline depth vs. server latency**\n",
     "\n",
     "Theory:\n",
     "- filling the pipe: the pipeline depth must be enough to cover the network latency between client and server\n",
     "- rule of thumb: if the server's minimum latency is 100 microseconds, about 10 concurrent requests are needed to keep the pipe full\n",
     "\n",
     "In practice:\n",
     "- with the current 140-microsecond network latency, depth 10 drives the server's CPU close to saturation\n",
     "- in higher-latency environments (e.g. cross-region deployments), a larger pipeline depth is needed for the same effect"
   ]
  },
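  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The rule of thumb above is the bandwidth-delay product expressed in requests (a sketch; the service-time figures are illustrative assumptions):\n",
    "\n",
    "```cpp\n",
    "#include <cassert>\n",
    "\n",
    "// Requests needed in flight to keep the server busy:\n",
    "// depth >= round-trip time / per-request service time (rounded up).\n",
    "constexpr long pipelineDepth(long rttUs, long serviceUs) {\n",
    "  return (rttUs + serviceUs - 1) / serviceUs;  // ceiling division\n",
    "}\n",
    "\n",
    "int main() {\n",
    "  // 140us network RTT, ~80us assumed service time: depth 2 already\n",
    "  // fills the pipe, and depth 10 leaves comfortable headroom.\n",
    "  static_assert(pipelineDepth(140, 80) == 2);\n",
    "  // Cross-region link with 100ms RTT: depth must grow with latency.\n",
    "  static_assert(pipelineDepth(100000, 80) == 1250);\n",
    "  return 0;\n",
    "}\n",
    "```"
   ]
  },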
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Program behavior and data logging**\n",
     "\n",
     "- Data logging: the test program records server response data automatically, including connection time (e.g. a connection established at 20150412 06:08:43.558438Z), request throughput (e.g. 12860/second), and latency statistics\n",
     "- Live statistics: it prints a statistics line once per second, with the number of in-flight requests, min/max latency (e.g. min 324μs / max 3020μs), the average (avg 760μs), and percentiles (p90 905μs / p99 1314μs)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Histograms and the latency distribution**\n",
     "\n",
     "- Data format: column one is the latency bucket (5-microsecond buckets), column two the request count in that bucket, column three the cumulative percentage\n",
     "- Typical shape: in the sample data the 485-490μs bucket holds 222 requests and the 490-495μs bucket only 32, a clearly long-tailed distribution"
   ]
  },
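  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Reading a percentile off such a histogram is just a walk along the cumulative counts (a standalone sketch with made-up buckets, not the recorded test data):\n",
    "\n",
    "```cpp\n",
    "#include <cassert>\n",
    "#include <cstddef>\n",
    "#include <vector>\n",
    "\n",
    "// Given 5us-wide buckets starting at baseUs, return the upper edge\n",
    "// of the first bucket whose cumulative share reaches percent.\n",
    "long percentileUs(const std::vector<long>& counts, long baseUs, double percent) {\n",
    "  long total = 0;\n",
    "  for (long c : counts) total += c;\n",
    "  long running = 0;\n",
    "  for (std::size_t i = 0; i < counts.size(); ++i) {\n",
    "    running += counts[i];\n",
    "    if (100.0 * running >= percent * total)\n",
    "      return baseUs + 5 * static_cast<long>(i + 1);\n",
    "  }\n",
    "  return baseUs + 5 * static_cast<long>(counts.size());\n",
    "}\n",
    "\n",
    "int main() {\n",
    "  // Buckets 480-485, 485-490, ... holding a long-tailed distribution.\n",
    "  std::vector<long> counts = {10, 222, 32, 20, 10, 4, 1, 1};\n",
    "  assert(percentileUs(counts, 480, 50.0) == 490);  // median in 485-490us\n",
    "  assert(percentileUs(counts, 480, 99.0) == 510);  // p99 out in the tail\n",
    "  return 0;\n",
    "}\n",
    "```"
   ]
  },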
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Percentiles and latency analysis**\n",
     "\n",
     "- Locating percentiles: on the cumulative distribution curve, p99 (99% of requests) falls at 1795μs and p90 (90%) at 1010μs\n",
     "- Interpretation: 89.98% of requests are below 1005μs and 90% below 1010μs, so a small number of slow requests drags down overall performance\n",
    "\n",
    "![7.4](./images/7.4.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Cumulative distribution and density function**\n",
     "\n",
     "- Chart elements: the red bars are the probability density function (PDF) of latency; the green curve is the cumulative distribution function (CDF)\n",
     "- Axes: the right-hand Y axis runs from 0 to 100% cumulative, so you can read off the fraction of requests under any latency threshold\n",
    "\n",
    "![7.5](./images/7.5.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Data files and plotting**\n",
     "\n",
     "- Storage: test data is saved automatically to text files (e.g. p0003) with three columns (latency bucket, count, cumulative percentage)\n",
     "- Comparison: overlaying several data files (e.g. p0002 and p0003) shows how the latency distribution shifts under different load conditions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Graphical comparison with another data point**\n",
     "\n",
     "![7.6](./images/7.6.png)\n",
     "\n",
     "- Shifted distribution: the second data set (p0002) is shifted left overall, peaking at a lower latency (about 700μs)\n",
     "- Why it matters: visual comparison makes performance changes easy to spot and verifies whether an optimization actually helped"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Server scalability test**\n",
     "\n",
     "Single connection, multiple IO threads\n",
     "- Single-connection bottleneck: with one IO thread serving one connection, even pipeline depth 10 can only saturate that one thread's CPU; overall throughput stays around 13000 rps.\n",
     "- Thread assignment: each TCP connection is pinned to a particular IO thread; this is the muduo library's default behavior. The inspector shows thread 2576 (the first IO thread) carrying the entire compute load.\n",
     "- `10.0.0.49:9982/proc/threads`\n",
     "- Load pattern with multiple threads: with 4 IO threads, a single connection still uses only one of them. After restarting the server, the first two threads (2576, 2577) show noticeably higher CPU usage (55%) than the last two (40%), which relates to the connection-assignment algorithm."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Multiple connections and IO thread assignment\n",
     "- Connections vs. throughput: with 10 connections, rps rises to about 23000 (1.7× the original 13000), and the four IO threads begin to share the load, with CPU usage split 2:2 (first two threads at 55%, last two at 40%).\n",
     "- Latency: at 10 connections latency stays healthy: minimum 367μs, maximum 4234μs, average 767μs, p99 1740μs.\n",
     "- `10.0.0.49:9982/proc/threads`\n",
     "- 100 connections: rps reaches 46000 (nearly 4× the single-connection figure), the four IO threads are fully balanced (about 25 connections each), and all run near 100% CPU.\n",
     "- Where the loss goes: the IO overhead of the extra connections makes actual throughput (46000 rps) slightly below the theoretical 12500 × 4 = 50000 rps."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "CPU usage vs. connection count\n",
     "- Tipping point: 20 connections are enough to essentially saturate the CPU; the latency distribution then turns bimodal (minimum 136μs, maximum 6869μs, average 386μs), for reasons not yet understood.\n",
     "- Thread utilization: experimentally, once there are ≥20 connections the four IO threads all stay above 95% CPU, which is where the system performs best.\n",
     "\n",
     "```shell\n",
     "bin/sudoku_client_pipeline sudoku17 10.0.0.49 20\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Latency distribution and data points\n",
     "- Shape: the latency histogram is consistently bimodal, possibly related to the network stack or thread scheduling; the exact cause needs further analysis.\n",
     "- Data volume: the test collected 47000 data points, with minimum latency 136μs, maximum 1844μs, and an average around 400μs.\n",
     "\n",
     "```shell\n",
     "less p0002\n",
     "head p0002\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Analysis of a bursty network test**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Latency test statistics\n",
     "- Data volume: 47000 data points\n",
     "- Latency range: minimum 136 microseconds, maximum 7350 microseconds\n",
     "- Key metrics:\n",
     "  - average latency 380 microseconds\n",
     "  - median latency 336 microseconds\n",
     "  - P90 latency 499 microseconds\n",
     "  - P99 latency 845 microseconds"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![7.7](./images/7.7.png)\n",
    "\n",
    "\n",
     "Latency distribution characteristics\n",
     "- Long tail: the system shows the classic *long-tail* pattern: the maximum latency exceeds 7000 microseconds, yet 99% of requests finish under 845 microseconds\n",
     "- Practical advice:\n",
     "  - set service targets on the 99th or 99.9th percentile, not on the 100th\n",
     "  - pick an acceptable latency threshold from business requirements, then work backwards to the maximum RPS"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Load-testing fundamentals\n",
     "- Basic law: latency generally rises as RPS (requests per second) increases\n",
     "- Theory: the M/M/c queue is a simple, effective starting model\n",
     "- Measurement notes\n",
     "  - measure latency at several load levels\n",
     "  - focus on percentile latency, not just maximum RPS\n",
     "  - client-measured latency can differ from server-side latency\n",
     "- System resource monitoring\n",
     "  - CPU shows 100% idle\n",
     "  - memory: only 536MB of 15.6GB in use\n",
     "  - swap completely free (all 7632MB available)"
   ]
  },
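  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a first approximation a single-threaded server behaves like an M/M/1 queue (the c = 1 case of M/M/c), where mean time in system is W = 1/(μ − λ); a quick numeric check (the rates below are assumptions for illustration, not measurements):\n",
    "\n",
    "```cpp\n",
    "#include <cassert>\n",
    "#include <cmath>\n",
    "\n",
    "// M/M/1 mean time in system: W = 1 / (mu - lambda), where\n",
    "// mu = service rate and lambda = arrival rate (requests/sec).\n",
    "double meanLatencySeconds(double mu, double lambda) {\n",
    "  return 1.0 / (mu - lambda);\n",
    "}\n",
    "\n",
    "int main() {\n",
    "  const double mu = 13000.0;  // assume ~13000 solves/sec on one core\n",
    "  // Lightly loaded: latency stays close to the bare service time.\n",
    "  assert(std::abs(meanLatencySeconds(mu, 1000.0) - 1.0 / 12000.0) < 1e-9);\n",
    "  // Near saturation latency blows up: 12.9k of 13k capacity -> 10ms.\n",
    "  assert(std::abs(meanLatencySeconds(mu, 12900.0) - 0.01) < 1e-9);\n",
    "  return 0;\n",
    "}\n",
    "```"
   ]
  },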
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Network capacity and scalability testing**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```shell\n",
    "bin/sudoku_client_pipeline sudoku17 10.0.0.49 1 10\n",
    "```\n",
    "\n",
    "```shell\n",
     "bin/sudoku_solver_hybrid 4 4 -n # 4 IO threads, 4 compute threads\n",
    "```\n",
    "\n",
     "Single-threaded vs. multi-threaded\n",
     "- Single-thread bottleneck: with one connection at pipeline depth 10, the system can use only one thread; CPU utilization sits around 30% and performance is capped\n",
     "- Thread-pool advantage: with a thread pool, a single client connection's computation spreads across several threads, pushing CPU utilization above 90%\n",
     "- Numbers: about 14000 rps single-threaded, 46000 rps with four threads, close to linear scaling of single-core capacity"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Configuring the thread pool and compute threads\n",
     "- Thread make-up: 4 IO threads, 4 compute threads, 1 main thread (which accepts connections), and 1 inspector thread\n",
     "- Tuning: the IO-to-compute thread ratio can be adjusted to the connection count, e.g. 2 IO threads with 8 compute threads\n",
     "- Resource split: the ratio affects overall performance; experiments suggest 1 IO thread can feed about 8 compute threads"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "System resource utilization\n",
     "- Compute-bound: at pipeline depth 1000, the 8 compute threads run at 97% CPU while the IO thread sits at only 60%\n",
     "- Ceiling: an 8-core machine tops out around 70000 rps, at which point latency rises sharply (P99 reaches 15 milliseconds)\n",
     "- Bottleneck: once the thread count exceeds the core count, context-switching overhead hurts; on an 8-core machine the best configuration is 7 compute threads + 1 IO thread"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Multi-core performance tests\n",
     "- Linear scaling: throughput scales roughly linearly with core count: four cores 46000 rps → eight cores 70000 rps\n",
     "- Thread contention: 8 compute threads + 1 IO thread on 8 cores average 88% utilization (theoretical maximum 8/9 ≈ 88.8%)\n",
     "- Latency under load: latency rises sharply at high load; at 70000 rps the minimum is 6ms and P99 is 15ms"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Behavior with many connections\n",
     "- Connection pattern matters: with 10 connections at pipeline depth 100 each, both IO threads are fully utilized\n",
     "- Best practice: a sensible mix of connections and depth (e.g. 10 connections × depth 100) outperforms 1 connection × depth 1000\n",
     "- At exhaustion: above 90% CPU utilization only about 12% idle headroom remains, and latency metrics degrade noticeably"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Validating server scalability\n",
     "- Configurability: the server's IO and compute thread counts and the client's connection count and pipeline depth are all adjustable\n",
     "- Methodology: sweep parameter combinations (2 IO + 7 compute, 4 IO + 4 compute, etc.) to verify scalability\n",
     "- Conclusion: the system scales close to linearly; testing on machines with more cores (e.g. 16) is recommended to probe the limits"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Latency distribution vs. request rate and concurrency model *"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The latency distribution at a given RPS (requests per second)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Measuring the latency distribution**"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Server configuration\n",
     "- Basics\n",
     "  - 4 IO threads\n",
     "  - thread pool disabled\n",
     "  - TCP no delay enabled\n",
     "- Threads\n",
     "  - 4 IO threads\n",
     "  - 1 main thread\n",
     "  - 1 inspector thread\n",
     "\n",
     "\n",
     "```shell\n",
     "utop # H\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Load test\n",
     "\n",
     "- Parameters\n",
     "  - initial RPS set to 100 (the lightest load)\n",
     "  - single connection\n",
     "- Results\n",
     "  - CPU load only 1% (of one core)\n",
     "  - server-side latency 88 microseconds\n",
     "  - client-measured latency 400-700 microseconds (includes network latency)\n",
     "- Method\n",
     "  - run for over a minute to reach steady state\n",
     "  - roughly 6000 requests processed per minute\n",
     "- Key figures\n",
     "  - total requests: 9447\n",
     "  - average latency: 88 microseconds\n",
     "  - success rate: 100%\n",
     "  - no bad or dropped requests\n",
     "\n",
     "`10.0.0.49:9982/sudoku/stats`\n",
     "\n",
     "```shell\n",
     "bin/sudoku_loadtest sudoku17 10.0.0.49 100 # all load lands on one thread\n",
     "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Plotting\n",
     "\n",
     "![7.8](./images/7.8.png)\n",
     "\n",
     "- Distribution\n",
     "  - based on 100 data points\n",
     "  - maximum latency just over 900 microseconds\n",
     "  - the curve is rough (small sample)\n",
     "- Analysis\n",
     "  - several data files can be merged for analysis\n",
     "  - gnuplot draws the distribution plots\n",
     "- Naming convention\n",
     "  - file names encode the configuration:\n",
     "    - 4 event loops\n",
     "    - 0 compute threads\n",
     "    - RPS 100\n",
     "    - 1 connection"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Changing the conditions**\n",
     "\n",
     "```shell\n",
     "bin/sudoku_loadtest sudoku17 10.0.0.49 1000\n",
     "```\n",
     "\n",
     "Merging the plots\n",
     "- Comparison method: draw the latency distributions for rps=100 and rps=1000 in the same coordinate system\n",
     "- Scaling: because the y-axis scales differ so much, the rps=1000 data must be normalized (divided by 10) before it can be meaningfully compared with the rps=100 data\n",
     "\n",
     "![7.9](./images/7.9.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Normalization\n",
     "\n",
     "- Method: divide the rps=1000 data by 1000 and the rps=100 data by 100, converting both to percentages\n",
     "- Characteristics\n",
     "  - at rps=1000 the distribution is more spread out, with a larger maximum\n",
     "  - measured: p99 latency is 800μs at rps=100 versus 15ms at rps=1000\n",
     "\n",
     "![7.10](./images/7.10.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Worked examples\n",
     "\n",
     "- Example: latency distributions at rps 1000 vs. 100\n",
     "  - key metrics\n",
     "    - average: 480μs at rps=100, 800μs at rps=1000\n",
     "    - p99 latency: 800μs at rps=100, 15ms at rps=1000\n",
     "    - as expected, higher load means higher latency\n",
     "\n",
     "![7.11](./images/7.11.png)\n",
     "\n",
     "- Example: latency distribution at rps 10000\n",
     "  - key metrics\n",
     "    - 60-second average latency: 2790μs\n",
     "    - overall average latency: 1392μs\n",
     "    - p99 latency close to 10ms\n",
     "  - sliding window: data older than one minute is discarded and recomputed, keeping the statistics current"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Changing how the server runs\n",
     "\n",
     "```shell\n",
     "bin/sudoku_solver_hybrid 4 4 -n\n",
     "```\n",
     "\n",
     "```shell\n",
     "bin/sudoku_loadtest sudoku17 10.0.0.49 100\n",
     "```\n",
     "\n",
     "![7.12](./images/7.12.png)\n",
     "\n",
     "- Example: latency distribution at rps 100\n",
     "  - comparison\n",
     "    - minimum: in-IO-thread 412μs vs. thread pool 460μs (50μs more)\n",
     "    - maximum: in-IO-thread 800μs; smaller with the thread pool\n",
     "    - average: in-IO-thread 480μs vs. thread pool 530μs\n",
     "    - p99 latency: the thread pool is better by 150μs\n",
     "  - shape: the thread pool makes the distribution tighter, so system behavior is more predictable\n",
     "\n",
     "![7.13](./images/7.13.png)\n",
     "\n",
     "- Example: latency distribution at rps 1000\n",
     "  - key metrics\n",
     "    - minimum: 480μs (IO thread) vs. 530μs (thread pool)\n",
     "    - maximum: 2789μs vs. 2312μs (477μs less)\n",
     "    - average: 800μs vs. 600μs\n",
     "    - p99 latency: 1519μs vs. 662μs\n",
     "  - conclusion: at rps=1000 the thread pool beats in-IO-thread computation on every metric\n",
     "\n",
     "![7.14](./images/7.14.png)\n",
     "\n",
     "- Example: latency distributions at rps 100 and 1000\n",
     "  - load effect: each tenfold increase in request rate adds 50-60μs of latency\n",
     "  - shape: the red curve (rps=1000) sits to the right of the green one (rps=100): higher load, higher latency\n",
     "\n",
     "![7.15](./images/7.15.png)\n",
     "\n",
     "- Example: latency distribution at rps 10000\n",
     "  - results\n",
     "    - thread-pool CPU usage reaches 24%\n",
     "    - p99 latency close to 3ms\n",
     "    - a big improvement over 9.89ms without the thread pool\n",
     "  - shape: the thread pool makes the distribution tighter, so system behavior is more predictable"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "**Summary**\n",
     "\n",
     "- When to use which\n",
     "  - low load (rps < 100): computing in the IO thread is better\n",
     "  - medium-to-high load (rps ≥ 1000): the thread pool wins clearly\n",
     "- Gain: at rps=10000, the thread pool cuts p99 latency from 9.89ms to 2.9ms\n",
     "- Core advantage: the thread pool concentrates the latency distribution, making system behavior more predictable\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Overload protection *"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```shell\n",
     "bin/sudoku_solver_hybrid 4 0 -n # no compute threads\n",
    "```\n",
    "\n",
    "```shell\n",
    "utop\n",
    "```\n",
    "\n",
    "```shell\n",
     "bin/sudoku_loadtest sudoku17 10.0.0.49 20000 # beyond capacity; requests pile up on the client\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```shell\n",
     "bin/sudoku_solver_hybrid 4 2 -n # thread pool with 2 compute threads\n",
    "```\n",
    "\n",
    "```shell\n",
    "utop\n",
    "```\n",
    "\n",
    "```shell\n",
    "bin/sudoku_loadtest sudoku17 10.0.0.49 20000\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```shell\n",
     "bin/sudoku_solver_hybrid 4 2 -n # thread pool with 2 compute threads\n",
    "```\n",
    "\n",
    "```shell\n",
    "utop\n",
    "```\n",
    "\n",
    "```shell\n",
    "bin/sudoku_loadtest sudoku17 10.0.0.49 40000\n",
    "```\n",
    "\n",
     "Memory grows on both the client and the server: the IO threads ingest requests faster than the compute threads can process them, a rate mismatch. After the client stops, the pool threads still run at 100%, showing that the data piled up in the queue between the IO threads and the thread pool (this is not a memory leak; the memory allocator simply has not returned the memory to the operating system)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```shell\n",
     "bin/sudoku_solver_hybrid # single-threaded version\n",
    "```\n",
    "\n",
    "```shell\n",
    "utop\n",
    "```\n",
    "\n",
     "Stress test: observe how the server behaves when overloaded\n",
    "\n",
    "```shell\n",
     "./sudoku_stress 127.0.0.1 # 1. sends without reading; 2. sending and receiving are unsynchronized; 3. sends 1000000000 requests by default\n",
    "```\n",
     "`sudoku_stress`: one thread sends while another reads responses. When responses are read quickly enough nothing piles up: the send and response counts stay close, so there is no overload. The initial burst of sends happens because the (TCP) buffers start out empty.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```shell\n",
    "bin/sudoku_solver_hybrid\n",
    "```\n",
    "\n",
    "```shell\n",
    "utop\n",
    "```\n",
    "\n",
    "```shell\n",
    "./sudoku_stress 127.0.0.1 200000\n",
    "```\n",
    "\n",
     "Requests finish sending; the remaining responses arrive over the next few seconds"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```shell\n",
    "bin/sudoku_solver_hybrid\n",
    "```\n",
    "\n",
    "```shell\n",
    "utop\n",
    "```\n",
    "\n",
    "```shell\n",
     "./sudoku_stress 127.0.0.1 200000 -r # send only, never read\n",
     "```\n",
     "\n",
     "Requests finish sending, with responses still flowing for a few seconds afterwards. The server works in lock-step (it finishes one request before reading the next), and server memory grows (the user-space send buffers).\n",
     "\n",
     "`10.0.0.49:9982/pprof/memstats`: inspect server memory usage (tcmalloc is in use)\n",
     "\n",
     "```shell\n",
     "./sudoku_stress 127.0.0.1 300000 -r # each request is about 100 bytes\n",
     "```\n",
     "\n",
     "`10.0.0.49:9982/pprof/memstats`: tcmalloc shows the memory parked in its free lists\n",
     "\n",
     "`10.0.0.49:9982/pprof/releasefreememory`: force the free memory to be released"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```shell\n",
     "bin/sudoku_solver_hybrid 0 2 # no extra IO threads; thread pool of 2\n",
    "```\n",
    "\n",
    "```shell\n",
    "utop\n",
    "```\n",
    "\n",
    "```shell\n",
    "./sudoku_stress 127.0.0.1 300000\n",
    "```\n",
    "\n",
     "Requests go out very fast, then responses take a long time to arrive; server memory grows rapidly, then is released. As long as the client keeps reading, the server is overloaded only briefly.\n",
    "\n",
    "```shell\n",
     "./sudoku_stress 127.0.0.1 300000 -r # if the client never reads, outgoing data accumulates in the server's send buffers and memory grows\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## Overload protection summary"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Overload protection: the concept and the problem\n",
     "\n",
     "- What if a client sends massive numbers of requests without reading the responses?\n",
     "- For a server with a thread pool, request processing is asynchronous: IO threads read requests faster than worker threads can process them, so requests queue up in the thread pool\n",
     "- Both cases use a lot of memory on the server\n",
     "- Not a problem if a thread-per-connection model is used; with non-blocking IO it is harder."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "- Symptom: when a client sends many requests without reading responses, server memory balloons to gigabytes within tens of seconds\n",
     "- Root cause: IO threads read requests (200-300k/sec) far faster than worker threads process them (20k/sec), so requests pile up in the queue\n",
     "- Model differences\n",
     "  - thread-pool model: processing is asynchronous, so requests pile up easily in a user-space queue\n",
     "  - thread-per-connection model (as in Go): naturally self-throttling; when the kernel's TCP buffer fills, the sending thread blocks"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The thread pool's overload guard\n",
     "\n",
     "- Threshold: cap the thread-pool queue at 1,000,000 unprocessed requests\n",
     "- Logic\n",
     "  - queue length < 1M: enqueue the compute task as usual\n",
     "  - queue length ≥ 1M: immediately answer \"ServerTooBusy\" and count the dropped request\n",
     "- Rationale\n",
     "  - shared-resource protection: the thread pool is a resource shared by all clients\n",
     "  - reactive protection: prevents unbounded memory growth, keeping memory use within a sane range"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "cpp"
    }
   },
   "outputs": [],
   "source": [
    "// https://github.com/chenshuo/muduo/blob/master/examples/sudoku/server_prod.cc\n",
    "\n",
    "#include \"examples/sudoku/sudoku.h\"\n",
    "\n",
    "#include \"muduo/base/Atomic.h\"\n",
    "#include \"muduo/base/Logging.h\"\n",
    "#include \"muduo/base/Thread.h\"\n",
    "#include \"muduo/base/ThreadPool.h\"\n",
    "#include \"muduo/net/EventLoop.h\"\n",
    "#include \"muduo/net/EventLoopThread.h\"\n",
    "#include \"muduo/net/InetAddress.h\"\n",
    "#include \"muduo/net/TcpServer.h\"\n",
    "#include \"muduo/net/inspect/Inspector.h\"\n",
    "\n",
    "#include <boost/circular_buffer.hpp>\n",
    "\n",
    "//#include <stdio.h>\n",
    "//#include <unistd.h>\n",
    "\n",
    "using namespace muduo;\n",
    "using namespace muduo::net;\n",
    "\n",
    "#include \"examples/sudoku/stat.h\"\n",
    "\n",
    "class SudokuServer : noncopyable\n",
    "{\n",
    " public:\n",
    "  SudokuServer(EventLoop* loop,\n",
    "               const InetAddress& listenAddr,\n",
    "               int numEventLoops,\n",
    "               int numThreads,\n",
    "               bool nodelay)\n",
    "    : server_(loop, listenAddr, \"SudokuServer\"),\n",
    "      threadPool_(),\n",
    "      numThreads_(numThreads),\n",
    "      tcpNoDelay_(nodelay),\n",
    "      startTime_(Timestamp::now()),\n",
    "      stat_(threadPool_),\n",
    "      inspectThread_(),\n",
    "      inspector_(inspectThread_.startLoop(), InetAddress(9982), \"sudoku-solver\")\n",
    "  {\n",
    "    LOG_INFO << \"Use \" << numEventLoops << \" IO threads.\";\n",
    "    LOG_INFO << \"TCP no delay \" << nodelay;\n",
    "\n",
    "    server_.setConnectionCallback(\n",
    "        std::bind(&SudokuServer::onConnection, this, _1));\n",
    "    server_.setMessageCallback(\n",
    "        std::bind(&SudokuServer::onMessage, this, _1, _2, _3));\n",
    "    server_.setThreadNum(numEventLoops);\n",
    "\n",
    "    inspector_.add(\"sudoku\", \"stats\", std::bind(&SudokuStat::report, &stat_),\n",
    "                   \"statistics of sudoku solver\");\n",
    "    inspector_.add(\"sudoku\", \"reset\", std::bind(&SudokuStat::reset, &stat_),\n",
    "                   \"reset statistics of sudoku solver\");\n",
    "  }\n",
    "\n",
    "  void start()\n",
    "  {\n",
    "    LOG_INFO << \"Starting \" << numThreads_ << \" computing threads.\";\n",
    "    threadPool_.start(numThreads_);\n",
    "    server_.start();\n",
    "  }\n",
    "\n",
    " private:\n",
    "  void onConnection(const TcpConnectionPtr& conn)\n",
    "  {\n",
    "    LOG_TRACE << conn->peerAddress().toIpPort() << \" -> \"\n",
    "        << conn->localAddress().toIpPort() << \" is \"\n",
    "        << (conn->connected() ? \"UP\" : \"DOWN\");\n",
    "    if (conn->connected())\n",
    "    {\n",
    "      if (tcpNoDelay_)\n",
    "        conn->setTcpNoDelay(true);\n",
    "      conn->setHighWaterMarkCallback(\n",
    "          std::bind(&SudokuServer::highWaterMark, this, _1, _2), 5 * 1024 * 1024); // fires when the send buffer exceeds 5 MB; the mark is per connection (~100 bytes/response, ~50k messages)\n",
    "      bool throttle = false;\n",
    "      conn->setContext(throttle);\n",
    "    }\n",
    "  }\n",
    "\n",
    "  void highWaterMark(const TcpConnectionPtr& conn, size_t tosend)\n",
    "  {\n",
    "    LOG_WARN << conn->name() << \" high water mark \" << tosend;\n",
    "    if (tosend < 10 * 1024 * 1024) \n",
    "    {\n",
    "      conn->setHighWaterMarkCallback(\n",
    "          std::bind(&SudokuServer::highWaterMark, this, _1, _2), 10 * 1024 * 1024); // escalate to 10 MB once throttled\n",
    "      conn->setWriteCompleteCallback(std::bind(&SudokuServer::writeComplete, this, _1));\n",
    "      bool throttle = true;\n",
    "      conn->setContext(throttle);\n",
    "    }\n",
    "    else\n",
    "    {\n",
    "      conn->send(\"Bad Request!\\r\\n\");\n",
    "      conn->shutdown();  // FIXME: forceClose() ?\n",
    "      stat_.recordBadRequest();\n",
    "    }\n",
    "  }\n",
    "\n",
    "  void writeComplete(const TcpConnectionPtr& conn)\n",
    "  {\n",
    "    LOG_INFO << conn->name() << \" write complete\";\n",
    "    conn->setHighWaterMarkCallback(\n",
    "        std::bind(&SudokuServer::highWaterMark, this, _1, _2), 5 * 1024 * 1024);\n",
    "    conn->setWriteCompleteCallback(WriteCompleteCallback());\n",
    "    bool throttle = false;\n",
    "    conn->setContext(throttle);\n",
    "  }\n",
    "\n",
    "  void onMessage(const TcpConnectionPtr& conn, Buffer* buf, Timestamp receiveTime)\n",
    "  {\n",
    "    size_t len = buf->readableBytes();\n",
    "    while (len >= kCells + 2)\n",
    "    {\n",
    "      const char* crlf = buf->findCRLF();\n",
    "      if (crlf)\n",
    "      {\n",
    "        string request(buf->peek(), crlf);\n",
    "        buf->retrieveUntil(crlf + 2);\n",
    "        len = buf->readableBytes();\n",
    "        stat_.recordRequest();\n",
    "        if (!processRequest(conn, request, receiveTime))\n",
    "        {\n",
    "          conn->send(\"Bad Request!\\r\\n\");\n",
    "          conn->shutdown();\n",
    "          stat_.recordBadRequest();\n",
    "          break;\n",
    "        }\n",
    "      }\n",
    "      else if (len > 100) // id + \":\" + kCells + \"\\r\\n\"\n",
    "      {\n",
    "        conn->send(\"Id too long!\\r\\n\");\n",
    "        conn->shutdown();\n",
    "        stat_.recordBadRequest();\n",
    "        break;\n",
    "      }\n",
    "      else\n",
    "      {\n",
    "        break;\n",
    "      }\n",
    "    }\n",
    "  }\n",
    "\n",
    "  struct Request\n",
    "  {\n",
    "    string id;\n",
    "    string puzzle;\n",
    "    Timestamp receiveTime;\n",
    "  };\n",
    "\n",
    "  bool processRequest(const TcpConnectionPtr& conn, const string& request, Timestamp receiveTime)\n",
    "  {\n",
    "    Request req;\n",
    "    req.receiveTime = receiveTime;\n",
    "\n",
    "    string::const_iterator colon = find(request.begin(), request.end(), ':');\n",
    "    if (colon != request.end())\n",
    "    {\n",
    "      req.id.assign(request.begin(), colon);\n",
    "      req.puzzle.assign(colon+1, request.end());\n",
    "    }\n",
    "    else\n",
    "    {\n",
    "      // when using thread pool, an id must be provided in the request.\n",
    "      if (numThreads_ > 1)\n",
    "        return false;\n",
    "      req.puzzle = request;\n",
    "    }\n",
    "\n",
    "    if (req.puzzle.size() == implicit_cast<size_t>(kCells))\n",
    "    {\n",
    "      bool throttle = boost::any_cast<bool>(conn->getContext());\n",
    "      if (threadPool_.queueSize() < 1000 * 1000 && !throttle) // drop requests once the pool queue reaches 1,000,000\n",
    "      {\n",
    "        threadPool_.run(std::bind(&SudokuServer::solve, this, conn, req));\n",
    "      }\n",
    "      else\n",
    "      {\n",
    "        if (req.id.empty())\n",
    "        {\n",
    "          conn->send(\"ServerTooBusy\\r\\n\");\n",
    "        }\n",
    "        else\n",
    "        {\n",
    "          conn->send(req.id + \":ServerTooBusy\\r\\n\");\n",
    "        }\n",
    "        stat_.recordDroppedRequest();\n",
    "      }\n",
    "      return true;\n",
    "    }\n",
    "    return false;\n",
    "  }\n",
    "\n",
    "  void solve(const TcpConnectionPtr& conn, const Request& req)\n",
    "  {\n",
    "    LOG_DEBUG << conn->name();\n",
    "    string result = solveSudoku(req.puzzle);\n",
    "    if (req.id.empty())\n",
    "    {\n",
    "      conn->send(result + \"\\r\\n\");\n",
    "    }\n",
    "    else\n",
    "    {\n",
    "      conn->send(req.id + \":\" + result + \"\\r\\n\");\n",
    "    }\n",
    "    stat_.recordResponse(Timestamp::now(), req.receiveTime, result != kNoSolution);\n",
    "  }\n",
    "\n",
    "  TcpServer server_;\n",
    "  ThreadPool threadPool_;\n",
    "  const int numThreads_;\n",
    "  const bool tcpNoDelay_;\n",
    "  const Timestamp startTime_;\n",
    "\n",
    "  SudokuStat stat_;\n",
    "  EventLoopThread inspectThread_;\n",
    "  Inspector inspector_;\n",
    "};\n",
    "\n",
    "int main(int argc, char* argv[])\n",
    "{\n",
    "  LOG_INFO << argv[0] << \" [number of IO threads] [number of worker threads] [-n]\";\n",
    "  LOG_INFO << \"pid = \" << getpid() << \", tid = \" << CurrentThread::tid();\n",
    "  int numEventLoops = 0;\n",
    "  int numThreads = 0;\n",
    "  bool nodelay = false;\n",
    "  if (argc > 1)\n",
    "  {\n",
    "    numEventLoops = atoi(argv[1]);\n",
    "  }\n",
    "  if (argc > 2)\n",
    "  {\n",
    "    numThreads = atoi(argv[2]);\n",
    "  }\n",
    "  if (argc > 3 && string(argv[3]) == \"-n\")\n",
    "  {\n",
    "    nodelay = true;\n",
    "  }\n",
    "\n",
    "  EventLoop loop;\n",
    "  InetAddress listenAddr(9981);\n",
    "  SudokuServer server(&loop, listenAddr, numEventLoops, numThreads, nodelay);\n",
    "\n",
    "  server.start();\n",
    "\n",
    "  loop.loop();\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Send-buffer overload protection\n",
    "\n",
    "- Two-level protection\n",
    "  - Level 1 (5 MB): the high-water-mark callback fires and the connection enters the throttled state\n",
    "  - Level 2 (10 MB): the connection is closed and a bad request is recorded\n",
    "\n",
    "![7.16](./images/7.16.png)\n",
    "- State transitions\n",
    "  - Normal -> throttled (5 MB): register a 10 MB high-water-mark callback\n",
    "  - Throttled -> normal (write complete): re-register the 5 MB callback\n",
    "  - Throttled -> disconnected (10 MB): send an error message and close the connection\n",
    "- Design considerations\n",
    "  - Per-connection protection: each connection's send buffer is accounted for independently\n",
    "  - Graduated response: give the client a chance to recover instead of disconnecting immediately\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```shell\n",
    "bin/sudoku_solver_prod 0 2 -n # thread pool with overload protection\n",
    "```\n",
    "\n",
    "```shell\n",
    "utop\n",
    "```\n",
    "\n",
    "```shell\n",
    "./sudoku_stress 127.0.0.1 \n",
    "```\n",
    "\n",
    "Client responses only start to arrive after a few seconds, because the thread pool is overloaded\n",
    "\n",
    "```shell\n",
    "./sudoku_stress 127.0.0.1 1500000\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Testing and verifying the overload protection\n",
    "\n",
    "- Test scenario: send 1.5 million requests, of which roughly 1 million are processed normally\n",
    "- Test results\n",
    "  - About 306.8k responses were \"ServerTooBusy\" (21.3% rejection rate)\n",
    "  - Memory usage held steady around 600 MB with no sustained growth\n",
    "- Performance\n",
    "  - Processing rate steady at about 34k responses/s\n",
    "  - CPU utilization reached 98% (two worker threads)\n",
    "- Protection effect: memory growth was effectively contained and server load kept within an acceptable range"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Overload protection"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Bursty request overload**\n",
    "\n",
    "\n",
    "```shell\n",
    "bin/sudoku_solver_prod 0 0 -n # send-buffer protection\n",
    "```\n",
    "\n",
    "```shell\n",
    "utop\n",
    "```\n",
    "\n",
    "```shell\n",
    "./sudoku_stress 127.0.0.1 1000000 -r\n",
    "```\n",
    "\n",
    "```shell\n",
    "./sudoku_stress 127.0.0.1 100000 -r\n",
    "```\n",
    "\n",
    "```shell\n",
    "./sudoku_stress 127.0.0.1 100000 10 # wait 10s\n",
    "```\n",
    "\n",
    "- High-water-mark mechanism: when a client sends a flood of requests without reading responses, the server triggers high-water-mark protection. The first trigger is at 5 MB (about 50k responses); the next at 10 MB (about 100k responses).\n",
    "- Protection measures\n",
    "  - Stage one: when the backlog reaches 5 MB, the server enters the throttled state but keeps the connection open.\n",
    "  - Stage two: once the 10 MB high-water mark is hit, the server closes the connection to avoid memory exhaustion.\n",
    "  - Delayed-read check: experiments show that if the client waits 10 seconds before reading, the server can drain the backlog of 100k requests, though about 11.5% of responses are errors.\n",
    "- Error handling: when a connection is reset due to overload, the server logs \"SO_ERROR = 104 Connection reset by peer\" and removes the connection safely.\n",
    "- Measurements: in the 100k-request test the server eventually handled all requests, producing 11,531 error responses (about 11.5% error rate)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Sustained request overload**\n",
    "\n",
    "```shell\n",
    "./sudoku_stress 127.0.0.1 200000 10 # wait 10s\n",
    "```\n",
    "\n",
    "```shell\n",
    "./sudoku_stress 127.0.0.1 300000 10 # wait 10s\n",
    "```\n",
    "\n",
    "- Increasing request volumes\n",
    "  - 200k requests: the server handled them all, producing 111,725 error responses (about 55.9% error rate).\n",
    "  - 300k requests: the server delivered only 250k responses, of which 160k were errors (about 64% error rate), and the connection was eventually closed.\n",
    "- Characteristics of the protection\n",
    "  - Nonlinear: the error rate does not scale linearly with request volume, since error responses are smaller.\n",
    "  - Two thresholds: the 5 MB and 10 MB levels provide graduated protection.\n",
    "  - Active disconnection: under sustained overload the server closes the connection to protect its resources.\n",
    "- Implementation\n",
    "  - setHighWaterMarkCallback registers the 5 MB threshold callback\n",
    "  - setWriteCompleteCallback resets the state\n",
    "  - Bad requests get a \"Bad Request\" reply and the connection is closed\n",
    "  - The throttle flag in the connection context controls request handling"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Lesson summary\n",
    "\n",
    "- Value of the design\n",
    "  - Effectively prevents memory exhaustion from request pile-up in the thread-pool model\n",
    "  - Compared with thread-per-connection, non-blocking IO is harder to implement but performs better\n",
    "- Possible improvements\n",
    "  - Finer-grained protection thresholds\n",
    "  - More detailed resource monitoring\n",
    "  - Dynamically adjusted protection policies"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Load balancing *"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Load balancing"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- Where does load balancer fit?\n",
    "  - Load balancing client library\n",
    "  - Load balancer - reverse proxy\n",
    "  - Or combined\n",
    "- Connection level or request level?\n",
    "- Load balancing policy – random, round robin, more advanced?\n",
    "- FastCGI as a rudimentary load balancer\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- Core purpose: once a single server hits its processing limit, spread the work across multiple servers to raise system throughput\n",
    "- Running example: the sudoku solver, used to show why multiple cooperating servers are needed"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### The load balancer"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a library\n",
    "- Form: embedded in the application as a client library\n",
    "- Workflow\n",
    "  - The application calls the library to issue a request\n",
    "  - The library maintains a connection pool to the backend server cluster\n",
    "  - It picks a suitable server and forwards the request\n",
    "- Connection pattern: every client connects to every server directly, which can cause connection explosion (e.g. 1000 clients x 100 servers = 100k connections)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a reverse proxy\n",
    "- Architecture\n",
    "  - A standalone load-balancer process\n",
    "  - Many client connections\n",
    "  - A backend server cluster\n",
    "- Advantages\n",
    "  - Far fewer connections (1000 clients -> 10 LBs -> 100 servers, 11k connections total)\n",
    "  - Clients never connect to backends directly\n",
    "- Cost: one extra network hop of latency (client -> LB -> server)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Out-of-band (advisory)\n",
    "\n",
    "- Mode of operation: acts as an advisor that suggests routes\n",
    "  - Independently monitors server health\n",
    "  - Periodically pushes the best connection plan to clients\n",
    "- Characteristics\n",
    "  - Data traffic does not pass through the load balancer\n",
    "  - Well suited to latency-sensitive scenarios\n",
    "- Scalability: supports multi-tier load-balancing architectures (e.g. client library -> LB cluster -> server cluster)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Connection level or request level"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- Connection-level balancing\n",
    "  - TCP reverse-proxy mode\n",
    "  - Simply balances by connection count (e.g. pick the server with the fewest connections)\n",
    "  - Limitation: each connection is pinned to one server, so cluster resources are not fully used\n",
    "- Request-level balancing\n",
    "  - Requires understanding the application protocol (e.g. HTTP)\n",
    "  - Dispatches each request to the best server dynamically\n",
    "  - Advantage: like a thread pool, it achieves genuinely balanced resource usage\n",
    "- Hybrid deployments: balance connections at the network layer and requests at the application layer"
   ]
  },
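  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The simple connection-count policy above can be sketched as follows. This is an illustrative fragment, not muduo code; the function name and the plain vector of counts are assumptions for the example.\n",
    "\n",
    "```cpp\n",
    "#include <algorithm>\n",
    "#include <cstddef>\n",
    "#include <vector>\n",
    "\n",
    "// Connection-level balancing sketch: pick the backend that currently\n",
    "// has the fewest live connections.\n",
    "size_t pickLeastConnections(const std::vector<int>& connCounts)\n",
    "{\n",
    "  return static_cast<size_t>(\n",
    "      std::min_element(connCounts.begin(), connCounts.end()) -\n",
    "      connCounts.begin());\n",
    "}\n",
    "```\n",
    "\n",
    "A new connection is then pinned to that backend for its whole lifetime, which is exactly the limitation noted above: one long-lived, busy connection cannot be split across servers."
   ]
  },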
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Load-balancing policies"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1) Random selection\n",
    "\n",
    "- Pitfalls\n",
    "  - Seeding from the clock makes multiple processes produce synchronized random sequences\n",
    "  - In practice this produces \"waves\" of requests concentrated on one server\n",
    "- Remedies\n",
    "  - Derive the seed from a combination of machine IP, process ID, etc.\n",
    "  - Ensure different processes generate different random sequences"
   ]
  },
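  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The remedy can be sketched as below: mix host identity and pid into the seed instead of seeding from time alone. The function names are illustrative, not part of muduo.\n",
    "\n",
    "```cpp\n",
    "#include <cstdint>\n",
    "#include <ctime>\n",
    "#include <random>\n",
    "#include <unistd.h>\n",
    "\n",
    "// Seed the per-process RNG from host IP + pid + time, so two balancer\n",
    "// processes started in the same second still produce different\n",
    "// random sequences.\n",
    "std::mt19937 makeBalancerRng(uint32_t hostIp)\n",
    "{\n",
    "  std::seed_seq seq{hostIp, static_cast<uint32_t>(getpid()),\n",
    "                    static_cast<uint32_t>(time(nullptr))};\n",
    "  return std::mt19937(seq);\n",
    "}\n",
    "\n",
    "// Pick a backend uniformly at random.\n",
    "size_t pickBackend(std::mt19937& rng, size_t numBackends)\n",
    "{\n",
    "  std::uniform_int_distribution<size_t> dist(0, numBackends - 1);\n",
    "  return dist(rng);\n",
    "}\n",
    "```"
   ]
  },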
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2) Round robin\n",
    "\n",
    "- Implementation pitfalls\n",
    "  - An unrandomized starting index causes synchronization\n",
    "  - All load balancers pick the same server at the same time\n",
    "- Fixes\n",
    "  - Randomize the starting position\n",
    "  - Avoid time-based seeds"
   ]
  },
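  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A round-robin picker with a randomized starting offset might look like this sketch (illustrative only; the class is not part of muduo):\n",
    "\n",
    "```cpp\n",
    "#include <atomic>\n",
    "#include <cstddef>\n",
    "#include <random>\n",
    "\n",
    "// Round robin whose first pick is randomized per process, so\n",
    "// independently started balancers do not all hit backend 0 first.\n",
    "class RoundRobin\n",
    "{\n",
    " public:\n",
    "  explicit RoundRobin(size_t numBackends)\n",
    "    : numBackends_(numBackends),\n",
    "      next_(std::random_device{}() % numBackends)  // randomized start\n",
    "  {\n",
    "  }\n",
    "\n",
    "  size_t pick()\n",
    "  {\n",
    "    // fetch_add keeps the rotation correct under concurrent picks.\n",
    "    return next_.fetch_add(1) % numBackends_;\n",
    "  }\n",
    "\n",
    " private:\n",
    "  const size_t numBackends_;\n",
    "  std::atomic<size_t> next_;\n",
    "};\n",
    "```\n",
    "\n",
    "std::random_device is used for the start index precisely because it does not depend on wall-clock time."
   ]
  },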
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "3) Advanced policies\n",
    "\n",
    "- Pick the currently least-loaded server\n",
    "  - Implementation challenges\n",
    "    - Load balancers lack real-time communication with each other\n",
    "    - They may all pick the same server simultaneously\n",
    "    - This can trigger an \"avalanche\" (suddenly crushing the idlest node)\n",
    "  - Direction for improvement: introduce a distributed coordination mechanism\n",
    "- Account for server overload\n",
    "  - Overload protection\n",
    "    - Actively reject requests beyond the limit\n",
    "    - Failing fast beats waiting for timeouts\n",
    "    - Prevents request pile-up from making things worse\n",
    "  - Business fit: the rejection policy must be designed for the specific scenario"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Worked example"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Example: FastCGI as a rudimentary load balancer\n",
    "\n",
    "- Setup\n",
    "  - Nginx as the front-end proxy (external)\n",
    "  - The FastCGI protocol connects to the backend services (internal)\n",
    "  - Long-lived connection pools are maintained automatically (default 10 connections per process)\n",
    "- Advantages\n",
    "  - Reuses mature web-server features (HTTPS, keep-alive, etc.)\n",
    "  - The business layer can focus on core logic\n",
    "  - Connection counts are managed automatically\n",
    "- Limitation: the load-balancing policy is fairly basic"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Load-balancing example *"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- https://github.com/chenshuo/muduo/blob/master/examples/fastcgi/"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "vscode": {
     "languageId": "cpp"
    }
   },
   "outputs": [],
   "source": [
    "// https://github.com/chenshuo/muduo/blob/master/examples/fastcgi/fastcgi_test.cc\n",
    "\n",
    "#include \"examples/fastcgi/fastcgi.h\"\n",
    "#include \"examples/sudoku/sudoku.h\"\n",
    "\n",
    "#include \"muduo/base/Logging.h\"\n",
    "#include \"muduo/net/EventLoop.h\"\n",
    "#include \"muduo/net/TcpServer.h\"\n",
    "\n",
    "using namespace muduo;\n",
    "using namespace muduo::net;\n",
    "\n",
    "const string kPath = \"/sudoku/\";\n",
    "\n",
    "void onRequest(const TcpConnectionPtr& conn,\n",
    "               FastCgiCodec::ParamMap& params,\n",
    "               Buffer* in)\n",
    "{\n",
    "  string uri = params[\"REQUEST_URI\"];\n",
    "  LOG_INFO << conn->name() << \": \" << uri;\n",
    "\n",
    "  for (FastCgiCodec::ParamMap::const_iterator it = params.begin();\n",
    "       it != params.end(); ++it)\n",
    "  {\n",
    "    LOG_DEBUG << it->first << \" = \" << it->second;\n",
    "  }\n",
    "  if (in->readableBytes() > 0)\n",
    "    LOG_DEBUG << \"stdin \" << in->retrieveAllAsString();\n",
    "  Buffer response;\n",
    "  response.append(\"Content-Type: text/plain\\r\\n\\r\\n\");\n",
    "  if (uri.size() == kCells + kPath.size() && uri.find(kPath) == 0)\n",
    "  {\n",
    "    response.append(solveSudoku(uri.substr(kPath.size())));\n",
    "  }\n",
    "  else\n",
    "  {\n",
    "    // FIXME: set http status code 400\n",
    "    response.append(\"bad request\");\n",
    "  }\n",
    "\n",
    "  FastCgiCodec::respond(&response);\n",
    "  conn->send(&response);\n",
    "}\n",
    "\n",
    "void onConnection(const TcpConnectionPtr& conn)\n",
    "{\n",
    "  if (conn->connected())\n",
    "  {\n",
    "    typedef std::shared_ptr<FastCgiCodec> CodecPtr;\n",
    "    CodecPtr codec(new FastCgiCodec(onRequest));\n",
    "    conn->setContext(codec);\n",
    "    conn->setMessageCallback(\n",
    "        std::bind(&FastCgiCodec::onMessage, codec, _1, _2, _3));\n",
    "    conn->setTcpNoDelay(true);\n",
    "  }\n",
    "}\n",
    "\n",
    "int main(int argc, char* argv[])\n",
    "{\n",
    "  int port = 19981;\n",
    "  int threads = 0;\n",
    "  if (argc > 1)\n",
    "    port = atoi(argv[1]);\n",
    "  if (argc > 2)\n",
    "    threads = atoi(argv[2]); // number of IO threads\n",
    "  InetAddress addr(static_cast<uint16_t>(port));\n",
    "  LOG_INFO << \"Sudoku FastCGI listens on \" << addr.toIpPort()\n",
    "           << \" threads \" << threads;\n",
    "  muduo::net::EventLoop loop;\n",
    "  TcpServer server(&loop, addr, \"FastCGI\");\n",
    "  server.setConnectionCallback(onConnection);\n",
    "  server.setThreadNum(threads);\n",
    "  server.start();\n",
    "  loop.loop();\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## FastCGI"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- Example location: the muduo/examples/fastcgi directory\n",
    "- Key files\n",
    "  - fastcgi.cc: main implementation\n",
    "  - fastcgi_test.cc: test code\n",
    "  - nginx.conf: Nginx configuration\n",
    "- Configuration notes\n",
    "  - The server IP addresses in the config must be adjusted\n",
    "  - In the current config the backend servers are 10.0.0.37 and 10.0.0.49"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Startup and threading\n",
    "\n",
    "- Startup arguments\n",
    "  - First argument: port (default 19981)\n",
    "  - Second argument: number of IO threads (default 0)\n",
    "- Threading notes\n",
    "  - The thread count should match the number of CPU cores\n",
    "  - In the demo the Atom machine uses 2 threads (dual-core)\n",
    "  - The ws490 machine uses 8 threads (eight-core)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```shell\n",
    "objs/nginx -p /home/schen/muduo/examples/fastcgi\n",
    "```\n",
    "\n",
    "```shell\n",
    "psg nginx\n",
    "```\n",
    "\n",
    "```shell\n",
    "curl http://localhost:10080/sudoku/000000010400000000020000000000050407008000300001090000300400200050100000000806000 # test fastcgi\n",
    "```\n",
    "\n",
    "Response characteristics:\n",
    "- HTTP/1.1\n",
    "- Chunked transfer encoding\n",
    "- Keep-alive connections\n",
    "\n",
    "```shell\n",
    "bin/fastcgi_test 19981 8 > /dev/null # 10.0.0.49\n",
    "```\n",
    "\n",
    "```shell\n",
    "bin/fastcgi_test 19981 2 > /dev/null # 10.0.0.37 Atom\n",
    "```\n",
    "\n",
    "```shell\n",
    "netstat -tpna | grep 19981\n",
    "```\n",
    "\n",
    "```shell\n",
    "top\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Choosing a load-testing tool\n",
    "\n",
    "- Tool choice\n",
    "  - Apache Bench (ab) cannot be used\n",
    "  - Reason: it does not support chunked transfer encoding\n",
    "  - weighttp (from the lighttpd project) is recommended\n",
    "- Test parameters\n",
    "  - -n: total number of requests\n",
    "  - -c: number of concurrent connections\n",
    "  - -k: keep connections alive"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```shell\n",
    "build/default/weighttp -n 100000 -c 1 -k localhost:10080/sudoku/000000010400000000020000000000050407008000300001090000300400200050100000000806000\n",
    "```\n",
    "\n",
    "- HTTP load-testing tools have no pipelining support\n",
    "\n",
    "```shell\n",
    "build/default/weighttp -n 100000 -c 10 -k localhost:10080/sudoku/000000010400000000020000000000050407008000300001090000300400200050100000000806000\n",
    "```\n",
    "\n",
    "```shell\n",
    "build/default/weighttp -n 500000 -c 100 -k localhost:10080/sudoku/000000010400000000020000000000050407008000300001090000300400200050100000000806000\n",
    "```\n",
    "\n",
    "Load testing and results\n",
    "\n",
    "- Single connection\n",
    "  - Throughput: 12,800 req/s\n",
    "  - Bandwidth: 3,086 kbyte/s\n",
    "  - Success rate: 100%\n",
    "- Multiple connections\n",
    "  - Connection resets appeared at 10 concurrent connections\n",
    "  - The Nginx worker_processes setting needs adjusting\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Results and load-balancing behavior\n",
    "\n",
    "- Load distribution\n",
    "  - Round-robin policy\n",
    "  - Requests are spread evenly across the two backend servers\n",
    "  - netstat shows the actual connection distribution\n",
    "- Bottleneck\n",
    "  - The single Nginx worker process hits 99% CPU\n",
    "  - The backend servers stay lightly loaded"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Issues observed and adjustments\n",
    "\n",
    "- Tuning steps\n",
    "  - Raise worker_processes to 2\n",
    "  - Adjust worker_connections\n",
    "  - Monitor system resource usage\n",
    "- Caveats\n",
    "  - Know where the bottleneck is before testing\n",
    "  - Balance connection counts against system resources"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Final tests and conclusions\n",
    "\n",
    "- Key findings\n",
    "  - Nginx is the main performance bottleneck\n",
    "  - The backend servers have plenty of headroom\n",
    "  - Keep-alive connections improve performance significantly\n",
    "- Practical advice\n",
    "  - Set worker_processes according to the number of CPU cores\n",
    "  - Increase load gradually during stress tests\n",
    "  - Monitor resource usage of every component"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Load-balancing experiment analysis"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Single-machine vs. multi-machine performance\n",
    "\n",
    "- Single-machine test\n",
    "  - Testing on one machine (the 490) gives higher thread utilization\n",
    "  - Measured throughput: 26,000 requests/s\n",
    "  - System monitoring showed the Nginx process at 94.3% CPU\n",
    "- Multi-machine load-balancing configuration:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Performance test results\n",
    "\n",
    "- Metrics\n",
    "  - 500k requests completed in 19.138 seconds\n",
    "  - Throughput: 26,258 requests/s\n",
    "  - Data rate: 6,332 kbyte/s\n",
    "  - Success rate: 100% (500,000 succeeded, 0 failed)\n",
    "- Resource usage\n",
    "  - CPU: 84.8% idle\n",
    "  - Memory: about 380 MB in use\n",
    "  - FastCGI process CPU usage fluctuated between 9.3% and 20.3%"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Observed load-balancing behavior\n",
    "\n",
    "- Load distribution\n",
    "  - Load oscillates between the two machines\n",
    "  - FastCGI process load is uneven (roughly 35%-37.3%)\n",
    "  - Overall CPU stays around 70.8% idle\n",
    "- Anomaly\n",
    "  - Although the Nginx process runs very hot (94.3% CPU),\n",
    "  - the backend FastCGI load drops off suddenly at times,\n",
    "  - suggesting the load-balancing mechanism has room for improvement"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Practical advice\n",
    "\n",
    "- Experience\n",
    "  - This setup is rarely used in real production work\n",
    "  - FastCGI as a load balancer is not very effective\n",
    "  - The load-distribution algorithm needs further tuning\n",
    "- Conclusions\n",
    "  - A single machine can reach 26,000 requests/s\n",
    "  - Multi-machine setups show large performance fluctuations\n",
    "  - The bottleneck is likely Nginx itself"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Productionize"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- Core goal: deploy the sudoku service to production and make it a product-grade service\n",
    "- Key elements: add a heartbeat mechanism, a monitoring system, and a provisioning plan"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- Heartbeat\n",
    "  - Fault tolerance and failover\n",
    "  - Upgrading\n",
    "- Monitoring\n",
    "  - Periodically grab :9982/sudoku/stats and graph with rrdtool\n",
    "- Provisioning\n",
    "  - Decide how much resource needed and how to deploy the service"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Heartbeat\n",
    "\n",
    "- Purpose\n",
    "  - Fault tolerance: the server sends a heartbeat to clients periodically (10-second interval suggested)\n",
    "  - Failover: a client declares the server down after missing 2 consecutive heartbeats (20-second timeout)\n",
    "- Implementation notes\n",
    "  - Protocol design: the current protocol has no slot for heartbeats and needs extending\n",
    "  - Graceful restart: stop heartbeats -> wait 20 seconds -> traffic drains to zero -> restart safely\n",
    "  - Automatic reconnect: clients should reconnect automatically to the same IP and port"
   ]
  },
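  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The \"two missed heartbeats means the peer is down\" rule above reduces to a small timestamp check. A minimal sketch (the struct and field names are assumptions, not muduo API):\n",
    "\n",
    "```cpp\n",
    "// Client-side liveness check: a 10-second heartbeat interval with a\n",
    "// miss limit of 2 gives a 20-second timeout window.\n",
    "struct HeartbeatMonitor\n",
    "{\n",
    "  static const int kIntervalSec = 10;  // server sends every 10 s\n",
    "  static const int kMissLimit = 2;     // two misses => offline\n",
    "\n",
    "  double lastBeat = 0.0;  // time the last heartbeat was received\n",
    "\n",
    "  bool alive(double now) const\n",
    "  {\n",
    "    return now - lastBeat < kIntervalSec * kMissLimit;  // < 20 s\n",
    "  }\n",
    "};\n",
    "```\n",
    "\n",
    "The graceful-restart procedure follows directly: stop sending heartbeats, wait longer than the 20-second window so every client has failed over, then restart."
   ]
  },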
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Upgrades\n",
    "\n",
    "- Rolling-upgrade procedure\n",
    "  - Stop heartbeats on the target server\n",
    "  - Wait 20 seconds for clients to switch away\n",
    "  - Confirm RPS has dropped to zero\n",
    "  - Restart or upgrade the process\n",
    "  - After a minute of stable operation, move to the next machine\n",
    "- Upgrade scenarios\n",
    "  - Hardware maintenance (e.g. fan replacement)\n",
    "  - Software updates (program improvements, security patches)\n",
    "  - System upgrades (kernel, firmware)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Monitoring\n",
    "\n",
    "- Interface\n",
    "  - The /sudoku/stats endpoint on port 9982\n",
    "  - Plain text (not HTML), easy for programs to parse\n",
    "- Approach\n",
    "  - Collection: fetch the stats with curl\n",
    "  - Processing: filter the text with grep\n",
    "  - Visualization: store the time series with rrdtool and graph it\n",
    "- Extension: alerting can be built on top of the collected metrics"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Provisioning\n",
    "\n",
    "- Capacity math\n",
    "  - Per core: about $10k$ RPS at acceptable latency (at 100% CPU it reaches $12k-13k$, but latency is unacceptable)\n",
    "  - Per machine: an 8-core machine peaks at roughly $80k$ RPS\n",
    "- Deployment plan\n",
    "  - Machine count: a daily peak of 1M RPS needs 13 eight-core machines ($1{,}000{,}000 / 80k = 12.5$, rounded up)\n",
    "  - Headroom: reserve capacity for upgrades (taking one machine down removes 1/13 of capacity)\n",
    "  - Timing: avoid maintenance during peak hours"
   ]
  },
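  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The provisioning arithmetic above can be written out directly (the function is just a worked example of the round-up division):\n",
    "\n",
    "```cpp\n",
    "#include <cstdint>\n",
    "\n",
    "// Machines needed for a given peak load: 10k RPS per core on 8-core\n",
    "// machines => 80k RPS per machine; 1,000,000 / 80,000 rounded up = 13.\n",
    "uint32_t machinesNeeded(uint32_t peakRps, uint32_t rpsPerCore, uint32_t cores)\n",
    "{\n",
    "  uint32_t perMachine = rpsPerCore * cores;\n",
    "  return (peakRps + perMachine - 1) / perMachine;  // round up\n",
    "}\n",
    "```\n",
    "\n",
    "Taking one machine out of a 13-machine pool for maintenance removes about 1/13 (~7.7%) of capacity, which is why upgrades should avoid the daily peak."
   ]
  },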
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Code reading 1: client"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "> Same as \"Maximum capacity and scalability testing\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Code reading 2: client"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "> Duplicate of: 08_Broadcasting to TCP peers"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "C++17",
   "language": "C++17",
   "name": "xcpp17"
  },
  "language_info": {
   "name": "C++17"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
