{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. PP-Human简介\n",
    "PaddleDetection深入探索核心行业的高频场景，提供了行人场景的开箱即用分析工具，支持图片/单镜头视频/多镜头视频/在线视频流多种输入方式，广泛应用于智慧交通、智慧城市、工业巡检等领域。支持服务器端部署及TensorRT加速，T4服务器上可达到实时！\n",
    "PP-Human支持四大产业级功能：五大异常行为识别、26种人体属性分析、实时人流计数、跨镜头（ReID）跟踪。\n",
    "\n",
    "PP-Human由飞桨官方出品，是基于PaddleDetection的行人分析pipeline。\n",
    "更多关于PaddleDetection可以点击https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.5/deploy/pipeline 进行了解。\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. 模型效果及应用场景\n",
    "### 2.1 PP-Human模型效果：\n",
    "\n",
    "| 任务        | 端到端速度（ms） | 模型方案                                                                                                                                                                                                                                                                  | 模型体积                                        |\n",
    "|:---------:|:---------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-------------------------------------------:|\n",
    "| 行人检测（高精度） | 25.1ms    | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)                                                                                                                                                                            | 182M                                        |\n",
    "| 行人检测（轻量级） | 16.2ms    | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip)                                                                                                                                                                            | 27M                                         |\n",
    "| 行人跟踪（高精度） | 31.8ms    | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)                                                                                                                                                                            | 182M                                        |\n",
    "| 行人跟踪（轻量级） | 21.0ms    | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip)                                                                                                                                                                            | 27M                                         |\n",
    "|  跨镜跟踪(REID)   |   单人1.5ms | [REID](https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip) | REID：92M |\n",
    "| 属性识别（高精度） | 单人8.5ms   | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br> [属性识别](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip)                                                                            | 目标检测：182M<br>属性识别：86M                       |\n",
    "| 属性识别（轻量级） | 单人7.1ms   | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br> [属性识别](https://bj.bcebos.com/v1/paddledet/models/pipeline/strongbaseline_r50_30e_pa100k.zip)                                                                            | 目标检测：182M<br>属性识别：86M                       |\n",
    "| 摔倒识别      | 单人10ms    | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip) <br> [关键点检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip) <br> [基于关键点行为识别](https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip) | 多目标跟踪：182M<br>关键点检测：101M<br>基于关键点行为识别：21.8M |\n",
    "| 闯入识别      | 31.8ms    | [多目标跟踪](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)                                                                                                                                                                            | 182M                                        |\n",
    "| 打架识别      | 19.7ms    | [视频分类](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)                                                                                                                                                                             | 90M                                         |\n",
    "| 抽烟识别      | 单人15.1ms  | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br>[基于人体id的目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.zip)                                                                 | 目标检测：182M<br>基于人体id的目标检测：27M                |\n",
    "| 打电话识别     | 单人ms      | [目标检测](https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip)<br>[基于人体id的图像分类](https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip)                                                                      | 目标检测：182M<br>基于人体id的图像分类：45M                |\n",
    "\n",
    "\n",
    "\n",
    "### 2.2 应用场景：\n",
    "| 功能                  | 方案优势                                                                                                                                     | 💡示例图                                                                                                      |\n",
    "| --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |\n",
    "| **跨镜跟踪（ReID）**        | 超强性能：针对目标遮挡、完整度、模糊度等难点特殊优化，实现mAP 98.8、1.5ms/人                                                                                               | <img title=\"\" src=\"https://user-images.githubusercontent.com/48054808/173037607-0a5deadc-076e-4dcc-bd96-d54eea205f1f.png\" alt=\"\" width=\"191\"> |\n",
    "| **属性分析**              | 兼容多种数据格式：支持图片、视频、在线视频流输入<br><br>高性能：融合开源数据集与企业真实数据进行训练，实现mAP 95.4、2ms/人<br><br>支持26种属性：性别、年龄、眼镜、上衣、鞋子、帽子、背包等26种高频属性                         | <img title=\"\" src=\"https://user-images.githubusercontent.com/48054808/173036043-68b90df7-e95e-4ada-96ae-20f52bc98d7c.png\" alt=\"\" width=\"191\">|\n",
    "| **行为识别**              | 功能丰富：支持摔倒、打架、抽烟、打电话、人员闯入五种高频异常行为识别<br><br>鲁棒性强：对光照、视角、背景环境无限制<br><br>性能高：与视频识别技术相比，模型计算量大幅降低，支持本地化与服务化快速部署<br><br>训练速度快：仅需15分钟即可产出高精度行为识别模型 |<img title=\"\" src=\"https://user-images.githubusercontent.com/48054808/173034825-623e4f78-22a5-4f14-9b83-dc47aa868478.gif\" alt=\"\" width=\"191\">  |\n",
    "| **人流量计数**<br>**轨迹记录** | 简洁易用：单个参数即可开启人流量计数与轨迹记录功能                                                                                                                   | <img title=\"\" src=\"https://user-images.githubusercontent.com/22989727/174736440-87cd5169-c939-48f8-90a1-0495a1fcb2b1.gif\" alt=\"\" width=\"191\"> |\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. 模型如何使用\n",
    "\n",
    "（在Jupyter Notebook上运行时需要加\"！\"，若是cd命令则需加\"%\"）"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 克隆PaddleDetection仓库\n",
    "%mkdir -p ~/work\n",
    "%cd ~/work/\n",
    "!git clone https://github.com/PaddlePaddle/PaddleDetection.git\n",
    "\n",
    "# 安装其他依赖\n",
    "%cd PaddleDetection\n",
    "%mkdir -p demo_input demo_output\n",
    "!pip install -r requirements.txt"
   ]
  },
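  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Note: requirements.txt does not install the PaddlePaddle framework itself.\n",
    "# A minimal sketch, assuming a CUDA-enabled machine; on CPU-only machines use\n",
    "# `pip install paddlepaddle` instead, and pick the build matching your CUDA version.\n",
    "!pip install paddlepaddle-gpu\n",
    "\n",
    "# Sanity-check that the installation can find a working device\n",
    "import paddle\n",
    "paddle.utils.run_check()"
   ]
  },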
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "### 3.1 配置文件说明\n",
    "\n",
    "PP-Human相关配置位于```deploy/pipeline/config/infer_cfg_pphuman.yml```中，存放模型路径，该配置文件中包含了目前PP-Human支持的所有功能。如果想要查看某个单一功能的配置，请参见```deploy/pipeline/config/examples/```中相关配置。此外，配置文件中的内容可以通过```-o```命令行参数修改，如修改属性的模型目录，则可通过```-o ATTR.model_dir=\"DIR_PATH\"```进行设置。\n",
    "\n",
    "功能及任务类型对应表单如下：\n",
    "\n",
    "| 输入类型 | 功能 | 任务类型 | 配置项 |\n",
    "|-------|-------|----------|-----|\n",
    "| 图片 | 属性识别 | 目标检测 属性识别 | DET ATTR |\n",
    "| 单镜头视频 | 属性识别 | 多目标跟踪 属性识别 | MOT ATTR |\n",
    "| 单镜头视频 | 行为识别 | 多目标跟踪 关键点检测 摔倒识别 | MOT KPT SKELETON_ACTION |\n",
    "\n",
    "例如基于视频输入的属性识别，任务类型包含多目标跟踪和属性识别，具体配置如下：\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "```\n",
    "crop_thresh: 0.5\n",
    "attr_thresh: 0.5\n",
    "visual: True\n",
    "\n",
    "MOT:\n",
    "  model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip\n",
    "  tracker_config: deploy/pipeline/config/tracker_config.yml\n",
    "  batch_size: 1\n",
    "  enable: True\n",
    "\n",
    "ATTR:\n",
    "  model_dir:  https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.zip\n",
    "  batch_size: 8\n",
    "  enable: True\n",
    "```"
   ]
  },
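  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch of editing the config programmatically instead of by hand,\n",
    "# assuming PyYAML is available (it is listed in PaddleDetection's requirements).\n",
    "# Equivalent one-off overrides can be passed on the command line with -o, as\n",
    "# shown in section 3.2. Caveat: safe_dump drops any comments in the file.\n",
    "import yaml\n",
    "\n",
    "cfg_path = 'deploy/pipeline/config/infer_cfg_pphuman.yml'\n",
    "with open(cfg_path) as f:\n",
    "    cfg = yaml.safe_load(f)\n",
    "\n",
    "# Turn on tracking and attribute recognition for video input\n",
    "cfg['MOT']['enable'] = True\n",
    "cfg['ATTR']['enable'] = True\n",
    "\n",
    "with open(cfg_path, 'w') as f:\n",
    "    yaml.safe_dump(cfg, f)"
   ]
  },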
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**注意：**\n",
    "\n",
    "- 如果用户需要实现不同任务，可以在配置文件对应enable选项设置为True。\n",
    "\n",
    "### 3.2 预测部署"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#直接使用默认配置或者examples中配置文件，或者直接在`infer_cfg_pphuman.yml`中修改配置：\n",
    "\n",
    "# 例：行人检测，指定配置文件路径和测试图片，图片输入默认打开检测模型\n",
    "!wget -P demo_input -N https://paddledet.bj.bcebos.com/modelcenter/images/PP-Human/human_attr.jpg \n",
    "!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --image_file=demo_input/human_attr.jpg --device=gpu --output_dir=demo_output"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 例：行人属性识别，直接使用examples中配置\n",
    "!wget -P demo_input -N https://paddledet.bj.bcebos.com/modelcenter/images/PP-Human/human_attr.mp4\n",
    "!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml --video_file=demo_input/human_attr.mp4 --device=gpu --output_dir=demo_output"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#使用命令行进行功能开启，或者模型路径修改：\n",
    "\n",
    "# 例：行人跟踪，指定配置文件路径，模型路径和测试视频, 命令行中指定的模型路径优先级高于配置文件\n",
    "!wget -P demo_input -N https://paddledet.bj.bcebos.com/modelcenter/images/PP-Human/human_count.mp4\n",
    "!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml -o MOT.enable=True --video_file=demo_input/human_count.mp4 --device=gpu --output_dir=demo_output"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 例：行为识别，以摔倒识别为例，命令行中开启SKELETON_ACTION模型\n",
    "!wget -P demo_input -N https://paddledet.bj.bcebos.com/modelcenter/images/PP-Human/human_falldown.mp4\n",
    "!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml -o SKELETON_ACTION.enable=True --video_file=demo_input/human_falldown.mp4 --device=gpu --output_dir=demo_output"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#rtsp推拉流\n",
    "\n",
    "#对rtsp拉流的支持，使用--rtsp RTSP [RTSP ...]参数指定一路或者多路rtsp视频流，如果是多路地址中间用空格隔开。(或者video_file后面的视频地址直接更换为rtsp流地址)，示例如下：\n",
    "\n",
    "# 例：行人属性识别，单路视频流\n",
    "!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml -o visual=False --rtsp rtsp://[YOUR_RTSP_SITE]  --device=gpu"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 例：行人属性识别，多路视频流\n",
    "!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml -o visual=False --rtsp rtsp://[YOUR_RTSP_SITE1]  rtsp://[YOUR_RTSP_SITE2] --device=gpu"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 视频结果推流rtsp\n",
    "\n",
    "# 预测结果进行rtsp推流，使用--pushurl rtsp:[IP] 推流到IP地址端，PC端可以使用[VLC播放器](https://vlc.onl/)打开网络流进行播放，播放地址为 `rtsp:[IP]/videoname`。其中`videoname`是预测的视频文件名，如果视频来源是本地摄像头则`videoname`默认为`output`.\n",
    "\n",
    "# 例：行人属性识别，单路视频流，该示例播放地址为 rtsp://[YOUR_SERVER_IP]:8554/test_video\n",
    "!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml --video_file=test_video.mp4 --device=gpu --pushurl rtsp://[YOUR_SERVER_IP]:8554\n",
    "\n",
    "# 注：\n",
    "# 1. rtsp推流服务基于 [rtsp-simple-server](https://github.com/aler9/rtsp-simple-server), 如使用推流功能请先开启该服务.\n",
    "# 2. rtsp推流如果模型处理速度跟不上会出现很明显的卡顿现象，建议跟踪模型使用ppyoloe_s版本，即修改配置中跟踪模型mot_ppyoloe_l_36e_pipeline.zip替换为mot_ppyoloe_s_36e_pipeline.zip。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "### 3.3 Jetson部署说明\n",
    "\n",
    "由于Jetson平台算力相比服务器有较大差距，有如下使用建议：\n",
    "\n",
    "1. 模型选择轻量级版本，特别是跟踪模型，推荐使用`ppyoloe_s: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_s_36e_pipeline.zip`\n",
    "2. 开启跟踪跳帧功能，推荐使用2或者3. `skip_frame_num: 3`\n",
    "\n",
    "使用该推荐配置，在TX2平台上可以达到较高速率，经测试属性案例达到20fps。\n",
    "\n",
    "可以直接修改配置文件（推荐），也可以在命令行中修改（字段较长，不推荐）。\n",
    "\n",
    "\n",
    "### 参数说明\n",
    "\n",
    "| 参数 | 是否必须|含义 |\n",
    "|-------|-------|----------|\n",
    "| --config | Yes | 配置文件路径 |\n",
    "| -o | Option | 覆盖配置文件中对应的配置  |\n",
    "| --image_file | Option | 需要预测的图片 |\n",
    "| --image_dir  | Option |  要预测的图片文件夹路径   |\n",
    "| --video_file | Option | 需要预测的视频，或者rtsp流地址 |\n",
    "| --rtsp | Option | rtsp视频流地址，支持一路或者多路同时输入 |\n",
    "| --camera_id | Option | 用来预测的摄像头ID，默认为-1(表示不使用摄像头预测，可设置为：0 - (摄像头数目-1) )，预测过程中在可视化界面按`q`退出输出预测结果到：output/output.mp4|\n",
    "| --device | Option | 运行时的设备，可选择`CPU/GPU/XPU`，默认为`CPU`|\n",
    "| --pushurl | Option| 对预测结果视频进行推流的地址，以rtsp://开头，该选项优先级高于视频结果本地存储，打开时不再另外存储本地预测结果视频, 默认为空，表示没有开启|\n",
    "| --output_dir | Option|可视化结果保存的根目录，默认为output/|\n",
    "| --run_mode | Option |使用GPU时，默认为paddle, 可选（paddle/trt_fp32/trt_fp16/trt_int8）|\n",
    "| --enable_mkldnn | Option | CPU预测中是否开启MKLDNN加速，默认为False |\n",
    "| --cpu_threads | Option| 设置cpu线程数，默认为1 |\n",
    "| --trt_calib_mode | Option| TensorRT是否使用校准功能，默认为False。使用TensorRT的int8功能时，需设置为True，使用PaddleSlim量化后的模型时需要设置为False |\n",
    "| --do_entrance_counting | Option | 是否统计出入口流量，默认为False |\n",
    "| --draw_center_traj | Option | 是否绘制跟踪轨迹，默认为False |\n",
    "| --region_type | Option | 'horizontal'（默认值）、'vertical'：表示流量统计方向选择；'custom'：表示设置车辆禁停区域 |\n",
    "| --region_polygon | Option | 设置禁停区域多边形多点的坐标，无默认值 |\n",
    "| --illegal_parking_time | Option | 设置禁停时间阈值，单位秒（s），-1（默认值）表示不做检查 |\n"
   ]
  },
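  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A sketch combining the flags above: count entrance/exit traffic and draw\n",
    "# trajectories on the people-counting demo video downloaded in section 3.2.\n",
    "# The flag names come straight from the parameter table; combining them this\n",
    "# way is our suggestion, not a prescribed recipe.\n",
    "!python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml -o MOT.enable=True --video_file=demo_input/human_count.mp4 --device=gpu --do_entrance_counting --draw_center_traj --output_dir=demo_output"
   ]
  },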
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. 方案介绍\n",
    "PP-Human 整体方案如下图所示:\n",
    "\n",
    "<div width=\"1000\" align=\"center\">\n",
    "  <img src=\"pphumanv2.png\"/>\n",
    "</div>\n",
    "\n",
    "### 行人检测\n",
    "- 采用PP-YOLOE L 作为目标检测模型\n",
    "- 详细文档参考[PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/configs/ppyoloe)和[检测跟踪文档](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/pipeline/docs/tutorials/pphuman_mot.md)\n",
    "\n",
    "### 行人跟踪\n",
    "- 采用SDE方案完成行人跟踪\n",
    "- 检测模型使用PP-YOLOE L(高精度)和S(轻量级)\n",
    "- 跟踪模块采用OC-SORT方案\n",
    "- 详细文档参考[OC-SORT](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/configs/mot/ocsort)和[检测跟踪文档](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/pipeline/docs/tutorials/pphuman_mot.md)\n",
    "\n",
    "### 跨镜行人跟踪\n",
    "- 使用PP-YOLOE + OC-SORT得到单镜头多目标跟踪轨迹\n",
    "- 使用ReID（StrongBaseline网络）对每一帧的检测结果提取特征\n",
    "- 多镜头轨迹特征进行匹配，得到跨镜头跟踪结果\n",
    "- 详细文档参考[跨镜跟踪](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/pipeline/docs/tutorials/pphuman_mtmct.md))\n",
    "\n",
    "### 属性识别\n",
    "- 使用PP-YOLOE + OC-SORT跟踪人体\n",
    "- 使用PP-HGNet、PP-LCNet（多分类模型）完成识别属性，主要属性包括年龄、性别、帽子、眼睛、上衣下衣款式、背包等\n",
    "- 详细文档参考[属性识别](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/pipeline/docs/tutorials/pphuman_attribute.md)\n",
    "\n",
    "### 行为识别：\n",
    "- 提供四种行为识别方案\n",
    "- 1. 基于骨骼点的行为识别，例如摔倒识别\n",
    "- 2. 基于图像分类的行为识别，例如打电话识别\n",
    "- 3. 基于检测的行为识别，例如吸烟识别\n",
    "- 4. 基于视频分类的行为识别，例如打架识别\n",
    "- 详细文档参考[行为识别](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.5/deploy/pipeline/docs/tutorials/pphuman_action.md)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.8.13 ('paddle_env')",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.13"
  },
  "vscode": {
   "interpreter": {
    "hash": "864bc28e4d94d9c1c4bd0747e4313c0ab41718ab445ced17dbe1a405af5ecc64"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
