{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Chinese Scene Text Recognition Baseline (PaddlePaddle 2.0-rc)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## About the Regular Contest\n",
    "PaddlePaddle (飞桨), built on Baidu's years of deep learning research and industrial practice, integrates a core deep learning framework, model libraries, end-to-end development kits, tooling components, and a service platform. It is China's first open-source, industrial-grade deep learning platform with leading technology and complete functionality. For more about PaddlePaddle, click [here](https://www.paddlepaddle.org.cn/).\n",
    "\n",
    "The PaddlePaddle Regular Contest, launched by Baidu in 2019, is open to AI developers worldwide and covers a broad range of topics and domains. Through long-running classic competition tasks, it gives developers a chance to learn and practice, helping them achieve strong results in PaddlePaddle competitions.\n",
    "\n",
    "Participants must use the PaddlePaddle framework to complete and submit the task on real industry data for the given problem. The contest is scored monthly: special PaddlePaddle prize packs go to anyone who breaks the all-time record and to the top 10 eligible participants of each month.\n",
    "\n",
    "## Dataset\n",
    "The dataset contains 60,000 images: 50,000 for training and 10,000 for testing. The images were collected from Chinese street views and cropped from text-line regions such as shop signs and landmarks. All images are preprocessed to a uniform height of 48 pixels, as shown below:\n",
    "\n",
    "![](https://ai-studio-static-online.cdn.bcebos.com/557b2c744ec74d25b3183e8d5ca239396548cdb3312c4fdcaf85d5d259cf4e9c)\n",
    "\n",
    "Label: g3738\n",
    "\n",
    "To avoid labeling ambiguity, the organizers apply the following preprocessing to every text line before computing metrics:\n",
    "- full-width characters are converted to half-width\n",
    "- English letters are lowercased\n",
    "- Chinese characters are converted to simplified\n",
    "- all whitespace and symbols are ignored"
   ]
  },
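  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a rough sanity check, most of these normalization rules can be sketched with the Python standard library alone: Unicode NFKC normalization folds full-width forms to half-width, `lower()` handles case, and a filter drops everything that is not a Chinese character, digit, or lowercase letter. This is only an illustrative sketch (it does not cover traditional-to-simplified conversion); the notebook implements the rules by hand further below.\n",
    "```python\n",
    "import unicodedata\n",
    "\n",
    "def normalize_label(s):\n",
    "    # NFKC folds full-width forms to half-width, e.g. \"６\" -> \"6\"\n",
    "    s = unicodedata.normalize(\"NFKC\", s).lower()\n",
    "    # keep only CJK ideographs, ASCII digits and lowercase ASCII letters\n",
    "    return \"\".join(c for c in s if \"\\u4e00\" <= c <= \"\\u9fa5\" or \"0\" <= c <= \"9\" or \"a\" <= c <= \"z\")\n",
    "\n",
    "print(normalize_label(\"６ＡＢ！ 胖胖\"))  # -> 6ab胖胖\n",
    "```"
   ]
  },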
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Unpack the data\n",
    "! unzip -qo /home/aistudio/data/data62842/train_images.zip -d /home/aistudio/data\n",
    "! cp /home/aistudio/data/data62842/train_label.csv /home/aistudio/data\n",
    "! unzip -qo /home/aistudio/data/data62843/test_images.zip -d /home/aistudio/data\n",
    "! rm -r /home/aistudio/data/__MACOSX"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## **Overall Approach**\n",
    "Model design (main components and rationale): the model follows the CRNN architecture of Shi et al. Participants can modify it from this starting point, or climb the leaderboard with an entirely new network.\n",
    "\n",
    "![](https://ai-studio-static-online.cdn.bcebos.com/48c90f0942e24996871abc6bcab47d37d5c339bb74704fc99b556f60d589c761)\n",
    "\n",
    "1. Configuration\n",
    "2. Preprocessing\n",
    "   - generate the labels\n",
    "   - image augmentation\n",
    "   - build the Reader\n",
    "3. Training\n",
    "   - define the input layer\n",
    "   - forward pass (network definition, model instantiation)\n",
    "   - backward pass (loss function, optimizer)\n",
    "4. Prediction\n",
    "   - build the Reader\n",
    "   - define the input layer\n",
    "   - predict with model.predict()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Let's go!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Configuration\n",
    "Hyperparameters and variables for both training and prediction live in the `cfg` dict, which makes the model easy to tweak and debug. AI Studio caps GPU usage at 70 hours per week, and image competitions need long training runs, so after debugging locally we can run the model as a background task; for that reason, use absolute paths wherever possible. For more on background tasks, see [here](https://aistudio.baidu.com/aistudio/projectdetail/1173726)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "cfg = {\n",
    "    \"input_size\": (3, 48, 256),\n",
    "    \"epoch\": 1,\n",
    "    \"batch_size\": 64,\n",
    "    \"learning_rate\": 0.0001,\n",
    "    \n",
    "    \"label_max_len\": -1,   # filled in during label preprocessing\n",
    "    \"classify_num\": -1,    # filled in when the dictionary is built\n",
    "    \"train_list\": \"/home/aistudio/data/train_label.txt\",\n",
    "    \"label_list\": \"/home/aistudio/data/label_list.txt\",\n",
    "    \"data_path\": \"/home/aistudio/data/train_images\",\n",
    "    \"infer_data_path\": \"/home/aistudio/data/test_images\",\n",
    "    \"checkpoint_path\": \"/home/aistudio/work/output/final\",\n",
    "    \"save_dir\": \"/home/aistudio/work/output\",\n",
    "}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Preprocessing\n",
    "1. Process the labels\n",
    "2. Build the dictionary\n",
    "3. Image augmentation\n",
    "4. Build the Reader"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To avoid labeling ambiguity, the organizers applied the following preprocessing to every text line before computing metrics:\n",
    "- full-width characters are converted to half-width\n",
    "- English letters are lowercased\n",
    "- Chinese characters are converted to simplified\n",
    "- all whitespace and symbols are ignored\n",
    "***\n",
    "`train_label.csv` is formatted as follows:\n",
    "| name | value |\n",
    "| :-: | :-: | \n",
    "| 0.jpg     | 拉拉     | \n",
    "| 1.jpg | ６号 |\n",
    "| 2.jpg | 胖胖 |\n",
    "| ... | ... |\n",
    "| 49999.jpg | 脑维修 |\n",
    "\n",
    "\n",
    "We read train_label.csv with pandas, normalize the annotation on every row, drop rows whose label becomes an empty string, and save the result as a .txt file."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "label max len:  77\n"
     ]
    }
   ],
   "source": [
    "import pandas as pd\n",
    "\n",
    "def Q2B(s):\n",
    "    \"\"\"Convert a full-width character to half-width\"\"\"\n",
    "    inside_code = ord(s)\n",
    "    if inside_code == 0x3000:\n",
    "        inside_code = 0x0020\n",
    "    else:\n",
    "        inside_code -= 0xfee0\n",
    "    if inside_code < 0x0020 or inside_code > 0x7e:  # not a half-width character after conversion, return the original\n",
    "        return s\n",
    "    return chr(inside_code)\n",
    "\n",
    "def stringQ2B(s):\n",
    "    \"\"\"Convert a whole string from full-width to half-width\"\"\"\n",
    "    return \"\".join([Q2B(c) for c in s])\n",
    "\n",
    "def is_chinese(s):\n",
    "    \"\"\"Check whether every character is a Chinese ideograph\"\"\"\n",
    "    for c in s:\n",
    "        if c < u'\\u4e00' or c > u'\\u9fa5':\n",
    "            return False\n",
    "    return True\n",
    "\n",
    "def is_number(s):\n",
    "    \"\"\"Check whether every character is a digit\"\"\"\n",
    "    for c in s:\n",
    "        if c < u'\\u0030' or c > u'\\u0039':\n",
    "            return False\n",
    "    return True\n",
    "\n",
    "def is_alphabet(s):\n",
    "    \"\"\"Check whether every character is a lowercase English letter\"\"\"\n",
    "    for c in s:\n",
    "        if c < u'\\u0061' or c > u'\\u007a':\n",
    "            return False\n",
    "    return True\n",
    "\n",
    "def del_other(s):\n",
    "    \"\"\"Keep only Chinese characters, digits and lowercase letters\"\"\"\n",
    "    res = str()\n",
    "    for c in s:\n",
    "        if not (is_chinese(c) or is_number(c) or is_alphabet(c)):\n",
    "            c = \"\"\n",
    "        res += c\n",
    "    return res\n",
    "\n",
    "\n",
    "df = pd.read_csv(\"/home/aistudio/data/train_label.csv\", encoding=\"gbk\")\n",
    "name, value = list(df.name), list(df.value)\n",
    "for i, label in enumerate(value):\n",
    "    # full-width -> half-width\n",
    "    label = stringQ2B(label)\n",
    "    # uppercase -> lowercase\n",
    "    label = \"\".join([c.lower() for c in label])\n",
    "    # drop all whitespace and symbols\n",
    "    label = del_other(label)\n",
    "    value[i] = label\n",
    "\n",
    "# drop rows whose label is now \"\"\n",
    "data = zip(name, value)\n",
    "data = list(filter(lambda c: c[1] != \"\", list(data)))\n",
    "# save to the data directory\n",
    "with open(cfg[\"train_list\"], \"w\") as f:\n",
    "    for line in data:\n",
    "        f.write(line[0] + \"\\t\" + line[1] + \"\\n\")\n",
    "\n",
    "# record the longest label in the training set\n",
    "with open(cfg[\"train_list\"], \"r\") as f:\n",
    "    for line in f:\n",
    "        name, label = line.strip().split(\"\\t\")\n",
    "        if len(label) > cfg[\"label_max_len\"]:\n",
    "            cfg[\"label_max_len\"] = len(label)\n",
    "\n",
    "print(\"label max len: \", cfg[\"label_max_len\"])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "With the labels processed, we can build the dictionary, which maps each character to an integer index during training. The labels we feed the network are integers, while the result we ultimately want is text, so a dictionary is needed to convert between the two."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "classify num:  3096\n"
     ]
    }
   ],
   "source": [
    "def create_label_list(train_list):\n",
    "    classSet = set()\n",
    "    with open(train_list) as f:\n",
    "        # the .txt file was written without a header line, so read every row\n",
    "        for line in f:\n",
    "            img_name, label = line.strip().split(\"\\t\")\n",
    "            for e in label:\n",
    "                classSet.add(e)\n",
    "    # one extra class on top of the characters for the CTC blank\n",
    "    cfg[\"classify_num\"] = len(classSet) + 1\n",
    "    classList = sorted(list(classSet))\n",
    "    with open(cfg[\"label_list\"], \"w\") as f:\n",
    "        for idx, c in enumerate(classList):\n",
    "            f.write(\"{}\\t{}\\n\".format(c, idx))\n",
    "            \n",
    "    return classSet\n",
    "\n",
    "classSet = create_label_list(cfg[\"train_list\"])\n",
    "print(\"classify num: \", len(classSet))"
   ]
  },
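  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that `label_list.txt` only stores the char -> index direction; at prediction time we also need the inverse map to turn network outputs back into text. A minimal sketch of the round trip (the four sample characters here are made up; the real set has about 3,000 entries):\n",
    "```python\n",
    "chars = sorted({\"6\", \"号\", \"拉\", \"胖\"})          # stand-in for the real character set\n",
    "char2idx = {c: i for i, c in enumerate(chars)}  # what label_list.txt stores\n",
    "idx2char = {i: c for c, i in char2idx.items()}  # needed when decoding predictions\n",
    "\n",
    "encoded = [char2idx[c] for c in \"6号\"]\n",
    "decoded = \"\".join(idx2char[i] for i in encoded)\n",
    "print(encoded, decoded)  # -> [0, 1] 6号\n",
    "```"
   ]
  },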
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Image augmentation is common in classification tasks, mainly to compensate for limited training data. Chinese text is particularly challenging: fonts vary widely, glyph structures are complex, and visually similar characters easily \"confuse\" a model. On top of that, capture devices of different resolutions produce uneven image quality, and real scenes add uneven lighting, cluttered backgrounds, and so on. All of this makes Chinese scene text recognition difficult.\n",
    "\n",
    "PaddlePaddle 2.0-rc provides preprocessing modules under `paddle.vision.transforms` that make image augmentation quick to set up."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from paddle.vision.transforms import Compose, ColorJitter, Resize, RandomRotation, Normalize\n",
    "import cv2\n",
    "\n",
    "def preprocess(img):\n",
    "    transform = Compose([\n",
    "        ColorJitter(0.2, 0.2, 0.2, 0.2),\n",
    "        Resize(size=(48, 256)),\n",
    "        RandomRotation(5)\n",
    "        ])\n",
    "    # HWC -> CHW: transpose (not reshape) so the pixel layout stays intact\n",
    "    img = np.transpose(transform(img), (2, 0, 1)).astype(\"float32\")\n",
    "    return img / 255."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are two ways to build a Reader: **Dataset** and **DataLoader**.\n",
    "\n",
    "paddle.io.Dataset is mostly used together with paddle.Model(), while DataLoader returns an iterator that walks once through a given Dataset in the order given by its batch_sampler. Dataset is used as follows; for DataLoader, see the [official docs](https://www.paddlepaddle.org.cn/documentation/docs/zh/2.0-rc/api/paddle/io/DataLoader_cn.html#dataloader)\n",
    "```python\n",
    "# Step 1: subclass paddle.io.Dataset\n",
    "class MyDataset(Dataset):\n",
    "\n",
    "    # Step 2: implement the constructor, define how data is read, split train and test sets\n",
    "    def __init__(self, mode='train'):\n",
    "        super().__init__()\n",
    "        if mode == 'train':\n",
    "            self.data = [\n",
    "                ['traindata1', 'label1'],\n",
    "                ['traindata2', 'label2'],\n",
    "            ]\n",
    "        else:\n",
    "            self.data = [\n",
    "                ['testdata1', 'label1'],\n",
    "                ['testdata2', 'label2'],\n",
    "            ]\n",
    "\n",
    "    # Step 3: implement __getitem__, returning one sample (data, label) for a given index\n",
    "    def __getitem__(self, index):\n",
    "        data = self.data[index][0]\n",
    "        label = self.data[index][1]\n",
    "        return data, label\n",
    "\n",
    "    # Step 4: implement __len__, returning the total number of samples\n",
    "    def __len__(self):\n",
    "        return len(self.data)\n",
    "\n",
    "# usage\n",
    "reader = MyDataset()\n",
    "for i in range(len(reader)):\n",
    "    print(reader[i])\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Pair up image names and labels (the .txt file has no header row)\n",
    "train_data = list()\n",
    "with open(cfg[\"train_list\"], \"r\") as f:\n",
    "    for line in f:\n",
    "        img_name, label = line.strip().split(\"\\t\")\n",
    "        train_data.append([img_name, label])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import numpy as np\n",
    "from paddle.io import Dataset\n",
    "\n",
    "\n",
    "class Reader(Dataset):\n",
    "    def __init__(self, data, is_val=False):\n",
    "        super().__init__()\n",
    "        # split into training and validation sets (last 20% for validation)\n",
    "        self.samples = data[-int(len(data)*0.2):] if is_val else data[:-int(len(data)*0.2)]\n",
    "        # load the char -> index dictionary once instead of re-reading the file per character\n",
    "        self.label_dict = dict()\n",
    "        with open(cfg[\"label_list\"]) as f:\n",
    "            for line in f:\n",
    "                k, v = line.strip(\"\\n\").split(\"\\t\")\n",
    "                self.label_dict[k] = int(v)\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        # load and augment the image\n",
    "        img_name, label_str = self.samples[idx]\n",
    "        img = cv2.imread(os.path.join(cfg[\"data_path\"], img_name))\n",
    "        img = preprocess(img)\n",
    "        # map characters to indices\n",
    "        temp_label = [self.label_dict[c] for c in label_str]\n",
    "        # pad the label up to label_max_len+1 with the blank index\n",
    "        label = np.ones(cfg[\"label_max_len\"]+1, dtype=\"int32\") * (cfg[\"classify_num\"]-1)\n",
    "        label[: len(temp_label)] = np.array(temp_label, dtype=\"int32\")\n",
    "        return img, label\n",
    "\n",
    "    def __len__(self):\n",
    "        # number of images per epoch\n",
    "        return len(self.samples)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 训练"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "At the input layer, we use InputSpec to declare the input specification for both the images and the labels."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "from paddle.nn import Conv2D, BatchNorm2D, LeakyReLU, MaxPool2D, LSTM, Linear, Dropout\n",
    "from paddle.vision.models import ResNet, resnet34\n",
    "from paddle.vision.models.resnet import BottleneckBlock\n",
    "import paddle\n",
    "\n",
    "\n",
    "# Define the input layer; -1 in dim 0 lets the batch size vary at inference time\n",
    "input_define = paddle.static.InputSpec(\n",
    "    shape=[-1, cfg[\"input_size\"][0], cfg[\"input_size\"][1], cfg[\"input_size\"][2]],\n",
    "    dtype=\"float32\",\n",
    "    name=\"img\")\n",
    "\n",
    "# Define the label\n",
    "label_define = paddle.static.InputSpec(\n",
    "    shape=[-1, cfg[\"label_max_len\"]+1],\n",
    "    dtype=\"int32\",\n",
    "    name=\"label\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Version 2.0 offers high-level APIs: the modules under paddle.vision.models give us the convolutional backbone directly."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2021-03-16 15:17:39,959 - INFO - unique_endpoints {''}\n",
      "2021-03-16 15:17:39,962 - INFO - Downloading resnet34.pdparams from https://paddle-hapi.bj.bcebos.com/models/resnet34.pdparams\n",
      "100%|██████████| 128669/128669 [00:01<00:00, 64857.10it/s]\n",
      "2021-03-16 15:17:42,125 - INFO - File /home/aistudio/.cache/paddle/hapi/weights/resnet34.pdparams md5 checking...\n"
     ]
    }
   ],
   "source": [
    "class Net(paddle.nn.Layer):\n",
    "    def __init__(self, mode=\"train\"):\n",
    "        super().__init__()\n",
    "        self.mode = mode\n",
    "        # CNN\n",
    "        # self.resnet34 = ResNet(BottleneckBlock, 34, -1, False)\n",
    "        self.resnet = resnet34(True, num_classes=-1, with_pool=False)\n",
    "        self.linear_1 = Linear(16, 100)\n",
    "        # RNN\n",
    "        self.lstm = LSTM(512, 256, direction=\"bidirectional\")\n",
    "        self.dropout = Dropout()\n",
    "        self.linear_2 = Linear(512, cfg[\"classify_num\"])\n",
    "\n",
    "\n",
    "    def forward(self, x):\n",
    "        # (-1, 3, 48, 256)\n",
    "        x = self.resnet(x)\n",
    "        # (-1, 512, 2, 8)\n",
    "        x = paddle.tensor.flatten(x, 2)\n",
    "        # (-1, 512, 16)\n",
    "        x = self.linear_1(x)\n",
    "        # (-1, 512, 100)\n",
    "        x = paddle.tensor.transpose(x, [0, 2, 1])\n",
    "        # (-1, 100, 512)\n",
    "        x = self.lstm(x)[0]\n",
    "        # (-1, 100, hidden_size*2)\n",
    "        x = self.dropout(x)\n",
    "        x = self.linear_2(x)\n",
    "        # (-1, 100, 3097)\n",
    "        # ctc_loss applies softmax internally during training, so in inference mode we apply softmax ourselves to get per-step probabilities\n",
    "        if self.mode == \"eval\":\n",
    "            # output shape = (batch size, max label len, num classes)\n",
    "            x = paddle.nn.functional.softmax(x)\n",
    "            # take the most likely class index at each time step\n",
    "            x = paddle.tensor.argmax(x, axis=-1)\n",
    "        return x\n",
    "\n",
    "# Instantiate the model\n",
    "model = paddle.Model(Net(), inputs=input_define, labels=label_define)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One especially handy method is **summary()**: it prints the model's network structure, and calling it actually runs the network once. In dynamic-graph mode we otherwise have to work out every layer's input and output shapes by hand, and the slightest mismatch produces a hard-to-read error; putting a print after the suspect layer and re-running summary makes debugging much faster."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "--------------------------------------------------------------------------------------------------\n",
      " Layer (type)       Input Shape                     Output Shape                     Param #    \n",
      "==================================================================================================\n",
      "   Conv2D-1      [[1, 3, 48, 256]]                [1, 64, 24, 128]                    9,408     \n",
      " BatchNorm2D-1   [[1, 64, 24, 128]]               [1, 64, 24, 128]                     256      \n",
      "    ReLU-1       [[1, 64, 24, 128]]               [1, 64, 24, 128]                      0       \n",
      "  MaxPool2D-1    [[1, 64, 24, 128]]                [1, 64, 12, 64]                      0       \n",
      "   Conv2D-2      [[1, 64, 12, 64]]                 [1, 64, 12, 64]                   36,864     \n",
      " BatchNorm2D-2   [[1, 64, 12, 64]]                 [1, 64, 12, 64]                     256      \n",
      "    ReLU-2       [[1, 64, 12, 64]]                 [1, 64, 12, 64]                      0       \n",
      "   Conv2D-3      [[1, 64, 12, 64]]                 [1, 64, 12, 64]                   36,864     \n",
      " BatchNorm2D-3   [[1, 64, 12, 64]]                 [1, 64, 12, 64]                     256      \n",
      " BasicBlock-1    [[1, 64, 12, 64]]                 [1, 64, 12, 64]                      0       \n",
      "   Conv2D-4      [[1, 64, 12, 64]]                 [1, 64, 12, 64]                   36,864     \n",
      " BatchNorm2D-4   [[1, 64, 12, 64]]                 [1, 64, 12, 64]                     256      \n",
      "    ReLU-3       [[1, 64, 12, 64]]                 [1, 64, 12, 64]                      0       \n",
      "   Conv2D-5      [[1, 64, 12, 64]]                 [1, 64, 12, 64]                   36,864     \n",
      " BatchNorm2D-5   [[1, 64, 12, 64]]                 [1, 64, 12, 64]                     256      \n",
      " BasicBlock-2    [[1, 64, 12, 64]]                 [1, 64, 12, 64]                      0       \n",
      "   Conv2D-6      [[1, 64, 12, 64]]                 [1, 64, 12, 64]                   36,864     \n",
      " BatchNorm2D-6   [[1, 64, 12, 64]]                 [1, 64, 12, 64]                     256      \n",
      "    ReLU-4       [[1, 64, 12, 64]]                 [1, 64, 12, 64]                      0       \n",
      "   Conv2D-7      [[1, 64, 12, 64]]                 [1, 64, 12, 64]                   36,864     \n",
      " BatchNorm2D-7   [[1, 64, 12, 64]]                 [1, 64, 12, 64]                     256      \n",
      " BasicBlock-3    [[1, 64, 12, 64]]                 [1, 64, 12, 64]                      0       \n",
      "   Conv2D-9      [[1, 64, 12, 64]]                 [1, 128, 6, 32]                   73,728     \n",
      " BatchNorm2D-9   [[1, 128, 6, 32]]                 [1, 128, 6, 32]                     512      \n",
      "    ReLU-5       [[1, 128, 6, 32]]                 [1, 128, 6, 32]                      0       \n",
      "   Conv2D-10     [[1, 128, 6, 32]]                 [1, 128, 6, 32]                   147,456    \n",
      "BatchNorm2D-10   [[1, 128, 6, 32]]                 [1, 128, 6, 32]                     512      \n",
      "   Conv2D-8      [[1, 64, 12, 64]]                 [1, 128, 6, 32]                    8,192     \n",
      " BatchNorm2D-8   [[1, 128, 6, 32]]                 [1, 128, 6, 32]                     512      \n",
      " BasicBlock-4    [[1, 64, 12, 64]]                 [1, 128, 6, 32]                      0       \n",
      "   Conv2D-11     [[1, 128, 6, 32]]                 [1, 128, 6, 32]                   147,456    \n",
      "BatchNorm2D-11   [[1, 128, 6, 32]]                 [1, 128, 6, 32]                     512      \n",
      "    ReLU-6       [[1, 128, 6, 32]]                 [1, 128, 6, 32]                      0       \n",
      "   Conv2D-12     [[1, 128, 6, 32]]                 [1, 128, 6, 32]                   147,456    \n",
      "BatchNorm2D-12   [[1, 128, 6, 32]]                 [1, 128, 6, 32]                     512      \n",
      " BasicBlock-5    [[1, 128, 6, 32]]                 [1, 128, 6, 32]                      0       \n",
      "   Conv2D-13     [[1, 128, 6, 32]]                 [1, 128, 6, 32]                   147,456    \n",
      "BatchNorm2D-13   [[1, 128, 6, 32]]                 [1, 128, 6, 32]                     512      \n",
      "    ReLU-7       [[1, 128, 6, 32]]                 [1, 128, 6, 32]                      0       \n",
      "   Conv2D-14     [[1, 128, 6, 32]]                 [1, 128, 6, 32]                   147,456    \n",
      "BatchNorm2D-14   [[1, 128, 6, 32]]                 [1, 128, 6, 32]                     512      \n",
      " BasicBlock-6    [[1, 128, 6, 32]]                 [1, 128, 6, 32]                      0       \n",
      "   Conv2D-15     [[1, 128, 6, 32]]                 [1, 128, 6, 32]                   147,456    \n",
      "BatchNorm2D-15   [[1, 128, 6, 32]]                 [1, 128, 6, 32]                     512      \n",
      "    ReLU-8       [[1, 128, 6, 32]]                 [1, 128, 6, 32]                      0       \n",
      "   Conv2D-16     [[1, 128, 6, 32]]                 [1, 128, 6, 32]                   147,456    \n",
      "BatchNorm2D-16   [[1, 128, 6, 32]]                 [1, 128, 6, 32]                     512      \n",
      " BasicBlock-7    [[1, 128, 6, 32]]                 [1, 128, 6, 32]                      0       \n",
      "   Conv2D-18     [[1, 128, 6, 32]]                 [1, 256, 3, 16]                   294,912    \n",
      "BatchNorm2D-18   [[1, 256, 3, 16]]                 [1, 256, 3, 16]                    1,024     \n",
      "    ReLU-9       [[1, 256, 3, 16]]                 [1, 256, 3, 16]                      0       \n",
      "   Conv2D-19     [[1, 256, 3, 16]]                 [1, 256, 3, 16]                   589,824    \n",
      "BatchNorm2D-19   [[1, 256, 3, 16]]                 [1, 256, 3, 16]                    1,024     \n",
      "   Conv2D-17     [[1, 128, 6, 32]]                 [1, 256, 3, 16]                   32,768     \n",
      "BatchNorm2D-17   [[1, 256, 3, 16]]                 [1, 256, 3, 16]                    1,024     \n",
      " BasicBlock-8    [[1, 128, 6, 32]]                 [1, 256, 3, 16]                      0       \n",
      "   Conv2D-20     [[1, 256, 3, 16]]                 [1, 256, 3, 16]                   589,824    \n",
      "BatchNorm2D-20   [[1, 256, 3, 16]]                 [1, 256, 3, 16]                    1,024     \n",
      "    ReLU-10      [[1, 256, 3, 16]]                 [1, 256, 3, 16]                      0       \n",
      "   Conv2D-21     [[1, 256, 3, 16]]                 [1, 256, 3, 16]                   589,824    \n",
      "BatchNorm2D-21   [[1, 256, 3, 16]]                 [1, 256, 3, 16]                    1,024     \n",
      " BasicBlock-9    [[1, 256, 3, 16]]                 [1, 256, 3, 16]                      0       \n",
      "   Conv2D-22     [[1, 256, 3, 16]]                 [1, 256, 3, 16]                   589,824    \n",
      "BatchNorm2D-22   [[1, 256, 3, 16]]                 [1, 256, 3, 16]                    1,024     \n",
      "    ReLU-11      [[1, 256, 3, 16]]                 [1, 256, 3, 16]                      0       \n",
      "   Conv2D-23     [[1, 256, 3, 16]]                 [1, 256, 3, 16]                   589,824    \n",
      "BatchNorm2D-23   [[1, 256, 3, 16]]                 [1, 256, 3, 16]                    1,024     \n",
      " BasicBlock-10   [[1, 256, 3, 16]]                 [1, 256, 3, 16]                      0       \n",
      "   Conv2D-24     [[1, 256, 3, 16]]                 [1, 256, 3, 16]                   589,824    \n",
      "BatchNorm2D-24   [[1, 256, 3, 16]]                 [1, 256, 3, 16]                    1,024     \n",
      "    ReLU-12      [[1, 256, 3, 16]]                 [1, 256, 3, 16]                      0       \n",
      "   Conv2D-25     [[1, 256, 3, 16]]                 [1, 256, 3, 16]                   589,824    \n",
      "BatchNorm2D-25   [[1, 256, 3, 16]]                 [1, 256, 3, 16]                    1,024     \n",
      " BasicBlock-11   [[1, 256, 3, 16]]                 [1, 256, 3, 16]                      0       \n",
      "   Conv2D-26     [[1, 256, 3, 16]]                 [1, 256, 3, 16]                   589,824    \n",
      "BatchNorm2D-26   [[1, 256, 3, 16]]                 [1, 256, 3, 16]                    1,024     \n",
      "    ReLU-13      [[1, 256, 3, 16]]                 [1, 256, 3, 16]                      0       \n",
      "   Conv2D-27     [[1, 256, 3, 16]]                 [1, 256, 3, 16]                   589,824    \n",
      "BatchNorm2D-27   [[1, 256, 3, 16]]                 [1, 256, 3, 16]                    1,024     \n",
      " BasicBlock-12   [[1, 256, 3, 16]]                 [1, 256, 3, 16]                      0       \n",
      "   Conv2D-28     [[1, 256, 3, 16]]                 [1, 256, 3, 16]                   589,824    \n",
      "BatchNorm2D-28   [[1, 256, 3, 16]]                 [1, 256, 3, 16]                    1,024     \n",
      "    ReLU-14      [[1, 256, 3, 16]]                 [1, 256, 3, 16]                      0       \n",
      "   Conv2D-29     [[1, 256, 3, 16]]                 [1, 256, 3, 16]                   589,824    \n",
      "BatchNorm2D-29   [[1, 256, 3, 16]]                 [1, 256, 3, 16]                    1,024     \n",
      " BasicBlock-13   [[1, 256, 3, 16]]                 [1, 256, 3, 16]                      0       \n",
      "   Conv2D-31     [[1, 256, 3, 16]]                 [1, 512, 2, 8]                   1,179,648   \n",
      "BatchNorm2D-31    [[1, 512, 2, 8]]                 [1, 512, 2, 8]                     2,048     \n",
      "    ReLU-15       [[1, 512, 2, 8]]                 [1, 512, 2, 8]                       0       \n",
      "   Conv2D-32      [[1, 512, 2, 8]]                 [1, 512, 2, 8]                   2,359,296   \n",
      "BatchNorm2D-32    [[1, 512, 2, 8]]                 [1, 512, 2, 8]                     2,048     \n",
      "   Conv2D-30     [[1, 256, 3, 16]]                 [1, 512, 2, 8]                    131,072    \n",
      "BatchNorm2D-30    [[1, 512, 2, 8]]                 [1, 512, 2, 8]                     2,048     \n",
      " BasicBlock-14   [[1, 256, 3, 16]]                 [1, 512, 2, 8]                       0       \n",
      "   Conv2D-33      [[1, 512, 2, 8]]                 [1, 512, 2, 8]                   2,359,296   \n",
      "BatchNorm2D-33    [[1, 512, 2, 8]]                 [1, 512, 2, 8]                     2,048     \n",
      "    ReLU-16       [[1, 512, 2, 8]]                 [1, 512, 2, 8]                       0       \n",
      "   Conv2D-34      [[1, 512, 2, 8]]                 [1, 512, 2, 8]                   2,359,296   \n",
      "BatchNorm2D-34    [[1, 512, 2, 8]]                 [1, 512, 2, 8]                     2,048     \n",
      " BasicBlock-15    [[1, 512, 2, 8]]                 [1, 512, 2, 8]                       0       \n",
      "   Conv2D-35      [[1, 512, 2, 8]]                 [1, 512, 2, 8]                   2,359,296   \n",
      "BatchNorm2D-35    [[1, 512, 2, 8]]                 [1, 512, 2, 8]                     2,048     \n",
      "    ReLU-17       [[1, 512, 2, 8]]                 [1, 512, 2, 8]                       0       \n",
      "   Conv2D-36      [[1, 512, 2, 8]]                 [1, 512, 2, 8]                   2,359,296   \n",
      "BatchNorm2D-36    [[1, 512, 2, 8]]                 [1, 512, 2, 8]                     2,048     \n",
      " BasicBlock-16    [[1, 512, 2, 8]]                 [1, 512, 2, 8]                       0       \n",
      "   ResNet-1      [[1, 3, 48, 256]]                 [1, 512, 2, 8]                       0       \n",
      "   Linear-1        [[1, 512, 16]]                   [1, 512, 100]                     1,700     \n",
      "    LSTM-1        [[1, 100, 512]]    [[1, 100, 512], [[2, 1, 256], [2, 1, 256]]]    1,576,960   \n",
      "   Dropout-1      [[1, 100, 512]]                   [1, 100, 512]                       0       \n",
      "   Linear-2       [[1, 100, 512]]                  [1, 100, 3097]                   1,588,761   \n",
      "==================================================================================================\n",
      "Total params: 24,469,117\n",
      "Trainable params: 24,435,069\n",
      "Non-trainable params: 34,048\n",
      "--------------------------------------------------------------------------------------------------\n",
      "Input size (MB): 0.14\n",
      "Forward/backward pass size (MB): 26.91\n",
      "Params size (MB): 93.34\n",
      "Estimated Total Size (MB): 120.39\n",
      "--------------------------------------------------------------------------------------------------\n",
      "\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'total_params': 24469117, 'trainable_params': 24435069}"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model.summary()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "The loss value printed in the log is the current step, and the metric is the average value of previous step.\n",
      "Epoch 1/1\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:77: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\n",
      "  return (isinstance(seq, collections.Sequence) and\n",
      "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/nn/layer/norm.py:648: UserWarning: When training, we now always track global mean and variance.\n",
      "  \"When training, we now always track global mean and variance.\")\n",
      "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/math_op_patch.py:238: UserWarning: The dtype of left and right variables are not the same, left dtype is VarType.FP32, but right dtype is VarType.INT64, the right dtype will convert to VarType.FP32\n",
      "  format(lhs_dtype, rhs_dtype, lhs_dtype))\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "step 621/621 [==============================] - loss: 6.9796 - 783ms/step         \n",
      "save checkpoint at /home/aistudio/work/output/0\n",
      "Eval begin...\n",
      "The loss value printed in the log is the current batch, and the metric is the average value of previous step.\n",
      "step 156/156 [==============================] - loss: 6.6648 - 655ms/step         \n",
      "Eval samples: 9922\n",
      "save checkpoint at /home/aistudio/work/output/final\n"
     ]
    }
   ],
   "source": [
    "class CTCLoss(paddle.nn.Layer):\n",
    "    def __init__(self):\n",
    "        \"\"\"\n",
    "        Define the CTC loss\n",
    "        \"\"\"\n",
    "        super().__init__()\n",
    "\n",
    "    def forward(self, ipt, label):\n",
    "        input_lengths = paddle.tensor.creation.fill_constant([ipt.shape[0]], \"int64\", ipt.shape[1])\n",
    "        # length of the un-padded portion of each label\n",
    "        label_lengths = list()\n",
    "        for i in range(label.shape[0]):\n",
    "            idx = 0\n",
    "            while label[i][idx] != cfg[\"classify_num\"]-1:\n",
    "                idx += 1\n",
    "            label_lengths.append(idx)\n",
    "        label_lengths = paddle.to_tensor(label_lengths, dtype=\"int64\")\n",
    "        # transpose logits to (time steps, batch, classes) as the ctc_loss API requires\n",
    "        ipt = paddle.tensor.transpose(ipt, [1, 0, 2])\n",
    "        # compute the loss\n",
    "        loss = paddle.nn.functional.ctc_loss(ipt, label, input_lengths, label_lengths, blank=cfg[\"classify_num\"]-1)\n",
    "        return loss\n",
    "\n",
    "# Define the optimizer\n",
    "optimizer = paddle.optimizer.Adam(learning_rate=cfg[\"learning_rate\"], parameters=model.parameters())\n",
    "\n",
    "# Configure the run environment and attach the optimizer and loss to the model\n",
    "model.prepare(\n",
    "    optimizer=optimizer,\n",
    "    loss=CTCLoss())\n",
    "\n",
    "# Run training\n",
    "model.fit(\n",
    "    train_data=Reader(train_data),\n",
    "    eval_data=Reader(train_data, is_val=True),\n",
    "    batch_size=cfg[\"batch_size\"],\n",
    "    epochs=cfg[\"epoch\"],\n",
    "    save_dir=cfg[\"save_dir\"],\n",
    "    save_freq=10,\n",
    "    log_freq=1,\n",
    "    verbose=1)"
   ]
  },
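  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `label_lengths` loop in `CTCLoss.forward` above scans each padded label row until it hits the padding index `cfg[\"classify_num\"] - 1`. A toy, framework-free sketch of the same idea (the pad index 10 is an assumption for illustration only):\n",
    "\n",
    "```python\n",
    "# Toy padded label batch; 10 stands in for the padding/blank index\n",
    "labels = [\n",
    "    [3, 7, 2, 10, 10],   # true length 3\n",
    "    [5, 10, 10, 10, 10], # true length 1\n",
    "]\n",
    "\n",
    "def unpadded_lengths(batch, pad_idx):\n",
    "    # Count the leading non-padding entries of each row\n",
    "    lengths = []\n",
    "    for row in batch:\n",
    "        idx = 0\n",
    "        while idx < len(row) and row[idx] != pad_idx:\n",
    "            idx += 1\n",
    "        lengths.append(idx)\n",
    "    return lengths\n",
    "\n",
    "print(unpadded_lengths(labels, 10))  # [3, 1]\n",
    "```\n",
    "\n",
    "The `idx < len(row)` guard matters: a row that contains no padding at all would otherwise run past the end."
   ]
  },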
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 预测"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Similar to the training reader, but yields no labels\n",
    "class InferReader(Dataset):\n",
    "    def __init__(self, dir_path=None, img_path=None):\n",
    "        \"\"\"\n",
    "        Data reader for inference\n",
    "        :param dir_path: folder to run inference on (pass exactly one of the two)\n",
    "        :param img_path: single image to run inference on (pass exactly one of the two)\n",
    "        \"\"\"\n",
    "        super().__init__()\n",
    "        if dir_path:\n",
    "            # Collect the paths of all .jpg images in the folder\n",
    "            self.img_names = [i for i in os.listdir(dir_path) if os.path.splitext(i)[1] == \".jpg\"]\n",
    "            self.img_paths = [os.path.join(dir_path, i) for i in self.img_names]\n",
    "        elif img_path:\n",
    "            self.img_names = [os.path.split(img_path)[1]]\n",
    "            self.img_paths = [img_path]\n",
    "        else:\n",
    "            raise Exception(\"Please specify a folder or an image path to predict\")\n",
    "\n",
    "    def get_names(self):\n",
    "        \"\"\"\n",
    "        Return the image file names in inference order\n",
    "        \"\"\"\n",
    "        return self.img_names\n",
    "\n",
    "    def __getitem__(self, index):\n",
    "        # Read and preprocess the image at this index\n",
    "        file_path = self.img_paths[index]\n",
    "        \n",
    "        img = cv2.imread(file_path)\n",
    "        img = preprocess(img)\n",
    "        return img\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.img_paths)"
   ]
  },
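  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`InferReader` only has to implement the map-style dataset protocol: `__getitem__` returns one preprocessed sample and `__len__` reports how many samples exist, so `model.predict` can fetch and batch samples by index. A minimal framework-free sketch of that protocol (names and data are made up for illustration):\n",
    "\n",
    "```python\n",
    "class ToyReader:\n",
    "    # Stands in for InferReader; items are plain ints instead of images\n",
    "    def __init__(self, items):\n",
    "        self.items = items\n",
    "\n",
    "    def __getitem__(self, index):\n",
    "        # A real reader would load and preprocess the image here\n",
    "        return self.items[index] * 2\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.items)\n",
    "\n",
    "reader = ToyReader([1, 2, 3])\n",
    "print(len(reader), [reader[i] for i in range(len(reader))])  # 3 [2, 4, 6]\n",
    "```"
   ]
  },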
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2021-03-16 16:31:07,272 - INFO - unique_endpoints {''}\n",
      "2021-03-16 16:31:07,273 - INFO - File /home/aistudio/.cache/paddle/hapi/weights/resnet34.pdparams md5 checking...\n",
      "2021-03-16 16:31:07,607 - INFO - Found /home/aistudio/.cache/paddle/hapi/weights/resnet34.pdparams\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Predict begin...\n",
      "step 157/157 [==============================] - 90ms/step          \n",
      "Predict samples: 10000\n"
     ]
    }
   ],
   "source": [
    "# A minimal CTC greedy decoder\n",
    "import paddle\n",
    "def ctc_decode(text_idx, char_dict, blank=10):\n",
    "    \"\"\"\n",
    "    Simple CTC greedy decoder\n",
    "    :param text_idx: sequence of class indices to decode\n",
    "    :param char_dict: mapping from index to character\n",
    "    :param blank: index of the blank label\n",
    "    :return: decoded character list\n",
    "    \"\"\"\n",
    "    result = []\n",
    "    cache_idx = -1\n",
    "    for char_idx in text_idx:\n",
    "        # Keep a character only if it is neither blank nor a repeat of the previous step\n",
    "        if char_idx != blank and char_idx != cache_idx:\n",
    "            result.append(char_dict[char_idx])\n",
    "        cache_idx = char_idx\n",
    "    return result\n",
    "\n",
    "\n",
    "# Build the index-to-character dictionary from the label list file\n",
    "char_dict_cache = dict()\n",
    "with open(cfg[\"label_list\"], \"r\") as file:\n",
    "    for line in file.readlines():\n",
    "        char, idx = line.split(\"\\t\")\n",
    "        char_dict_cache[int(idx.strip(\"\\n\"))] = char\n",
    "\n",
    "\n",
    "# Instantiate the inference model\n",
    "model = paddle.Model(Net(mode=\"eval\"), inputs=input_define)\n",
    "# Load the trained weights\n",
    "model.load(cfg[\"checkpoint_path\"])\n",
    "# Configure the run environment\n",
    "model.prepare()\n",
    "\n",
    "# Build the inference reader and run prediction\n",
    "infer_reader = InferReader(cfg[\"infer_data_path\"])\n",
    "img_names = infer_reader.get_names()\n",
    "results = model.predict(infer_reader, batch_size=cfg[\"batch_size\"])\n",
    "\n",
    "# Open the output file once so every batch is kept, and write the decoded text\n",
    "index = 0\n",
    "with open(\"results.txt\", \"w\") as f:\n",
    "    f.write(\"new_name\\tvalue\\n\")\n",
    "    for text_batch in results[0]:\n",
    "        for prob in text_batch:\n",
    "            out = ctc_decode(prob, char_dict_cache, blank=cfg[\"classify_num\"]-1)\n",
    "            f.write(img_names[index] + \"\\t\" + \"\".join(out) + \"\\n\")\n",
    "            index += 1"
   ]
  },
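  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see what `ctc_decode` does, trace it on a toy index sequence: immediate repeats are collapsed and blanks are dropped, but a blank between two identical indices keeps both characters. A small self-contained example (the dictionary and blank index 4 are made up for illustration):\n",
    "\n",
    "```python\n",
    "def ctc_decode(text_idx, char_dict, blank):\n",
    "    # Greedy CTC collapse: skip blanks and immediate repeats\n",
    "    result = []\n",
    "    cache_idx = -1\n",
    "    for char_idx in text_idx:\n",
    "        if char_idx != blank and char_idx != cache_idx:\n",
    "            result.append(char_dict[char_idx])\n",
    "        cache_idx = char_idx\n",
    "    return result\n",
    "\n",
    "chars = {0: 'a', 1: 'b', 2: 'c', 3: 'd'}\n",
    "# The two 0s separated by a blank survive as 'a', 'a'; the run of 1s collapses to one 'b'\n",
    "print(ctc_decode([0, 4, 0, 1, 1, 4, 2], chars, blank=4))  # ['a', 'a', 'b', 'c']\n",
    "```"
   ]
  },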
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Notes\n",
    "1. Why does the network output nan when the `reduction` parameter of `ctc_loss()` is set to mean, while sum and none work fine?\n",
    "> Some labels become empty after the label preprocessing step. For those samples there is no label to average over, so the division uses a denominator of len(label) = 0, which naturally produces nan. Samples whose label length is 0 therefore need to be removed.\n",
    "\n",
    "If you don't want to train a model from scratch, take a look at the PaddleOCR-based [baseline](https://aistudio.baidu.com/aistudio/projectdetail/1286140)\n",
    "\n",
    "Good luck to all contestants!![](https://ai-studio-static-online.cdn.bcebos.com/7b3dd522e0104505a0bc2f9a0d7ecf4ae544c4efeaa44c0c8d7dc1653b444e9d)\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
