<!DOCTYPE html>
<html lang="en-us">
    <head><meta charset='utf-8'>
<meta name='viewport' content='width=device-width, initial-scale=1'><meta name='description' content='数据集：PASCAL VOC 官网介绍：http://host.robots.ox.ac.uk/pascal/VOC/voc2012/
下载链接：http://d2l-data.s3-accelerate.amazonaws.com/VOCtrainval_11-May-2012.tar
下载解压后的文件目录如下，我们任务所需用到的是 ImageSets、 JPEGImages 和 SegmentationClass 文件夹。 ImageSets：用于分割任务模型的数据训练集、测试集和验证集文件.txt JPEGImages：图片数据 SegmentationClass：标签，也采用图像格式
VOCdevkit └── VOC2012 ├── Annotations ├── ImageSets ├── JPEGImages ├── SegmentationClass └── SegmentationObject UNet U-Net架构源于 Long、Shelhamer 和 Darrell 首次提出的所谓“全卷积网络”。
主要思想是通过连续的层来补充通常的承包网络，其中池化操作被上采样操作员所取代。因此，这些层增加了输出的分辨率。更重要的是，一个连续的卷积层可以学习根据这些信息组装一个精确的输出。
U-Net的一个重要修改是在上采样部分有大量的特征通道，这允许网络将上下文信息传播到更高分辨率的层。因此，扩展路径或多或少与收缩部分对称，并产生 u 形架构。网络只使用每个卷积的有效部分，没有任何全连接层。为了预测图像边界区域的像素，通过镜像输入图像来推断缺失的上下文。这种平铺策略对于将网络应用于大图像很重要，因为否则分辨率将受到GPU内存的限制。
网络结构设计 U-Net 的网络结构图在下方给出，左侧可视为一个编码器，右侧可视为一个解码器。编码器有四个子模块，每个子模块包含两个卷积层，每个子模块之后有一个通过max pool实现的下采样层。根据我调整的数据集，输入图像的分辨率是128x128。第1-5个模块的分辨率分别是128x128, 64x64, 32x32, 16x16和8x8。解码器包含四个子模块，分辨率通过上采样操作依次上升，直到与输入图像的分辨率一致。该网络还使用了跳跃连接，将上采样结果与编码器中具有相同分辨率的子模块的输出进行连接，作为解码器中下一个子模块的输入。
结构图  
模块说明 Model: &amp;#34;model&amp;#34; __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to  ================================================================================================== input_1 (InputLayer) [(None, 128, 128, 3) 0 __________________________________________________________________________________________________ conv2d (Conv2D) (None, 128, 128, 32) 896 input_1[0][0] __________________________________________________________________________________________________ conv2d_1 (Conv2D) (None, 128, 128, 32) 9248 conv2d[0][0] __________________________________________________________________________________________________ batch_normalization (BatchNorma (None, 128, 128, 32) 128 conv2d_1[0][0] __________________________________________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 64, 64, 32) 0 batch_normalization[0][0] __________________________________________________________________________________________________ conv2d_2 (Conv2D) (None, 64, 64, 64) 18496 max_pooling2d[0][0] __________________________________________________________________________________________________ conv2d_3 (Conv2D) (None, 64, 64, 64) 36928 conv2d_2[0][0] __________________________________________________________________________________________________ batch_normalization_1 (BatchNor (None, 64, 64, 64) 256 conv2d_3[0][0] __________________________________________________________________________________________________ max_pooling2d_1 (MaxPooling2D) (None, 32, 32, 64) 0 batch_normalization_1[0][0] __________________________________________________________________________________________________ conv2d_4 (Conv2D) (None, 32, 32, 128) 73856 max_pooling2d_1[0][0] __________________________________________________________________________________________________ conv2d_5 (Conv2D) (None, 32, 32, 128) 147584 conv2d_4[0][0] 
__________________________________________________________________________________________________ batch_normalization_2 (BatchNor (None, 32, 32, 128) 512 conv2d_5[0][0] __________________________________________________________________________________________________ max_pooling2d_2 (MaxPooling2D) (None, 16, 16, 128) 0 batch_normalization_2[0][0] __________________________________________________________________________________________________ conv2d_6 (Conv2D) (None, 16, 16, 256) 295168 max_pooling2d_2[0][0] __________________________________________________________________________________________________ conv2d_7 (Conv2D) (None, 16, 16, 256) 590080 conv2d_6[0][0] __________________________________________________________________________________________________ batch_normalization_3 (BatchNor (None, 16, 16, 256) 1024 conv2d_7[0][0] __________________________________________________________________________________________________ dropout (Dropout) (None, 16, 16, 256) 0 batch_normalization_3[0][0] __________________________________________________________________________________________________ max_pooling2d_3 (MaxPooling2D) (None, 8, 8, 256) 0 dropout[0][0] __________________________________________________________________________________________________ conv2d_8 (Conv2D) (None, 8, 8, 512) 1180160 max_pooling2d_3[0][0] __________________________________________________________________________________________________ conv2d_9 (Conv2D) (None, 8, 8, 512) 2359808 conv2d_8[0][0] __________________________________________________________________________________________________ batch_normalization_4 (BatchNor (None, 8, 8, 512) 2048 conv2d_9[0][0] __________________________________________________________________________________________________ dropout_1 (Dropout) (None, 8, 8, 512) 0 batch_normalization_4[0][0] __________________________________________________________________________________________________ conv2d_transpose (Conv2DTranspo (None, 16, 16, 256) 1179904 
dropout_1[0][0] __________________________________________________________________________________________________ concatenate (Concatenate) (None, 16, 16, 512) 0 conv2d_transpose[0][0] dropout[0][0] __________________________________________________________________________________________________ conv2d_10 (Conv2D) (None, 16, 16, 256) 1179904 concatenate[0][0] __________________________________________________________________________________________________ conv2d_11 (Conv2D) (None, 16, 16, 256) 590080 conv2d_10[0][0] __________________________________________________________________________________________________ conv2d_transpose_1 (Conv2DTrans (None, 32, 32, 128) 295040 conv2d_11[0][0] __________________________________________________________________________________________________ concatenate_1 (Concatenate) (None, 32, 32, 256) 0 conv2d_transpose_1[0][0] batch_normalization_2[0][0] __________________________________________________________________________________________________ conv2d_12 (Conv2D) (None, 32, 32, 128) 295040 concatenate_1[0][0] __________________________________________________________________________________________________ conv2d_13 (Conv2D) (None, 32, 32, 128) 147584 conv2d_12[0][0] __________________________________________________________________________________________________ conv2d_transpose_2 (Conv2DTrans (None, 64, 64, 64) 73792 conv2d_13[0][0] __________________________________________________________________________________________________ concatenate_2 (Concatenate) (None, 64, 64, 128) 0 conv2d_transpose_2[0][0] batch_normalization_1[0][0] __________________________________________________________________________________________________ conv2d_14 (Conv2D) (None, 64, 64, 64) 73792 concatenate_2[0][0] __________________________________________________________________________________________________ conv2d_15 (Conv2D) (None, 64, 64, 64) 36928 conv2d_14[0][0] 
__________________________________________________________________________________________________ conv2d_transpose_3 (Conv2DTrans (None, 128, 128, 32) 18464 conv2d_15[0][0] __________________________________________________________________________________________________ concatenate_3 (Concatenate) (None, 128, 128, 64) 0 conv2d_transpose_3[0][0] batch_normalization[0][0] __________________________________________________________________________________________________ conv2d_16 (Conv2D) (None, 128, 128, 32) 18464 concatenate_3[0][0] __________________________________________________________________________________________________ conv2d_17 (Conv2D) (None, 128, 128, 32) 9248 conv2d_16[0][0] __________________________________________________________________________________________________ conv2d_18 (Conv2D) (None, 128, 128, 32) 9248 conv2d_17[0][0] __________________________________________________________________________________________________ conv2d_19 (Conv2D) (None, 128, 128, 3) 99 conv2d_18[0][0] ================================================================================================== Total params: 8,643,779 Trainable params: 8,641,795 Non-trainable params: 1,984 ___________________________________________________ 模型定性和定量分析 训练模型使用了10个周期数，但在第2个周期数后训练准确率并没有因此大幅上升，平均稳定在86%以上。而训练误差也稳定在大概43%。验证误差和验证集准确率也是如此。训练上，在损失函数的选择上还有待改善。U-Net可能并不适合VOC此类数据集，其更适合特征少，需要浅层特征的数据集之类的。'><title>语义分割</title>

<link rel='canonical' href='https://enrique518.gitee.io/p/segmentation/'>

<link rel="stylesheet" href="/scss/style.min.css"><meta property='og:title' content='语义分割'>
<meta property='og:description' content='数据集：PASCAL VOC 官网介绍：http://host.robots.ox.ac.uk/pascal/VOC/voc2012/
下载链接：http://d2l-data.s3-accelerate.amazonaws.com/VOCtrainval_11-May-2012.tar
下载解压后的文件目录如下，我们任务所需用到的是 ImageSets、 JPEGImages 和 SegmentationClass 文件夹。 ImageSets：用于分割任务模型的数据训练集、测试集和验证集文件.txt JPEGImages：图片数据 SegmentationClass：标签，也采用图像格式
VOCdevkit └── VOC2012 ├── Annotations ├── ImageSets ├── JPEGImages ├── SegmentationClass └── SegmentationObject UNet U-Net架构源于 Long、Shelhamer 和 Darrell 首次提出的所谓“全卷积网络”。
主要思想是通过连续的层来补充通常的承包网络，其中池化操作被上采样操作员所取代。因此，这些层增加了输出的分辨率。更重要的是，一个连续的卷积层可以学习根据这些信息组装一个精确的输出。
U-Net的一个重要修改是在上采样部分有大量的特征通道，这允许网络将上下文信息传播到更高分辨率的层。因此，扩展路径或多或少与收缩部分对称，并产生 u 形架构。网络只使用每个卷积的有效部分，没有任何全连接层。为了预测图像边界区域的像素，通过镜像输入图像来推断缺失的上下文。这种平铺策略对于将网络应用于大图像很重要，因为否则分辨率将受到GPU内存的限制。
网络结构设计 U-Net 的网络结构图在下方给出，左侧可视为一个编码器，右侧可视为一个解码器。编码器有四个子模块，每个子模块包含两个卷积层，每个子模块之后有一个通过max pool实现的下采样层。根据我调整的数据集，输入图像的分辨率是128x128。第1-5个模块的分辨率分别是128x128, 64x64, 32x32, 16x16和8x8。解码器包含四个子模块，分辨率通过上采样操作依次上升，直到与输入图像的分辨率一致。该网络还使用了跳跃连接，将上采样结果与编码器中具有相同分辨率的子模块的输出进行连接，作为解码器中下一个子模块的输入。
结构图  
模块说明 Model: &amp;#34;model&amp;#34; __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to  ================================================================================================== input_1 (InputLayer) [(None, 128, 128, 3) 0 __________________________________________________________________________________________________ conv2d (Conv2D) (None, 128, 128, 32) 896 input_1[0][0] __________________________________________________________________________________________________ conv2d_1 (Conv2D) (None, 128, 128, 32) 9248 conv2d[0][0] __________________________________________________________________________________________________ batch_normalization (BatchNorma (None, 128, 128, 32) 128 conv2d_1[0][0] __________________________________________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 64, 64, 32) 0 batch_normalization[0][0] __________________________________________________________________________________________________ conv2d_2 (Conv2D) (None, 64, 64, 64) 18496 max_pooling2d[0][0] __________________________________________________________________________________________________ conv2d_3 (Conv2D) (None, 64, 64, 64) 36928 conv2d_2[0][0] __________________________________________________________________________________________________ batch_normalization_1 (BatchNor (None, 64, 64, 64) 256 conv2d_3[0][0] __________________________________________________________________________________________________ max_pooling2d_1 (MaxPooling2D) (None, 32, 32, 64) 0 batch_normalization_1[0][0] __________________________________________________________________________________________________ conv2d_4 (Conv2D) (None, 32, 32, 128) 73856 max_pooling2d_1[0][0] __________________________________________________________________________________________________ conv2d_5 (Conv2D) (None, 32, 32, 128) 147584 conv2d_4[0][0] 
__________________________________________________________________________________________________ batch_normalization_2 (BatchNor (None, 32, 32, 128) 512 conv2d_5[0][0] __________________________________________________________________________________________________ max_pooling2d_2 (MaxPooling2D) (None, 16, 16, 128) 0 batch_normalization_2[0][0] __________________________________________________________________________________________________ conv2d_6 (Conv2D) (None, 16, 16, 256) 295168 max_pooling2d_2[0][0] __________________________________________________________________________________________________ conv2d_7 (Conv2D) (None, 16, 16, 256) 590080 conv2d_6[0][0] __________________________________________________________________________________________________ batch_normalization_3 (BatchNor (None, 16, 16, 256) 1024 conv2d_7[0][0] __________________________________________________________________________________________________ dropout (Dropout) (None, 16, 16, 256) 0 batch_normalization_3[0][0] __________________________________________________________________________________________________ max_pooling2d_3 (MaxPooling2D) (None, 8, 8, 256) 0 dropout[0][0] __________________________________________________________________________________________________ conv2d_8 (Conv2D) (None, 8, 8, 512) 1180160 max_pooling2d_3[0][0] __________________________________________________________________________________________________ conv2d_9 (Conv2D) (None, 8, 8, 512) 2359808 conv2d_8[0][0] __________________________________________________________________________________________________ batch_normalization_4 (BatchNor (None, 8, 8, 512) 2048 conv2d_9[0][0] __________________________________________________________________________________________________ dropout_1 (Dropout) (None, 8, 8, 512) 0 batch_normalization_4[0][0] __________________________________________________________________________________________________ conv2d_transpose (Conv2DTranspo (None, 16, 16, 256) 1179904 
dropout_1[0][0] __________________________________________________________________________________________________ concatenate (Concatenate) (None, 16, 16, 512) 0 conv2d_transpose[0][0] dropout[0][0] __________________________________________________________________________________________________ conv2d_10 (Conv2D) (None, 16, 16, 256) 1179904 concatenate[0][0] __________________________________________________________________________________________________ conv2d_11 (Conv2D) (None, 16, 16, 256) 590080 conv2d_10[0][0] __________________________________________________________________________________________________ conv2d_transpose_1 (Conv2DTrans (None, 32, 32, 128) 295040 conv2d_11[0][0] __________________________________________________________________________________________________ concatenate_1 (Concatenate) (None, 32, 32, 256) 0 conv2d_transpose_1[0][0] batch_normalization_2[0][0] __________________________________________________________________________________________________ conv2d_12 (Conv2D) (None, 32, 32, 128) 295040 concatenate_1[0][0] __________________________________________________________________________________________________ conv2d_13 (Conv2D) (None, 32, 32, 128) 147584 conv2d_12[0][0] __________________________________________________________________________________________________ conv2d_transpose_2 (Conv2DTrans (None, 64, 64, 64) 73792 conv2d_13[0][0] __________________________________________________________________________________________________ concatenate_2 (Concatenate) (None, 64, 64, 128) 0 conv2d_transpose_2[0][0] batch_normalization_1[0][0] __________________________________________________________________________________________________ conv2d_14 (Conv2D) (None, 64, 64, 64) 73792 concatenate_2[0][0] __________________________________________________________________________________________________ conv2d_15 (Conv2D) (None, 64, 64, 64) 36928 conv2d_14[0][0] 
__________________________________________________________________________________________________ conv2d_transpose_3 (Conv2DTrans (None, 128, 128, 32) 18464 conv2d_15[0][0] __________________________________________________________________________________________________ concatenate_3 (Concatenate) (None, 128, 128, 64) 0 conv2d_transpose_3[0][0] batch_normalization[0][0] __________________________________________________________________________________________________ conv2d_16 (Conv2D) (None, 128, 128, 32) 18464 concatenate_3[0][0] __________________________________________________________________________________________________ conv2d_17 (Conv2D) (None, 128, 128, 32) 9248 conv2d_16[0][0] __________________________________________________________________________________________________ conv2d_18 (Conv2D) (None, 128, 128, 32) 9248 conv2d_17[0][0] __________________________________________________________________________________________________ conv2d_19 (Conv2D) (None, 128, 128, 3) 99 conv2d_18[0][0] ================================================================================================== Total params: 8,643,779 Trainable params: 8,641,795 Non-trainable params: 1,984 ___________________________________________________ 模型定性和定量分析 训练模型使用了10个周期数，但在第2个周期数后训练准确率并没有因此大幅上升，平均稳定在86%以上。而训练误差也稳定在大概43%。验证误差和验证集准确率也是如此。训练上，在损失函数的选择上还有待改善。U-Net可能并不适合VOC此类数据集，其更适合特征少，需要浅层特征的数据集之类的。'>
<meta property='og:url' content='https://enrique518.gitee.io/p/segmentation/'>
<meta property='og:site_name' content='Enriqueliu'>
<meta property='og:type' content='article'><meta property='article:section' content='Post' /><meta property='article:published_time' content='2022-01-21T00:00:00&#43;00:00'/><meta property='article:modified_time' content='2022-01-21T00:00:00&#43;00:00'/>
<meta name="twitter:title" content="语义分割">
<meta name="twitter:description" content="数据集：PASCAL VOC 官网介绍：http://host.robots.ox.ac.uk/pascal/VOC/voc2012/
下载链接：http://d2l-data.s3-accelerate.amazonaws.com/VOCtrainval_11-May-2012.tar
下载解压后的文件目录如下，我们任务所需用到的是 ImageSets、 JPEGImages 和 SegmentationClass 文件夹。 ImageSets：用于分割任务模型的数据训练集、测试集和验证集文件.txt JPEGImages：图片数据 SegmentationClass：标签，也采用图像格式
VOCdevkit └── VOC2012 ├── Annotations ├── ImageSets ├── JPEGImages ├── SegmentationClass └── SegmentationObject UNet U-Net架构源于 Long、Shelhamer 和 Darrell 首次提出的所谓“全卷积网络”。
主要思想是通过连续的层来补充通常的承包网络，其中池化操作被上采样操作员所取代。因此，这些层增加了输出的分辨率。更重要的是，一个连续的卷积层可以学习根据这些信息组装一个精确的输出。
U-Net的一个重要修改是在上采样部分有大量的特征通道，这允许网络将上下文信息传播到更高分辨率的层。因此，扩展路径或多或少与收缩部分对称，并产生 u 形架构。网络只使用每个卷积的有效部分，没有任何全连接层。为了预测图像边界区域的像素，通过镜像输入图像来推断缺失的上下文。这种平铺策略对于将网络应用于大图像很重要，因为否则分辨率将受到GPU内存的限制。
网络结构设计 U-Net 的网络结构图在下方给出，左侧可视为一个编码器，右侧可视为一个解码器。编码器有四个子模块，每个子模块包含两个卷积层，每个子模块之后有一个通过max pool实现的下采样层。根据我调整的数据集，输入图像的分辨率是128x128。第1-5个模块的分辨率分别是128x128, 64x64, 32x32, 16x16和8x8。解码器包含四个子模块，分辨率通过上采样操作依次上升，直到与输入图像的分辨率一致。该网络还使用了跳跃连接，将上采样结果与编码器中具有相同分辨率的子模块的输出进行连接，作为解码器中下一个子模块的输入。
结构图  
模块说明 Model: &amp;#34;model&amp;#34; __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to  ================================================================================================== input_1 (InputLayer) [(None, 128, 128, 3) 0 __________________________________________________________________________________________________ conv2d (Conv2D) (None, 128, 128, 32) 896 input_1[0][0] __________________________________________________________________________________________________ conv2d_1 (Conv2D) (None, 128, 128, 32) 9248 conv2d[0][0] __________________________________________________________________________________________________ batch_normalization (BatchNorma (None, 128, 128, 32) 128 conv2d_1[0][0] __________________________________________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 64, 64, 32) 0 batch_normalization[0][0] __________________________________________________________________________________________________ conv2d_2 (Conv2D) (None, 64, 64, 64) 18496 max_pooling2d[0][0] __________________________________________________________________________________________________ conv2d_3 (Conv2D) (None, 64, 64, 64) 36928 conv2d_2[0][0] __________________________________________________________________________________________________ batch_normalization_1 (BatchNor (None, 64, 64, 64) 256 conv2d_3[0][0] __________________________________________________________________________________________________ max_pooling2d_1 (MaxPooling2D) (None, 32, 32, 64) 0 batch_normalization_1[0][0] __________________________________________________________________________________________________ conv2d_4 (Conv2D) (None, 32, 32, 128) 73856 max_pooling2d_1[0][0] __________________________________________________________________________________________________ conv2d_5 (Conv2D) (None, 32, 32, 128) 147584 conv2d_4[0][0] 
__________________________________________________________________________________________________ batch_normalization_2 (BatchNor (None, 32, 32, 128) 512 conv2d_5[0][0] __________________________________________________________________________________________________ max_pooling2d_2 (MaxPooling2D) (None, 16, 16, 128) 0 batch_normalization_2[0][0] __________________________________________________________________________________________________ conv2d_6 (Conv2D) (None, 16, 16, 256) 295168 max_pooling2d_2[0][0] __________________________________________________________________________________________________ conv2d_7 (Conv2D) (None, 16, 16, 256) 590080 conv2d_6[0][0] __________________________________________________________________________________________________ batch_normalization_3 (BatchNor (None, 16, 16, 256) 1024 conv2d_7[0][0] __________________________________________________________________________________________________ dropout (Dropout) (None, 16, 16, 256) 0 batch_normalization_3[0][0] __________________________________________________________________________________________________ max_pooling2d_3 (MaxPooling2D) (None, 8, 8, 256) 0 dropout[0][0] __________________________________________________________________________________________________ conv2d_8 (Conv2D) (None, 8, 8, 512) 1180160 max_pooling2d_3[0][0] __________________________________________________________________________________________________ conv2d_9 (Conv2D) (None, 8, 8, 512) 2359808 conv2d_8[0][0] __________________________________________________________________________________________________ batch_normalization_4 (BatchNor (None, 8, 8, 512) 2048 conv2d_9[0][0] __________________________________________________________________________________________________ dropout_1 (Dropout) (None, 8, 8, 512) 0 batch_normalization_4[0][0] __________________________________________________________________________________________________ conv2d_transpose (Conv2DTranspo (None, 16, 16, 256) 1179904 
dropout_1[0][0] __________________________________________________________________________________________________ concatenate (Concatenate) (None, 16, 16, 512) 0 conv2d_transpose[0][0] dropout[0][0] __________________________________________________________________________________________________ conv2d_10 (Conv2D) (None, 16, 16, 256) 1179904 concatenate[0][0] __________________________________________________________________________________________________ conv2d_11 (Conv2D) (None, 16, 16, 256) 590080 conv2d_10[0][0] __________________________________________________________________________________________________ conv2d_transpose_1 (Conv2DTrans (None, 32, 32, 128) 295040 conv2d_11[0][0] __________________________________________________________________________________________________ concatenate_1 (Concatenate) (None, 32, 32, 256) 0 conv2d_transpose_1[0][0] batch_normalization_2[0][0] __________________________________________________________________________________________________ conv2d_12 (Conv2D) (None, 32, 32, 128) 295040 concatenate_1[0][0] __________________________________________________________________________________________________ conv2d_13 (Conv2D) (None, 32, 32, 128) 147584 conv2d_12[0][0] __________________________________________________________________________________________________ conv2d_transpose_2 (Conv2DTrans (None, 64, 64, 64) 73792 conv2d_13[0][0] __________________________________________________________________________________________________ concatenate_2 (Concatenate) (None, 64, 64, 128) 0 conv2d_transpose_2[0][0] batch_normalization_1[0][0] __________________________________________________________________________________________________ conv2d_14 (Conv2D) (None, 64, 64, 64) 73792 concatenate_2[0][0] __________________________________________________________________________________________________ conv2d_15 (Conv2D) (None, 64, 64, 64) 36928 conv2d_14[0][0] 
__________________________________________________________________________________________________ conv2d_transpose_3 (Conv2DTrans (None, 128, 128, 32) 18464 conv2d_15[0][0] __________________________________________________________________________________________________ concatenate_3 (Concatenate) (None, 128, 128, 64) 0 conv2d_transpose_3[0][0] batch_normalization[0][0] __________________________________________________________________________________________________ conv2d_16 (Conv2D) (None, 128, 128, 32) 18464 concatenate_3[0][0] __________________________________________________________________________________________________ conv2d_17 (Conv2D) (None, 128, 128, 32) 9248 conv2d_16[0][0] __________________________________________________________________________________________________ conv2d_18 (Conv2D) (None, 128, 128, 32) 9248 conv2d_17[0][0] __________________________________________________________________________________________________ conv2d_19 (Conv2D) (None, 128, 128, 3) 99 conv2d_18[0][0] ================================================================================================== Total params: 8,643,779 Trainable params: 8,641,795 Non-trainable params: 1,984 ___________________________________________________ 模型定性和定量分析 训练模型使用了10个周期数，但在第2个周期数后训练准确率并没有因此大幅上升，平均稳定在86%以上。而训练误差也稳定在大概43%。验证误差和验证集准确率也是如此。训练上，在损失函数的选择上还有待改善。U-Net可能并不适合VOC此类数据集，其更适合特征少，需要浅层特征的数据集之类的。">
    </head>
    <body class="
    article-page has-toc
">
    <script>
        (function() {
            const colorSchemeKey = 'StackColorScheme';
            if(!localStorage.getItem(colorSchemeKey)){
                localStorage.setItem(colorSchemeKey, "auto");
            }
        })();
    </script><script>
    (function() {
        const colorSchemeKey = 'StackColorScheme';
        const colorSchemeItem = localStorage.getItem(colorSchemeKey);
        const supportDarkMode = window.matchMedia('(prefers-color-scheme: dark)').matches === true;

        if (colorSchemeItem === 'dark' || (colorSchemeItem === 'auto' && supportDarkMode)) {
            

            document.documentElement.dataset.scheme = 'dark';
        } else {
            document.documentElement.dataset.scheme = 'light';
        }
    })();
</script>
<div class="container main-container flex 
    
        extended
    
">
    
        <div id="article-toolbar">
            <a href="https://enrique518.gitee.io/" class="back-home">
                <svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-chevron-left" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
  <path stroke="none" d="M0 0h24v24H0z"/>
  <polyline points="15 6 9 12 15 18" />
</svg>



                <span>Back</span>
            </a>
        </div>
    
<main class="main full-width">
    <article class="main-article">
    <header class="article-header">

    <div class="article-details">
    
    <header class="article-category">
        
            <a href="/categories/image-segmentation/" >
                Image segmentation
            </a>
        
            <a href="/categories/unet/" >
                UNet
            </a>
        
            <a href="/categories/tensorflow/" >
                tensorflow
            </a>
        
            <a href="/categories/pascal-voc/" >
                Pascal VOC
            </a>
        
    </header>
    

    <h2 class="article-title">
        <a href="/p/segmentation/">语义分割</a>
    </h2>

    

    
    <footer class="article-time">
        
            <div>
                <svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-calendar-time" width="56" height="56" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
  <path stroke="none" d="M0 0h24v24H0z"/>
  <path d="M11.795 21h-6.795a2 2 0 0 1 -2 -2v-12a2 2 0 0 1 2 -2h12a2 2 0 0 1 2 2v4" />
  <circle cx="18" cy="18" r="4" />
  <path d="M15 3v4" />
  <path d="M7 3v4" />
  <path d="M3 11h16" />
  <path d="M18 16.496v1.504l1 1" />
</svg>
                <time class="article-time--published">Jan 21, 2022</time>
            </div>
        

        
            <div>
                <svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-clock" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
  <path stroke="none" d="M0 0h24v24H0z"/>
  <circle cx="12" cy="12" r="9" />
  <polyline points="12 7 12 12 15 15" />
</svg>



                <time class="article-time--reading">
                    Reading time: 3 minutes
                </time>
            </div>
        
    </footer>
    
</div>
</header>

    <section class="article-content">
    <h2 id="数据集pascal-voc">数据集：PASCAL VOC</h2>
<p>Official site: http://host.robots.ox.ac.uk/pascal/VOC/voc2012/</p>
<p>Download link: http://d2l-data.s3-accelerate.amazonaws.com/VOCtrainval_11-May-2012.tar</p>
<p>The directory layout after downloading and extracting is shown below. For this task we only need the <strong>ImageSets</strong>, <strong>JPEGImages</strong>, and <strong>SegmentationClass</strong> folders.
<strong>ImageSets</strong>: .txt files listing the training, test, and validation splits for the segmentation task
<strong>JPEGImages</strong>: the image data
<strong>SegmentationClass</strong>: the labels, also stored as images</p>
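<p>As a minimal sketch of how these three folders fit together (the paths follow the standard VOC2012 layout; the function name is illustrative, not from the original code):</p>

```python
import os

def read_segmentation_split(voc_root, split="train"):
    """Read an ImageSets/Segmentation split list and pair each image name
    with its JPEG image and its class-label mask."""
    list_file = os.path.join(voc_root, "ImageSets", "Segmentation", split + ".txt")
    with open(list_file) as f:
        names = [line.strip() for line in f if line.strip()]
    images = [os.path.join(voc_root, "JPEGImages", n + ".jpg") for n in names]
    masks = [os.path.join(voc_root, "SegmentationClass", n + ".png") for n in names]
    return images, masks
```

<p>Each line of a split file such as <code>train.txt</code> is a bare image name; the same name indexes both the JPEG image and the PNG label mask.</p>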
<div class="highlight"><pre class="chroma"><code class="language-python" data-lang="python"><span class="n">VOCdevkit</span>
<span class="err">└──</span> <span class="n">VOC2012</span>
    <span class="err">├──</span> <span class="n">Annotations</span>
    <span class="err">├──</span> <span class="n">ImageSets</span>
    <span class="err">├──</span> <span class="n">JPEGImages</span>
    <span class="err">├──</span> <span class="n">SegmentationClass</span>
    <span class="err">└──</span> <span class="n">SegmentationObject</span>
</code></pre></div><h2 id="unet">UNet</h2>
<p>The U-Net architecture stems from the so-called &#34;fully convolutional network&#34; first proposed by Long, Shelhamer, and Darrell.</p>
<p>The main idea is to supplement a usual contracting network with successive layers in which pooling operations are replaced by upsampling operators. These layers therefore increase the resolution of the output. More importantly, subsequent convolutional layers can learn to assemble a precise output from this information.</p>
<p>One important modification in U-Net is that the upsampling part also has a large number of feature channels, which lets the network propagate context information to higher-resolution layers. As a consequence, the expansive path is more or less symmetric to the contracting part, yielding a u-shaped architecture. The network uses only the valid part of each convolution and has no fully connected layers. To predict pixels in the border region of an image, the missing context is extrapolated by mirroring the input image. This tiling strategy is important for applying the network to large images, since otherwise the resolution would be limited by GPU memory.</p>
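<p>The border mirroring described above corresponds to reflect padding; a minimal NumPy sketch on a tiny 2x2 "image" (illustrative only, not the original training code):</p>

```python
import numpy as np

# Reflect padding mirrors the pixels just inside the border outward,
# which is how U-Net extrapolates the missing context at image edges.
img = np.array([[1, 2],
                [3, 4]])
padded = np.pad(img, pad_width=1, mode="reflect")  # shape (4, 4)
print(padded)
```

<p>Each border row and column of the padded result is a mirror image of the row or column one step inside it.</p>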
<h3 id="网络结构设计">网络结构设计</h3>
<p><strong>U-Net</strong>&#39;s architecture is shown below; the left half can be viewed as an encoder and the right half as a decoder. The encoder has four submodules, each containing two convolutional layers and followed by a downsampling layer implemented with max pooling. With the dataset resized as I prepared it, the input images are 128x128, so modules 1-5 operate at resolutions of 128x128, 64x64, 32x32, 16x16, and 8x8 respectively. The decoder also contains four submodules, whose resolution is raised step by step through upsampling until it matches the input resolution. The network additionally uses skip connections: each upsampled result is concatenated with the output of the encoder submodule at the same resolution and fed into the next decoder submodule.</p>
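<p>The halving and doubling of resolution described above can be sketched with plain shape arithmetic (pure Python, no framework; the function names are illustrative):</p>

```python
def encoder_resolutions(input_size, levels):
    """Each encoder submodule halves the spatial resolution via 2x2 max pooling."""
    sizes = [input_size]
    for _ in range(levels):
        sizes.append(sizes[-1] // 2)
    return sizes

def decoder_resolutions(bottleneck_size, levels):
    """Each decoder submodule doubles the resolution via upsampling."""
    sizes = [bottleneck_size]
    for _ in range(levels):
        sizes.append(sizes[-1] * 2)
    return sizes

enc = encoder_resolutions(128, 4)      # [128, 64, 32, 16, 8]
dec = decoder_resolutions(enc[-1], 4)  # [8, 16, 32, 64, 128]
# Skip connections pair encoder and decoder stages of equal resolution:
pairs = list(zip(enc[:-1][::-1], dec[1:]))  # [(16, 16), (32, 32), (64, 64), (128, 128)]
```

<p>The matching resolutions in <code>pairs</code> are exactly where the concatenations in the layer summary below occur.</p>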
<h3 id="结构图">结构图</h3>
<p><figure 
	
		class="gallery-image" 
		style="
			flex-grow: 150; 
			flex-basis: 360px"
	>
	<a href="/p/segmentation/unet.png" data-size="1555x1036">
		<img src="/p/segmentation/unet.png"
			width="1555"
			height="1036"
			srcset="/p/segmentation/unet_hua7d70ae1f7cbdafa7d7ab33e5d6fa6be_103270_480x0_resize_box_2.png 480w, /p/segmentation/unet_hua7d70ae1f7cbdafa7d7ab33e5d6fa6be_103270_1024x0_resize_box_2.png 1024w"
			loading="lazy"
			>
	</a>
	
</figure></p>
<h3 id="模块说明">模块说明</h3>
<div class="highlight"><pre class="chroma"><code class="language-python" data-lang="python"><span class="n">Model</span><span class="p">:</span> <span class="s2">&#34;model&#34;</span>
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">Layer</span> <span class="p">(</span><span class="nb">type</span><span class="p">)</span>                    <span class="n">Output</span> <span class="n">Shape</span>         <span class="n">Param</span> <span class="c1">#     Connected to                     </span>
<span class="o">==================================================================================================</span>
<span class="n">input_1</span> <span class="p">(</span><span class="n">InputLayer</span><span class="p">)</span>            <span class="p">[(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">128</span><span class="p">,</span> <span class="mi">128</span><span class="p">,</span> <span class="mi">3</span><span class="p">)</span> <span class="mi">0</span>                                            
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d</span> <span class="p">(</span><span class="n">Conv2D</span><span class="p">)</span>                 <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">128</span><span class="p">,</span> <span class="mi">128</span><span class="p">,</span> <span class="mi">32</span><span class="p">)</span> <span class="mi">896</span>         <span class="n">input_1</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                    
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_1</span> <span class="p">(</span><span class="n">Conv2D</span><span class="p">)</span>               <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">128</span><span class="p">,</span> <span class="mi">128</span><span class="p">,</span> <span class="mi">32</span><span class="p">)</span> <span class="mi">9248</span>        <span class="n">conv2d</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                     
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">batch_normalization</span> <span class="p">(</span><span class="n">BatchNorma</span> <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">128</span><span class="p">,</span> <span class="mi">128</span><span class="p">,</span> <span class="mi">32</span><span class="p">)</span> <span class="mi">128</span>         <span class="n">conv2d_1</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                   
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">max_pooling2d</span> <span class="p">(</span><span class="n">MaxPooling2D</span><span class="p">)</span>    <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">64</span><span class="p">,</span> <span class="mi">64</span><span class="p">,</span> <span class="mi">32</span><span class="p">)</span>   <span class="mi">0</span>           <span class="n">batch_normalization</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>        
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_2</span> <span class="p">(</span><span class="n">Conv2D</span><span class="p">)</span>               <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">64</span><span class="p">,</span> <span class="mi">64</span><span class="p">,</span> <span class="mi">64</span><span class="p">)</span>   <span class="mi">18496</span>       <span class="n">max_pooling2d</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>              
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_3</span> <span class="p">(</span><span class="n">Conv2D</span><span class="p">)</span>               <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">64</span><span class="p">,</span> <span class="mi">64</span><span class="p">,</span> <span class="mi">64</span><span class="p">)</span>   <span class="mi">36928</span>       <span class="n">conv2d_2</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                   
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">batch_normalization_1</span> <span class="p">(</span><span class="n">BatchNor</span> <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">64</span><span class="p">,</span> <span class="mi">64</span><span class="p">,</span> <span class="mi">64</span><span class="p">)</span>   <span class="mi">256</span>         <span class="n">conv2d_3</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                   
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">max_pooling2d_1</span> <span class="p">(</span><span class="n">MaxPooling2D</span><span class="p">)</span>  <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">64</span><span class="p">)</span>   <span class="mi">0</span>           <span class="n">batch_normalization_1</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>      
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_4</span> <span class="p">(</span><span class="n">Conv2D</span><span class="p">)</span>               <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">128</span><span class="p">)</span>  <span class="mi">73856</span>       <span class="n">max_pooling2d_1</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>            
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_5</span> <span class="p">(</span><span class="n">Conv2D</span><span class="p">)</span>               <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">128</span><span class="p">)</span>  <span class="mi">147584</span>      <span class="n">conv2d_4</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                   
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">batch_normalization_2</span> <span class="p">(</span><span class="n">BatchNor</span> <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">128</span><span class="p">)</span>  <span class="mi">512</span>         <span class="n">conv2d_5</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                   
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">max_pooling2d_2</span> <span class="p">(</span><span class="n">MaxPooling2D</span><span class="p">)</span>  <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">16</span><span class="p">,</span> <span class="mi">16</span><span class="p">,</span> <span class="mi">128</span><span class="p">)</span>  <span class="mi">0</span>           <span class="n">batch_normalization_2</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>      
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_6</span> <span class="p">(</span><span class="n">Conv2D</span><span class="p">)</span>               <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">16</span><span class="p">,</span> <span class="mi">16</span><span class="p">,</span> <span class="mi">256</span><span class="p">)</span>  <span class="mi">295168</span>      <span class="n">max_pooling2d_2</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>            
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_7</span> <span class="p">(</span><span class="n">Conv2D</span><span class="p">)</span>               <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">16</span><span class="p">,</span> <span class="mi">16</span><span class="p">,</span> <span class="mi">256</span><span class="p">)</span>  <span class="mi">590080</span>      <span class="n">conv2d_6</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                   
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">batch_normalization_3</span> <span class="p">(</span><span class="n">BatchNor</span> <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">16</span><span class="p">,</span> <span class="mi">16</span><span class="p">,</span> <span class="mi">256</span><span class="p">)</span>  <span class="mi">1024</span>        <span class="n">conv2d_7</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                   
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">dropout</span> <span class="p">(</span><span class="n">Dropout</span><span class="p">)</span>               <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">16</span><span class="p">,</span> <span class="mi">16</span><span class="p">,</span> <span class="mi">256</span><span class="p">)</span>  <span class="mi">0</span>           <span class="n">batch_normalization_3</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>      
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">max_pooling2d_3</span> <span class="p">(</span><span class="n">MaxPooling2D</span><span class="p">)</span>  <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">256</span><span class="p">)</span>    <span class="mi">0</span>           <span class="n">dropout</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                    
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_8</span> <span class="p">(</span><span class="n">Conv2D</span><span class="p">)</span>               <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">512</span><span class="p">)</span>    <span class="mi">1180160</span>     <span class="n">max_pooling2d_3</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>            
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_9</span> <span class="p">(</span><span class="n">Conv2D</span><span class="p">)</span>               <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">512</span><span class="p">)</span>    <span class="mi">2359808</span>     <span class="n">conv2d_8</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                   
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">batch_normalization_4</span> <span class="p">(</span><span class="n">BatchNor</span> <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">512</span><span class="p">)</span>    <span class="mi">2048</span>        <span class="n">conv2d_9</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                   
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">dropout_1</span> <span class="p">(</span><span class="n">Dropout</span><span class="p">)</span>             <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">8</span><span class="p">,</span> <span class="mi">512</span><span class="p">)</span>    <span class="mi">0</span>           <span class="n">batch_normalization_4</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>      
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_transpose</span> <span class="p">(</span><span class="n">Conv2DTranspo</span> <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">16</span><span class="p">,</span> <span class="mi">16</span><span class="p">,</span> <span class="mi">256</span><span class="p">)</span>  <span class="mi">1179904</span>     <span class="n">dropout_1</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                  
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">concatenate</span> <span class="p">(</span><span class="n">Concatenate</span><span class="p">)</span>       <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">16</span><span class="p">,</span> <span class="mi">16</span><span class="p">,</span> <span class="mi">512</span><span class="p">)</span>  <span class="mi">0</span>           <span class="n">conv2d_transpose</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>           
                                                                 <span class="n">dropout</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                    
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_10</span> <span class="p">(</span><span class="n">Conv2D</span><span class="p">)</span>              <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">16</span><span class="p">,</span> <span class="mi">16</span><span class="p">,</span> <span class="mi">256</span><span class="p">)</span>  <span class="mi">1179904</span>     <span class="n">concatenate</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_11</span> <span class="p">(</span><span class="n">Conv2D</span><span class="p">)</span>              <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">16</span><span class="p">,</span> <span class="mi">16</span><span class="p">,</span> <span class="mi">256</span><span class="p">)</span>  <span class="mi">590080</span>      <span class="n">conv2d_10</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                  
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_transpose_1</span> <span class="p">(</span><span class="n">Conv2DTrans</span> <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">128</span><span class="p">)</span>  <span class="mi">295040</span>      <span class="n">conv2d_11</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                  
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">concatenate_1</span> <span class="p">(</span><span class="n">Concatenate</span><span class="p">)</span>     <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">256</span><span class="p">)</span>  <span class="mi">0</span>           <span class="n">conv2d_transpose_1</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>         
                                                                 <span class="n">batch_normalization_2</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>      
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_12</span> <span class="p">(</span><span class="n">Conv2D</span><span class="p">)</span>              <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">128</span><span class="p">)</span>  <span class="mi">295040</span>      <span class="n">concatenate_1</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>              
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_13</span> <span class="p">(</span><span class="n">Conv2D</span><span class="p">)</span>              <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">32</span><span class="p">,</span> <span class="mi">128</span><span class="p">)</span>  <span class="mi">147584</span>      <span class="n">conv2d_12</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                  
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_transpose_2</span> <span class="p">(</span><span class="n">Conv2DTrans</span> <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">64</span><span class="p">,</span> <span class="mi">64</span><span class="p">,</span> <span class="mi">64</span><span class="p">)</span>   <span class="mi">73792</span>       <span class="n">conv2d_13</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                  
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">concatenate_2</span> <span class="p">(</span><span class="n">Concatenate</span><span class="p">)</span>     <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">64</span><span class="p">,</span> <span class="mi">64</span><span class="p">,</span> <span class="mi">128</span><span class="p">)</span>  <span class="mi">0</span>           <span class="n">conv2d_transpose_2</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>         
                                                                 <span class="n">batch_normalization_1</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>      
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_14</span> <span class="p">(</span><span class="n">Conv2D</span><span class="p">)</span>              <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">64</span><span class="p">,</span> <span class="mi">64</span><span class="p">,</span> <span class="mi">64</span><span class="p">)</span>   <span class="mi">73792</span>       <span class="n">concatenate_2</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>              
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_15</span> <span class="p">(</span><span class="n">Conv2D</span><span class="p">)</span>              <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">64</span><span class="p">,</span> <span class="mi">64</span><span class="p">,</span> <span class="mi">64</span><span class="p">)</span>   <span class="mi">36928</span>       <span class="n">conv2d_14</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                  
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_transpose_3</span> <span class="p">(</span><span class="n">Conv2DTrans</span> <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">128</span><span class="p">,</span> <span class="mi">128</span><span class="p">,</span> <span class="mi">32</span><span class="p">)</span> <span class="mi">18464</span>       <span class="n">conv2d_15</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                  
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">concatenate_3</span> <span class="p">(</span><span class="n">Concatenate</span><span class="p">)</span>     <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">128</span><span class="p">,</span> <span class="mi">128</span><span class="p">,</span> <span class="mi">64</span><span class="p">)</span> <span class="mi">0</span>           <span class="n">conv2d_transpose_3</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>         
                                                                 <span class="n">batch_normalization</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>        
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_16</span> <span class="p">(</span><span class="n">Conv2D</span><span class="p">)</span>              <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">128</span><span class="p">,</span> <span class="mi">128</span><span class="p">,</span> <span class="mi">32</span><span class="p">)</span> <span class="mi">18464</span>       <span class="n">concatenate_3</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>              
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_17</span> <span class="p">(</span><span class="n">Conv2D</span><span class="p">)</span>              <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">128</span><span class="p">,</span> <span class="mi">128</span><span class="p">,</span> <span class="mi">32</span><span class="p">)</span> <span class="mi">9248</span>        <span class="n">conv2d_16</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                  
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_18</span> <span class="p">(</span><span class="n">Conv2D</span><span class="p">)</span>              <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">128</span><span class="p">,</span> <span class="mi">128</span><span class="p">,</span> <span class="mi">32</span><span class="p">)</span> <span class="mi">9248</span>        <span class="n">conv2d_17</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                  
<span class="n">__________________________________________________________________________________________________</span>
<span class="n">conv2d_19</span> <span class="p">(</span><span class="n">Conv2D</span><span class="p">)</span>              <span class="p">(</span><span class="bp">None</span><span class="p">,</span> <span class="mi">128</span><span class="p">,</span> <span class="mi">128</span><span class="p">,</span> <span class="mi">3</span><span class="p">)</span>  <span class="mi">99</span>          <span class="n">conv2d_18</span><span class="p">[</span><span class="mi">0</span><span class="p">][</span><span class="mi">0</span><span class="p">]</span>                  
<span class="o">==================================================================================================</span>
<span class="n">Total</span> <span class="n">params</span><span class="p">:</span> <span class="mi">8</span><span class="p">,</span><span class="mi">643</span><span class="p">,</span><span class="mi">779</span>
<span class="n">Trainable</span> <span class="n">params</span><span class="p">:</span> <span class="mi">8</span><span class="p">,</span><span class="mi">641</span><span class="p">,</span><span class="mi">795</span>
<span class="n">Non</span><span class="o">-</span><span class="n">trainable</span> <span class="n">params</span><span class="p">:</span> <span class="mi">1</span><span class="p">,</span><span class="mi">984</span>
<span class="n">__________________________________________________________________________________________________</span>
</code></pre></div><h3 id="模型定性和定量分析">Qualitative and Quantitative Analysis</h3>
<p>The model was trained for 10 epochs, but training accuracy stopped improving much after epoch 2, plateauing at a little over 86% on average, while the training loss settled around 0.43; validation loss and validation accuracy behaved the same way. On the training side, the choice of loss function still leaves room for improvement. U-Net may simply not be well suited to a dataset like VOC; it tends to do better on datasets with fewer, shallower features.</p>
<p><figure 
	
		class="gallery-image" 
		style="
			flex-grow: 327; 
			flex-basis: 785px"
	>
	<a href="/p/segmentation/result.png" data-size="2336x714">
		<img src="/p/segmentation/result.png"
			width="2336"
			height="714"
			srcset="/p/segmentation/result_hu01ee72bbb634407bfc73fe583d3c99b9_768821_480x0_resize_box_2.png 480w, /p/segmentation/result_hu01ee72bbb634407bfc73fe583d3c99b9_768821_1024x0_resize_box_2.png 1024w"
			loading="lazy"
			>
	</a>
	
</figure></p>

</section>


    <footer class="article-footer">
    

    
    <section class="article-copyright">
        <svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-copyright" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
  <path stroke="none" d="M0 0h24v24H0z"/>
  <circle cx="12" cy="12" r="9" />
  <path d="M14.5 9a3.5 4 0 1 0 0 6" />
</svg>



        <span>Licensed under CC BY-NC-SA 4.0</span>
    </section>
    </footer>


    
        <link 
                rel="stylesheet" 
                href="https://cdn.jsdelivr.net/npm/katex@0.13.13/dist/katex.min.css" integrity="sha384-RZU/ijkSsFbcmivfdRBQDtwuwVqK7GMOw6IMvKyeWL2K5UAlyp6WonmB8m7Jd0Hn" crossorigin="anonymous"
            ><script 
                src="https://cdn.jsdelivr.net/npm/katex@0.13.13/dist/katex.min.js" integrity="sha384-pK1WpvzWVBQiP0/GjnvRxV4mOb0oxFuyRxJlk6vVw146n3egcN5C925NCP7a7BY8" crossorigin="anonymous"
                defer="true"
                >
            </script><script 
                src="https://cdn.jsdelivr.net/npm/katex@0.13.13/dist/contrib/auto-render.min.js" integrity="sha384-vZTG03m&#43;2yp6N6BNi5iM4rW4oIwk5DfcNdFfxkk9ZWpDriOkXX8voJBFrAO7MpVl" crossorigin="anonymous"
                defer="true"
                >
            </script><script>
    window.addEventListener("DOMContentLoaded", () => {
        renderMathInElement(document.querySelector(`.article-content`), {
            delimiters: [
                { left: "$$", right: "$$", display: true },
                { left: "$", right: "$", display: false },
                { left: "\\(", right: "\\)", display: false },
                { left: "\\[", right: "\\]", display: true }
            ]
        });})
</script>
    
</article>

    <aside class="related-contents--wrapper">
    
    
</aside>

     
     
        

<style>
    .disqus-container {
        background-color: var(--card-background);
        border-radius: var(--card-border-radius);
        box-shadow: var(--shadow-l1);
        padding: var(--card-padding);
    }
</style>

<script>
    window.addEventListener('onColorSchemeChange', (e) => {
        // Check via window to avoid a ReferenceError when Disqus has not loaded
        if (window.DISQUS) {
            DISQUS.reset({
                reload: true
            });
        }
    })
</script>

    

    <footer class="site-footer">
    <section class="copyright">
        &copy; 
        
            2020 - 
        
        2022 Enriqueliu
    </section>
    
    <section class="powerby">
        Built with <a href="https://gohugo.io/" target="_blank" rel="noopener">Hugo</a> <br />
        Theme <b><a href="https://github.com/CaiJimmy/hugo-theme-stack" target="_blank" rel="noopener" data-version="3.2.0">Stack</a></b> designed by <a href="https://jimmycai.com" target="_blank" rel="noopener">Jimmy</a>
    </section>
</footer>


    
<div class="pswp" tabindex="-1" role="dialog" aria-hidden="true">

    
    <div class="pswp__bg"></div>

    
    <div class="pswp__scroll-wrap">

        
        <div class="pswp__container">
            <div class="pswp__item"></div>
            <div class="pswp__item"></div>
            <div class="pswp__item"></div>
        </div>

        
        <div class="pswp__ui pswp__ui--hidden">

            <div class="pswp__top-bar">

                

                <div class="pswp__counter"></div>

                <button class="pswp__button pswp__button--close" title="Close (Esc)"></button>

                <button class="pswp__button pswp__button--share" title="Share"></button>

                <button class="pswp__button pswp__button--fs" title="Toggle fullscreen"></button>

                <button class="pswp__button pswp__button--zoom" title="Zoom in/out"></button>

                
                
                <div class="pswp__preloader">
                    <div class="pswp__preloader__icn">
                        <div class="pswp__preloader__cut">
                            <div class="pswp__preloader__donut"></div>
                        </div>
                    </div>
                </div>
            </div>

            <div class="pswp__share-modal pswp__share-modal--hidden pswp__single-tap">
                <div class="pswp__share-tooltip"></div>
            </div>

            <button class="pswp__button pswp__button--arrow--left" title="Previous (arrow left)">
            </button>

            <button class="pswp__button pswp__button--arrow--right" title="Next (arrow right)">
            </button>

            <div class="pswp__caption">
                <div class="pswp__caption__center"></div>
            </div>

        </div>

    </div>

</div><script 
                src="https://cdn.jsdelivr.net/npm/photoswipe@4.1.3/dist/photoswipe.min.js" integrity="sha256-ePwmChbbvXbsO02lbM3HoHbSHTHFAeChekF1xKJdleo=" crossorigin="anonymous"
                defer
                >
            </script><script 
                src="https://cdn.jsdelivr.net/npm/photoswipe@4.1.3/dist/photoswipe-ui-default.min.js" integrity="sha256-UKkzOn/w1mBxRmLLGrSeyB4e1xbrp4xylgAWb3M42pU=" crossorigin="anonymous"
                defer
                >
            </script><link 
                rel="stylesheet" 
                href="https://cdn.jsdelivr.net/npm/photoswipe@4.1.3/dist/default-skin/default-skin.css" integrity="sha256-c0uckgykQ9v5k&#43;IqViZOZKc47Jn7KQil4/MP3ySA3F8=" crossorigin="anonymous"
            ><link 
                rel="stylesheet" 
                href="https://cdn.jsdelivr.net/npm/photoswipe@4.1.3/dist/photoswipe.css" integrity="sha256-SBLU4vv6CA6lHsZ1XyTdhyjJxCjPif/TRkjnsyGAGnE=" crossorigin="anonymous"
            >

            </main>
    
        <aside class="sidebar right-sidebar sticky">
            <section class="widget archives">
                <div class="widget-icon">
                    <svg xmlns="http://www.w3.org/2000/svg" class="icon icon-tabler icon-tabler-hash" width="24" height="24" viewBox="0 0 24 24" stroke-width="2" stroke="currentColor" fill="none" stroke-linecap="round" stroke-linejoin="round">
  <path stroke="none" d="M0 0h24v24H0z"/>
  <line x1="5" y1="9" x2="19" y2="9" />
  <line x1="5" y1="15" x2="19" y2="15" />
  <line x1="11" y1="4" x2="7" y2="20" />
  <line x1="17" y1="4" x2="13" y2="20" />
</svg>



                </div>
                <h2 class="widget-title section-title">Table of Contents</h2>
                
                <div class="widget--toc">
                    <nav id="TableOfContents">
  <ol>
    <li><a href="#数据集pascal-voc">Dataset: PASCAL VOC</a></li>
    <li><a href="#unet">UNet</a>
      <ol>
        <li><a href="#网络结构设计">Network Architecture Design</a></li>
        <li><a href="#结构图">Architecture Diagram</a></li>
        <li><a href="#模块说明">Layer Descriptions</a></li>
        <li><a href="#模型定性和定量分析">Qualitative and Quantitative Model Analysis</a></li>
      </ol>
      </ol>
    </li>
  </ol>
</nav>
                </div>
            </section>
        </aside>
    

        </div>
        <script 
                src="https://cdn.jsdelivr.net/npm/node-vibrant@3.1.5/dist/vibrant.min.js" integrity="sha256-5NovOZc4iwiAWTYIFiIM7DxKUXKWvpVEuMEPLzcm5/g=" crossorigin="anonymous"
                >
            </script><script type="text/javascript" src="/ts/main.js" defer></script>
<script>
    (function () {
        const customFont = document.createElement('link');
        customFont.href = "https://fonts.googleapis.com/css2?family=Lato:wght@300;400;700&display=swap";

        customFont.type = "text/css";
        customFont.rel = "stylesheet";

        document.head.appendChild(customFont);
    }());
</script>

    </body>
</html>
