
'''
1. Dataset: https://www.tinymind.com/qq86376032/datasets/homework10-data

2. Training run: https://www.tinymind.com/executions/z262fgct

3. Training output: https://www.tinymind.com/executions/z262fgct/output

4. Model code: https://github.com/qq751220449/Week10

5. Selected model code with notes:

'''
# upsample_factor = 16     # FCN-16s
upsample_factor = 8        # FCN-8s: factor of the final upsampling stage
number_of_classes = 21

log_folder = os.path.join(FLAGS.output_dir, 'train')

vgg_checkpoint_path = FLAGS.checkpoint_path

# Creates a variable to hold the global_step.
global_step = tf.Variable(0, trainable=False, name='global_step', dtype=tf.int64)


# Define the model -- the last layer predicts number_of_classes (21) classes
with slim.arg_scope(vgg.vgg_arg_scope()):
    logits, end_points = vgg.vgg_16(image_tensor,
                                    num_classes=number_of_classes,
                                    is_training=is_training_placeholder,
                                    spatial_squeeze=False,
                                    fc_conv_padding='SAME')
# After five conv/pool stages the feature map is 1/32 the size of the input;
# running the VGG fc layers as convolutions makes the network fully convolutional.
downsampled_logits_shape = tf.shape(logits)

img_shape = tf.shape(image_tensor)

# Calculate the output size of the upsampled tensor
# The shape should be batch_size x height x width x num_classes
upsampled_logits_shape = tf.stack([
                                  downsampled_logits_shape[0],
                                  img_shape[1],
                                  img_shape[2],
                                  downsampled_logits_shape[3]
                                  ])

# Predict from the pool4 feature map with a 1x1 convolution. At 1/16 of the input
# size, it preserves more of the original spatial detail than the pool5 output.
pool4_feature = end_points['vgg_16/pool4']
# Classify every pixel of the pool4 feature map into the 21 classes; the 1x1 conv
# weights are initialized to zero, so training starts from the coarse prediction alone.
with tf.variable_scope('vgg_16/fc8'):
    aux_logits_16s = slim.conv2d(pool4_feature, number_of_classes, [1, 1],
                                 activation_fn=None,
                                 weights_initializer=tf.zeros_initializer(),
                                 scope='conv_pool4')

# Perform the upsampling
# Build the transposed-convolution kernel for 2x upsampling, initialized
# as a bilinear interpolation filter.
upsample_filter_np_x2 = bilinear_upsample_weights(2,  # upsample_factor,
                                                  number_of_classes)
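`bilinear_upsample_weights` is defined elsewhere in the repo; a minimal NumPy sketch, assuming the standard FCN-style recipe (the repo's exact implementation may differ), looks like this:

```python
import numpy as np

def bilinear_upsample_weights(factor, number_of_classes):
    """Build a [size, size, C, C] transposed-conv kernel that performs
    per-class bilinear interpolation (sketch of the usual FCN recipe)."""
    filter_size = 2 * factor - factor % 2
    center = factor - 1 if filter_size % 2 == 1 else factor - 0.5
    og = np.ogrid[:filter_size, :filter_size]
    kernel = (1 - abs(og[0] - center) / factor) * (1 - abs(og[1] - center) / factor)
    weights = np.zeros((filter_size, filter_size,
                        number_of_classes, number_of_classes), dtype=np.float32)
    for i in range(number_of_classes):
        weights[:, :, i, i] = kernel  # classes do not mix during upsampling
    return weights
```

Because only the diagonal (class i in, class i out) is non-zero, each class map is interpolated independently; a 2x kernel sums to 4 per class, so constant inputs stay constant after upsampling.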

upsample_filter_tensor_x2 = tf.Variable(upsample_filter_np_x2, name='vgg_16/fc8/t_conv_x2')
# Upsample the final layer's logits by 2x, reaching 1/16 of the input size.
upsampled_logits = tf.nn.conv2d_transpose(logits, upsample_filter_tensor_x2,
                                          output_shape=tf.shape(aux_logits_16s),
                                          strides=[1, 2, 2, 1],
                                          padding='SAME')

# Fuse the logits (skip connection from pool4)
upsampled_logits = upsampled_logits + aux_logits_16s



pool3_feature = end_points['vgg_16/pool3']  # pool3 feature map, at 1/8 of the input size
with tf.variable_scope('vgg_16/fc8'):
    aux_logits_8s = slim.conv2d(pool3_feature, number_of_classes, [1, 1],  # 1x1 conv prediction
                                activation_fn=None,
                                weights_initializer=tf.zeros_initializer(),
                                scope='conv_pool3')


# Upsample the fused logits by another 2x, reaching 1/8 of the input size.
upsampled_logits = tf.nn.conv2d_transpose(upsampled_logits, upsample_filter_tensor_x2,
                                          output_shape=tf.shape(aux_logits_8s),
                                          strides=[1, 2, 2, 1],
                                          padding='SAME')

# Fuse with the pool3 logits
upsampled_logits = upsampled_logits + aux_logits_8s




# Build the bilinear kernel for the final 8x upsampling, which restores
# the logits to the size of the original image.
upsample_filter_np_x8 = bilinear_upsample_weights(upsample_factor,
                                                  number_of_classes)
upsample_filter_tensor_x8 = tf.Variable(upsample_filter_np_x8, name='vgg_16/fc8/t_conv_x8')
upsampled_logits = tf.nn.conv2d_transpose(upsampled_logits, upsample_filter_tensor_x8,
                                          output_shape=upsampled_logits_shape,
                                          strides=[1, upsample_factor, upsample_factor, 1],
                                          padding='SAME')
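The three upsampling stages above can be sanity-checked with plain arithmetic (a toy calculation for a hypothetical 224x224 input; the graph itself handles arbitrary sizes via `tf.shape`):

```python
# FCN-8s resolution bookkeeping for a hypothetical 224x224 input.
h = 224
pool3, pool4, pool5 = h // 8, h // 16, h // 32   # 28, 14, 7

x = pool5 * 2       # first 2x transposed conv -> pool4 resolution
assert x == pool4   # shapes match, so aux_logits_16s can be added
x = x * 2           # second 2x transposed conv -> pool3 resolution
assert x == pool3   # shapes match, so aux_logits_8s can be added
x = x * 8           # final 8x upsampling (upsample_factor)
assert x == h       # back to the input resolution
```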


# One-hot encode the per-pixel labels and compute per-pixel cross-entropy.
lbl_onehot = tf.one_hot(annotation_tensor, number_of_classes)
cross_entropies = tf.nn.softmax_cross_entropy_with_logits(logits=upsampled_logits,
                                                          labels=lbl_onehot)

cross_entropy_loss = tf.reduce_mean(tf.reduce_sum(cross_entropies, axis=-1))
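What `tf.one_hot` plus `softmax_cross_entropy_with_logits` computes per pixel can be reproduced in a few lines of NumPy (a toy 1x2x2 image with 3 classes instead of 21, purely illustrative):

```python
import numpy as np

# Toy batch: 1 image, 2x2 pixels, 3 classes (the real model uses 21).
logits = np.array([[[[2.0, 0.0, 0.0],
                     [0.0, 2.0, 0.0]],
                    [[0.0, 0.0, 2.0],
                     [1.0, 1.0, 1.0]]]])          # shape (1, 2, 2, 3)
labels = np.array([[[0, 1],
                    [2, 0]]])                     # shape (1, 2, 2)

onehot = np.eye(3)[labels]                        # tf.one_hot equivalent
# Numerically stable softmax over the class axis.
e = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs = e / e.sum(axis=-1, keepdims=True)
# Per-pixel cross-entropy, shape (1, 2, 2).
cross_entropies = -(onehot * np.log(probs)).sum(axis=-1)
loss = cross_entropies.mean()
```

Note that the script sums the per-pixel values over the last spatial axis before averaging, which only rescales the loss by a constant (the image width); the plain mean is shown here for clarity.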


'''
FCN-32s simply upsamples the output of the fifth stage back to the input size (a 32x
enlargement). The result is coarse and fine details cannot be recovered, so the outputs
of the fourth and third stages are also brought in via transposed convolution; after
fusion they require 16x and 8x upsampling respectively, and the result becomes
correspondingly finer.
FCN classifies an image at the pixel level, solving segmentation at the semantic level.
Unlike a classic CNN, which follows the convolutional layers with fully connected layers
that produce a fixed-length feature vector for classification, an FCN accepts input
images of arbitrary size and uses transposed-convolution layers to upsample the feature
map of the last convolutional layer back to the input resolution. This produces a
prediction for every pixel while preserving the spatial information of the original
input; classification is then performed pixel by pixel on the upsampled feature map.
Because the details are still imprecise, the pool3 and pool4 outputs are also used for
prediction, further improving accuracy.
An FCN is often followed by a CRF refinement step. The main reason is that the
translation invariance that helps classification becomes a real problem for
semantic-level segmentation, and a CRF handles this well.
'''