'''
A dataset built from images needs these core dimensions:
image resolution (height x width), number of channels (RGB, grayscale, or
multimodal data), and the number of samples. The dataset shape is therefore
(samples, height, width, channels), e.g. (10000, 224, 224, 3).
'''
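
# A minimal sketch (illustrative only, not used by the pipeline below) of the
# (samples, height, width, channels) layout described above: a batch of 4 RGB
# images at 224x224 resolution.
def _shape_sketch():
    import tensorflow as tf  # local import so the sketch is self-contained
    batch = tf.zeros([4, 224, 224, 3])  # (samples, height, width, channels)
    return tuple(batch.shape)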

import tensorflow as tf
from custom_settings import InputConfigs


class CustomDatasets:
    """Build custom datasets from raw images."""
    def __init__(self):
        self.input_picture_configs = InputConfigs()  # instantiate the config object

        self.base_dir = self.input_picture_configs.base_dir
        (self.input_image_size, self.pipe_image_size,
         self.initial_image_size, self.batch_size) = self.input_picture_configs.get_input_paras_level_l()

    def build_train_ds(self):
        return self.ds_pipeline(self.base_dir, is_train=True)

    def build_val_test_ds(self, val_ratio=0.5):
        original_ds = self.ds_pipeline(self.base_dir)
        total_batches = original_ds.cardinality()

        # Type-safe conversion: round before casting to avoid truncation error
        split_point = tf.cast(
            tf.round(
                tf.cast(total_batches, tf.float32) * val_ratio
            ),
            tf.int64
        )

        # NOTE: take/skip assume a stable element order; if the upstream
        # pipeline reshuffles on every iteration, the two subsets can
        # overlap across epochs.
        return (
            original_ds.take(split_point),  # val_ds
            original_ds.skip(split_point)   # test_ds
        )

    def ds_pipeline(self, data_dir, is_train=False):
        """Build a dataset from raw images via tf.keras.utils.image_dataset_from_directory."""

        ds = tf.keras.utils.image_dataset_from_directory(
            data_dir,
            labels='inferred',  # infer labels from subdirectories (one class per subdirectory)
            label_mode="int",   # default "int"; alternatives: "categorical", "binary", None
            # class_names=None,
            color_mode="rgb",   # default "rgb"; alternatives: "grayscale", "rgba"
            batch_size=self.batch_size,
            image_size=self.pipe_image_size,
            shuffle=True,
            seed=42,
            validation_split=0.2,
            subset="training" if is_train else "validation",
            # interpolation='bilinear',  # resizing interpolation, default "bilinear"
            # follow_links=False
            )
        return ds

    """数据集数据增强"""
    @staticmethod
    def data_augmentation(image):
        print("apply old augmentation method")

        # 随机裁剪
        image = tf.image.random_crop(image, size=[len(image), 512, 384, 3])
        # 随机水平翻转
        image = tf.image.random_flip_left_right(image)  # 默认翻转概率为50%，在在后面加[]来控制翻转概率
        # 随机旋转(90 * k, k=0,1,2,3)
        image = tf.image.rot90(image, tf.random.uniform(shape=[], minval=0, maxval=4, dtype=tf.int32))
        # 随机伽马校正
        # image = random_gamma(image)
        # 调整亮度和对比度
        # image = tf.image.random_brightness(image, max_delta=0.2)
        # image = tf.image.random_contrast(image, lower=0.8, upper=1.2)
        return image

    def data_augmentation_latest(self, image):
        from tensorflow.keras.layers import (
            RandomFlip,      # random flip
            RandomRotation,  # random rotation
            # RandomZoom,         # random zoom
            # RandomContrast,     # random contrast adjustment
            # RandomBrightness,   # random brightness adjustment
            # RandomTranslation,  # random translation
            # Rescaling,  # normalization (not augmentation, but often used alongside)
            # RandomCrop,         # random crop
        )

        print("apply new augmentation method")
        # Build the augmentation pipeline and apply it to the input; ideally
        # the Sequential would be built once in __init__ rather than per call
        augmentation = tf.keras.Sequential([
            RandomFlip("horizontal_and_vertical"),
            RandomRotation(factor=0.2),
            # RandomContrast(0.1),
        ])
        # training=True keeps the random layers active outside of model.fit
        return augmentation(image, training=True)

    def center_crop(self, image):
        offset_height = (self.initial_image_size[0] - self.input_image_size[0]) // 2  # vertical offset
        offset_width = (self.initial_image_size[1] - self.input_image_size[1]) // 2   # horizontal offset

        # Perform the crop
        cropped_image = tf.image.crop_to_bounding_box(
            image,
            offset_height=offset_height,
            offset_width=offset_width,
            target_height=self.input_image_size[0],
            target_width=self.input_image_size[1]
        )
        return cropped_image

    def ds_enhancement(self, datasets, is_train=False, latest_method=False):
        ds = datasets

        # Cache before the random maps so augmentations are re-drawn each
        # epoch (caching after them would freeze the first epoch's results)
        ds = ds.cache()

        # Apply random transforms (rotation, flips) to the training set for
        # better generalization
        if is_train:
            if latest_method:
                ds = ds.map(lambda x, y: (self.data_augmentation_latest(x), y))
            else:
                ds = ds.map(lambda x, y: (self.data_augmentation(x), y))

        # Center-crop the dataset
        if latest_method:
            ds = ds.map(lambda x, y: (self.center_crop(x), y))

        # Prefetch to overlap preprocessing with training
        ds = ds.prefetch(buffer_size=tf.data.AUTOTUNE)
        return ds

    def ds_preprocessing(self):
        # Normalize pixel values to the [0, 1] range
        normalization = tf.keras.layers.Rescaling(1. / 255)

        # Compose and return the preprocessing pipeline
        return tf.keras.Sequential([
            normalization,
        ])
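

# A minimal sketch (hypothetical helper, not part of CustomDatasets) of the
# take/skip split used in build_val_test_ds: take(n) yields the first n
# elements, skip(n) the remainder. Plain integers stand in for batches here.
def _split_sketch(num_batches=10, val_ratio=0.5):
    import tensorflow as tf
    ds = tf.data.Dataset.range(num_batches)
    split_point = int(round(num_batches * val_ratio))
    val_ds = ds.take(split_point)   # first portion -> validation
    test_ds = ds.skip(split_point)  # remainder -> test
    return (list(val_ds.as_numpy_iterator()), list(test_ds.as_numpy_iterator()))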
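

# A hedged sketch of the tf.image ops used in data_augmentation, applied to a
# dummy batch; the 600x600 input size is an assumption for illustration, the
# 512x384 crop mirrors the method above.
def _augment_sketch():
    import tensorflow as tf
    images = tf.random.uniform([2, 600, 600, 3])
    # random spatial crop, preserving the batch dimension
    images = tf.image.random_crop(images, size=[tf.shape(images)[0], 512, 384, 3])
    # each image flipped left-right with 50% probability
    images = tf.image.random_flip_left_right(images)
    return tuple(images.shape)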
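

# A minimal sketch of the center-crop arithmetic in center_crop: the offsets
# place a centered window in the middle of the image. The 600x600 source and
# 224x224 target sizes are assumptions for illustration only.
def _center_crop_sketch():
    import tensorflow as tf
    image = tf.zeros([600, 600, 3])
    offset_height = (600 - 224) // 2  # vertical offset
    offset_width = (600 - 224) // 2   # horizontal offset
    cropped = tf.image.crop_to_bounding_box(
        image, offset_height, offset_width, target_height=224, target_width=224)
    return tuple(cropped.shape)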
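

# A minimal sketch of the Rescaling normalization used in ds_preprocessing:
# pixel values in [0, 255] are scaled into [0, 1] by multiplying by 1/255.
def _rescaling_sketch():
    import tensorflow as tf
    normalization = tf.keras.layers.Rescaling(1. / 255)
    pixels = tf.constant([[0.0, 127.5, 255.0]])
    return normalization(pixels).numpy().tolist()[0]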