'''
A dataset built from images has these core dimensions:
image resolution (height x width), number of channels (RGB, grayscale, or multimodal),
and the number of samples. The dataset shape is therefore
(samples, height, width, channels), e.g. (10000, 224, 224, 3).
'''

import tensorflow as tf
# import numpy as np
from matplotlib import pyplot as plt
from tensorflow.keras import layers

Fine_Tuning_En = True

Low_Level_Method = True
# Low_Level_Method = False

# Parameters and hyper-parameters
num_classes = 3         # number of target classes
base_lr = 1e-3          # base learning rate
base_epochs = 60        # base number of training epochs

'''
Build the dataset with image_dataset_from_directory.
'''
# base_dir = 'D:\Engineer_workshop\SW station\work\JDEC\project data\initial data'
base_dir = '/home/leo/Downloads/MVS pictures/TF2.4_mine pictures'

def get_input_paras(if_data_level_l = False):
    initial_image_size = (2048, 1536)
    # input_image_size = (512, 384)
    input_image_size = (200, 200)
    batch_size = 32
    # input_image_size = (1024, 768)
    # batch_size = 16
    pipe_image_size = initial_image_size

    if if_data_level_l:
        input_image_size = (512, 384)
        batch_size = 16
        pipe_image_size = input_image_size
        initial_image_size = input_image_size
        print("apply old method")
    else:
        print("apply new method")
    return input_image_size, pipe_image_size, initial_image_size, batch_size

# pass Low_Level_Method so the crop/input sizes match the selected augmentation path;
# calling get_input_paras() with the default here would mix old-method crops (512x384)
# with a new-method model input (200x200) and fail at fit time
Input_Image_Size, Pipe_Image_Size, Initial_Image_Size, BATCH_SIZE = get_input_paras(Low_Level_Method)

# Data pipeline
def build_pipeline(data_dir, batch_size=BATCH_SIZE, is_train=False):
    # note: newer TF versions also expose this as tf.keras.utils.image_dataset_from_directory
    ds = tf.keras.preprocessing.image_dataset_from_directory(
        data_dir,
        labels='inferred',  # infer labels from sub-directories (one class per folder)
        label_mode="int",   # default "int" (alternatives: "categorical", "binary", None)
        # class_names=None,
        color_mode="rgb",   # default "rgb" (alternatives: "grayscale", "rgba")
        batch_size=batch_size,
        image_size=Pipe_Image_Size,
        shuffle=True,
        seed=42,            # the same seed in both calls keeps the train/validation split consistent
        validation_split=0.2,
        subset="training" if is_train else "validation",
        # interpolation='bilinear',  # resize interpolation, default 'bilinear'
        # follow_links=False
        )
    return ds

# Load the training and validation splits
train_ds = build_pipeline(base_dir, is_train=True)
val_ds = build_pipeline(base_dir)

# Split the validation set into validation and test halves
val_batches = tf.data.experimental.cardinality(val_ds)
test_ds = val_ds.take(val_batches // 2)  # first 50% of the batches becomes the test set
val_ds = val_ds.skip(val_batches // 2)   # the rest stays as the validation set
# element spec: ((None, height, width, 3), (None,)), dtypes (tf.float32, tf.int32)
print("train_batches:", train_ds)
print("val_batches:", val_ds)
print("test_batches:", test_ds)
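# The take/skip split above can be illustrated on a toy dataset
# (self-contained sketch; the numbers are hypothetical and unrelated to the real data):
_toy = tf.data.Dataset.range(10).batch(2)        # 5 batches of 2 elements
_toy_n = tf.data.experimental.cardinality(_toy)  # 5
_toy_test = _toy.take(_toy_n // 2)               # first 2 batches
_toy_val = _toy.skip(_toy_n // 2)                # remaining 3 batches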

# Data augmentation
from tensorflow.keras.layers import (
    RandomFlip,         # random flips
    RandomRotation,     # random rotations
    # RandomZoom,         # random zoom
    # RandomContrast,     # random contrast jitter
    # RandomBrightness,   # random brightness jitter
    # RandomTranslation,  # random translation
    Rescaling,          # rescaling/normalization (not augmentation, but commonly paired with it)
    # RandomCrop,         # random crop
)

# Instantiate the normalization applied to all pixels (training, validation, and test sets)
normalization_layer = Rescaling(1./255)
def ds_normalize(datasets):
    ds = datasets.map(lambda x, y: (normalization_layer(x), y))
    return ds

train_ds = ds_normalize(train_ds)
val_ds = ds_normalize(val_ds)
test_ds = ds_normalize(test_ds)
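# Minimal self-contained check of the normalization (dummy tensor, not the real data):
# Rescaling(1./255) maps raw pixel values in [0, 255] into [0, 1].
_px = Rescaling(1./255)(tf.constant([[0.0, 127.5, 255.0]]))
# _px -> [[0.0, 0.5, 1.0]]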

if Low_Level_Method:
    def data_augmentation(image):
        print("apply old augmentation method")
        # random crop (use tf.shape for the dynamic batch dimension; len() fails on
        # a batched tensor whose leading dimension is unknown at trace time)
        image = tf.image.random_crop(image, size=[tf.shape(image)[0], 512, 384, 3])
        # random horizontal flip (50% probability)
        image = tf.image.random_flip_left_right(image)
        # random rotation by 90 * k degrees, k in {0, 1, 2, 3}
        image = tf.image.rot90(image, tf.random.uniform(shape=[], minval=0, maxval=4, dtype=tf.int32))
        # random gamma correction
        # image = random_gamma(image)
        # brightness / contrast jitter
        # image = tf.image.random_brightness(image, max_delta=0.2)
        # image = tf.image.random_contrast(image, lower=0.8, upper=1.2)
        return image
else:
    def center_crop(image):
        print("do center crop")
        offset_height = (Initial_Image_Size[0] - Input_Image_Size[0]) // 2  # vertical offset
        offset_width = (Initial_Image_Size[1] - Input_Image_Size[1]) // 2   # horizontal offset

        # perform the crop
        cropped_image = tf.image.crop_to_bounding_box(
            image,
            offset_height=offset_height,
            offset_width=offset_width,
            target_height=Input_Image_Size[0],
            target_width=Input_Image_Size[1]
        )
        return cropped_image

    print("apply new augmentation method")

    # a Sequential layer list must contain layers only (no plain function calls)
    data_augmentation = tf.keras.Sequential([
        RandomFlip("horizontal_and_vertical"),
        RandomRotation(factor=0.2),
        # RandomContrast(0.1),
    ])
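# Worked toy example of the cropping primitive used above (self-contained;
# a dummy 6x6 single-channel image, values 0..35 row by row): the offsets pick
# the top-left corner of the crop window and the target sizes pick its extent.
_img = tf.reshape(tf.range(36, dtype=tf.float32), [6, 6, 1])
_patch = tf.image.crop_to_bounding_box(_img, offset_height=2, offset_width=2,
                                       target_height=2, target_width=2)
# _patch[..., 0] -> [[14., 15.], [20., 21.]]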

def ds_enhancement(datasets, is_train=False):
    # cache before the random maps, otherwise the first epoch's augmentations
    # would be cached and replayed identically every epoch
    ds = datasets.cache()

    # apply random transforms (flips, rotations) to the training set for better generalization
    if is_train:
        if Low_Level_Method:
            ds = ds.map(lambda x, y: (data_augmentation(x), y))
        else:
            # training=True keeps the random Keras layers active inside Dataset.map
            ds = ds.map(lambda x, y: (data_augmentation(x, training=True), y))

    if not Low_Level_Method:
        ds = ds.map(lambda x, y: (center_crop(x), y))

    # prefetch to overlap preprocessing with training
    ds = ds.prefetch(buffer_size=tf.data.AUTOTUNE)
    return ds

# Load the base model
# base_model = tf.keras.applications.MobileNetV2(
# base_model = tf.keras.applications.VGG16(
# base_model = tf.keras.applications.InceptionV3(
base_model = tf.keras.applications.ResNet50V2(
    include_top=False,
    weights='imagenet',  # pretrained weights; freezing a randomly initialized backbone (weights=None) would fix useless features
    # input_tensor=None,
    input_shape=Input_Image_Size + (3,),
    pooling=None,
    # pooling='max',
    # classes=1000,
    # classifier_activation='softmax'
)
# Freeze the base model
base_model.trainable = False

# Add a custom classification head
model = tf.keras.Sequential([  # CBAPD
    layers.Input(shape=Input_Image_Size + (3,)),
    base_model,
    layers.GlobalAveragePooling2D(),  # global average pooling
    # layers.GlobalMaxPooling2D(),
    # layers.MaxPooling2D()
    layers.Dense(256, activation='relu'),
    layers.Dropout(0.5),  # regularization: randomly drops units during training
    layers.Dense(num_classes, activation='softmax')
])

from tensorflow.keras.optimizers.schedules import LearningRateSchedule
class OneCycleLR(LearningRateSchedule):
    def __init__(self, max_lr, total_steps, pct_start=0.3, div_factor=25.0, name=None):
        """
        Args:
            max_lr: peak learning rate (e.g. 3e-3)
            total_steps: total number of training steps (epochs * steps_per_epoch)
            pct_start: fraction of steps spent ramping up (default 30%)
            div_factor: initial learning rate = max_lr / div_factor
            name: kept for compatibility with the Keras naming convention
        """
        super().__init__()
        self.max_lr = max_lr
        self.total_steps = total_steps
        self.pct_start = pct_start  # fraction of steps in the ramp-up phase
        self.div_factor = div_factor  # initial learning rate = max_lr / div_factor
        self.name = name

    def __call__(self, step):
        step = tf.cast(step, tf.float32)
        up_steps = tf.cast(self.total_steps * self.pct_start, tf.float32)
        down_steps = tf.cast(self.total_steps - up_steps, tf.float32)

        initial_lr = self.max_lr / self.div_factor

        # ramp-up phase
        lr = tf.cond(
            step < up_steps,
            lambda: initial_lr + (self.max_lr - initial_lr) * (step / up_steps),
            # decay phase
            lambda: self.max_lr - (self.max_lr - initial_lr) * ((step - up_steps) / down_steps),
        )
        return lr

    def get_config(self):
        return {
            "max_lr": self.max_lr,
            "total_steps": self.total_steps,
            "pct_start": self.pct_start,
            "div_factor": self.div_factor,
            "name": self.name,
        }
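# A pure-Python mirror of the schedule with toy numbers, to sanity-check its shape
# (hypothetical values, independent of the actual training run): the rate starts at
# max_lr / div_factor, rises linearly to max_lr at pct_start * total_steps, then decays back.
def _one_cycle_ref(s, max_lr=1e-3, total_steps=100.0, pct_start=0.3, div_factor=25.0):
    initial_lr = max_lr / div_factor
    up = total_steps * pct_start
    if s < up:
        return initial_lr + (max_lr - initial_lr) * (s / up)
    return max_lr - (max_lr - initial_lr) * ((s - up) / (total_steps - up))
# _one_cycle_ref(0) == 4e-5 (the floor), _one_cycle_ref(30) == 1e-3 (the peak),
# _one_cycle_ref(100) == 4e-5 (back to the floor)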

# Initialize OneCycleLR; len(train_ds) is already the number of batches per epoch,
# so it must not be divided by BATCH_SIZE again
lr_schedule = OneCycleLR(max_lr=base_lr, total_steps=len(train_ds) * base_epochs)

# Compile the model, feeding the OneCycleLR schedule to the optimizer
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule, clipnorm=1.0),
    loss='sparse_categorical_crossentropy',  # use sparse_categorical for integer labels
    metrics=['accuracy']
)

# Set up callbacks
callbacks = [
    # tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True),
    tf.keras.callbacks.ModelCheckpoint('best_model.h5', save_best_only=True),
]

# Run the first training stage
history = model.fit(
    ds_enhancement(train_ds, is_train=True),
    epochs=base_epochs,  # match the epoch count used for the schedule's total_steps
    validation_data=ds_enhancement(val_ds),  # apply the same crop/caching to the validation set
    callbacks=callbacks
)

if Fine_Tuning_En:
    # Unfreeze the last N layers (here, the last 20)
    for layer in base_model.layers[-20:]:
        if not isinstance(layer, layers.BatchNormalization):  # keep BatchNorm layers frozen
            layer.trainable = True

    # Recompile the model (with a much smaller learning rate)
    model.compile(
        optimizer=tf.keras.optimizers.SGD(1e-5, momentum=0.9),
        # optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5, clipnorm=1.0),
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy']
        )

    # Continue training (fine-tuning stage)
    history = model.fit(
        ds_enhancement(train_ds, is_train=True),
        epochs=50,
        validation_data=ds_enhancement(val_ds),
        callbacks=callbacks
    )

# Evaluate the model; the test set needs the same preprocessing as validation
test_loss, test_acc = model.evaluate(ds_enhancement(test_ds))
print(f'Test accuracy: {test_acc}')

# Save the model
# model.save('my_multiclass_model.h5')

# Extract the training and validation loss
train_loss = history.history['loss']
val_loss = history.history['val_loss']

# Plot the curves
plt.figure(figsize=(10, 5))
plt.plot(train_loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.title('Training and Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.grid(True)
plt.show()


