"""
八维保险数据挖掘项目 - 保险销售预测 (完整企业级实现)
工单编号：INS-DM-20250205-09
版本：v2.0
日期：2025-08-01
"""

import os
import sys
import logging
import time
import joblib
import numpy as np
import pandas as pd
import matplotlib as mpl
# Use a non-interactive backend so plots render without a GUI
mpl.use('Agg')
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.model_selection import StratifiedKFold
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, confusion_matrix, classification_report
from sklearn.calibration import CalibratedClassifierCV
from sklearn.utils import class_weight

# Configure fonts so CJK characters render correctly in plots
plt.rcParams['font.sans-serif'] = ['SimHei', 'Microsoft YaHei', 'WenQuanYi Micro Hei']
plt.rcParams['axes.unicode_minus'] = False  # render minus signs correctly with CJK fonts

# Use XGBoost when a compatible version is installed; otherwise fall back to RandomForest
try:
    from xgboost import XGBClassifier
    XGB_AVAILABLE = True
except ImportError:
    print("XGBoost not available, using RandomForest as fallback")
    XGB_AVAILABLE = False

# ----------------------------
# Configuration
# ----------------------------
class Config:
    # Output directory
    OUTPUT_DIR = "reports"

    # File paths
    DATA_DIR = "data"
    TRAIN_FILE = "train.csv"
    TEST_FILE = "test.csv"
    SAMPLE_SUBMISSION = "sample_submission.csv"
    SUBMISSION_FILE = os.path.join(OUTPUT_DIR, "submission.csv")
    MODEL_FILE = os.path.join(OUTPUT_DIR, "insurance_sales_model.pkl")
    FEATURE_IMPORTANCE_PLOT = os.path.join(OUTPUT_DIR, "feature_importance.png")
    PREDICTION_DISTRIBUTION_PLOT = os.path.join(OUTPUT_DIR, "prediction_distribution.png")
    AUC_IMPROVEMENT_PLOT = os.path.join(OUTPUT_DIR, "auc_improvement.png")
    OPTIMIZATION_REPORT = os.path.join(OUTPUT_DIR, "optimization_report.md")
    LOG_FILE = os.path.join(OUTPUT_DIR, "insurance_prediction.log")

    # Random seed
    RANDOM_STATE = 42

    # Feature engineering
    AGE_BINS = [0, 25, 35, 45, 55, 65, 100]

    # Model training
    TEST_SIZE = 0.2
    CV_FOLDS = 5
    THRESHOLD = 0.35  # business-tuned decision threshold

    # Logging
    LOG_LEVEL = logging.INFO

    # Compatibility: use XGBoost only when the import succeeded
    USE_XGBOOST = XGB_AVAILABLE

# ----------------------------
# Logging setup
# ----------------------------
def setup_logging():
    # Make sure the output directory exists
    os.makedirs(Config.OUTPUT_DIR, exist_ok=True)

    logging.basicConfig(
        level=Config.LOG_LEVEL,
        format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
        handlers=[
            logging.FileHandler(Config.LOG_FILE, encoding="utf-8"),
            logging.StreamHandler(sys.stdout)
        ]
    )
    return logging.getLogger(__name__)

logger = setup_logging()

# ----------------------------
# Safe accessors for wrapped models
# ----------------------------
def get_base_model(model):
    """Safely unwrap calibration wrappers to reach the underlying estimator."""
    if hasattr(model, 'calibrated_classifiers_') and model.calibrated_classifiers_:
        wrapped = model.calibrated_classifiers_[0]
        # scikit-learn 1.2 renamed `base_estimator` to `estimator`; support both
        inner = getattr(wrapped, 'estimator', None)
        if inner is None:
            inner = getattr(wrapped, 'base_estimator', None)
        return get_base_model(inner)
    return model

# ----------------------------
# Data loading and validation
# ----------------------------
def load_data():
    """Load and validate the datasets."""
    logger.info("Loading data...")
    start_time = time.time()

    # Build file paths
    train_path = os.path.join(Config.DATA_DIR, Config.TRAIN_FILE)
    test_path = os.path.join(Config.DATA_DIR, Config.TEST_FILE)
    sample_path = os.path.join(Config.DATA_DIR, Config.SAMPLE_SUBMISSION)

    # Check that all files exist
    for path in [train_path, test_path, sample_path]:
        if not os.path.exists(path):
            raise FileNotFoundError(f"File not found: {path}")

    # Load the data
    try:
        train_df = pd.read_csv(train_path)
        test_df = pd.read_csv(test_path)
        sample_submission = pd.read_csv(sample_path)
    except Exception as e:
        logger.error(f"Failed to load data: {str(e)}")
        raise

    # Validate data integrity
    required_columns = [
        'id', 'Gender', 'Age', 'Driving_License', 'Region_Code',
        'Previously_Insured', 'Vehicle_Age', 'Vehicle_Damage',
        'Annual_Premium', 'Policy_Sales_Channel', 'Vintage'
    ]

    # Check the training set
    for col in required_columns + ['Response']:
        if col not in train_df.columns:
            raise ValueError(f"Training data is missing required column: {col}")

    # Check the test set
    for col in required_columns:
        if col not in test_df.columns:
            raise ValueError(f"Test data is missing required column: {col}")

    # Check the sample submission format
    if not {'id', 'Response'}.issubset(sample_submission.columns):
        raise ValueError("Sample submission file has an unexpected format")

    logger.info(f"Data loaded. Train: {train_df.shape}, Test: {test_df.shape}")
    logger.info(f"Data loading took {time.time() - start_time:.2f}s")

    return train_df, test_df, sample_submission

# ----------------------------
# Feature engineering
# ----------------------------
def feature_engineering(df):
    """Run feature engineering."""
    logger.info("Starting feature engineering...")
    start_time = time.time()

    # Work on a copy so the original frame is untouched
    df = df.copy()

    # Remember the original column count
    original_columns = len(df.columns)

    # Keep the id column aside for later
    id_col = df['id'] if 'id' in df.columns else None

    # 1. Missing-value handling
    if 'Region_Code' in df.columns:
        df['Region_Code'] = df['Region_Code'].fillna(-1).astype(int)

    if 'Policy_Sales_Channel' in df.columns:
        df['Policy_Sales_Channel'] = df['Policy_Sales_Channel'].fillna(-1).astype(int)

    # 2. Feature encoding
    if 'Gender' in df.columns:
        df['Gender'] = df['Gender'].map({'Male': 1, 'Female': 0})

    if 'Vehicle_Damage' in df.columns:
        df['Vehicle_Damage'] = df['Vehicle_Damage'].map({'Yes': 1, 'No': 0})

    if 'Vehicle_Age' in df.columns:
        vehicle_age_map = {'< 1 Year': 0, '1-2 Year': 1, '> 2 Years': 2}
        df['Vehicle_Age_Num'] = df['Vehicle_Age'].map(vehicle_age_map)

    # 3. Derived features
    if 'Age' in df.columns:
        df['Age_Group'] = pd.cut(
            df['Age'],
            bins=Config.AGE_BINS,
            labels=[f"Age_{i}-{j}" for i, j in zip(Config.AGE_BINS[:-1], Config.AGE_BINS[1:])]
        )

    if 'Annual_Premium' in df.columns:
        df['Premium_Log'] = np.log1p(df['Annual_Premium'])

    if 'Age' in df.columns and 'Vehicle_Age_Num' in df.columns:
        df['Age_Vehicle_Interaction'] = df['Age'] * df['Vehicle_Age_Num']

    if 'Age' in df.columns and 'Annual_Premium' in df.columns:
        df['Premium_Age_Ratio'] = df['Annual_Premium'] / (df['Age'] + 1)

    if 'Previously_Insured' in df.columns and 'Driving_License' in df.columns:
        df['Insurance_History'] = df['Previously_Insured'] * df['Driving_License']

    # 4. High-cardinality features: bucket into coarser groups
    if 'Region_Code' in df.columns:
        df['Region_Group'] = (df['Region_Code'] // 10).astype(str) + '0s'

    if 'Policy_Sales_Channel' in df.columns:
        df['Sales_Channel_Group'] = pd.cut(
            df['Policy_Sales_Channel'],
            bins=[0, 50, 100, 150, 200, 250, float('inf')],
            labels=['Chan_0-50', 'Chan_50-100', 'Chan_100-150', 'Chan_150-200', 'Chan_200-250', 'Chan_250+']
        )

    # Restore the id column
    if id_col is not None:
        df['id'] = id_col

    # Count the newly added features
    new_features_count = len(df.columns) - original_columns
    logger.info(f"Feature engineering done. New features: {new_features_count}")
    logger.info(f"Feature engineering took {time.time() - start_time:.2f}s")

    return df

# ----------------------------
# Preprocessing pipeline
# ----------------------------
def build_preprocessor():
    """Build the feature preprocessing pipeline."""
    # Numeric features
    numeric_features = [
        'Age', 'Driving_License', 'Previously_Insured',
        'Vintage', 'Vehicle_Age_Num', 'Premium_Log',
        'Age_Vehicle_Interaction', 'Premium_Age_Ratio',
        'Insurance_History'
    ]

    # Categorical features
    categorical_features = [
        'Gender', 'Vehicle_Damage', 'Age_Group',
        'Region_Group', 'Sales_Channel_Group'
    ]

    # Numeric pipeline
    numeric_transformer = Pipeline(steps=[
        ('imputer', SimpleImputer(strategy='median')),
        ('scaler', StandardScaler())
    ])

    # Categorical pipeline. scikit-learn 1.2 renamed `sparse` to `sparse_output`
    # (and 1.4 removed `sparse`), so try the new keyword first
    try:
        onehot = OneHotEncoder(handle_unknown='ignore', sparse_output=False)
    except TypeError:  # scikit-learn < 1.2
        onehot = OneHotEncoder(handle_unknown='ignore', sparse=False)
    categorical_transformer = Pipeline(steps=[
        ('imputer', SimpleImputer(strategy='most_frequent')),
        ('onehot', onehot)
    ])

    # Combine the transformers
    preprocessor = ColumnTransformer(
        transformers=[
            ('num', numeric_transformer, numeric_features),
            ('cat', categorical_transformer, categorical_features)
        ],
        remainder='drop'
    )

    return preprocessor

# ----------------------------
# Class imbalance handling
# ----------------------------
def handle_class_imbalance(y_train):
    """Handle class imbalance via class weights."""
    logger.info("Handling class imbalance...")
    class_counts = y_train.value_counts().to_dict()
    logger.info(f"Class distribution: {class_counts}")

    # Compute balanced class weights
    weights = class_weight.compute_class_weight(
        'balanced',
        classes=np.unique(y_train),
        y=y_train
    )
    class_weights = dict(zip(np.unique(y_train), weights))

    logger.info(f"Class weights: {class_weights}")
    return class_weights

# ----------------------------
# Model training and evaluation
# ----------------------------
def train_and_evaluate(X_train, y_train, X_val, y_val):
    """Train the model and evaluate it on the validation split."""
    logger.info("Starting model training...")
    start_time = time.time()

    # Build the preprocessing pipeline
    preprocessor = build_preprocessor()

    # Compute class weights
    class_weights = handle_class_imbalance(y_train)

    # Model selection
    if Config.USE_XGBOOST:
        logger.info("Using XGBoost")
        # With 'balanced' weights this ratio equals n_negative / n_positive
        scale_pos_weight = class_weights[1] / class_weights[0]
        # `use_label_encoder` was removed in XGBoost 2.0, so it is not passed here
        classifier = XGBClassifier(
            n_estimators=300,
            max_depth=5,
            learning_rate=0.05,
            subsample=0.8,
            colsample_bytree=0.8,
            random_state=Config.RANDOM_STATE,
            eval_metric='auc',
            scale_pos_weight=scale_pos_weight
        )
    else:
        logger.info("Using RandomForest")
        classifier = RandomForestClassifier(
            n_estimators=200,
            max_depth=10,
            class_weight=class_weights,
            random_state=Config.RANDOM_STATE
        )

    model = Pipeline(steps=[
        ('preprocessor', preprocessor),
        ('classifier', classifier)
    ])

    # Fit the model
    model.fit(X_train, y_train)

    # Evaluate on the validation split
    y_pred_proba = model.predict_proba(X_val)[:, 1]
    y_pred = (y_pred_proba > Config.THRESHOLD).astype(int)

    auc = roc_auc_score(y_val, y_pred_proba)
    cm = confusion_matrix(y_val, y_pred)
    report = classification_report(y_val, y_pred)

    logger.info(f"Training finished in {time.time() - start_time:.2f}s")
    logger.info(f"Validation AUC: {auc:.4f}")
    logger.info(f"Confusion matrix:\n{cm}")
    logger.info(f"Classification report:\n{report}")

    # Probability calibration
    logger.info("Calibrating the model...")
    calibrated_model = CalibratedClassifierCV(
        model,
        method='isotonic',
        cv=3
    )
    calibrated_model.fit(X_train, y_train)

    y_calib_proba = calibrated_model.predict_proba(X_val)[:, 1]
    calib_auc = roc_auc_score(y_val, y_calib_proba)
    logger.info(f"AUC after calibration: {calib_auc:.4f}")

    return calibrated_model, auc, calib_auc

# ----------------------------
# Cross-validated training
# ----------------------------
def cross_validation_train(X, y):
    """Train a more robust model with cross-validation."""
    logger.info("Starting cross-validated training...")
    start_time = time.time()

    skf = StratifiedKFold(
        n_splits=Config.CV_FOLDS,
        shuffle=True,
        random_state=Config.RANDOM_STATE
    )

    auc_scores = []
    models = []

    for fold, (train_idx, val_idx) in enumerate(skf.split(X, y)):
        logger.info(f"Training fold {fold+1}/{Config.CV_FOLDS}")

        X_train, X_val = X.iloc[train_idx], X.iloc[val_idx]
        y_train, y_val = y.iloc[train_idx], y.iloc[val_idx]

        model, auc, _ = train_and_evaluate(X_train, y_train, X_val, y_val)
        auc_scores.append(auc)
        models.append(model)

    # Average AUC across folds
    mean_auc = np.mean(auc_scores)
    logger.info(f"Mean cross-validation AUC: {mean_auc:.4f}")
    logger.info(f"Cross-validation took {time.time() - start_time:.2f}s")

    # Refit the final model on all data. The "validation" split here is the
    # training data itself, so the metrics logged for this fit are optimistic;
    # report the cross-validation AUC instead.
    logger.info("Training the final model...")
    final_model, _, _ = train_and_evaluate(X, y, X, y)

    return final_model, mean_auc

# ----------------------------
# Feature importance analysis
# ----------------------------
def plot_feature_importance(model, X):
    """Analyze and visualize feature importances."""
    logger.info("Analyzing feature importances...")

    # Safely unwrap any calibration wrapper
    base_model = get_base_model(model)

    if not hasattr(base_model, 'named_steps'):
        logger.warning("Base model is not a Pipeline; cannot analyze feature importances")
        return None

    preprocessor = base_model.named_steps['preprocessor']
    classifier = base_model.named_steps['classifier']

    # Numeric feature names
    num_features = preprocessor.transformers_[0][2]

    # Categorical feature names
    try:
        cat_transformer = preprocessor.transformers_[1][1]
        if hasattr(cat_transformer.named_steps['onehot'], 'get_feature_names_out'):
            cat_features = cat_transformer.named_steps['onehot'].get_feature_names_out(
                input_features=preprocessor.transformers_[1][2]
            )
        else:
            # Fallback for scikit-learn < 1.0
            cat_features = cat_transformer.named_steps['onehot'].get_feature_names()
    except Exception as e:
        logger.error(f"Failed to get categorical feature names: {str(e)}")
        return None

    # Combine all feature names
    all_features = list(num_features) + list(cat_features)

    # Extract feature importances
    if hasattr(classifier, 'feature_importances_'):
        importances = classifier.feature_importances_
    else:
        logger.warning("Model has no feature_importances_ attribute")
        return None

    # Build the importance DataFrame (the '特征'/'重要性' column names are also
    # referenced by the report generator, so they are kept as-is)
    feature_importance = pd.DataFrame({
        '特征': all_features,
        '重要性': importances
    }).sort_values('重要性', ascending=False)

    # Visualize
    plt.figure(figsize=(14, 10))
    sns.barplot(x='重要性', y='特征', data=feature_importance.head(20), palette="viridis")
    plt.title('Top 20 Feature Importances', fontsize=16, fontweight='bold')
    plt.xlabel('Importance score', fontsize=12)
    plt.ylabel('Feature', fontsize=12)
    plt.grid(axis='x', linestyle='--', alpha=0.7)
    plt.tight_layout()
    plt.savefig(Config.FEATURE_IMPORTANCE_PLOT, dpi=300, bbox_inches='tight')
    plt.close()
    logger.info(f"Feature importance plot saved to {Config.FEATURE_IMPORTANCE_PLOT}")

    return feature_importance

# ----------------------------
# Prediction generation
# ----------------------------
def generate_predictions(model, test_df, sample_submission, features):
    """Generate predictions and save them in the submission format."""
    logger.info("Generating predictions...")
    start_time = time.time()

    # Work on a copy so the caller's frame is not mutated
    test_df = test_df.copy()

    # Make sure the test set has an id column
    if 'id' not in test_df.columns:
        logger.error("Test data is missing the id column")
        raise ValueError("Test data is missing the id column")

    # Make sure the test set contains every feature column
    missing_features = [f for f in features if f not in test_df.columns]
    if missing_features:
        logger.warning(f"Test data is missing features: {missing_features}")
        for f in missing_features:
            test_df[f] = 0  # fill missing features with a neutral default

    # Select the prediction features
    X_test = test_df[features]

    # Predict probabilities
    logger.info("Running model prediction...")
    try:
        test_probs = model.predict_proba(X_test)[:, 1]
    except Exception as e:
        logger.error(f"Prediction failed: {str(e)}")
        raise

    # Apply the business decision threshold
    test_preds = (test_probs > Config.THRESHOLD).astype(int)

    # Build the submission file
    submission = pd.DataFrame({
        'id': test_df['id'],
        'Response': test_preds
    })

    # Validate the format
    if not submission.columns.equals(sample_submission.columns):
        logger.warning("Submission columns do not match the sample format; reordering")
        submission = submission[sample_submission.columns]

    # Save the result
    submission.to_csv(Config.SUBMISSION_FILE, index=False)
    logger.info(f"Predictions saved to {Config.SUBMISSION_FILE}")

    # Plot the probability distribution
    plt.figure(figsize=(12, 7))
    sns.histplot(test_probs, bins=50, kde=True, color='royalblue')
    plt.title('Predicted Probability Distribution', fontsize=16, fontweight='bold')
    plt.xlabel('Probability of purchasing vehicle insurance', fontsize=12)
    plt.ylabel('Number of customers', fontsize=12)

    # Mark the decision threshold
    plt.axvline(x=Config.THRESHOLD, color='r', linestyle='--', linewidth=2)
    plt.text(
        Config.THRESHOLD + 0.01, plt.ylim()[1]*0.9,
        f'Decision threshold: {Config.THRESHOLD}',
        color='r', fontsize=12, fontweight='bold'
    )

    # Mark the mean probability
    mean_prob = np.mean(test_probs)
    plt.axvline(x=mean_prob, color='g', linestyle='-', linewidth=1.5)
    plt.text(
        mean_prob + 0.01, plt.ylim()[1]*0.8,
        f'Mean probability: {mean_prob:.2f}',
        color='g', fontsize=10
    )

    plt.grid(axis='y', linestyle='--', alpha=0.7)
    plt.savefig(Config.PREDICTION_DISTRIBUTION_PLOT, dpi=300, bbox_inches='tight')
    plt.close()
    logger.info(f"Prediction distribution plot saved to {Config.PREDICTION_DISTRIBUTION_PLOT}")

    logger.info(f"Prediction generation took {time.time() - start_time:.2f}s")
    return submission

# ----------------------------
# Model persistence and metadata
# ----------------------------
def save_model_with_metadata(model, feature_names, auc_score):
    """Save the model together with its metadata."""
    logger.info("Saving model and metadata...")
    model_type = type(model).__name__

    if hasattr(model, 'calibrated_classifiers_') and model.calibrated_classifiers_:
        wrapped = model.calibrated_classifiers_[0]
        # scikit-learn 1.2 renamed `base_estimator` to `estimator`; support both
        inner = getattr(wrapped, 'estimator', None)
        if inner is None:
            inner = getattr(wrapped, 'base_estimator', None)
        model_type = f"Calibrated_{type(inner).__name__}"

    # Assemble the metadata
    metadata = {
        'model_type': model_type,
        'feature_names': feature_names,
        'threshold': Config.THRESHOLD,
        'auc_score': auc_score,
        'creation_date': time.strftime("%Y-%m-%d"),
        'version': '2.0',
        'ticket_id': 'INS-DM-20250205-09'
    }

    # Bundle model and metadata
    model_data = {
        'model': model,
        'metadata': metadata
    }

    joblib.dump(model_data, Config.MODEL_FILE)
    logger.info(f"Model and metadata saved to {Config.MODEL_FILE}")
    logger.info(f"Model metadata: {metadata}")

    return model_data

# ----------------------------
# Optimization report
# ----------------------------
def generate_optimization_report(cv_auc, feature_importance, threshold, model_type):
    """Generate the model optimization report."""
    logger.info("Generating the optimization report...")

    with open(Config.OPTIMIZATION_REPORT, 'w', encoding='utf-8') as f:
        f.write("# Insurance Sales Prediction - Model Optimization Report\n\n")
        f.write(f"Generated at: {time.strftime('%Y-%m-%d %H:%M:%S')}\n")
        f.write("Model version: v2.0\n")
        f.write("Ticket ID: INS-DM-20250205-09\n\n")

        # 1. Model performance overview
        f.write("## 1. Model Performance Overview\n")
        f.write(f"- Model type: {model_type}\n")
        f.write(f"- Cross-validation AUC: {cv_auc:.4f}\n")
        f.write(f"- Decision threshold: {threshold}\n")
        f.write(f"- Cross-validation folds: {Config.CV_FOLDS}\n\n")

        # 2. Feature importance analysis
        f.write("## 2. Feature Importance Analysis\n")
        if feature_importance is not None:
            top_features = feature_importance.head(10)
            f.write("Top 10 features:\n")
            for idx, row in top_features.iterrows():
                f.write(f"- {row['特征']}: {row['重要性']:.4f}\n")
        else:
            f.write("Feature importance data is unavailable\n")
        f.write(f"Feature importance plot: {Config.FEATURE_IMPORTANCE_PLOT}\n\n")

        # 3. Business recommendations
        f.write("## 3. Business Recommendations\n")
        f.write("1. **Target customer segmentation**:\n")
        f.write("   - Based on the top features, focus on customers with older vehicles and ages between 35 and 55\n")
        f.write("   - Design dedicated campaigns for previously uninsured customers to raise conversion\n\n")

        f.write("2. **Threshold tuning strategy**:\n")
        f.write("   - The current threshold is 0.35 and can be adjusted to business goals:\n")
        f.write("     - Lower it to 0.30 to reach more potential customers\n")
        f.write("     - Raise it to 0.40 for higher precision\n\n")

        f.write("3. **Model improvement directions**:\n")
        f.write("   - Collect more customer behavior data, e.g. claim history and vehicle type\n")
        f.write("   - Try ensembling multiple models (e.g. XGBoost + LightGBM) for further gains\n")
        f.write("   - Add time-series analysis to capture seasonal sales patterns\n\n")

        # 4. Prediction distribution analysis
        f.write("## 4. Prediction Distribution Analysis\n")
        f.write(f"Predicted probability distribution: {Config.PREDICTION_DISTRIBUTION_PLOT}\n")
        f.write("Use the distribution to build tiered marketing and prioritize high-probability customers\n")

    logger.info(f"Optimization report generated: {Config.OPTIMIZATION_REPORT}")


# ----------------------------
# Main workflow
# ----------------------------
def main():
    """Main entry point."""
    logger.info("===== 八维 Insurance Data Mining Project - Insurance Sales Prediction v2.0 started =====")
    overall_start_time = time.time()

    try:
        # 1. Load the data
        train_df, test_df, sample_submission = load_data()

        # 2. Run feature engineering
        train_fe = feature_engineering(train_df)
        test_fe = feature_engineering(test_df)

        # 3. Prepare the training data: define feature and target columns
        target_col = 'Response'
        exclude_cols = [target_col, 'id']  # exclude the target and id columns
        feature_cols = [col for col in train_fe.columns if col not in exclude_cols]

        logger.info(f"Number of feature columns: {len(feature_cols)}")
        logger.info(f"First 5 feature columns: {feature_cols[:5]}...")

        # Extract features and target
        X = train_fe[feature_cols]
        y = train_fe[target_col]

        # 4. Train with cross-validation
        final_model, cv_auc = cross_validation_train(X, y)

        # 5. Analyze feature importances
        feature_importance = plot_feature_importance(final_model, X)

        # 6. Generate predictions
        submission = generate_predictions(
            model=final_model,
            test_df=test_fe,
            sample_submission=sample_submission,
            features=feature_cols
        )

        # 7. Save the model and metadata
        save_model_with_metadata(
            model=final_model,
            feature_names=feature_cols,
            auc_score=cv_auc
        )

        # 8. Generate the optimization report
        generate_optimization_report(
            cv_auc=cv_auc,
            feature_importance=feature_importance,
            threshold=Config.THRESHOLD,
            model_type="XGBoost" if Config.USE_XGBOOST else "RandomForest"
        )

        # Success
        logger.info("===== Training and prediction workflow finished =====")
        logger.info(f"Total runtime: {time.time() - overall_start_time:.2f}s")
        logger.info(f"Submission file: {Config.SUBMISSION_FILE}")
        logger.info(f"Model file: {Config.MODEL_FILE}")
        logger.info(f"Optimization report: {Config.OPTIMIZATION_REPORT}")

    except Exception:
        logger.error("Program execution failed", exc_info=True)
        sys.exit(1)

# ----------------------------
# Entry point
# ----------------------------
if __name__ == "__main__":
    main()
