"""
分层模型策略管理模块
根据任务复杂度自动选择合适的模型后端和配置
支持vLLM、Ollama等多种后端的智能选择
"""

import asyncio
import re
from abc import ABC, abstractmethod
from typing import Optional, Dict, Any, Tuple, List
from enum import Enum
from loguru import logger

from config.model_config import ModelBackend, ModelSize, model_config_manager
from middleware.model_interface import get_model_client, BaseModelClient


class TaskComplexity(Enum):
    """任务复杂度级别"""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    VERY_HIGH = "very_high"


class TaskType(Enum):
    """任务类型"""
    TEXT_QA = "text_qa"              # 文本问答
    MULTI_MODAL = "multi_modal"      # 多模态
    CODE_GENERATION = "code_gen"     # 代码生成
    SUMMARIZATION = "summarization"  # 文本摘要
    TRANSLATION = "translation"      # 翻译
    REASONING = "reasoning"          # 推理任务
    OTHER = "other"                  # 其他任务


class ModelStrategy(ABC):
    """Base class for model selection strategies."""
    
    @abstractmethod
    def select_model(
        self,
        task_complexity: TaskComplexity,
        task_type: TaskType,
        **kwargs
    ) -> Tuple[str, ModelBackend]:
        """Select a model configuration.
        
        Args:
            task_complexity: Task complexity level.
            task_type: Task type.
            **kwargs: Extra parameters.
            
        Returns:
            Tuple[str, ModelBackend]: Configuration name and backend type.
        """
        raise NotImplementedError


class DefaultModelStrategy(ModelStrategy):
    """Default model selection strategy.
    
    Chooses a model based on task complexity and type:
    - Low complexity: lightweight models
    - Medium complexity: mid-sized models
    - High complexity: large models
    - Very high complexity: the largest models
    """
    
    def __init__(self):
        # Complexity-to-model mapping
        self.complexity_model_map = {
            TaskComplexity.LOW: {
                TaskType.TEXT_QA: ("ollama-llama3", ModelBackend.OLLAMA),
                TaskType.MULTI_MODAL: ("vllm-7b", ModelBackend.VLLM),
                TaskType.CODE_GENERATION: ("vllm-7b", ModelBackend.VLLM),
                TaskType.SUMMARIZATION: ("ollama-llama3", ModelBackend.OLLAMA),
                TaskType.TRANSLATION: ("ollama-llama3", ModelBackend.OLLAMA),
                TaskType.REASONING: ("vllm-7b", ModelBackend.VLLM),
                TaskType.OTHER: ("ollama-llama3", ModelBackend.OLLAMA)
            },
            TaskComplexity.MEDIUM: {
                TaskType.TEXT_QA: ("vllm-7b", ModelBackend.VLLM),
                TaskType.MULTI_MODAL: ("vllm-7b", ModelBackend.VLLM),
                TaskType.CODE_GENERATION: ("vllm-7b", ModelBackend.VLLM),
                TaskType.SUMMARIZATION: ("vllm-7b", ModelBackend.VLLM),
                TaskType.TRANSLATION: ("vllm-7b", ModelBackend.VLLM),
                TaskType.REASONING: ("vllm-7b", ModelBackend.VLLM),
                TaskType.OTHER: ("vllm-7b", ModelBackend.VLLM)
            },
            TaskComplexity.HIGH: {
                TaskType.TEXT_QA: ("vllm-7b", ModelBackend.VLLM),
                TaskType.MULTI_MODAL: ("vllm-7b", ModelBackend.VLLM),
                TaskType.CODE_GENERATION: ("vllm-7b", ModelBackend.VLLM),
                TaskType.SUMMARIZATION: ("vllm-7b", ModelBackend.VLLM),
                TaskType.TRANSLATION: ("vllm-7b", ModelBackend.VLLM),
                TaskType.REASONING: ("vllm-7b", ModelBackend.VLLM),
                TaskType.OTHER: ("vllm-7b", ModelBackend.VLLM)
            },
            TaskComplexity.VERY_HIGH: {
                TaskType.TEXT_QA: ("vllm-7b", ModelBackend.VLLM),
                TaskType.MULTI_MODAL: ("vllm-7b", ModelBackend.VLLM),
                TaskType.CODE_GENERATION: ("vllm-7b", ModelBackend.VLLM),
                TaskType.SUMMARIZATION: ("vllm-7b", ModelBackend.VLLM),
                TaskType.TRANSLATION: ("vllm-7b", ModelBackend.VLLM),
                TaskType.REASONING: ("vllm-7b", ModelBackend.VLLM),
                TaskType.OTHER: ("vllm-7b", ModelBackend.VLLM)
            }
        }
    
    def select_model(
        self,
        task_complexity: TaskComplexity,
        task_type: TaskType,
        **kwargs
    ) -> Tuple[str, ModelBackend]:
        """选择模型配置"""
        # 首先检查是否有针对此任务类型的特定配置
        if task_type in self.complexity_model_map.get(task_complexity, {}):
            return self.complexity_model_map[task_complexity][task_type]
        
        # 回退到OTHER类型
        return self.complexity_model_map[task_complexity].get(TaskType.OTHER, 
                                                            ("ollama-llama3", ModelBackend.OLLAMA))


class ResourceAwareModelStrategy(ModelStrategy):
    """Resource-aware model selection strategy.
    
    Picks a model dynamically based on system resource state:
    - GPU available and load is low: large vLLM model
    - GPU available but load is high: mid-sized Ollama model
    - No GPU: lightweight Ollama model
    """
    
    def __init__(self):
        self.system_resources = {
            "has_gpu": True,  # Assume a GPU is present by default
            "gpu_memory_available": 80,  # Percent of GPU memory free
            "cpu_usage": 30  # Percent CPU utilization
        }
    
    def update_resources(self, resources: Dict[str, Any]):
        """Update the system resource snapshot."""
        self.system_resources.update(resources)
    
    def select_model(
        self,
        task_complexity: TaskComplexity,
        task_type: TaskType,
        **kwargs
    ) -> Tuple[str, ModelBackend]:
        """Select a model based on resource state."""
        has_gpu = self.system_resources.get("has_gpu", False)
        gpu_memory = self.system_resources.get("gpu_memory_available", 0)
        cpu_usage = self.system_resources.get("cpu_usage", 100)
        
        # Without a GPU, force Ollama.
        if not has_gpu:
            return "ollama-llama3", ModelBackend.OLLAMA
        
        # With ample GPU memory and low CPU load, use vLLM.
        if gpu_memory > 50 and cpu_usage < 70:
            return "vllm-7b", ModelBackend.VLLM
        
        # Otherwise, use Ollama.
        return "ollama-llama3", ModelBackend.OLLAMA


class TaskComplexityAnalyzer:
    """Task complexity analyzer."""
    
    def __init__(self):
        self.logger = logger
        # Complexity scoring rules (the keyword lists intentionally mix
        # Chinese and English so they match task text in either language).
        self.complexity_rules = {
            "high_complexity_keywords": [
                "分析", "推理", "论证", "证明", "推导", "优化", "设计",
                "规划", "解决方案", "复杂", "详细", "全面", "深入"
            ],
            "code_keywords": [
                "编写", "实现", "代码", "程序", "算法", "function",
                "class", "method", "def", "代码示例"
            ],
            "multi_step_keywords": [
                "首先", "然后", "接着", "最后", "步骤", "过程", "顺序",
                "第一", "第二", "第三", "依次"
            ]
        }
    
    def analyze_complexity(
        self,
        task_text: str,
        task_type: TaskType,
        **kwargs
    ) -> TaskComplexity:
        """Analyze task complexity.
        
        Args:
            task_text: Task text.
            task_type: Task type.
            **kwargs: Extra parameters.
            
        Returns:
            TaskComplexity: Complexity level.
        """
        if not task_text:
            return TaskComplexity.MEDIUM
        
        complexity_score = 0
        
        # Text length analysis.
        text_length = len(task_text)
        if text_length > 1000:
            complexity_score += 3
        elif text_length > 500:
            complexity_score += 2
        elif text_length > 200:
            complexity_score += 1
        
        # Keyword analysis.
        lower_text = task_text.lower()
        
        # High-complexity keywords.
        for keyword in self.complexity_rules["high_complexity_keywords"]:
            if keyword in lower_text:
                complexity_score += 2
        
        # Code-related keywords.
        for keyword in self.complexity_rules["code_keywords"]:
            if keyword in lower_text:
                complexity_score += 2
        
        # Multi-step keywords.
        for keyword in self.complexity_rules["multi_step_keywords"]:
            if keyword in lower_text:
                complexity_score += 1
        
        # Structural signals.
        if re.search(r'```[\s\S]*?```', task_text):  # Fenced code block
            complexity_score += 3
        
        if re.search(r'\b(\d+\.)\s+', task_text):  # Ordered list
            complexity_score += 1
        
        # Adjust by task type.
        if task_type == TaskType.MULTI_MODAL:
            complexity_score += 2
        elif task_type == TaskType.CODE_GENERATION:
            complexity_score += 2
        elif task_type == TaskType.REASONING:
            complexity_score += 3
        elif task_type == TaskType.TEXT_QA:
            # Plain QA carries no extra weight.
            pass
        
        # Map the score to a complexity level.
        if complexity_score >= 8:
            return TaskComplexity.VERY_HIGH
        elif complexity_score >= 5:
            return TaskComplexity.HIGH
        elif complexity_score >= 2:
            return TaskComplexity.MEDIUM
        else:
            return TaskComplexity.LOW
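    # The additive scoring above can be exercised in isolation. A minimal
    # standalone sketch with an illustrative (English-only) keyword list and
    # the same length/code-fence heuristics; the names here are hypothetical:
    #
    # ```python
    # import re
    #
    # HIGH_KEYWORDS = ["analyze", "prove", "optimize"]  # illustrative stand-ins
    #
    # def complexity_score(text: str) -> int:
    #     score = 0
    #     # Longer prompts tend to be more complex.
    #     n = len(text)
    #     if n > 1000:
    #         score += 3
    #     elif n > 500:
    #         score += 2
    #     elif n > 200:
    #         score += 1
    #     lower = text.lower()
    #     score += sum(2 for kw in HIGH_KEYWORDS if kw in lower)
    #     # A fenced code block is a strong complexity signal.
    #     if re.search(r"```[\s\S]*?```", text):
    #         score += 3
    #     return score
    #
    # complexity_score("Please analyze this: ```x = 1```")  # 2 (keyword) + 3 (fence) = 5
    # ```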
    
    def detect_task_type(self, task_text: str) -> TaskType:
        """Auto-detect the task type.
        
        Args:
            task_text: Task text.
            
        Returns:
            TaskType: Detected task type.
        """
        if not task_text:
            return TaskType.OTHER
        
        lower_text = task_text.lower()
        
        # Code generation tasks.
        code_patterns = [
            '编写代码', '实现', '代码示例', 'function', 'class',
            '写一个', '生成代码', 'code', '编程', 'coding'
        ]
        for pattern in code_patterns:
            if pattern in lower_text:
                return TaskType.CODE_GENERATION
        
        # Multi-modal tasks.
        multi_modal_patterns = [
            '图片', '图像', '照片', 'ocr', '识别', '视觉',
            'image', 'photo', 'vision', 'visual'
        ]
        for pattern in multi_modal_patterns:
            if pattern in lower_text:
                return TaskType.MULTI_MODAL
        
        # Summarization tasks.
        summarization_patterns = [
            '总结', '摘要', '概括', '归纳', '总结一下',
            'summary', 'summarize', 'summarization'
        ]
        for pattern in summarization_patterns:
            if pattern in lower_text:
                return TaskType.SUMMARIZATION
        
        # Translation tasks (matched with re.search so regex-style
        # patterns like 'from.*to' actually work).
        translation_patterns = [
            '翻译', 'translate', 'translation', '翻译成',
            'from.*to', 'to.*from', '中英文'
        ]
        for pattern in translation_patterns:
            if re.search(pattern, lower_text):
                return TaskType.TRANSLATION
        
        # Reasoning tasks.
        reasoning_patterns = [
            '为什么', '原因是', '推理', '分析', '论证',
            '推导', '证明', 'because', 'why', 'analyze'
        ]
        for pattern in reasoning_patterns:
            if pattern in lower_text:
                return TaskType.REASONING
        
        # Default to text question answering.
        return TaskType.TEXT_QA
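# Detection above is first-match-wins, so the order of the pattern groups
# encodes priority (code beats translation, and so on). A minimal standalone
# sketch of that behavior; the labels and keyword lists are illustrative:
#
# ```python
# PATTERNS = [
#     ("code", ["implement", "function", "class"]),
#     ("translation", ["translate"]),
# ]
#
# def detect(text: str) -> str:
#     lower = text.lower()
#     # The first matching category wins, so ordering encodes priority.
#     for label, keywords in PATTERNS:
#         if any(kw in lower for kw in keywords):
#             return label
#     return "qa"
#
# detect("Please translate this function")  # "code": code patterns are checked first
# ```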


class ModelSelector:
    """Model selector.
    
    Combines complexity analysis with a selection strategy to provide
    a unified model selection interface.
    """
    
    def __init__(self, strategy: Optional[ModelStrategy] = None):
        self.logger = logger
        self.analyzer = TaskComplexityAnalyzer()
        self.strategy = strategy or DefaultModelStrategy()
        self.fallback_config = "ollama-llama3"
        self.fallback_backend = ModelBackend.OLLAMA
    
    def set_strategy(self, strategy: ModelStrategy):
        """Set the model selection strategy."""
        self.strategy = strategy
    
    async def select_model_for_task(
        self,
        task_text: str,
        task_type: Optional[TaskType] = None,
        force_backend: Optional[ModelBackend] = None,
        **kwargs
    ) -> Tuple[str, ModelBackend, BaseModelClient]:
        """Select an appropriate model for a task.
        
        Args:
            task_text: Task text.
            task_type: Task type; auto-detected when None.
            force_backend: Backend to force; overrides strategy selection.
            **kwargs: Extra parameters.
            
        Returns:
            Tuple[str, ModelBackend, BaseModelClient]: Configuration name,
            backend type, and client instance.
        """
        try:
            # Auto-detect the task type.
            if task_type is None:
                task_type = self.analyzer.detect_task_type(task_text)
                self.logger.info(f"Auto-detected task type: {task_type.value}")
            
            # Analyze complexity.
            complexity = self.analyzer.analyze_complexity(task_text, task_type, **kwargs)
            self.logger.info(f"Task complexity: {complexity.value}")
            
            # Select a model configuration.
            if force_backend:
                # A forced backend uses that backend's default configuration.
                config_name, backend = self._get_default_config_for_backend(force_backend)
                self.logger.info(f"Forced backend: {backend.value}, config: {config_name}")
            else:
                # Let the strategy pick the model.
                config_name, backend = self.strategy.select_model(complexity, task_type, **kwargs)
                self.logger.info(
                    f"Strategy selection: {backend.value} "
                    f"(config: {config_name}, complexity: {complexity.value})"
                )
            
            # Obtain the model client.
            client = get_model_client(backend=backend, config_name=config_name)
            
            return config_name, backend, client
            
        except Exception as e:
            self.logger.error(f"Model selection failed: {str(e)}")
            # Degrade to the fallback configuration.
            self.logger.warning(
                f"Using fallback config: {self.fallback_config} ({self.fallback_backend.value})"
            )
            try:
                client = get_model_client(
                    backend=self.fallback_backend,
                    config_name=self.fallback_config
                )
                return self.fallback_config, self.fallback_backend, client
            except Exception:
                self.logger.critical("Fallback configuration also failed; check system configuration")
                raise
    
    def _get_default_config_for_backend(self, backend: ModelBackend) -> Tuple[str, ModelBackend]:
        """获取指定后端的默认配置"""
        config_map = {
            ModelBackend.VLLM: ("vllm-7b", ModelBackend.VLLM),
            ModelBackend.OLLAMA: ("ollama-llama3", ModelBackend.OLLAMA),
            ModelBackend.LLAMA_CPP: ("llama-cpp-7b", ModelBackend.LLAMA_CPP)
        }
        return config_map.get(backend, ("ollama-llama3", ModelBackend.OLLAMA))
    
    async def execute_with_selected_model(
        self,
        task_text: str,
        messages: Optional[List[Dict[str, str]]] = None,
        task_type: Optional[TaskType] = None,
        force_backend: Optional[ModelBackend] = None,
        **kwargs
    ) -> str:
        """Execute a task with the selected model.
        
        Args:
            task_text: Task text.
            messages: Chat message list (when using chat_completion).
            task_type: Task type.
            force_backend: Backend to force.
            **kwargs: Extra parameters.
            
        Returns:
            str: Model output.
        """
        # Select the model.
        config_name, backend, client = await self.select_model_for_task(
            task_text, task_type, force_backend, **kwargs
        )
        
        # Pop defaults out of kwargs so they are not passed twice
        # (passing them both explicitly and via **kwargs raises TypeError).
        max_tokens = kwargs.pop("max_tokens", 2048)
        temperature = kwargs.pop("temperature", 0.7)
        
        # Execute the task.
        try:
            if messages:
                # Chat completion interface.
                result = await client.chat_completion(
                    messages=messages,
                    max_tokens=max_tokens,
                    temperature=temperature,
                    **kwargs
                )
            else:
                # Plain generation interface.
                result = await client.generate(
                    prompt=task_text,
                    max_tokens=max_tokens,
                    temperature=temperature,
                    **kwargs
                )
            
            self.logger.info(f"Model execution succeeded: {backend.value}")
            return result
            
        except Exception as e:
            self.logger.error(f"Model execution failed: {str(e)}")
            # On failure, try degrading to the Ollama backend.
            if backend != ModelBackend.OLLAMA:
                self.logger.warning("Falling back to the Ollama backend")
                try:
                    fallback_client = get_model_client(backend=ModelBackend.OLLAMA)
                    if messages:
                        result = await fallback_client.chat_completion(
                            messages=messages,
                            max_tokens=max_tokens,
                            temperature=temperature,
                            **kwargs
                        )
                    else:
                        result = await fallback_client.generate(
                            prompt=task_text,
                            max_tokens=max_tokens,
                            temperature=temperature,
                            **kwargs
                        )
                    return result
                except Exception:
                    self.logger.critical("Fallback execution also failed")
                    raise
            raise
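# The try-primary-then-degrade shape used above can be shown in a minimal
# standalone async sketch. The backend calls here are stand-ins (hypothetical
# names), not this module's real client API:
#
# ```python
# import asyncio
#
# class BackendError(RuntimeError):
#     pass
#
# async def call_primary(prompt: str) -> str:
#     # Stand-in for a vLLM call that fails, to exercise the fallback path.
#     raise BackendError("primary backend unavailable")
#
# async def call_fallback(prompt: str) -> str:
#     return f"[fallback] {prompt}"
#
# async def generate_with_fallback(prompt: str) -> str:
#     # Same shape as execute_with_selected_model: try the chosen backend,
#     # then degrade to a second backend before giving up.
#     try:
#         return await call_primary(prompt)
#     except BackendError:
#         return await call_fallback(prompt)
#
# asyncio.run(generate_with_fallback("hello"))  # "[fallback] hello"
# ```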



# Global model selector instance
model_selector = ModelSelector()

# Public exports
__all__ = [
    'TaskComplexity',
    'TaskType',
    'ModelStrategy',
    'DefaultModelStrategy',
    'ResourceAwareModelStrategy',
    'TaskComplexityAnalyzer',
    'ModelSelector',
    'model_selector'
]