"""TensorRT推理引擎"""
import numpy as np
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit
from typing import List, Union
from .base_engine import InferenceEngine

class TensorRTEngine(InferenceEngine):
    """TensorRT推理引擎"""
    
    def _initialize(self):
        """Initialize the TensorRT runtime, engine, and execution context."""
        self.logger = trt.Logger(trt.Logger.ERROR)
        runtime = trt.Runtime(self.logger)
        
        # Load the serialized engine from disk
        with open(self.model_path, 'rb') as f:
            engine_data = f.read()
        
        self.engine = runtime.deserialize_cuda_engine(engine_data)
        if self.engine is None:
            # deserialize_cuda_engine returns None on failure (e.g. the plan
            # was built with an incompatible TensorRT version or GPU)
            raise RuntimeError(f"Failed to deserialize TensorRT engine: {self.model_path}")
        self.context = self.engine.create_execution_context()
        self.stream = cuda.Stream()
        
        # Set up input/output bindings
        self._setup_io_bindings()
    
    def _setup_io_bindings(self):
        """Set up input/output tensor bindings and allocate device buffers."""
        # Enumerate I/O tensor names via the TensorRT >= 8.5 tensor API
        tensor_names = [
            self.engine.get_tensor_name(i)
            for i in range(self.engine.num_io_tensors)
        ]
        
        # Input setup (assumes the engine has exactly one input tensor)
        input_names = [name for name in tensor_names
                       if self.engine.get_tensor_mode(name) == trt.TensorIOMode.INPUT]
        self.input_name = input_names[0]
        
        self.input_shape = self._fix_shape(
            self.engine.get_tensor_shape(self.input_name)
        )
        self.input_dtype = trt.nptype(
            self.engine.get_tensor_dtype(self.input_name)
        )
        
        # Engines built with dynamic shapes need a concrete input shape
        # before execution and before output shapes can be resolved
        self.context.set_input_shape(self.input_name, self.input_shape)
        
        # Allocate input GPU memory and bind its address
        input_size = int(np.prod(self.input_shape) * np.dtype(self.input_dtype).itemsize)
        self.d_input = cuda.mem_alloc(input_size)
        self.context.set_tensor_address(self.input_name, int(self.d_input))
        
        # Output setup
        self.output_names = [name for name in tensor_names
                             if self.engine.get_tensor_mode(name) == trt.TensorIOMode.OUTPUT]
        
        # Query output shapes from the execution context so that dynamic
        # dimensions are resolved against the input shape set above
        self.output_shapes = [
            self._fix_shape(self.context.get_tensor_shape(name))
            for name in self.output_names
        ]
        self.output_dtypes = [
            trt.nptype(self.engine.get_tensor_dtype(name))
            for name in self.output_names
        ]
        
        # Allocate output GPU memory and bind addresses
        self.d_outputs = []
        for name, shape, dtype in zip(self.output_names, self.output_shapes,
                                      self.output_dtypes):
            size = int(np.prod(shape) * np.dtype(dtype).itemsize)
            d_output = cuda.mem_alloc(size)
            self.d_outputs.append(d_output)
            self.context.set_tensor_address(name, int(d_output))
    
    def _fix_shape(self, shape: tuple) -> tuple:
        """Replace dynamic dimensions (-1) with batch=1,
        e.g. (-1, 3, 224, 224) -> (1, 3, 224, 224)."""
        return tuple(1 if x == -1 else x for x in shape)
    
    def infer(self, input_data: np.ndarray) -> Union[np.ndarray, List[np.ndarray]]:
        """Run inference on a single input array."""
        # Prepare input data (cast to the engine dtype, ensure C-contiguous
        # so the raw device copy sees the expected memory layout)
        h_input = np.ascontiguousarray(input_data.astype(self.input_dtype))
        if h_input.shape != self.input_shape:
            raise ValueError(
                f"Expected input shape {self.input_shape}, got {h_input.shape}"
            )
        
        # Prepare host output buffers
        h_outputs = [
            np.empty(shape, dtype=dtype) 
            for shape, dtype in zip(self.output_shapes, self.output_dtypes)
        ]
        
        # Copy host->device, launch inference, and copy device->host,
        # all enqueued on the same CUDA stream
        cuda.memcpy_htod_async(self.d_input, h_input, self.stream)
        self.context.execute_async_v3(self.stream.handle)
        
        for h_output, d_output in zip(h_outputs, self.d_outputs):
            cuda.memcpy_dtoh_async(h_output, d_output, self.stream)
        
        self.stream.synchronize()
        
        return h_outputs if len(h_outputs) > 1 else h_outputs[0]
    
    def get_input_shape(self) -> tuple:
        """Return the (fixed) input shape."""
        return self.input_shape
    
    def get_input_dtype(self) -> np.dtype:
        """Return the input numpy dtype."""
        return self.input_dtype
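

# Usage sketch (illustrative, not part of the class). It assumes the
# InferenceEngine base class takes the engine path in its constructor and
# invokes _initialize(); "model.engine" is a placeholder path for a plan
# file serialized on this machine. Kept as comments because running it
# requires a GPU and a prebuilt engine:
#
#     engine = TensorRTEngine("model.engine")
#     dummy = np.zeros(engine.get_input_shape(), dtype=engine.get_input_dtype())
#     outputs = engine.infer(dummy)   # np.ndarray, or list for multi-output engines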