from repo.plate.models.LPRNet import LPRNet, CHARS  # import via the full path to avoid module-lookup errors when multiple python functions run in the same environment
import torch
import numpy as np

class lprnet:
    def __init__(self):
        self.cnt = 0

    def start(self):
        # Create CUDA streams
        self.stream_for_data_trans = torch.cuda.Stream()
        self.stream_for_load_model = torch.cuda.Stream()
        self.stream_for_inference = torch.cuda.Stream()
        # Reserve shared-cache memory regions for each stream
        # (insert_shared_cache is a custom extension of this patched PyTorch build)
        with torch.cuda.stream(self.stream_for_load_model):
            torch.cuda.insert_shared_cache((2*1024+768)*1024*1024, 256*1024*1024)
        with torch.cuda.stream(self.stream_for_inference):
            torch.cuda.insert_shared_cache(3*1024*1024*1024, 6*1024*1024*1024)
        self.model = LPRNet(lpr_max_len=8, phase=False, class_num=len(CHARS), dropout_rate=0)
        with torch.cuda.stream(self.stream_for_load_model):
            self.model.load_state_dict(torch.load('/home/lx/SmartPipe/src/core/functions/Model/LPRnet/lprnet/weights/Final_LPRNet_model.pth', map_location=torch.device('cuda')))
            self.model.cuda().eval()
            torch.cuda.synchronize()
        print("load lprnet.")
        with torch.cuda.stream(self.stream_for_inference):
            input_tensor = torch.rand([8, 3, 24, 94], device='cuda:0')
            res = self.model(input_tensor)
        print("warm GPU.")

    # [Mat,Mat,...] -> [Tensor,Tensor,...]
    def preprocess(self, data):
        # Normalize uint8 images from [0, 255] to roughly [-1, 1):
        # subtract the midpoint 127.5, then scale by 0.0078125 (= 1/128)
        imgs = [x.astype('float32') for x in data]
        imgs = [x - 127.5 for x in imgs]
        imgs = [x * 0.0078125 for x in imgs]
        # HWC -> CHW
        imgs = [np.transpose(x, (2, 0, 1)) for x in imgs]
        self.cnt += 1
        return imgs

    # 0: [Tensor,Tensor,...] -> [Tensor,Tensor,...]
    # 1: [Batch_Tensor,Batch_Tensor,...] -> [Batch_Tensor,Batch_Tensor,...]
    # 2: [Gpu_Tensor,Gpu_Tensor,...] -> [Gpu_Tensor,Gpu_Tensor,...]
    # 3: [Batch_Gpu_Tensor,Batch_Gpu_Tensor,...] -> [Batch_Gpu_Tensor,Batch_Gpu_Tensor,...]
    # Tensor: 3-D NumPy array -> 3-D NumPy array
    # Batch_Tensor: 4-D NumPy array -> 4-D NumPy array
    # Gpu_Tensor: tuple, (C, H, W, src_pos, src_size) -> (C, H, W, pos, block_id)
    # Batch_Gpu_Tensor: tuple, (B, C, H, W, src_pos, src_size) -> (B, C, H, W, pos, block_id)
    # DONE: support all four generic formats above
    def inference(self, data):
        assert len(data) > 0
        item = data[0]
        if isinstance(item, tuple):  # 2, 3
            if len(item) == 5:  # 2
                res = self.inference_02(data)
            elif len(item) == 6:  # 3
                res = self.inference_03(data)
            else:
                assert False, "unsupported tuple length"
        else:  # 0, 1
            if len(item.shape) == 3:  # 0
                res = self.inference_00(data)
            elif len(item.shape) == 4:  # 1
                res = self.inference_01(data)
            else:
                assert False, "unsupported array rank"
        self.cnt += 1
        return res

    # [Tensor,Tensor,...] -> [Tensor,Tensor,...]
    def inference_00(self, data):
        res = []
        # Stack the per-image arrays into one batch tensor
        input_tensor = torch.from_numpy(np.array(data))
        # GPU operations
        with torch.cuda.stream(self.stream_for_inference):
            # Transfer to GPU
            input_tensor = input_tensor.cuda()
            # Run inference
            with torch.no_grad():
                self.preds = self.model(input_tensor)
                # Copy back to CPU
                self.prebs = self.preds.cpu().detach().numpy()
        # Split the batch back into per-image results
        for i in range(self.prebs.shape[0]):
            res.append(np.expand_dims(self.prebs[i], 0))
        return res

    # 1: [Batch_Tensor,Batch_Tensor,...] -> [Batch_Tensor,Batch_Tensor,...]
    def inference_01(self, data):
        res = []
        for i in data:
            # Convert to tensor
            input_tensor = torch.from_numpy(i)
            # Run inference
            with torch.cuda.stream(self.stream_for_inference):
                # Transfer to GPU
                input_tensor = input_tensor.cuda()
                with torch.no_grad():
                    self.preds = self.model(input_tensor)
                    # Copy back to CPU
                    self.prebs = self.preds.cpu().detach().numpy()
                    # Append to res
                    res.append(self.prebs)
        return res

    # 2: [Gpu_Tensor,Gpu_Tensor,...] -> [Gpu_Tensor,Gpu_Tensor,...] TODO: lprnet produces wrong results here; likely the individual tensors are too small after switching to the shared memory pool, and increasing the batch size should avoid the problem.
    def inference_02(self, data):
        res = []
        # Fetch each input from the shared cache
        for i in data:
            assert len(i) == 5
            c, h, w, pos, size = i
            # Map the input tensor out of the shared cache
            with torch.cuda.stream(self.stream_for_data_trans):
                torch.cuda.insert_shared_cache(pos, size)
                input_tensor = torch.empty([c, h, w], device='cuda:0')
                torch.cuda.synchronize()
            # Copy into compute space and run inference
            with torch.cuda.stream(self.stream_for_inference):
                with torch.no_grad():
                    preds = self.model(input_tensor.unsqueeze(0))
                    preds = preds.unsqueeze(3).squeeze(0)
                torch.cuda.synchronize()
            # Copy the result to the output location
            with torch.cuda.stream(self.stream_for_data_trans):
                del input_tensor
                item = torch.empty(preds.shape, device = 'cuda:0')
                item.copy_(preds)
                torch.cuda.synchronize()
                res.append((item.shape[0], item.shape[1], item.shape[2], pos, 0))
                del item
                torch.cuda.clear_shared_cache()
        return res

    # 3: [Batch_Gpu_Tensor,Batch_Gpu_Tensor,...] -> [Batch_Gpu_Tensor,Batch_Gpu_Tensor,...]
    def inference_03(self, data):
        res = []
        # Fetch each input from the shared cache
        for i in data:
            assert len(i) == 6
            b, c, h, w, pos, size = i
            # Map the input tensor out of the shared cache
            with torch.cuda.stream(self.stream_for_data_trans):
                torch.cuda.insert_shared_cache(pos, size)
                self.input_tensor = torch.empty([b, c, h, w], device='cuda:0')
                torch.cuda.synchronize()
            # Copy into compute space and run inference
            with torch.cuda.stream(self.stream_for_inference):
                with torch.no_grad():
                    self.preds = self.model(self.input_tensor)
                    self.preds = self.preds.unsqueeze(3)
                    torch.cuda.synchronize()
            # Copy the result to the output location
            with torch.cuda.stream(self.stream_for_data_trans):
                del self.input_tensor
                item = torch.empty(self.preds.shape, device = 'cuda:0')
                item.copy_(self.preds)
                torch.cuda.synchronize()
                res.append((item.shape[0], item.shape[1], item.shape[2], item.shape[3], pos, 0))
                del item
                torch.cuda.clear_shared_cache()
            with torch.cuda.stream(self.stream_for_inference):
                del self.preds
        return res
    
    # [Tensor,Tensor,...] -> [[String,String,...],[String,String,...],...]
    def postprocess(self, data):
        # Post-process
        res = []
        for index in range(len(data)):
            if len(data[index].shape) == 3 and data[index].shape[2] == 1:
                data[index] = np.squeeze(data[index], 2)
            if len(data[index].shape) == 2:
                data[index] = data[index][np.newaxis,:]
            preb_labels = list()
            for w in range(data[index].shape[0]):
                preb = data[index][w, :, :]
                preb_label = list()
                for j in range(preb.shape[1]):
                    preb_label.append(np.argmax(preb[:, j], axis=0))
                no_repeat_blank_label = list()
                pre_c = preb_label[0]
                if pre_c != len(CHARS) - 1:
                    no_repeat_blank_label.append(pre_c)

                for c in preb_label:  # drop repeated labels and blank labels
                    if (pre_c == c) or (c == len(CHARS) - 1):
                        if c == len(CHARS) - 1:
                            pre_c = c
                        continue
                    no_repeat_blank_label.append(c)
                    pre_c = c
                preb_labels.append(no_repeat_blank_label)
            plat_num = np.array(preb_labels)
            # Convert label indices to characters
            plates = []
            for i in plat_num:
                car_num_str = ""
                for j in i:
                    car_num_str += CHARS[int(j)]
                if len(car_num_str) < 20:
                    plates.append(car_num_str)
            res.append(plates)
        self.cnt += 1
        return res  # [[String,...],...], one list of plate strings per input

    def finish(self):
        with torch.cuda.stream(self.stream_for_data_trans):
            torch.cuda.clear_shared_cache()
        with torch.cuda.stream(self.stream_for_load_model):
            torch.cuda.clear_shared_cache()
        with torch.cuda.stream(self.stream_for_inference):
            torch.cuda.clear_shared_cache()
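

# The postprocess method above performs greedy CTC decoding: take the argmax
# class at each time step, collapse consecutive repeats, and drop the blank
# symbol (index len(CHARS) - 1). Below is a minimal standalone sketch of the
# same decoding rule, kept generic over the alphabet; the function name and
# blank parameter are illustrative and not used by the pipeline itself.
def ctc_greedy_decode(logits, blank):
    # logits: (num_classes, seq_len) score matrix; blank: index of the CTC blank
    labels = np.argmax(logits, axis=0)
    decoded = []
    prev = blank
    for c in labels:
        # Emit a symbol only when it is not blank and differs from its predecessor
        if c != blank and c != prev:
            decoded.append(int(c))
        prev = c
    return decoded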
 