'''
Usage:
    python3 metrics.py <labels> <pred_labels>

where <labels> is the directory of ground-truth label files and
<pred_labels> is the directory of predicted label files. Both directories
contain .txt files with matching names, one object per line in normalized
YOLO format: "class x_center y_center width height", with an optional
trailing confidence score on prediction lines.
'''

import os
import sys
import numpy as np


def convert_to_bbox(label):
    """Convert a YOLO-format row (class, x_center, y_center, w, h[, conf])
    to (class_id, x_min, y_min, x_max, y_max).

    A trailing confidence score, if present, is discarded.
    """
    if len(label) == 6:
        class_id, x_center, y_center, width, height, _ = label
    else:
        class_id, x_center, y_center, width, height = label
    x_min = x_center - width / 2
    y_min = y_center - height / 2
    x_max = x_center + width / 2
    y_max = y_center + height / 2
    return int(class_id), x_min, y_min, x_max, y_max


def iou(box1, box2):
    """Intersection over union of two boxes given as (x_min, y_min, x_max, y_max).

    >>> round(iou((0, 0, 2, 2), (1, 1, 3, 3)), 3)
    0.143
    """
    x_min1, y_min1, x_max1, y_max1 = box1
    x_min2, y_min2, x_max2, y_max2 = box2

    inter_x_min = max(x_min1, x_min2)
    inter_y_min = max(y_min1, y_min2)
    inter_x_max = min(x_max1, x_max2)
    inter_y_max = min(y_max1, y_max2)

    # No overlap at all.
    if inter_x_max < inter_x_min or inter_y_max < inter_y_min:
        return 0.0

    intersection = (inter_x_max - inter_x_min) * (inter_y_max - inter_y_min)
    area1 = (x_max1 - x_min1) * (y_max1 - y_min1)
    area2 = (x_max2 - x_min2) * (y_max2 - y_min2)

    union = area1 + area2 - intersection
    # Guard against degenerate (zero-area) boxes.
    return intersection / union if union > 0 else 0.0


def parse_labels(file_path):
    """Parse one label file into a list of (class_id, x_min, y_min, x_max, y_max)."""
    with open(file_path, 'r') as f:
        # Skip blank lines, which would otherwise crash the float conversion.
        labels = [list(map(float, line.split())) for line in f if line.strip()]
    return [convert_to_bbox(label) for label in labels]


if len(sys.argv) != 3:
    sys.exit("Usage: python3 metrics.py <labels> <pred_labels>")

labels_dir = sys.argv[1]
out_labels_dir = sys.argv[2]

iou_threshold = 0.5
num_classes = 8
# Row index = ground-truth class, column index = predicted class. The extra
# last row/column holds background: unmatched predictions (FP) land in row
# num_classes, missed ground truths (FN) in column num_classes.
confusion_matrix = np.zeros((num_classes + 1, num_classes + 1), dtype=int)


for label_file in os.listdir(labels_dir):
    if not label_file.endswith('.txt'):
        continue
    real_labels = parse_labels(os.path.join(labels_dir, label_file))
    pred_path = os.path.join(out_labels_dir, label_file)
    # A missing prediction file is treated as "no detections" for that image.
    pred_labels = parse_labels(pred_path) if os.path.exists(pred_path) else []

    matched = []  # ground-truth boxes already claimed by a prediction

    for pred_label in pred_labels:
        pred_class, *pred_box = pred_label
        best_iou = 0.0
        best_gt = None

        for gt_label in real_labels:
            # Each ground-truth box may be matched by at most one prediction;
            # duplicate detections of the same object count as FP, not TP.
            if gt_label in matched:
                continue
            gt_class, *gt_box = gt_label
            current_iou = iou(pred_box, gt_box)

            if current_iou >= iou_threshold and current_iou > best_iou:
                best_iou = current_iou
                best_gt = gt_label

        if best_gt is not None:
            gt_class, *_ = best_gt
            confusion_matrix[gt_class][pred_class] += 1  # TP (or class confusion)
            matched.append(best_gt)
        else:
            confusion_matrix[num_classes][pred_class] += 1  # FP: no matching ground truth

    for gt_label in real_labels:
        if gt_label not in matched:
            gt_class, *_ = gt_label
            confusion_matrix[gt_class][num_classes] += 1  # FN: ground truth missed

precision = np.zeros(num_classes)
recall = np.zeros(num_classes)

for i in range(num_classes):
    TP = confusion_matrix[i][i]
    FP = np.sum(confusion_matrix[:, i]) - TP
    FN = np.sum(confusion_matrix[i, :]) - TP
    
    if TP + FP > 0:
        precision[i] = TP / (TP + FP)
    else:
        precision[i] = 0.0

    if TP + FN > 0:
        recall[i] = TP / (TP + FN)
    else:
        recall[i] = 0.0


# Micro-averaged totals. Predictions live in columns 0..num_classes-1 and
# ground truths in rows 0..num_classes-1; the background row/column must be
# excluded so that FNs are not counted as predictions (and vice versa).
total_TP = np.trace(confusion_matrix[:num_classes, :num_classes])
total_FP = confusion_matrix[:, :num_classes].sum() - total_TP
total_FN = confusion_matrix[:num_classes, :].sum() - total_TP

overall_precision = total_TP / (total_TP + total_FP) if total_TP + total_FP > 0 else 0.0
overall_recall = total_TP / (total_TP + total_FN) if total_TP + total_FN > 0 else 0.0


print("Confusion Matrix:")
print(confusion_matrix)
print("\nPrecision for each class:")
for i in range(num_classes):
    print(f"Class {i}: {precision[i]:.3f}")

print("\nRecall for each class:")
for i in range(num_classes):
    print(f"Class {i}: {recall[i]:.3f}")

print(f"\nOverall Precision: {overall_precision:.3f}")
print(f"Overall Recall: {overall_recall:.3f}")
# Macro averages over classes with a nonzero score (classes that never appear
# are skipped); guard the empty case, which would otherwise produce NaN.
valid_p = precision[precision != 0]
valid_r = recall[recall != 0]
print(f"\nAverage Precision (nonzero classes): {valid_p.mean() if valid_p.size else 0.0:.3f}")
print(f"Average Recall (nonzero classes): {valid_r.mean() if valid_r.size else 0.0:.3f}")