import numpy as np
import cv2 as cv
import glob

"""
bool cv::findChessboardCorners( InputArray image,
                                Size patternSize,
                                OutputArray corners,
                                int flags = CALIB_CB_ADAPTIVE_THRESH+CALIB_CB_NORMALIZE_IMAGE 
)

Python:
retval, corners = cv.findChessboardCorners(image, patternSize[, corners[, flags]])

Finds the positions of internal corners of the chessboard.

Parameters
    image	Source chessboard view. It must be an 8-bit grayscale or color image.
    patternSize	Number of inner corners per chessboard row and column ( patternSize = cv::Size(points_per_row, points_per_column) = cv::Size(columns, rows) ).
    corners	Output array of detected corners.
    flags	Various operation flags that can be zero or a combination of the following values:
    CALIB_CB_ADAPTIVE_THRESH Use adaptive thresholding to convert the image to black and white, rather than a fixed threshold level (computed from the average image brightness).
    CALIB_CB_NORMALIZE_IMAGE Normalize the image gamma with equalizeHist before applying fixed or adaptive thresholding.
    CALIB_CB_FILTER_QUADS Use additional criteria (like contour area, perimeter, square-like shape) to filter out false quads extracted at the contour retrieval stage.
    CALIB_CB_FAST_CHECK Run a fast check on the image that looks for chessboard corners, and shortcut the call if none is found. This can drastically speed up the call in the degenerate condition when no chessboard is observed.

The function attempts to determine whether the input image is a view of the chessboard pattern and locate the internal chessboard corners.
The function returns a non-zero value if all of the corners are found and they are placed in a certain order (row by row, left to right in every row). 
Otherwise, if the function fails to find all the corners or reorder them, it returns 0. 
For example, a regular chessboard has 8 x 8 squares and 7 x 7 internal corners, that is, points where the black squares touch each other. 
The detected coordinates are approximate, and to determine their positions more accurately, the function calls cornerSubPix. 
You also may use the function cornerSubPix with different parameters if returned coordinates are not accurate enough.

"""

"""
void cv::cornerSubPix(	InputArray 	image,
                        InputOutputArray 	corners,
                        Size 	winSize,
                        Size 	zeroZone,
                        TermCriteria 	criteria 
)

Python:
        corners = cv.cornerSubPix(image, corners, winSize, zeroZone, criteria)

Parameters
    image	    Input single-channel, 8-bit or float image.
    corners	    Initial coordinates of the input corners and refined coordinates provided for output.
    winSize	    Half of the side length of the search window. For example, if winSize=Size(5,5) , then a (5∗2+1)×(5∗2+1)=11×11 search window is used.
    zeroZone	Half of the size of the dead region in the middle of the search zone over which the summation in the formula below is not done. It is used sometimes to avoid possible singularities of the autocorrelation matrix. The value of (-1,-1) indicates that there is no such a size.
    criteria	Criteria for termination of the iterative process of corner refinement. That is, the process of corner position refinement stops either after criteria.maxCount iterations or when the corner position moves by less than criteria.epsilon on some iteration.

"""

"""
double cv::calibrateCamera	(	InputArrayOfArrays 	objectPoints,
                                InputArrayOfArrays 	imagePoints,
                                Size 	imageSize,
                                InputOutputArray 	cameraMatrix,
                                InputOutputArray 	distCoeffs,
                                OutputArrayOfArrays 	rvecs,
                                OutputArrayOfArrays 	tvecs,
                                OutputArray 	stdDeviationsIntrinsics,
                                OutputArray 	stdDeviationsExtrinsics,
                                OutputArray 	perViewErrors,
                                int 	flags = 0,
                                TermCriteria criteria = TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, DBL_EPSILON)
)

Python:
    retval, cameraMatrix, distCoeffs, rvecs, tvecs = cv.calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs[, rvecs[, tvecs[, flags[, criteria]]]])

Parameters
            objectPoints	In the new interface it is a vector of vectors of calibration pattern points in the calibration pattern coordinate space (e.g. std::vector<std::vector<cv::Vec3f>>). The outer vector contains as many elements as the number of the pattern views. If the same calibration pattern is shown in each view and it is fully visible, all the vectors will be the same. Although, it is possible to use partially occluded patterns, or even different patterns in different views. Then, the vectors will be different. The points are 3D, but since they are in a pattern coordinate system, then, if the rig is planar, it may make sense to put the model to a XY coordinate plane so that Z-coordinate of each input object point is 0. In the old interface all the vectors of object points from different views are concatenated together.
            imagePoints	In the new interface it is a vector of vectors of the projections of calibration pattern points (e.g. std::vector<std::vector<cv::Vec2f>>). imagePoints.size() must be equal to objectPoints.size(), and imagePoints[i].size() must be equal to objectPoints[i].size() for each i. In the old interface all the vectors of image points from different views are concatenated together.
            imageSize	Size of the image used only to initialize the intrinsic camera matrix.
            cameraMatrix	Output 3x3 floating-point camera matrix A = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]. If CALIB_USE_INTRINSIC_GUESS and/or CALIB_FIX_ASPECT_RATIO are specified, some or all of fx, fy, cx, cy must be initialized before calling the function.
            distCoeffs	Output vector of distortion coefficients (k1,k2,p1,p2[,k3[,k4,k5,k6[,s1,s2,s3,s4[,τx,τy]]]]) of 4, 5, 8, 12 or 14 elements.
            rvecs	Output vector of rotation vectors (see Rodrigues) estimated for each pattern view (e.g. std::vector<cv::Mat>). That is, each k-th rotation vector together with the corresponding k-th translation vector (see the next output parameter description) brings the calibration pattern from the model coordinate space (in which object points are specified) to the world coordinate space, that is, the real position of the calibration pattern in the k-th pattern view (k = 0..M-1).
            tvecs	Output vector of translation vectors estimated for each pattern view.
            stdDeviationsIntrinsics	Output vector of standard deviations estimated for intrinsic parameters. Order of deviation values: (fx,fy,cx,cy,k1,k2,p1,p2,k3,k4,k5,k6,s1,s2,s3,s4,τx,τy). If a parameter is not estimated, its deviation is equal to zero.
            stdDeviationsExtrinsics	Output vector of standard deviations estimated for extrinsic parameters. Order of deviations values: (R1,T1,…,RM,TM) where M is number of pattern views, Ri,Ti are concatenated 1x3 vectors.
            perViewErrors	Output vector of the RMS re-projection error estimated for each pattern view.
            flags	Different flags that may be zero or a combination of the following values:
                    CALIB_USE_INTRINSIC_GUESS cameraMatrix contains valid initial values of fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image center ( imageSize is used), and focal distances are computed in a least-squares fashion. Note, that if intrinsic parameters are known, there is no need to use this function just to estimate extrinsic parameters. Use solvePnP instead.
                    CALIB_FIX_PRINCIPAL_POINT The principal point is not changed during the global optimization. It stays at the center or at a different location specified when CALIB_USE_INTRINSIC_GUESS is set too.
                    CALIB_FIX_ASPECT_RATIO The function considers only fy as a free parameter. The ratio fx/fy stays the same as in the input cameraMatrix. When CALIB_USE_INTRINSIC_GUESS is not set, the actual input values of fx and fy are ignored; only their ratio is computed and used further.
                    CALIB_ZERO_TANGENT_DIST Tangential distortion coefficients (p1,p2) are set to zeros and stay zero.
                    CALIB_FIX_K1,...,CALIB_FIX_K6 The corresponding radial distortion coefficient is not changed during the optimization. If CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
                    CALIB_RATIONAL_MODEL Coefficients k4, k5, and k6 are enabled. To provide the backward compatibility, this extra flag should be explicitly specified to make the calibration function use the rational model and return 8 coefficients. If the flag is not set, the function computes and returns only 5 distortion coefficients.
                    CALIB_THIN_PRISM_MODEL Coefficients s1, s2, s3 and s4 are enabled. To provide the backward compatibility, this extra flag should be explicitly specified to make the calibration function use the thin prism model and return 12 coefficients. If the flag is not set, the function computes and returns only 5 distortion coefficients.
                    CALIB_FIX_S1_S2_S3_S4 The thin prism distortion coefficients are not changed during the optimization. If CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
                    CALIB_TILTED_MODEL Coefficients tauX and tauY are enabled. To provide the backward compatibility, this extra flag should be explicitly specified to make the calibration function use the tilted sensor model and return 14 coefficients. If the flag is not set, the function computes and returns only 5 distortion coefficients.
                    CALIB_FIX_TAUX_TAUY The coefficients of the tilted sensor model are not changed during the optimization. If CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
            criteria	Termination criteria for the iterative optimization algorithm.

"""

# Termination criteria for iterative refinement:
#   30    = maximum number of iterations
#   0.001 = minimum accuracy (epsilon) below which iteration stops
criteria = (cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER, 30, 0.001)
# cv.TERM_CRITERIA_EPS      - stop when the specified accuracy (epsilon) is reached.
# cv.TERM_CRITERIA_MAX_ITER - stop after the specified number of iterations (max_iter).
# Their sum                 - stop as soon as either condition is met.

# Build the object points: a 24 x 13 grid of corner coordinates flattened into
# an (n, 3) array. The x, y coordinates of each point go into the first two
# columns of objp; z stays 0 because the calibration pattern is planar.
# These are the pattern's positions in its own 3-D coordinate system.
objp = np.zeros((24 * 13, 3), np.float32)
# np.mgrid yields the grid coordinates, reshaped to (24 * 13, 2) points
objp[:, :2] = np.mgrid[0:24, 0:13].T.reshape(-1, 2)
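# Sanity check (illustrative) of the same construction on a tiny 3 x 2 pattern:
_demo = np.zeros((3 * 2, 3), np.float32)
_demo[:, :2] = np.mgrid[0:3, 0:2].T.reshape(-1, 2)
# _demo[:, :2] is [[0, 0], [1, 0], [2, 0], [0, 1], [1, 1], [2, 1]]:
# x varies fastest, row by row, with z left at 0 in the third column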
# Arrays to store the object points and image points from all the images:
# objpoints holds the 3-D points in real-world (pattern) coordinates,
# imgpoints holds the corresponding 2-D points detected in the images.
objpoints = []  # 3d points in real world space
imgpoints = []  # 2d points in image plane

# Collect the paths of all matching calibration images in the folder
images = glob.glob('data/cali/*.jpg')

# Iterate over the images, reading each one
for fname in images:
    img = cv.imread(fname)
    gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
    # Find the chessboard corners (24 x 13 inner corners)
    ret, corners = cv.findChessboardCorners(gray, (24, 13), None)
    print(ret)
    # If found, add object points and image points (after refining them)
    if ret:
        objpoints.append(objp)
        # Refine the detected corners to sub-pixel accuracy: around each
        # corner, a least-squares fit is performed inside the search window
        # on the grayscale image; use the refined coordinates it returns
        corners = cv.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        imgpoints.append(corners)
        # Draw the detected corners on the original image and display it
        cv.drawChessboardCorners(img, (24, 13), corners, ret)
        cv.imshow('img', img)
        cv.waitKey(0)

cv.destroyAllWindows()

# Calibrate the camera: estimate the intrinsic and extrinsic parameters.
# Basic idea: photograph the pattern at different angles, positions and
# orientations, pair the known 3-D pattern coordinates with the detected
# 2-D image coordinates, project the 3-D points through the camera model,
# and solve for the parameters by least squares.
# Returns:
#   ret   - overall RMS re-projection error
#   mtx   - the 3x3 floating-point intrinsic camera matrix, holding the
#           focal lengths and the principal point coordinates
#   dist  - the distortion coefficients, a 1x5, 1x8 or 1x14 float array of
#           radial and tangential distortion parameters
#   rvecs - one rotation vector per view (a length-3 array each), rotating
#           the object coordinate system into the camera coordinate system
#   tvecs - one translation vector per view (a length-3 array each),
#           translating the object coordinate system into the camera
#           coordinate system
ret, mtx, dist, rvecs, tvecs = cv.calibrateCamera(objpoints, imgpoints,
                                                  gray.shape[::-1], None, None)

mean_error = 0
for i in range(len(objpoints)):
    # Project the 3-D object points into the image plane using the estimated
    # intrinsic and extrinsic parameters
    imgpoints2, _ = cv.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    # The L2 norm of the difference, divided by the number of points, gives
    # the re-projection error for this view
    error = cv.norm(imgpoints[i], imgpoints2, cv.NORM_L2) / len(imgpoints2)
    mean_error += error
print("total error: {}".format(mean_error / len(objpoints)))

# Save the calibration results to an npz file for later camera pose estimation
np.savez('B.npz', mtx=mtx, dist=dist, rvecs=rvecs, tvecs=tvecs)
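# Illustrative: how a later script can reload the saved arrays. Shown on an
# in-memory buffer with a stand-in identity matrix so the sketch is
# self-contained; reading the real file is just np.load('B.npz').
import io
_buf = io.BytesIO()
np.savez(_buf, mtx=np.eye(3))
_buf.seek(0)
with np.load(_buf) as _data:
    _mtx_loaded = _data['mtx']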