import faiss
import numpy as np

np.random.seed(768)

data = np.random.random((1000, 128))

"""
1. Scalar Quantizer

Vector embeddings are usually stored as 32-bit floats. Scalar quantization transforms each 32-bit float value into, for example, an 8-bit integer, giving a 4x reduction in size. It can be seen as distributing the values of each dimension into 256 buckets.
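As an illustration, the bucketing idea can be sketched in plain NumPy. This is a simplified uniform quantizer on synthetic data, not Faiss's exact training procedure:

```python
import numpy as np

rng = np.random.default_rng(768)
vecs = rng.random((1000, 128)).astype("float32")

# "Train": learn the value range of each dimension.
vmin = vecs.min(axis=0)
vmax = vecs.max(axis=0)
scale = (vmax - vmin) / 255.0  # 256 buckets per dimension

# Encode: map each float32 value to a uint8 bucket index.
codes = np.round((vecs - vmin) / scale).astype(np.uint8)

# Decode: approximate reconstruction from the bucket indices.
recon = codes.astype(np.float32) * scale + vmin

print(vecs.nbytes // codes.nbytes)            # 4 (float32 -> uint8)
print(float(np.abs(recon - vecs).max()))      # error bounded by ~scale/2
```

The maximum reconstruction error per value is about half a bucket width, which is the price paid for the 4x compression.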

ScalarQuantizer
    Quantizer class
    Parameters:
        d: dimension of vectors
        qtype: quantizer type, e.g. QT_8bit maps each dimension into 2^8 = 256 buckets

IndexScalarQuantizer
    Flat index class
    Parameters:
        d: dimension of vectors
        qtype: quantizer type, e.g. QT_8bit maps each dimension into 2^8 = 256 buckets
        metric: similarity metric (L2 or IP)

IndexIVFScalarQuantizer
    IVF index class
    Parameters:
        quantizer: coarse quantizer that assigns vectors to inverted lists
        d: dimension of vectors
        nlist: number of cells/clusters to partition the inverted file space
        qtype: quantizer type, e.g. QT_8bit maps each dimension into 2^8 = 256 buckets
        metric: similarity metric (L2 or IP)

Quantizer class objects are used to compress the data before adding it to an index. Flat index class objects and IVF index class objects can be used directly as an index; quantization is done automatically.

"""
def test_scalar_quantizer():
    # Scalar Quantizer
    d = 128
    qtype = faiss.ScalarQuantizer.QT_8bit

    quantizer = faiss.ScalarQuantizer(d, qtype)

    quantizer.train(data)
    new_data = quantizer.compute_codes(data)

    print(new_data[0])

    # Scalar Quantizer Index
    d = 128
    k = 3
    qtype = faiss.ScalarQuantizer.QT_8bit
    # nlist = 5
    # coarse_quantizer = faiss.IndexFlat(d, faiss.METRIC_L2)

    index = faiss.IndexScalarQuantizer(d, qtype, faiss.METRIC_L2)
    # index = faiss.IndexIVFScalarQuantizer(coarse_quantizer, d, nlist, qtype, faiss.METRIC_L2)

    index.train(data)
    index.add(data)
    D, I = index.search(data[:1], k)

    print(f"closest elements: {I}")
    print(f"distance: {D}")

"""
2. Product Quantizer
When speed and memory are crucial factors in search, the product quantizer becomes a top choice. It is one of the most effective quantizers for reducing memory size.

The first step of PQ is dividing the original vectors of dimension d into m smaller, low-dimensional sub-vectors of dimension d/m, where m is the number of sub-vectors.

Then a clustering algorithm is used to create a codebook with a fixed number of centroids for each sub-space.

Next, each sub-vector of a vector is replaced by the index of the closest centroid in its corresponding codebook. Each vector is then stored as a short list of indices instead of the full float vector.

When computing the distance between a query vector and the database vectors, only the distances from the query's sub-vectors to the centroids in the codebooks are calculated, which enables fast approximate nearest-neighbor search.

ProductQuantizer
    Quantizer class
    Parameters:
        d: dimension of vectors
        M: number of sub-vectors; d % M must be 0
        nbits: number of bits per sub-quantizer, so each codebook contains 2^nbits centroids

IndexPQ
    Flat index class
    Parameters:
        d: dimension of vectors
        M: number of sub-vectors; d % M must be 0
        nbits: number of bits per sub-quantizer, so each codebook contains 2^nbits centroids
        metric: similarity metric (L2 or IP)

IndexIVFPQ
    IVF index class
    Parameters:
        quantizer: coarse quantizer that assigns vectors to inverted lists
        d: dimension of vectors
        nlist: number of cells/clusters to partition the inverted file space
        M: number of sub-vectors; d % M must be 0
        nbits: number of bits per sub-quantizer, so each codebook contains 2^nbits centroids
        metric: similarity metric (L2 or IP)
"""
def test_product_quantizer():
    # Product Quantizer
    d = 128
    M = 8
    nbits = 4
    k = 3

    quantizer = faiss.ProductQuantizer(d, M, nbits)

    quantizer.train(data)
    new_data = quantizer.compute_codes(data)

    print(new_data.max())
    print(new_data[:2])

    # Product Quantizer Index
    index = faiss.IndexPQ(d, M, nbits, faiss.METRIC_L2)

    index.train(data)
    index.add(data)
    D, I = index.search(data[:1], k)

    print(f"closest elements: {I}")
    print(f"distance: {D}")

    # Product Quantizer IVF Index
    nlist = 5

    quantizer = faiss.IndexFlat(d, faiss.METRIC_L2)
    index = faiss.IndexIVFPQ(quantizer, d, nlist, M, nbits, faiss.METRIC_L2)

    index.train(data)
    index.add(data)
    D, I = index.search(data[:1], k)

    print(f"closest elements: {I}")
    print(f"distance: {D}")

if __name__ == '__main__':
    test_scalar_quantizer()

    test_product_quantizer()
