import faiss
import numpy as np

np.random.seed(768)

data = np.random.random((1000, 128)).astype('float32')  # faiss expects float32 vectors

if __name__ == '__main__':
    
    d = 128  # dimension of the vector
    k = 3    # number of nearest neighbors to search

    # 1. IndexFlat*
    """
    Flat index is the very fundamental index structure. It does not do any preprocess for the incoming vectors. All the vectors are stored directly without compression or quantization. Thus no training is need for flat indexes.

    When searching, Flat index will decode all the vectors sequentially and compute the similarity score to the query vectors. Thus, Flat Index guarantees the global optimum of results.

    Flat index family is small: just IndexFlatL2 and IndexFlatIP, which are just different by the similarity metrics of Euclidean distance and inner product.

    Flat index 是非常基本的索引结构。它不会对传入的向量进行任何预处理。所有向量都直接存储，无需压缩或量化。因此，不需要对 flat indexs 进行训练。

    搜索时，Flat index 将按顺序解码所有向量，并计算与查询向量的相似度分数。因此，Flat Index 保证了全局最佳结果。

    Flat index 系列很少：只有 IndexFlatL2 和 IndexFlatIP，它们只是在欧几里得距离和内积的相似性指标上有所不同。

    Flat Indexes guarantee the perfect quality but with terrible speed. It works well on small datasets or the cases that speed is not a crucial factor
    Flat Indexes 保证完美的质量，但速度很慢。它适用于小型数据集或速度不是关键因素的情况
    """
    # simply create the index and add all the data
    index = faiss.IndexFlatL2(d)
    index.add(data)

    # search for the k nearest neighbors of the first element in data
    D, I = index.search(data[:1], k)

    print(f"closest elements: {I}")
    print(f"distance: {D}")

    # 2. IndexIVF*  
    """
    # Intro
    Inverted File Flat (IVF) Index is a widely accepted technique to speed up searching by using k-means or Voronoi diagram to create a number of cells (or say, clusters) in the whole space. Then when given a query, an amount of closest cells will be searched. After that, k closest elements to the query will be searched in those cells.

    - quantizer is another index/quantizer to assign vectors to inverted lists.
    - nlist is the number of cells the space to be partitioned.
    - nprob is the nuber of closest cells to visit for searching in query time.

    倒排文件 Flat Index (IVF) 是一种被广泛接受的技术，它使用 K 均值或 Voronoi 图在整个空间中创建大量单元（或称为簇），从而加快搜索速度。然后，当给定一个查询时，将搜索一定数量的最近单元格。之后，将在这些单元格中搜索与查询最近的 k 个元素.

    - quantizer 是另一个索引/量化器，用于将向量分配给倒排列表。
    - nlist 是要分区的空间的单元格数。
    - nprob 是查询时要访问的最近单元格的数量，用于搜索。
    

    # Tradeoff
    Increasing nlist will shrink the size of each cell, which speed up the search process. But the smaller coverage will sacrifice accuracy and increase the possibility of the edge/surface problem discribed above.
    增加 nlist 会缩小每个单元的大小，从而加快搜索过程。但较小的覆盖范围将牺牲准确性，并增加上述边缘/表面问题的可能性。

    Increasing nprob will have a greater scope, preferring search quality by the tradeoff of slower speed.
    增加 nprob 将会有更大的范围，通过速度较慢的折衷来更合适搜索质量。

    # Shortage
    There could be a problem when the query vector lands on the edge/surface of the cell. It is possible that the closest element falls into the neighbor cell, which may not be considered due to nprob is not large enough.
    当查询向量到达单元的边缘/表面时，可能会出现问题。最近的元素可能落入邻近单元，而由于nprob不够大，这可能不会被考虑。
    """

    nlist = 5
    nprobe = 2

    # the quantizer defines how to store and compare the vectors
    quantizer = faiss.IndexFlatL2(d)
    index = faiss.IndexIVFFlat(quantizer, d, nlist)

    # note: unlike a flat index, an IVF index must first be trained to create the cells
    index.train(data)
    index.add(data)
    # set nprobe before searching
    index.nprobe = nprobe
    D, I = index.search(data[:1], k)

    print(f"closest elements: {I}")
    print(f"distance: {D}")

    # 3. IndexHNSW*
    """
    Intro
    Hierarchical Navigable Small World (HNSW) indexing is a graph based method, which is an extension of navigable small world (NSW). It builds a multi-layered graph where nodes (vectors) are connected based on their proximity, forming "small-world" structures that allow efficient navigation through the space.
    层次可导航小世界（HNSW）索引是一种基于图的方法，它是可导航小世界（NSW）的扩展。它构建了一个多层次的图，其中节点（向量）根据它们的邻近性连接，形成“近小世界”结构，从而允许在空间中有效导航。

    - M is the number of neighbors each vector has in the graph.  M是每个向量在图中拥有的邻居数量。
    - efConstruction is the number of entry points to explore when building the index.  efConstruction 是在构建索引时可探索的入口点数量。
    - efSearch is the number of entry points to explore when searching. efSearch 是在搜索时可探索的入口点数量。
    
    Tradeoff
    Increasing M or efSearch will make greater fidelity with reasonable longer time. Larger efConstruction mainly increases the index construction time.
    增加M或efSearch将使结果更准确，但需要更长的时间。更大的efConstruction主要增加索引构建时间。

    HNSW has great searching quality and speed. But it is memory-consuming due to the graph structure. Scaling up M will cause a linear increase of memory usage.
    HNSW 具有出色的搜索质量和速度。但由于其图结构，它消耗内存。增加 M 的规模将导致内存使用量线性增加。

    Note that HNSW index does not support vector's removal because removing nodes will distroy graph structure.
    请注意，HNSW 索引不支持移除向量，因为移除节点会破坏图结构。

    Thus HNSW is a great index to choose when RAM is not a limiting factor.
    因此，当RAM不是限制因素时，HNSW是一个很好的索引选择。
    """
    M = 32
    ef_search = 16
    ef_construction = 32

    index = faiss.IndexHNSWFlat(d, M)
    # set the two parameters before adding data
    index.hnsw.efConstruction = ef_construction
    index.hnsw.efSearch = ef_search

    index.add(data)
    D, I = index.search(data[:1], k)

    print(f"closest elements: {I}")
    print(f"distance: {D}")

    # 4. IndexLSH
    """
    Intro
    Locality Sensitive Hashing (LSH) is an ANN method that hashing data points into buckets. While well known use cases of hash function such as dictionary/hashtabel are trying to avoid hashing collisions, LSH trys to maximize hashing collisions. Similar vectors will be grouped into same hash bucket.
    局部敏感哈希（LSH）是一种近似最近邻（ANN）方法，它将数据点哈希到桶中。虽然哈希函数的知名用例，如字典/哈希表，是试图避免哈希冲突，LSH 则试图最大化哈希冲突。相似的向量将被分组到同一个哈希桶中。

    In Faiss, IndexLSH is a Flat index with binary codes. Vectors are hashed into binary codes and compared by Hamming distances.
    在Faiss中，IndexLSH是一个具有二进制编码的平面索引。向量被哈希为二进制编码，并通过Hamming距离进行比较。

    - nbits can be seen as the "resolution" of hashed vectors. nbits 可以被视为哈希向量的“分辨率”。
    Tradeoff
    Increasing nbits can get higher fidelity with the cost of more memory and longer searching time.
    增加nbits可以获得更高的保真度，但代价是需要更多的内存和更长的搜索时间。

    LSH suffers the curse of dimensionality when using a larger d. In order to get similar search quality, the nbits value needs to be scaled up to maintain the search quality.
    在使用更大的d时，LSH遭受了维度诅咒。为了获得类似的搜索质量，必须将nbits值进行调整以维持搜索质量。

    Shortage
    LSH speeds up searching time with a reasonable sacrifice of quality. But that only applies to small dimension d. Even 128 is already too large for LSH. Thus for vectors generated by transformer based embedding models, LSH index is not a common choice.
    LSH通过合理牺牲质量来加快搜索时间。但这仅适用于小维度d。即使128对于LSH来说也已经太大。因此，对于通过基于变压器的嵌入模型生成的向量，LSH索引并不是一个常见的选择。
    """
    nbits = d * 8

    index = faiss.IndexLSH(d, nbits)
    index.train(data)
    index.add(data)

    D, I = index.search(data[:1], k)

    print(f"closest elements: {I}")
    print(f"distance: {D}")

