

"""
A reranker usually captures the latent semantic relationship between sentences better than an embedding model. However, unlike an embedding model, whose vectors can be precomputed once and compared cheaply, a reranker must score every sentence pair directly, which takes quadratic O(N^2) time over the whole dataset. Thus the most common use of rerankers in information retrieval or RAG is reranking the top-k answers retrieved by embedding similarity.

Evaluating a reranker follows a similar idea: we measure how much a reranker improves the ranking of candidates retrieved by the same embedder. In this tutorial, we will evaluate the performance of two rerankers on the BEIR benchmark, using bge-large-en-v1.5 as the base embedding model.

Note: We highly recommend running this notebook on a GPU, as the whole pipeline is very time-consuming. For simplicity, we only use a single BEIR task, FiQA.
"""
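The retrieve-then-rerank pattern described above can be sketched as follows. This is a minimal illustration with toy scoring functions: `toy_embed` and `toy_rerank_score` are made-up stand-ins for a real embedder (such as bge-large-en-v1.5) and a real cross-encoder reranker, chosen only to show that the expensive pairwise scoring is applied to k candidates rather than all N documents.

```python
import math
import re

corpus = [
    "The cat sat on the mat.",
    "Stocks rallied after the earnings report.",
    "How to file quarterly taxes for a small business.",
    "Dogs are loyal companions.",
]

def toy_embed(text):
    # Stand-in embedder: L2-normalized character-frequency vector over a-z.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(u, v):
    # Both vectors are already normalized, so the dot product is the cosine.
    return sum(a * b for a, b in zip(u, v))

def toy_rerank_score(query, doc):
    # Stand-in cross-encoder: token-overlap count between the pair.
    q_tokens = set(re.findall(r"[a-z]+", query.lower()))
    d_tokens = set(re.findall(r"[a-z]+", doc.lower()))
    return len(q_tokens & d_tokens)

def retrieve_then_rerank(query, corpus, k=2):
    q = toy_embed(query)
    # Stage 1: one embedding per document, then N cheap similarity comparisons.
    by_embedding = sorted(corpus, key=lambda d: cosine(q, toy_embed(d)), reverse=True)
    candidates = by_embedding[:k]
    # Stage 2: the reranker scores only the k (query, doc) pairs, not all N.
    return sorted(candidates, key=lambda d: toy_rerank_score(query, d), reverse=True)

print(retrieve_then_rerank("small business taxes", corpus, k=2))
```

In a real pipeline, stage 1 also benefits from an approximate nearest-neighbor index over precomputed embeddings, so the per-query cost stays far below a full reranker pass over the corpus.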
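The evaluation idea can also be sketched in a few lines: fix one candidate list retrieved by the embedder, then compare nDCG@k of the embedder's original order against the reranker's order over the same candidates. The relevance judgments and both orderings below are made up for illustration; a real run would use BEIR qrels and actual model scores.

```python
import math

def dcg_at_k(relevances, k):
    # Standard DCG: rel_i / log2(i + 2) for the top-k positions (i is 0-based).
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_doc_ids, qrels, k):
    gains = [qrels.get(doc_id, 0) for doc_id in ranked_doc_ids]
    ideal = sorted(qrels.values(), reverse=True)
    idcg = dcg_at_k(ideal, k)
    return dcg_at_k(gains, k) / idcg if idcg > 0 else 0.0

# Hypothetical relevance judgments for one query (doc_id -> graded relevance).
qrels = {"d1": 2, "d4": 1}

embedder_order = ["d3", "d1", "d2", "d4"]   # order from embedding similarity
reranker_order = ["d1", "d4", "d3", "d2"]   # same candidates, reranked

print(ndcg_at_k(embedder_order, qrels, 10))
print(ndcg_at_k(reranker_order, qrels, 10))
```

Because both rankings contain the same candidate set, any nDCG difference is attributable to the reranker, which is exactly the comparison this tutorial runs at scale on FiQA.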