---
configs:
  - config_name: hotpotqa-corpus
    data_files:
      - split: train
        path: hotpotqa/corpus/*
  - config_name: hotpotqa-queries
    data_files:
      - split: train
        path: hotpotqa/queries/train.parquet
      - split: dev
        path: hotpotqa/queries/dev.parquet
      - split: test
        path: hotpotqa/queries/test.parquet
  - config_name: hotpotqa-qrels
    data_files:
      - split: train
        path: hotpotqa/qrels/train.parquet
      - split: dev
        path: hotpotqa/qrels/dev.parquet
      - split: test
        path: hotpotqa/qrels/test.parquet
  - config_name: msmarco-corpus
    data_files:
      - split: train
        path: msmarco/corpus/*
  - config_name: msmarco-queries
    data_files:
      - split: train
        path: msmarco/queries/train.parquet
      - split: dev
        path: msmarco/queries/dev.parquet
  - config_name: msmarco-qrels
    data_files:
      - split: train
        path: msmarco/qrels/train.parquet
      - split: dev
        path: msmarco/qrels/dev.parquet
  - config_name: nfcorpus-corpus
    data_files:
      - split: train
        path: nfcorpus/corpus/*
  - config_name: nfcorpus-queries
    data_files:
      - split: train
        path: nfcorpus/queries/train.parquet
      - split: dev
        path: nfcorpus/queries/dev.parquet
      - split: test
        path: nfcorpus/queries/test.parquet
  - config_name: nfcorpus-qrels
    data_files:
      - split: train
        path: nfcorpus/qrels/train.parquet
      - split: dev
        path: nfcorpus/qrels/dev.parquet
      - split: test
        path: nfcorpus/qrels/test.parquet
---

BEIR embeddings with Cohere embed-english-v3.0 model

This dataset contains all query and document embeddings for BEIR, embedded with the Cohere embed-english-v3.0 embedding model.

Overview of datasets

This repository hosts all 18 datasets from BEIR, including query and document embeddings. The following table gives an overview of the available datasets (cqadupstack is listed as its 12 sub-datasets). See the next section for how to load the individual datasets.

Dataset nDCG@10 #Documents
arguana 53.98 8,674
bioasq 45.66 14,914,603
climate-fever 25.90 5,416,593
cqadupstack-android 50.01 22,998
cqadupstack-english 49.09 40,221
cqadupstack-gaming 60.50 45,301
cqadupstack-gis 39.17 37,637
cqadupstack-mathematica 30.38 16,705
cqadupstack-physics 43.82 38,316
cqadupstack-programmers 43.67 32,176
cqadupstack-stats 35.23 42,269
cqadupstack-text 30.84 68,184
cqadupstack-unix 40.59 47,382
cqadupstack-webmasters 40.68 17,405
cqadupstack-wordpress 34.26 48,605
fever 89.00 5,416,568
fiqa 42.14 57,638
hotpotqa 70.72 5,233,329
msmarco 42.86 8,841,823
nfcorpus 38.63 3,633
nq 61.62 2,681,468
quora 88.72 522,931
robust04 54.06 528,155
scidocs 20.34 25,657
scifact 71.81 5,183
signal1m 26.32 2,866,316
trec-covid 81.78 171,332
trec-news 50.42 594,977
webis-touche2020 32.64 382,545

Notes:

  • arguana: The task in arguana is to find, for a given argument (e.g. "Being vegetarian helps the environment ..."), an argument that refutes it (e.g. "Being vegetarian doesn't have an impact on the environment"). Embedding models retrieve the most similar texts, so for the given argument they first find similar arguments that also support being vegetarian, which are treated as non-relevant here. With special prompting of the embedding model (see the sketch after this list), the model can be steered to find arguments that refute the query, which improves the nDCG@10 score from 53.98 to 61.5.
  • climate-fever: The task is to find evidence that supports or refutes a claim. As with arguana, in the default mode the model primarily finds evidence supporting the claim. By prompting the embedding model, we can tell it to find both supporting and refuting evidence for a claim, which improves the nDCG@10 score to 38.4.
  • quora: As the corpus consists of questions, they have been encoded with input_type='search_query' in order to find similar/duplicate questions.
  • cqadupstack: The dataset consists of 12 sub-datasets; in BEIR, their nDCG@10 scores are averaged (≈41.5 for the scores above).
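
The prompting mentioned above can be as simple as prepending a task instruction to the query text before it is embedded. A minimal sketch of the idea (the exact prompt wording behind the reported scores is not documented here, so the prefix below is only an illustrative assumption):

import cohere
co = cohere.Client("<<COHERE_API_KEY>>")

argument = "Being vegetarian helps the environment ..."

# Illustrative task prefix (assumption), steering retrieval towards refutations
prompt = f"Find an argument that refutes the following argument: {argument}"

response = co.embed(texts=[prompt], model='embed-english-v3.0', input_type='search_query')
query_emb = response.embeddings[0]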

Loading the dataset

Loading the document embeddings

The corpus split contains all document embeddings of the corpus.

You can either load the dataset like this:

from datasets import load_dataset
dataset_name = "hotpotqa"
docs = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-corpus", split="train")

Or you can stream it without downloading it first:

from datasets import load_dataset
dataset_name = "hotpotqa"
docs = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-corpus", split="train", streaming=True)
for doc in docs:
    doc_id = doc['_id']
    title = doc['title']
    text = doc['text']
    emb = doc['emb']

Note that, depending on the dataset, the corpus split can be quite large.
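
For example, to inspect a few documents without downloading the full corpus, you can take a slice of the stream (a minimal sketch; take() is available on streaming datasets in recent versions of the datasets library):

from datasets import load_dataset

docs = load_dataset("Cohere/beir-embed-english-v3", "hotpotqa-corpus", split="train", streaming=True)
for doc in docs.take(3):  # fetches only the first few records
    print(doc['_id'], doc['title'])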

Loading the query embeddings

The queries split contains all query embeddings. There can be up to three splits: train, dev, and test, depending on which splits are available in BEIR. Evaluation is performed on the test split.

You can load the dataset like this:

from datasets import load_dataset
dataset_name = "hotpotqa"
queries = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-queries", split="test")

for query in queries:
    query_id = query['_id']
    text = query['text']
    emb = query['emb']

Loading the qrels

The qrels split contains the query relevance annotations, i.e., the relevance scores for (query, document) pairs.

You can load the dataset like this:

from datasets import load_dataset
dataset_name = "hotpotqa"
qrels = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-qrels", split="test")

for qrel in qrels:
    query_id = qrel['query_id']
    corpus_id = qrel['corpus_id']
    score = qrel['score']

Search

The following example shows how the dataset can be used to build a semantic search application.

Get your API key from cohere.com and start using this dataset.

#Run: pip install cohere datasets torch
from datasets import load_dataset
import torch
import cohere
dataset_name = "hotpotqa"
co = cohere.Client("<<COHERE_API_KEY>>")  # Add your cohere API key from www.cohere.com

#Load at max 1000 documents + embeddings
max_docs = 1000
docs_stream = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-corpus", split="train", streaming=True)
docs = []
doc_embeddings = []
for doc in docs_stream:
    docs.append(doc)
    doc_embeddings.append(doc['emb'])
    if len(docs) >= max_docs:
        break

doc_embeddings = torch.tensor(doc_embeddings)

query = 'What is an abstract' #Your query 
response = co.embed(texts=[query], model='embed-english-v3.0', input_type='search_query')
query_embedding = response.embeddings 
query_embedding = torch.tensor(query_embedding)

# Compute dot score between query embedding and document embeddings
dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1))
top_k = torch.topk(dot_scores, k=3)

# Print results
print("Query:", query)
for doc_id in top_k.indices[0].tolist():
    print(docs[doc_id]['title'])
    print(docs[doc_id]['text'], "\n")
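
The example above ranks by dot product. If you prefer cosine similarity, you can L2-normalize both sides first, so that the dot product equals the cosine. A minimal sketch, reusing query_embedding and doc_embeddings from above:

import torch.nn.functional as F

# After L2-normalization, the inner product equals cosine similarity
query_norm = F.normalize(query_embedding, p=2, dim=1)
docs_norm = F.normalize(doc_embeddings, p=2, dim=1)
cos_scores = torch.mm(query_norm, docs_norm.transpose(0, 1))
top_k = torch.topk(cos_scores, k=3)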

Running evaluations

This dataset allows you to reproduce the BEIR performance results and to compute nDCG@10, Recall@10, and Accuracy@3.

You must have beir, faiss, numpy, and datasets installed (e.g. pip install beir faiss-cpu numpy datasets). The following script loads all files, runs the search, and computes the search quality metrics.

import numpy as np
import faiss
from beir.retrieval.evaluation import EvaluateRetrieval
import time
from datasets import load_dataset

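# Note: this function uses the global query_ids, docs_ids, and qrels defined below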
def faiss_search(index, queries_emb, k=[10, 100]):
    start_time = time.time()
    faiss_scores, faiss_doc_ids = index.search(queries_emb, max(k))
    print(f"Search took {(time.time()-start_time):.2f} sec")
    
    query2id = {idx: qid for idx, qid in enumerate(query_ids)}
    doc2id = {idx: cid for idx, cid in enumerate(docs_ids)}

    faiss_results = {}
    for idx in range(0, len(faiss_scores)):
        qid = query2id[idx]
        doc_scores = {doc2id[doc_id]: score.item() for doc_id, score in zip(faiss_doc_ids[idx], faiss_scores[idx])}
        faiss_results[qid] = doc_scores

    ndcg, map_score, recall, precision = EvaluateRetrieval.evaluate(qrels, faiss_results, k)
    acc = EvaluateRetrieval.evaluate_custom(qrels, faiss_results, [3, 5, 10], metric="acc")
    print(ndcg)
    print(recall)
    print(acc)

dataset_name = "<<DATASET_NAME>>" 
dataset_split = "test"
num_dim = 1024

#Load qrels
df = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-qrels", split=dataset_split)
qrels = {}
for row in df:
    qid = row['query_id']
    cid = row['corpus_id']
    
    if row['score'] > 0:
        if qid not in qrels:
            qrels[qid] = {}
        qrels[qid][cid] = row['score']

#Load queries
df = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-queries", split=dataset_split)

query_ids = df['_id']
query_embs = np.asarray(df['emb'], dtype=np.float32)  # faiss expects float32 vectors
print("Query embeddings:", query_embs.shape)

#Load corpus
df = load_dataset("Cohere/beir-embed-english-v3", f"{dataset_name}-corpus", split="train")

docs_ids = df['_id']

#Build index
print("Build index. This might take some time")
index = faiss.IndexFlatIP(num_dim)
doc_embs = np.asarray(df.to_pandas()['emb'].tolist(), dtype=np.float32)  # faiss expects float32 vectors
index.add(doc_embs)

#Run and evaluate search
print("Seach on index")
faiss_search(index, query_embs)
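
For the larger corpora (several million documents), exact search with IndexFlatIP can become slow. An approximate index is a drop-in alternative; a minimal sketch reusing doc_embs and query_embs from above (the HNSW parameters are illustrative, not tuned values):

# Approximate nearest-neighbor search instead of the exact IndexFlatIP
index = faiss.IndexHNSWFlat(num_dim, 32, faiss.METRIC_INNER_PRODUCT)  # 32 graph links per node
index.hnsw.efSearch = 64  # higher values: better recall, slower search
index.add(doc_embs)
faiss_search(index, query_embs)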

Notes

  • This dataset was created with datasets==2.15.0. Make sure to use this or a newer version of the datasets library.
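
You can check the installed version like this:

import datasets
print(datasets.__version__)  # should be 2.15.0 or newer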