Introduction

This repository describes how to reproduce the Dense, Sparse, and Dense+Sparse evaluation results of the BGE-M3 paper on the MIRACL dev split.

Requirements

# Install Java (Linux)
apt update
apt install openjdk-21-jdk

# Install Pyserini
pip install pyserini

# Install Faiss
## CPU version
conda install -c conda-forge faiss-cpu

## GPU version
conda install -c conda-forge faiss-gpu
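
A quick sanity check of the environment can catch version problems early. The snippet below is optional and assumes the installs above succeeded; exact version strings will vary.

# Optional: verify the toolchain before building anything
java -version                                      # should report OpenJDK 21
python -c "import pyserini; print(pyserini.__version__)"
python -c "import faiss; print(faiss.__version__)"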

Note that the Pyserini code must be modified to support multiple alpha settings in pyserini/fusion. I have submitted a pull request to the official repository adding this feature; you can refer to that PR to modify the code.
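
If you want to apply the PR locally, one route (a sketch, not the only option) is to install Pyserini from source in editable mode so your changes to pyserini/fusion take effect:

# Install Pyserini from source so local edits to pyserini/fusion apply
git clone https://github.com/castorini/pyserini.git
cd pyserini
# ... apply the changes from the PR to pyserini/fusion here ...
pip install -e .
cd ..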

2CR (Two-Click Reproduction)

Download and Unzip

# Download
## MIRACL topics and qrels
git clone https://huggingface.co/datasets/miracl/miracl
mkdir -p topics-and-qrels
mv miracl/*/*/* topics-and-qrels
## Dense and Sparse Index
git lfs install
git clone https://huggingface.co/datasets/hanhainebula/bge-m3_miracl_2cr

## The en and de dense archives are stored in parts; reassemble them first
cat bge-m3_miracl_2cr/dense/en.tar.gz.part_* > bge-m3_miracl_2cr/dense/en.tar.gz
cat bge-m3_miracl_2cr/dense/de.tar.gz.part_* > bge-m3_miracl_2cr/dense/de.tar.gz


# Unzip
languages=(ar bn en es fa fi fr hi id ja ko ru sw te th zh de yo)

## Dense
for lang in "${languages[@]}"; do
  tar -zxvf bge-m3_miracl_2cr/dense/${lang}.tar.gz -C bge-m3_miracl_2cr/dense/
done

## Sparse
for lang in "${languages[@]}"; do
  tar -zxvf bge-m3_miracl_2cr/sparse/${lang}.tar.gz -C bge-m3_miracl_2cr/sparse/
done
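
After extraction, the commands below expect the layout checked here; a quick listing (illustrative, using zh) confirms everything unpacked correctly:

# Expected layout (illustrative check for one language)
ls bge-m3_miracl_2cr/dense/zh                  # Faiss index files
ls bge-m3_miracl_2cr/sparse/zh/index           # Lucene impact index
ls bge-m3_miracl_2cr/sparse/zh/query_embd.tsv  # pre-encoded query vectors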

Reproduction

Dense

# Available languages: ar bn en es fa fi fr hi id ja ko ru sw te th zh de yo
lang=zh

# Generate run
mkdir -p bge-m3_miracl_2cr/dense/runs
python -m pyserini.search.faiss \
  --threads 16 --batch-size 512 \
  --encoder-class auto \
  --encoder BAAI/bge-m3 \
  --pooling cls --l2-norm \
  --topics topics-and-qrels/topics.miracl-v1.0-${lang}-dev.tsv \
  --index bge-m3_miracl_2cr/dense/${lang} \
  --output bge-m3_miracl_2cr/dense/runs/${lang}.txt \
  --hits 1000

# Evaluate
## nDCG@10
python -m pyserini.eval.trec_eval \
  -c -M 100 -m ndcg_cut.10 \
  topics-and-qrels/qrels.miracl-v1.0-${lang}-dev.tsv \
  bge-m3_miracl_2cr/dense/runs/${lang}.txt
## Recall@100
python -m pyserini.eval.trec_eval \
  -c -m recall.100 \
  topics-and-qrels/qrels.miracl-v1.0-${lang}-dev.tsv \
  bge-m3_miracl_2cr/dense/runs/${lang}.txt
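
To reproduce the full dense table rather than one language at a time, the commands above can be wrapped in a loop (a sketch reusing the languages array defined in the Download and Unzip step):

# Dense retrieval + nDCG@10 for every language (sketch)
mkdir -p bge-m3_miracl_2cr/dense/runs
for lang in "${languages[@]}"; do
  python -m pyserini.search.faiss \
    --threads 16 --batch-size 512 \
    --encoder-class auto --encoder BAAI/bge-m3 \
    --pooling cls --l2-norm \
    --topics topics-and-qrels/topics.miracl-v1.0-${lang}-dev.tsv \
    --index bge-m3_miracl_2cr/dense/${lang} \
    --output bge-m3_miracl_2cr/dense/runs/${lang}.txt \
    --hits 1000
  python -m pyserini.eval.trec_eval -c -M 100 -m ndcg_cut.10 \
    topics-and-qrels/qrels.miracl-v1.0-${lang}-dev.tsv \
    bge-m3_miracl_2cr/dense/runs/${lang}.txt
done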

Sparse

# Available languages: ar bn en es fa fi fr hi id ja ko ru sw te th zh de yo
lang=zh

# Generate run
mkdir -p bge-m3_miracl_2cr/sparse/runs
python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --topics bge-m3_miracl_2cr/sparse/${lang}/query_embd.tsv \
  --index bge-m3_miracl_2cr/sparse/${lang}/index \
  --output bge-m3_miracl_2cr/sparse/runs/${lang}.txt \
  --output-format trec \
  --impact --hits 1000

# Evaluate
## nDCG@10
python -m pyserini.eval.trec_eval \
  -c -M 100 -m ndcg_cut.10 \
  topics-and-qrels/qrels.miracl-v1.0-${lang}-dev.tsv \
  bge-m3_miracl_2cr/sparse/runs/${lang}.txt
## Recall@100
python -m pyserini.eval.trec_eval \
  -c -m recall.100 \
  topics-and-qrels/qrels.miracl-v1.0-${lang}-dev.tsv \
  bge-m3_miracl_2cr/sparse/runs/${lang}.txt
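
Both the dense and sparse run files use the standard six-column TREC format, which is handy to know when debugging; a quick peek:

# Each line: <qid> Q0 <docid> <rank> <score> <run-tag>
head -n 3 bge-m3_miracl_2cr/sparse/runs/${lang}.txt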

Dense+Sparse

Note: you must first apply the PR mentioned in the Requirements section so that pyserini/fusion supports multiple alpha settings.

# Available languages: ar bn en es fa fi fr hi id ja ko ru sw te th zh de yo
lang=zh

# Generate dense run and sparse run (skip if already produced in the sections above)
mkdir -p bge-m3_miracl_2cr/dense/runs bge-m3_miracl_2cr/sparse/runs
python -m pyserini.search.faiss \
  --threads 16 --batch-size 512 \
  --encoder-class auto \
  --encoder BAAI/bge-m3 \
  --pooling cls --l2-norm \
  --topics topics-and-qrels/topics.miracl-v1.0-${lang}-dev.tsv \
  --index bge-m3_miracl_2cr/dense/${lang} \
  --output bge-m3_miracl_2cr/dense/runs/${lang}.txt \
  --hits 1000

python -m pyserini.search.lucene \
  --threads 16 --batch-size 128 \
  --topics bge-m3_miracl_2cr/sparse/${lang}/query_embd.tsv \
  --index bge-m3_miracl_2cr/sparse/${lang}/index \
  --output bge-m3_miracl_2cr/sparse/runs/${lang}.txt \
  --output-format trec \
  --impact --hits 1000

# Generate dense+sparse run
mkdir -p bge-m3_miracl_2cr/fusion/runs

python -m pyserini.fusion \
  --method interpolation \
  --runs bge-m3_miracl_2cr/dense/runs/${lang}.txt bge-m3_miracl_2cr/sparse/runs/${lang}.txt \
  --alpha 1 3e-5 \
  --output bge-m3_miracl_2cr/fusion/runs/${lang}.txt \
  --depth 1000 --k 1000

# Evaluation
## nDCG@10
python -m pyserini.eval.trec_eval \
  -c -M 100 -m ndcg_cut.10 \
  topics-and-qrels/qrels.miracl-v1.0-${lang}-dev.tsv \
  bge-m3_miracl_2cr/fusion/runs/${lang}.txt
## Recall@100
python -m pyserini.eval.trec_eval \
  -c -m recall.100 \
  topics-and-qrels/qrels.miracl-v1.0-${lang}-dev.tsv \
  bge-m3_miracl_2cr/fusion/runs/${lang}.txt
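
To collect a per-language summary in one pass, the evaluation can be scripted once fusion runs exist for every language (a sketch; the grep/awk parsing assumes trec_eval's usual whitespace-separated output):

# Summarize fusion nDCG@10 across all languages (sketch)
for lang in "${languages[@]}"; do
  score=$(python -m pyserini.eval.trec_eval -c -M 100 -m ndcg_cut.10 \
    topics-and-qrels/qrels.miracl-v1.0-${lang}-dev.tsv \
    bge-m3_miracl_2cr/fusion/runs/${lang}.txt | grep ndcg_cut_10 | awk '{print $3}')
  echo "${lang}: ${score}"
done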

Note:

  • The hybrid method used for MIRACL in the BGE-M3 paper is s_dense + 0.3 * s_sparse. However, the stored sparse scores have already been multiplied by 100^2 = 10000, so the alpha applied to the sparse run here is 0.3 / 10000 = 3e-5 rather than 0.3.
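
The weight arithmetic is easy to verify:

# 0.3 (paper weight) divided by the 100^2 scaling baked into the sparse scores
python -c "print(0.3 / 100**2)"   # 3e-05, the sparse --alpha value passed above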