---
license: cc-by-4.0
annotations_creators:
  - NIST
task_categories:
  - text-retrieval
  - text-ranking
language:
  - en
  - zh
  - fa
  - ru
multilinguality:
  - multilingual
pretty_name: NeuCLIRBench
size_categories:
  - n<1K
task_ids:
  - document-retrieval
configs:
  - config_name: queries
    default: true
    data_files:
      - split: eng
        path: data/news.eng.tsv
      - split: fas
        path: data/news.fas.tsv
      - split: rus
        path: data/news.rus.tsv
      - split: zho
        path: data/news.zho.tsv
    format: csv
    sep: "\t"
    header: null
    names:
      - id
      - query
    dataset_info:
      features:
        - name: id
          dtype: string
        - name: query
          dtype: string
  - config_name: qrels
    default: false
    data_files:
      - split: mlir
        path: data/qrels.mlir.gains.txt
      - split: fas
        path: data/qrels.fas.gains.txt
      - split: rus
        path: data/qrels.rus.gains.txt
      - split: zho
        path: data/qrels.zho.gains.txt
    format: csv
    sep: ' '
    header: null
    names:
      - id
      - ignore
      - docid
      - relevance
    dataset_info:
      features:
        - name: id
          dtype: string
        - name: ignore
          dtype: string
        - name: docid
          dtype: string
        - name: relevance
          dtype: int64
---

# NeuCLIRBench Topics and Queries

NeuCLIRBench is an evaluation benchmark for monolingual, cross-language, and multilingual ad hoc retrieval.

The document collection can be found at [neuclir/neuclir1](https://huggingface.co/datasets/neuclir/neuclir1).

## Supporting Tasks and Corresponding Data

NeuCLIRBench supports three types of tasks: monolingual, cross-language, and multilingual ad hoc retrieval. The following tables specify the documents, queries, and qrels (labels) to use for each task.

Please report nDCG@20 for all tasks.

We use `:` to indicate a subset (configuration) of a dataset, e.g., `neuclir/bench:queries`.

### Monolingual Retrieval (mono)

| Language | Documents | Queries | Qrels |
|----------|-----------|---------|-------|
| English | All splits under `neuclir/neuclir1:mt_docs` | `eng` split of `neuclir/bench:queries` | `mlir` split of `neuclir/bench:qrels` |
| Persian | `fas` split of `neuclir/neuclir1:default` | `fas` split of `neuclir/bench:queries` | `fas` split of `neuclir/bench:qrels` |
| Russian | `rus` split of `neuclir/neuclir1:default` | `rus` split of `neuclir/bench:queries` | `rus` split of `neuclir/bench:qrels` |
| Chinese | `zho` split of `neuclir/neuclir1:default` | `zho` split of `neuclir/bench:queries` | `zho` split of `neuclir/bench:qrels` |

### Cross-Language Retrieval (clir)

| Language | Documents | Queries | Qrels |
|----------|-----------|---------|-------|
| Persian | `fas` split of `neuclir/neuclir1:default` | `eng` split of `neuclir/bench:queries` | `fas` split of `neuclir/bench:qrels` |
| Russian | `rus` split of `neuclir/neuclir1:default` | `eng` split of `neuclir/bench:queries` | `rus` split of `neuclir/bench:qrels` |
| Chinese | `zho` split of `neuclir/neuclir1:default` | `eng` split of `neuclir/bench:queries` | `zho` split of `neuclir/bench:qrels` |

### Multilingual Retrieval (mlir)

| Language | Documents | Queries | Qrels |
|----------|-----------|---------|-------|
| English | All splits under `neuclir/neuclir1:default` | `eng` split of `neuclir/bench:queries` | `mlir` split of `neuclir/bench:qrels` |
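The qrels files referenced above are whitespace-separated TREC-style files with one judgment per line (`topic_id ignore docid relevance`, matching the column names in the dataset config; the second column is the historical "iteration" field and is unused). A minimal parsing sketch, using made-up topic and document ids:

```python
from collections import defaultdict

def load_qrels(lines):
    """Parse TREC-style qrels lines: `topic_id ignore docid gain`.

    Returns a nested mapping {topic_id: {docid: gain}}.
    """
    qrels = defaultdict(dict)
    for line in lines:
        topic_id, _, docid, gain = line.split()
        qrels[topic_id][docid] = int(gain)
    return qrels

# Invented sample lines for illustration only.
sample = [
    "101 0 doc_a 3",
    "101 0 doc_b 0",
    "102 0 doc_c 1",
]
qrels = load_qrels(sample)
print(qrels["101"]["doc_a"])  # → 3
```

The resulting per-topic dicts plug directly into any graded metric that consumes docid-to-gain mappings.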

## Baseline Retrieval Results and Run Files

We also provide all baseline retrieval results reported in the NeuCLIRBench paper. Please refer to the paper for detailed descriptions of each model.

- Monolingual Results
- Cross-Language and Multilingual Results

### Run Names

Please refer to the `./runs` directory in this dataset to find all runs. Files follow the naming scheme `{run_handle}_{task}_{lang}.trec`, where `task` is one of `mono`, `clir`, or `mlir`. Please refer to the task sections above for details.

| Run Handle | Model Type | Model Name |
|------------|------------|------------|
| `bm25` | Lexical | BM25 |
| `bm25dt` | Lexical | BM25 w/ DT |
| `bm25qt` | Lexical | BM25 w/ QT |
| `milco` | Bi-Encoder | MILCO |
| `plaidx` | Bi-Encoder | PLAID-X |
| `qwen8b` | Bi-Encoder | Qwen3 8B Embed |
| `qwen4b` | Bi-Encoder | Qwen3 4B Embed |
| `qwen600m` | Bi-Encoder | Qwen3 0.6B Embed |
| `arctic` | Bi-Encoder | Arctic-Embed Large v2 |
| `splade` | Bi-Encoder | SPLADEv3 |
| `fusion3` | Bi-Encoder | Fusion |
| `repllama` | Bi-Encoder | RepLlama |
| `me5large` | Bi-Encoder | e5 Large |
| `jinav3` | Bi-Encoder | JinaV3 |
| `bgem3sparse` | Bi-Encoder | BGE-M3 Sparse |
| `mt5` | Pointwise Reranker | Mono-mT5XXL |
| `qwen3-0.6b-rerank` | Pointwise Reranker | Qwen3 0.6B Rerank |
| `qwen3-4b-rerank` | Pointwise Reranker | Qwen3 4B Rerank |
| `qwen3-8b-rerank` | Pointwise Reranker | Qwen3 8B Rerank |
| `jina-rerank` | Pointwise Reranker | Jina Reranker |
| `searcher-rerank` | Pointwise Reranker | SEARCHER Reranker |
| `rank1` | Pointwise Reranker | Rank1 |
| `qwq` | Listwise Reranker | Rank-K (QwQ) |
| `rankzephyr` | Listwise Reranker | RankZephyr 7B |
| `firstqwen` | Listwise Reranker | FIRST Qwen3 8B |
| `rq32b` | Listwise Reranker | RankQwen-32B |
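The `.trec` run files use the standard six-column TREC run format (`topic_id Q0 docid rank score run_tag`). A minimal sketch for reading one back into a per-topic ranking; the entries below are invented for illustration:

```python
def read_trec_run(lines):
    """Parse TREC run lines: `topic_id Q0 docid rank score run_tag`.

    Returns {topic_id: [docid, ...]} ordered by descending score.
    """
    run = {}
    for line in lines:
        topic_id, _, docid, _, score, _ = line.split()
        run.setdefault(topic_id, []).append((float(score), docid))
    return {t: [d for _, d in sorted(pairs, reverse=True)]
            for t, pairs in run.items()}

# Made-up sample lines in the naming/tag style described above.
sample = [
    "101 Q0 doc_b 1 12.5 bm25_mono_fas",
    "101 Q0 doc_a 2 11.0 bm25_mono_fas",
]
run = read_trec_run(sample)
print(run["101"])  # → ['doc_b', 'doc_a']
```

Truncating each ranking to its first 20 entries gives the input needed for the nDCG@20 numbers requested above.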

## Citation

```bibtex
@article{neuclirbench,
      title={NeuCLIRBench: A Modern Evaluation Collection for Monolingual, Cross-Language, and Multilingual Information Retrieval},
      author={Dawn Lawrie and James Mayfield and Eugene Yang and Andrew Yates and Sean MacAvaney and Ronak Pradeep and Scott Miller and Paul McNamee and Luca Soldaini},
      year={2025},
      eprint={2511.14758},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2511.14758},
}
```