---
language:
  - en
license: cc-by-sa-4.0
tags:
  - retrieval
  - text-retrieval
  - beir
  - physics
  - stack-exchange
  - duplicate-questions
  - community-question-answering
  - benchmark
pretty_name: BEIR CQADupStack Physics (retrieval)
size_categories:
  - 1K<n<10K
task_categories:
  - text-retrieval
---

# CQADupStack — Physics (BEIR)

## Dataset description

CQADupStack is a benchmark for community question answering (cQA) research built from multiple Stack Exchange forums. Each subforum is distributed as a separate “stack” with posts, metadata, and duplicate-question annotations. The Physics stack draws from Physics Stack Exchange—short to medium-length questions and answers about concepts, experiments, and problem solving in physics.

BEIR (Benchmarking-IR) incorporated selected CQADupStack forums, including Physics, into a unified zero-shot information retrieval benchmark. In the BEIR formulation, the task is ad hoc retrieval: given a query (a question post), a system must rank corpus documents (other question posts) so that posts marked as duplicates or relevant in the official judgments appear at the top.

This repository (orgrctera/beir_cqadupstack_physics) exposes the BEIR CQADupStack / physics split in Parquet form for retrieval evaluation pipelines (aligned with CTERA-style query + qrels rows). The Hub snapshot corresponds to the test retrieval setting: 1,039 query rows (one per evaluation query).

## Background: duplicate-question retrieval

The original CQADupStack resource was introduced to support research on finding duplicate or near-duplicate questions in large cQA archives—a core retrieval problem for forums that want to link users to existing threads. BEIR repurposes the annotated relationships as qrels (query–relevance labels) over a document collection of question posts, enabling standard IR metrics (nDCG, Recall@k, MRR, etc.) alongside other heterogeneous BEIR tasks.
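As an illustration of how qrels drive scoring, here is a minimal, self-contained nDCG@k computation (the query and document IDs are hypothetical; production pipelines typically rely on trec_eval-style tooling rather than hand-rolled metrics):

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k relevance grades."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(qrels, ranking, k=10):
    """nDCG@k for one query.

    qrels maps doc_id -> relevance grade (the judged pairs for this query);
    ranking is the system's ordered list of doc_ids, best first.
    """
    gains = [qrels.get(doc_id, 0) for doc_id in ranking]
    ideal = sorted(qrels.values(), reverse=True)
    ideal_dcg = dcg_at_k(ideal, k)
    return dcg_at_k(gains, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Hypothetical qrels for one query: two judged duplicates, binary relevance.
qrels = {"12345": 1, "67890": 1}
ranking = ["12345", "99999", "67890", "11111"]
print(round(ndcg_at_k(qrels, ranking, k=10), 4))  # → 0.9197
```

The same qrels dictionaries feed Recall@k and MRR; only the aggregation over the ranked list changes.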

## Scale (Physics subset, BEIR / standard mirrors)

Reported statistics for the test split of CQADupStack Physics in retrieval benchmarks are on the order of:

| Aspect | Approximate scale |
| --- | --- |
| Queries | ~1,039 (unique evaluation queries) |
| Corpus documents | ~38k question posts (document collection to index) |
| Qrels | ~1.9k judged relevant pairs total; ~1.86 relevant documents per query on average (min 1; max can be large for highly duplicated threads) |

Exact counts should be verified against the specific BEIR export you pair with this file (corpus + qrels version).

## Task: retrieval (CQADupStack Physics)

This dataset defines a text retrieval task for the Physics slice of CQADupStack under BEIR:

  1. Input: a natural-language question (the query text).
  2. Output: a ranked list of document IDs from the Physics corpus (or scores over the full collection), such that relevant IDs—per official qrels—receive high rank.

Evaluation uses standard information retrieval metrics. Full benchmarking also requires the corpus (passage or document text keyed by the same IDs as in BEIR).

Note: Rows in this repository describe the query + relevance judgments side. Combine them with the BEIR CQADupStack physics corpus from the same release for end-to-end retrieval experiments.
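The input/output shape of the task can be sketched with a toy retriever: score every corpus document against the query and emit doc IDs best-first. The corpus texts, IDs, and token-overlap scoring below are illustrative stand-ins, not BEIR data or the lexical baselines reported in the paper:

```python
def tokenize(text):
    """Naive whitespace tokenizer; real systems use proper analyzers."""
    return set(text.lower().split())

def rank(query, corpus):
    """Score each corpus doc by token overlap with the query and
    return doc IDs sorted best-first (the ranked list the task expects)."""
    q = tokenize(query)
    scores = {doc_id: len(q & tokenize(text)) for doc_id, text in corpus.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy stand-in corpus of question posts keyed by doc ID.
corpus = {
    "12345": "why is the sky blue during the day",
    "67890": "what makes sunsets red",
    "24680": "how does the higgs boson give particles mass",
}
print(rank("why does the sky appear blue", corpus))  # → ['12345', '24680', '67890']
```

Feeding each ranked list plus the official qrels into standard IR metrics (nDCG@10 is BEIR's headline number) completes the evaluation loop.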

## Data format (this repository)

Each record typically includes:

| Field | Description |
| --- | --- |
| `id` | UUID for this example row. |
| `input` | The query text (Physics Stack Exchange–style question). |
| `expected_output` | JSON string: a list of objects `{"id": "<corpus-doc-id>", "score": <relevance>}`. Scores follow BEIR qrels conventions (relevance grades as released upstream). |
| `metadata.query_id` | Original BEIR / CQADupStack query identifier (string). |
| `metadata.split` | Split name (here: `test`, the exported BEIR evaluation split). |
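Converting one such row into the queries/qrels dictionaries that BEIR-style evaluators consume can be sketched as follows (this assumes the flattened column names shown above; the row literal reuses the illustrative example below):

```python
import json

def row_to_beir(row):
    """Split one exported row into BEIR-style structures:
    queries maps query_id -> query text,
    qrels maps query_id -> {doc_id: relevance score}."""
    qid = row["metadata.query_id"]
    judgments = json.loads(row["expected_output"])  # JSON-encoded list of {"id", "score"}
    queries = {qid: row["input"]}
    qrels = {qid: {j["id"]: j["score"] for j in judgments}}
    return queries, qrels

row = {
    "id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "input": "Why does the sky appear blue during the day but red at sunset?",
    "expected_output": "[{\"id\": \"12345\", \"score\": 1}, {\"id\": \"67890\", \"score\": 1}]",
    "metadata.query_id": "42",
    "metadata.split": "test",
}
queries, qrels = row_to_beir(row)
print(qrels)  # → {'42': {'12345': 1, '67890': 1}}
```

Merging the per-row dictionaries across the full split yields the complete queries and qrels for evaluation.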

### Example 1

```json
{
  "id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "input": "Why does the sky appear blue during the day but red at sunset?",
  "expected_output": "[{\"id\": \"12345\", \"score\": 1}, {\"id\": \"67890\", \"score\": 1}]",
  "metadata.query_id": "42",
  "metadata.split": "test"
}
```

### Example 2

```json
{
  "id": "f0e1d2c3-b4a5-6978-9012-3456789abcde",
  "input": "How is the Higgs boson related to the mechanism that gives mass to elementary particles?",
  "expected_output": "[{\"id\": \"24680\", \"score\": 1}]",
  "metadata.query_id": "108",
  "metadata.split": "test"
}
```

(Example IDs and query text are illustrative; real `query_id` and document IDs match the BEIR physics export.)

## References

### CQADupStack (original dataset)

Doris Hoogeveen, Karin M. Verspoor, Timothy Baldwin
CQADupStack: A Benchmark Data Set for Community Question-Answering Research
Proceedings of the 20th Australasian Document Computing Symposium (ADCS 2015), Parramatta, Australia.

The paper presents CQADupStack as a multi-forum Stack Exchange resource with duplicate annotations and predefined splits, intended to make cQA and duplicate-detection experiments comparable across studies.

### BEIR benchmark (CQADupStack as a retrieval task)

Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, Iryna Gurevych
BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models
NeurIPS 2021 (Datasets and Benchmarks Track).

Abstract (from arXiv): “Existing neural information retrieval (IR) models have often been studied in homogeneous and narrow settings, which has considerably limited insights into their out-of-distribution (OOD) generalization capabilities. To address this, and to facilitate researchers to broadly evaluate the effectiveness of their models, we introduce Benchmarking-IR (BEIR), a robust and heterogeneous evaluation benchmark for information retrieval. We leverage a careful selection of 18 publicly available datasets from diverse text retrieval tasks and domains and evaluate 10 state-of-the-art retrieval systems including lexical, sparse, dense, late-interaction and re-ranking architectures on the BEIR benchmark. Our results show BM25 is a robust baseline and re-ranking and late-interaction-based models on average achieve the best zero-shot performances, however, at high computational costs. In contrast, dense and sparse-retrieval models are computationally more efficient but often underperform other approaches, highlighting the considerable room for improvement in their generalization capabilities.”

## Related Hub resources

- Raw BEIR-style dataset layouts on Hugging Face (corpus / queries / qrels), e.g. community mirrors under `BeIR/` or `irds/beir_*` collections — search for "CQADupStack physics" on the dataset hub to match your tooling.

## Citation

If you use CQADupStack, cite the ADCS 2015 paper above. If you use the BEIR packaging and evaluation protocol, cite the BEIR NeurIPS 2021 paper. BibTeX for BEIR is available from the official BEIR repository.

## License

Stack Exchange content is typically distributed under Creative Commons terms (commonly CC BY-SA). This card uses cc-by-sa-4.0 as a conventional label for Stack Exchange–derived text; confirm against your exact data snapshot and Stack Exchange’s content policy for compliance-sensitive deployments.
