---
license: cc-by-sa-4.0
language:
- en
pretty_name: BEIR CQADupStack Mathematica (Retrieval)
size_categories:
- n<1K
tags:
- beir
- information-retrieval
- retrieval
- rag
- cqadupstack
- stackexchange
- mathematica
- wolfram-language
- community-question-answering
---
# BEIR CQADupStack Mathematica (`orgrctera/beir_cqadupstack_mathematica`)

## Overview
This release packages CQADupStack / Mathematica from the BEIR (Benchmarking IR) benchmark as a table-oriented dataset for retrieval evaluation and tooling (e.g. Langfuse-exported runs). The Mathematica slice is one of the Stack Exchange–hosted sub-corpora in CQADupStack: questions and answers about Wolfram Mathematica (the symbolic computation / notebook environment), drawn from the Mathematica Stack Exchange community.
CQADupStack was introduced as a benchmark for community question answering (cQA) research. Posts come from multiple Stack Exchange forums; each subforum is distributed as its own subset. Duplicate-question links (and other annotations) support both classification and retrieval-style experiments. In BEIR’s retrieval formulation, queries are question posts and relevant documents are other posts marked as duplicates—so models must retrieve semantically equivalent or overlapping questions from a corpus of prior threads.
BEIR (Thakur et al., NeurIPS 2021) standardizes many heterogeneous IR datasets (lexical vs. semantic gaps, short vs. long documents, domain shift) so sparse, dense, and hybrid retrievers can be compared—including in zero-shot settings where training did not target that forum.
This Hub dataset contains 804 query-level rows in the `test` split, aligned with the standard BEIR CQADupStack–Mathematica evaluation split.
## Task
- Task type: Retrieval (document retrieval against an external corpus identified by BEIR document IDs).
- Input (`input`): The user query text (a Mathematica Stack Exchange–style question title/body fragment, as distributed in BEIR).
- Reference (`expected_output`): A JSON string encoding the list of relevant document IDs with relevance scores (BEIR qrels: typically a binary `1` for relevant pairs), e.g. `[{"id": "26203", "score": 1}, ...]`. Evaluators rank a candidate pool (the full CQADupStack–Mathematica corpus in BEIR) and score overlap with these IDs using standard IR metrics (nDCG, MRR, Recall@k, etc.).
- Metadata: Original BEIR identifiers (`query_id`) and the split name are preserved for traceability.
The retrieval system’s job is to return the correct corpus document IDs for each query when scored against the full Mathematica forum corpus distributed with BEIR—that corpus is not inlined row-wise in this table.
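As a minimal, self-contained sketch of how the `expected_output` qrels can be scored against a retriever's ranked output (the `ranking` list and helper names below are illustrative, not part of this release):

```python
import json

def parse_qrels(expected_output: str) -> dict:
    """Turn the expected_output JSON string into {doc_id: relevance}."""
    return {item["id"]: item["score"] for item in json.loads(expected_output)}

def recall_at_k(ranked_ids: list, qrels: dict, k: int) -> float:
    """Fraction of relevant documents found in the top-k ranked IDs."""
    relevant = {doc for doc, score in qrels.items() if score > 0}
    if not relevant:
        return 0.0
    return len(relevant & set(ranked_ids[:k])) / len(relevant)

def mrr(ranked_ids: list, qrels: dict) -> float:
    """Reciprocal rank of the first relevant document (0.0 if none retrieved)."""
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if qrels.get(doc_id, 0) > 0:
            return 1.0 / rank
    return 0.0

# Qrels string in the format shown above; the ranking is a made-up retriever output.
qrels = parse_qrels('[{"id": "26203", "score": 1}]')
ranking = ["10001", "26203", "30500"]
print(recall_at_k(ranking, qrels, k=10))  # 1.0
print(mrr(ranking, qrels))                # 0.5
```

In practice the ranking would come from scoring each query against the full BEIR corpus; the metric logic is unchanged.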
## Background

### CQADupStack (original dataset)
CQADupStack is a collection of threads from twelve Stack Exchange communities, built from a 2014 Stack Exchange data dump and annotated for duplicate question detection and related cQA tasks. The Mathematica subset corresponds to mathematica.stackexchange.com: practical and technical questions about graphics, notebooks, language semantics, performance, interoperability, and similar topics—often with jargon and code-like tokens that differ from general web text.
Hoogeveen et al. (ADCS 2015) describe the resource and evaluation protocols aimed at comparable training/test splits for retrieval and classification.
### BEIR reformulation

BEIR re-hosts CQADupStack per subforum in a shared layout: corpus (JSONL: `_id`, `title`, `text`), queries (JSONL: `_id`, `text`), and qrels (TSV: `query-id`, `corpus-id`, `score`). That common format enables cross-dataset benchmarks and zero-shot evaluation of neural retrieval models.
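A minimal sketch of that layout, parsed with the standard library. The file contents here are tiny inline stand-ins; in real use you would read `corpus.jsonl`, `queries.jsonl`, and the qrels TSV from the downloaded BEIR dataset directory (the `beir` package also ships a loader, `GenericDataLoader`, for this on-disk layout):

```python
import csv
import io
import json

# Inline stand-ins for the three BEIR files (illustrative content only).
corpus_jsonl = '{"_id": "26203", "title": "NDSolve ...", "text": "..."}'
queries_jsonl = '{"_id": "2769", "text": "NDSolve Problem"}'
qrels_tsv = "query-id\tcorpus-id\tscore\n2769\t26203\t1\n"

# corpus: doc-id -> full record; queries: query-id -> query text.
corpus = {d["_id"]: d for d in map(json.loads, corpus_jsonl.splitlines())}
queries = {q["_id"]: q["text"] for q in map(json.loads, queries_jsonl.splitlines())}

# qrels: query-id -> {corpus-id: relevance score}.
qrels = {}
for row in csv.DictReader(io.StringIO(qrels_tsv), delimiter="\t"):
    qrels.setdefault(row["query-id"], {})[row["corpus-id"]] = int(row["score"])

print(queries["2769"])         # NDSolve Problem
print(qrels["2769"]["26203"])  # 1
```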
### This release
Rows were exported from Langfuse (CTERA AI evaluation pipeline) in a flat, parquet-friendly schema: one row per query with gold relevant document IDs in expected_output for downstream scoring and observability.
## Data fields

| Column | Type | Description |
|---|---|---|
| `id` | string | Stable UUID for this row in this Hub release. |
| `input` | string | Query text (question). |
| `expected_output` | string | JSON string: list of objects `{"id": "<corpus-doc-id>", "score": <int>}` — qrels for that query. |
| `metadata.query_id` | string | BEIR query identifier. |
| `metadata.split` | string | Split name (here: `test`). |
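A short sketch of consuming one row in this schema. The row literal below is illustrative (the UUID is a placeholder); the key point is that `expected_output` is a JSON *string* and must be decoded before scoring:

```python
import json

# Illustrative row matching the schema above; the id value is a placeholder.
row = {
    "id": "00000000-0000-0000-0000-000000000000",
    "input": "NDSolve Problem",
    "expected_output": '[{"id": "26203", "score": 1}]',
    "metadata": {"query_id": "2769", "split": "test"},
}

gold = json.loads(row["expected_output"])
relevant_ids = [item["id"] for item in gold if item["score"] > 0]
print(relevant_ids)  # ['26203']
```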
## Splits

| Split | Rows |
|---|---|
| `test` | 804 |
| Total | 804 |
## Examples

Illustrative rows from this release (truncated where long).

### Example 1 — numerical / differential equations
- `input`: `NDSolve Problem`
- `metadata.query_id`: `2769`
- `metadata.split`: `test`
- `expected_output`: `[{"id": "26203", "score": 1}]`
### Example 2 — front-end / typesetting

- `input`: `Formatting Framed - some FrameStyle graphic directives don't work?`
- `metadata.query_id`: `8943`
- `metadata.split`: `test`
- `expected_output`: `[{"id": "49021", "score": 1}]`
### Example 3 — notebook UI behavior

- `input`: `My "N" turned red randomly`
- `metadata.query_id`: `48802`
- `metadata.split`: `test`
- `expected_output`: `[{"id": "3294", "score": 1}]`
## References and citations

### BEIR benchmark (aggregation & protocol)
Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, Iryna Gurevych. BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models. NeurIPS 2021 Datasets and Benchmarks Track.
- Paper: OpenReview
- Code: beir-cellar/beir
- Related HF collections under the broader BEIR ecosystem include per-subforum CQADupStack mirrors (naming varies; see BEIR docs for the canonical paths).
```bibtex
@inproceedings{thakur2021beir,
  title={{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
  author={Thakur, Nandan and Reimers, Nils and R{\"u}ckl{\'e}, Andreas and Srivastava, Abhishek and Gurevych, Iryna},
  booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
  year={2021},
  url={https://openreview.net/forum?id=wCu6T5xFjeJ}
}
```
### CQADupStack (original dataset)
Doris Hoogeveen, Karin M. Verspoor, Timothy Baldwin. CQADupStack: A Benchmark Data Set for Community Question-Answering Research. ADCS 2015.
- Anthology / DOI: 10.1145/2838931.2838934
- Historical resource page (dataset lineage): University of Melbourne scholarly work record
Abstract (short): The authors present a benchmark built from Stack Exchange forums for community question answering, with duplicate annotations and splits designed so that evaluation matches realistic settings (e.g. matching a new post to older threads). The resource supports retrieval and classification baselines with shared preprocessing and metrics—motivating the later BEIR retrieval packaging.
```bibtex
@inproceedings{hoogeveen2015cqadupstack,
  title={{CQADupStack}: A Benchmark Data Set for Community Question-Answering Research},
  author={Hoogeveen, Doris and Verspoor, Karin M. and Baldwin, Timothy},
  booktitle={Proceedings of the 20th Australasian Document Computing Symposium},
  year={2015},
  doi={10.1145/2838931.2838934}
}
```
## Stack Exchange content licensing
Underlying posts are subject to Stack Exchange’s CC BY-SA licensing terms for user-contributed content; respect attribution and share-alike requirements when redistributing or building on the text.