orgrctera committed
Commit 0458064 · verified · 1 Parent(s): eaa0511

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +118 -18
README.md CHANGED
@@ -1,28 +1,128 @@
  ---
- tags: ["benchmark", "beir", "cqadupstack_wordpress", "retrieval"]
  task_categories:
- - question-answering
  - text-retrieval
- size_categories:
- - 1K<n<10K
  ---

- # beir_cqadupstack_wordpress

- BEIR CQADupStack/wordpress test split

- | Field | Value |
- |-------|-------|
- | Benchmark | beir |
- | Sub-benchmark | cqadupstack_wordpress |
- | Type | retrieval |
- | Total items | 541 |
- | Splits | 1 |

- ## Splits

- | Split | Items |
- |-------|-------|
- | test | 541 |

- Exported from Langfuse.

  ---
+ language:
+ - en
+ license: cc-by-sa-4.0
+ tags:
+ - retrieval
+ - text-retrieval
+ - beir
+ - stack-exchange
+ - wordpress
+ - community-question-answering
+ - duplicate-questions
+ - benchmark
+ pretty_name: BEIR CQADupStack WordPress (retrieval)
+ size_categories:
+ - n<1K
  task_categories:
  - text-retrieval
  ---
+
+ # CQADupStack WordPress (BEIR): duplicate-question retrieval
+
+ ## Dataset description
+
+ **CQADupStack** is a benchmark for **community question answering (cQA)** built from publicly available **Stack Exchange** content. It was introduced by Hoogeveen, Verspoor, and Baldwin at **ADCS 2015** as a resource for studying **duplicate questions**: threads and posts are organized so that systems can be trained and evaluated on finding prior questions that match (or semantically duplicate) a newly asked question, a task central to reducing fragmentation and improving search on Q&A sites.
+
+ The original release aggregates material from **twelve** Stack Exchange forums. The **WordPress** subset corresponds to **[WordPress Stack Exchange](https://wordpress.stackexchange.com/)**: questions about WordPress development, themes, plugins, and site administration. Duplicate links come from the platform’s moderation workflow, with predefined splits so results stay comparable across papers.
+
+ **BEIR** (*Benchmarking IR*) repackaged CQADupStack, along with many other public corpora, as a standard **retrieval** benchmark for **zero-shot** evaluation of lexical, sparse, dense, and hybrid retrievers across heterogeneous tasks. In the BEIR formulation, **CQADupStack (WordPress)** is a **duplicate-question retrieval** setting: the “documents” are questions (or question-like posts) from the WordPress Stack Exchange corpus, and the task is to rank the true duplicate(s) for each query highly.
+
+ In upstream BEIR / **ir_datasets**, this slice is documented with roughly **49K** corpus documents, **541** test queries, and **744** qrels entries (binary relevance). Full retrieval evaluation requires indexing that **corpus** and ranking **queries** against it; this Hub repository exposes the **query + qrels** side in **Parquet** form for retrieval pipelines (aligned with the BEIR **test** split).
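+
+ As a minimal loading sketch (assuming this repository’s Parquet export loads under the default configuration; the repo id comes from this card), the `test` split can be pulled with the Hugging Face `datasets` library:
+
+ ```python
+ # Sketch: load the query + qrels side of this Hub repository.
+ # Assumes the default Parquet configuration exposed by `datasets`.
+ import json
+
+ from datasets import load_dataset
+
+ ds = load_dataset("orgrctera/beir_cqadupstack_wordpress", split="test")
+ print(ds.num_rows)  # 541 queries expected
+
+ row = ds[0]
+ print(row["input"])                        # query text
+ print(json.loads(row["expected_output"]))  # [{"id": ..., "score": ...}]
+ ```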
+
+ ### Scale (this Hub snapshot)
+
+ | Split | Rows |
+ |-------|------|
+ | `test` | 541 |
+
+ Each row is one **query** with **relevance judgments** (`expected_output`) pointing at corpus document identifiers.
+
+ ## Task: retrieval (CQADupStack WordPress)
+
+ The task is **ad hoc retrieval** specialized to **duplicate-question finding** in the WordPress Stack Exchange domain:
+
+ 1. **Input:** a natural-language **question** (the query), typically about WordPress configuration, PHP hooks, themes, plugins, or the REST API.
+ 2. **Output:** a ranked list of **document IDs** from the CQADupStack WordPress corpus (or scores over the full collection), such that **relevant** IDs (those marked as duplicates in the official qrels) appear at the top.
+
+ Standard IR metrics apply (e.g., **nDCG@k**, **Recall@k**, **MRR**), using the provided qrels as ground truth.
+
+ > **Note:** Align `expected_output` document IDs with the same **BEIR CQADupStack WordPress** corpus you use for indexing (same ID space as the upstream BEIR release).
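+
+ One way to satisfy that alignment is to load the official corpus with the upstream `beir` package. A sketch (the download URL follows BEIR’s documented hosting pattern; verify it for your environment):
+
+ ```python
+ # Sketch: fetch the official BEIR CQADupStack corpus so that document IDs
+ # match this dataset's expected_output. Requires `pip install beir`.
+ import os
+
+ from beir import util
+ from beir.datasets.data_loader import GenericDataLoader
+
+ url = "https://public.ukp.informatik.tu-darmstadt.de/thakur/BEIR/datasets/cqadupstack.zip"
+ data_path = util.download_and_unzip(url, "datasets")
+
+ # CQADupStack unpacks into one folder per forum; this slice is "wordpress".
+ corpus, queries, qrels = GenericDataLoader(
+     data_folder=os.path.join(data_path, "wordpress")
+ ).load(split="test")
+
+ print(len(corpus), len(queries))  # ~49K documents, 541 queries
+ ```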
+ ## Data format (this repository)
+
+ Each record includes:
+
+ | Field | Description |
+ |-------|-------------|
+ | `id` | UUID for this example row. |
+ | `input` | The **query text** (Stack Exchange–style question). |
+ | `expected_output` | JSON string: list of objects `{"id": "<corpus-doc-id>", "score": <relevance>}`. Scores follow the BEIR qrels convention (typically `1` for relevant in binary settings). A query may have **one or more** relevant documents; see the parsing sketch after this table. |
+ | `metadata.query_id` | Original BEIR query identifier (string). |
+ | `metadata.split` | Split name; in this dataset, **`test`**. |
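+
+ As a parsing sketch, `expected_output` can be turned into the `{query_id: {doc_id: relevance}}` mapping most IR evaluation tools expect. Field access below assumes flat `metadata.query_id` keys as shown in the examples that follow; adapt it if your loader exposes a nested `metadata` struct instead:
+
+ ```python
+ # Sketch: build a qrels dict ({query_id: {doc_id: relevance}}) from rows
+ # of this dataset. Field names follow the examples in this card.
+ import json
+
+ def build_qrels(rows):
+     qrels = {}
+     for row in rows:
+         qid = row["metadata.query_id"]
+         judgments = json.loads(row["expected_output"])
+         qrels[qid] = {j["id"]: int(j["score"]) for j in judgments}
+     return qrels
+ ```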
+
+ ### Example 1 (single relevant document)
+
+ ```json
+ {
+   "id": "141baaae-cb5d-4cde-9987-91b40dcbf9cd",
+   "input": "CPT admin column auto order by date instead of title",
+   "expected_output": "[{\"id\": \"81939\", \"score\": 1}]",
+   "metadata.query_id": "101834",
+   "metadata.split": "test"
+ }
+ ```
+
+ ### Example 2 (multiple relevant documents)
+
+ ```json
+ {
+   "id": "a51c061a-64a8-4460-9c73-3db13e5d3cd0",
+   "input": "Listing pages which uses specific template",
+   "expected_output": "[{\"id\": \"29918\", \"score\": 1}, {\"id\": \"130919\", \"score\": 1}]",
+   "metadata.query_id": "115020",
+   "metadata.split": "test"
+ }
+ ```
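+
+ Given a run (`{query_id: {doc_id: retrieval_score}}`) produced by any retriever over the BEIR corpus, scoring against these qrels takes a few lines with the third-party `pytrec_eval` package (an assumption of this sketch, not a dependency of the dataset):
+
+ ```python
+ # Sketch: mean nDCG@10 and Recall@100 for a run against qrels built with
+ # build_qrels() above. `run` maps query_id -> {doc_id: retrieval score}.
+ import pytrec_eval
+
+ def evaluate(qrels, run):
+     evaluator = pytrec_eval.RelevanceEvaluator(qrels, {"ndcg_cut.10", "recall.100"})
+     per_query = evaluator.evaluate(run)
+     n = len(per_query)
+     ndcg10 = sum(m["ndcg_cut_10"] for m in per_query.values()) / n
+     recall100 = sum(m["recall_100"] for m in per_query.values()) / n
+     return ndcg10, recall100
+ ```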
+
+ ## References
+
+ ### CQADupStack (original dataset)
+
+ **Doris Hoogeveen, Karin M. Verspoor, Timothy Baldwin**
+ *CQADupStack: A Benchmark Data Set for Community Question-Answering Research*
+ Proceedings of the 20th Australasian Document Computing Symposium (**ADCS 2015**), Parramatta, NSW, Australia.
+
+ - DOI: [10.1145/2838931.2838934](https://doi.org/10.1145/2838931.2838934)
+ - PDF (author page): [ADCS 2015 paper](https://people.eng.unimelb.edu.au/tbaldwin/pubs/adcs2015.pdf)
+ - Project page: [CQADupStack resources](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
+ - Code/data mirror: [CQADupStack on GitHub](https://github.com/D1Doris/CQADupStack)
+
+ The paper motivates duplicate-question tasks on real **Stack Exchange** communities and describes construction from a **Stack Exchange data dump**, including duplicate links and evaluation protocols suited to retrieval experiments.
+
+ ### BEIR benchmark (CQADupStack as one of 18 datasets)
+
+ **Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, Iryna Gurevych**
+ *BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models*
+ NeurIPS 2021 (Datasets and Benchmarks Track).
+
+ **Abstract (from arXiv):** *“Existing neural information retrieval (IR) models have often been studied in homogeneous and narrow settings, which has considerably limited insights into their out-of-distribution (OOD) generalization capabilities. To address this, and to facilitate researchers to broadly evaluate the effectiveness of their models, we introduce Benchmarking-IR (BEIR), a robust and heterogeneous evaluation benchmark for information retrieval. We leverage a careful selection of 18 publicly available datasets from diverse text retrieval tasks and domains and evaluate 10 state-of-the-art retrieval systems including lexical, sparse, dense, late-interaction and re-ranking architectures on the BEIR benchmark. Our results show BM25 is a robust baseline and re-ranking and late-interaction-based models on average achieve the best zero-shot performances, however, at high computational costs. In contrast, dense and sparse-retrieval models are computationally more efficient but often underperform other approaches, highlighting the considerable room for improvement in their generalization capabilities.”*
+
+ - Paper: [arXiv:2104.08663](https://arxiv.org/abs/2104.08663) · [OpenReview](https://openreview.net/forum?id=wCu6T5xFjeJ) (NeurIPS 2021 Datasets & Benchmarks)
+ - Code and data: [BEIR on GitHub](https://github.com/beir-cellar/beir)
+
+ ### Related resources
+
+ - **ir_datasets** documents BEIR slices with corpus/query/qrel counts: [beir/cqadupstack/wordpress](https://ir-datasets.com/beir.html) (search for `beir/cqadupstack/wordpress` on the page).
+ - **MTEB** lists CQADupStack variants for embedding evaluation, useful for cross-checking task definitions: [MTEB on Hugging Face](https://huggingface.co/mteb).
+
+ ## Citation
+
+ If you use **CQADupStack**, cite the ADCS 2015 paper above. If you use the **BEIR** packaging or evaluation protocol, cite the BEIR NeurIPS 2021 paper. If you use **this Parquet export**, cite both the original data sources and BEIR as appropriate for your experiment.
+
+ ## License
+
+ Stack Exchange content is typically distributed under **Creative Commons** terms; BEIR and downstream cards commonly reference **`cc-by-sa-4.0`**. Verify against your corpus snapshot and upstream Stack Exchange / BEIR terms if you need strict compliance.
+
+ ---
+
+ *Dataset card maintained for the `orgrctera/beir_cqadupstack_wordpress` Hub repository.*