orgrctera committed on
Commit a6a06ca · verified · 1 Parent(s): da0360d

Upload README.md with huggingface_hub

Files changed (1): README.md +147 -18

README.md CHANGED
@@ -1,28 +1,157 @@
  ---
- tags: ["benchmark", "beir", "cqadupstack_android", "retrieval"]
- task_categories:
- - question-answering
- - text-retrieval
  size_categories:
- - 1K<n<10K
  ---

- # beir_cqadupstack_android

- BEIR CQADupStack/android test split

- | Field | Value |
- |-------|-------|
- | Benchmark | beir |
- | Sub-benchmark | cqadupstack_android |
- | Type | retrieval |
- | Total items | 699 |
- | Splits | 1 |

  ## Splits

- | Split | Items |
- |-------|-------|
- | test | 699 |

- Exported from Langfuse.
 
  ---
+ license: cc-by-sa-4.0
+ language:
+ - en
+ pretty_name: BEIR CQADupStack Android (Retrieval)
  size_categories:
+ - 100<n<1K
+ tags:
+ - information-retrieval
+ - beir
+ - retrieval
+ - rag
+ - stack-exchange
+ - community-question-answering
+ - duplicate-question-retrieval
+ - android
  ---

+ # BEIR CQADupStack — Android (`orgrctera/beir_cqadupstack_android`)

+ ## Overview

+ This release packages the **Android** subforum slice of **CQADupStack** from the [**BEIR**](https://github.com/beir-cellar/beir) (Benchmarking IR) benchmark as a table-oriented dataset for **retrieval** evaluation and tooling (e.g. Langfuse-exported runs).
+
+ **CQADupStack** is a community **question answering (cQA)** resource built from **Stack Exchange** forums: posts are paired with **duplicate-question** annotations so systems can be evaluated on finding **earlier questions** that address the same information need. The **Android** subset restricts the corpus and queries to the [**android.stackexchange.com**](https://android.stackexchange.com/) forum—typical topics include device setup, apps, rooting, OEM behavior, and connectivity.
+
+ In the **BEIR** formulation, each **test query** is matched against a **corpus** of forum posts (questions with titles and body text, plus metadata such as tags in the upstream BEIR distribution). **Relevance judgments (qrels)** mark which corpus documents are **true duplicates** (or duplicate-cluster members used as positives) for that query. Evaluators rank retrieved documents and measure overlap with these IDs using standard IR metrics.
+
+ **Lineage:** [CQADupStack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/) (original benchmark) → [BEIR `cqadupstack/android`](https://github.com/beir-cellar/beir) → **this Hub dataset** (`orgrctera/beir_cqadupstack_android`).
+
+ **Scale (BEIR / ir-datasets):** the Android configuration includes **699 test queries** and roughly **23K corpus documents** (the full corpus is indexed at evaluation time; this table stores **query rows** with gold document IDs, not the corpus text).
+
+ ## Task
+
+ - **Task type:** **Retrieval** — **duplicate-question retrieval** on the **CQADupStack Android** collection (BEIR sub-benchmark `cqadupstack_android`).
+ - **Input (`input`):** Natural-language **query** text (a Stack Exchange **question** title/body-style string as distributed in BEIR).
+ - **Reference (`expected_output`):** A JSON **string** encoding the list of **relevant corpus document IDs** with binary relevance **scores** (typically `1` for relevant pairs), e.g. `[{"id": "1120", "score": 1}, ...]`.
+ - **Metadata:** `metadata.query_id` is the BEIR query identifier; `metadata.split` is **`test`** for this release.
+
+ The retrieval system’s job is to return the correct **corpus document IDs** for each query when scored against the full **CQADupStack Android** corpus distributed with BEIR (not duplicated row-wise in this table).
+
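As a concrete sketch of how an evaluator might score a ranking against a row's `expected_output`: the helper `recall_at_k` below and the example ranking are illustrative assumptions, not part of this release or of BEIR's tooling.

```python
import json

def recall_at_k(expected_output: str, ranked_doc_ids: list[str], k: int) -> float:
    """Fraction of gold (relevant) corpus doc IDs found in the top-k ranking."""
    qrels = json.loads(expected_output)  # e.g. [{"id": "1120", "score": 1}, ...]
    relevant = {entry["id"] for entry in qrels if entry["score"] > 0}
    retrieved = set(ranked_doc_ids[:k])
    return len(relevant & retrieved) / len(relevant)

# Gold IDs borrowed from Example 1 on this card; the ranking itself is invented.
gold = '[{"id": "1120", "score": 1}, {"id": "31804", "score": 1}]'
print(recall_at_k(gold, ["1120", "9999", "31804", "7"], k=3))  # → 1.0
```

A full evaluation would use an IR toolkit reporting the standard BEIR metrics (nDCG@10, recall@100, etc.) rather than a hand-rolled recall.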
+ ## Background
+
+ ### CQADupStack (original dataset)
+
+ Doris Hoogeveen, Karin M. Verspoor, and Timothy Baldwin introduced **CQADupStack** as a **benchmark for community question answering**, with emphasis on **duplicate question detection** and related retrieval/classification setups. The resource spans **twelve** Stack Exchange subforums; each subforum provides **annotated duplicate relations** derived from site curation (e.g. “linked” / duplicate closures), enabling comparable **train/test** splits for retrieval and classification experiments. The underlying posts come from **Stack Exchange** data dumps (the original release notes reference a **September 26, 2014** dump).
+
+ > *Community question answering (cQA) forums accumulate large volumes of user-generated knowledge… Duplicate question detection—identifying whether a new question has already been answered—is central to maintaining these archives.*
+ > *(Paraphrased theme; see the ADCS 2015 paper below for the full problem statement and methodology.)*
+
+ - Paper: [**CQADupStack: A Benchmark Data Set for Community Question-Answering Research**](https://doi.org/10.1145/2838931.2838934) (ADCS 2015) — PDF mirror: [University of Melbourne](https://people.eng.unimelb.edu.au/tbaldwin/pubs/adcs2015.pdf)
+ - Website: [nlp.cis.unimelb.edu.au/resources/cqadupstack](http://nlp.cis.unimelb.edu.au/resources/cqadupstack/)
+ - Repository: [D1Doris/CQADupStack](https://github.com/D1Doris/CQADupStack)
+
+ ### BEIR reformulation
+
+ **BEIR** (Thakur et al., NeurIPS 2021 Datasets & Benchmarks) repackages CQADupStack (per subforum, including **android**) into a unified layout: **corpus** (JSONL: `_id`, `title`, `text`, …), **queries** (JSONL: `_id`, `text`, …), and **qrels** (TSV). That format supports **zero-shot** comparison of lexical, sparse, dense, and reranking retrievers across heterogeneous tasks.
+
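To relate the two layouts: a small sketch of grouping a BEIR-style qrels TSV by query, assuming the usual BEIR header columns (`query-id`, `corpus-id`, `score`); the helper name `load_qrels` and the inline sample are illustrative.

```python
import csv
import io

def load_qrels(tsv_text: str) -> dict[str, dict[str, int]]:
    """Group BEIR-style qrels rows (query-id, corpus-id, score) by query."""
    qrels: dict[str, dict[str, int]] = {}
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    for row in reader:
        qrels.setdefault(row["query-id"], {})[row["corpus-id"]] = int(row["score"])
    return qrels

# Tiny inline sample in that shape, using IDs from Example 1 on this card.
sample = "query-id\tcorpus-id\tscore\n47290\t1120\t1\n47290\t31804\t1\n"
print(load_qrels(sample))  # → {'47290': {'1120': 1, '31804': 1}}
```

Each per-query group corresponds to one `expected_output` list in this release's flattened, one-row-per-query schema.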
+ ### This release
+
+ Rows were **exported from Langfuse** (CTERA AI evaluation pipeline) in a flat, Parquet-friendly schema: **one row per query** with **gold relevant document IDs** in `expected_output` for downstream scoring and observability.
+
+ ## Data fields
+
+ | Column | Type | Description |
+ |--------|------|-------------|
+ | `id` | `string` | Stable UUID for this row in this Hub release. |
+ | `input` | `string` | Query text (natural-language question). |
+ | `expected_output` | `string` | JSON string: list of objects `{"id": "<corpus-doc-id>", "score": <int>}` — qrels for that query. |
+ | `metadata.query_id` | `string` | BEIR CQADupStack Android query identifier. |
+ | `metadata.split` | `string` | Split name: `test`. |
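A row with this schema can be unpacked in a few lines. The row literal below is hypothetical apart from the field names (the UUID is a placeholder and the gold list is abridged):

```python
import json

# A row shaped like the fields table above; values are illustrative.
row = {
    "id": "00000000-0000-0000-0000-000000000000",  # placeholder UUID
    "input": "Symbolic link to Dropbox",
    "expected_output": '[{"id": "1120", "score": 1}, {"id": "31804", "score": 1}]',
    "metadata": {"query_id": "47290", "split": "test"},
}

# expected_output is a JSON *string*, so it must be decoded before use.
gold_ids = [entry["id"] for entry in json.loads(row["expected_output"])]
print(row["metadata"]["query_id"], gold_ids)  # → 47290 ['1120', '31804']
```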
 
  ## Splits

+ | Split | Rows |
+ |-------|------|
+ | `test` | 699 |
+ | **Total** | **699** |
+
+ ## Examples
+
+ Real rows from this dataset (IDs and text as published on the Hub).
+
+ ### Example 1 — multi-duplicate query
+
+ - **`input`:** `Symbolic link to Dropbox`
+ - **`metadata.query_id`:** `47290`
+ - **`metadata.split`:** `test`
+ - **`expected_output`:**
+ ```json
+ [
+   {"id": "1120", "score": 1},
+   {"id": "31804", "score": 1},
+   {"id": "14873", "score": 1},
+   {"id": "38645", "score": 1}
+ ]
+ ```
+
+ ### Example 2 — single relevant document
+
+ - **`input`:** `Does anyone else have issues with Google Talk for Android always "losing connection to server"?`
+ - **`metadata.query_id`:** `10118`
+ - **`metadata.split`:** `test`
+ - **`expected_output`:**
+ ```json
+ [
+   {"id": "4345", "score": 1}
+ ]
+ ```
+
+ ## References and citations
+
+ ### BEIR benchmark (aggregation & protocol)
+
+ Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, Iryna Gurevych. **BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models.** *NeurIPS 2021 Datasets and Benchmarks Track.*
+
+ > **Abstract (excerpt):** *We introduce Benchmarking-IR (BEIR), a robust and heterogeneous evaluation benchmark for information retrieval. We leverage a careful selection of 18 publicly available datasets from diverse text retrieval tasks and domains and evaluate 10 state-of-the-art retrieval systems including lexical, sparse, dense, late-interaction and re-ranking architectures on the BEIR benchmark…*
+
+ - Paper: [arXiv:2104.08663](https://arxiv.org/abs/2104.08663) · [OpenReview](https://openreview.net/forum?id=wCu6T5xFjeJ)
+ - Code / data: [beir-cellar/beir](https://github.com/beir-cellar/beir)
+
+ ```bibtex
+ @inproceedings{thakur2021beir,
+   title     = {{BEIR}: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models},
+   author    = {Thakur, Nandan and Reimers, Nils and R{\"u}ckl{\'e}, Andreas and Srivastava, Abhishek and Gurevych, Iryna},
+   booktitle = {Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)},
+   year      = {2021},
+   url       = {https://openreview.net/forum?id=wCu6T5xFjeJ}
+ }
+ ```
+
+ ### CQADupStack (original Android / multi-forum resource)
+
+ Doris Hoogeveen, Karin M. Verspoor, Timothy Baldwin. **CQADupStack: A Benchmark Data Set for Community Question-Answering Research.** *Proceedings of the 20th Australasian Document Computing Symposium (ADCS 2015).*
+
+ - DOI: [10.1145/2838931.2838934](https://doi.org/10.1145/2838931.2838934)
+ - PDF: [adcs2015.pdf](https://people.eng.unimelb.edu.au/tbaldwin/pubs/adcs2015.pdf)
+
+ ```bibtex
+ @inproceedings{hoogeveen2015cqadupstack,
+   author    = {Hoogeveen, Doris and Verspoor, Karin M. and Baldwin, Timothy},
+   title     = {{CQADupStack}: A Benchmark Data Set for Community Question-Answering Research},
+   booktitle = {Proceedings of the 20th Australasian Document Computing Symposium},
+   year      = {2015},
+   pages     = {3:1--3:8},
+   publisher = {ACM},
+   doi       = {10.1145/2838931.2838934}
+ }
+ ```
+
+ ### Related tooling
+
+ - **ir-datasets** catalog entry: [`beir/cqadupstack/android`](https://ir-datasets.com/beir.html) (query/doc counts and export helpers).
+
+ ---

+ *Dataset card maintained for the `orgrctera/beir_cqadupstack_android` Hub repository.*