---
language:
- ara
- dan
- deu
- eng
- fas
- fra
- hin
- ind
- ita
- jpn
- kor
- nld
- pol
- por
- rus
- spa
- swe
- tur
- vie
- zho
multilingual: true
tags:
- dense-retrieval
- hard-negatives
- knowledge-distillation
- webfaq
license: cc-by-4.0
task_categories:
- sentence-similarity
- text-retrieval
size_categories:
- 1M<n<10M
---

# WebFAQ 2.0: Multilingual Hard Negatives
This dataset contains mined hard negatives derived from the WebFAQ 2.0 corpus. It includes approximately 1.3 million samples across 20 languages.
The dataset is designed to support robust training of dense retrieval models, specifically enabling:
- Contrastive Learning: Using strict hard negatives to improve discrimination.
- Knowledge Distillation: Using the provided cross-encoder scores to train with soft labels (e.g., MarginMSE).
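For reference, the MarginMSE objective trains a student scorer $s_\theta$ to match the teacher's score margin between the positive $d^+$ and each negative $d^-$ (this is the standard formulation, not something specific to this dataset):

$$\mathcal{L}_{\mathrm{MarginMSE}} = \mathrm{MSE}\big(\, s_\theta(q, d^+) - s_\theta(q, d^-),\;\; t(q, d^+) - t(q, d^-) \,\big)$$

where $t$ denotes the teacher scores, i.e. the BGE-M3 scores shipped with this dataset.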
## Dataset Creation & Mining Process
To ensure high-quality training signals, we employed a two-stage mining pipeline. The full mining script is available in this repository: `mining_hardnegatives_bge3.py`.
### 1. Lexical Retrieval (Recall)
We first retrieved the top-200 candidate answers for each query using BM25 (via Pyserini).
- Goal: Identify candidates with high lexical overlap (shared keywords) that are likely to be "hard" for a dense retriever to distinguish.
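A minimal sketch of this stage, assuming a prebuilt Lucene index of the answer corpus (the index path and example query below are placeholders, not part of this dataset):

```python
from pyserini.search.lucene import LuceneSearcher

# Hypothetical index location; build it with Pyserini's indexer beforehand.
# For non-English subsets, set the matching analyzer, e.g. searcher.set_language("de").
searcher = LuceneSearcher("indexes/webfaq-answers")

def retrieve_candidates(query: str, k: int = 200) -> list[str]:
    """Return the docids of the top-k BM25 candidates for one query."""
    hits = searcher.search(query, k=k)
    return [hit.docid for hit in hits]

candidates = retrieve_candidates("how do i reset my password?")
```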
### 2. Semantic Reranking (Precision)
We reranked the top-200 candidates using the cross-encoder model BAAI/bge-m3.
- Goal: Assess the true semantic relevance of each candidate.
- Filtering: We applied a rigorous filtering strategy to remove false negatives (candidates scored so high they are likely relevant) and easy negatives (candidates scored so low they are trivial to distinguish).
- Scoring: We retained the BGE-M3 relevance scores for every negative to enable knowledge distillation (MarginMSE).
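A sketch of the scoring and filtering logic, assuming the FlagEmbedding package's `BGEM3FlagModel` and illustrative thresholds (the actual cut-offs used for this dataset are defined in the mining script):

```python
from FlagEmbedding import BGEM3FlagModel

model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)

# Illustrative thresholds -- not the values used to build this dataset.
FALSE_NEG_THRESHOLD = 0.8  # above: likely relevant, drop as false negative
EASY_NEG_THRESHOLD = 0.3   # below: trivially irrelevant, drop as easy negative

def score_and_filter(query: str, candidates: list[str]) -> tuple[list[str], list[float]]:
    """Score each (query, candidate) pair and keep only true hard negatives."""
    pairs = [[query, cand] for cand in candidates]
    # compute_score returns several score types; we use the fused score here.
    scores = model.compute_score(pairs)["colbert+sparse+dense"]
    kept = [
        (cand, s)
        for cand, s in zip(candidates, scores)
        if EASY_NEG_THRESHOLD <= s <= FALSE_NEG_THRESHOLD
    ]
    return [cand for cand, _ in kept], [s for _, s in kept]
```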
## Code & Reproduction
You can reproduce the mining process using the provided script:
```bash
python mining_hardnegatives_bge3.py \
  --repo-id "PaDaS-Lab/webfaq-retrieval" \
  --output-dir "./data/distilled_data" \
  --k-negatives 200
```
## Dataset Structure
The data is stored in a **grouped format** (JSONL), where each line represents a single query paired with its positive answer and a list of mined hard negatives.
Each sample contains:
| Field | Type | Description |
| :--- | :--- | :--- |
| `query` | String | The user question. |
| `positive` | String | The ground-truth correct answer. |
| `positive_score` | Float | The **BGE-M3** relevance score for the positive answer. |
| `negatives` | List[String] | A list of mined hard negatives (non-relevant but similar). |
| `negative_scores` | List[Float] | The **BGE-M3** relevance scores corresponding to each negative. |
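An illustrative record (all values are invented for demonstration; see the field table above):

```json
{
  "query": "how do i reset my password?",
  "positive": "Click 'Forgot password' on the login page and follow the emailed link.",
  "positive_score": 0.97,
  "negatives": [
    "You can change your username in the account settings.",
    "Passwords must be at least eight characters long."
  ],
  "negative_scores": [0.41, 0.37]
}
```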
**Note:** This format is optimized for:
* **Contrastive Learning:** You can directly sample one positive and $N$ negatives per query.
* **MarginMSE:** You have all the teacher scores (positive and negative) required to compute the margin loss, as sketched below.
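A minimal sketch of computing the MarginMSE loss from one record, assuming a bi-encoder student whose relevance score is the dot product of its embeddings (embedding computation omitted; `torch` is the only dependency):

```python
import torch
import torch.nn.functional as F

def margin_mse_loss(
    q_emb: torch.Tensor,           # (d,)   student query embedding
    pos_emb: torch.Tensor,         # (d,)   student embedding of `positive`
    neg_embs: torch.Tensor,        # (n, d) student embeddings of `negatives`
    positive_score: float,         # teacher score for the positive
    negative_scores: list[float],  # teacher scores for the negatives
) -> torch.Tensor:
    """MSE between student and teacher score margins for one record."""
    student_pos = q_emb @ pos_emb                # scalar student score
    student_negs = neg_embs @ q_emb              # (n,) student scores
    student_margin = student_pos - student_negs  # (n,)
    teacher_margin = positive_score - torch.tensor(negative_scores)
    return F.mse_loss(student_margin, teacher_margin)
```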
## Languages & Distribution
The dataset covers **20 languages** with the following sample counts:
| ISO Code | Language | Samples |
| :--- | :--- | :--- |
| `ara` | Arabic | 32,000 |
| `dan` | Danish | 32,000 |
| `deu` | German | 96,000 |
| `eng` | English | 128,000 |
| `fas` | Persian | 64,000 |
| `fra` | French | 96,000 |
| `hin` | Hindi | 32,000 |
| `ind` | Indonesian | 32,000 |
| `ita` | Italian | 96,000 |
| `jpn` | Japanese | 96,000 |
| `kor` | Korean | 32,000 |
| `nld` | Dutch | 96,000 |
| `pol` | Polish | 64,000 |
| `por` | Portuguese | 64,000 |
| `rus` | Russian | 96,000 |
| `spa` | Spanish | 96,000 |
| `swe` | Swedish | 32,000 |
| `tur` | Turkish | 32,000 |
| `vie` | Vietnamese | 32,000 |
| `zho` | Chinese | 32,000 |
| **Total** | **All** | **~1,280,000** |
## Citation