language:
- en
size_categories:
- 100K<n<1M
---
# Wiki Sim

## Overview

This semi-synthetic dataset is derived from `wikimedia/wikipedia`.
Each row contains 1-3 reference sentences extracted from the original dataset.

For each reference sentence, we use an optimized DSPy program to generate 4 similar sentences (see the sketch after this list):
- *Synonym* (Replace words with synonyms to maintain the same meaning.)
- *Paraphrase* (Rephrase the sentence using a different structure while keeping the same idea.)
- *Conceptual Overlap* (Express a related concept differently without changing the core meaning.)
- *Contextual Meaning* (Modify the sentence to derive meaning from context, preserving the original intent.)
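
The optimized program and prompts are not reproduced in this card. As a rough illustration only, a DSPy signature for this kind of generation might look like the sketch below (the field names mirror the dataset columns; everything else, including the model choice, is an assumption):

```python
import dspy

class SimilarSentences(dspy.Signature):
    """Generate four meaning-preserving variants of the reference sentence(s)."""

    reference = dspy.InputField(desc="1-3 reference sentences from a Wikipedia passage")
    synonym = dspy.OutputField(desc="same meaning, with words replaced by synonyms")
    paraphrase = dspy.OutputField(desc="same idea, different sentence structure")
    conceptual_overlap = dspy.OutputField(desc="related concept, core meaning unchanged")
    contextual_meaning = dspy.OutputField(desc="meaning carried by context, original intent preserved")

# dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # illustrative model choice, not the one used here
generate = dspy.Predict(SimilarSentences)
# prediction = generate(reference="The Eiffel Tower was completed in 1889.")
# prediction.synonym, prediction.paraphrase, prediction.conceptual_overlap, prediction.contextual_meaning
```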
Additionally, we score each result using `cross-encoder/stsb-roberta-large`.
We use this to mine hard negatives from different contiguous sentences in the original passage, retaining the most similar result.
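
A minimal sketch of this scoring step with the `sentence-transformers` `CrossEncoder` API (the example sentences and mining loop are illustrative, not the actual pipeline code):

```python
from sentence_transformers import CrossEncoder

# cross-encoder/stsb-roberta-large returns an STS-style similarity score for each sentence pair
scorer = CrossEncoder("cross-encoder/stsb-roberta-large")

reference = "The Eiffel Tower was completed in 1889."
other_sentences = [  # contiguous sentences from the same passage, candidates for the hard negative
    "It was built for the 1889 Exposition Universelle.",
    "Gustave Eiffel's company designed and constructed the tower.",
]

scores = scorer.predict([(reference, s) for s in other_sentences])
# Retain the most similar non-reference sentence as the hard negative
negative, negative_score = max(zip(other_sentences, scores), key=lambda pair: pair[1])
```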

## Purpose

We aim to expand training for small models like [WordLlama](https://github.com/dleemiller/WordLlama)
and general embedding models, targeting benchmarks like stsb and similarity tasks that differ from NLI or QnA.

## Dataset

The columns of the dataset include:

- `synonym`
- `paraphrase`
- `conceptual_overlap`
- `contextual_meaning`
- `reference`
- `negative`
- `negative_score`
- `model_id`
- `cross_encoder`
- `synonym_score`
- `paraphrase_score`
- `conceptual_overlap_score`
- `contextual_meaning_score`

where `reference` and `negative` are derived from `wikimedia/wikipedia`,
and the similarity text columns are synthetically derived.
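
For example, the columns can be inspected with the `datasets` library (the repository id and split name below are assumptions for illustration):

```python
from datasets import load_dataset

# Assumed repository id and split; substitute the actual values for this dataset.
ds = load_dataset("dleemiller/wiki-sim", split="train")
print(ds.column_names)

row = ds[0]
print(row["reference"], "->", row["synonym"], f"(score: {row['synonym_score']})")
```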
We filter all rows where negative scores exceed any of the similarity scores.
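
Concretely, a retained row satisfies the condition sketched below (an illustrative reconstruction of the stated filter, not the exact pipeline code):

```python
SIMILARITY_SCORE_COLUMNS = [
    "synonym_score",
    "paraphrase_score",
    "conceptual_overlap_score",
    "contextual_meaning_score",
]

def is_kept(row: dict) -> bool:
    # A row is kept only if the hard negative scores no higher than
    # every one of the generated similar sentences.
    return all(row["negative_score"] <= row[col] for col in SIMILARITY_SCORE_COLUMNS)
```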

## Results