---
license: gfdl
task_categories:
- sentence-similarity
language:
- en
size_categories:
- 100K<n<1M
configs:
  - config_name: raw
    data_files:
      - split: train
        path: raw/train-*
      - split: test
        path: raw/test-*
  - config_name: pair-score
    data_files:
      - split: train
        path: pair-score/train-*
      - split: test
        path: pair-score/test-*
  - config_name: pair-score-hard
    data_files:
      - split: train
        path: pair-score-hard/train-*
      - split: test
        path: pair-score-hard/test-*
  - config_name: triplet
    data_files:
      - split: train
        path: triplet/train-*
      - split: test
        path: triplet/test-*
  - config_name: triplet-hard
    data_files:
      - split: train
        path: triplet-hard/train-*
      - split: test
        path: triplet-hard/test-*
---

# Wiki Sim

## Overview
This semi-synthetic dataset is derived from `wikimedia/wikipedia`.
Each row contains 1-3 reference sentences extracted from the original dataset.

For each reference sentence, we use an optimized DSPy program to generate 4 similar sentences:
- *Synonym* (Replace words with synonyms to maintain the same meaning.)
- *Paraphrase* (Rephrase the sentence using a different structure while keeping the same idea.)
- *Conceptual Overlap* (Express a related concept differently without changing the core meaning.)
- *Contextual Meaning* (Modify the sentence to derive meaning from context, preserving the original intent.)
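
The optimized program itself is not reproduced here, but a minimal DSPy sketch along these lines could drive the generation. The signature, field names, and LM choice below are illustrative assumptions, not the actual program used to build this dataset:

```python
import dspy

# Illustrative sketch only; the signature and LM below are assumptions.
# dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # any supported LM backend

class SimilarSentences(dspy.Signature):
    """Generate four variants of the reference sentence(s)."""
    reference = dspy.InputField(desc="1-3 reference sentences from Wikipedia")
    synonym = dspy.OutputField(desc="replace words with synonyms, keep the same meaning")
    paraphrase = dspy.OutputField(desc="rephrase with a different structure, same idea")
    conceptual_overlap = dspy.OutputField(desc="express a related concept without changing the core meaning")
    contextual_meaning = dspy.OutputField(desc="rely on context for meaning while preserving the original intent")

generate_similar = dspy.Predict(SimilarSentences)
# result = generate_similar(reference="...")  # result.synonym, result.paraphrase, ...
```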

Additionally, we score each result using `cross-encoder/stsb-roberta-large`.
The same scorer is used to mine hard negatives from contiguous sentences elsewhere in the original passage, retaining the most similar candidate.
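
A minimal sketch of this scoring step, assuming the standard `sentence-transformers` CrossEncoder API (the example sentences are made up):

```python
from sentence_transformers import CrossEncoder

scorer = CrossEncoder("cross-encoder/stsb-roberta-large")

reference = "The cathedral was completed in 1880 after six centuries of construction."
candidates = [
    "The cathedral was finished in 1880, following six hundred years of building work.",  # generated variant
    "Its twin spires still dominate the city's skyline.",  # contiguous sentence: hard-negative candidate
]

# Higher score = more similar; contiguous sentences with the highest score
# are retained as hard negatives.
scores = scorer.predict([(reference, c) for c in candidates])
```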

## Purpose

We aim to expand training data for small models like [WordLlama](https://github.com/dleemiller/WordLlama)
and general embedding models, targeting benchmarks like STS-B and similarity tasks that differ from NLI or QnA.

## Dataset

The columns of the dataset include:

- `synonym`
- `paraphrase`
- `conceptual_overlap`
- `contextual_meaning`
- `reference`
- `negative`
- `negative_score`
- `model_id`
- `cross_encoder`
- `synonym_score`
- `paraphrase_score`
- `conceptual_overlap_score`
- `contextual_meaning_score`

`reference` and `negative` are drawn from `wikimedia/wikipedia`,
while the similarity text columns are synthetically generated.

We filter out all rows where the negative score exceeds any of the similarity scores.
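
In other words, a row is kept only if its negative score is at or below every similarity score. A small illustration of that rule (the published splits are already filtered):

```python
def keep_row(row: dict) -> bool:
    """Keep a row only if the hard negative scores no higher than every similarity column."""
    similarity_scores = [
        row["synonym_score"],
        row["paraphrase_score"],
        row["conceptual_overlap_score"],
        row["contextual_meaning_score"],
    ]
    return row["negative_score"] <= min(similarity_scores)
```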

## Results

The four instruction types produce results with varying similarity scores:
`synonym` is the most similar and `contextual_meaning` the least.

<img src="cdf_plot_scores.png" alt="CDF Plot" width="600"/>


## Subsets

* `pair-score` - random choice weighted to a target of 0.9
* `pair-score-hard` - random choice weighted to a target of 0.85
* `triplet` - random choice weighted to a target of 0.9
* `triplet-hard` - random choice weighted to a target of 0.85
* `raw` - full dataset
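
Each subset can be loaded by config name with `datasets`. The repository id below is an assumption; substitute this card's actual path if it differs:

```python
from datasets import load_dataset

# Repository id assumed; replace with this dataset's actual path if different.
raw = load_dataset("dleemiller/wiki-sim", "raw", split="train")
triplets = load_dataset("dleemiller/wiki-sim", "triplet-hard", split="test")

print(raw.column_names)
print(triplets[0])
```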