---
license: cc
language:
- en
---
# Dataset Card for PS-Eval Dataset

## Dataset Summary
The **PS-Eval Dataset** is a suite of polysemous and monosemous contexts extracted and filtered from the WiC dataset. It aims to evaluate the ability of Sparse Autoencoders (SAEs) to disentangle polysemantic activations into monosemantic features within large language models (LLMs). The dataset contains **1,112 samples** balanced between two classes:

- **Poly-contexts**: Target words with different meanings across two contexts (Label: 0).
- **Mono-contexts**: Target words with the same meaning across two contexts (Label: 1).

Each sample includes two sentences (contexts) containing the target word, along with a label indicating whether the target word's meaning is the same or different.

This dataset is particularly useful for evaluating methods and models that address polysemy in LLMs, such as feature-based interpretability techniques.

## Supported Tasks and Leaderboards
- **Polysemy Detection**: Classify whether the target word has the same or different meaning across contexts.
- **Feature Interpretability**: Evaluate whether Sparse Autoencoders (SAEs) can map polysemantic activations into monosemantic features.

This dataset can also serve as a benchmark for **context-sensitive word representations**.

## Languages
The dataset is in **English**.
## Dataset Structure
### Data Instances
Each instance in the dataset is stored in JSON format with the following structure:

```json
{
  "target_word": "space",
  "context1": "The astronauts walked in outer \"space\" without a tether.",
  "context2": "The \"space\" between his teeth.",
  "label": 0
}
```
### Data Fields
- **`target_word`** (*string*): The polysemous or monosemous word shared across the two contexts.
- **`context1`** (*string*): The first sentence containing the target word.
- **`context2`** (*string*): The second sentence containing the target word.
- **`label`** (*integer*): Binary label where:
  - `0` = Different meanings (**poly-contexts**).
  - `1` = Same meaning (**mono-contexts**).

### Data Splits
The dataset is provided as a single split with **1,112 samples**:
- **Poly-contexts (Label 0)**: 556 samples
- **Mono-contexts (Label 1)**: 556 samples
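Because every record follows the schema above, the class balance can be verified with a few lines of standard-library Python. This is a minimal sketch: the small in-memory list (with placeholder contexts) stands in for the loaded dataset.

```python
from collections import Counter

# Toy records standing in for the loaded dataset;
# the field names match the schema documented above.
records = [
    {"target_word": "space", "context1": "...", "context2": "...", "label": 0},
    {"target_word": "bank",  "context1": "...", "context2": "...", "label": 0},
    {"target_word": "run",   "context1": "...", "context2": "...", "label": 1},
]

# Count how many poly-contexts (0) and mono-contexts (1) there are.
label_counts = Counter(r["label"] for r in records)
print(label_counts)  # → Counter({0: 2, 1: 1})
```

Run over the full dataset, the same count should return 556 for each label.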
## Dataset Creation
### Source Data
The PS-Eval dataset is built on the **WiC** (Word-in-Context) dataset, a benchmark of polysemous words in context introduced by Pilehvar and Camacho-Collados (2019).

### Filtering Process
We selected only those WiC instances whose target word is tokenized as a **single token** by GPT-2-small. This ensures consistency when analyzing activations in Sparse Autoencoders.
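The single-token constraint amounts to a filter over WiC instances. The sketch below takes the tokenizer as a callable so the logic is self-contained; the actual filter uses GPT-2-small's tokenizer, while the demonstration substitutes plain whitespace splitting as a stand-in.

```python
def is_single_token(word, tokenize):
    """True if `word` maps to exactly one token under `tokenize`.

    In PS-Eval, `tokenize` is GPT-2-small's tokenizer; here any
    callable returning a list of tokens works.
    """
    return len(tokenize(word)) == 1

def filter_single_token(instances, tokenize):
    """Keep only instances whose target word is a single token."""
    return [ex for ex in instances if is_single_token(ex["target_word"], tokenize)]

# Demonstration with whitespace splitting standing in for GPT-2's BPE:
toy = [{"target_word": "space"}, {"target_word": "outer space"}]
kept = filter_single_token(toy, str.split)
print([ex["target_word"] for ex in kept])  # → ['space']
```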
### Annotations
Labels are derived from the WiC dataset:
- **Different meanings**: Target words in poly-contexts (Label 0).
- **Same meaning**: Target words in mono-contexts (Label 1).
## Dataset Usage
### Intended Use
This dataset is designed for evaluating models and methods that:
- Analyze polysemantic and monosemantic activations in LLMs.
- Detect context-sensitive meanings of polysemous words.
- Test Sparse Autoencoders (SAEs) for interpretability.

### Example Usage
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("gouki510/wic_eval_data")

# Inspect a sample
print(dataset["train"][0])
```
### Metrics
The dataset supports evaluation metrics such as:
- **Accuracy**
- **Precision**
- **Recall**
- **F1 Score**
- **Specificity**

These metrics are particularly important for evaluating polysemy detection models and Sparse Autoencoders.
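Since the labels are binary, all five metrics fall out of a single confusion matrix. A minimal sketch, treating label 1 (mono-context) as the positive class (an assumption; the card does not fix a positive class):

```python
from collections import Counter

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, F1, and specificity for binary labels,
    with label 1 (mono-context) taken as the positive class."""
    counts = Counter(zip(y_true, y_pred))
    tp = counts[(1, 1)]  # mono predicted mono
    tn = counts[(0, 0)]  # poly predicted poly
    fp = counts[(0, 1)]  # poly predicted mono
    fn = counts[(1, 0)]  # mono predicted poly
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall) if precision + recall else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,  # true-negative rate
    }

# Example: six gold labels vs. predictions
metrics = binary_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
print(metrics)
```

`sklearn.metrics` offers the same quantities (specificity as recall of the negative class); the explicit version above makes the definitions unambiguous.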
## Dataset Curators
This dataset was curated by **Gouki Minegishi** as part of research on polysemantic activation analysis in Sparse Autoencoders and interpretability for large language models.

## Citation
If you use the PS-Eval Dataset in your work, please cite:

```
@inproceedings{your2025ps_eval,
  title={hoge},
  author={hoge},
  booktitle={hoge},
  year={2025}
}
```