Commit 9edeb25 (parent: d0dad77) by asahi417

fix readme
README.md CHANGED
@@ -19,17 +19,24 @@ pretty_name: Analogy Question
 ### Dataset Summary
 This dataset contains 5 different word analogy questions used in [Analogy Language Model](https://aclanthology.org/2021.acl-long.280/).
 
-| name | Size (valid/test) | Num of choice | Num of relation group | Original Reference |
-|---------|------------------:|--------------:|----------------------:|:--------------------------------------------------------------------------:|
+- original analogy questions
+
+| name | Size (valid/test) | Num of choice | Num of relation group | Original Reference |
+|-----------|------------------:|--------------:|----------------------:|:--------------------------------------------------------------------------:|
 | `sat_full`| -/374 | 5 | 2 | [Turney (2005)](https://arxiv.org/pdf/cs/0508053.pdf) |
 | `sat` | 37/337 | 5 | 2 | [Turney (2005)](https://arxiv.org/pdf/cs/0508053.pdf) |
 | `u2` | 24/228 | 5,4,3 | 9 | [EnglishForEveryone](https://englishforeveryone.org/Topics/Analogies.html) |
 | `u4` | 48/432 | 5,4,3 | 5 | [EnglishForEveryone](https://englishforeveryone.org/Topics/Analogies.html) |
 | `google` | 50/500 | 4 | 2 | [Mikolov et al., (2013)](https://www.aclweb.org/anthology/N13-1090.pdf) |
 | `bats` | 199/1799 | 4 | 3 | [Gladkova et al., (2016)](https://www.aclweb.org/anthology/N18-2017.pdf) |
-| `semeval2012_relational_similarity` | 78/- | 3 | - | [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity) |
-| `t_rex_relational_similarity` | 467/467 | 6 | - | [relbert/t_rex_relational_similarity](https://huggingface.co/datasets/relbert/t_rex_relational_similarity) |
-| `conceptnet_relational_similarity` | 546/570 | 6 | - | [relbert/conceptnet_relational_similarity](https://huggingface.co/datasets/relbert/conceptnet_relational_similarity) |
+
+- extra analogy questions
+
+| name | Size (valid/test) | Num of choice (valid/test) | Num of relation group (valid/test) | Original Reference |
+|:------------------------------------|:--------------------|:-----------------------------|:-------------------------------------|:-----------------------------------------------------------------------------------------------------------------------|
+| `semeval2012_relational_similarity` | 79/- | 3/- | 79/- | [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity) |
+| `t_rex_relational_similarity` | 496/183 | 74/48 | 60/19 | [relbert/t_rex_relational_similarity](https://huggingface.co/datasets/relbert/t_rex_relational_similarity) |
+| `conceptnet_relational_similarity` | 1112/1192 | 19/17 | 18/16 | [relbert/conceptnet_relational_similarity](https://huggingface.co/datasets/relbert/conceptnet_relational_similarity) |
 
 ## Dataset Structure
 ### Data Instances
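
For reference, the configurations listed above can be loaded with the `datasets` library. A minimal sketch is given below; the repository id is an assumption (this diff only names the configurations), and the field names follow the generation script in this commit:

    from datasets import load_dataset

    # "t_rex_relational_similarity" is one of the configuration names in the table above;
    # the repository id "relbert/analogy_questions" is an assumption, not stated in this diff.
    data = load_dataset("relbert/analogy_questions", "t_rex_relational_similarity", split="test")

    # Each row is one multiple-choice analogy question:
    #   stem   - the query word pair
    #   choice - candidate word pairs
    #   answer - index of the correct pair in `choice`
    #   prefix - the relation group the instance belongs to
    print(data[0])
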
add_new_analogy.py CHANGED
@@ -1,94 +1,113 @@
 import json
 import os
-from itertools import combinations, chain
+from itertools import combinations
 from random import shuffle, seed
+
+import pandas as pd
 from datasets import load_dataset
 
-# # create analogy from `relbert/semeval2012_relational_similarity`
-# data = load_dataset("relbert/semeval2012_relational_similarity", split="validation")
-# analogy_data = [{
-#     "stem": i['positives'][0], "choice": i["negatives"] + [i['positives'][1]], "answer": 2, "prefix": i["relation_type"]
-# } for i in data]
-# os.makedirs("dataset/semeval2012_relational_similarity", exist_ok=True)
-# with open("dataset/semeval2012_relational_similarity/valid.jsonl", "w") as f:
-#     f.write("\n".join([json.dumps(i) for i in analogy_data]))
-
-
-# # create analogy from `relbert/t_rex_relational_similarity`
-# data = load_dataset("relbert/t_rex_relational_similarity", "filter_unified.min_entity_1_max_predicate_100", split="test")
-# df = data.to_pandas()
-# df['negatives'] = [list(chain(
-#     *[[y.tolist() for y in x.tolist()] for x in df[df.relation_type != i]['positives'].tolist()] +
-#     [[y.tolist() for y in x.tolist()] for x in df[df.relation_type == i]['negatives'].tolist()])) for i in
-#     df['relation_type']]
-# analogy_data = []
-# for _, i in df.iterrows():
-#     if len(i['positives']) < 2:
-#         continue
-#     for m, (q, c) in enumerate(combinations(i['positives'], 2)):
-#         if m > 5:
-#             break
-#         negative = i['negatives']
-#         for n in range(6):
-#             seed(n)
-#             shuffle(negative)
-#             analogy_data.append({
-#                 "stem": q.tolist(), "choice": [c.tolist()] + negative[:5], "answer": 0, "prefix": i["relation_type"]
-#             })
-# os.makedirs("dataset/t_rex_relational_similarity", exist_ok=True)
-# with open("dataset/t_rex_relational_similarity/test.jsonl", "w") as f:
-#     f.write("\n".join([json.dumps(i) for i in analogy_data]))
-#
-# data = load_dataset("relbert/t_rex_relational_similarity", "filter_unified.min_entity_4_max_predicate_100", split="validation")
-# df = data.to_pandas()
-# df['negatives'] = [list(chain(
-#     *[[y.tolist() for y in x.tolist()] for x in df[df.relation_type != i]['positives'].tolist()] +
-#     [[y.tolist() for y in x.tolist()] for x in df[df.relation_type == i]['negatives'].tolist()])) for i in
-#     df['relation_type']]
-# analogy_data = []
-# for _, i in df.iterrows():
-#     if len(i['positives']) < 5:
-#         continue
-#     for m, (q, c) in enumerate(combinations(i['positives'], 2)):
-#         if m > 5:
-#             break
-#         negative = i['negatives']
-#         for n in range(3):
-#             seed(n)
-#             shuffle(negative)
-#             analogy_data.append({
-#                 "stem": q.tolist(), "choice": [c.tolist()] + negative[:5], "answer": 0, "prefix": i["relation_type"]
-#             })
-# os.makedirs("dataset/t_rex_relational_similarity", exist_ok=True)
-# with open("dataset/t_rex_relational_similarity/valid.jsonl", "w") as f:
-#     f.write("\n".join([json.dumps(i) for i in analogy_data]))
-#
-# # create analogy from `relbert/conceptnet_relational_similarity`
-# for s in ['test', 'validation']:
-#     data = load_dataset("relbert/conceptnet_relational_similarity", split=s)
-#     df = data.to_pandas()
-#     df['negatives'] = [list(chain(
-#         *[[y.tolist() for y in x.tolist()] for x in df[df.relation_type != i]['positives'].tolist()] +
-#         [[y.tolist() for y in x.tolist()] for x in df[df.relation_type == i]['negatives'].tolist()])) for i in
-#         df['relation_type']]
-#
-#     analogy_data = []
-#
-#     for _, i in df.iterrows():
-#
-#         if len(i['positives']) < 2:
-#             continue
-#         for m, (q, c) in enumerate(combinations(i['positives'], 2)):
-#             if m > 5:
-#                 break
-#             negative = i['negatives']
-#             for n in range(6):
-#                 seed(n)
-#                 shuffle(negative)
-#                 analogy_data.append({
-#                     "stem": q.tolist(), "choice": [c.tolist()] + negative[:5], "answer": 0, "prefix": i["relation_type"]
-#                 })
-#     print(len(analogy_data))
-#     os.makedirs("dataset/conceptnet_relational_similarity", exist_ok=True)
-#     with open(f"dataset/conceptnet_relational_similarity/{s if s == 'test' else 'valid'}.jsonl", "w") as f:
-#         f.write("\n".join([json.dumps(i) for i in analogy_data]))
+
+def get_stats(filename):
+    with open(filename) as f:
+        _data = [json.loads(i) for i in f.read().splitlines()]
+    return len(_data), list(set([len(i['choice']) for i in _data])), len(list(set([i['prefix'] for i in _data])))
+
+
+def lexical_overlap(word_a, word_b):
+    for a in word_a.split(" "):
+        for b in word_b.split(" "):
+            if a.lower() == b.lower():
+                return True
+    return False
+
+
+def create_analogy(_data, output_path, negative_per_relation, instance_per_relation=100):
+    # if os.path.exists(output_path):
+    #     return
+    df = _data.to_pandas()
+    analogy_data = []
+    for _, i in df.iterrows():
+        target = [(q.tolist(), c.tolist()) for q, c in combinations(i['positives'], 2)
+                  if not any(lexical_overlap(c[0], y) or lexical_overlap(c[1], y) for y in q)]
+        if len(target) == 0:
+            continue
+        if len(target) > instance_per_relation:
+            seed(42)
+            shuffle(target)
+            target = target[:instance_per_relation]
+        for m, (q, c) in enumerate(target):
+            negative = []
+            for r in df['relation_type']:
+                if r == i['relation_type']:
+                    continue
+                target_per_relation = [y.tolist() for y in df[df['relation_type'] == r]['positives'].values[0]]
+                shuffle(target_per_relation)
+                negative += target_per_relation[:negative_per_relation]
+            analogy_data.append({
+                "stem": q,
+                "choice": [c, c[::-1]] + negative,
+                "answer": 0,
+                "prefix": i["relation_type"]
+            })
+    os.makedirs(os.path.dirname(output_path), exist_ok=True)
+    with open(output_path, "w") as f:
+        f.write("\n".join([json.dumps(i) for i in analogy_data]))
+
+stat = []
+###################################################################
+# create analogy from `relbert/semeval2012_relational_similarity` #
+###################################################################
+if not os.path.exists("dataset/semeval2012_relational_similarity/valid.jsonl"):
+    data = load_dataset("relbert/semeval2012_relational_similarity", split="validation")
+    analogy_data = [{
+        "stem": i['positives'][0], "choice": i["negatives"] + [i['positives'][1]], "answer": 2, "prefix": i["relation_type"]
+    } for i in data]
+    os.makedirs("dataset/semeval2012_relational_similarity", exist_ok=True)
+    with open("dataset/semeval2012_relational_similarity/valid.jsonl", "w") as f:
+        f.write("\n".join([json.dumps(i) for i in analogy_data]))
+
+v_size, v_num_choice, v_relation_type = get_stats("dataset/semeval2012_relational_similarity/valid.jsonl")
+stat.append({
+    "name": "`semeval2012_relational_similarity`",
+    "Size (valid/test)": f"{v_size}/-",
+    "Num of choice (valid/test)": f"{','.join([str(n) for n in v_num_choice])}/-",
+    "Num of relation group (valid/test)": f"{v_relation_type}/-",
+    "Original Reference": "[relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity)"
+})
+
+
+#############################################################
+# create analogy from `relbert/t_rex_relational_similarity` #
+#############################################################
+data = load_dataset("relbert/t_rex_relational_similarity", "filter_unified.min_entity_1_max_predicate_100", split="test")
+create_analogy(data, "dataset/t_rex_relational_similarity/test.jsonl", negative_per_relation=2)
+data = load_dataset("relbert/t_rex_relational_similarity", "filter_unified.min_entity_4_max_predicate_100", split="validation")
+create_analogy(data, "dataset/t_rex_relational_similarity/valid.jsonl", negative_per_relation=1)
+
+t_size, t_num_choice, t_relation_type = get_stats("dataset/t_rex_relational_similarity/test.jsonl")
+v_size, v_num_choice, v_relation_type = get_stats("dataset/t_rex_relational_similarity/valid.jsonl")
+stat.append({
+    "name": "`t_rex_relational_similarity`",
+    "Size (valid/test)": f"{v_size}/{t_size}",
+    "Num of choice (valid/test)": f"{','.join([str(n) for n in v_num_choice])}/{','.join([str(n) for n in t_num_choice])}",
+    "Num of relation group (valid/test)": f"{v_relation_type}/{t_relation_type}",
+    "Original Reference": "[relbert/t_rex_relational_similarity](https://huggingface.co/datasets/relbert/t_rex_relational_similarity)"
+})
+
+##################################################################
+# create analogy from `relbert/conceptnet_relational_similarity` #
+##################################################################
+data = load_dataset("relbert/conceptnet_relational_similarity", split="test")
+create_analogy(data, "dataset/conceptnet_relational_similarity/test.jsonl", negative_per_relation=1)
+data = load_dataset("relbert/conceptnet_relational_similarity", split="validation")
+create_analogy(data, "dataset/conceptnet_relational_similarity/valid.jsonl", negative_per_relation=1)
+t_size, t_num_choice, t_relation_type = get_stats("dataset/conceptnet_relational_similarity/test.jsonl")
+v_size, v_num_choice, v_relation_type = get_stats("dataset/conceptnet_relational_similarity/valid.jsonl")
+stat.append({
+    "name": "`conceptnet_relational_similarity`",
+    "Size (valid/test)": f"{v_size}/{t_size}",
+    "Num of choice (valid/test)": f"{','.join([str(n) for n in v_num_choice])}/{','.join([str(n) for n in t_num_choice])}",
+    "Num of relation group (valid/test)": f"{v_relation_type}/{t_relation_type}",
+    "Original Reference": "[relbert/conceptnet_relational_similarity](https://huggingface.co/datasets/relbert/conceptnet_relational_similarity)"
+})
+print(pd.DataFrame(stat).to_markdown(index=False))
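
Note on the construction in `create_analogy` above: for each retained pair of positives, the correct choice is the pair `c`, its reversal `c[::-1]` is added as a hard negative, and `negative_per_relation` pairs are sampled from every other relation group in the source split. Each question therefore has roughly 2 + negative_per_relation * (number of other relation types) choices, which is where the "Num of choice (valid/test)" column in the README table comes from. A toy illustration of the resulting record (the word pairs and relation name are made up, not taken from the datasets):

    # Made-up example of the dictionary create_analogy appends for one question.
    question = {
        "stem": ["paris", "france"],    # query pair
        "choice": [
            ["tokyo", "japan"],         # correct pair (answer index 0)
            ["japan", "tokyo"],         # its reversal, kept as a hard negative
            ["mozart", "composer"],     # negatives drawn from other relation groups
            ["oak", "tree"],
        ],
        "answer": 0,
        "prefix": "capital of",         # relation_type of the source relation
    }
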
dataset/conceptnet_relational_similarity/test.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
dataset/conceptnet_relational_similarity/valid.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
dataset/t_rex_relational_similarity/test.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
dataset/t_rex_relational_similarity/valid.jsonl CHANGED
The diff for this file is too large to render. See raw diff
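
Each of the regenerated `*.jsonl` files above stores one JSON object per line with the keys written by `add_new_analogy.py` (`stem`, `choice`, `answer`, `prefix`). A quick way to inspect a record, assuming the script has been run from the repository root so the files exist locally:

    import json

    # Read the first analogy question of a regenerated split.
    with open("dataset/t_rex_relational_similarity/test.jsonl") as f:
        record = json.loads(f.readline())

    print("stem   :", record["stem"])
    print("gold   :", record["choice"][record["answer"]])
    print("choices:", len(record["choice"]))
    print("prefix :", record["prefix"])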