asahi417 committed on
Commit 54a668b
1 Parent(s): c5ee5cd

fix readme

README.md CHANGED

````diff
@@ -9,7 +9,7 @@ size_categories:
 - 1K<n<10K
 pretty_name: ConceptNet with High Confidence
 ---
-# Dataset Card for "relbert/conceptnet_high_confidence"
+# Dataset Card for "relbert/conceptnet_relation_similarity"
 ## Dataset Description
 - **Repository:** [RelBERT](https://github.com/asahi417/relbert)
 - **Paper:** [https://home.ttic.edu/~kgimpel/commonsense.html](https://home.ttic.edu/~kgimpel/commonsense.html)
@@ -19,6 +19,12 @@ pretty_name: ConceptNet with High Confidence
 The selected subset of ConceptNet used in [this work](https://home.ttic.edu/~kgimpel/commonsense.html), which compiled
 to fine-tune [RelBERT](https://github.com/asahi417/relbert) model.
 We removed `NotCapableOf` and `NotDesires` to keep the positive relation only.
+We consider the original test set as test set, dev1 as the training set, and dev2 as the validation set.
+
+|train|validation|test|
+|--------:|----:|---------:|
+|15| 17 | 15|
+
 
 ## Dataset Structure
 ### Data Instances
@@ -31,36 +37,7 @@ An example of `train` looks as follows.
 }
 ```
 
-### Data Splits
-| name |train|validation|
-|---------|----:|---------:|
-|conceptnet_high_confidence| 25 | 24|
-
-### Number of Positive/Negative Word-pairs in each Split
 
-| relation_type | positive (train) | negative (train) | positive (validation) | negative (validation) |
-|:-----------------|-------------------:|-------------------:|------------------------:|------------------------:|
-| AtLocation | 383 | 1749 | 97 | 574 |
-| CapableOf | 195 | 1771 | 73 | 596 |
-| Causes | 71 | 1778 | 26 | 591 |
-| CausesDesire | 9 | 1774 | 11 | 591 |
-| CreatedBy | 2 | 1777 | 0 | 0 |
-| DefinedAs | 0 | 0 | 2 | 591 |
-| Desires | 16 | 1775 | 12 | 591 |
-| HasA | 67 | 1795 | 17 | 591 |
-| HasFirstSubevent | 2 | 1777 | 0 | 0 |
-| HasLastSubevent | 2 | 1777 | 3 | 589 |
-| HasPrerequisite | 168 | 1784 | 57 | 588 |
-| HasProperty | 94 | 1782 | 39 | 601 |
-| HasSubevent | 125 | 1779 | 40 | 605 |
-| IsA | 310 | 1745 | 98 | 599 |
-| MadeOf | 17 | 1774 | 7 | 589 |
-| MotivatedByGoal | 14 | 1777 | 11 | 591 |
-| PartOf | 34 | 1782 | 7 | 589 |
-| ReceivesAction | 18 | 1774 | 8 | 589 |
-| SymbolOf | 0 | 0 | 2 | 592 |
-| UsedFor | 249 | 1796 | 81 | 584 |
-| SUM | 1776 | 31966 | 591 | 10641 |
 
 ### Citation Information
 ```
````
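The records behind the card's "Data Instances" example follow the schema written by `process.py`: a relation type plus lists of positive and negative `[head, tail]` pairs. A minimal sketch of round-tripping and checking one such record; the word pairs below are invented for illustration, not taken from the dataset:

```python
import json

# A record shaped like one line of dataset/train.jsonl.
record = {
    'relation_type': 'AtLocation',       # one of the ConceptNet relation types
    'positives': [['book', 'library']],  # invented example pair
    'negatives': [['book', 'cloud']],    # invented example pair
}
line = json.dumps(record)
parsed = json.loads(line)
assert set(parsed) == {'relation_type', 'positives', 'negatives'}
assert all(len(pair) == 2 for pair in parsed['positives'] + parsed['negatives'])
```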
conceptnet_high_confidence.py → conceptnet_relation_similarity.py RENAMED

```diff
@@ -3,8 +3,8 @@ import datasets
 
 logger = datasets.logging.get_logger(__name__)
 _DESCRIPTION = """[ConceptNet with high confidence](https://home.ttic.edu/~kgimpel/commonsense.html)"""
-_NAME = "conceptnet_high_confidence"
-_VERSION = "2.0.1"
+_NAME = "conceptnet_relation_similarity"
+_VERSION = "2.0.2"
 _CITATION = """
 @inproceedings{li-16,
     title = {Commonsense Knowledge Base Completion},
@@ -37,7 +37,7 @@ _URLS = {
 }
 
 
-class ConceptNetHighConfidenceConfig(datasets.BuilderConfig):
+class ConceptNetRelationSimilarityConfig(datasets.BuilderConfig):
     """BuilderConfig"""
 
     def __init__(self, **kwargs):
@@ -45,14 +45,14 @@ class ConceptNetHighConfidenceConfig(datasets.BuilderConfig):
         Args:
             **kwargs: keyword arguments forwarded to super.
         """
-        super(ConceptNetHighConfidenceConfig, self).__init__(**kwargs)
+        super(ConceptNetRelationSimilarityConfig, self).__init__(**kwargs)
 
 
-class ConceptNetHighConfidence(datasets.GeneratorBasedBuilder):
+class ConceptNetRelationSimilarity(datasets.GeneratorBasedBuilder):
     """Dataset."""
 
     BUILDER_CONFIGS = [
-        ConceptNetHighConfidenceConfig(name=_NAME, version=datasets.Version(_VERSION), description=_DESCRIPTION),
+        ConceptNetRelationSimilarityConfig(name=_NAME, version=datasets.Version(_VERSION), description=_DESCRIPTION),
     ]
 
     def _split_generators(self, dl_manager):
```
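The renamed config class does nothing beyond forwarding keyword arguments to `datasets.BuilderConfig`. A minimal offline sketch of that forwarding pattern, using a hypothetical stand-in base class rather than the real `datasets` library so it runs without any download:

```python
class BuilderConfigStandIn:
    """Hypothetical stand-in for datasets.BuilderConfig (not the real class)."""
    def __init__(self, name=None, version=None, description=None):
        self.name = name
        self.version = version
        self.description = description

class ConceptNetRelationSimilarityConfig(BuilderConfigStandIn):
    def __init__(self, **kwargs):
        # Keyword arguments are forwarded to super, as in the loading script.
        super(ConceptNetRelationSimilarityConfig, self).__init__(**kwargs)

cfg = ConceptNetRelationSimilarityConfig(name="conceptnet_relation_similarity",
                                         version="2.0.2")
assert cfg.name == "conceptnet_relation_similarity"
assert cfg.version == "2.0.2"
```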
dataset/test.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
dataset/train.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
get_stats.py DELETED

```diff
@@ -1,35 +0,0 @@
-import pandas as pd
-from datasets import load_dataset
-
-data = load_dataset('relbert/conceptnet_high_confidence_v2')
-stats = []
-for k in data.keys():
-    for i in data[k]:
-        stats.append({'relation_type': i['relation_type'], 'split': k, 'positives': len(i['positives']), 'negatives': len(i['negatives'])})
-df = pd.DataFrame(stats)
-df_train = df[df['split'] == 'train']
-df_valid = df[df['split'] == 'validation']
-stats = []
-for r in df['relation_type'].unique():
-    _df_t = df_train[df_train['relation_type'] == r]
-    _df_v = df_valid[df_valid['relation_type'] == r]
-    stats.append({
-        'relation_type': r,
-        'positive (train)': 0 if len(_df_t) == 0 else _df_t['positives'].values[0],
-        'negative (train)': 0 if len(_df_t) == 0 else _df_t['negatives'].values[0],
-        'positive (validation)': 0 if len(_df_v) == 0 else _df_v['positives'].values[0],
-        'negative (validation)': 0 if len(_df_v) == 0 else _df_v['negatives'].values[0],
-    })
-
-df = pd.DataFrame(stats).sort_values(by=['relation_type'])
-df.index = df.pop('relation_type')
-sum_pairs = df.sum(0)
-df = df.T
-df['SUM'] = sum_pairs
-df = df.T
-
-df.to_csv('stats.csv')
-with open('stats.md', 'w') as f:
-    f.write(df.to_markdown())
```
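The deleted script builds its per-relation count table with an explicit loop and a `0 if len(...) == 0` guard for relations missing from a split. The same table can be sketched more compactly with a pandas pivot; the counts below are invented toy values, and this is an equivalent pivot, not the script's literal loop:

```python
import pandas as pd

# Toy records mimicking the per-split rows the deleted script iterated over.
stats = [
    {'relation_type': 'IsA', 'split': 'train', 'positives': 3, 'negatives': 5},
    {'relation_type': 'IsA', 'split': 'validation', 'positives': 1, 'negatives': 2},
    {'relation_type': 'PartOf', 'split': 'train', 'positives': 2, 'negatives': 4},
]
df = pd.DataFrame(stats)
table = df.pivot_table(index='relation_type', columns='split',
                       values=['positives', 'negatives'], fill_value=0)
# Relations absent from a split get 0, matching the script's explicit guard.
assert table.loc['PartOf', ('positives', 'validation')] == 0
```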
process.py CHANGED

```diff
@@ -20,12 +20,12 @@ def wget(url, cache_dir: str = './cache'):
     path = f'{cache_dir}/(unknown)'
     if os.path.exists(path):
         return path.replace('.gz', '')
-    with open(path, "wb") as f:
+    with open(path, "wb") as f_:
         r = requests.get(url)
-        f.write(r.content)
-    with gzip.open(path, 'rb') as f:
+        f_.write(r.content)
+    with gzip.open(path, 'rb') as f_:
         with open(path.replace('.gz', ''), 'wb') as f_write:
-            f_write.write(f.read())
+            f_write.write(f_.read())
     os.remove(path)
     return path.replace('.gz', '')
@@ -44,23 +44,35 @@ def read_file(file_name):
 
 if __name__ == '__main__':
     test_p, test_n = read_file(wget(urls['test']))
     dev1_p, dev1_n = read_file(wget(urls['dev1']))
-    train_p = pd.concat([test_p, dev1_p])
-    train_n = pd.concat([test_n, dev1_n])
+    dev2_p, dev2_n = read_file(wget(urls['dev2']))
+
+    with open(f'dataset/test.jsonl', 'w') as f:
+        for relation, df_p in test_p.groupby('relation'):
+            if len(df_p) < 2:
+                continue
+            if relation in exclude:
+                continue
+            df_n = test_n[test_n['relation'] == relation]
+            f.write(json.dumps({
+                'relation_type': relation,
+                'positives': df_p[['head', 'tail']].to_numpy().tolist(),
+                'negatives': df_n[['head', 'tail']].to_numpy().tolist()
+            }) + '\n')
+
     with open(f'dataset/train.jsonl', 'w') as f:
-        for relation, df_p in train_p.groupby('relation'):
+        for relation, df_p in dev1_p.groupby('relation'):
             if len(df_p) < 2:
                 continue
             if relation in exclude:
                 continue
             print(relation)
-            df_n = train_n[train_n['relation'] == relation]
+            df_n = dev1_n[dev1_n['relation'] == relation]
             f.write(json.dumps({
                 'relation_type': relation,
                 'positives': df_p[['head', 'tail']].to_numpy().tolist(),
                 'negatives': df_n[['head', 'tail']].to_numpy().tolist()
             }) + '\n')
 
-    dev2_p, dev2_n = read_file(wget(urls['dev2']))
     with open(f'dataset/valid.jsonl', 'w') as f:
         for relation, df_p in dev2_p.groupby('relation'):
             if len(df_p) < 2:
```
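Each export loop in `process.py` boils down to: group word pairs by relation, drop relations with fewer than two positives or in the exclusion list, and dump one JSON object per surviving relation. A self-contained sketch of that pattern on invented toy pairs (writing to a list instead of a file):

```python
import json
import pandas as pd

# Toy stand-ins for the (relation, head, tail) frames produced by read_file;
# the word pairs are invented for illustration.
dev1_p = pd.DataFrame({'relation': ['IsA', 'IsA', 'PartOf'],
                       'head': ['cat', 'dog', 'wheel'],
                       'tail': ['animal', 'animal', 'car']})
dev1_n = pd.DataFrame({'relation': ['IsA'], 'head': ['cat'], 'tail': ['car']})
exclude = ['NotCapableOf', 'NotDesires']

lines = []
for relation, df_p in dev1_p.groupby('relation'):
    if len(df_p) < 2 or relation in exclude:  # same filters as the script
        continue
    df_n = dev1_n[dev1_n['relation'] == relation]
    lines.append(json.dumps({
        'relation_type': relation,
        'positives': df_p[['head', 'tail']].to_numpy().tolist(),
        'negatives': df_n[['head', 'tail']].to_numpy().tolist()}))

# Only IsA has >= 2 positives, so a single record is emitted.
assert len(lines) == 1
```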
stats.csv DELETED

```diff
@@ -1,22 +0,0 @@
-relation_type,positive (train),negative (train),positive (validation),negative (validation)
-AtLocation,383,1749,97,574
-CapableOf,195,1771,73,596
-Causes,71,1778,26,591
-CausesDesire,9,1774,11,591
-CreatedBy,2,1777,0,0
-DefinedAs,0,0,2,591
-Desires,16,1775,12,591
-HasA,67,1795,17,591
-HasFirstSubevent,2,1777,0,0
-HasLastSubevent,2,1777,3,589
-HasPrerequisite,168,1784,57,588
-HasProperty,94,1782,39,601
-HasSubevent,125,1779,40,605
-IsA,310,1745,98,599
-MadeOf,17,1774,7,589
-MotivatedByGoal,14,1777,11,591
-PartOf,34,1782,7,589
-ReceivesAction,18,1774,8,589
-SymbolOf,0,0,2,592
-UsedFor,249,1796,81,584
-SUM,1776,31966,591,10641
```
stats.md DELETED

```diff
@@ -1,23 +0,0 @@
-| relation_type | positive (train) | negative (train) | positive (validation) | negative (validation) |
-|:-----------------|-------------------:|-------------------:|------------------------:|------------------------:|
-| AtLocation | 383 | 1749 | 97 | 574 |
-| CapableOf | 195 | 1771 | 73 | 596 |
-| Causes | 71 | 1778 | 26 | 591 |
-| CausesDesire | 9 | 1774 | 11 | 591 |
-| CreatedBy | 2 | 1777 | 0 | 0 |
-| DefinedAs | 0 | 0 | 2 | 591 |
-| Desires | 16 | 1775 | 12 | 591 |
-| HasA | 67 | 1795 | 17 | 591 |
-| HasFirstSubevent | 2 | 1777 | 0 | 0 |
-| HasLastSubevent | 2 | 1777 | 3 | 589 |
-| HasPrerequisite | 168 | 1784 | 57 | 588 |
-| HasProperty | 94 | 1782 | 39 | 601 |
-| HasSubevent | 125 | 1779 | 40 | 605 |
-| IsA | 310 | 1745 | 98 | 599 |
-| MadeOf | 17 | 1774 | 7 | 589 |
-| MotivatedByGoal | 14 | 1777 | 11 | 591 |
-| PartOf | 34 | 1782 | 7 | 589 |
-| ReceivesAction | 18 | 1774 | 8 | 589 |
-| SymbolOf | 0 | 0 | 2 | 592 |
-| UsedFor | 249 | 1796 | 81 | 584 |
-| SUM | 1776 | 31966 | 591 | 10641 |
```