asahi417 committed
Commit 38f62b5
1 Parent(s): df935c6

fix readme

README.md CHANGED
@@ -6,37 +6,70 @@ license:
 multilinguality:
 - monolingual
 size_categories:
-- 1K<n<10K
-pretty_name: ConceptNet with High Confidence
+- n<1K
+pretty_name: relbert/conceptnet
 ---
 # Dataset Card for "relbert/conceptnet"
+
 ## Dataset Description
 - **Repository:** [RelBERT](https://github.com/asahi417/relbert)
 - **Paper:** [https://home.ttic.edu/~kgimpel/commonsense.html](https://home.ttic.edu/~kgimpel/commonsense.html)
-- **Dataset:** High Confidence Subset of ConceptNet
+- **Dataset:** High Confidence Subset of ConceptNet for link prediction
 
 ### Dataset Summary
 This is the subset of ConceptNet used in [this work](https://home.ttic.edu/~kgimpel/commonsense.html).
 We removed `NotCapableOf` and `NotDesires` to keep only the positive relations.
 We use the original test set as the test set, dev1 as the training set, and dev2 as the validation set.
 
-| train | validation | test |
-|------:|-----------:|-----:|
-|   592 |        595 | 1189 |
+- Number of instances
+
+|                                 |   train |   validation |   test |
+|:--------------------------------|--------:|-------------:|-------:|
+| number of pairs                 |     592 |          595 |   1189 |
+| number of unique relation types |      18 |           22 |     21 |
+
+- Number of pairs in each relation type
+
+|                  |   number of pairs (train) |   number of pairs (validation) |   number of pairs (test) |
+|:-----------------|--------------------------:|-------------------------------:|-------------------------:|
+| AtLocation       |                       133 |                             97 |                      250 |
+| CapableOf        |                        51 |                             73 |                      144 |
+| Causes           |                        26 |                             26 |                       45 |
+| CausesDesire     |                         4 |                             11 |                        5 |
+| Desires          |                         8 |                             12 |                        8 |
+| HasA             |                        26 |                             17 |                       41 |
+| HasFirstSubevent |                         1 |                              1 |                        1 |
+| HasLastSubevent  |                         2 |                              3 |                        0 |
+| HasPrerequisite  |                        59 |                             57 |                      109 |
+| HasProperty      |                        24 |                             39 |                       70 |
+| HasSubevent      |                        42 |                             40 |                       83 |
+| IsA              |                        99 |                             98 |                      211 |
+| MadeOf           |                         3 |                              7 |                       14 |
+| MotivatedByGoal  |                         6 |                             11 |                        8 |
+| NotMadeOf        |                         1 |                              0 |                        0 |
+| PartOf           |                        12 |                              7 |                       22 |
+| ReceivesAction   |                         7 |                              8 |                       11 |
+| UsedFor          |                        88 |                             81 |                      161 |
+| CreatedBy        |                         0 |                              1 |                        2 |
+| DefinedAs        |                         0 |                              2 |                        1 |
+| NotHasProperty   |                         0 |                              1 |                        1 |
+| NotIsA           |                         0 |                              1 |                        1 |
+| SymbolOf         |                         0 |                              2 |                        0 |
+| RelatedTo        |                         0 |                              0 |                        1 |
 
 
 ### Data Instances
 An example of `train` looks as follows.
-```
+```json
 {
-    "relation_type": "AtLocation",
-    "positives": [["fish", "water"], ["cloud", "sky"], ["child", "school"], ... ],
-    "negatives": [["pen", "write"], ["sex", "fun"], ["soccer", "sport"], ["fish", "school"], ... ]
+    "relation": "IsA",
+    "head": "baseball",
+    "tail": "sport"
 }
 ```
 
 
-### Citation Information
+## Citation Information
 ```
 @InProceedings{P16-1137,
 author = "Li, Xiang
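
With the new card, each instance is a single pair labeled with its relation type, rather than the old per-relation lists of positives and negatives. A minimal sketch (not part of this commit) of loading and inspecting the dataset, assuming the `relbert/conceptnet` loader resolves the renamed `data/` files as `conceptnet.py` below suggests:

```python
# Minimal sketch: load the dataset and inspect one training pair.
from datasets import load_dataset

data = load_dataset("relbert/conceptnet")
example = data["train"][0]
# Each instance is one pair plus its relation type,
# e.g. {"relation": "IsA", "head": "baseball", "tail": "sport"}.
print(example["relation"], example["head"], example["tail"])
```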
conceptnet.py CHANGED
@@ -29,7 +29,7 @@ year = {2016}
 """
 
 _HOME_PAGE = "https://github.com/asahi417/relbert"
-_URL = f'https://huggingface.co/datasets/relbert/{_NAME}/raw/main/dataset'
+_URL = f'https://huggingface.co/datasets/relbert/{_NAME}/raw/main/data'
 _URLS = {
     str(datasets.Split.TRAIN): [f'{_URL}/train.jsonl'],
     str(datasets.Split.VALIDATION): [f'{_URL}/valid.jsonl'],
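
The corrected `_URL` points at the `data/` directory, matching the file renames below. A quick sketch of how the split URLs resolve, where `_NAME = "conceptnet"` is an assumption based on the repository name (its actual definition is outside this hunk):

```python
# Sketch only: _NAME is assumed; the real assignment is elsewhere in conceptnet.py.
_NAME = "conceptnet"
_URL = f'https://huggingface.co/datasets/relbert/{_NAME}/raw/main/data'
for split in ("train", "valid", "test"):
    print(f"{_URL}/{split}.jsonl")
# e.g. https://huggingface.co/datasets/relbert/conceptnet/raw/main/data/train.jsonl
```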
{dataset → data}/test.jsonl RENAMED
File without changes
{dataset → data}/train.jsonl RENAMED
File without changes
{dataset → data}/valid.jsonl RENAMED
File without changes
get_stats.py DELETED
@@ -1,35 +0,0 @@
-import pandas as pd
-from datasets import load_dataset
-
-data = load_dataset('relbert/conceptnet_high_confidence_v2')
-stats = []
-for k in data.keys():
-    for i in data[k]:
-        stats.append({'relation_type': i['relation_type'], 'split': k, 'positives': len(i['positives']), 'negatives': len(i['negatives'])})
-df = pd.DataFrame(stats)
-df_train = df[df['split'] == 'train']
-df_valid = df[df['split'] == 'validation']
-stats = []
-for r in df['relation_type'].unique():
-    _df_t = df_train[df_train['relation_type'] == r]
-    _df_v = df_valid[df_valid['relation_type'] == r]
-    stats.append({
-        'relation_type': r,
-        'positive (train)': 0 if len(_df_t) == 0 else _df_t['positives'].values[0],
-        'negative (train)': 0 if len(_df_t) == 0 else _df_t['negatives'].values[0],
-        'positive (validation)': 0 if len(_df_v) == 0 else _df_v['positives'].values[0],
-        'negative (validation)': 0 if len(_df_v) == 0 else _df_v['negatives'].values[0],
-    })
-
-df = pd.DataFrame(stats).sort_values(by=['relation_type'])
-df.index = df.pop('relation_type')
-sum_pairs = df.sum(0)
-df = df.T
-df['SUM'] = sum_pairs
-df = df.T
-
-df.to_csv('stats.csv')
-with open('stats.md', 'w') as f:
-    f.write(df.to_markdown())
-
-
stats.py ADDED
@@ -0,0 +1,27 @@
+from itertools import chain
+
+import pandas as pd
+from datasets import load_dataset
+
+
+def get_stats():
+    relation = []
+    size = []
+    data = load_dataset("relbert/conceptnet")
+    splits = data.keys()
+    for split in splits:
+        df = data[split].to_pandas()
+        size.append({
+            "number of pairs": len(df),
+            "number of unique relation types": len(df["relation"].unique())
+        })
+        relation.append(df.groupby('relation')['head'].count().to_dict())
+    relation = pd.DataFrame(relation, index=[f"number of pairs ({s})" for s in splits]).T
+    relation = relation.fillna(0).astype(int)
+    size = pd.DataFrame(size, index=splits).T
+    return relation, size
+
+df_relation, df_size = get_stats()
+print(f"\n- Number of instances\n\n {df_size.to_markdown()}")
+print(f"\n- Number of pairs in each relation type\n\n {df_relation.to_markdown()}")
+
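
Running `python stats.py` prints the two markdown tables embedded in the updated README above, so the card's statistics can be regenerated whenever the data files change.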