asahi417 committed
Commit c96bd8a
1 Parent(s): 6a4cca2

model update
README.md CHANGED
@@ -14,7 +14,7 @@ model-index:
   metrics:
   - name: Accuracy
   type: accuracy
- value: 0.8043055555555555
+ value: 0.7419444444444444
   - task:
   name: Analogy Questions (SAT full)
   type: multiple-choice-qa
@@ -25,7 +25,7 @@ model-index:
   metrics:
   - name: Accuracy
   type: accuracy
- value: 0.6577540106951871
+ value: 0.6497326203208557
   - task:
   name: Analogy Questions (SAT)
   type: multiple-choice-qa
@@ -36,7 +36,7 @@ model-index:
   metrics:
   - name: Accuracy
   type: accuracy
- value: 0.6468842729970327
+ value: 0.6528189910979229
   - task:
   name: Analogy Questions (BATS)
   type: multiple-choice-qa
@@ -47,7 +47,7 @@ model-index:
   metrics:
   - name: Accuracy
   type: accuracy
- value: 0.7581989994441356
+ value: 0.8265703168426903
   - task:
   name: Analogy Questions (Google)
   type: multiple-choice-qa
@@ -58,7 +58,7 @@ model-index:
   metrics:
   - name: Accuracy
   type: accuracy
- value: 0.914
+ value: 0.934
   - task:
   name: Analogy Questions (U2)
   type: multiple-choice-qa
@@ -69,7 +69,7 @@ model-index:
   metrics:
   - name: Accuracy
   type: accuracy
- value: 0.6228070175438597
+ value: 0.6359649122807017
   - task:
   name: Analogy Questions (U4)
   type: multiple-choice-qa
@@ -91,7 +91,7 @@ model-index:
   metrics:
   - name: Accuracy
   type: accuracy
- value: 0.4437919463087248
+ value: 0.43288590604026844
   - task:
   name: Analogy Questions (TREX Analogy)
   type: multiple-choice-qa
@@ -102,7 +102,7 @@ model-index:
   metrics:
   - name: Accuracy
   type: accuracy
- value: 0.5956284153005464
+ value: 0.6120218579234973
   - task:
   name: Lexical Relation Classification (BLESS)
   type: classification
@@ -180,14 +180,14 @@ model-index:
  RelBERT based on [roberta-large](https://huggingface.co/roberta-large) fine-tuned on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity) (see the [`relbert`](https://github.com/asahi417/relbert) for more detail of fine-tuning).
  This model achieves the following results on the relation understanding tasks:
  - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-d-semeval2012/raw/main/analogy.forward.json)):
- - Accuracy on SAT (full): 0.6577540106951871
- - Accuracy on SAT: 0.6468842729970327
- - Accuracy on BATS: 0.7581989994441356
- - Accuracy on U2: 0.6228070175438597
+ - Accuracy on SAT (full): 0.6497326203208557
+ - Accuracy on SAT: 0.6528189910979229
+ - Accuracy on BATS: 0.8265703168426903
+ - Accuracy on U2: 0.6359649122807017
  - Accuracy on U4: 0.6064814814814815
- - Accuracy on Google: 0.914
- - Accuracy on ConceptNet Analogy: 0.4437919463087248
- - Accuracy on T-Rex Analogy: 0.5956284153005464
+ - Accuracy on Google: 0.934
+ - Accuracy on ConceptNet Analogy: 0.43288590604026844
+ - Accuracy on T-Rex Analogy: 0.6120218579234973
  - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-d-semeval2012/raw/main/classification.json)):
  - Micro F1 score on BLESS: None
  - Micro F1 score on CogALexV: None
@@ -195,7 +195,7 @@ This model achieves the following results on the relation understanding tasks:
  - Micro F1 score on K&H+N: None
  - Micro F1 score on ROOT09: None
  - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-d-semeval2012/raw/main/relation_mapping.json)):
- - Accuracy on Relation Mapping: 0.8043055555555555
+ - Accuracy on Relation Mapping: 0.7419444444444444
 
 
  ### Usage
analogy.bidirection.json CHANGED
@@ -1 +1 @@
- {"sat_full/test": 0.6711229946524064, "sat/test": 0.658753709198813, "u2/test": 0.6228070175438597, "u4/test": 0.6319444444444444, "google/test": 0.934, "bats/test": 0.7709838799332963, "t_rex_relational_similarity/test": 0.6284153005464481, "conceptnet_relational_similarity/test": 0.46308724832214765, "sat/validation": 0.7837837837837838, "u2/validation": 0.6666666666666666, "u4/validation": 0.6458333333333334, "google/validation": 0.98, "bats/validation": 0.8291457286432161, "semeval2012_relational_similarity/validation": 0.7088607594936709, "t_rex_relational_similarity/validation": 0.2560483870967742, "conceptnet_relational_similarity/validation": 0.368705035971223}
+ {"sat_full/test": 0.6818181818181818, "sat/test": 0.6884272997032641, "u2/test": 0.6447368421052632, "u4/test": 0.6597222222222222, "google/test": 0.936, "bats/test": 0.8132295719844358, "t_rex_relational_similarity/test": 0.644808743169399, "conceptnet_relational_similarity/test": 0.42449664429530204, "sat/validation": 0.6216216216216216, "u2/validation": 0.5833333333333334, "u4/validation": 0.625, "google/validation": 1.0, "bats/validation": 0.864321608040201, "semeval2012_relational_similarity/validation": 0.6582278481012658, "t_rex_relational_similarity/validation": 0.28024193548387094, "conceptnet_relational_similarity/validation": 0.3462230215827338}
analogy.forward.json CHANGED
@@ -1 +1 @@
- {"semeval2012_relational_similarity/validation": 0.759493670886076, "sat_full/test": 0.6577540106951871, "sat/test": 0.6468842729970327, "u2/test": 0.6228070175438597, "u4/test": 0.6064814814814815, "google/test": 0.914, "bats/test": 0.7581989994441356, "t_rex_relational_similarity/test": 0.5956284153005464, "conceptnet_relational_similarity/test": 0.4437919463087248, "sat/validation": 0.7567567567567568, "u2/validation": 0.5833333333333334, "u4/validation": 0.5625, "google/validation": 0.98, "bats/validation": 0.8391959798994975, "t_rex_relational_similarity/validation": 0.22580645161290322, "conceptnet_relational_similarity/validation": 0.35251798561151076}
+ {"semeval2012_relational_similarity/validation": 0.7088607594936709, "sat_full/test": 0.6497326203208557, "sat/test": 0.6528189910979229, "u2/test": 0.6359649122807017, "u4/test": 0.6064814814814815, "google/test": 0.934, "bats/test": 0.8265703168426903, "t_rex_relational_similarity/test": 0.6120218579234973, "conceptnet_relational_similarity/test": 0.43288590604026844, "sat/validation": 0.6216216216216216, "u2/validation": 0.5833333333333334, "u4/validation": 0.625, "google/validation": 0.96, "bats/validation": 0.8542713567839196, "t_rex_relational_similarity/validation": 0.2762096774193548, "conceptnet_relational_similarity/validation": 0.36960431654676257}
analogy.reverse.json CHANGED
@@ -1 +1 @@
- {"sat_full/test": 0.6176470588235294, "sat/test": 0.6112759643916914, "u2/test": 0.6096491228070176, "u4/test": 0.5972222222222222, "google/test": 0.922, "bats/test": 0.7287381878821567, "t_rex_relational_similarity/test": 0.5956284153005464, "conceptnet_relational_similarity/test": 0.4085570469798658, "sat/validation": 0.6756756756756757, "u2/validation": 0.6666666666666666, "u4/validation": 0.5833333333333334, "google/validation": 0.96, "bats/validation": 0.7839195979899497, "semeval2012_relational_similarity/validation": 0.6962025316455697, "t_rex_relational_similarity/validation": 0.24596774193548387, "conceptnet_relational_similarity/validation": 0.31384892086330934}
+ {"sat_full/test": 0.6363636363636364, "sat/test": 0.6468842729970327, "u2/test": 0.6491228070175439, "u4/test": 0.6597222222222222, "google/test": 0.934, "bats/test": 0.754863813229572, "t_rex_relational_similarity/test": 0.5409836065573771, "conceptnet_relational_similarity/test": 0.3347315436241611, "sat/validation": 0.5405405405405406, "u2/validation": 0.5833333333333334, "u4/validation": 0.625, "google/validation": 0.98, "bats/validation": 0.7989949748743719, "semeval2012_relational_similarity/validation": 0.6329113924050633, "t_rex_relational_similarity/validation": 0.2318548387096774, "conceptnet_relational_similarity/validation": 0.256294964028777}
config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "_name_or_path": "relbert_output/ckpt/nce_semeval2012/template-c/epoch_6",
+ "_name_or_path": "roberta-large",
  "architectures": [
  "RobertaModel"
  ],
finetuning_config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "template": "Today, I finally discovered the relation between <subj> and <obj> : <subj> is the <mask> of <obj>",
+ "template": "Today, I finally discovered the relation between <subj> and <obj> : <mask>",
  "model": "roberta-large",
  "max_length": 64,
  "epoch": 10,
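The updated template ends the prompt directly with the `<mask>` token rather than the `<subj> is the <mask> of <obj>` phrase. A minimal sketch of how such a template is filled for a word pair before being passed to the masked language model (the `fill_template` helper below is illustrative only, not relbert's actual API):

```python
# Illustrative sketch: substitute a word pair into a RelBERT-style prompt
# template. The real prompting code lives in https://github.com/asahi417/relbert.
template = "Today, I finally discovered the relation between <subj> and <obj> : <mask>"

def fill_template(template: str, subj: str, obj: str) -> str:
    """Replace the <subj>/<obj> placeholders, leaving <mask> for the LM to embed."""
    return template.replace("<subj>", subj).replace("<obj>", obj)

print(fill_template(template, "Tokyo", "Japan"))
# -> Today, I finally discovered the relation between Tokyo and Japan : <mask>
```

The `<mask>` token is left in place because the fine-tuned encoder's representation at that position serves as the relation embedding for the pair.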
relation_mapping.json CHANGED
The diff for this file is too large to render. See raw diff
tokenizer_config.json CHANGED
@@ -6,7 +6,7 @@
  "errors": "replace",
  "mask_token": "<mask>",
  "model_max_length": 512,
- "name_or_path": "relbert_output/ckpt/nce_semeval2012/template-c/epoch_6",
+ "name_or_path": "roberta-large",
  "pad_token": "<pad>",
  "sep_token": "</s>",
  "special_tokens_map_file": null,