asahi417 committed on
Commit 122daee
1 Parent(s): 3133e5b

model update
README.md CHANGED
@@ -14,7 +14,7 @@ model-index:
   metrics:
   - name: Accuracy
     type: accuracy
-    value: 0.819484126984127
+    value: 0.8049007936507937
   - task:
     name: Analogy Questions (SAT full)
     type: multiple-choice-qa
@@ -25,7 +25,7 @@ model-index:
   metrics:
   - name: Accuracy
     type: accuracy
-    value: 0.6497326203208557
+    value: 0.732620320855615
   - task:
     name: Analogy Questions (SAT)
     type: multiple-choice-qa
@@ -36,7 +36,7 @@ model-index:
   metrics:
   - name: Accuracy
     type: accuracy
-    value: 0.658753709198813
+    value: 0.7359050445103857
   - task:
     name: Analogy Questions (BATS)
     type: multiple-choice-qa
@@ -47,7 +47,7 @@ model-index:
   metrics:
   - name: Accuracy
     type: accuracy
-    value: 0.735964424680378
+    value: 0.8093385214007782
   - task:
     name: Analogy Questions (Google)
     type: multiple-choice-qa
@@ -58,7 +58,7 @@ model-index:
   metrics:
   - name: Accuracy
     type: accuracy
-    value: 0.896
+    value: 0.952
   - task:
     name: Analogy Questions (U2)
     type: multiple-choice-qa
@@ -69,7 +69,7 @@ model-index:
   metrics:
   - name: Accuracy
     type: accuracy
-    value: 0.5570175438596491
+    value: 0.6754385964912281
   - task:
     name: Analogy Questions (U4)
     type: multiple-choice-qa
@@ -80,7 +80,7 @@ model-index:
   metrics:
   - name: Accuracy
     type: accuracy
-    value: 0.6203703703703703
+    value: 0.6296296296296297
   - task:
     name: Analogy Questions (ConceptNet Analogy)
     type: multiple-choice-qa
@@ -91,7 +91,7 @@ model-index:
   metrics:
   - name: Accuracy
     type: accuracy
-    value: 0.41359060402684567
+    value: 0.4748322147651007
   - task:
     name: Analogy Questions (TREX Analogy)
     type: multiple-choice-qa
@@ -102,7 +102,7 @@ model-index:
   metrics:
   - name: Accuracy
     type: accuracy
-    value: 0.6229508196721312
+    value: 0.644808743169399
   - task:
     name: Lexical Relation Classification (BLESS)
     type: classification
@@ -113,10 +113,10 @@ model-index:
   metrics:
   - name: F1
     type: f1
-    value: 0.909899050775953
+    value: 0.9199939731806539
   - name: F1 (macro)
     type: f1_macro
-    value: 0.9060888467687004
+    value: 0.9173175984713615
   - task:
     name: Lexical Relation Classification (CogALexV)
     type: classification
@@ -127,10 +127,10 @@ model-index:
   metrics:
   - name: F1
     type: f1
-    value: 0.8617370892018781
+    value: 0.8497652582159625
   - name: F1 (macro)
     type: f1_macro
-    value: 0.704064732022559
+    value: 0.6744248225015879
   - task:
     name: Lexical Relation Classification (EVALution)
     type: classification
@@ -141,10 +141,10 @@ model-index:
   metrics:
   - name: F1
     type: f1
-    value: 0.6917659804983749
+    value: 0.6836403033586133
   - name: F1 (macro)
     type: f1_macro
-    value: 0.6833995231298724
+    value: 0.6776792144071253
   - task:
     name: Lexical Relation Classification (K&H+N)
     type: classification
@@ -155,10 +155,10 @@ model-index:
   metrics:
   - name: F1
     type: f1
-    value: 0.9581971204006399
+    value: 0.9563191208179731
   - name: F1 (macro)
     type: f1_macro
-    value: 0.8741899049737119
+    value: 0.8663013754934635
   - task:
     name: Lexical Relation Classification (ROOT09)
     type: classification
@@ -169,10 +169,10 @@ model-index:
   metrics:
   - name: F1
     type: f1
-    value: 0.9015982450642431
+    value: 0.9041052961454089
   - name: F1 (macro)
     type: f1_macro
-    value: 0.9006321541459927
+    value: 0.9040831832304929
 
 ---
 # relbert/relbert-roberta-large-nce-d-semeval2012
@@ -180,22 +180,22 @@ model-index:
 RelBERT based on [roberta-large](https://huggingface.co/roberta-large) fine-tuned on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity) (see the [`relbert`](https://github.com/asahi417/relbert) for more detail of fine-tuning).
 This model achieves the following results on the relation understanding tasks:
 - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-d-semeval2012/raw/main/analogy.forward.json)):
-    - Accuracy on SAT (full): 0.6497326203208557
-    - Accuracy on SAT: 0.658753709198813
-    - Accuracy on BATS: 0.735964424680378
-    - Accuracy on U2: 0.5570175438596491
-    - Accuracy on U4: 0.6203703703703703
-    - Accuracy on Google: 0.896
-    - Accuracy on ConceptNet Analogy: 0.41359060402684567
-    - Accuracy on T-Rex Analogy: 0.6229508196721312
+    - Accuracy on SAT (full): 0.732620320855615
+    - Accuracy on SAT: 0.7359050445103857
+    - Accuracy on BATS: 0.8093385214007782
+    - Accuracy on U2: 0.6754385964912281
+    - Accuracy on U4: 0.6296296296296297
+    - Accuracy on Google: 0.952
+    - Accuracy on ConceptNet Analogy: 0.4748322147651007
+    - Accuracy on T-Rex Analogy: 0.644808743169399
 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-d-semeval2012/raw/main/classification.json)):
-    - Micro F1 score on BLESS: 0.909899050775953
-    - Micro F1 score on CogALexV: 0.8617370892018781
-    - Micro F1 score on EVALution: 0.6917659804983749
-    - Micro F1 score on K&H+N: 0.9581971204006399
-    - Micro F1 score on ROOT09: 0.9015982450642431
+    - Micro F1 score on BLESS: 0.9199939731806539
+    - Micro F1 score on CogALexV: 0.8497652582159625
+    - Micro F1 score on EVALution: 0.6836403033586133
+    - Micro F1 score on K&H+N: 0.9563191208179731
+    - Micro F1 score on ROOT09: 0.9041052961454089
 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-large-nce-d-semeval2012/raw/main/relation_mapping.json)):
-    - Accuracy on Relation Mapping: 0.819484126984127
+    - Accuracy on Relation Mapping: 0.8049007936507937
 
 
 ### Usage
@@ -227,7 +227,7 @@ vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (n_dim, )
 - split_valid: validation
 - loss_function: nce
 - classification_loss: False
-- loss_function_config: {'temperature': 0.05, 'num_negative': 400, 'num_positive': 10}
+- loss_function_config: {'temperature': 0.05, 'gradient_accumulation': 1, 'num_negative': 400, 'num_positive': 10}
 - augment_negative_by_positive: True
 
 See the full configuration at [config file](https://huggingface.co/relbert/relbert-roberta-large-nce-d-semeval2012/raw/main/finetuning_config.json).
analogy.bidirection.json CHANGED
@@ -1 +1 @@
- {"sat_full/test": 0.6443850267379679, "sat/test": 0.6528189910979229, "u2/test": 0.5614035087719298, "u4/test": 0.6273148148148148, "google/test": 0.902, "bats/test": 0.7431906614785992, "t_rex_relational_similarity/test": 0.6885245901639344, "conceptnet_relational_similarity/test": 0.436241610738255, "sat/validation": 0.5675675675675675, "u2/validation": 0.7083333333333334, "u4/validation": 0.5208333333333334, "google/validation": 0.98, "bats/validation": 0.7587939698492462, "semeval2012_relational_similarity/validation": 0.7088607594936709, "t_rex_relational_similarity/validation": 0.2600806451612903, "conceptnet_relational_similarity/validation": 0.37050359712230213}
+ {"sat_full/test": 0.7272727272727273, "sat/test": 0.7299703264094956, "u2/test": 0.7149122807017544, "u4/test": 0.6875, "google/test": 0.962, "bats/test": 0.8354641467481935, "t_rex_relational_similarity/test": 0.644808743169399, "conceptnet_relational_similarity/test": 0.4672818791946309, "sat/validation": 0.7027027027027027, "u2/validation": 0.5833333333333334, "u4/validation": 0.5833333333333334, "google/validation": 1.0, "bats/validation": 0.8793969849246231, "semeval2012_relational_similarity/validation": 0.7341772151898734, "t_rex_relational_similarity/validation": 0.27419354838709675, "conceptnet_relational_similarity/validation": 0.38219424460431656}
analogy.forward.json CHANGED
@@ -1 +1 @@
- {"semeval2012_relational_similarity/validation": 0.7088607594936709, "sat_full/test": 0.6497326203208557, "sat/test": 0.658753709198813, "u2/test": 0.5570175438596491, "u4/test": 0.6203703703703703, "google/test": 0.896, "bats/test": 0.735964424680378, "t_rex_relational_similarity/test": 0.6229508196721312, "conceptnet_relational_similarity/test": 0.41359060402684567, "sat/validation": 0.5675675675675675, "u2/validation": 0.625, "u4/validation": 0.5208333333333334, "google/validation": 0.98, "bats/validation": 0.7587939698492462, "t_rex_relational_similarity/validation": 0.2560483870967742, "conceptnet_relational_similarity/validation": 0.34802158273381295}
+ {"semeval2012_relational_similarity/validation": 0.7468354430379747, "sat_full/test": 0.732620320855615, "sat/test": 0.7359050445103857, "u2/test": 0.6754385964912281, "u4/test": 0.6296296296296297, "google/test": 0.952, "bats/test": 0.8093385214007782, "t_rex_relational_similarity/test": 0.644808743169399, "conceptnet_relational_similarity/test": 0.4748322147651007, "sat/validation": 0.7027027027027027, "u2/validation": 0.625, "u4/validation": 0.5625, "google/validation": 1.0, "bats/validation": 0.8542713567839196, "t_rex_relational_similarity/validation": 0.29435483870967744, "conceptnet_relational_similarity/validation": 0.37859712230215825}
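The effect of this commit on the forward analogy scores is easiest to see by diffing the two JSON payloads above. A minimal sketch, using a hand-copied subset of the test-split values from the old and new `analogy.forward.json` (not a download of the files):

```python
# Test-split accuracies copied from the old (-) and new (+) analogy.forward.json.
old = {"sat_full/test": 0.6497326203208557, "u2/test": 0.5570175438596491,
       "google/test": 0.896, "conceptnet_relational_similarity/test": 0.41359060402684567}
new = {"sat_full/test": 0.732620320855615, "u2/test": 0.6754385964912281,
       "google/test": 0.952, "conceptnet_relational_similarity/test": 0.4748322147651007}

# Per-dataset improvement, rounded for readability.
delta = {k: round(new[k] - old[k], 4) for k in old}
print(delta)  # every delta is positive for this subset
```

On this subset every score moved up (e.g. SAT full by about 0.083), which matches the updated numbers in the README's model-index.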
analogy.reverse.json CHANGED
@@ -1 +1 @@
- {"sat_full/test": 0.6176470588235294, "sat/test": 0.6231454005934718, "u2/test": 0.5131578947368421, "u4/test": 0.5972222222222222, "google/test": 0.882, "bats/test": 0.708171206225681, "t_rex_relational_similarity/test": 0.6885245901639344, "conceptnet_relational_similarity/test": 0.39429530201342283, "sat/validation": 0.5675675675675675, "u2/validation": 0.6666666666666666, "u4/validation": 0.5833333333333334, "google/validation": 0.96, "bats/validation": 0.7638190954773869, "semeval2012_relational_similarity/validation": 0.5822784810126582, "t_rex_relational_similarity/validation": 0.25201612903225806, "conceptnet_relational_similarity/validation": 0.31654676258992803}
+ {"sat_full/test": 0.6524064171122995, "sat/test": 0.6468842729970327, "u2/test": 0.6885964912280702, "u4/test": 0.6597222222222222, "google/test": 0.944, "bats/test": 0.7976653696498055, "t_rex_relational_similarity/test": 0.5956284153005464, "conceptnet_relational_similarity/test": 0.40604026845637586, "sat/validation": 0.7027027027027027, "u2/validation": 0.7083333333333334, "u4/validation": 0.625, "google/validation": 0.98, "bats/validation": 0.8592964824120602, "semeval2012_relational_similarity/validation": 0.6708860759493671, "t_rex_relational_similarity/validation": 0.24596774193548387, "conceptnet_relational_similarity/validation": 0.3237410071942446}
classification.json CHANGED
@@ -1 +1 @@
- {"lexical_relation_classification/BLESS": {"classifier_config": {"activation": "relu", "alpha": 0.0001, "batch_size": "auto", "beta_1": 0.9, "beta_2": 0.999, "early_stopping": false, "epsilon": 1e-08, "hidden_layer_sizes": [100], "learning_rate": "constant", "learning_rate_init": 0.001, "max_fun": 15000, "max_iter": 200, "momentum": 0.9, "n_iter_no_change": 10, "nesterovs_momentum": true, "power_t": 0.5, "random_state": 0, "shuffle": true, "solver": "adam", "tol": 0.0001, "validation_fraction": 0.1, "verbose": false, "warm_start": false}, "test/accuracy": 0.909899050775953, "test/f1_macro": 0.9060888467687004, "test/f1_micro": 0.909899050775953, "test/p_macro": 0.8943391404664559, "test/p_micro": 0.909899050775953, "test/r_macro": 0.9193101184161824, "test/r_micro": 0.909899050775953}, "lexical_relation_classification/CogALexV": {"classifier_config": {"activation": "relu", "alpha": 0.0001, "batch_size": "auto", "beta_1": 0.9, "beta_2": 0.999, "early_stopping": false, "epsilon": 1e-08, "hidden_layer_sizes": [100], "learning_rate": "constant", "learning_rate_init": 0.001, "max_fun": 15000, "max_iter": 200, "momentum": 0.9, "n_iter_no_change": 10, "nesterovs_momentum": true, "power_t": 0.5, "random_state": 0, "shuffle": true, "solver": "adam", "tol": 0.0001, "validation_fraction": 0.1, "verbose": false, "warm_start": false}, "test/accuracy": 0.861737089201878, "test/f1_macro": 0.704064732022559, "test/f1_micro": 0.8617370892018781, "test/p_macro": 0.7373270203394411, "test/p_micro": 0.861737089201878, "test/r_macro": 0.6772908912463073, "test/r_micro": 0.861737089201878}, "lexical_relation_classification/EVALution": {"classifier_config": {"activation": "relu", "alpha": 0.0001, "batch_size": "auto", "beta_1": 0.9, "beta_2": 0.999, "early_stopping": false, "epsilon": 1e-08, "hidden_layer_sizes": [100], "learning_rate": "constant", "learning_rate_init": 0.001, "max_fun": 15000, "max_iter": 200, "momentum": 0.9, "n_iter_no_change": 10, "nesterovs_momentum": true, "power_t": 0.5, "random_state": 0, "shuffle": true, "solver": "adam", "tol": 0.0001, "validation_fraction": 0.1, "verbose": false, "warm_start": false}, "test/accuracy": 0.6917659804983749, "test/f1_macro": 0.6833995231298724, "test/f1_micro": 0.6917659804983749, "test/p_macro": 0.6820705742395324, "test/p_micro": 0.6917659804983749, "test/r_macro": 0.6863826843303636, "test/r_micro": 0.6917659804983749}, "lexical_relation_classification/K&H+N": {"classifier_config": {"activation": "relu", "alpha": 0.0001, "batch_size": "auto", "beta_1": 0.9, "beta_2": 0.999, "early_stopping": false, "epsilon": 1e-08, "hidden_layer_sizes": [100], "learning_rate": "constant", "learning_rate_init": 0.001, "max_fun": 15000, "max_iter": 200, "momentum": 0.9, "n_iter_no_change": 10, "nesterovs_momentum": true, "power_t": 0.5, "random_state": 0, "shuffle": true, "solver": "adam", "tol": 0.0001, "validation_fraction": 0.1, "verbose": false, "warm_start": false}, "test/accuracy": 0.9581971204006399, "test/f1_macro": 0.8741899049737119, "test/f1_micro": 0.9581971204006399, "test/p_macro": 0.8670115001878431, "test/p_micro": 0.9581971204006399, "test/r_macro": 0.883168446080368, "test/r_micro": 0.9581971204006399}, "lexical_relation_classification/ROOT09": {"classifier_config": {"activation": "relu", "alpha": 0.0001, "batch_size": "auto", "beta_1": 0.9, "beta_2": 0.999, "early_stopping": false, "epsilon": 1e-08, "hidden_layer_sizes": [100], "learning_rate": "constant", "learning_rate_init": 0.001, "max_fun": 15000, "max_iter": 200, "momentum": 0.9, "n_iter_no_change": 10, "nesterovs_momentum": true, "power_t": 0.5, "random_state": 0, "shuffle": true, "solver": "adam", "tol": 0.0001, "validation_fraction": 0.1, "verbose": false, "warm_start": false}, "test/accuracy": 0.9015982450642431, "test/f1_macro": 0.9006321541459927, "test/f1_micro": 0.9015982450642431, "test/p_macro": 0.9002161322813497, "test/p_micro": 0.9015982450642431, "test/r_macro": 0.9011948768823131, "test/r_micro": 0.9015982450642431}}
+ {"lexical_relation_classification/BLESS": {"classifier_config": {"activation": "relu", "alpha": 0.0001, "batch_size": "auto", "beta_1": 0.9, "beta_2": 0.999, "early_stopping": false, "epsilon": 1e-08, "hidden_layer_sizes": [100], "learning_rate": "constant", "learning_rate_init": 0.001, "max_fun": 15000, "max_iter": 200, "momentum": 0.9, "n_iter_no_change": 10, "nesterovs_momentum": true, "power_t": 0.5, "random_state": 0, "shuffle": true, "solver": "adam", "tol": 0.0001, "validation_fraction": 0.1, "verbose": false, "warm_start": false}, "test/accuracy": 0.9199939731806539, "test/f1_macro": 0.9173175984713615, "test/f1_micro": 0.9199939731806539, "test/p_macro": 0.9129525907822765, "test/p_micro": 0.9199939731806539, "test/r_macro": 0.9231397069650421, "test/r_micro": 0.9199939731806539}, "lexical_relation_classification/CogALexV": {"classifier_config": {"activation": "relu", "alpha": 0.0001, "batch_size": "auto", "beta_1": 0.9, "beta_2": 0.999, "early_stopping": false, "epsilon": 1e-08, "hidden_layer_sizes": [100], "learning_rate": "constant", "learning_rate_init": 0.001, "max_fun": 15000, "max_iter": 200, "momentum": 0.9, "n_iter_no_change": 10, "nesterovs_momentum": true, "power_t": 0.5, "random_state": 0, "shuffle": true, "solver": "adam", "tol": 0.0001, "validation_fraction": 0.1, "verbose": false, "warm_start": false}, "test/accuracy": 0.8497652582159625, "test/f1_macro": 0.6744248225015879, "test/f1_micro": 0.8497652582159625, "test/p_macro": 0.7124501607954181, "test/p_micro": 0.8497652582159625, "test/r_macro": 0.6439012185599183, "test/r_micro": 0.8497652582159625}, "lexical_relation_classification/EVALution": {"classifier_config": {"activation": "relu", "alpha": 0.0001, "batch_size": "auto", "beta_1": 0.9, "beta_2": 0.999, "early_stopping": false, "epsilon": 1e-08, "hidden_layer_sizes": [100], "learning_rate": "constant", "learning_rate_init": 0.001, "max_fun": 15000, "max_iter": 200, "momentum": 0.9, "n_iter_no_change": 10, "nesterovs_momentum": true, "power_t": 0.5, "random_state": 0, "shuffle": true, "solver": "adam", "tol": 0.0001, "validation_fraction": 0.1, "verbose": false, "warm_start": false}, "test/accuracy": 0.6836403033586133, "test/f1_macro": 0.6776792144071253, "test/f1_micro": 0.6836403033586133, "test/p_macro": 0.6859955689335163, "test/p_micro": 0.6836403033586133, "test/r_macro": 0.6723026043339869, "test/r_micro": 0.6836403033586133}, "lexical_relation_classification/K&H+N": {"classifier_config": {"activation": "relu", "alpha": 0.0001, "batch_size": "auto", "beta_1": 0.9, "beta_2": 0.999, "early_stopping": false, "epsilon": 1e-08, "hidden_layer_sizes": [100], "learning_rate": "constant", "learning_rate_init": 0.001, "max_fun": 15000, "max_iter": 200, "momentum": 0.9, "n_iter_no_change": 10, "nesterovs_momentum": true, "power_t": 0.5, "random_state": 0, "shuffle": true, "solver": "adam", "tol": 0.0001, "validation_fraction": 0.1, "verbose": false, "warm_start": false}, "test/accuracy": 0.9563191208179731, "test/f1_macro": 0.8663013754934635, "test/f1_micro": 0.9563191208179731, "test/p_macro": 0.8794831771565157, "test/p_micro": 0.9563191208179731, "test/r_macro": 0.8543598117081819, "test/r_micro": 0.9563191208179731}, "lexical_relation_classification/ROOT09": {"classifier_config": {"activation": "relu", "alpha": 0.0001, "batch_size": "auto", "beta_1": 0.9, "beta_2": 0.999, "early_stopping": false, "epsilon": 1e-08, "hidden_layer_sizes": [100], "learning_rate": "constant", "learning_rate_init": 0.001, "max_fun": 15000, "max_iter": 200, "momentum": 0.9, "n_iter_no_change": 10, "nesterovs_momentum": true, "power_t": 0.5, "random_state": 0, "shuffle": true, "solver": "adam", "tol": 0.0001, "validation_fraction": 0.1, "verbose": false, "warm_start": false}, "test/accuracy": 0.9041052961454089, "test/f1_macro": 0.9040831832304929, "test/f1_micro": 0.9041052961454089, "test/p_macro": 0.9025403374928938, "test/p_micro": 0.9041052961454089, "test/r_macro": 0.9060703873054115, "test/r_micro": 0.9041052961454089}}
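In both the old and new payloads, `test/accuracy` equals `test/f1_micro` for every dataset. That is expected: in single-label multi-class classification, micro-averaged F1 reduces to accuracy, because every misclassification contributes exactly one false positive and one false negative. A self-contained check with toy labels (not the actual evaluation data):

```python
def micro_f1(y_true, y_pred):
    # Micro-averaging pools TP/FP/FN over all classes. With one label per
    # example, each error is one FP (predicted class) and one FN (true
    # class), so FP == FN and micro-F1 collapses to accuracy.
    tp = sum(t == p for t, p in zip(y_true, y_pred))
    fp = fn = len(y_true) - tp
    return 2 * tp / (2 * tp + fp + fn)

y_true = ["hyper", "mero", "random", "hyper", "coord"]
y_pred = ["hyper", "random", "random", "hyper", "mero"]
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
assert micro_f1(y_true, y_pred) == accuracy  # both 0.6 here
```

This is why the model card reports "Micro F1" while the JSON also carries a matching `test/accuracy` field; the macro-averaged F1 is the number that differs.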
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "relbert_output/ckpt/nce_semeval2012/template-d/epoch_9",
+  "_name_or_path": "roberta-large",
   "architectures": [
     "RobertaModel"
   ],
finetuning_config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "template": "Today, I finally discovered the relation between <subj> and <obj> : <subj> is the <mask> of <obj>",
+  "template": "I wasn\u2019t aware of this relationship, but I just read in the encyclopedia that <subj> is the <mask> of <obj>",
   "model": "roberta-large",
   "max_length": 64,
   "epoch": 10,
@@ -17,6 +17,7 @@
   "classification_loss": false,
   "loss_function_config": {
     "temperature": 0.05,
+    "gradient_accumulation": 1,
     "num_negative": 400,
     "num_positive": 10
   },
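The `loss_function_config` above parameterizes the NCE objective: similarities are divided by `temperature` (0.05) before the softmax over one positive and `num_negative` (400) negatives. A rough InfoNCE-style sketch of the temperature's role, illustrative only and not the relbert implementation:

```python
import math
import random

def info_nce(pos_sim, neg_sims, temperature=0.05):
    # Cross-entropy of the positive against positive + negatives,
    # with all similarities scaled by 1/temperature.
    logits = [pos_sim / temperature] + [s / temperature for s in neg_sims]
    m = max(logits)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]  # -log softmax(positive)

random.seed(0)
neg_sims = [random.uniform(-1.0, 0.5) for _ in range(400)]  # num_negative: 400
loss_sharp = info_nce(0.9, neg_sims, temperature=0.05)
loss_flat = info_nce(0.9, neg_sims, temperature=1.0)
# A low temperature sharpens the softmax, so a well-separated positive
# yields a much smaller loss than it would at temperature 1.0.
assert loss_sharp < loss_flat
```

With 400 negatives even a clearly separated positive incurs a noticeable loss at temperature 1.0, while at 0.05 the loss is near zero, which is why contrastive setups typically use small temperatures.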
relation_mapping.json CHANGED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json CHANGED
@@ -6,7 +6,7 @@
   "errors": "replace",
   "mask_token": "<mask>",
   "model_max_length": 512,
-  "name_or_path": "relbert_output/ckpt/nce_semeval2012/template-d/epoch_9",
+  "name_or_path": "roberta-large",
   "pad_token": "<pad>",
   "sep_token": "</s>",
   "special_tokens_map_file": null,