asahi417 committed on
Commit 3fa9213
1 Parent(s): edaaaeb

model update

Files changed (1)
  1. README.md +16 -4
README.md CHANGED
@@ -91,7 +91,7 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.5531135531135531
+      value: 0.3145973154362416
   - task:
       name: Analogy Questions (TREX Analogy)
       type: multiple-choice-qa
@@ -102,7 +102,18 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.8288690476190477
+      value: 0.5683060109289617
+  - task:
+      name: Analogy Questions (NELL-ONE Analogy)
+      type: multiple-choice-qa
+    dataset:
+      name: NELL-ONE Analogy
+      args: relbert/analogy_questions
+      type: analogy-questions
+    metrics:
+    - name: Accuracy
+      type: accuracy
+      value: 0.61
   - task:
       name: Lexical Relation Classification (BLESS)
       type: classification
@@ -186,8 +197,9 @@ This model achieves the following results on the relation understanding tasks:
 - Accuracy on U2: 0.618421052631579
 - Accuracy on U4: 0.5949074074074074
 - Accuracy on Google: 0.902
-- Accuracy on ConceptNet Analogy: 0.5531135531135531
-- Accuracy on T-Rex Analogy: 0.8288690476190477
+- Accuracy on ConceptNet Analogy: 0.3145973154362416
+- Accuracy on T-Rex Analogy: 0.5683060109289617
+- Accuracy on NELL-ONE Analogy: 0.61
 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-large-triplet-e-semeval2012/raw/main/classification.json)):
   - Micro F1 score on BLESS: 0.9026668675606448
   - Micro F1 score on CogALexV: 0.8607981220657277
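
For reference, a minimal sketch of how accuracies on these analogy benchmarks could be computed from RelBERT embeddings: load the relevant split of `relbert/analogy_questions`, embed the query pair and each candidate pair, and pick the candidate closest in cosine similarity. The `RelBERT.get_embedding` call follows the usage shown in RelBERT model cards; the dataset field names (`stem`, `choice`, `answer`) and the config name used below are assumptions, and the numbers reported in this commit come from the project's own evaluation scripts, not this snippet.

```python
# Hedged sketch: nearest-neighbour scoring of a multiple-choice analogy benchmark.
# Assumptions: the `relbert` package exposes RelBERT.get_embedding, and the
# relbert/analogy_questions dataset has `stem`, `choice`, `answer` fields;
# the config name passed at the bottom is illustrative only.
import numpy as np
from datasets import load_dataset
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-large-triplet-e-semeval2012")

def cosine(u, v):
    u, v = np.asarray(u), np.asarray(v)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy_accuracy(config: str, split: str = "test") -> float:
    data = load_dataset("relbert/analogy_questions", config, split=split)
    correct = 0
    for row in data:
        query = model.get_embedding(row["stem"])                 # embedding of the query word pair
        candidates = [model.get_embedding(c) for c in row["choice"]]
        prediction = int(np.argmax([cosine(query, c) for c in candidates]))
        correct += int(prediction == row["answer"])
    return correct / len(data)

print(analogy_accuracy("nell_relational_similarity"))  # config name is an assumption
```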