model update
README.md
CHANGED
@@ -103,6 +103,17 @@ model-index:
       - name: Accuracy
         type: accuracy
         value: 0.5191256830601093
+  - task:
+      name: Analogy Questions (NELL-ONE Analogy)
+      type: multiple-choice-qa
+      dataset:
+        name: NELL-ONE Analogy
+        args: relbert/analogy_questions
+        type: analogy-questions
+      metrics:
+      - name: Accuracy
+        type: accuracy
+        value: 0.5916666666666667
   - task:
       name: Lexical Relation Classification (BLESS)
       type: classification
@@ -188,6 +199,7 @@ This model achieves the following results on the relation understanding tasks:
 - Accuracy on Google: 0.908
 - Accuracy on ConceptNet Analogy: 0.2986577181208054
 - Accuracy on T-Rex Analogy: 0.5191256830601093
+- Accuracy on NELL-ONE Analogy: 0.5916666666666667
 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-large-triplet-a-semeval2012/raw/main/classification.json)):
 - Micro F1 score on BLESS: 0.9059816182009944
 - Micro F1 score on CogALexV: 0.8582159624413146
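The accuracy figures added in this diff are for multiple-choice analogy questions, where each candidate word pair is scored by the similarity of its relation embedding to the stem pair's. The sketch below shows that scoring loop in minimal form; it is an illustration only, using toy NumPy vectors and a hypothetical `embed(pair)` callable rather than RelBERT's actual API or the real `relbert/analogy_questions` dataset.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy_accuracy(questions, embed):
    """Fraction of questions where the candidate pair whose relation
    embedding is closest (by cosine) to the stem pair's is the answer."""
    correct = 0
    for q in questions:
        stem_vec = embed(q["stem"])
        scores = [cosine(stem_vec, embed(c)) for c in q["choice"]]
        if int(np.argmax(scores)) == q["answer"]:
            correct += 1
    return correct / len(questions)

# Toy stand-in for a relation embedder: relation = tail vector - head vector.
# (RelBERT instead encodes the pair jointly; this is only for illustration.)
vocab = {
    "tokyo": np.array([1.0, 0.0]), "japan": np.array([1.0, 1.0]),
    "paris": np.array([0.0, 0.0]), "france": np.array([0.0, 1.0]),
    "apple": np.array([2.0, 0.0]), "fruit": np.array([0.0, 2.0]),
}

def toy_embed(pair):
    head, tail = pair
    return vocab[tail] - vocab[head]

# Hypothetical question in the stem/choice/answer shape used by
# analogy-question benchmarks.
questions = [
    {"stem": ["tokyo", "japan"],
     "choice": [["paris", "france"], ["apple", "fruit"]],
     "answer": 0},
]
print(analogy_accuracy(questions, toy_embed))  # -> 1.0
```

A reported value such as 0.5916666666666667 is simply this ratio over the benchmark's question set (e.g. 71/120 correct gives roughly that figure for a 120-question set, though the actual NELL-ONE split size is not stated here).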