model update
README.md
CHANGED
@@ -14,7 +14,7 @@ model-index:
       metrics:
       - name: Accuracy
         type: accuracy
-        value: 0.
+        value: 0.9469642857142857
   - task:
       name: Analogy Questions (SAT full)
       type: multiple-choice-qa
@@ -173,7 +173,7 @@ It achieves the following results on the relation understanding tasks:
 - Micro F1 score on K&H+N: 0.9636920080684427
 - Micro F1 score on ROOT09: 0.9222814164838609
 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-mask-prompt-c-nce/raw/main/relation_mapping.json)):
-- Accuracy on Relation Mapping: 0.
+- Accuracy on Relation Mapping: 0.9469642857142857


 ### Usage
|