model update

README.md
---
datasets:
- relbert/semeval2012_relational_similarity
model-index:
- name: relbert/relbert-roberta-base-nce-semeval2012-mask
  results:
  - task:
      name: Relation Mapping
      …
      value: 0.8973205317539628
---

# relbert/relbert-roberta-base-nce-semeval2012-mask

RelBERT based on [roberta-base](https://huggingface.co/roberta-base), fine-tuned on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity) (see the [`relbert`](https://github.com/asahi417/relbert) repository for more details on fine-tuning).
This model achieves the following results on the relation-understanding tasks:

- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-semeval2012-mask/raw/main/analogy.forward.json)):
    - Accuracy on SAT (full): 0.6203208556149733
    - Accuracy on SAT: 0.6201780415430267
    - Accuracy on BATS: 0.7209560867148416
    - Accuracy on ConceptNet Analogy: 0.438758389261745
    - Accuracy on T-Rex Analogy: 0.6666666666666666
    - Accuracy on NELL-ONE Analogy: 0.6716666666666666
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-semeval2012-mask/raw/main/classification.json)):
    - Micro F1 score on BLESS: 0.9142684948018682
    - Micro F1 score on CogALexV: 0.8577464788732394
    - Micro F1 score on EVALution: 0.6706392199349945
    - Micro F1 score on K&H+N: 0.9408777909160465
    - Micro F1 score on ROOT09: 0.8987778125979317
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-semeval2012-mask/raw/main/relation_mapping.json)):
    - Accuracy on Relation Mapping: 0.8355555555555556
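
The analogy scores above are obtained by embedding each word pair and picking the candidate pair whose embedding is closest to the query pair's. A minimal sketch of that scoring step, using made-up 3-dimensional vectors in place of real RelBERT embeddings (roberta-base embeddings are 768-dimensional):

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Made-up embeddings standing in for model.get_embedding(...) outputs.
query = [0.9, 0.1, 0.2]                      # embedding of the query pair
candidates = {
    ('Tokyo', 'Japan'):  [0.8, 0.2, 0.1],    # similar relation -> high cosine
    ('apple', 'banana'): [0.1, 0.9, 0.3],    # different relation -> low cosine
}

# The predicted answer is the candidate pair closest to the query pair.
best = max(candidates, key=lambda pair: cosine(query, candidates[pair]))
print(best)  # ('Tokyo', 'Japan')
```

This is roughly how the analogy benchmarks are scored; the actual evaluation repeats this selection for every question in each dataset.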

Install the [`relbert`](https://github.com/asahi417/relbert) library via `pip install relbert`, and activate the model as below.

```python
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-nce-semeval2012-mask")
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape of (n_dim, )
```
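
The returned vector is only useful in comparison: pairs holding the same relation should have similar embeddings. The lexical-relation-classification scores above come from a classifier trained on such pair embeddings; the nearest-prototype rule below is only a toy stand-in for that, with made-up vectors (swap in real `model.get_embedding` outputs in practice):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Made-up prototype vectors per relation type, e.g. the average of
# embeddings of known pairs holding that relation.
prototypes = {
    'capital-of': [0.9, 0.1, 0.1],
    'hypernymy':  [0.1, 0.9, 0.1],
}

def classify(pair_embedding):
    # Assign the relation whose prototype is most similar to the embedding.
    return max(prototypes, key=lambda rel: cosine(pair_embedding, prototypes[rel]))

label = classify([0.8, 0.2, 0.05])  # stand-in for an embedded ('Paris', 'France')
print(label)  # capital-of
```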

Training hyperparameters (excerpt):

- loss_function_config: {'temperature': 0.05, 'num_negative': 400, 'num_positive': 10}
- augment_negative_by_positive: True

See the full configuration in the [config file](https://huggingface.co/relbert/relbert-roberta-base-nce-semeval2012-mask/raw/main/finetuning_config.json).
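
The `loss_function_config` entries parameterize an NCE-style contrastive objective: for each anchor pair, up to `num_positive` positive pairs are pulled close and `num_negative` negative pairs pushed away, with similarities sharpened by the `temperature`. A toy sketch of such an objective (an illustration, not the exact `relbert` implementation):

```python
import math

def nce_loss(pos_sims, neg_sims, temperature=0.05):
    # pos_sims / neg_sims: similarities between the anchor pair's
    # embedding and the positive / negative pair embeddings.
    pos = [math.exp(s / temperature) for s in pos_sims]
    neg = [math.exp(s / temperature) for s in neg_sims]
    denom = sum(pos) + sum(neg)
    # Average -log p(positive | all candidates) over the positives.
    return -sum(math.log(p / denom) for p in pos) / len(pos)

# When positives score higher than negatives, the loss is small.
low = nce_loss([0.9, 0.8], [0.1, 0.0, -0.2])
high = nce_loss([0.1, 0.0], [0.9, 0.8, 0.7])
print(low < high)  # True
```

The low temperature (0.05) makes the softmax sharp, so even small similarity gaps between positives and negatives dominate the loss.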

### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.emnlp-main.712/).