asahi417 committed
Commit 96fa3ee
1 Parent(s): 7f7df17

Update README.md

Files changed (1): README.md (+8 -8)

README.md CHANGED
@@ -2,7 +2,7 @@
 datasets:
 - relbert/t_rex_relational_similarity
 model-index:
-- name: relbert/relbert-roberta-base-nce-d-t-rex
+- name: relbert/relbert-roberta-base-nce-t-rex
   results:
   - task:
       name: Relation Mapping
@@ -186,11 +186,11 @@ model-index:
       value: 0.8753206261467538

 ---
-# relbert/relbert-roberta-base-nce-d-t-rex
+# relbert/relbert-roberta-base-nce-t-rex

 RelBERT based on [roberta-base](https://huggingface.co/roberta-base) fine-tuned on [relbert/t_rex_relational_similarity](https://huggingface.co/datasets/relbert/t_rex_relational_similarity) (see the [`relbert`](https://github.com/asahi417/relbert) for more detail of fine-tuning).
 This model achieves the following results on the relation understanding tasks:
-- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-d-t-rex/raw/main/analogy.forward.json)):
+- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-t-rex/raw/main/analogy.forward.json)):
     - Accuracy on SAT (full): 0.4090909090909091
     - Accuracy on SAT: 0.41543026706231456
     - Accuracy on BATS: 0.5186214563646471
@@ -200,13 +200,13 @@ This model achieves the following results on the relation understanding tasks:
     - Accuracy on ConceptNet Analogy: 0.13926174496644295
     - Accuracy on T-Rex Analogy: 0.6830601092896175
     - Accuracy on NELL-ONE Analogy: 0.7033333333333334
-- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-d-t-rex/raw/main/classification.json)):
+- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-t-rex/raw/main/classification.json)):
     - Micro F1 score on BLESS: 0.9050775952990809
     - Micro F1 score on CogALexV: 0.8086854460093896
     - Micro F1 score on EVALution: 0.6256771397616468
     - Micro F1 score on K&H+N: 0.952145788412047
     - Micro F1 score on ROOT09: 0.8790347853337511
-- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-d-t-rex/raw/main/relation_mapping.json)):
+- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-base-nce-t-rex/raw/main/relation_mapping.json)):
     - Accuracy on Relation Mapping: 0.7073412698412699


@@ -218,7 +218,7 @@ pip install relbert
 and activate model as below.
 ```python
 from relbert import RelBERT
-model = RelBERT("relbert/relbert-roberta-base-nce-d-t-rex")
+model = RelBERT("relbert/relbert-roberta-base-nce-t-rex")
 vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (n_dim, )
 ```

@@ -242,14 +242,14 @@ vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (n_dim, )
 - loss_function_config: {'temperature': 0.05, 'num_negative': 300, 'num_positive': 30}
 - augment_negative_by_positive: True

-See the full configuration at [config file](https://huggingface.co/relbert/relbert-roberta-base-nce-d-t-rex/raw/main/finetuning_config.json).
+See the full configuration at [config file](https://huggingface.co/relbert/relbert-roberta-base-nce-t-rex/raw/main/finetuning_config.json).

 ### Reference
 If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.emnlp-main.712/).

 ```

-@inproceedings{ushio-etal-2021-distilling,
+@inproceedings{ushio-etal-2021istilling,
     title = "Distilling Relation Embeddings from Pretrained Language Models",
     author = "Ushio, Asahi and
       Camacho-Collados, Jose and
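Beyond the single call shown in the usage hunk above, the returned vector can be compared across word pairs. The snippet below is an illustrative sketch rather than part of the model card: it assumes only what the README shows (`get_embedding` on one word pair returns a 1-D vector of shape `(n_dim,)`) plus `numpy`, and the pair `['Paris', 'France']` is an arbitrary example input.

```python
# Sketch: comparing two relation embeddings from the model above with cosine
# similarity. Assumes `relbert` and `numpy` are installed.
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/relbert-roberta-base-nce-t-rex")

# Relation embeddings for two word pairs that share the capital-of relation.
v_tokyo = np.array(model.get_embedding(['Tokyo', 'Japan']))
v_paris = np.array(model.get_embedding(['Paris', 'France']))

# Cosine similarity: close to 1 when the two pairs encode a similar relation.
cos = float(v_tokyo @ v_paris / (np.linalg.norm(v_tokyo) * np.linalg.norm(v_paris)))
print(f"cosine similarity: {cos:.4f}")
```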
 
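The fine-tuning hunk lists `loss_function_config: {'temperature': 0.05, 'num_negative': 300, 'num_positive': 30}`. As a rough illustration of what an NCE-style contrastive objective with a temperature term can look like, here is a generic sketch; it is not the `relbert` training code, and the function name, tensor shapes, the use of `torch`, and the 768-dimensional toy vectors are all assumptions made for the example.

```python
# Generic sketch of an InfoNCE-style objective with a temperature term,
# illustrating what hyperparameters such as temperature=0.05 control.
# NOT the relbert implementation; names and shapes are illustrative.
import torch
import torch.nn.functional as F

def nce_style_loss(anchor: torch.Tensor,
                   positives: torch.Tensor,
                   negatives: torch.Tensor,
                   temperature: float = 0.05) -> torch.Tensor:
    """anchor: (d,), positives: (n_pos, d), negatives: (n_neg, d)."""
    anchor = F.normalize(anchor, dim=-1)
    positives = F.normalize(positives, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    # Scaled cosine similarities; a lower temperature sharpens the softmax.
    pos_logits = positives @ anchor / temperature   # (n_pos,)
    neg_logits = negatives @ anchor / temperature   # (n_neg,)
    # Each positive competes against the pooled positives and negatives.
    denom = torch.logsumexp(torch.cat([pos_logits, neg_logits]), dim=0)
    return (denom - pos_logits).mean()

# Toy usage with random vectors, mirroring num_positive=30 / num_negative=300.
loss = nce_style_loss(torch.randn(768), torch.randn(30, 768), torch.randn(300, 768))
print(loss.item())
```

A smaller temperature sharpens the softmax over the 300 sampled negatives, which is why values such as 0.05 are common in contrastive setups like this one.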