asahi417 committed
Commit cb89512
1 Parent(s): fd521f3

model update

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -159,20 +159,20 @@ RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on
  [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
  Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
  It achieves the following results on the relation understanding tasks:
- - Analogy Question ([full result](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-a-nce/raw/main/analogy.json)):
+ - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-a-nce/raw/main/analogy.json)):
  - Accuracy on SAT (full): 0.7112299465240641
  - Accuracy on SAT: 0.7062314540059347
  - Accuracy on BATS: 0.782657031684269
  - Accuracy on U2: 0.6754385964912281
  - Accuracy on U4: 0.6921296296296297
  - Accuracy on Google: 0.936
- - Lexical Relation Classification ([full result](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-a-nce/raw/main/classification.json))):
+ - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-a-nce/raw/main/classification.json))):
  - Micro F1 score on BLESS: 0.9124604489980412
  - Micro F1 score on CogALexV: 0.8607981220657277
  - Micro F1 score on EVALution: 0.6863488624052004
  - Micro F1 score on K&H+N: 0.9499895666689852
  - Micro F1 score on ROOT09: 0.9075524913820119
- - Relation Mapping ([full result](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-a-nce/raw/main/relation_mapping.json)):
+ - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-a-nce/raw/main/relation_mapping.json)):
  - Accuracy on Relation Mapping: None


@@ -214,7 +214,7 @@ The following hyperparameters were used during training:
  The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-a-nce/raw/main/trainer_config.json).

  ### Reference
- If you use any resource from T-NER, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/).
+ If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/).

  ```
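The README above points to the [RelBERT](https://github.com/asahi417/relbert) library for fine-tuning and inference. For orientation, below is a minimal usage sketch for this checkpoint; it follows the usage documented in the RelBERT repository and model cards (the `RelBERT` class and its `get_embedding` method), so treat it as a sketch and check the repository if the API has changed.

```python
# Minimal sketch: embed a word pair with this checkpoint via the relbert library.
# Assumes `pip install relbert`; API taken from the RelBERT repository/model cards.
from relbert import RelBERT

# Checkpoint name taken from the result URLs in the diff above.
model = RelBERT("relbert/relbert-roberta-large-semeval2012-average-no-mask-prompt-a-nce")

# Map a (head, tail) word pair to a single fixed-size relation embedding.
vector = model.get_embedding(["Tokyo", "Japan"])
print(len(vector))  # 1024 dimensions for the roberta-large backbone
```

Pair embeddings produced this way can be compared to each other (e.g. by cosine similarity), which is the mechanism behind the analogy-question accuracies reported in the README.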