asahi417 committed on
Commit f35527e
1 Parent(s): 053807c

model update

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -79,7 +79,7 @@ widget:
 # tner/roberta-large-tweetner7-2020-selflabel2020-concat
 
 This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the
-[tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset (`train` split). This model is fine-tuned on a self-labeled dataset, which is the `extra_None` split of the [tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset annotated by [None](https://huggingface.co/None-tweetner7-2020). Please check [https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper#model-fine-tuning-self-labeling](https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper#model-fine-tuning-self-labeling) for more detail on reproducing the model.
+[tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset (`train` split). This model is fine-tuned on a self-labeled dataset, which is the `extra_2020` split of the [tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset annotated by [tner/roberta-large](https://huggingface.co/tner/roberta-large-tweetner7-2020). Please check [https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper#model-fine-tuning-self-labeling](https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper#model-fine-tuning-self-labeling) for more detail on reproducing the model.
 Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
 for more detail). It achieves the following results on the test set of 2021:
 - F1 (micro): 0.6545742216194834
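
With this change the README names the exact checkpoint on the Hub. Below is a minimal usage sketch, assuming T-NER's documented `TransformersNER` quick-start API (`pip install tner`); the example sentence is illustrative and is not part of this commit.

```python
# Minimal sketch (assumes `pip install tner`): load the checkpoint that this
# commit's README describes and run NER inference on one illustrative sentence.
from tner import TransformersNER

# Repo id taken from the README heading in the diff above.
model = TransformersNER("tner/roberta-large-tweetner7-2020-selflabel2020-concat")

# predict() takes a list of texts; the sentence below is an arbitrary example.
print(model.predict(["Jacob Collier is a Grammy awarded English artist from London"]))
```

The same checkpoint should also load through the vanilla `transformers` token-classification pipeline, since it is a standard fine-tuned roberta-large model.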