asahi417 committed
Commit f20aba7
1 Parent(s): 339baee

model update

Files changed (1)
README.md +1 -1
README.md CHANGED
@@ -79,7 +79,7 @@ widget:
 # tner/roberta-large-tweetner7-2020-selflabel2021-concat
 
 This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the
-[tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset (`train` split). This model is fine-tuned on self-labeled dataset which is the `extra_2021` split of the [tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) annotated by [tner/roberta-large](https://huggingface.co/tner/tner/roberta-large-tweetner7-2020)). Please check [https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper#model-fine-tuning-self-labeling](https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper#model-fine-tuning-self-labeling) for more detail of reproducing the model.
+[tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset (`train` split). This model is fine-tuned on self-labeled dataset which is the `extra_None` split of the [tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) annotated by [None](https://huggingface.co/None-tweetner7-2020)). Please check [https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper#model-fine-tuning-self-labeling](https://github.com/asahi417/tner/tree/master/examples/tweetner7_paper#model-fine-tuning-self-labeling) for more detail of reproducing the model.
 Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
 for more detail). It achieves the following results on the test set of 2021:
 - F1 (micro): 0.6451758087201125
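
For reference, a minimal sketch of running the model described in this card with the [T-NER](https://github.com/asahi417/tner) library, following the usage pattern shown on other tner model cards. The `TransformersNER` class and its `predict` method are taken from the T-NER README; the example sentence is illustrative only.

```python
# pip install tner
from tner import TransformersNER

# Load the fine-tuned checkpoint from the Hugging Face Hub
# (model name taken from the card above).
model = TransformersNER("tner/roberta-large-tweetner7-2020-selflabel2021-concat")

# Run NER over a raw sentence; the output contains predicted
# entity spans and types (per the T-NER README usage example).
output = model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
print(output)
```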