Based on our paper, we release a new model trained on the LFT dataset.

Note: We use BPEmb subword embeddings instead of the combination of Wikipedia, Common Crawl and character embeddings used in the paper, in order to save space and reduce training/inference time.
| Dataset \ Run | Run 1 | Run 2 | Run 3† | Avg. |
| ------------- | ----- | ----- | ------ | ---- |
The paper reported an averaged F1-score of 77.51.

† denotes the run selected for upload.