Update README.md
README.md
CHANGED
@@ -48,7 +48,7 @@ widget:
 ---
 
 
-# numind/NuNER-v1.0 fine-tuned on FewNERD-
+# numind/NuNER-v1.0 fine-tuned on FewNERD-coarse-supervised
 
 This is a [NuNER](https://arxiv.org/abs/2402.15343) model fine-tuned on the [FewNERD](https://huggingface.co/datasets/DFKI-SLT/few-nerd) dataset that can be used for Named Entity Recognition. The NuNER model uses [RoBERTa-base](https://huggingface.co/FacebookAI/roberta-base) as the backbone encoder and was trained on the [NuNER dataset](https://huggingface.co/datasets/numind/NuNER), a large and diverse dataset of 1M sentences synthetically labeled by gpt-3.5-turbo-0301. This further pre-training phase allowed the generation of high-quality token embeddings, a good starting point for fine-tuning on more specialized datasets.
 
@@ -82,7 +82,7 @@ The model was fine-tuned as a regular BERT-based model for NER task using Huggin
 
 >>> classifier = pipeline(
     "ner",
-    model="guishe/nuner-
+    model="guishe/nuner-v1_fewnerd_coarse_super",
     grouped_entities=True
 )
 >>> classifier(text)
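For readers who want to try the model this commit points at, here is a minimal, self-contained sketch of the usage snippet the second hunk edits. It assumes the `transformers` library is installed; the model id `guishe/nuner-v1_fewnerd_coarse_super` comes from the line added above, while the input sentence is a hypothetical placeholder rather than text from the README.

```python
# Minimal usage sketch, assuming the `transformers` library is available.
# The model id comes from the line added in this commit; the example text
# below is a hypothetical placeholder.
from transformers import pipeline

classifier = pipeline(
    "ner",
    model="guishe/nuner-v1_fewnerd_coarse_super",
    grouped_entities=True,  # merge sub-word tokens back into whole entity spans
)

text = "NuNER was fine-tuned on FewNERD for Named Entity Recognition."  # hypothetical input
print(classifier(text))
```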