Update README.md
README.md (CHANGED)
````diff
@@ -4,7 +4,7 @@ widget:
 - text: ""
 ---
 # Hungarian Named Entity Recognition (NER) Model
-This model is the fine-tuned model of SZTAKI-HLT/hubert-base-cc
+This model is the fine-tuned version of "SZTAKI-HLT/hubert-base-cc"
 using the famous WikiANN dataset presented
 in the "Cross-lingual Name Tagging and Linking for 282 Languages" paper.
 
@@ -25,9 +25,9 @@ model = AutoModelForTokenClassification.from_pretrained("akdeniz27/bert-base-hun
 tokenizer = AutoTokenizer.from_pretrained("akdeniz27/bert-base-hungarian-cased-ner")
 ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="first")
 ner("<your text here>")
-
-# Pls refer "https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html" for entity grouping with aggregation_strategy parameter.
 ```
+Please refer to "https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html" for entity grouping with the aggregation_strategy parameter.
+
 # Reference test results:
 * accuracy: 0.9774538310923768
 * f1: 0.9462099085573904
````
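Below is a self-contained sketch of the usage snippet shown in the diff above. The checkpoint ID passed to `AutoModelForTokenClassification` is assumed to be the same `akdeniz27/bert-base-hungarian-cased-ner` as on the tokenizer line, since the model line is truncated in the hunk header; the example sentence is an illustrative addition, not part of the model card.

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

# Checkpoint taken from the tokenizer line of the README snippet; the model line
# is truncated in the diff, so the same ID is assumed for the model as well.
checkpoint = "akdeniz27/bert-base-hungarian-cased-ner"

model = AutoModelForTokenClassification.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# aggregation_strategy="first" groups sub-word tokens back into whole words and
# gives each word the label of its first sub-token; the pipeline docs linked in
# the card describe the other strategies ("simple", "average", "max").
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="first")

# Illustrative Hungarian sentence (not taken from the model card).
print(ner("Kovács Péter Budapesten dolgozik a Magyar Tudományos Akadémián."))
```

With an aggregation strategy set, each returned dict holds an `entity_group` (PER, ORG, or LOC for WikiANN-style labels), a confidence `score`, the grouped `word`, and character `start`/`end` offsets.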
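Since the reference accuracy and F1 above are reported against WikiANN, a minimal sketch of pulling the Hungarian portion of that dataset is shown below; it assumes the `datasets` library and that the data is published under the `wikiann` dataset ID with a `hu` language config.

```python
from datasets import load_dataset

# Hungarian portion of WikiANN ("Cross-lingual Name Tagging and Linking for
# 282 Languages"); the "hu" config name is an assumption based on the usual
# ISO 639-1 naming of the WikiANN language configs.
wikiann_hu = load_dataset("wikiann", "hu")

print(wikiann_hu)                   # train / validation / test splits
print(wikiann_hu["validation"][0])  # one sentence with its tokens and ner_tags
```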