Commit bfabf0f by akdeniz27
1 Parent(s): a1d348d

Update README.md
Files changed (1): README.md (+6, -6)
README.md CHANGED
@@ -4,7 +4,7 @@ widget:
  - text: "Mustafa Kemal Atatürk 19 Mayıs 1919'da Samsun'a çıktı."
 ---
 # Turkish Named Entity Recognition (NER) Model
- This model is the fine-tuned model of "xlm-roberta-base"
+ This model is the fine-tuned version of "xlm-roberta-base"
 (a multilingual version of RoBERTa)
 using a reviewed version of well known Turkish NER dataset
 (https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt).
@@ -16,19 +16,19 @@ batch_size = 8
 label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
 max_length = 512
 learning_rate = 2e-5
- num_train_epochs = 4
+ num_train_epochs = 2
 weight_decay = 0.01
 ```
 # How to use:
 ```
 model = AutoModelForTokenClassification.from_pretrained("akdeniz27/xlm-roberta-base-turkish-ner")
 tokenizer = AutoTokenizer.from_pretrained("akdeniz27/xlm-roberta-base-turkish-ner")
- ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="none")
+ ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
 ner("<your text here>")
 ```
 Pls refer "https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html" for entity grouping with aggregation_strategy parameter.
 # Reference test results:
 * accuracy: 0.9919343118732742
- * f1: 0.945422814532762
- * precision: 0.9366551398931153
- * recall: 0.9543561819346573
+ * f1: 0.9492100796448622
+ * precision: 0.9407349896480332
+ * recall: 0.9578392621870883
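For context, the hyperparameter block in the README lists the fine-tuning settings but not the training code itself. The sketch below shows one way those values could be plugged into a Hugging Face `Trainer` setup; this is an assumption, not the author's script — dataset loading and tokenization are elided, and `output_dir` is a placeholder.

```
# Sketch only: the actual fine-tuning script is not part of this commit.
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(label_list)
)

# Hyperparameters taken from the README (num_train_epochs = 2 after this
# commit); max_length = 512 would apply when tokenizing the dataset (not shown).
training_args = TrainingArguments(
    output_dir="xlm-roberta-base-turkish-ner",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    num_train_epochs=2,
    weight_decay=0.01,
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```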
 
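The README's usage snippet omits its imports. A self-contained version of the post-commit snippet, using the example sentence from the widget, might look like the following; the printed output is illustrative of the shape returned, not an actual run.

```
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model = AutoModelForTokenClassification.from_pretrained("akdeniz27/xlm-roberta-base-turkish-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/xlm-roberta-base-turkish-ner")

# "simple" groups word-piece predictions into whole entities; "none" would
# return raw per-token labels instead (see the pipeline docs linked above).
ner = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

print(ner("Mustafa Kemal Atatürk 19 Mayıs 1919'da Samsun'a çıktı."))
# Illustrative output shape (not an actual run):
# [{'entity_group': 'PER', 'word': 'Mustafa Kemal Atatürk', ...},
#  {'entity_group': 'LOC', 'word': 'Samsun', ...}]
```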