drvenabili committed
Commit 81d93b7
1 Parent(s): 9b1dfcc

Update README.md

Files changed (1)
  1. README.md +5 -3
README.md CHANGED
@@ -19,7 +19,9 @@ pipeline_tag: token-classification
 
 This is a fine-tuned model on the NER task. The original model is Turku NLP's [bert-base-finnish-uncased-v1](https://huggingface.co/TurkuNLP/bert-base-finnish-uncased-v1), and the fine-tuning dataset is Turku NLP's [turku_ner_corpus](https://huggingface.co/datasets/turku_ner_corpus/).
 
-Please mention the original dataset if you use this model:
+The model is released under CC-BY-SA 4.0.
+
+Please mention the training dataset if you use this model:
 
 ```bibtex
 @inproceedings{luoma-etal-2020-broad,
@@ -105,8 +107,8 @@ Or Python API:
 ```
 from transformers import AutoModelForTokenClassification, AutoTokenizer
 
-model = AutoModelForTokenClassification.from_pretrained("drvenabili/bert-base-finnish-uncased-ner")
-tokenizer = AutoTokenizer.from_pretrained("drvenabili/bert-base-finnish-uncased-ner")
+model = AutoModelForTokenClassification.from_pretrained("iguanodon-ai/bert-base-finnish-uncased-ner")
+tokenizer = AutoTokenizer.from_pretrained("iguanodon-ai/bert-base-finnish-uncased-ner")
 
 inputs = tokenizer("Asun Brysselissä, Euroopan pääkaupungissa.", return_tensors="pt")
 outputs = model(**inputs)