Inference API widget does not work anymore for token classification of POS
Hey everyone,
I've noticed some trouble with the API inference widget. While it works for NER models (flair/ner-english-fast, flair/ner-multi, flair/ner-english, etc.), the inference for POS models (flair/upos-multi, flair/upos-english-fast, flair/upos-english, etc.) always shows the same warning: [No token was detected]
I mainly checked Flair models, but I haven't found any model elsewhere for which the API inference widget works for POS classification either.
Do you think it can be fixed?
Thanks.
Thanks for reporting! I will have a look at the inference API for the model hub (see here) and try to find a fix for it :)
I debugged it a bit:
from flair.data import Sentence
from flair.models import SequenceTagger

# Load a PoS tagging model (the tagger was missing in the original snippet)
tagger = SequenceTagger.load("flair/upos-multi")

sentence = Sentence("Ich liebe Berlin, as they say")
tagger.predict(sentence, label_name="predicted")
In this case, sentence.get_spans("predicted") for the PoS tagging model would be empty, whereas the information is available in sentence.get_labels("predicted"):
['Token[0]: "Ich"'/'PRON' (1.0),
'Token[1]: "liebe"'/'VERB' (0.9983),
'Token[2]: "Berlin"'/'PROPN' (0.895),
'Token[3]: ","'/'PUNCT' (1.0),
'Token[4]: "as"'/'SCONJ' (0.9972),
'Token[5]: "they"'/'PRON' (0.9991),
'Token[6]: "say"'/'VERB' (0.977)]
So I think we need to distinguish between PoS tagging and NER models by adding a check like if not sentence.get_spans("predicted") and sentence.get_labels("predicted"): in the inference API code.
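To illustrate the proposed check, here is a minimal sketch of the fallback logic using plain Python data structures rather than actual Flair objects. The helper name `extract_entities` and the dict shapes are hypothetical, chosen only to mirror the debug output above; the real inference API code would operate on Flair's `Span` and `Label` objects instead.

```python
# Hypothetical sketch: prefer span-level results (NER models), fall back to
# token-level labels (PoS models) when no spans were produced.

def extract_entities(spans, token_labels):
    """Return span results if present, otherwise token-level labels."""
    if spans:
        return [
            {"entity_group": s["tag"], "word": s["text"], "score": s["score"]}
            for s in spans
        ]
    # PoS taggers label individual tokens, so get_spans() comes back empty;
    # instead of reporting "no token was detected", use the per-token labels.
    return [
        {"entity_group": l["value"], "word": l["token"], "score": l["score"]}
        for l in token_labels
    ]

# Example mirroring the debug output above for a PoS model (no spans):
pos_labels = [
    {"token": "Ich", "value": "PRON", "score": 1.0},
    {"token": "liebe", "value": "VERB", "score": 0.9983},
]
print(extract_entities([], pos_labels))
```

The same code path would leave NER models unchanged, since their non-empty span list takes the first branch.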
@alanakbik Do you know if there's a faster or more elegant way to check this? I can prepare a PR at the HF inference API repo to get this fix deployed :)