Tokenizer

#1
by CliffUnger - opened

Hi! Sometimes "tokens-like-this" is tokenized as a single token, and the corresponding tag is a single "O" or "ENT-TYPE"; other times it is tokenized separately as "tokens" "like" "this", and there are three corresponding tags instead of one. So I'm asking: why is the same form of a piece of a sentence tokenized differently?

I mean, sometimes an entity "like-this" is annotated entirely as one token with B-, and sometimes it is rightly annotated separately with B- I-.

Hi, different language models have different tokenizers, and different tokenizers can behave differently, as you described. This is a problem for sequence tagging because the original labels depend on the original token split (pre-tokenization), so we need to align the pre-tokenized input/labels to the language model's tokenizer. Below is a link to the slides of our EACL 2021 paper, where you can find a more detailed explanation of the token-alignment mismatch, and a simple remedy implemented in our library (https://github.com/asahi417/tner).

https://www.slideshare.net/asahiushio1/202104-eacl-tner-an-allround-python-library-for-transformerbased-named-entity-recognition
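The alignment remedy described above can be illustrated with a minimal, tokenizer-agnostic sketch. The entity label and the `subwords_per_word` input are made up for illustration; in practice they would come from a real tokenizer (e.g. the per-word pieces a subword tokenizer produces). The convention here, assigning the B- tag to the first piece and I- to the continuation pieces, is one common choice, not the only one:

```python
def align_tags(words, tags, subwords_per_word):
    """Expand one tag per word into one tag per subword piece.

    The first piece keeps the original tag; continuation pieces of a
    B- word become I- so the split word stays a single entity span.
    """
    aligned = []
    for tag, pieces in zip(tags, subwords_per_word):
        for i, _ in enumerate(pieces):
            if i == 0:
                aligned.append(tag)
            elif tag.startswith("B-"):
                aligned.append("I-" + tag[2:])
            else:
                aligned.append(tag)  # I- and O carry over unchanged
            # (alternatively, some setups mask continuation pieces entirely)
    return aligned

words = ["tokens-like-this", "here"]
tags = ["B-ENT", "O"]
# one tokenizer keeps the word whole, another splits it into five pieces
whole = [["tokens-like-this"], ["here"]]
split = [["tokens", "-", "like", "-", "this"], ["here"]]

print(align_tags(words, tags, whole))
# ['B-ENT', 'O']
print(align_tags(words, tags, split))
# ['B-ENT', 'I-ENT', 'I-ENT', 'I-ENT', 'I-ENT', 'O']
```

Either way the entity boundaries survive the split, which is what the training labels need.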

Thanks for your reply and for the resources you shared! I'm trying to train spaCy's spancat component on this dataset, and its default tokenizer always splits "token-like-this" into five tokens [token, -, like, -, this], so I'm writing a function that aligns the input tokens and tags to match the spancat tokenizer.
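Since spancat training data is expressed as labeled character spans rather than per-token tags, one way to sidestep the token-split mismatch is to convert the word-level BIO tags into character offsets first. This is only a sketch under the assumption that the original words are joined by single spaces; the function name is illustrative, not part of the spaCy API:

```python
def bio_to_char_spans(words, tags):
    """Convert parallel word/BIO-tag lists into (start, end, label)
    character spans over the space-joined text (end is exclusive)."""
    # character offsets of each word in " ".join(words)
    offsets, pos = [], 0
    for w in words:
        offsets.append((pos, pos + len(w)))
        pos += len(w) + 1  # +1 for the joining space

    spans = []
    start = end = label = None
    for (s, e), tag in zip(offsets, tags):
        if tag.startswith("B-"):
            if label is not None:        # close any open span
                spans.append((start, end, label))
            start, end, label = s, e, tag[2:]
        elif tag.startswith("I-") and label is not None:
            end = e                      # extend the current span
        else:                            # "O" closes any open span
            if label is not None:
                spans.append((start, end, label))
            label = None
    if label is not None:
        spans.append((start, end, label))
    return spans

words = ["a", "token-like-this", "ends", "here"]
tags = ["O", "B-ENT", "I-ENT", "O"]
print(bio_to_char_spans(words, tags))
# [(2, 22, 'ENT')]  -> "token-like-this ends"
```

Character spans are stable no matter how the downstream tokenizer splits the text, so they can then be mapped onto spaCy `Doc` tokens.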
P.S. Great project! I love open source :)
