This is a multilingual NER system trained with a Frustratingly Easy Domain Adaptation (FEDA) architecture. It is based on LaBSE and supports several tagsets, all using the IOBES format (illustrated below):

  1. Wikiann (LOC, PER, ORG)
  2. SlavNER 19/21 (EVT, LOC, ORG, PER, PRO)
  3. SlavNER 17 (LOC, MISC, ORG, PER)
  4. SSJ500k (LOC, MISC, ORG, PER)
  5. KPWr (EVT, LOC, ORG, PER, PRO)
  6. CNEC (LOC, ORG, MEDIA, ART, PER, TIME)
  7. Turku (DATE, EVT, LOC, ORG, PER, PRO, TIME)

PER: person, LOC: location, ORG: organization, EVT: event, PRO: product, MISC: miscellaneous, MEDIA: media, ART: artifact, TIME: time, DATE: date
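
For reference, the IOBES scheme marks single-token entities with S-, multi-token entities with B-/I-/E- (begin/inside/end), and non-entity tokens with O. The snippet below is an invented illustration of the format, not an excerpt from any of the training corpora:

```python
# Invented example of IOBES-tagged tokens (not from the training data).
tagged = [
    ("Janez", "B-PER"), ("Novak", "E-PER"),   # multi-token person entity
    ("visited", "O"),                         # outside any entity
    ("Ljubljana", "S-LOC"),                   # single-token location entity
    (".", "O"),
]
for token, tag in tagged:
    print(f"{token}\t{tag}")
```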

You can select which tagset to use in the output by configuring the model. The model also treats uppercase words differently.
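
Below is a minimal usage sketch, assuming the checkpoint can be loaded as a standard Hugging Face token-classification model; the model id, the example sentence, and the use of `AutoModelForTokenClassification` are assumptions rather than part of this card. For the FEDA-specific way of selecting which tagset's labels are returned, refer to the GitHub repository linked below.

```python
# Minimal sketch, not the official usage: assumes the checkpoint loads with
# AutoModelForTokenClassification and that the model id below is replaced
# with this repository's actual id.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_id = "EMBEDDIA/NER_FEDA"  # placeholder id, replace with the real one

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

text = "Janez Novak visited Ljubljana."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Map each sub-token to its highest-scoring label from the model config.
pred_ids = logits.argmax(dim=-1).squeeze(0).tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"].squeeze(0))
for token, pred in zip(tokens, pred_ids):
    print(f"{token}\t{model.config.id2label[pred]}")
```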

More information about the model can be found in the paper (https://aclanthology.org/2021.bsnlp-1.12.pdf) and in the GitHub repository (https://github.com/EMBEDDIA/NER_FEDA).
