---
language:
  - afr
  - aii
  - akk
  - amh
  - apc
  - apu
  - aqz
  - ara
  - arr
  - bam
  - bej
  - bel
  - ben
  - bho
  - bre
  - bua
  - bul
  - cat
  - ceb
  - ces
  - ckt
  - cop
  - cym
  - dan
  - deu
  - ekk
  - ell
  - eme
  - eng
  - eus
  - fao
  - fas
  - fin
  - fra
  - fro
  - fry
  - gla
  - gle
  - glg
  - glv
  - got
  - grn
  - gsw
  - gun
  - heb
  - hf
  - hin
  - hit
  - hrv
  - hsb
  - hun
  - hye
  - ind
  - isl
  - ita
  - jaa
  - jav
  - jpn
  - kfm
  - koi
  - kom
  - kor
  - krl
  - lat
  - lav
  - lij
  - lit
  - mar
  - mdf
  - mlt
  - mpu
  - myu
  - myv
  - nap
  - nds
  - nld
  - nor
  - nyq
  - olo
  - orv
  - otk
  - pcm
  - pol
  - pom
  - por
  - qub
  - quc
  - ron
  - rus
  - sah
  - san
  - sjo
  - slk
  - slv
  - sme
  - sms
  - soj
  - spa
  - sqi
  - srp
  - swe
  - tam
  - tat
  - tel
  - tgl
  - tha
  - tpn
  - tur
  - uig
  - ukr
  - urb
  - urd
  - vie
  - wbp
  - wol
  - xnr
  - xum
  - yor
  - ypk
  - yue
  - zho
tags:
  - aymara
  - lima
  - tokenization
  - tagging
  - lemmatizing
  - parsing
  - multilingual
license: mit
datasets:
  - universal_dependencies
---

# LIMA libtorch-based models

LIMA is a multilingual linguistic analyzer developed by the LASTI laboratory (a French acronym for Text and Image Semantic Analysis Laboratory) at CEA LIST. LIMA is Free Software, available under the MIT license.

LIMA has state-of-the-art performance for more than 60 languages thanks to its recent deep-learning (neural network) based modules. It also includes a very powerful rule-based mechanism called ModEx, which allows information (entities, relations, events…) to be extracted quickly in new domains where no annotated data exists.

These models are for the latest iteration of LIMA, which uses libtorch, the C++ implementation of PyTorch. This version is neither complete nor fully final, but it already performs better than previous versions.

Read the LIMA documentation for installation and usage instructions.
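As a quick illustration, below is a minimal sketch of analyzing a sentence from Python with these models. The `aymara` package name, the `aymara.lima.Lima` class, the `ud-eng` model identifier, and the token attribute names are assumptions based on the LIMA project's examples and may differ from the current API; the LIMA documentation remains the authoritative reference.

```python
# Minimal sketch, not the authoritative API: the package name (aymara),
# the Lima class, the "ud-eng" model identifier, and the token attributes
# below are assumptions; check the LIMA documentation for exact usage.
import aymara.lima

# Load a Universal Dependencies English model (assumed identifier).
lima = aymara.lima.Lima("ud-eng")

# Run the full pipeline: tokenization, tagging, lemmatization, parsing.
doc = lima("LIMA is a multilingual linguistic analyzer.")

# Inspect the analysis token by token (attribute names are illustrative).
for token in doc:
    print(token.text, token.lemma)
```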

