---
language:
- si
license:
- cc-by-sa-4.0
task_categories:
- other
task_ids:
- lemmatization
- part-of-speech
tags:
- structure-prediction
- normalization
- tokenization
---

The dataset contains 6273 training samples, 762 validation samples and 749 test samples. Each sample represents a sentence and includes the following features: sentence ID ('sent\_id'), list of tokens ('tokens'), list of normalised word forms ('norms'), list of lemmas ('lemmas'), list of Multext-East tags ('xpos\_tags'), list of morphological features ('feats'), and list of UPOS tags ('upos\_tags'), which are encoded as class labels.
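
As a minimal sketch of how the per-sentence features can be inspected with the Hugging Face `datasets` library (the repository ID below is a placeholder and must be replaced with this dataset's actual ID):

```python
from datasets import load_dataset

# Placeholder repository ID; substitute the actual dataset ID from this card
ds = load_dataset("user/dataset-name")

# The card describes train / validation / test splits
print(ds)

# Each sample is one sentence with token-aligned lists
sample = ds["train"][0]
print(sample["sent_id"])    # sentence ID
print(sample["tokens"])     # list of tokens
print(sample["norms"])      # normalised word forms
print(sample["lemmas"])     # lemmas
print(sample["xpos_tags"])  # Multext-East tags
print(sample["feats"])      # morphological features
print(sample["upos_tags"])  # UPOS tags, encoded as class labels
```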