---
license: apache-2.0
language:
- sr
metrics:
- accuracy
base_model:
- classla/bcms-bertic
library_name: transformers
tags:
- legal
---

# BERTić-COMtext-SR-legal-lemma-ekavica

**BERTić-COMtext-SR-legal-lemma-ekavica** is a variant of the [BERTić](https://huggingface.co/classla/bcms-bertic) model, fine-tuned for lemmatization tag prediction in Serbian legal texts written in the Ekavian pronunciation. The model was fine-tuned for 20 epochs on the Ekavian variant of the [COMtext.SR.legal](https://github.com/ICEF-NLP/COMtext.SR) dataset.

# Benchmarking

This model was evaluated on the task of lemmatizing Serbian legal texts. Lemmatization was performed using the predicted string edit tags, as described in this JTDH 2024 paper:

* [Lemmatizing Serbian and Croatian via String Edit Prediction](https://zenodo.org/records/13937204)

The model was compared to previous lemmatization approaches that relied on the [srLex](http://hdl.handle.net/11356/1233) inflectional lexicon:

- The [CLASSLA](http://pypi.org/project/classla/) library
- A variant of [BERTić](https://huggingface.co/classla/bcms-bertic) fine-tuned for MSD prediction using the [SETimes.SR 2.0](http://hdl.handle.net/11356/1843) corpus of newswire texts
- A [variant](https://huggingface.co/ICEF-NLP/bcms-bertic-comtext-sr-legal-msd-ekavica) of [BERTić](https://huggingface.co/classla/bcms-bertic) fine-tuned for MSD prediction using the [COMtext.SR.legal](https://github.com/ICEF-NLP/COMtext.SR) corpus of legal texts
- [SrBERTa](http://huggingface.co/nemanjaPetrovic/SrBERTa), a model specially trained on Serbian legal texts, fine-tuned for MSD prediction using the [COMtext.SR.legal](https://github.com/ICEF-NLP/COMtext.SR) corpus of legal texts

Accuracy was used as the evaluation metric, with gold tokenized text taken as input. All of the previous large language models were fine-tuned for 15 epochs. CLASSLA and BERTić-SETimes were directly tested on the entire COMtext.SR.legal.ekavica corpus.
BERTić-COMtext-SR-legal-MSD-ekavica, BERTić-COMtext-SR-legal-lemma-ekavica, and SrBERTa were fine-tuned and evaluated on the COMtext.SR.legal.ekavica corpus using 10-fold cross-validation. The code and data to run these experiments are available in the [COMtext.SR GitHub repository](https://github.com/ICEF-NLP/COMtext.SR).

## Results

| Model                                     | Lemma ACC  |
| ----------------------------------------- | ---------- |
| CLASSLA-SR                                | 0.9432     |
| BERTić-SETimes                            | 0.9649     |
| BERTić-COMtext-SR-legal-MSD-ekavica       | 0.9666     |
| SrBERTa                                   | 0.9391     |
| **BERTić-COMtext-SR-legal-lemma-ekavica** | **0.9850** |
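The model produces lemmas indirectly: it predicts a per-token string edit tag, and the tag is then applied to the word form to derive the lemma. The sketch below illustrates this decoding step with a simplified, hypothetical tag format (`KEEP`, and `DROP<n>+<suffix>` meaning "remove the last *n* characters, then append *suffix*"); the actual tag inventory used by the model and the JTDH 2024 paper may differ.

```python
def apply_edit_tag(token: str, tag: str) -> str:
    """Apply a (hypothetical) string edit tag to a word form to derive its lemma."""
    if tag == "KEEP":
        return token
    # Expected form: "DROP<n>+<suffix>", e.g. "DROP2+ti"
    head, _, suffix = tag.partition("+")
    n = int(head.removeprefix("DROP"))
    stem = token[:-n] if n > 0 else token
    return stem + suffix

# Illustrative examples on Serbian word forms (the tags shown are invented,
# not actual model predictions):
print(apply_edit_tag("zakona", "DROP1+"))   # "zakon"
print(apply_edit_tag("pisala", "DROP2+ti")) # "pisati"
print(apply_edit_tag("pravo", "KEEP"))      # "pravo"
```

Because the tagset is closed and small, lemmatization reduces to ordinary token classification, which is what the fine-tuned model performs.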