XLMali

Multilingual model, 279 million parameters

Trained on Serbian and Serbo-Croatian corpora totalling 20 billion words

Equal support for Cyrillic and Latin input!
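A quick sanity check of the parameter count (a minimal sketch; assumes the transformers and torch packages are installed and the checkpoint downloads successfully):

>>> from transformers import AutoModelForMaskedLM
>>> model = AutoModelForMaskedLM.from_pretrained('te-sla/teslaXLM')
>>> sum(p.numel() for p in model.parameters())  # expected to be roughly 279 million, per the card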

>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='te-sla/teslaXLM')
>>> unmasker("Kada bi čovek znao gde će pasti on bi<mask>.")
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM
>>> from torch import LongTensor, no_grad
>>> from scipy import spatial
>>> tokenizer = AutoTokenizer.from_pretrained('te-sla/teslaXLM')
>>> model = AutoModelForMaskedLM.from_pretrained('te-sla/teslaXLM', output_hidden_states=True)
>>> x = " pas"     # "dog"
>>> y = " mačka"   # "cat"
>>> z = " svemir"  # "universe"
>>> tensor_x = LongTensor(tokenizer.encode(x, add_special_tokens=False)).unsqueeze(0)
>>> tensor_y = LongTensor(tokenizer.encode(y, add_special_tokens=False)).unsqueeze(0)
>>> tensor_z = LongTensor(tokenizer.encode(z, add_special_tokens=False)).unsqueeze(0)
>>> model.eval()
>>> with no_grad():
...     # take the last hidden layer and mean-pool over tokens,
...     # so words that split into several subwords still yield a single vector
...     vektor_x = model(input_ids=tensor_x).hidden_states[-1].squeeze(0).mean(dim=0)
...     vektor_y = model(input_ids=tensor_y).hidden_states[-1].squeeze(0).mean(dim=0)
...     vektor_z = model(input_ids=tensor_z).hidden_states[-1].squeeze(0).mean(dim=0)
...     print(spatial.distance.cosine(vektor_x, vektor_y))
...     print(spatial.distance.cosine(vektor_x, vektor_z))
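If the embeddings capture word meaning, the first distance (pas vs. mačka, two animals) should come out noticeably smaller than the second (pas vs. svemir).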
Authors: Mihailo Škorić, Saša Petalinkar
Computation: TESLA project


This research was supported by the Science Fund of the Republic of Serbia, #7276, Text Embeddings - Serbian Language Applications - TESLA
