---
license: mit
language:
- en
- az
base_model:
- FacebookAI/xlm-roberta-base
pipeline_tag: sentence-similarity
---

# XLM-RoBERTa model for English and Azerbaijani

This is a sentence-transformers model based on FacebookAI/xlm-roberta-base. It maps English and Azerbaijani sentences to 1024-dimensional dense vectors for tasks such as sentence similarity.

## Usage (Sentence-Transformers)

Using this model is easiest with [sentence-transformers](https://www.sbert.net/) installed:

```
pip install -U sentence-transformers
```

Then the model can be loaded and used like this:

```python
from sentence_transformers import SentenceTransformer

# Azerbaijani example sentences ("This is an example sentence", "This sentence is an example")
sentences = ['Bu nümunə cümlədir', 'Bu cümlə bir nümunədir']

model = SentenceTransformer('LocalDoc/xlm-roberta-AZ')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without sentence-transformers, the model can be used by passing the input through the transformer and applying mean pooling over the token embeddings:

```python
from transformers import AutoTokenizer, AutoModel
import torch


def mean_pooling(model_output, attention_mask):
    """Average the token embeddings, ignoring padding via the attention mask."""
    token_embeddings = model_output[0]  # first element contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Azerbaijani example sentences ("This is an example sentence", "This sentence is an example")
sentences = ['Bu nümunə cümlədir', 'Bu cümlə bir nümunədir']

tokenizer = AutoTokenizer.from_pretrained('LocalDoc/xlm-roberta-AZ')
model = AutoModel.from_pretrained('LocalDoc/xlm-roberta-AZ')

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    model_output = model(**encoded_input)

sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

# SentenceTransformer Model Architecture

```python
SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
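
## Computing Similarity Scores

The usage sections above stop at printing raw embeddings. As a minimal sketch of the intended sentence-similarity use case, the embeddings can be scored with `sentence_transformers.util.cos_sim`, a standard helper in the sentence-transformers library; the example reuses the Azerbaijani sentence pair from above.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('LocalDoc/xlm-roberta-AZ')

# Azerbaijani example sentences from the usage sections above
sentences = ['Bu nümunə cümlədir', 'Bu cümlə bir nümunədir']

# convert_to_tensor=True returns a torch tensor, which util.cos_sim accepts directly
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the two sentence embeddings
score = util.cos_sim(embeddings[0], embeddings[1])
print(f"Cosine similarity: {score.item():.4f}")
```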
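
For the plain HuggingFace Transformers path, the same score can be computed by L2-normalizing the pooled embeddings and taking a dot product, which is equivalent to cosine similarity. This sketch continues from the `sentence_embeddings` tensor produced in the Transformers snippet above.

```python
import torch.nn.functional as F

# L2-normalize the pooled embeddings so the dot product equals cosine similarity
normalized = F.normalize(sentence_embeddings, p=2, dim=1)

# (n_sentences x n_sentences) matrix of pairwise cosine similarities
similarity_matrix = normalized @ normalized.T
print(similarity_matrix)
```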