
Greek Media SLF (Sentence-Longformer)

This is a sentence-transformers model based on the Greek Media Longformer model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

Usage (Sentence-Transformers)

Using this model is straightforward once you have sentence-transformers installed:

pip install -U sentence-transformers

Then you can use the model like this:

from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('dimitriz/st-greek-media-longformer-4096')
embeddings = model.encode(sentences)
print(embeddings)
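
Since the embeddings are intended for tasks like semantic search, the following is a minimal sketch of ranking a small corpus against a query with cosine similarity; the query and corpus strings are hypothetical placeholders:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('dimitriz/st-greek-media-longformer-4096')

# Hypothetical query and corpus, for illustration only
query = "An example query sentence"
corpus = ["A candidate document", "Another candidate document"]

query_embedding = model.encode(query, convert_to_tensor=True)
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# Cosine similarity between the query and each corpus entry
scores = util.cos_sim(query_embedding, corpus_embeddings)
print(scores)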

Usage (HuggingFace Transformers)

Without sentence-transformers, you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

from transformers import AutoTokenizer, AutoModel
import torch


#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('dimitriz/st-greek-media-longformer-4096')
model = AutoModel.from_pretrained('dimitriz/st-greek-media-longformer-4096')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)

Evaluation Results

For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net

Training

The model was trained on a custom dataset containing triplets from the combined Greek 'internet', 'social-media' and 'press' domains, described in the DACL paper.

  • The dataset was created by sampling triplets of sentences from the same domain, where the first two sentences are more similar to each other than either is to the third.
  • The training objective was to maximize the similarity between the first two sentences and minimize the similarity between the first and the third sentence.
  • The model was trained for 3 epochs with a batch size of 2 and a maximum sequence length of 4096 tokens.
  • The model was trained on a single NVIDIA RTX A6000 GPU with 48GB of memory.

The model was trained with the parameters:

DataLoader:

torch.utils.data.dataloader.DataLoader of length 172897 with parameters:

{'batch_size': 1, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}

Loss:

sentence_transformers.losses.TripletLoss.TripletLoss with parameters:

{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
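
With the Euclidean distance metric and a margin of 5, this corresponds to the standard triplet objective loss(a, p, n) = max(‖a − p‖₂ − ‖a − n‖₂ + 5, 0), where a, p and n are the embeddings of the anchor (first), positive (second) and negative (third) sentence of a triplet.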

Parameters of the fit()-Method:

{
    "epochs": 3,
    "evaluation_steps": 1000,
    "evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 17290,
    "weight_decay": 0.01
}
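
These settings correspond to a standard sentence-transformers training loop. A minimal sketch of how they might be wired together is shown below; the triplet examples are hypothetical placeholders rather than the actual dataset, and the original run started from the underlying Greek Media Longformer base model:

from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Hypothetical anchor / positive / negative triplets, for illustration only
train_examples = [
    InputExample(texts=["anchor sentence", "similar sentence", "dissimilar sentence"]),
]

# The original run started from the Greek Media Longformer base model;
# the released checkpoint is used here only to keep the sketch self-contained.
model = SentenceTransformer('dimitriz/st-greek-media-longformer-4096')

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=1)
train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=3,
    warmup_steps=17290,
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)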

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 4096, 'do_lower_case': False}) with Transformer model: LongformerModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)

Citing & Authors

The model was officially released with the article "From Pre-training to Meta-Learning: A journey in Low-Resource-Language Representation Learning" by Dimitrios Zaikis and Ioannis Vlahavas, published in IEEE Access.

If you use the model, please cite the following:


@ARTICLE{10288436,
    author =  {Zaikis, Dimitrios and Vlahavas, Ioannis},
    journal = {IEEE Access},
    title =   {From Pre-training to Meta-Learning: A journey in Low-Resource-Language Representation Learning},
    year =    {2023},
    volume =  {},
    number =  {},
    pages =   {1-1},
    doi =     {10.1109/ACCESS.2023.3326337}
  }

Evaluation results

Self-reported results on the all_custom_greek_media_triplets dataset:

  • accuracy_cosinus: 0.943
  • accuracy_euclidean: 0.943
  • accuracy_manhattan: 0.943