---
language:
  - pt
thumbnail: Portuguese SBERT for the Legal Domain
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - transformers
datasets:
  - assin
  - assin2
---

# rufimelo/Legal-SBERTimbau-large

This is a sentence-transformers model: it maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search. Legal-SBERTimbau-large is based on Legal-BERTimbau-large, which derives from BERTimbau Large, and is adapted to the Portuguese legal domain.

## Usage (Sentence-Transformers)

Using this model is straightforward once you have sentence-transformers installed:

```bash
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["Isto é um exemplo", "Isto é um outro exemplo"]

model = SentenceTransformer('rufimelo/Legal-SBERTimbau-large')
embeddings = model.encode(sentences)
print(embeddings)
```
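
Since the embeddings share one vector space, semantic search reduces to cosine similarity between a query and a corpus. A minimal sketch (the corpus and query below are made-up examples, not part of the model):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('rufimelo/Legal-SBERTimbau-large')

# Made-up corpus and query, for illustration only
corpus = ["Isto é um exemplo", "Isto é um outro exemplo"]
query = "Isto é um exemplo"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every corpus sentence
print(util.cos_sim(query_embedding, corpus_embeddings))
```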

## Usage (HuggingFace Transformers)

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from the HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-SBERTimbau-large')
model = AutoModel.from_pretrained('rufimelo/Legal-SBERTimbau-large')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```
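
The pooled embeddings can then be compared directly. A minimal sketch continuing from the snippet above, using L2 normalization so that the dot product equals cosine similarity:

```python
import torch.nn.functional as F

# Normalize embeddings so the dot product equals cosine similarity
normalized = F.normalize(sentence_embeddings, p=2, dim=1)
print(normalized[0] @ normalized[1])  # similarity of the two example sentences
```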

## Evaluation Results STS

| Model | Dataset | Pearson Correlation |
|---|---|---|
| Legal-SBERTimbau-large | Assin | 0.766293861 |
| Legal-SBERTimbau-large | Assin2 | 0.823565322 |
| paraphrase-multilingual-mpnet-base-v2 | Assin | 0.743740222 |
| paraphrase-multilingual-mpnet-base-v2 | Assin2 | 0.823565322 |
| paraphrase-multilingual-mpnet-base-v2 | stsb_multi_mt pt | 0.83999 |
| paraphrase-multilingual-mpnet-base-v2 fine-tuned with assin(s) | Assin | 0.77641 |
| paraphrase-multilingual-mpnet-base-v2 fine-tuned with assin(s) | Assin2 | 0.79831 |
| paraphrase-multilingual-mpnet-base-v2 fine-tuned with assin(s) | stsb_multi_mt pt | 0.84575 |
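
For reference, Pearson correlations like those above are typically obtained by correlating predicted cosine similarities with the gold similarity scores of a test split. A minimal sketch, assuming `pairs` holds (sentence1, sentence2, gold_score) triples from assin (the two triples below are placeholders):

```python
from scipy.stats import pearsonr
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('rufimelo/Legal-SBERTimbau-large')

# Placeholder triples; in practice, iterate over the assin/assin2 test split
pairs = [
    ("Isto é um exemplo", "Isto é um outro exemplo", 4.0),
    ("Isto é um exemplo", "Cada frase é convertida", 1.0),
]

emb1 = model.encode([s1 for s1, _, _ in pairs], convert_to_tensor=True)
emb2 = model.encode([s2 for _, s2, _ in pairs], convert_to_tensor=True)
predicted = util.cos_sim(emb1, emb2).diagonal().tolist()
gold = [score for _, _, score in pairs]

print(pearsonr(predicted, gold))
```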

## Training

Legal-SBERTimbau-large is based on Legal-BERTimbau-large, which derives from BERTimbau Large. It was first trained for Natural Language Inference (NLI), a task chosen because of the scarcity of available Portuguese data, and was then fine-tuned on the assin and assin2 datasets.
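
A fine-tuning stage of this kind is commonly run with the sentence-transformers `fit` loop and a cosine-similarity objective. A minimal sketch, assuming assin-style sentence pairs with similarity labels rescaled to [0, 1] (the example pair and hyperparameters are illustrative, not the ones actually used here):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Load the checkpoint to fine-tune
model = SentenceTransformer('rufimelo/Legal-SBERTimbau-large')

# Illustrative pair; real training iterates over the assin/assin2 training splits
train_examples = [
    InputExample(texts=["Isto é um exemplo", "Isto é um outro exemplo"], label=0.8),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=1)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```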

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
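
Note the `max_seq_length` of 75: longer legal passages are truncated at encode time. If needed, the limit can be raised after loading, up to BERT's 512-token maximum; a minimal sketch:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('rufimelo/Legal-SBERTimbau-large')
print(model.max_seq_length)  # 75, per the architecture above

# Allow longer inputs (BERT supports up to 512 tokens)
model.max_seq_length = 512
```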

## Citing & Authors