rufimelo/Legal-BERTimbau-sts-large-ma-v3
This is a sentence-transformers model: it maps sentences and paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search. rufimelo/Legal-BERTimbau-sts-large-ma-v3 is based on Legal-BERTimbau-large, which in turn derives from BERTimbau large. It is adapted to the Portuguese legal domain and trained for Semantic Textual Similarity (STS) on Portuguese datasets.
Usage (Sentence-Transformers)
Using this model is straightforward once you have sentence-transformers installed:
pip install -U sentence-transformers
Then you can use the model like this:
from sentence_transformers import SentenceTransformer
sentences = ["Isto é um exemplo", "Isto é um outro exemplo"]
model = SentenceTransformer('rufimelo/Legal-BERTimbau-sts-large-ma-v3')
embeddings = model.encode(sentences)
print(embeddings)
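Once encoded, the embeddings can be compared with cosine similarity, e.g. for semantic search. A minimal sketch using the library's util helpers; the corpus and query below are illustrative only:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('rufimelo/Legal-BERTimbau-sts-large-ma-v3')

# Illustrative corpus and query (Portuguese legal-style sentences)
corpus = ["O contrato foi rescindido por justa causa.",
          "A sentença foi proferida pelo tribunal de primeira instância."]
query = "O acordo foi terminado com justa causa."

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and each corpus sentence
print(util.cos_sim(query_embedding, corpus_embeddings))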
Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model by passing your input through the transformer and then applying mean pooling on top of the contextualized token embeddings:
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('rufimelo/Legal-BERTimbau-sts-large-ma-v3')
model = AutoModel.from_pretrained('rufimelo/Legal-BERTimbau-sts-large-ma-v3')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
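Continuing the snippet above, a quick way to compare the two sentences is the cosine similarity of their pooled embeddings:

import torch.nn.functional as F

# Cosine similarity between the two sentence embeddings computed above
similarity = F.cosine_similarity(sentence_embeddings[0], sentence_embeddings[1], dim=0)
print("Cosine similarity:", similarity.item())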
Evaluation Results (STS)
Scores are Pearson correlations on the Assin, Assin2 and stsb_multi_mt pt test sets:
Model | Assin | Assin2 | stsb_multi_mt pt | avg |
---|---|---|---|---|
Legal-BERTimbau-sts-base | 0.71457 | 0.73545 | 0.72383 | 0.72462 |
Legal-BERTimbau-sts-base-ma | 0.74874 | 0.79532 | 0.82254 | 0.78886 |
Legal-BERTimbau-sts-base-ma-v2 | 0.75481 | 0.80262 | 0.82178 | 0.79307 |
Legal-BERTimbau-base-TSDAE-sts | 0.78814 | 0.81380 | 0.75777 | 0.78657 |
Legal-BERTimbau-sts-large | 0.76629 | 0.82357 | 0.79120 | 0.79369 |
Legal-BERTimbau-sts-large-v2 | 0.76299 | 0.81121 | 0.81726 | 0.79715 |
Legal-BERTimbau-sts-large-ma | 0.76195 | 0.81622 | 0.82608 | 0.80142 |
Legal-BERTimbau-sts-large-ma-v2 | 0.7836 | 0.8462 | 0.8261 | 0.81863 |
Legal-BERTimbau-sts-large-ma-v3 | 0.7749 | 0.8470 | 0.8364 | 0.81943 |
Legal-BERTimbau-large-v2-sts | 0.71665 | 0.80106 | 0.73724 | 0.75165 |
Legal-BERTimbau-large-TSDAE-sts | 0.72376 | 0.79261 | 0.73635 | 0.75090 |
Legal-BERTimbau-large-TSDAE-sts-v2 | 0.81326 | 0.83130 | 0.786314 | 0.81029 |
Legal-BERTimbau-large-TSDAE-sts-v3 | 0.80703 | 0.82270 | 0.77638 | 0.80204 |
---------------------------------------- | ---------- | ---------- | ---------- | ---------- |
BERTimbau base Fine-tuned for STS | 0.78455 | 0.80626 | 0.82841 | 0.80640 |
BERTimbau large Fine-tuned for STS | 0.78193 | 0.81758 | 0.83784 | 0.81245 |
---------------------------------------- | ---------- | ---------- | ---------- | ---------- |
paraphrase-multilingual-mpnet-base-v2 | 0.71457 | 0.79831 | 0.83999 | 0.78429 |
paraphrase-multilingual-mpnet-base-v2 Fine-tuned with assin(s) | 0.77641 | 0.79831 | 0.84575 | 0.80682 |
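A minimal sketch of how such Pearson scores can be computed, assuming sents1, sents2 and gold_scores are placeholders for a test set's sentence pairs and human similarity labels:

from scipy.stats import pearsonr
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('rufimelo/Legal-BERTimbau-sts-large-ma-v3')

# sents1/sents2: paired sentences; gold_scores: gold similarity labels (placeholders)
emb1 = model.encode(sents1, convert_to_tensor=True)
emb2 = model.encode(sents2, convert_to_tensor=True)
pred = util.cos_sim(emb1, emb2).diagonal().cpu().tolist()

print("Pearson:", pearsonr(pred, gold_scores)[0])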
Training
rufimelo/Legal-BERTimbau-sts-large-ma-v3 is based on Legal-BERTimbau-large, which derives from BERTimbau large.
First, due to the lack of Portuguese datasets, it was trained using multilingual knowledge distillation: the teacher model was sentence-transformers/stsb-roberta-large, the source language was English, and the target language to learn was Portuguese.
It was then fine-tuned for Semantic Textual Similarity on the assin, assin2 and stsb_multi_mt pt datasets (batch size 8, 5 epochs, learning rate 1e-5).
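A sketch of these two stages with the sentence-transformers v2.x training API. The base checkpoint name, the parallel-data file and the single STS example are assumptions for illustration, and the distillation hyperparameters were not reported:

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import ParallelSentencesDataset

# Stage 1: multilingual knowledge distillation (English teacher -> Portuguese student)
teacher = SentenceTransformer('sentence-transformers/stsb-roberta-large')
student = SentenceTransformer('rufimelo/Legal-BERTimbau-large')  # assumed base checkpoint

parallel = ParallelSentencesDataset(student_model=student, teacher_model=teacher)
parallel.load_data('parallel-en-pt.tsv')  # hypothetical tab-separated English/Portuguese pairs
distill_loader = DataLoader(parallel, shuffle=True, batch_size=8)
student.fit(train_objectives=[(distill_loader, losses.MSELoss(model=student))], epochs=1)

# Stage 2: STS fine-tuning on assin, assin2 and stsb_multi_mt pt (batch 8, 5 epochs, lr 1e-5)
sts_examples = [InputExample(texts=['Isto é um exemplo', 'Isto é um outro exemplo'], label=0.8)]  # placeholder
sts_loader = DataLoader(sts_examples, shuffle=True, batch_size=8)
student.fit(train_objectives=[(sts_loader, losses.CosineSimilarityLoss(student))],
            epochs=5, optimizer_params={'lr': 1e-5})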
Full Model Architecture
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
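The same stack can be assembled explicitly from sentence-transformers modules; a minimal sketch mirroring the configuration above:

from sentence_transformers import SentenceTransformer, models

# BERT encoder: 512-token sequences, no lowercasing
word_embedding_model = models.Transformer('rufimelo/Legal-BERTimbau-sts-large-ma-v3',
                                          max_seq_length=512, do_lower_case=False)

# Mean pooling over the 1024-dimensional token embeddings
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(),
                               pooling_mode_mean_tokens=True)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model])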
Citing & Authors
If you use this work, please cite:
@inproceedings{souza2020bertimbau,
author = {F{\'a}bio Souza and
Rodrigo Nogueira and
Roberto Lotufo},
title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
year = {2020}
}
@inproceedings{fonseca2016assin,
title={ASSIN: Avaliação de similaridade semântica e inferência textual},
author={Fonseca, E and Santos, L and Criscuolo, Marcelo and Aluisio, S},
booktitle={Computational Processing of the Portuguese Language-12th International Conference, Tomar, Portugal},
pages={13--15},
year={2016}
}
@inproceedings{real2020assin,
title={The ASSIN 2 shared task: a quick overview},
author={Real, Livy and Fonseca, Erick and Oliveira, Hugo Gonçalo},
booktitle={International Conference on Computational Processing of the Portuguese Language},
pages={406--412},
year={2020},
organization={Springer}
}
@InProceedings{huggingface:dataset:stsb_multi_mt,
title = {Machine translated multilingual STS benchmark dataset.},
author={Philip May},
year={2021},
url={https://github.com/PhilipMay/stsb-multi-mt}
}