
mteb-pt/average_fasttext_wiki.pt.300

This is an adaptation of pre-trained Portuguese fastText word embeddings to a sentence-transformers model.

The original pre-trained word embeddings can be found at: https://fasttext.cc/docs/en/pretrained-vectors.html.

This model maps sentences & paragraphs to a 300-dimensional dense vector space and can be used for tasks like clustering or semantic search.

Usage (Sentence-Transformers)

Using this model is straightforward once you have sentence-transformers installed:

pip install -U sentence-transformers

Then you can use the model like this:

from sentence_transformers import SentenceTransformer

# The underlying embeddings were trained on Portuguese Wikipedia,
# so Portuguese input sentences are used here
sentences = ["Esta é uma frase de exemplo", "Cada frase é convertida"]

model = SentenceTransformer('mteb-pt/average_fasttext_wiki.pt.300')
embeddings = model.encode(sentences)
print(embeddings)
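
As a minimal sketch of the semantic-search use case mentioned above (the corpus and query sentences here are purely illustrative), you can rank corpus entries by cosine similarity to a query:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('mteb-pt/average_fasttext_wiki.pt.300')

# Illustrative Portuguese corpus and query
corpus = [
    "O gato dorme no sofá",
    "O mercado de ações caiu hoje",
    "Receita de bolo de cenoura",
]
query = "Como fazer um bolo"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode(query)

# Rank corpus sentences by cosine similarity to the query
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
for sentence, score in sorted(zip(corpus, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {sentence}")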

Evaluation Results

For an automated evaluation of this model, see the Portuguese MTEB Leaderboard: mteb-pt/leaderboard
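
To reproduce scores locally, a sketch using the mteb package could look like the following; selecting tasks by language via task_langs is illustrative, and you may instead pass an explicit tasks=[...] list taken from the leaderboard:

from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('mteb-pt/average_fasttext_wiki.pt.300')

# Evaluate on tasks that include Portuguese; narrow with tasks=[...] if desired
evaluation = MTEB(task_langs=["pt"])
evaluation.run(model, output_folder="results/average_fasttext_wiki.pt.300")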

Full Model Architecture

SentenceTransformer(
  (0): WordEmbeddings(
    (emb_layer): Embedding(592109, 300)
  )
  (1): Pooling({'word_embedding_dimension': 300, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
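
The Pooling module above uses mean pooling: a sentence embedding is simply the average of the sentence's word vectors. A minimal sketch of the equivalent computation (the token vectors below are random stand-ins for illustration):

import torch

# Stand-in embeddings for a 3-token sentence, dimension 300 as in this model
token_embeddings = torch.randn(3, 300)

# Mean pooling: averaging over the token axis yields one 300-dim sentence vector
sentence_embedding = token_embeddings.mean(dim=0)
print(sentence_embedding.shape)  # torch.Size([300])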

Citing & Authors

@article{bojanowski2017enriching,
    title={Enriching Word Vectors with Subword Information},
    author={Bojanowski, Piotr and Grave, Edouard and Joulin, Armand and Mikolov, Tomas},
    journal={Transactions of the Association for Computational Linguistics},
    volume={5},
    year={2017},
    issn={2307-387X},
    pages={135--146}
}