
mteb-pt/average_fasttext_cc.pt.300

This is an adaptation of the pre-trained Portuguese fastText word embeddings to a sentence-transformers model.

The original pre-trained word embeddings can be found at: https://fasttext.cc/docs/en/crawl-vectors.html.

This model maps sentences and paragraphs to a 300-dimensional dense vector space and can be used for tasks like clustering or semantic search.

Usage (Sentence-Transformers)

Using this model is straightforward once you have sentence-transformers installed:

pip install -U sentence-transformers

Then you can use the model like this:

from sentence_transformers import SentenceTransformer
# Portuguese example sentences, since the underlying embeddings are Portuguese
sentences = ["Esta é uma frase de exemplo", "Cada frase é convertida"]

model = SentenceTransformer('mteb-pt/average_fasttext_cc.pt.300')
embeddings = model.encode(sentences)
print(embeddings)
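
Because all embeddings live in a single 300-dimensional space, they can be compared with cosine similarity, which is the basis of the semantic search use case mentioned above. A minimal sketch using sentence-transformers' util.cos_sim (the corpus and query below are made-up examples):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('mteb-pt/average_fasttext_cc.pt.300')

# Small corpus to search over (illustrative sentences only)
corpus = [
    "O gato dorme no sofá.",
    "Receitas rápidas para o jantar.",
    "Como treinar um modelo de aprendizado de máquina.",
]
corpus_embeddings = model.encode(corpus)

query = "tutoriais de machine learning"
query_embedding = model.encode(query)

# Rank corpus entries by cosine similarity to the query
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
for sentence, score in sorted(zip(corpus, scores.tolist()), key=lambda pair: -pair[1]):
    print(f"{score:.3f}  {sentence}")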

Evaluation Results

For an automated evaluation of this model, see the Portuguese MTEB Leaderboard: mteb-pt/leaderboard

Full Model Architecture

SentenceTransformer(
  (0): WordEmbeddings(
    (emb_layer): Embedding(2000001, 300)
  )
  (1): Pooling({'word_embedding_dimension': 300, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
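
The architecture is a static word-embedding lookup followed by mean pooling (pooling_mode_mean_tokens: True), so a sentence embedding is simply the average of the vectors of its tokens. A minimal sketch that verifies this, assuming whitespace tokenization and that every token is in the fastText vocabulary:

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('mteb-pt/average_fasttext_cc.pt.300')

sentence = "um exemplo simples"
sentence_embedding = model.encode(sentence)

# Encoding each token on its own yields that token's word vector;
# with mean pooling, their average should match the sentence embedding.
token_embeddings = model.encode(sentence.split())
manual_mean = np.mean(token_embeddings, axis=0)

print(np.allclose(sentence_embedding, manual_mean, atol=1e-5))  # expected: True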

Citing & Authors

@inproceedings{grave2018learning,
    title={Learning Word Vectors for 157 Languages},
    author={Grave, Edouard and Bojanowski, Piotr and Gupta, Prakhar and Joulin, Armand and Mikolov, Tomas},
    booktitle={Proceedings of the International Conference on Language Resources and Evaluation (LREC 2018)},
    year={2018}
}