  Cross Encoding
    Model: MiniLM
    Lang: IT

Model description

This is a MiniLMv2 [1] model for the Italian language, obtained using mmarco-mMiniLMv2-L12-H384-v1 as a starting point and focused on the Italian language by modifying the embedding layer (as in [2], computing document-level frequencies over the Wikipedia dataset).
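The vocabulary-adaptation step can be illustrated with a minimal sketch of the document-frequency idea from [2]: count, for each token, how many documents it occurs in, then keep the most frequent tokens as the reduced vocabulary. The function names and the toy corpus below are illustrative, not the actual training code.

```python
from collections import Counter

def document_frequencies(documents, tokenize):
    """Count, for each token, in how many documents it appears (not raw counts)."""
    df = Counter()
    for doc in documents:
        df.update(set(tokenize(doc)))  # set(): one vote per document
    return df

def keep_top_tokens(df, vocab_size):
    """Keep the vocab_size tokens with the highest document frequency."""
    return [tok for tok, _ in df.most_common(vocab_size)]

# Toy corpus standing in for the Italian Wikipedia dump
docs = ["la capitale d'italia è roma", "il po è il fiume più lungo"]
df = document_frequencies(docs, str.split)
kept = keep_top_tokens(df, 5)
```

In the real procedure the kept tokens determine which rows of the multilingual embedding matrix are retained, which is how the vocabulary shrinks to the ~30K tokens reported below.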

The resulting model has 33M parameters, a vocabulary of 30,498 tokens, and a size of ~130 MB.
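As a cross-encoder, the model scores a (query, passage) pair jointly, which makes it suitable for re-ranking Italian retrieval results. Below is a minimal usage sketch with the standard Hugging Face transformers API; the model id is assumed from this card's repository name, and the example sentences are illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Model id assumed from this card's repository
model_id = "osiria/minilm-l12-h384-italian-cross-encoder"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

query = "Qual è la capitale d'Italia?"
passages = [
    "Roma è la capitale della Repubblica Italiana.",
    "Il Po è il fiume più lungo d'Italia.",
]

# The cross-encoder reads query and passage together and emits one relevance logit per pair
inputs = tokenizer([query] * len(passages), passages,
                   padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)

# Higher score = more relevant to the query
ranked = sorted(zip(passages, scores.tolist()), key=lambda x: -x[1])
```

The same pattern works with the sentence-transformers `CrossEncoder` wrapper, which handles the pairing and batching internally.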


[1] https://arxiv.org/abs/2012.15828

[2] https://arxiv.org/abs/2010.05609


The model is released under the MIT license.
