Performance

Opened by Lingomat

I was looking forward to the performance of the 256-dimension embeddings, but I found inference to be an order of magnitude slower than other small models on the leaderboard, such as BAAI/bge-small-en-v1.5. Is this normal?

Presumably the normalisation steps shown in the example Sentence Transformers code don't account for that difference?

Nomic AI org

nomic-embed-text-v1.5 has 137M parameters while BGE-small has 33M, so inference for BGE-small will be faster. The advantage of v1.5 is that you can slice the output embeddings down from 768 dimensions to smaller sizes with minimal loss in quality, which makes the resulting embedding more versatile.
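
For example, truncating to 256 dimensions with Sentence Transformers might look like the sketch below. The sentences are placeholders, and the task prefix and layer-norm-then-slice-then-renormalize sequence follow the pattern in the model card's example code:

```python
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer

# Placeholder inputs; nomic embedding models expect a task prefix
# such as "search_document: " or "search_query: " on each text.
sentences = [
    "search_document: Embeddings can be truncated to smaller dimensions.",
    "search_document: Matryoshka training preserves quality at lower dims.",
]

model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)
embeddings = model.encode(sentences, convert_to_tensor=True)  # shape: (n, 768)

# Layer-norm, slice to the target size, then re-normalize so the
# truncated vectors are unit length again for cosine similarity.
matryoshka_dim = 256
embeddings = F.layer_norm(embeddings, normalized_shape=(embeddings.shape[1],))
embeddings = embeddings[:, :matryoshka_dim]
embeddings = F.normalize(embeddings, p=2, dim=1)  # shape: (n, 256)
```

Note that slicing only shrinks the stored vectors; the forward pass still runs the full 137M-parameter model, so encoding speed is unchanged.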

Thanks for clearing that up.

Lingomat changed discussion status to closed
