---
license: mit
pipeline_tag: feature-extraction
---

# bge-m3-onnx-o4

This repository contains `bge-m3-onnx-o4`, ONNX weights of the original [`BAAI/bge-m3`](https://huggingface.co/BAAI/bge-m3) model. Why is this model cool?

- [x] Multi-Functionality: It can simultaneously perform the three common retrieval functionalities of embedding models: dense retrieval, multi-vector retrieval, and sparse retrieval.
- [x] Multi-Linguality: It supports more than **100** working languages.
- [x] Multi-Granularity: It can process inputs of different granularities, from short sentences to long documents of up to **8192** tokens (see the sketch below).
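
The 8192-token limit applies at tokenization time. A minimal sketch of preparing a long document, assuming you want explicit truncation to the maximum context length (the placeholder text and variable names are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hooman650/bge-m3-onnx-o4")

# Truncate long documents to the model's 8192-token context window
encoded = tokenizer(
    ["<a long document goes here>"],
    padding=True,
    truncation=True,
    max_length=8192,
    return_tensors="pt",
)
print(encoded["input_ids"].shape)  # (batch_size, sequence_length <= 8192)
```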

## Usage

### Dense Retrieval

```bash
# For CUDA inference, install the GPU build of ONNX Runtime
pip install --upgrade-strategy eager optimum[onnxruntime-gpu]
```
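
If you do not have a CUDA-capable GPU, the CPU build of ONNX Runtime should also work; this alternative is not part of the original instructions, and you would then load the model with `provider="CPUExecutionProvider"`:

```bash
# CPU-only alternative
pip install --upgrade-strategy eager optimum[onnxruntime]
```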

```python
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer
import torch

# Load the ONNX model on the GPU along with the matching tokenizer
model = ORTModelForFeatureExtraction.from_pretrained("hooman650/bge-m3-onnx-o4", provider="CUDAExecutionProvider")
tokenizer = AutoTokenizer.from_pretrained("hooman650/bge-m3-onnx-o4")

sentences = [
    "English: The quick brown fox jumps over the lazy dog.",
    "Spanish: El rápido zorro marrón salta sobre el perro perezoso.",
    "French: Le renard brun rapide saute par-dessus le chien paresseux.",
    "German: Der schnelle braune Fuchs springt über den faulen Hund.",
    "Italian: La volpe marrone veloce salta sopra il cane pigro.",
    "Japanese: 速い茶色の狐が怠惰な犬を飛び越える。",
    "Chinese (Simplified): 快速的棕色狐狸跳过懒狗。",
    "Russian: Быстрая коричневая лиса прыгает через ленивую собаку.",
    "Arabic: الثعلب البني السريع يقفز فوق الكلب الكسول.",
    "Hindi: तेज़ भूरी लोमड़ी आलसी कुत्ते के ऊपर कूद जाती है।"
]

# Tokenize the batch and move it to the GPU
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt').to("cuda")

# Get the embeddings
out = model(**encoded_input, return_dict=True).last_hidden_state

# Take the [CLS] token embedding and L2-normalize it to obtain the dense vectors
dense_vecs = torch.nn.functional.normalize(out[:, 0], dim=-1)
```
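
Because `dense_vecs` is L2-normalized, pairwise cosine similarities reduce to a matrix product. A short follow-up sketch (the `similarity` variable is illustrative, not part of the original example):

```python
# Pairwise cosine similarity between all sentences in the batch
similarity = dense_vecs @ dense_vecs.T
print(similarity.cpu())
```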
### Multi-Vector (ColBERT)

`coming soon...`