Jia Huei Tan committed
Commit b03fd0e
1 Parent(s): c569cad

Update README

Files changed (1):
  1. README.md +43 -0

README.md CHANGED
---
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
language: en
license: mit
---

# ONNX Conversion of [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5)

- ONNX model for GPU with O4-O2 optimisation
- We exported the model with `use_raw_attention_mask=True` [due to this issue](https://github.com/microsoft/onnxruntime/issues/18945); a sketch of a comparable export recipe follows below
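
The export scripts themselves are not part of this repo. As a rough sketch of how an equivalent model could be produced, the snippet below exports the checkpoint to ONNX with `optimum` and then runs the ONNX Runtime transformer optimizer while keeping the raw attention mask. The file paths, the `num_heads`/`hidden_size` values (12/768 for a BERT-base-sized encoder), and the fp16 conversion are assumptions and may not match the exact recipe used for this checkpoint.

```python
# Hypothetical export/optimisation sketch -- not the exact commands used for this repo.
from optimum.onnxruntime import ORTModelForFeatureExtraction
from onnxruntime.transformers import optimizer
from onnxruntime.transformers.fusion_options import FusionOptions

# 1. Export the PyTorch checkpoint to a plain fp32 ONNX graph.
ORTModelForFeatureExtraction.from_pretrained(
    "BAAI/bge-base-en-v1.5", export=True
).save_pretrained("bge-base-en-v1.5-onnx")

# 2. Optimise for GPU while keeping the raw 2D attention mask
#    (works around https://github.com/microsoft/onnxruntime/issues/18945).
fusion_options = FusionOptions("bert")
fusion_options.use_raw_attention_mask(True)

optimized = optimizer.optimize_model(
    "bge-base-en-v1.5-onnx/model.onnx",  # assumed output path from step 1
    model_type="bert",
    num_heads=12,      # bge-base-en-v1.5 is a BERT-base-sized encoder
    hidden_size=768,
    optimization_options=fusion_options,
    use_gpu=True,
)
optimized.convert_float_to_float16()  # fp16 weights for GPU inference
optimized.save_model_to_file("bge-base-en-v1.5-onnx/model_optimized.onnx")
```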

## Usage

```python
import torch.nn.functional as F
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer

sentences = [
    "The llama (/ˈlɑːmə/) (Lama glama) is a domesticated South American camelid.",
    "The alpaca (Lama pacos) is a species of South American camelid mammal.",
    "The vicuña (Lama vicugna) (/vɪˈkuːnjə/) is one of the two wild South American camelids.",
]

model_name = "EmbeddedLLM/bge-base-en-v1.5-onnx-o4-o2-gpu"
device = "cuda"
provider = "CUDAExecutionProvider"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = ORTModelForFeatureExtraction.from_pretrained(
    model_name, use_io_binding=True, provider=provider, device_map=device
)
inputs = tokenizer(
    sentences,
    padding=True,
    truncation=True,
    return_tensors="pt",
    max_length=model.config.max_position_embeddings,
)
inputs = inputs.to(device)

# bge models use the [CLS] token embedding, followed by L2 normalisation.
embeddings = model(**inputs).last_hidden_state[:, 0]
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.cpu().numpy().shape)
```
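
Since the embeddings are L2-normalised, pairwise cosine similarities reduce to a dot product. For example, continuing from the snippet above:

```python
# Cosine similarity between all sentence pairs (embeddings are already unit-length).
scores = embeddings @ embeddings.T
print(scores.cpu().numpy())
```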