prithivida committed on
Commit 92abd3f
1 Parent(s): 18555d3

Update README.md

Files changed (1)
  1. README.md +6 -6
README.md CHANGED

@@ -44,9 +44,9 @@ pipeline_tag: sentence-similarity
   - [With Sentence Transformers:](#with-sentence-transformers)
   - [With Huggingface Transformers:](#with-huggingface-transformers)
   - [FAQs](#faqs)
-  - [How can we run these models with out heavy torch dependency?](#how-can-we-run-these-models-with-out-heavy-torch-dependency)
-  - [How do I optimise vector index cost?](#how-do-i-optimise-vector-index-cost)
-  - [How do I offer hybrid search to address Vocabulary Mismatch Problem?](#how-do-i-offer)
+  - [How can I reduce overall inference cost ?](#how-can-i-reduce-overall-inference-cost)
+  - [How do I reduce vector storage cost?](#how-do-i-reduce-vector-storage-cost)
+  - [How do I offer hybrid search to improve accuracy?](#how-do-i-offer-hybrid-search-to-improve-accuracy)
   - [Why not run MTEB?](#why-not-run-mteb)
   - [Roadmap](#roadmap)
   - [Notes on Reproducing:](#notes-on-reproducing)

@@ -143,13 +143,13 @@ for query, query_embedding in zip(queries, query_embeddings):

 # FAQS

-#### How can we run these models with out heavy torch dependency?
+#### How can I reduce overall inference cost ?
 - You can use ONNX flavours of these models via [FlashRetrieve](https://github.com/PrithivirajDamodaran/FlashRetrieve) library.

-#### How do I optimise vector index cost ?
+#### How do I reduce vector storage cost ?
 [Use Binary and Scalar Quantisation](https://huggingface.co/blog/embedding-quantization)

-<h4>How do I offer hybrid search to address Vocabulary Mismatch Problem?</h4>
+#### How do I offer hybrid search to improve accuracy ?
 MIRACL paper shows simply combining BM25 is a good starting point for a Hybrid option:
 The below numbers are with mDPR model, but miniMiracle_te_v1 should give a even better hybrid performance.
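The "Binary and Scalar Quantisation" answer added in this commit can be sketched in plain NumPy: threshold each float dimension at zero and pack the resulting bits, shrinking a float32 index 32x. This is a minimal illustration of the idea, not code from the linked blog post; the function name `binary_quantize` is ours.

```python
import numpy as np

def binary_quantize(embeddings: np.ndarray) -> np.ndarray:
    """Binary-quantize float embeddings: 1 bit per dimension instead of 32.

    Each dimension is thresholded at 0 and the bits are packed into uint8
    bytes, so an (n, 384) float32 matrix becomes (n, 48) uint8 — 32x smaller.
    Similarity can then be approximated with fast Hamming distance.
    """
    bits = (np.asarray(embeddings) > 0).astype(np.uint8)
    return np.packbits(bits, axis=-1)
```

For example, `binary_quantize(np.random.randn(10, 384))` yields a `(10, 48)` uint8 array.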
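The hybrid-search answer ("simply combining BM25" with dense scores, per the MIRACL reference) is commonly implemented as a normalized weighted sum of the two score lists. A minimal sketch under that assumption — the `alpha` weight and min-max normalization are our illustrative choices, not anything prescribed by the README:

```python
import numpy as np

def hybrid_scores(bm25_scores, dense_scores, alpha: float = 0.5) -> np.ndarray:
    """Fuse sparse (BM25) and dense retrieval scores for the same candidates.

    Both score lists are min-max normalized to [0, 1] so their scales are
    comparable, then mixed: alpha weights BM25, (1 - alpha) the dense model.
    """
    def minmax(x):
        x = np.asarray(x, dtype=np.float64)
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    return alpha * minmax(bm25_scores) + (1 - alpha) * minmax(dense_scores)
```

With `alpha=0.5` a document ranked highly by either retriever stays competitive, which is why the fused list tends to beat either retriever alone on vocabulary-mismatch queries.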