michaelfeil committed on
Commit
f3e7f09
1 Parent(s): 74c9a41

Upload intfloat/e5-large-v2 ctranslate2 weights

Files changed (2)
  1. README.md +59 -6
  2. model.bin +2 -2
README.md CHANGED
@@ -4,6 +4,9 @@ tags:
 - int8
 - float16
 - mteb
+ - Sentence Transformers
+ - sentence-similarity
+ - sentence-transformers
 model-index:
 - name: e5-large-v2
   results:
@@ -2608,7 +2611,7 @@ Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on
 
 quantized version of [intfloat/e5-large-v2](https://huggingface.co/intfloat/e5-large-v2)
 ```bash
- pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.16.0
+ pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.17.1
 ```
 
 ```python
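This hunk only bumps the `ctranslate2` pin from 3.16.0 to 3.17.1 while keeping `hf-hub-ctranslate2>=2.12.0`. A minimal, illustrative way to confirm that an already-installed environment meets the new pins (not part of the README):

```python
# Illustrative check that the installed packages satisfy the pins from the updated README.
from importlib.metadata import version

print(version("ctranslate2"))         # the README now asks for >= 3.17.1
print(version("hf-hub-ctranslate2"))  # the README still asks for >= 2.12.0
```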
@@ -2648,16 +2651,20 @@ embeddings = model.encode(
 print(embeddings.shape, embeddings)
 scores = (embeddings @ embeddings.T) * 100
 
+ # Hint: you can also host this code via REST API and
+ # via github.com/michaelfeil/infinity
+
+
 ```
 
- Checkpoint compatible to [ctranslate2>=3.16.0](https://github.com/OpenNMT/CTranslate2)
+ Checkpoint compatible to [ctranslate2>=3.17.1](https://github.com/OpenNMT/CTranslate2)
 and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
 - `compute_type=int8_float16` for `device="cuda"`
 - `compute_type=int8` for `device="cpu"`
 
- Converted on 2023-06-19 using
+ Converted on 2023-10-13 using
 ```
- ct2-transformers-converter --model intfloat/e5-large-v2 --output_dir ~/tmp-ct2fast-e5-large-v2 --force --copy_files tokenizer.json modules.json README.md tokenizer_config.json sentence_bert_config.json vocab.txt special_tokens_map.json .gitattributes --trust_remote_code
+ LLama-2 -> removed <pad> token.
 ```
 
 # Licence and other remarks:
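The hunk above shows only the tail of the README's Python example (the `model.encode(...)` output and score matrix) plus the `compute_type` notes. For context, a minimal sketch of how such a ct2fast checkpoint is typically loaded; it assumes the `CT2SentenceTransformer` wrapper provided by hf-hub-ctranslate2 and a repository id of `michaelfeil/ct2fast-e5-large-v2`, neither of which is visible in this diff:

```python
# Sketch only: the top of the README's snippet is hidden by the diff above.
# Assumptions: hf-hub-ctranslate2's CT2SentenceTransformer wrapper, and that this
# checkpoint is published as "michaelfeil/ct2fast-e5-large-v2".
from hf_hub_ctranslate2 import CT2SentenceTransformer

model = CT2SentenceTransformer(
    "michaelfeil/ct2fast-e5-large-v2",
    compute_type="int8_float16",  # per the notes above: int8_float16 for device="cuda"
    device="cuda",                # on CPU, use compute_type="int8" with device="cpu"
)
embeddings = model.encode(
    [
        "query: how much protein should a female eat",
        "passage: protein requirements depend on age and activity level ...",
    ],
    batch_size=32,
    convert_to_numpy=True,
    normalize_embeddings=True,
)
print(embeddings.shape, embeddings)
scores = (embeddings @ embeddings.T) * 100  # the tail that does appear in the hunk
```

Because the embeddings are L2-normalized, the resulting score matrix is just scaled cosine similarity.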
@@ -2706,7 +2713,7 @@ batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=Tru
 outputs = model(**batch_dict)
 embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
 
- # (Optionally) normalize embeddings
+ # normalize embeddings
 embeddings = F.normalize(embeddings, p=2, dim=1)
 scores = (embeddings[:2] @ embeddings[2:].T) * 100
 print(scores.tolist())
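This hunk only rewords a comment inside the README's plain `transformers` example, so most of that example is not visible here. A self-contained sketch of the surrounding flow, following the upstream intfloat/e5-large-v2 usage pattern (the `average_pool` helper, imports, and example texts below are assumptions reproduced from that model card, not from this diff):

```python
# Context sketch: only the lines around "# normalize embeddings" appear in the hunk above.
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoModel, AutoTokenizer


def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
    # Mean-pool token states, ignoring padding positions.
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


input_texts = [
    "query: how much protein should a female eat",
    "query: summit define",
    "passage: protein needs vary with age, weight and activity level ...",
    "passage: a summit is the highest point of a mountain ...",
]

tokenizer = AutoTokenizer.from_pretrained("intfloat/e5-large-v2")
model = AutoModel.from_pretrained("intfloat/e5-large-v2")

batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict["attention_mask"])

# normalize embeddings (the comment this commit rewords)
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```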
@@ -2721,6 +2728,52 @@ Please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxi
 Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
 on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
 
+ ## Support for Sentence Transformers
+
+ Below is an example for usage with sentence_transformers.
+ ```python
+ from sentence_transformers import SentenceTransformer
+ model = SentenceTransformer('intfloat/e5-large-v2')
+ input_texts = [
+     'query: how much protein should a female eat',
+     'query: summit define',
+     "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
+     "passage: Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
+ ]
+ embeddings = model.encode(input_texts, normalize_embeddings=True)
+ ```
+
+ Package requirements
+
+ `pip install sentence_transformers~=2.2.2`
+
+ Contributors: [michaelfeil](https://huggingface.co/michaelfeil)
+
+ ## FAQ
+
+ **1. Do I need to add the prefix "query: " and "passage: " to input texts?**
+
+ Yes, this is how the model is trained, otherwise you will see a performance degradation.
+
+ Here are some rules of thumb:
+ - Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
+
+ - Use "query: " prefix for symmetric tasks such as semantic similarity, paraphrase retrieval.
+
+ - Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
+
+ **2. Why are my reproduced results slightly different from reported in the model card?**
+
+ Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
+
+ **3. Why does the cosine similarity scores distribute around 0.7 to 1.0?**
+
+ This is a known and expected behavior as we use a low temperature 0.01 for InfoNCE contrastive loss.
+
+ For text embedding tasks like text retrieval or semantic similarity,
+ what matters is the relative order of the scores instead of the absolute values,
+ so this should not be an issue.
+
 ## Citation
 
 If you find our paper or models helpful, please consider cite as follows:
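The FAQ added in this hunk says symmetric tasks (semantic similarity, paraphrase retrieval) should use the "query: " prefix on both sides. A small illustrative snippet, not part of the README, that applies that rule with the sentence_transformers setup shown above:

```python
# Illustrative only: both sentences get the "query: " prefix because the task is symmetric.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("intfloat/e5-large-v2")
pair = [
    "query: the cat sat on the mat",
    "query: a cat was sitting on a rug",
]
emb = model.encode(pair, normalize_embeddings=True)
cosine = float(emb[0] @ emb[1])  # normalized embeddings, so the dot product is cosine similarity
print(cosine)  # per FAQ 3, expect a value roughly in the 0.7-1.0 range
```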
@@ -2736,4 +2789,4 @@ If you find our paper or models helpful, please consider cite as follows:
 
 ## Limitations
 
- This model only works for English texts. Long texts will be truncated to at most 512 tokens.
+ This model only works for English texts. Long texts will be truncated to at most 512 tokens.
 
model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:6a2e972c674871a0be45c33e92a898e4b04256a882fdf6e6a72a2629facaea59
- size 1340583884
+ oid sha256:074f41ce6dbf6564f709b9bfa09f88894c3b05c85f0eeb515bea9c2b72a1c67f
+ size 670300108