michaelfeil committed on
Commit
486a3b2
1 Parent(s): 22df1af

Upload sentence-transformers/all-MiniLM-L6-v2 ctranslate fp16 weights

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -38,7 +38,7 @@ Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on
 
 quantized version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
 ```bash
-pip install hf-hub-ctranslate2>=2.11.0 ctranslate2>=3.16.0
+pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.16.0
 ```
 
 ```python
@@ -81,7 +81,7 @@ scores = (embeddings @ embeddings.T) * 100
 ```
 
 Checkpoint compatible to [ctranslate2>=3.16.0](https://github.com/OpenNMT/CTranslate2)
-and [hf-hub-ctranslate2>=2.11.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
+and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
 - `compute_type=int8_float16` for `device="cuda"`
 - `compute_type=int8` for `device="cpu"`
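The diff above only bumps the hf-hub-ctranslate2 version, but the README it touches rests on int8 weight quantization, which it credits with a 2x-4x memory reduction. Below is a minimal NumPy sketch of symmetric per-tensor int8 quantization to illustrate where the 4x saving over fp32 comes from. This is an illustration only, not necessarily CTranslate2's exact scheme (real engines typically quantize per row or per channel and fuse dequantization into the int8 matmul kernels); the matrix shape is a stand-in chosen to match all-MiniLM-L6-v2's hidden size.

```python
import numpy as np

# Hypothetical fp32 weight matrix standing in for one model layer
# (384 matches all-MiniLM-L6-v2's hidden size).
rng = np.random.default_rng(0)
w_fp32 = rng.standard_normal((384, 384)).astype(np.float32)

# Symmetric per-tensor quantization: map the largest magnitude to 127.
scale = float(np.abs(w_fp32).max()) / 127.0
w_int8 = np.clip(np.round(w_fp32 / scale), -127, 127).astype(np.int8)

# Dequantize to approximate the original weights (inference engines
# usually fuse this step into the int8 matmul instead of materializing it).
w_deq = w_int8.astype(np.float32) * scale

ratio = w_fp32.nbytes / w_int8.nbytes          # int8 is 1 byte vs 4 for fp32
max_err = float(np.abs(w_fp32 - w_deq).max())  # rounding error, about scale/2
print(ratio)  # 4.0
```

The `compute_type` options in the diff pick the quantized kernels at load time: `int8_float16` keeps activations in fp16 on CUDA, while plain `int8` targets CPU.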