michaelfeil committed
Commit 7f77418 • 1 Parent(s): 1e810ff
Upload intfloat/e5-large ctranslate fp16 weights
README.md CHANGED

@@ -2608,7 +2608,7 @@ Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on
 
 quantized version of [intfloat/e5-large](https://huggingface.co/intfloat/e5-large)
 ```bash
-pip install hf-hub-ctranslate2>=2.
+pip install hf-hub-ctranslate2>=2.12.0 ctranslate2>=3.16.0
 ```
 
 ```python

@@ -2651,7 +2651,7 @@ scores = (embeddings @ embeddings.T) * 100
 ```
 
 Checkpoint compatible to [ctranslate2>=3.16.0](https://github.com/OpenNMT/CTranslate2)
-and [hf-hub-ctranslate2>=2.
+and [hf-hub-ctranslate2>=2.12.0](https://github.com/michaelfeil/hf-hub-ctranslate2)
 - `compute_type=int8_float16` for `device="cuda"`
 - `compute_type=int8` for `device="cpu"`
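The two `compute_type` bullets in the second hunk amount to a device-to-compute-type mapping. A trivial illustrative helper (the function name is made up for this sketch and is not part of ctranslate2 or hf-hub-ctranslate2):

```python
def recommended_compute_type(device: str) -> str:
    """Return the quantization mode the README recommends for a device.

    Mapping taken directly from the model card's bullets; any other
    device string raises KeyError.
    """
    return {"cuda": "int8_float16", "cpu": "int8"}[device]
```

The value would then be passed as the `compute_type` argument when loading the checkpoint.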
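The `scores = (embeddings @ embeddings.T) * 100` line quoted in the second hunk header assumes the embedding rows are L2-normalized, so the matrix product is pairwise cosine similarity scaled by 100. A minimal numpy sketch of that scoring step, with made-up vectors standing in for the model's output:

```python
import numpy as np

# Hypothetical stand-in for encoder output: two embedding vectors.
embeddings = np.array([[3.0, 4.0], [4.0, 3.0]])

# L2-normalize each row so dot products become cosine similarities.
embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

# Pairwise cosine similarity scaled by 100, as in the README snippet;
# the diagonal (self-similarity) is exactly 100.
scores = (embeddings @ embeddings.T) * 100
```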