---
license: bigscience-bloom-rail-1.0
---
# Bloom CTranslate2 models
This is a collection of some of the [Bigscience Bloom](https://huggingface.co/bigscience/bloom) models exported to the
[CTranslate2](https://github.com/OpenNMT/CTranslate2) model format, which allows these models to be loaded and run
efficiently on CPU or GPU.
## Models
The models have been converted to *float16* and can be loaded with any other quantization method (e.g. *int8*), as sketched in the snippet after the table.
| Model name | Description |
| --- | --- |
| [bloom-560m](https://huggingface.co/bigscience/bloom-560m) | 560M parameter model pretrained on ROOTS |
| [bloom-3b](https://huggingface.co/bigscience/bloom-3b) | 3B parameter model pretrained on ROOTS |
| [bloomz-7b1](https://huggingface.co/bigscience/bloomz-7b1) | 7.1B parameter model finetuned on xP3 |
| [bloomz-7b1-mt](https://huggingface.co/bigscience/bloomz-7b1-mt) | 7.1B parameter model finetuned on xP3mt |
| [mt0-xxl-mt](https://huggingface.co/bigscience/mt0-xxl-mt) | 13B parameter model finetuned on xP3 |
See [directories](https://huggingface.co/jordimas/bloom-ctranslate2/tree/main) for the different models available.
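Because the weights are stored as *float16*, CTranslate2 can requantize them at load time by passing a different `compute_type` to `ctranslate2.Generator`. A minimal sketch (the local path is illustrative; `device` and `compute_type` are standard `ctranslate2.Generator` options):
```python
import ctranslate2

# Illustrative local path to one of the converted models after download.
model_path = "bloom-ctranslate2/bloom-560m"

# Keep the stored float16 weights and run on GPU...
generator_fp16 = ctranslate2.Generator(model_path, device="cuda", compute_type="float16")

# ...or quantize to int8 at load time, e.g. for CPU inference.
generator_int8 = ctranslate2.Generator(model_path, device="cpu", compute_type="int8")
```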
## Simple code to use the models
Install dependencies:
```shell
pip install huggingface_hub ctranslate2 transformers torch
```
Usage:
```python
import huggingface_hub
import ctranslate2
import transformers

model_name = "bloomz-7b1"
prompt = "Hello, I am Joan and I am from Barcelona and"

# Download only the files of the selected model from the repository.
# Note: this glob also matches longer names (e.g. "bloomz-7b1-mt"),
# so more files than strictly needed may be downloaded.
repo_id = "jordimas/bloom-ctranslate2"
snapshot_folder = huggingface_hub.snapshot_download(repo_id=repo_id, allow_patterns=f"*{model_name}*")
print(f"folder: {snapshot_folder}")

model = f"{snapshot_folder}/{model_name}"

# Load the float16 weights with int8 quantization applied at load time.
generator = ctranslate2.Generator(model, compute_type="int8")
tokenizer = transformers.AutoTokenizer.from_pretrained(model)

# CTranslate2 generators consume token strings, not token ids.
start_tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
results = generator.generate_batch([start_tokens], max_length=90)

result = tokenizer.decode(results[0].sequences_ids[0])
print(f"Result: {result}")
```
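`generate_batch` also accepts decoding options beyond `max_length`. A short sketch with sampling enabled (the parameter names come from the CTranslate2 API; the values are illustrative, not tuned for these models):
```python
# Sample instead of greedy decoding; reuses generator, start_tokens
# and tokenizer from the example above.
results = generator.generate_batch(
    [start_tokens],
    max_length=90,
    sampling_topk=10,          # sample from the 10 most likely tokens
    sampling_temperature=0.7,  # soften the next-token distribution
)
print(tokenizer.decode(results[0].sequences_ids[0]))
```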