borzunov committed on
Commit
ae3a826
1 Parent(s): edbc8d4

Update README.md (#2)


- Update README.md (17c2282a5ed2da6e58b769d130156382b662974e)

Files changed (1)
  1. README.md +31 -3
README.md CHANGED
@@ -1,4 +1,32 @@
- A post-processed version of [bigscience/bloom](https://huggingface.co/bigscience/bloom) for volunteer computing.
-
- You can use [Petals](https://github.com/bigscience-workshop/petals) to inference and fine-tunine the model in colab.
- More details in [petals.ml](https://petals.ml/)
+ # BLOOM, a version for Petals
+
+ This model is a version of [bigscience/bloom](https://huggingface.co/bigscience/bloom)
+ post-processed to be run at home using the [Petals](https://github.com/bigscience-workshop/petals#readme) swarm.
+
+ Please check out:
+
+ - The [original model card](https://huggingface.co/bigscience/bloom)
+ to learn about the model's capabilities, specifications, and terms of use.
+ - The [Petals repository](https://github.com/bigscience-workshop/petals#readme)
+ to learn how to install Petals and run this model over the Petals swarm.
+
+ We provide minimal code examples below.
+
+ ## Using the model
+
+ ```python
+ from transformers import BloomTokenizerFast
+ from petals import DistributedBloomForCausalLM
+
+ # The tokenizer runs locally; the model connects to the public swarm
+ tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom-petals")
+ model = DistributedBloomForCausalLM.from_pretrained("bigscience/bloom-petals")
+ # Embeddings & prompts are on your device, BLOOM blocks are distributed across the Internet
+
+ inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
+ outputs = model.generate(inputs, max_new_tokens=5)
+ print(tokenizer.decode(outputs[0]))  # A cat sat on a mat...
+ ```
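+
+ If you generate text step by step (e.g., for a chatbot), you can keep an inference session open so that attention caches stay on the swarm between `generate()` calls instead of re-sending the whole prefix each time. This is a minimal sketch, assuming the `model.inference_session()` context manager and the `session=` argument of `generate()` shown in the [Petals repository](https://github.com/bigscience-workshop/petals#readme); `tokenizer` and `model` are the objects created above:
+
+ ```python
+ # Reuse one inference session across several generate() calls
+ # (inference_session() and session= are assumed from the Petals repo)
+ with model.inference_session(max_length=512) as session:
+     for prompt in ["A cat sat on", " a mat and"]:
+         inputs = tokenizer(prompt, return_tensors="pt")["input_ids"]
+         outputs = model.generate(inputs, max_new_tokens=5, session=session)
+         print(tokenizer.decode(outputs[0]))
+ ```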
+
+ ## Serving the model blocks
+
+ ```bash
+ python -m petals.cli.run_server bigscience/bloom-petals
+ ```
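+
+ By default, the server chooses how many of BLOOM's 70 transformer blocks to host based on the available GPU memory. Below is a minimal sketch of capping this manually, assuming the `--num_blocks` option documented in the Petals repository (16 is an arbitrary example value):
+
+ ```bash
+ # Host only 16 transformer blocks (--num_blocks assumed from the Petals docs)
+ python -m petals.cli.run_server bigscience/bloom-petals --num_blocks 16
+ ```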