jeffra committed
Commit
648a475
1 Parent(s): 348b3ad

Update README.md

Files changed (1): README.md +7 -4
README.md CHANGED
@@ -2,8 +2,11 @@
 license: bigscience-bloom-rail-1.0
 ---
 
-This is a custom version of the original [BLOOM weights](https://huggingface.co/bigscience/bloom) to make it fast to use with the [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/) engine which uses Tensor Parallelism. In this repo the tensors are split into 8 shards to target 8 GPUs.
+This is a copy of the original [BLOOM weights](https://huggingface.co/bigscience/bloom) that is more efficient to use with the [DeepSpeed-MII](https://github.com/microsoft/deepspeed-mii) and [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/) engines. In this repo the original tensors are split into 8 shards to target 8 GPUs, allowing the user to run the model with DeepSpeed-Inference Tensor Parallelism.
 
-The full BLOOM documentation is [here](https://huggingface.co/bigscience/bloom).
-
-To use the weights in repo, you can adapt to your needs the scripts found [here](https://github.com/bigscience-workshop/Megatron-DeepSpeed/tree/main/scripts/inference) (XXX: they are going to migrate soon to HF Transformers code base, so will need to update the link once moved).
+For specific details about the BLOOM model itself, please see the [original BLOOM model card](https://huggingface.co/bigscience/bloom).
+
+For examples of using this repo, please see the following:
+* https://github.com/huggingface/transformers-bloom-inference
+* https://github.com/microsoft/DeepSpeed-MII
+
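The tensor-parallel usage the updated README describes can be sketched roughly as follows. This is a minimal, hedged sketch, not this repo's official example: it assumes `deepspeed` and `transformers` are installed, 8 GPUs are available, and it is launched with the `deepspeed` launcher; the model id below is a placeholder for this repo's actual id.

```python
# Minimal sketch (untested here; requires 8 GPUs): load BLOOM and wrap it
# with DeepSpeed-Inference tensor parallelism, matching the 8-way sharding
# described in the README. Launch with: deepspeed --num_gpus 8 this_script.py
import torch
import deepspeed
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigscience/bloom"  # placeholder; substitute this repo's model id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

# Shard the model across 8 GPUs with DeepSpeed-Inference kernels.
model = deepspeed.init_inference(
    model,
    mp_size=8,                      # tensor-parallel degree, one rank per shard
    dtype=torch.float16,
    replace_with_kernel_inject=True,
)

inputs = tokenizer("DeepSpeed is", return_tensors="pt").to(torch.cuda.current_device())
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The linked transformers-bloom-inference and DeepSpeed-MII repos contain the maintained end-to-end scripts; the snippet above only illustrates the shape of the API.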