This is a copy of the original [OPT weights](https://huggingface.co/facebook/opt-30b), repackaged for more efficient use with [DeepSpeed-MII](https://github.com/microsoft/deepspeed-mii) and [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/). In this repo the original tensors are split into 2 shards to target 2 GPUs, which allows the model to be run with DeepSpeed-Inference tensor parallelism.

For details about the OPT model itself, please see the [original OPT model card](https://huggingface.co/facebook/opt-30b).

For examples of using this repo, please see the following (a minimal sketch is also shown below):

* https://github.com/huggingface/transformers-bloom-inference
* https://github.com/microsoft/DeepSpeed-MII
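As an illustration only, the sketch below shows the general DeepSpeed-Inference tensor-parallel setup this repo targets: the model is wrapped with `deepspeed.init_inference` and the script is launched on 2 GPUs so the parallel degree matches the 2-way shard split. It loads the standard `facebook/opt-30b` checkpoint and lets DeepSpeed partition it at init time; the exact flow for consuming the pre-sharded weights in this repo is shown in the example repositories linked above.

```python
# Minimal sketch (not the canonical loading path for this repo):
# launch with `deepspeed --num_gpus 2 run_opt.py`
import torch
import deepspeed
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-30b"  # see the linked repos for using this pre-sharded copy
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# Wrap the model for inference; mp_size=2 matches the 2-way tensor-parallel split.
model = deepspeed.init_inference(
    model,
    mp_size=2,
    dtype=torch.float16,
    replace_with_kernel_inject=True,
)

inputs = tokenizer("DeepSpeed is", return_tensors="pt").to(torch.cuda.current_device())
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```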