## About
This is an 8-bit quantized version of Facebook's mBART model (`facebook/mbart-large-cc25`).
According to the abstract, mBART is a sequence-to-sequence denoising auto-encoder pretrained on large-scale monolingual corpora in many languages using the BART objective. It is one of the first methods for pretraining a complete sequence-to-sequence model by denoising full texts in multiple languages, whereas previous approaches focused only on the encoder, the decoder, or reconstructing parts of the text.
The authors' original code can be found [here](https://github.com/facebookresearch/fairseq/tree/main/examples/mbart).
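For reference, a checkpoint like this can be produced by loading the original `facebook/mbart-large-cc25` weights in 8-bit via bitsandbytes and saving the result. Below is a minimal sketch; it is one plausible recipe, not necessarily how this repository was actually created, and saving 8-bit weights requires a recent transformers/bitsandbytes version:

```python
from transformers import AutoModelForSeq2SeqLM, BitsAndBytesConfig

# Sketch: quantize the original mBART checkpoint to 8-bit on load.
# Assumes bitsandbytes and accelerate are installed; this is an
# illustration, not the confirmed recipe for this repository.
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForSeq2SeqLM.from_pretrained(
    "facebook/mbart-large-cc25",
    quantization_config=quantization_config,
    device_map="auto",
)
model.save_pretrained("mbart-large-cc25-8bit")  # persist the 8-bit weights
```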
## Usage info
Install the required packages (accelerate is needed for `device_map='auto'`):
```bash
pip install -U bitsandbytes sentencepiece accelerate
```
then load the model from the 🤗 Transformers library:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("Ransaka/mbart-large-cc25-8bit")
model = AutoModelForSeq2SeqLM.from_pretrained("Ransaka/mbart-large-cc25-8bit", device_map='auto')
# you'll see bitsandbytes output like this if the import succeeded
# ===================================BUG REPORT===================================
# Welcome to bitsandbytes. For bug reports, please run
# python -m bitsandbytes
# and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
# ================================================================================
# bin /opt/conda/lib/python3.7/site-packages/bitsandbytes/libbitsandbytes_cuda113_nocublaslt.so
# CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
# CUDA SETUP: Highest compute capability among GPUs detected: 6.0
# CUDA SETUP: Detected CUDA version 113
# CUDA SETUP: Loading binary /opt/conda/lib/python3.7/site-packages/bitsandbytes/libbitsandbytes_cuda113_nocublaslt.so...
# sample input text to summarize
text = """Right now, major tech firms are clamouring to replicate the runaway success of ChatGPT,
the generative AI chatbot developed by OpenAI using its GPT-3 large language model.
Much like potential game-changers of the past, such as cloud-based Software as a Service
(SaaS) platforms or blockchain technology (emphasis on potential), established companies
and start-ups alike are going public with LLMs and ChatGPT alternatives in fear of being left behind.
"""
# create a text2text-generation pipeline from the quantized model
pipe = pipeline('text2text-generation', model=model, tokenizer=tokenizer)
pipe(text)
#[{'generated_text': 'theore, major tech are clamouring to replicate the generative AI chatbot developed by OpenAI using its AI'}]
print("Model memory usage: {:.2f} MB".format(pipe.model.get_memory_footprint()/1e6))
# 'Model memory usage: 1893.99 MB'
```
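If you prefer not to use a pipeline, you can tokenize and call `generate()` directly. A minimal sketch using the same model and tokenizer as above (the `max_new_tokens` value is an arbitrary choice):

```python
# Equivalent generation without a pipeline: tokenize, generate, decode.
inputs = tokenizer(text, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```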