How to run this model locally?

#7
by jeasinema

Sorry for this (probably) silly question, but I still don't quite know how to run this model on my local GPU workstation. I tried the commands in "Use in Transformers" and here is what I got:

ImportError: cannot import name 'eBart' from 'transformers' (/home/user/anaconda3/envs/dellemini/lib/python3.9/site-packages/transformers

I'm using the latest release of the transformers package.

@julien-c can you help with this? Thank you so much!

Just noticed that you can find some inference code in their GitHub repo: https://github.com/borisdayma/dalle-mini/blob/main/tools/inference/inference_pipeline.ipynb
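To run that notebook locally you also need its dependencies. If I remember right, its setup cell installs roughly the following (plus a JAX build matching your CUDA version), so double-check against the first cell of the notebook:

pip install dalle-mini
pip install git+https://github.com/patil-suraj/vqgan-jax.git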

@jeasinema thanks for the hint. That Jupyter notebook imports "DalleBart" from dalle_mini, not "eBart" from transformers:

from dalle_mini import DalleBart

# DalleBart lives in the dalle_mini package, not in transformers
model = DalleBart.from_pretrained("dalle-mini/dalle-mega")

So the auto-generated snippet imports "eBart" when it needs "DalleBart". I went looking in the docs and found no mention of "eBart" anywhere in the transformers documentation: https://huggingface.co/docs/transformers/main/en/model_doc/bart
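For anyone else landing here: loading the model alone doesn't produce images; you also need the VQGAN decoder. Condensed to a single device, the pipeline from that notebook looks roughly like this (repo names, sampling settings, and the condition_scale value are copied from the notebook, which also pins specific revisions omitted here; treat this as a sketch, not official usage):

import jax
import jax.numpy as jnp
from dalle_mini import DalleBart, DalleBartProcessor
from vqgan_jax.modeling_flax_vqgan import VQModel

# Load DALL-E Mega plus the VQGAN that turns image tokens back into pixels
model, params = DalleBart.from_pretrained(
    "dalle-mini/dalle-mega", dtype=jnp.float16, _do_init=False
)
vqgan, vqgan_params = VQModel.from_pretrained(
    "dalle-mini/vqgan_imagenet_f16_16384", _do_init=False
)
processor = DalleBartProcessor.from_pretrained("dalle-mini/dalle-mega")

tokenized = processor(["a painting of a fox in the snow"])
encoded = model.generate(
    **tokenized,
    prng_key=jax.random.PRNGKey(0),
    params=params,
    top_k=None,
    top_p=None,
    temperature=None,
    condition_scale=10.0,  # guidance scale used in the notebook
)
# Drop the BOS token, then decode the remaining image tokens with the VQGAN
decoded = vqgan.decode_code(encoded.sequences[..., 1:], params=vqgan_params)
images = decoded.clip(0.0, 1.0).reshape((-1, 256, 256, 3))  # floats in [0, 1]

The VQGAN step is needed because DalleBart only generates discrete image tokens; decode_code maps them back to 256x256 RGB arrays.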

ping @valhalla, who's working on this
