After cloning space every generated image is just an empty black square

#2
by IDontKnowWhatToNameMyself - opened

A lot of the time when using the hosted inference API, it would take a long time and then just come out as an empty image. When I tried to duplicate the Space, it started consistently giving me nothing but empty images, and taking no time at all.

Someone correct me if I'm wrong here, but aren't black image outputs due to NSFW content being detected?
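For context, that is how the Stable Diffusion safety checker in diffusers behaves: any image it flags is replaced with an all-black image of the same size, which is why "empty" outputs show up as black squares. An illustrative pure-Python sketch of that behavior (not the actual diffusers code; the function name and list-of-tuples image format are made up for the example):

```python
def apply_safety_checker(image, flagged):
    """Return a black image of the same dimensions when `flagged` is True,
    otherwise return the image unchanged. Mimics (loosely) how the Stable
    Diffusion safety checker blanks out flagged outputs."""
    if flagged:
        height = len(image)
        width = len(image[0]) if height else 0
        return [[(0, 0, 0)] * width for _ in range(height)]
    return image

# A 3x4 dummy RGB image: flagged -> all black, not flagged -> untouched.
img = [[(255, 128, 0)] * 4 for _ in range(3)]
assert apply_safety_checker(img, flagged=True) == [[(0, 0, 0)] * 4 for _ in range(3)]
assert apply_safety_checker(img, flagged=False) == img
```

False positives on harmless prompts do happen, which would match the behavior described above.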

I wouldn't know, but the inputs weren't NSFW at all

Even the prompt "my main in super smash bros" came out black, which is weird because when I just typed "super smash bros" it generated the image and understood what it was

I got this error: "Runtime error — Space not ready. Reason: Error, exitCode: 1, message: None" and this: "ImportError: cannot import name 'CLIPVisionModelWithProjection' from 'transformers' (/home/user/.local/lib/python3.8/site-packages/transformers/__init__.py)". It seems like the duplicate Space function is broken with this model :/
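That ImportError usually means the duplicated Space is installing a transformers release that predates `CLIPVisionModelWithProjection`. A likely fix is to bump the pin in the Space's requirements.txt (the exact version threshold below is an assumption; verify it against the transformers release notes):

```text
# requirements.txt — assumption: >=4.26.0 is new enough to provide
# transformers.CLIPVisionModelWithProjection; check the transformers
# changelog before pinning
transformers>=4.26.0
```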

i keep getting ("LayerNormKernelImpl" not implemented for 'Half')

Same here; I don't see where to set --precision full --no-half on Hugging Face Spaces either.

*edit:

I got it processing: I had to replace all instances of .float16 with .float32 in app.py. It works, though it is impressively slow; expect to wait about 10 minutes per 512x512 image at 25 steps from what I am seeing.
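The manual edit above can be sketched as a one-off patch script. This is a minimal sketch under a couple of assumptions: the Space's entry point is app.py (the Gradio Spaces convention), and the only half-precision references are literal `torch.float16` tokens (variants like `revision="fp16"` would need separate handling):

```python
from pathlib import Path


def use_full_precision(path):
    """Replace every torch.float16 reference with torch.float32 in `path`.

    Full precision avoids the CPU-only '"LayerNormKernelImpl" not
    implemented for \'Half\'' error, at the cost of much slower inference.
    """
    source = Path(path).read_text()
    Path(path).write_text(source.replace("torch.float16", "torch.float32"))
```

Usage would be `use_full_precision("app.py")` before (re)starting the Space.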
