OSError: No file named diffusion_pytorch_model.bin

#5
by marcii98 - opened

Hi, when trying to use the RealVisXL_V4.0_inpainting model, I get an error message:

"OSError: Error no file named diffusion_pytorch_model.bin found in directory C:\Users\User1.cache\huggingface\hub\models--OzzyGT--RealVisXL_V4.0_inpainting\snapshots\5f1d01afee449cf27fe062dba7e7c4497245a77a\vae."

I looked into my .cache directory, and there is actually no file besides config.json. So I manually downloaded the "diffusion_pytorch_model.fp16.safetensors" file for vae, unet, text_encoder and text_encoder_2 (found under Files and versions here on Hugging Face). However, even with these files in my local directory, I still get the same error. Has anyone faced this issue too? Would be awesome to get help here :D

My code:

import random

import torch
from PIL import Image
from diffusers import DiffusionPipeline
from torchvision import transforms


class InstructPix2PixTransform(BaseImageTransform):
    def __init__(self, config: Config_Summary):
        super().__init__(config)
        self.augm_probability = config.augm_probability
        self.prompt = config.prompt
        self.pipe = DiffusionPipeline.from_pretrained("OzzyGT/RealVisXL_V4.0_inpainting", torch_dtype=torch.float32)
        self.pipe.to("cuda" if torch.cuda.is_available() else "cpu")

    def process_image(self, img: Image.Image) -> Image.Image:
        # Run the pipeline only on a random subset of images
        if random.uniform(0, 1) < self.augm_probability:
            augmented_img = self.pipe(prompt=self.prompt, image=img).images[0]
            return augmented_img
        return img

    def get_augm_steps(self):
        _augm_steps = super().get_transformation_steps()
        _augm_steps.insert(1, transforms.Lambda(self.process_image))
        return _augm_steps

    def get_transform_operations(self):
        augmentation_data = transforms.Compose(self.get_augm_steps())
        return augmentation_data
Owner

Hi, that's because you're trying to load the fp32 version, and this repo only has the fp16 version via the "variant" option. You should load this checkpoint like this:

self.pipe = DiffusionPipeline.from_pretrained("OzzyGT/RealVisXL_V4.0_inpainting", variant="fp16", torch_dtype=torch.float16)

Thanks a lot!

marcii98 changed discussion status to closed
marcii98 changed discussion status to open

Now I get the following error:

Starting epoch 1/5
<class 'PIL.Image.Image'>
Traceback (most recent call last):
[...]
File "C:\Users\User1\PycharmProjects\data_augmentation\app\preprocessing\RealVisXL4_operation.py", line 27, in process_image
augmented_img = self.pipe(prompt=self.prompt, image=img).images[0]
File "C:\Users\User1\AppData\Local\anaconda3\envs\env1\lib\site-packages\torch\utils_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "C:\Users\User1\AppData\Local\anaconda3\envs\env1\lib\site-packages\diffusers\pipelines\stable_diffusion_xl\pipeline_stable_diffusion_xl_inpaint.py", line 1448, in call
mask = self.mask_processor.preprocess(
File "C:\Users\User1\AppData\Local\anaconda3\envs\env1\lib\site-packages\diffusers\image_processor.py", line 658, in preprocess
raise ValueError(
ValueError: Input is in incorrect format. Currently, we only support <class 'PIL.Image.Image'>, <class 'numpy.ndarray'>, <class 'torch.Tensor'>

As can be seen, I print out the type of the image, and it is <class 'PIL.Image.Image'>. Does anyone have an idea why that error occurs?

This is a model repo that I just made so people could load the inpainting model, so you probably won't get an answer here. If you're using diffusers, you can open a discussion there with a code example (which is needed to understand where the error is happening in your code). If you're using another training library, you should probably post there.
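One hint from the traceback, though: the error is raised while preprocessing the mask (self.mask_processor.preprocess), not the image. An inpainting pipeline also expects a mask_image argument, and your call only passes image, so the mask is probably None, which is exactly the unsupported input the ValueError complains about. A sketch of what the call would look like, assuming a hypothetical mask variable (a PIL image the same size as img, white where the model should repaint):

# mask is hypothetical here: a PIL image, same size as img,
# white in the region the model should repaint
augmented_img = self.pipe(
    prompt=self.prompt,
    image=img,
    mask_image=mask,
).images[0]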

alright, thank you.

marcii98 changed discussion status to closed
