Hyper-FLUX.1-dev-8steps-lora.safetensors
#37 · opened by LHJ0
Hello, it seems that this flux-fp8 cannot work with Hyper-FLUX.1-dev-8steps-lora.safetensors in diffusers.
This is my code:
```python
import os
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel
from transformers import T5EncoderModel
from optimum.quanto import quantize, freeze, qfloat8
from safetensors.torch import load_file

# Load the transformer and quantize it to fp8
self.transformer = FluxTransformer2DModel.from_single_file(os.path.join(self.model_root, self.config["transformer_path"]), torch_dtype=torch.bfloat16).to(self.device)
quantize(self.transformer, weights=qfloat8)
freeze(self.transformer)
# Load the T5 text encoder and quantize it to fp8
self.text_encoder_2 = T5EncoderModel.from_pretrained(os.path.join(self.model_root, self.config["text_encoder_2_repo"]), torch_dtype=torch.bfloat16).to(self.device)
quantize(self.text_encoder_2, weights=qfloat8)
freeze(self.text_encoder_2)
# Build the pipeline without those modules, then attach the quantized versions
self.pipe = FluxPipeline.from_pretrained(os.path.join(self.model_root, self.config["flux_repo"]), transformer=None, text_encoder_2=None, torch_dtype=torch.bfloat16).to(self.device)
self.pipe.transformer = self.transformer
self.pipe.text_encoder_2 = self.text_encoder_2
# Load and fuse the Hyper-FLUX 8-steps LoRA -- this is the step that fails
self.pipe.load_lora_weights(load_file(os.path.join(self.model_root, self.config["8steps_lora"]), device=self.device), adapter_name="8steps")
self.pipe.fuse_lora(lora_scale=0.125)
```
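One possible workaround, sketched below under the assumption that the failure comes from diffusers trying to patch LoRA weights into already-quantized `QLinear` layers: load and fuse the LoRA while the transformer is still bf16, then quantize afterwards. Fusing merges the LoRA deltas into the base weights, so nothing needs to be patched after quantization. The names `transformer_path`, `flux_repo`, and `lora_path` are placeholders for the config values used above; this is an untested sketch, not a confirmed fix.

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel
from optimum.quanto import quantize, freeze, qfloat8

# Load the transformer in full bf16 precision first
transformer = FluxTransformer2DModel.from_single_file(transformer_path, torch_dtype=torch.bfloat16)
pipe = FluxPipeline.from_pretrained(flux_repo, transformer=transformer, torch_dtype=torch.bfloat16)

# Fuse the LoRA into the bf16 weights BEFORE quantizing
pipe.load_lora_weights(lora_path, adapter_name="8steps")
pipe.fuse_lora(lora_scale=0.125)
pipe.unload_lora_weights()  # drop the now-redundant LoRA modules

# Only now quantize the (LoRA-fused) transformer to fp8
quantize(pipe.transformer, weights=qfloat8)
freeze(pipe.transformer)
pipe.to("cuda")
```

The same ordering would apply to any other LoRA you want to use with a quantized transformer; the trade-off is that the fused LoRA can no longer be toggled or rescaled at runtime.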