Diffusers documentation
Hybrid Inference API Reference
Remote Decode
diffusers.utils.remote_decode
< source >( endpoint: str, tensor: torch.Tensor, processor: Optional[Union[VaeImageProcessor, VideoProcessor]] = None, do_scaling: bool = True, scaling_factor: Optional[float] = None, shift_factor: Optional[float] = None, output_type: Literal["mp4", "pil", "pt"] = "pil", return_type: Literal["mp4", "pil", "pt"] = "pil", image_format: Literal["png", "jpg"] = "jpg", partial_postprocess: bool = False, input_tensor_type: Literal["binary"] = "binary", output_tensor_type: Literal["binary"] = "binary", height: Optional[int] = None, width: Optional[int] = None )
Parameters
- endpoint (`str`) — Endpoint for Remote Decode.
- tensor (`torch.Tensor`) — Tensor to be decoded.
- processor (`VaeImageProcessor` or `VideoProcessor`, optional) — Used with `return_type="pt"`, and with `return_type="pil"` for video models.
- do_scaling (`bool`, default `True`, optional) — **Deprecated**; pass `scaling_factor`/`shift_factor` instead. Until the option is removed, set `do_scaling=None` or `do_scaling=False` to disable scaling. When `True`, scaling (e.g. `latents / self.vae.config.scaling_factor`) is applied remotely. If `False`, the input must be passed in with scaling already applied.
- scaling_factor (`float`, optional) — Scaling is applied when passed, e.g. `latents / self.vae.config.scaling_factor`.
  - SD v1: 0.18215
  - SD XL: 0.13025
  - Flux: 0.3611

  If `None`, the input must be passed in with scaling already applied.
- shift_factor (`float`, optional) — Shift is applied when passed, e.g. `latents + self.vae.config.shift_factor`.
  - Flux: 0.1159

  If `None`, the input must be passed in with the shift already applied.
- output_type (`"mp4"`, `"pil"`, or `"pt"`, default `"pil"`) — Endpoint output type. Subject to change; report feedback on the preferred type.
  - `"mp4"`: Supported by video models. Endpoint returns `bytes` of video.
  - `"pil"`: Supported by image and video models. Image models: endpoint returns `bytes` of an image in `image_format`. Video models: endpoint returns `torch.Tensor` with partial `postprocessing` applied. Requires `processor` as a flag (any non-`None` value will work).
  - `"pt"`: Supported by image and video models. Endpoint returns `torch.Tensor`. With `partial_postprocess=True` the tensor is a postprocessed `uint8` image tensor.

  Recommendations:
  - `"pt"` with `partial_postprocess=True` is the smallest transfer for full quality.
  - `"pt"` with `partial_postprocess=False` is the most compatible with third-party code.
  - `"pil"` with `image_format="jpg"` is the smallest transfer overall.
- return_type (`"mp4"`, `"pil"`, or `"pt"`, default `"pil"`) — Function return type.
  - `"mp4"`: Function returns `bytes` of video.
  - `"pil"`: Function returns `PIL.Image.Image`. With `output_type="pil"` no further processing is applied. With `output_type="pt"` a `PIL.Image.Image` is created: with `partial_postprocess=False`, `processor` is required; with `partial_postprocess=True`, `processor` is **not** required.
  - `"pt"`: Function returns `torch.Tensor`. `processor` is **not** required. With `partial_postprocess=False` the tensor is `float16` or `bfloat16`, without denormalization; with `partial_postprocess=True` the tensor is `uint8`, denormalized.
- image_format (`"png"` or `"jpg"`, default `"jpg"`) — Used with `output_type="pil"`. Endpoint returns `jpg` or `png`.
- partial_postprocess (`bool`, default `False`) — Used with `output_type="pt"`. With `partial_postprocess=False` the tensor is `float16` or `bfloat16`, without denormalization; with `partial_postprocess=True` the tensor is `uint8`, denormalized.
- input_tensor_type (`"binary"`, default `"binary"`) — Tensor transfer type.
- output_tensor_type (`"binary"`, default `"binary"`) — Tensor transfer type.
- height (`int`, optional) — Required for `"packed"` latents.
- width (`int`, optional) — Required for `"packed"` latents.
Hugging Face Hybrid Inference that allows running VAE decode remotely.
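A minimal sketch of what the scaling parameters mean on the decode side. The endpoint URL below is a hypothetical placeholder, and the `remote_decode` calls are shown as comments; the local arithmetic demonstrates that passing `scaling_factor=None` with pre-scaled latents is equivalent to letting the endpoint divide by `scaling_factor` remotely.

```python
import torch

# Hypothetical placeholder -- substitute a real Hybrid Inference VAE-decode endpoint.
ENDPOINT = "https://<your-vae-decode-endpoint>"

SD_V1_SCALING = 0.18215  # scaling_factor for SD v1 (see the parameter list above)

latents = torch.randn(1, 4, 64, 64)

# Option 1: let the endpoint apply scaling remotely.
# image = remote_decode(endpoint=ENDPOINT, tensor=latents,
#                       scaling_factor=SD_V1_SCALING,
#                       output_type="pt", partial_postprocess=True)

# Option 2: with scaling_factor=None, the input must already be scaled.
prescaled = latents / SD_V1_SCALING
# image = remote_decode(endpoint=ENDPOINT, tensor=prescaled,
#                       scaling_factor=None,
#                       output_type="pt", partial_postprocess=True)
```

Both options send the same effective latents to the VAE; option 2 is useful when the latents coming out of a pipeline step are already in the scaled space.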
Remote Encode
diffusers.utils.remote_utils.remote_encode
< source >( endpoint: str, image: Union[torch.Tensor, PIL.Image.Image], scaling_factor: Optional[float] = None, shift_factor: Optional[float] = None )
Parameters
- endpoint (`str`) — Endpoint for Remote Encode.
- image (`torch.Tensor` or `PIL.Image.Image`) — Image to be encoded.
- scaling_factor (`float`, optional) — Scaling is applied when passed, e.g. `latents * self.vae.config.scaling_factor`.
  - SD v1: 0.18215
  - SD XL: 0.13025
  - Flux: 0.3611

  If `None`, the input must be passed in with scaling already applied.
- shift_factor (`float`, optional) — Shift is applied when passed, e.g. `latents - self.vae.config.shift_factor`.
  - Flux: 0.1159

  If `None`, the input must be passed in with the shift already applied.
Hugging Face Hybrid Inference that allows running VAE encode remotely.
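A minimal sketch, using the Flux factors listed above, of how the encode-side normalization (`- shift_factor`, then `* scaling_factor`) and the decode-side denormalization (`/ scaling_factor`, then `+ shift_factor`) are inverses of each other. This is local arithmetic only, illustrating what the endpoints apply remotely when the factors are passed:

```python
import torch

FLUX_SCALING = 0.3611  # scaling_factor for Flux
FLUX_SHIFT = 0.1159    # shift_factor for Flux

raw_latents = torch.randn(1, 16, 64, 64)

# Encode side: shift then scale (what remote_encode applies when factors are passed).
normalized = (raw_latents - FLUX_SHIFT) * FLUX_SCALING

# Decode side: unscale then unshift (what remote_decode applies) recovers the input.
recovered = normalized / FLUX_SCALING + FLUX_SHIFT
```

Passing matching `scaling_factor`/`shift_factor` to `remote_encode` and `remote_decode` therefore round-trips latents correctly; passing them to only one side leaves the latents in the wrong space.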