Flux2
Flux.2 is the latest series of image generation models from Black Forest Labs, succeeding the Flux.1 series. It is an entirely new model, with a new architecture and pre-training done from scratch!
Original model checkpoints for Flux.2 can be found here. Original inference code can be found here.
Flux.2 can be quite expensive to run on consumer hardware. However, you can apply a suite of optimizations to run it faster and in a more memory-friendly manner. Check out this section for more details. Additionally, Flux.2 can benefit from quantization for memory efficiency, with a trade-off in inference latency. Refer to this blog post to learn more.
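For example, quantizing the transformer with bitsandbytes can substantially reduce the memory footprint. A minimal sketch, assuming `diffusers` with bitsandbytes support is installed and that the checkpoint stores the transformer under a `transformer` subfolder (the usual Diffusers layout):

```py
import torch
from diffusers import BitsAndBytesConfig, Flux2Pipeline, Flux2Transformer2DModel

# 4-bit NF4 quantization of the transformer (exact savings depend on the backend).
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = Flux2Transformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.2-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
pipe = Flux2Pipeline.from_pretrained(
    "black-forest-labs/FLUX.2-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
# Offload idle components to CPU to further reduce peak VRAM usage.
pipe.enable_model_cpu_offload()
```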
Caching may also speed up inference by storing and reusing intermediate outputs.
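A minimal sketch of one such technique, assuming a recent Diffusers release that exposes `FirstBlockCacheConfig` and that `Flux2Transformer2DModel` supports the `enable_cache` helper (the threshold value is illustrative):

```py
from diffusers import FirstBlockCacheConfig

# Reuse the `pipe` from the snippet above. Higher thresholds reuse cached
# block outputs more aggressively: faster inference, potentially lower fidelity.
pipe.transformer.enable_cache(FirstBlockCacheConfig(threshold=0.2))
```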
Flux2Pipeline
class diffusers.Flux2Pipeline
( scheduler: FlowMatchEulerDiscreteScheduler, vae: AutoencoderKLFlux2, text_encoder: Mistral3ForConditionalGeneration, tokenizer: AutoProcessor, transformer: Flux2Transformer2DModel )
Parameters
- transformer (`Flux2Transformer2DModel`) — Conditional Transformer (MMDiT) architecture to denoise the encoded image latents.
- scheduler (`FlowMatchEulerDiscreteScheduler`) — A scheduler to be used in combination with `transformer` to denoise the encoded image latents.
- vae (`AutoencoderKLFlux2`) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- text_encoder (`Mistral3ForConditionalGeneration`) — Text encoder of class `Mistral3ForConditionalGeneration`.
- tokenizer (`AutoProcessor`) — Tokenizer of class `PixtralProcessor`.
The Flux2 pipeline for text-to-image generation.
Reference: https://bfl.ai/blog/flux-2
__call__
( image: typing.Union[typing.List[PIL.Image.Image], PIL.Image.Image, NoneType] = None, prompt: typing.Union[str, typing.List[str]] = None, height: typing.Optional[int] = None, width: typing.Optional[int] = None, num_inference_steps: int = 50, sigmas: typing.Optional[typing.List[float]] = None, guidance_scale: typing.Optional[float] = 4.0, num_images_per_prompt: int = 1, generator: typing.Union[torch.Generator, typing.List[torch.Generator], NoneType] = None, latents: typing.Optional[torch.Tensor] = None, prompt_embeds: typing.Optional[torch.Tensor] = None, output_type: typing.Optional[str] = 'pil', return_dict: bool = True, attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None, callback_on_step_end: typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None, callback_on_step_end_tensor_inputs: typing.List[str] = ['latents'], max_sequence_length: int = 512, text_encoder_out_layers: typing.Tuple[int] = (10, 20, 30) ) → `~pipelines.flux2.Flux2PipelineOutput` or `tuple`
Parameters
- image (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, or `List[np.ndarray]`) — Image, numpy array or tensor representing an image batch to be used as the starting point. For both numpy arrays and pytorch tensors, the expected value range is between `[0, 1]`. If it's a tensor or a list of tensors, the expected shape should be `(B, C, H, W)` or `(C, H, W)`. If it is a numpy array or a list of arrays, the expected shape should be `(B, H, W, C)` or `(H, W, C)`. It can also accept image latents as `image`, but if passing latents directly they are not encoded again.
- prompt (`str` or `List[str]`, *optional*) — The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds` instead.
- guidance_scale (`float`, *optional*, defaults to 4.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. `guidance_scale` is defined as `w` of equation 2 of the Imagen Paper. Guidance scale is enabled by setting `guidance_scale > 1`. A higher guidance scale encourages generating images that are closely linked to the text `prompt`, usually at the expense of lower image quality.
- height (`int`, *optional*, defaults to 1024) — The height in pixels of the generated image. This is set to 1024 by default for the best results.
- width (`int`, *optional*, defaults to 1024) — The width in pixels of the generated image. This is set to 1024 by default for the best results.
- num_inference_steps (`int`, *optional*, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- sigmas (`List[float]`, *optional*) — Custom sigmas to use for the denoising process with schedulers which support a `sigmas` argument in their `set_timesteps` method. If not defined, the default behavior when `num_inference_steps` is passed will be used.
- num_images_per_prompt (`int`, *optional*, defaults to 1) — The number of images to generate per prompt.
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*) — One or a list of torch generator(s) to make generation deterministic.
- latents (`torch.Tensor`, *optional*) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random `generator`.
- prompt_embeds (`torch.Tensor`, *optional*) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the `prompt` input argument.
- output_type (`str`, *optional*, defaults to `"pil"`) — The output format of the generated image. Choose between `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, *optional*, defaults to `True`) — Whether or not to return a `~pipelines.flux2.Flux2PipelineOutput` instead of a plain tuple.
- attention_kwargs (`dict`, *optional*) — A kwargs dictionary that, if specified, is passed along to the `AttentionProcessor` as defined under `self.processor` in diffusers.models.attention_processor.
- callback_on_step_end (`Callable`, *optional*) — A function that is called at the end of each denoising step during inference. The function is called with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by `callback_on_step_end_tensor_inputs`. See the sketch after the example below.
- callback_on_step_end_tensor_inputs (`List`, *optional*) — The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list will be passed as the `callback_kwargs` argument. You will only be able to include variables listed in the `._callback_tensor_inputs` attribute of your pipeline class.
- max_sequence_length (`int`, *optional*, defaults to 512) — Maximum sequence length to use with the `prompt`.
- text_encoder_out_layers (`Tuple[int]`, defaults to `(10, 20, 30)`) — Layer indices to use in the `text_encoder` to derive the final prompt embeddings.
Returns

`~pipelines.flux2.Flux2PipelineOutput` or `tuple`

`~pipelines.flux2.Flux2PipelineOutput` if `return_dict` is `True`, otherwise a `tuple`. When returning a tuple, the first element is a list with the generated images.
Function invoked when calling the pipeline for generation.
Examples:

```py
>>> import torch
>>> from diffusers import Flux2Pipeline

>>> pipe = Flux2Pipeline.from_pretrained("black-forest-labs/FLUX.2-dev", torch_dtype=torch.bfloat16)
>>> pipe.to("cuda")
>>> prompt = "A cat holding a sign that says hello world"
>>> # Depending on the variant being used, the pipeline call will slightly vary.
>>> # Refer to the pipeline documentation for more details.
>>> image = pipe(prompt, num_inference_steps=50, guidance_scale=2.5).images[0]
>>> image.save("flux.png")
```
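Beyond basic text-to-image, the `image` argument enables image-conditioned generation, as the call signature above shows. A minimal sketch, reusing the pipeline from the example; the reference image path is hypothetical:

```py
>>> from diffusers.utils import load_image

>>> ref = load_image("reference.png")  # hypothetical local file
>>> edited = pipe("Make the cat wear a tiny wizard hat", image=ref, num_inference_steps=50).images[0]
```

Similarly, `callback_on_step_end` can be used to inspect intermediate state during sampling. A minimal sketch, assuming `latents` is listed in the pipeline's `_callback_tensor_inputs`:

```py
>>> def log_step(pipeline, step, timestep, callback_kwargs):
...     # Called between denoising steps; the returned dict is fed back into the loop.
...     print(f"step {step}: latents norm = {callback_kwargs['latents'].norm().item():.2f}")
...     return callback_kwargs

>>> image = pipe(
...     prompt,
...     callback_on_step_end=log_step,
...     callback_on_step_end_tensor_inputs=["latents"],
... ).images[0]
```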