AudioLDM 2 was proposed in AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining by Haohe Liu et al. AudioLDM 2 takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional sound effects, human speech and music.
Inspired by Stable Diffusion, AudioLDM 2 is a text-to-audio latent diffusion model (LDM) that learns continuous audio representations from text embeddings. Two text encoder models are used to compute the text embeddings from a prompt input: the text-branch of CLAP and the encoder of Flan-T5. These text embeddings are then projected to a shared embedding space by an AudioLDM2ProjectionModel. A GPT2 language model (LM) is used to auto-regressively predict eight new embedding vectors, conditional on the projected CLAP and Flan-T5 embeddings. The generated embedding vectors and Flan-T5 text embeddings are used as cross-attention conditioning in the LDM. The UNet of AudioLDM 2 is unique in the sense that it takes two cross-attention embeddings, as opposed to one cross-attention conditioning, as in most other LDMs.
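The conditioning chain can be sketched with mock tensors. The sketch below is illustrative only: NumPy arrays stand in for the actual PyTorch modules, and the dimensions are made up, not the trained checkpoint sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative; the real checkpoints use their own sizes)
clap_dim, t5_dim, shared_dim = 512, 1024, 768
t5_len = 12  # tokens in the Flan-T5 sequence

# 1. The two text encoders produce embeddings of different shapes:
clap_emb = rng.standard_normal((1, 1, clap_dim))   # CLAP: one pooled vector
t5_emb = rng.standard_normal((1, t5_len, t5_dim))  # Flan-T5: one vector per token

# 2. The projection model maps both into a shared embedding space:
proj_clap = rng.standard_normal((clap_dim, shared_dim))
proj_t5 = rng.standard_normal((t5_dim, shared_dim))
shared = np.concatenate([clap_emb @ proj_clap, t5_emb @ proj_t5], axis=1)
print(shared.shape)  # (1, 13, 768)

# 3. A GPT2 LM auto-regressively predicts eight new embedding vectors,
#    conditional on the shared sequence (stubbed with random outputs here):
generated = rng.standard_normal((1, 8, shared_dim))

# 4. The UNet then receives two cross-attention streams:
#    the generated vectors and the Flan-T5 token embeddings.
```

The point of the sketch is the data flow, not the numbers: two differently-shaped text embeddings are reduced to one shared sequence before the language model runs.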
The abstract of the paper is the following:
Although audio generation shares commonalities across different types of audio, such as speech, music, and sound effects, designing models for each type requires careful consideration of specific objectives and biases that can significantly differ from those of other types. To bring us closer to a unified perspective of audio generation, this paper proposes a framework that utilizes the same learning method for speech, music, and sound effect generation. Our framework introduces a general representation of audio, called language of audio (LOA). Any audio can be translated into LOA based on AudioMAE, a self-supervised pre-trained representation learning model. In the generation process, we translate any modalities into LOA by using a GPT-2 model, and we perform self-supervised audio generation learning with a latent diffusion model conditioned on LOA. The proposed framework naturally brings advantages such as in-context learning abilities and reusable self-supervised pretrained AudioMAE and latent diffusion models. Experiments on the major benchmarks of text-to-audio, text-to-music, and text-to-speech demonstrate new state-of-the-art or competitive performance to previous approaches.
This pipeline was contributed by sanchit-gandhi. The original codebase can be found at haoheliu/audioldm2.
AudioLDM2 comes in three variants. Two of these checkpoints are applicable to the general task of text-to-audio generation. The third checkpoint is trained exclusively on text-to-music generation.
All checkpoints share the same model size for the text encoders and VAE. They differ in the size and depth of the UNet. See table below for details on the three checkpoints:
| Checkpoint | Task | UNet Model Size | Total Model Size | Training Data / h |
|---|---|---|---|---|
| audioldm2 | Text-to-audio | 350M | 1.1B | 1150k |
| audioldm2-large | Text-to-audio | 750M | 1.5B | 1150k |
| audioldm2-music | Text-to-music | 350M | 1.1B | 665k |
- The quality of the predicted audio sample can be controlled by the `num_inference_steps` argument; higher steps give higher quality audio at the expense of slower inference.
- The length of the predicted audio sample can be controlled by varying the `audio_length_in_s` argument.
- Multiple waveforms can be generated in one go: set `num_waveforms_per_prompt` to a value greater than 1. Automatic scoring will be performed between the generated waveforms and prompt text, and the audios ranked from best to worst accordingly.

The pipeline example further below demonstrates these tips in practice.
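As a toy illustration of the automatic scoring, the ranking can be thought of as sorting candidate waveforms by the cosine similarity between their audio embeddings and the prompt's text embedding in the joint text-audio space. The embeddings below are mock 2-D vectors; the real pipeline computes them with its CLAP model.

```python
import numpy as np

def rank_by_score(audio_embs: np.ndarray, text_emb: np.ndarray) -> np.ndarray:
    """Return waveform indices sorted from best to worst by cosine similarity."""
    a = audio_embs / np.linalg.norm(audio_embs, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb)
    scores = a @ t
    return np.argsort(scores)[::-1]

# Mock joint text-audio embeddings: 3 candidate waveforms, 1 prompt
text_emb = np.array([1.0, 0.0])
audio_embs = np.array(
    [
        [0.0, 1.0],  # orthogonal to the prompt -> worst
        [1.0, 0.1],  # closely aligned -> best
        [0.5, 0.5],  # in between
    ]
)
print(rank_by_score(audio_embs, text_emb))  # [1 2 0]
```

With `num_waveforms_per_prompt > 1`, the pipeline applies this kind of ranking for you, so `audios[0]` is always the best-scoring candidate.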
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
( vae: AutoencoderKL text_encoder: ClapModel text_encoder_2: T5EncoderModel projection_model: AudioLDM2ProjectionModel language_model: GPT2Model tokenizer: typing.Union[transformers.models.roberta.tokenization_roberta.RobertaTokenizer, transformers.models.roberta.tokenization_roberta_fast.RobertaTokenizerFast] tokenizer_2: typing.Union[transformers.models.t5.tokenization_t5.T5Tokenizer, transformers.models.t5.tokenization_t5_fast.T5TokenizerFast] feature_extractor: ClapFeatureExtractor unet: AudioLDM2UNet2DConditionModel scheduler: KarrasDiffusionSchedulers vocoder: SpeechT5HifiGan )
Parameters
- **unet** (`AudioLDM2UNet2DConditionModel`) — A `UNet2DConditionModel` variant to denoise the encoded audio latents.
- **scheduler** (`KarrasDiffusionSchedulers`) — A scheduler to be used in combination with `unet` to denoise the encoded audio latents. Can be one of `DDIMScheduler`, `LMSDiscreteScheduler`, or `PNDMScheduler`.
- **vocoder** (`SpeechT5HifiGan`) — Vocoder of class `SpeechT5HifiGan` to convert the mel-spectrogram latents to the final audio waveform.

Pipeline for text-to-audio generation using AudioLDM2.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
( prompt: typing.Union[str, typing.List[str]] = None audio_length_in_s: typing.Optional[float] = None num_inference_steps: int = 200 guidance_scale: float = 3.5 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_waveforms_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None generated_prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_generated_prompt_embeds: typing.Optional[torch.FloatTensor] = None attention_mask: typing.Optional[torch.LongTensor] = None negative_attention_mask: typing.Optional[torch.LongTensor] = None max_new_tokens: typing.Optional[int] = None return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: typing.Optional[int] = 1 cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None output_type: typing.Optional[str] = 'np' ) → StableDiffusionPipelineOutput or tuple
Parameters
- **prompt** (`str` or `List[str]`, *optional*) — The prompt or prompts to guide audio generation. If not defined, you need to pass `prompt_embeds`.
- **audio_length_in_s** (`float`, *optional*, defaults to 10.24) — The length of the generated audio sample in seconds.
- **num_inference_steps** (`int`, *optional*, defaults to 200) — The number of denoising steps. More denoising steps usually lead to higher quality audio at the expense of slower inference.
- **guidance_scale** (`float`, *optional*, defaults to 3.5) — A higher guidance scale value encourages the model to generate audio that is closely linked to the text `prompt` at the expense of lower sound quality. Guidance scale is enabled when `guidance_scale > 1`.
- **negative_prompt** (`str` or `List[str]`, *optional*) — The prompt or prompts to guide what to not include in audio generation. If not defined, you need to pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
- **num_waveforms_per_prompt** (`int`, *optional*, defaults to 1) — The number of waveforms to generate per prompt. If `num_waveforms_per_prompt > 1`, automatic scoring is performed between the generated outputs and the text prompt. This scoring ranks the generated waveforms based on their cosine similarity with the text input in the joint text-audio embedding space.
- **eta** (`float`, *optional*, defaults to 0.0) — Corresponds to parameter eta (η) from the DDIM paper. Only applies to the `DDIMScheduler`, and is ignored in other schedulers.
- **generator** (`torch.Generator` or `List[torch.Generator]`, *optional*) — A `torch.Generator` to make generation deterministic.
- **latents** (`torch.FloatTensor`, *optional*) — Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for spectrogram generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random `generator`.
- **prompt_embeds** (`torch.FloatTensor`, *optional*) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
- **generated_prompt_embeds** (`torch.FloatTensor`, *optional*) — Pre-generated text embeddings from the GPT2 language model. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the `prompt` input argument.
- **negative_generated_prompt_embeds** (`torch.FloatTensor`, *optional*) — Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, `negative_prompt_embeds` will be computed from the `negative_prompt` input argument.
- **attention_mask** (`torch.LongTensor`, *optional*) — Pre-computed attention mask to be applied to the `prompt_embeds`. If not provided, the attention mask will be computed from the `prompt` input argument.
- **negative_attention_mask** (`torch.LongTensor`, *optional*) — Pre-computed attention mask to be applied to the `negative_prompt_embeds`. If not provided, the attention mask will be computed from the `negative_prompt` input argument.
- **max_new_tokens** (`int`, *optional*, defaults to `None`) — Number of new tokens to generate with the GPT2 language model. If not provided, the number of tokens is taken from the config of the model.
- **return_dict** (`bool`, *optional*, defaults to `True`) — Whether or not to return a `StableDiffusionPipelineOutput` instead of a plain tuple.
- **callback** (`Callable`, *optional*) — A function that is called every `callback_steps` steps during inference. The function is called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
- **callback_steps** (`int`, *optional*, defaults to 1) — The frequency at which the `callback` function is called. If not specified, the callback is called at every step.
- **cross_attention_kwargs** (`dict`, *optional*) — A kwargs dictionary that, if specified, is passed along to the `AttentionProcessor` as defined in `self.processor`.
- **output_type** (`str`, *optional*, defaults to `"np"`) — The output format of the generated audio. Choose between `"np"` to return a NumPy `np.ndarray` or `"pt"` to return a PyTorch `torch.Tensor` object. Set to `"latent"` to return the latent diffusion model (LDM) output.

Returns

`StableDiffusionPipelineOutput` or `tuple` — If `return_dict` is `True`, `StableDiffusionPipelineOutput` is returned, otherwise a `tuple` is returned where the first element is a list with the generated audio.
The call function to the pipeline for generation.
Examples:
>>> import scipy
>>> import torch
>>> from diffusers import AudioLDM2Pipeline
>>> repo_id = "cvssp/audioldm2"
>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> # define the prompts
>>> prompt = "The sound of a hammer hitting a wooden surface."
>>> negative_prompt = "Low quality."
>>> # set the seed for generator
>>> generator = torch.Generator("cuda").manual_seed(0)
>>> # run the generation
>>> audio = pipe(
... prompt,
... negative_prompt=negative_prompt,
... num_inference_steps=200,
... audio_length_in_s=10.0,
... num_waveforms_per_prompt=3,
... generator=generator,
... ).audios
>>> # save the best audio sample (index 0) as a .wav file
>>> scipy.io.wavfile.write("hammer.wav", rate=16000, data=audio[0])
Disable sliced VAE decoding. If enable_vae_slicing
was previously enabled, this method will go back to
computing decoding in one step.
Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward` method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the `unet`.
Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
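The memory saving comes from the fact that per-sample decoding is equivalent to whole-batch decoding while only materializing one sample's activations at a time. The sketch below uses a trivial stand-in function rather than the real VAE, purely to illustrate the slicing pattern:

```python
import numpy as np

def decode(latents: np.ndarray) -> np.ndarray:
    # Stand-in for a decoder applied independently to each sample in the batch.
    return latents * 2.0 + 1.0

def decode_sliced(latents: np.ndarray) -> np.ndarray:
    # Decode one latent at a time and re-assemble: peak memory scales with a
    # single sample rather than the whole batch.
    return np.concatenate([decode(latents[i : i + 1]) for i in range(len(latents))])

batch = np.random.rand(4, 8, 16, 16)  # (batch, channels, height, width)
assert np.allclose(decode(batch), decode_sliced(batch))
```

Because each sample is decoded independently, slicing changes memory usage but not the output, which is why it can also enable larger batch sizes.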
( prompt device num_waveforms_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None generated_prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_generated_prompt_embeds: typing.Optional[torch.FloatTensor] = None attention_mask: typing.Optional[torch.LongTensor] = None negative_attention_mask: typing.Optional[torch.LongTensor] = None max_new_tokens: typing.Optional[int] = None ) → prompt_embeds (torch.FloatTensor
)
Parameters
- **prompt** (`str` or `List[str]`, *optional*) — The prompt to be encoded.
- **device** (`torch.device`) — The torch device.
- **num_waveforms_per_prompt** (`int`) — The number of waveforms that should be generated per prompt.
- **do_classifier_free_guidance** (`bool`) — Whether to use classifier-free guidance or not.
- **negative_prompt** (`str` or `List[str]`, *optional*) — The prompt or prompts not to guide the audio generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than 1).
- **prompt_embeds** (`torch.FloatTensor`, *optional*) — Pre-computed text embeddings from the Flan T5 model. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be computed from the `prompt` input argument.
- **negative_prompt_embeds** (`torch.FloatTensor`, *optional*) — Pre-computed negative text embeddings from the Flan T5 model. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, `negative_prompt_embeds` will be computed from the `negative_prompt` input argument.
- **generated_prompt_embeds** (`torch.FloatTensor`, *optional*) — Pre-generated text embeddings from the GPT2 language model. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the `prompt` input argument.
- **negative_generated_prompt_embeds** (`torch.FloatTensor`, *optional*) — Pre-generated negative text embeddings from the GPT2 language model. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, `negative_prompt_embeds` will be computed from the `negative_prompt` input argument.
- **attention_mask** (`torch.LongTensor`, *optional*) — Pre-computed attention mask to be applied to the `prompt_embeds`. If not provided, the attention mask will be computed from the `prompt` input argument.
- **negative_attention_mask** (`torch.LongTensor`, *optional*) — Pre-computed attention mask to be applied to the `negative_prompt_embeds`. If not provided, the attention mask will be computed from the `negative_prompt` input argument.
- **max_new_tokens** (`int`, *optional*, defaults to `None`) — The number of new tokens to generate with the GPT2 language model.

Returns

prompt_embeds (`torch.FloatTensor`) — Text embeddings from the Flan T5 model.
attention_mask (`torch.LongTensor`) — Attention mask to be applied to the `prompt_embeds`.
generated_prompt_embeds (`torch.FloatTensor`) — Text embeddings generated from the GPT2 language model.

Encodes the prompt into text encoder hidden states.
Example:
>>> import scipy
>>> import torch
>>> from diffusers import AudioLDM2Pipeline
>>> repo_id = "cvssp/audioldm2"
>>> pipe = AudioLDM2Pipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> # Get text embedding vectors
>>> prompt_embeds, attention_mask, generated_prompt_embeds = pipe.encode_prompt(
... prompt="Techno music with a strong, upbeat tempo and high melodic riffs",
... device="cuda",
... do_classifier_free_guidance=True,
... )
>>> # Pass text embeddings to pipeline for text-conditional audio generation
>>> audio = pipe(
... prompt_embeds=prompt_embeds,
... attention_mask=attention_mask,
... generated_prompt_embeds=generated_prompt_embeds,
... num_inference_steps=200,
... audio_length_in_s=10.0,
... ).audios[0]
>>> # save generated audio sample
>>> scipy.io.wavfile.write("techno.wav", rate=16000, data=audio)
( inputs_embeds: Tensor = None max_new_tokens: int = 8 **model_kwargs ) → inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`)
Parameters
- **inputs_embeds** (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — The sequence used as a prompt for the generation.
- **max_new_tokens** (`int`) — Number of new tokens to generate.
- **model_kwargs** (`Dict[str, Any]`, *optional*) — Ad hoc parametrization of additional model-specific kwargs that will be forwarded to the `forward` function of the model.

Returns

inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`) — The sequence of generated hidden-states.
Generates a sequence of hidden-states from the language model, conditioned on the embedding inputs.
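A minimal sketch of this generation loop, with a fixed linear map standing in for the GPT2 forward pass. The shapes and the default of 8 new tokens mirror the method; the stub model and dimensions are illustrative only.

```python
import numpy as np

def generate_hidden_states(inputs_embeds, step_fn, max_new_tokens=8):
    """Auto-regressively append max_new_tokens hidden states, then return only
    the newly generated ones: shape (batch, max_new_tokens, hidden_size)."""
    seq = inputs_embeds
    for _ in range(max_new_tokens):
        # "LM" output for the last position becomes the next embedding
        next_state = step_fn(seq)[:, -1:, :]
        seq = np.concatenate([seq, next_state], axis=1)
    return seq[:, -max_new_tokens:, :]

# Stub language model: a fixed linear map applied position-wise
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16)) * 0.1
prompt_states = rng.standard_normal((1, 5, 16))
out = generate_hidden_states(prompt_states, lambda s: s @ W)
print(out.shape)  # (1, 8, 16)
```

Each generated hidden state is fed back as input for the next step, which is the same loop structure the real method runs with the GPT2 model.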
( text_encoder_dim text_encoder_1_dim langauge_model_dim )
Parameters
- **text_encoder_dim** (`int`) — Dimensionality of the text embeddings from the first text encoder (CLAP).
- **text_encoder_1_dim** (`int`) — Dimensionality of the text embeddings from the second text encoder (T5 or VITS).
- **langauge_model_dim** (`int`) — Dimensionality of the text embeddings from the language model (GPT2).

A simple linear projection model to map two text embeddings to a shared latent space. It also inserts learned embedding vectors at the start and end of each text embedding sequence. Each variable appended with `_1` refers to that corresponding to the second text encoder; otherwise, it is from the first.
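A toy NumPy sketch of what this amounts to for one of the two text streams: a linear projection into the shared space, followed by insertion of learned start and end vectors. The dimensions and random weights below are illustrative, not the trained values.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, in_dim, shared_dim = 6, 512, 768

# Linear projection into the shared space (a learned layer in the real model)
W = rng.standard_normal((in_dim, shared_dim)) * 0.02
b = np.zeros(shared_dim)

# Learned start/end embedding vectors (random stand-ins here)
sos = rng.standard_normal(shared_dim)
eos = rng.standard_normal(shared_dim)

hidden = rng.standard_normal((1, seq_len, in_dim))  # one encoder's output
projected = hidden @ W + b                          # (1, seq_len, shared_dim)

# Wrap the projected sequence with the start and end vectors
batch = projected.shape[0]
start = np.broadcast_to(sos, (batch, 1, shared_dim))
end = np.broadcast_to(eos, (batch, 1, shared_dim))
out = np.concatenate([start, projected, end], axis=1)
print(out.shape)  # (1, 8, 768)
```

The real model does this twice, once per text encoder (the `_1` variables), and then concatenates the two wrapped sequences along the sequence dimension.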
( hidden_states: typing.Optional[torch.FloatTensor] = None hidden_states_1: typing.Optional[torch.FloatTensor] = None attention_mask: typing.Optional[torch.LongTensor] = None attention_mask_1: typing.Optional[torch.LongTensor] = None )
( sample_size: typing.Optional[int] = None in_channels: int = 4 out_channels: int = 4 flip_sin_to_cos: bool = True freq_shift: int = 0 down_block_types: typing.Tuple[str] = ('CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D', 'DownBlock2D') mid_block_type: typing.Optional[str] = 'UNetMidBlock2DCrossAttn' up_block_types: typing.Tuple[str] = ('UpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'CrossAttnUpBlock2D') only_cross_attention: typing.Union[bool, typing.Tuple[bool]] = False block_out_channels: typing.Tuple[int] = (320, 640, 1280, 1280) layers_per_block: typing.Union[int, typing.Tuple[int]] = 2 downsample_padding: int = 1 mid_block_scale_factor: float = 1 act_fn: str = 'silu' norm_num_groups: typing.Optional[int] = 32 norm_eps: float = 1e-05 cross_attention_dim: typing.Union[int, typing.Tuple[int]] = 1280 transformer_layers_per_block: typing.Union[int, typing.Tuple[int]] = 1 attention_head_dim: typing.Union[int, typing.Tuple[int]] = 8 num_attention_heads: typing.Union[int, typing.Tuple[int], NoneType] = None use_linear_projection: bool = False class_embed_type: typing.Optional[str] = None num_class_embeds: typing.Optional[int] = None upcast_attention: bool = False resnet_time_scale_shift: str = 'default' time_embedding_type: str = 'positional' time_embedding_dim: typing.Optional[int] = None time_embedding_act_fn: typing.Optional[str] = None timestep_post_act: typing.Optional[str] = None time_cond_proj_dim: typing.Optional[int] = None conv_in_kernel: int = 3 conv_out_kernel: int = 3 projection_class_embeddings_input_dim: typing.Optional[int] = None class_embeddings_concat: bool = False )
Parameters
- **sample_size** (`int` or `Tuple[int, int]`, *optional*, defaults to `None`) — Height and width of the input/output sample.
- **in_channels** (`int`, *optional*, defaults to 4) — Number of channels in the input sample.
- **out_channels** (`int`, *optional*, defaults to 4) — Number of channels in the output.
- **flip_sin_to_cos** (`bool`, *optional*, defaults to `True`) — Whether to flip the sin to cos in the time embedding.
- **freq_shift** (`int`, *optional*, defaults to 0) — The frequency shift to apply to the time embedding.
- **down_block_types** (`Tuple[str]`, *optional*, defaults to `("CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D", "DownBlock2D")`) — The tuple of downsample blocks to use.
- **mid_block_type** (`str`, *optional*, defaults to `"UNetMidBlock2DCrossAttn"`) — Block type for the middle of the UNet; it can only be `UNetMidBlock2DCrossAttn` for AudioLDM2.
- **up_block_types** (`Tuple[str]`, *optional*, defaults to `("UpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "CrossAttnUpBlock2D")`) — The tuple of upsample blocks to use.
- **only_cross_attention** (`bool` or `Tuple[bool]`, *optional*, defaults to `False`) — Whether to include self-attention in the basic transformer blocks, see `BasicTransformerBlock`.
- **block_out_channels** (`Tuple[int]`, *optional*, defaults to `(320, 640, 1280, 1280)`) — The tuple of output channels for each block.
- **layers_per_block** (`int`, *optional*, defaults to 2) — The number of layers per block.
- **downsample_padding** (`int`, *optional*, defaults to 1) — The padding to use for the downsampling convolution.
- **mid_block_scale_factor** (`float`, *optional*, defaults to 1.0) — The scale factor to use for the mid block.
- **act_fn** (`str`, *optional*, defaults to `"silu"`) — The activation function to use.
- **norm_num_groups** (`int`, *optional*, defaults to 32) — The number of groups to use for the normalization. If `None`, normalization and activation layers are skipped in post-processing.
- **norm_eps** (`float`, *optional*, defaults to 1e-5) — The epsilon to use for the normalization.
- **cross_attention_dim** (`int` or `Tuple[int]`, *optional*, defaults to 1280) — The dimension of the cross attention features.
- **transformer_layers_per_block** (`int` or `Tuple[int]`, *optional*, defaults to 1) — The number of transformer blocks of type `BasicTransformerBlock`. Only relevant for `CrossAttnDownBlock2D`, `CrossAttnUpBlock2D`, `UNetMidBlock2DCrossAttn`.
- **attention_head_dim** (`int`, *optional*, defaults to 8) — The dimension of the attention heads.
- **num_attention_heads** (`int`, *optional*) — The number of attention heads. If not defined, defaults to `attention_head_dim`.
- **resnet_time_scale_shift** (`str`, *optional*, defaults to `"default"`) — Time scale shift config for ResNet blocks (see `ResnetBlock2D`). Choose from `default` or `scale_shift`.
- **class_embed_type** (`str`, *optional*, defaults to `None`) — The type of class embedding to use, which is ultimately summed with the time embeddings. Choose from `None`, `"timestep"`, `"identity"`, `"projection"`, or `"simple_projection"`.
- **num_class_embeds** (`int`, *optional*, defaults to `None`) — Input dimension of the learnable embedding matrix to be projected to `time_embed_dim` when performing class conditioning with `class_embed_type` equal to `None`.
- **time_embedding_type** (`str`, *optional*, defaults to `"positional"`) — The type of position embedding to use for timesteps. Choose from `positional` or `fourier`.
- **time_embedding_dim** (`int`, *optional*, defaults to `None`) — An optional override for the dimension of the projected time embedding.
- **time_embedding_act_fn** (`str`, *optional*, defaults to `None`) — Optional activation function to use only once on the time embeddings before they are passed to the rest of the UNet. Choose from `silu`, `mish`, `gelu`, and `swish`.
- **timestep_post_act** (`str`, *optional*, defaults to `None`) — The second activation function to use in the timestep embedding. Choose from `silu`, `mish` and `gelu`.
- **time_cond_proj_dim** (`int`, *optional*, defaults to `None`) — The dimension of the `cond_proj` layer in the timestep embedding.
- **conv_in_kernel** (`int`, *optional*, defaults to 3) — The kernel size of the `conv_in` layer.
- **conv_out_kernel** (`int`, *optional*, defaults to 3) — The kernel size of the `conv_out` layer.
- **projection_class_embeddings_input_dim** (`int`, *optional*) — The dimension of the `class_labels` input when `class_embed_type="projection"`. Required when `class_embed_type="projection"`.
- **class_embeddings_concat** (`bool`, *optional*, defaults to `False`) — Whether to concatenate the time embeddings with the class embeddings.

A conditional 2D UNet model that takes a noisy sample, conditional state, and a timestep, and returns a sample-shaped output. Compared to the vanilla `UNet2DConditionModel`, this variant optionally includes an additional self-attention layer in each Transformer block, as well as multiple cross-attention layers. It also allows for up to two cross-attention embeddings, `encoder_hidden_states` and `encoder_hidden_states_1`.
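A toy single-head sketch of how a block can consume both conditioning streams, attending first over `encoder_hidden_states` and then over `encoder_hidden_states_1`. Identity projections and made-up shapes are used for clarity; the real blocks use learned query/key/value projections and multi-head attention.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, context):
    # Single-head attention with identity projections (shape illustration only)
    scores = query @ context.T / np.sqrt(query.shape[-1])
    return softmax(scores) @ context

d = 8
latents = np.random.rand(10, d)          # flattened spectrogram latents
generated_embeds = np.random.rand(8, d)  # encoder_hidden_states (from GPT2)
t5_embeds = np.random.rand(12, d)        # encoder_hidden_states_1 (from Flan-T5)

# Attend over both conditioning streams in turn, with residual connections
hidden = latents + cross_attention(latents, generated_embeds)
hidden = hidden + cross_attention(hidden, t5_embeds)
print(hidden.shape)  # (10, 8)
```

The key point is that the latent sequence length is preserved while information from two differently-sized conditioning sequences is mixed in.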
This model inherits from ModelMixin. Check the superclass documentation for its generic methods implemented for all models (such as downloading or saving).
( sample: FloatTensor timestep: typing.Union[torch.Tensor, float, int] encoder_hidden_states: Tensor class_labels: typing.Optional[torch.Tensor] = None timestep_cond: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None return_dict: bool = True encoder_hidden_states_1: typing.Optional[torch.Tensor] = None encoder_attention_mask_1: typing.Optional[torch.Tensor] = None ) → UNet2DConditionOutput or tuple
Parameters
- **sample** (`torch.FloatTensor`) — The noisy input tensor with the following shape: `(batch, channel, height, width)`.
- **timestep** (`torch.FloatTensor` or `float` or `int`) — The number of timesteps to denoise an input.
- **encoder_hidden_states** (`torch.FloatTensor`) — The encoder hidden states with shape `(batch, sequence_length, feature_dim)`.
- **encoder_attention_mask** (`torch.Tensor`) — A cross-attention mask of shape `(batch, sequence_length)` applied to `encoder_hidden_states`. If `True` the mask is kept, otherwise if `False` it is discarded. The mask will be converted into a bias, which adds large negative values to the attention scores corresponding to "discard" tokens.
- **return_dict** (`bool`, *optional*, defaults to `True`) — Whether or not to return a `UNet2DConditionOutput` instead of a plain tuple.
- **cross_attention_kwargs** (`dict`, *optional*) — A kwargs dictionary that, if specified, is passed along to the `AttnProcessor`.
- **encoder_hidden_states_1** (`torch.FloatTensor`, *optional*) — A second set of encoder hidden states with shape `(batch, sequence_length_2, feature_dim_2)`. Can be used to condition the model on a different set of embeddings to `encoder_hidden_states`.
- **encoder_attention_mask_1** (`torch.Tensor`, *optional*) — A cross-attention mask of shape `(batch, sequence_length_2)` applied to `encoder_hidden_states_1`. If `True` the mask is kept, otherwise if `False` it is discarded. The mask will be converted into a bias, which adds large negative values to the attention scores corresponding to "discard" tokens.

Returns

`UNet2DConditionOutput` or `tuple` — If `return_dict` is `True`, an `UNet2DConditionOutput` is returned, otherwise a `tuple` is returned where the first element is the sample tensor.
The AudioLDM2UNet2DConditionModel forward method.
( audios: ndarray )
Parameters

- **audios** (`np.ndarray`) — List of denoised audio samples of a NumPy array of shape `(batch_size, audio_length)`.

Output class for audio pipelines.