Audio Diffusion by Robert Dargavel Smith.
Audio Diffusion leverages the recent advances in image generation using diffusion models by converting audio samples to and from mel spectrogram images.
The original codebase for this implementation, including training scripts and example notebooks, can be found here.
| Pipeline | Tasks | Colab |
|---|---|---|
| pipeline_audio_diffusion.py | Unconditional Audio Generation | |
```python
import torch
from IPython.display import Audio
from diffusers import DiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-256").to(device)

output = pipe()
display(output.images[0])
display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
```
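Outside a notebook there is no `Audio` widget. As a sketch (not part of the pipeline API), a generated waveform such as `output.audios[0, 0]` can be saved with the standard-library `wave` module; the sine wave below stands in for the pipeline output:

```python
import wave

import numpy as np


def write_wav(path, audio, sample_rate):
    """Write a mono float waveform in [-1, 1] to a 16-bit PCM WAV file."""
    pcm = (np.clip(audio, -1.0, 1.0) * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)        # mono
        f.setsampwidth(2)        # 16-bit samples
        f.setframerate(sample_rate)
        f.writeframes(pcm.tobytes())


# Synthetic 1-second 440 Hz tone standing in for output.audios[0, 0]
sr = 22050
t = np.linspace(0, 1, sr, endpoint=False)
write_wav("sample.wav", np.sin(2 * np.pi * 440 * t), sr)
```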
```python
import torch
from IPython.display import Audio
from diffusers import DiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = DiffusionPipeline.from_pretrained("teticio/latent-audio-diffusion-256").to(device)

output = pipe()
display(output.images[0])
display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
```
```python
import torch
from IPython.display import Audio
from diffusers import DiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-ddim-256").to(device)

output = pipe()
display(output.images[0])
display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))

# Generate a variation of the sample above, masking (keeping) the first
# and last second of audio and starting de-noising halfway through
output = pipe(
    raw_audio=output.audios[0, 0],
    start_step=int(pipe.get_default_steps() / 2),
    mask_start_secs=1,
    mask_end_secs=1,
)
display(output.images[0])
display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
```
class AudioDiffusionPipeline

( vqvae: AutoencoderKL, unet: UNet2DConditionModel, mel: Mel, scheduler: Union[DDIMScheduler, DDPMScheduler] )

Parameters

- **scheduler** (`DDIMScheduler` or `DDPMScheduler`) — de-noising scheduler

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all pipelines (such as downloading, saving, or running on a particular device).
__call__

( batch_size: int = 1, audio_file: str = None, raw_audio: np.ndarray = None, slice: int = 0, start_step: int = 0, steps: int = None, generator: torch.Generator = None, mask_start_secs: float = 0, mask_end_secs: float = 0, step_generator: torch.Generator = None, eta: float = 0, noise: torch.Tensor = None, return_dict: bool = True ) → List[PIL Image]
Parameters

- **batch_size** (`int`) — number of samples to generate
- **audio_file** (`str`) — must be a file on disk due to Librosa limitation, or
- **raw_audio** (`np.ndarray`) — audio as a numpy array
- **slice** (`int`) — slice number of audio to convert
- **start_step** (`int`) — step to start de-noising at
- **steps** (`int`) — number of de-noising steps (defaults to 50 for DDIM, 1000 for DDPM)
- **generator** (`torch.Generator`) — random number generator, or None
- **mask_start_secs** (`float`) — number of seconds of audio to mask (not generate) at the start
- **mask_end_secs** (`float`) — number of seconds of audio to mask (not generate) at the end
- **step_generator** (`torch.Generator`) — random number generator used to de-noise, or None
- **eta** (`float`) — parameter between 0 and 1 used with the DDIM scheduler
- **noise** (`torch.Tensor`) — noise tensor of shape (batch_size, 1, height, width), or None
- **return_dict** (`bool`) — if True, return AudioPipelineOutput and ImagePipelineOutput; otherwise, a Tuple
Returns

`List[PIL Image]` — mel spectrograms; `(float, List[np.ndarray])` — sample rate and raw audios
Generate random mel spectrogram from audio input and convert to audio.
encode

( images: List[PIL.Image.Image], steps: int = 50 ) → np.ndarray

Reverse step process: recover noisy image from generated image.
slerp

( x0: torch.Tensor, x1: torch.Tensor, alpha: float ) → torch.Tensor

Spherical Linear intERPolation.
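The formula can be sketched in NumPy (a local reimplementation for illustration, not the pipeline's method): interpolation follows the arc between the two tensors rather than the straight chord, so the norm is preserved for unit vectors.

```python
import numpy as np


def slerp(x0, x1, alpha):
    """Spherical linear interpolation between two tensors of the same shape."""
    theta = np.arccos(
        np.dot(x0.ravel(), x1.ravel()) / (np.linalg.norm(x0) * np.linalg.norm(x1))
    )
    return (
        np.sin((1 - alpha) * theta) * x0 + np.sin(alpha * theta) * x1
    ) / np.sin(theta)


a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
mid = slerp(a, b, 0.5)  # stays on the unit circle, unlike linear interpolation
```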
class Mel

( x_res: int = 256, y_res: int = 256, sample_rate: int = 22050, n_fft: int = 2048, hop_length: int = 512, top_db: int = 80, n_iter: int = 32 )
Parameters

- **x_res** (`int`) — x resolution of spectrogram (time)
- **y_res** (`int`) — y resolution of spectrogram (frequency bins)
- **sample_rate** (`int`) — sample rate of audio
- **n_fft** (`int`) — number of Fast Fourier Transforms
- **hop_length** (`int`) — hop length (a higher number is recommended for a y_res lower than 256)
- **top_db** (`int`) — loudest value in decibels
- **n_iter** (`int`) — number of iterations for Griffin-Lim mel inversion
audio_slice_to_image

( slice: int ) → PIL Image

Convert slice of audio to spectrogram.
image_to_audio

( image: PIL.Image.Image ) → audio (np.ndarray)

Converts spectrogram to audio.