{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Transformers installation\n",
    "! pip install transformers datasets evaluate accelerate\n",
    "# To install from source instead of the last release, comment the command above and uncomment the following one.\n",
    "# ! pip install git+https://github.com/huggingface/transformers.git"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Multimodal Generation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Multimodal (any-to-any) models are language models capable of processing diverse types of input data (e.g., text, images, audio, or video) and generating outputs in any of these modalities. Unlike traditional unimodal or fixed-modality models, they allow flexible combinations of input and output, enabling a single system to handle a wide range of tasks: from text-to-image generation to audio-to-text transcription, image captioning, video understanding, and so on. This task shares many similarities with image-text-to-text, but supports a wider range of input and output modalities.\n",
    "\n",
    "In this guide, we provide a brief overview of any-to-any models and show how to use them with Transformers for inference. Unlike Vision LLMs, which are typically limited to vision-and-language tasks, omni-modal models can accept any combination of modalities (e.g., text, images, audio, video) as input, and generate outputs in different modalities, such as text or images.\n",
    "\n",
    "Let’s begin by installing dependencies:\n",
    "\n",
    "```bash\n",
    "pip install -q transformers accelerate flash_attn\n",
    "```\n",
    "\n",
    "Let's initialize the model and the processor."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import AutoProcessor, AutoModelForMultimodalLM, infer_device\n",
    "import torch\n",
    "\n",
    "device = torch.device(infer_device())\n",
    "model = AutoModelForMultimodalLM.from_pretrained(\n",
    "    \"Qwen/Qwen2.5-Omni-3B\",\n",
    "    dtype=torch.bfloat16,\n",
    "    attn_implementation=\"flash_attention_2\",\n",
    ").to(device)\n",
    "\n",
    "processor = AutoProcessor.from_pretrained(\"Qwen/Qwen2.5-Omni-3B\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "These models typically include a [chat template](https://huggingface.co/docs/transformers/main/en/tasks/./chat_templating) to structure conversations across modalities. Inputs can mix images, text, audio, or other supported formats in a single turn. Outputs may also vary (e.g., text generation or audio generation), depending on the configuration.\n",
    "\n",
    "Below is an example providing a \"text + audio\" input and requesting a text response."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "messages = [\n",
    "    {\n",
    "        \"role\": \"user\",\n",
    "        \"content\": [\n",
    "            {\"type\": \"audio\", \"url\": \"https://huggingface.co/datasets/raushan-testing-hf/audio-test/resolve/main/f2641_0_throatclearing.wav\"},\n",
    "            {\"type\": \"text\", \"text\": \"What do you hear in this audio?\"},\n",
    "        ]\n",
    "    },\n",
    "]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We will now call the processors' [apply_chat_template()](https://huggingface.co/docs/transformers/main/en/main_classes/processors#transformers.ProcessorMixin.apply_chat_template) method to preprocess its output along with the image inputs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "inputs = processor.apply_chat_template(\n",
    "    messages,\n",
    "    tokenize=True,\n",
    "    return_dict=True,\n",
    "    return_tensors=\"pt\",\n",
    "    add_generation_prompt=True,\n",
    ")"
   ]
  },
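  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can inspect what the processor produced; for a text + audio conversation this typically includes token ids, an attention mask, and audio feature tensors (the exact key names depend on the processor):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Print the name and shape of every tensor returned by the processor\n",
    "for name, tensor in inputs.items():\n",
    "    print(name, tuple(tensor.shape))"
   ]
  },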
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can now pass the preprocessed inputs to the model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "with torch.no_grad():\n",
    "    generated_ids = model.generate(**inputs, max_new_tokens=100)\n",
    "generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)\n",
    "print(generated_texts)"
   ]
  },
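  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The decoded output above still contains the prompt. If you only want the model's reply, slice the prompt tokens off before decoding. A minimal sketch, assuming a single unpadded sequence:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Keep only the tokens generated after the prompt\n",
    "prompt_len = inputs[\"input_ids\"].shape[1]\n",
    "reply_ids = generated_ids[:, prompt_len:]\n",
    "print(processor.batch_decode(reply_ids, skip_special_tokens=True)[0])"
   ]
  },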
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Pipeline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The fastest way to get started is to use the [Pipeline](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.Pipeline) API. Specify the `\"any-to-any\"` task and the model you want to use."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import pipeline\n",
    "pipe = pipeline(\"any-to-any\", model=\"mistralai/Voxtral-Mini-3B-2507\")"
   ]
  },
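  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For larger checkpoints you can pass the usual loading options when memory is tight. A sketch, assuming the standard `pipeline()` loading kwargs apply to this task as well:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import torch\n",
    "\n",
    "# Load the checkpoint in bfloat16 and let accelerate place it across available devices\n",
    "pipe = pipeline(\n",
    "    \"any-to-any\",\n",
    "    model=\"mistralai/Voxtral-Mini-3B-2507\",\n",
    "    dtype=torch.bfloat16,\n",
    "    device_map=\"auto\",\n",
    ")"
   ]
  },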
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The example below uses chat templates to format the text inputs and uses audio modality as an multimodal data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "messages = [\n",
    "     {\n",
    "         \"role\": \"user\",\n",
    "         \"content\": [\n",
    "             {\n",
    "                 \"type\": \"audio\",\n",
    "                 \"url\": \"https://huggingface.co/datasets/raushan-testing-hf/audio-test/resolve/main/glass-breaking-151256.mp3\",\n",
    "             },\n",
    "             {\"type\": \"text\", \"text\": \"What do you hear in this audio?\"},\n",
    "         ],\n",
    "     },\n",
    " ]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Pass the chat template formatted text and image to [Pipeline](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.Pipeline) and set `return_full_text=False` to remove the input from the generated output."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "outputs = pipe(text=messages, max_new_tokens=20, return_full_text=False)\n",
    "outputs[0][\"generated_text\"]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Any-to-any pipeline also supports generating audio or images with any-to-any models. For that you need to set `generation_mode` parameter. Do not forget to set video sampling to the desired FPS, otherwise the whole video will be loaded without sampling. Here is an example code:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import soundfile as sf\n",
    "pipe = pipeline(\"any-to-any\", model=\"Qwen/Qwen2.5-Omni-3B\")\n",
    "messages = [\n",
    "    {\n",
    "        \"role\": \"user\",\n",
    "        \"content\": [\n",
    "            {\"type\": \"video\", \"path\": \"https://huggingface.co/datasets/raushan-testing-hf/videos-test/resolve/main/Cooking_cake.mp4\"},\n",
    "            {\"type\": \"text\", \"text\": \"Describe this video.\"},\n",
    "        ],\n",
    "    },\n",
    "]\n",
    "output = pipe(text=messages, fps=1, load_audio_from_video=True, max_new_tokens=20, generation_mode=\"audio\")\n",
    "sf.write(\"generated_audio.wav\", out[0][\"generated_audio\"])"
   ]
  }
 ],
 "metadata": {},
 "nbformat": 4,
 "nbformat_minor": 4
}
