{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Interactive Phi 3 Mini 4K Instruct Chatbot with Whisper\n",
        "\n",
        "### Introduction\n",
        "The interactive Phi 3 Mini 4K Instruct chatbot is a tool that lets users interact with the Microsoft Phi 3 Mini 4K Instruct demo using text or audio input. The chatbot can be used for tasks such as translation, weather updates, and general information gathering."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "Atl_WEmtR0Yd"
      },
      "outputs": [],
      "source": [
        "# Install the required Python libraries\n",
        "!pip install accelerate\n",
        "!pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118\n",
        "!FLASH_ATTENTION_SKIP_CUDA_BUILD=TRUE pip install flash-attn --no-build-isolation\n",
        "!pip install transformers\n",
        "!pip install wheel\n",
        "!pip install gradio\n",
        "!pip install pydub==0.25.1\n",
        "!pip install edge-tts\n",
        "!pip install openai-whisper==20231117\n",
        "!pip install ffmpeg==1.4\n",
        "# from IPython.display import clear_output\n",
        "# clear_output()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Check whether CUDA support is available\n",
        "# Output True  = CUDA is available\n",
        "# Output False = no CUDA (CUDA is required to run the model on a GPU)\n",
        "import os \n",
        "import torch\n",
        "print(torch.cuda.is_available())\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MKAUp20H4ZXl"
      },
      "source": [
        "[Create your Hugging Face access token](https://huggingface.co/settings/tokens)\n",
        "\n",
        "1. Create a new token\n",
        "2. Give it a name\n",
        "3. Select the Write permission\n",
        "4. Copy the token and store it somewhere safe"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The following Python code performs two main tasks: importing the `os` module and setting an environment variable.\n",
        "\n",
        "1. Importing the `os` module:\n",
        "   - The `os` module in Python provides a way to interact with the operating system. It lets you perform OS-related tasks such as accessing environment variables and working with files and directories.\n",
        "   - The `import` statement makes the `os` module's functionality available to the current script.\n",
        "\n",
        "2. Setting an environment variable:\n",
        "   - An environment variable is a value that programs running on the operating system can access. It is a way to store configuration settings or other information that multiple programs can share.\n",
        "   - Here, the `os.environ` mapping is used to set a new environment variable. The key is `'HF_TOKEN'` and the value comes from the `HUGGINGFACE_TOKEN` variable.\n",
        "   - `HUGGINGFACE_TOKEN` is assigned using the `#@param` syntax, which Colab notebooks use to collect user input directly in the notebook interface.\n",
        "   - Once `'HF_TOKEN'` is set, it can be read by the rest of the program and by processes spawned from it.\n",
        "\n",
        "In short, this code imports the `os` module and sets an environment variable named `'HF_TOKEN'` to the value provided in `HUGGINGFACE_TOKEN`."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "N5r2ikbwR68c"
      },
      "outputs": [],
      "source": [
        "import os\n",
        "# Set the Hugging Face token and\n",
        "# add it to the environment variables\n",
        "HUGGINGFACE_TOKEN = \"Enter Hugging Face Key\" #@param {type:\"string\"}\n",
        "os.environ['HF_TOKEN'] = HUGGINGFACE_TOKEN"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "nmXm0dxuRinA"
      },
      "outputs": [],
      "source": [
        "# Download Phi-3-mini-4k-instruct model & Whisper Tiny\n",
        "import torch\n",
        "from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\n",
        "\n",
        "torch.random.manual_seed(0)\n",
        "\n",
        "model = AutoModelForCausalLM.from_pretrained(\n",
        "    \"microsoft/Phi-3-mini-4k-instruct\",\n",
        "    device_map=\"cuda\",\n",
        "    torch_dtype=\"auto\",\n",
        "    trust_remote_code=True,\n",
        ")\n",
        "tokenizer = AutoTokenizer.from_pretrained(\"microsoft/Phi-3-mini-4k-instruct\")\n",
        "\n",
        "# Whisper for speech-to-text\n",
        "import whisper\n",
        "select_model =\"tiny\" # ['tiny', 'base']\n",
        "whisper_model = whisper.load_model(select_model)\n",
        "\n",
        "#from IPython.display import clear_output\n",
        "#clear_output()"
      ]
    },
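    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The Gradio app at the end of this notebook calls a `phi_demo()` helper that is never defined elsewhere in the notebook. The cell below is a minimal sketch of such a helper built on the model and tokenizer loaded above; the generation settings (`max_new_tokens`, `do_sample`) are illustrative assumptions, not values from the original demo."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Minimal sketch of the phi_demo() chat helper used by the Gradio app below.\n",
        "# The generation settings here are assumptions; tune them as needed.\n",
        "pipe = pipeline(\"text-generation\", model=model, tokenizer=tokenizer)\n",
        "\n",
        "def phi_demo(message):\n",
        "    # Phi-3 expects a chat template; the pipeline applies it to a messages list.\n",
        "    messages = [{\"role\": \"user\", \"content\": message}]\n",
        "    output = pipe(\n",
        "        messages,\n",
        "        max_new_tokens=256,\n",
        "        do_sample=False,\n",
        "        return_full_text=False,\n",
        "    )\n",
        "    return output[0][\"generated_text\"]"
      ]
    },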
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Text-to-speech (TTS) using the Edge TTS service. The relevant functions are:\n",
        "\n",
        "1. `calculate_rate_string(input_value)`: Computes the rate string for the TTS voice. The input value is the desired speaking speed, where 1 means normal speed. The function subtracts 1 from the input, multiplies by 100, and picks the sign based on whether the input is at least 1, returning a string of the form \"{sign}{rate}\".\n",
        "\n",
        "2. `make_chunks(input_text, language)`: Splits the input text into chunks according to language-specific rules. In this implementation only English is supported: the text is split on each period (\".\"), leading and trailing whitespace is stripped, a period is appended back to each chunk, and the filtered list of chunks is returned.\n",
        "\n",
        "3. `tts_file_name(text)`: Generates a file name for the TTS audio file from the input text. It strips a trailing period if present, lowercases the text, strips leading and trailing whitespace, and replaces spaces with underscores. The text is then truncated to at most 25 characters, or replaced with the placeholder \"empty\" if it is empty. Finally, a random string generated with the `uuid` module is combined with the truncated text to produce a file name of the form \"/content/edge_tts_voice/{truncated_text}_{random_string}.mp3\".\n",
        "\n",
        "4. `merge_audio_files(audio_paths, output_path)`: Merges multiple audio files into one. It takes a list of audio file paths and an output path, initializes an empty `AudioSegment` named `merged_audio`, loads each file with the `AudioSegment.from_file()` method from the `pydub` library, appends it to `merged_audio`, and finally exports the merged audio to the output path in MP3 format.\n",
        "\n",
        "5. `edge_free_tts(chunks_list, speed, voice_name, save_path)`: Performs the TTS operation via the Edge TTS service. It takes a list of text chunks, a speed, a voice name, and a save path. If there is more than one chunk, it creates a directory for the per-chunk audio files, then iterates over the chunks, building an edge-tts command from `calculate_rate_string()`, the voice name, and the chunk text, and executing it with `os.system()`. Each generated audio path is appended to a list, and after all chunks are processed the files are merged with `merge_audio_files()` and saved to the save path. If there is only one chunk, the edge-tts command writes the audio directly to the save path. The function returns the save path of the generated audio file.\n",
        "\n",
        "6. `random_audio_name_generate()`: Generates a random audio file name using the `uuid` module: it creates a random UUID, converts it to a string, takes the first 8 characters, appends the \".mp3\" extension, and returns the result.\n",
        "\n",
        "7. `talk(input_text)`: The main entry point for the TTS operation. It checks whether the input text is a long sentence (600 characters or more). Based on the length and the value of the `translate_text_flag` variable, it determines the language and builds the list of text chunks with `make_chunks()`. It then builds a save path for the audio file using `random_audio_name_generate()` and calls `edge_free_tts()` to perform the TTS operation, returning the save path of the generated audio file.\n",
        "\n",
        "Overall, these functions work together to split the input text into chunks, generate a file name, run TTS through the Edge TTS service, and merge the per-chunk audio files into a single file."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 93
        },
        "id": "Mv4WVhNUz4IL",
        "outputId": "7f177f73-3eb1-4d7c-d5e9-1e7cabe32f63"
      },
      "outputs": [],
      "source": [
        "#@title Edge TTS\n",
        "def calculate_rate_string(input_value):\n",
        "    rate = (input_value - 1) * 100\n",
        "    sign = '+' if input_value >= 1 else '-'\n",
        "    return f\"{sign}{abs(int(rate))}\"\n",
        "\n",
        "\n",
        "def make_chunks(input_text, language):\n",
        "    # Only English chunking is implemented; split on sentence-ending periods.\n",
        "    if language == \"English\":\n",
        "      temp_list = input_text.strip().split(\".\")\n",
        "      filtered_list = [element.strip() + '.' for element in temp_list[:-1] if element.strip() and element.strip() not in (\"'\", '\"')]\n",
        "      if temp_list[-1].strip():\n",
        "          filtered_list.append(temp_list[-1].strip())\n",
        "      return filtered_list\n",
        "    # Fallback for unsupported languages: return the whole text as one chunk.\n",
        "    return [input_text.strip()]\n",
        "\n",
        "\n",
        "import re\n",
        "import uuid\n",
        "def tts_file_name(text):\n",
        "    if text.endswith(\".\"):\n",
        "        text = text[:-1]\n",
        "    text = text.lower()\n",
        "    text = text.strip()\n",
        "    text = text.replace(\" \",\"_\")\n",
        "    truncated_text = text[:25] if len(text) > 25 else text if len(text) > 0 else \"empty\"\n",
        "    random_string = uuid.uuid4().hex[:8].upper()\n",
        "    file_name = f\"/content/edge_tts_voice/{truncated_text}_{random_string}.mp3\"\n",
        "    return file_name\n",
        "\n",
        "\n",
        "from pydub import AudioSegment\n",
        "import shutil\n",
        "import os\n",
        "def merge_audio_files(audio_paths, output_path):\n",
        "    # Create an empty AudioSegment\n",
        "    merged_audio = AudioSegment.silent(duration=0)\n",
        "\n",
        "    # Iterate over each audio file path\n",
        "    for audio_path in audio_paths:\n",
        "        # Load the audio file with pydub\n",
        "        audio = AudioSegment.from_file(audio_path)\n",
        "\n",
        "        # Append the current audio file to merged_audio\n",
        "        merged_audio += audio\n",
        "\n",
        "    # Export the merged audio to the given output path\n",
        "    merged_audio.export(output_path, format=\"mp3\")\n",
        "\n",
        "def edge_free_tts(chunks_list,speed,voice_name,save_path):\n",
        "  # print(chunks_list)\n",
        "  if len(chunks_list)>1:\n",
        "    chunk_audio_list=[]\n",
        "    if os.path.exists(\"/content/edge_tts_voice\"):\n",
        "      shutil.rmtree(\"/content/edge_tts_voice\")\n",
        "    os.mkdir(\"/content/edge_tts_voice\")\n",
        "    k=1\n",
        "    for i in chunks_list:\n",
        "      print(i)\n",
        "      edge_command=f'edge-tts  --rate={calculate_rate_string(speed)}% --voice {voice_name} --text \"{i}\" --write-media /content/edge_tts_voice/{k}.mp3'\n",
        "      print(edge_command)\n",
        "      var1=os.system(edge_command)\n",
        "      if var1==0:\n",
        "        pass\n",
        "      else:\n",
        "        print(f\"Failed: {i}\")\n",
        "      chunk_audio_list.append(f\"/content/edge_tts_voice/{k}.mp3\")\n",
        "      k+=1\n",
        "    # print(chunk_audio_list)\n",
        "    merge_audio_files(chunk_audio_list, save_path)\n",
        "  else:\n",
        "    edge_command=f'edge-tts  --rate={calculate_rate_string(speed)}% --voice {voice_name} --text \"{chunks_list[0]}\" --write-media {save_path}'\n",
        "    print(edge_command)\n",
        "    var2=os.system(edge_command)\n",
        "    if var2==0:\n",
        "      pass\n",
        "    else:\n",
        "      print(f\"Failed: {chunks_list[0]}\")\n",
        "  return save_path\n",
        "\n",
        "# Update the text variable below with the text you want to convert to speech\n",
        "text = 'This is Microsoft Phi 3 mini 4k instruct Demo'  # @param {type: \"string\"}\n",
        "Language = \"English\" # @param ['English']\n",
        "# Switch the voice gender between Male and Female, then pick the voice you want to use\n",
        "Gender = \"Female\"# @param ['Male', 'Female']\n",
        "female_voice=\"en-US-AriaNeural\"# @param[\"en-US-AriaNeural\",'zh-CN-XiaoxiaoNeural','zh-CN-XiaoyiNeural']\n",
        "speed = 1  # @param {type: \"number\"}\n",
        "translate_text_flag  = False\n",
        "if len(text)>=600:\n",
        "  long_sentence = True\n",
        "else:\n",
        "  long_sentence = False\n",
        "\n",
        "# long_sentence = False # @param {type:\"boolean\"}\n",
        "save_path = ''  # @param {type: \"string\"}\n",
        "if len(save_path)==0:\n",
        "  save_path=tts_file_name(text)\n",
        "if Language == \"English\" :\n",
        "  if Gender==\"Male\":\n",
        "    voice_name=\"en-US-ChristopherNeural\"\n",
        "  if Gender==\"Female\":\n",
        "    voice_name=female_voice\n",
        "    # voice_name=\"en-US-AriaNeural\"\n",
        "\n",
        "\n",
        "if translate_text_flag:\n",
        "  input_text=text\n",
        "  # input_text=translate_text(text, Language)\n",
        "  # print(\"Translating\")\n",
        "else:\n",
        "  input_text=text\n",
        "if long_sentence==True and translate_text_flag==True:\n",
        "  chunks_list=make_chunks(input_text,Language)\n",
        "elif long_sentence==True and translate_text_flag==False:\n",
        "  chunks_list=make_chunks(input_text,\"English\")\n",
        "else:\n",
        "  chunks_list=[input_text]\n",
        "# print(chunks_list)\n",
        "# edge_save_path=edge_free_tts(chunks_list,speed,voice_name,save_path)\n",
        "# from IPython.display import clear_output\n",
        "# clear_output()\n",
        "# from IPython.display import Audio\n",
        "# Audio(edge_save_path, autoplay=True)\n",
        "\n",
        "from IPython.display import clear_output\n",
        "from IPython.display import Audio\n",
        "if not os.path.exists(\"/content/audio\"):\n",
        "    os.mkdir(\"/content/audio\")\n",
        "import uuid\n",
        "def random_audio_name_generate():\n",
        "  random_uuid = uuid.uuid4()\n",
        "  audio_extension = \".mp3\"\n",
        "  random_audio_name = str(random_uuid)[:8] + audio_extension\n",
        "  return random_audio_name\n",
        "def talk(input_text):\n",
        "  global translate_text_flag,Language,speed,voice_name\n",
        "  if len(input_text)>=600:\n",
        "    long_sentence = True\n",
        "  else:\n",
        "    long_sentence = False\n",
        "\n",
        "  if long_sentence==True and translate_text_flag==True:\n",
        "    chunks_list=make_chunks(input_text,Language)\n",
        "  elif long_sentence==True and translate_text_flag==False:\n",
        "    chunks_list=make_chunks(input_text,\"English\")\n",
        "  else:\n",
        "    chunks_list=[input_text]\n",
        "  save_path=\"/content/audio/\"+random_audio_name_generate()\n",
        "  edge_save_path=edge_free_tts(chunks_list,speed,voice_name,save_path)\n",
        "  return edge_save_path\n",
        "\n",
        "\n",
        "edge_save_path=talk(text)\n",
        "Audio(edge_save_path, autoplay=True)"
      ]
    },
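    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check, the two pure helper functions above can be exercised directly. This illustrative cell is not part of the original demo; speeds such as 1.5 and 0.5 are used because the rate arithmetic is exact for those inputs."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "# Illustrative sanity checks for the pure helpers defined above.\n",
        "print(calculate_rate_string(1.5))  # +50 -> 50% faster than normal speed\n",
        "print(calculate_rate_string(0.5))  # -50 -> half speed\n",
        "print(make_chunks(\"Hello there. How are you.\", \"English\"))\n",
        "# ['Hello there.', 'How are you.']"
      ]
    },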
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "This cell implements three functions, `convert_to_text`, `run_text_prompt`, and `run_audio_prompt`, and wires them into a Gradio interface.\n",
        "\n",
        "The `convert_to_text` function takes an audio path as input and transcribes the audio to text with the `whisper_model` loaded earlier. It first checks whether the `gpu` flag is set to `True`. If so, it calls `whisper_model.transcribe` with `word_timestamps=True`, `fp16=True`, `language='English'`, and `task='translate'`; if the flag is `False`, it uses `fp16=False` instead. The raw result is written to a file named 'scan.txt', and the transcribed text is returned.\n",
        "\n",
        "The `run_text_prompt` function takes a message and a chat history as input. It generates a chatbot response from the input message with the `phi_demo` function, then passes the response to the `talk` function, which converts it into an audio file and returns the file path. The audio is played with the `display` function from the `IPython.display` module, creating an `Audio` object with `autoplay=True` so playback starts automatically. The chat history is updated with the input message and the generated response, and the function returns an empty string together with the updated history.\n",
        "\n",
        "The `run_audio_prompt` function takes a recorded audio clip and the chat history. If no audio was provided it returns the history unchanged; otherwise it transcribes the clip with `convert_to_text` and forwards the transcription to `run_text_prompt`.\n",
        "\n",
        "`Audio` is the audio-player class from `IPython.display`, used to embed an audio player in the notebook. It accepts parameters such as `data`, `filename`, `url`, `embed`, `rate`, `autoplay`, and `normalize`. The `data` parameter can be a numpy array, a list of samples, a string naming a file or URL, or raw PCM data; `filename` specifies a local file to load audio from, and `url` specifies a URL to download it from. The `embed` parameter controls whether the audio is embedded as a data URI or referenced from its original source, `rate` sets the sample rate, `autoplay` controls whether playback starts automatically, and `normalize` controls whether the samples are rescaled to the maximum possible range.\n",
        "\n",
        "Overall, these functions transcribe audio to text, generate spoken responses from the chatbot, and display and play the audio inside the notebook environment."
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "0e6aTA6mk7Gi",
        "outputId": "4c4825c9-f1ef-4d9e-d294-83d67248e073"
      },
      "outputs": [],
      "source": [
        "#@title Run gradio app\n",
        "def convert_to_text(audio_path):\n",
        "  gpu=True\n",
        "  if gpu:\n",
        "    result = whisper_model.transcribe(audio_path,word_timestamps=True,fp16=True,language='English',task='translate')\n",
        "  else:\n",
        "    result = whisper_model.transcribe(audio_path,word_timestamps=True,fp16=False,language='English',task='translate')\n",
        "  with open('scan.txt', 'w') as file:\n",
        "    file.write(str(result))\n",
        "  return result[\"text\"]\n",
        "\n",
        "\n",
        "import gradio as gr\n",
        "from IPython.display import Audio, display\n",
        "def run_text_prompt(message, chat_history):\n",
        "    bot_message = phi_demo(message)\n",
        "    edge_save_path=talk(bot_message)\n",
        "    # print(edge_save_path)\n",
        "    display(Audio(edge_save_path, autoplay=True))\n",
        "\n",
        "    chat_history.append((message, bot_message))\n",
        "    return \"\", chat_history\n",
        "\n",
        "\n",
        "def run_audio_prompt(audio, chat_history):\n",
        "    if audio is None:\n",
        "        return None, chat_history\n",
        "    print(audio)\n",
        "    message_transcription = convert_to_text(audio)\n",
        "    _, chat_history = run_text_prompt(message_transcription, chat_history)\n",
        "    return None, chat_history\n",
        "\n",
        "\n",
        "with gr.Blocks() as demo:\n",
        "    chatbot = gr.Chatbot(label=\"Chat with Phi 3 mini 4k instruct\")\n",
        "\n",
        "    msg = gr.Textbox(label=\"Ask anything\")\n",
        "    msg.submit(run_text_prompt, [msg, chatbot], [msg, chatbot])\n",
        "\n",
        "    with gr.Row():\n",
        "        audio = gr.Audio(sources=\"microphone\", type=\"filepath\")\n",
        "\n",
        "        send_audio_button = gr.Button(\"Send Audio\", interactive=True)\n",
        "        send_audio_button.click(run_audio_prompt, [audio, chatbot], [audio, chatbot])\n",
        "\n",
        "demo.launch(share=True,debug=True)"
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "gpuType": "T4",
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.11.9"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
