{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "name": "Text2Motion_v1.01.05_en.ipynb",
      "provenance": [],
      "toc_visible": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    },
    "accelerator": "GPU"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "source": [
        "# Welcome to Text2Motion v1 for Colab! (Execution)"
      ],
      "metadata": {
        "id": "Fyy5ukQ1Snmq"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Beginning\n",
        "\n",
        "Operating status and maintenance information for this tool are posted on Twitter ([@miu200521358](https://twitter.com/miu200521358)).\n",
        "\n",
        "If you get an error or the tool does not start, first check the current distribution status there.\n",
        "\n",
        "Inquiries by reply or DM are also welcome."
      ],
      "metadata": {
        "id": "x0Du9XXeSsPi"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Contents\n",
        "\n",
        "This notebook prepares and runs Text2Motion.\n",
        "\n",
        "Notebooks have a few quirks, so please first familiarize yourself with them in \"[Preparation](https://colab.research.google.com/github/miu200521358/motion_trace_colab/blob/master/AutoTraceIntroduction_en.ipynb)\".\n",
        "\n",
        "Click the three-line menu icon at the top left of the screen to open the table of contents. (If it is already open, proceed to the next step.)\n",
        "\n",
        "![Table of contents](https://drive.google.com/uc?export=view&id=1HGk4sJmcPtMbMwcJOvE3aU1GjvKinwA_)\n",
        "\n",
        "Create a `text2motion` folder directly under Google Drive.\n",
        "\n",
        "Work through the notebook from top to bottom, following the steps one by one.\n",
        "\n",
        "Cells that need to be executed are numbered. Execute them in order, starting from ①.\n",
        "\n",
        "- **「①　Environment」**\n",
        "  - Build the environment for Text2Motion v1.\n",
        "- **「②　Text2Motion v1 Run」**\n",
        "  - Run Text2Motion v1.\n"
      ],
      "metadata": {
        "id": "CoaXwL-ASx6q"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Errors and warnings that may occur and how to deal with them"
      ],
      "metadata": {
        "id": "f-XzbSo-TgtU"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## GPU limit exceeded"
      ],
      "metadata": {
        "id": "aEQ4dKwATlsr"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "If you get an error like the one below, you have reached the upper limit of GPU usage.\n",
        "\n",
        "If you are on the free tier, you will be able to use the GPU again after waiting at least 24 hours.\n",
        "\n",
        "![Usage limit](https://drive.google.com/uc?export=view&id=1xxg5yM-wgNkAr1FboAM8DS2K-aqmtw5d)\n"
      ],
      "metadata": {
        "id": "Ma1e14jVToVi"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Environment crash"
      ],
      "metadata": {
        "id": "NrT6jPFBTq_x"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "If you get an error like the one below, the environment has crashed.\n",
        "\n",
        "Delete the session and rebuild the environment from scratch.\n",
        "\n",
        "![Crash](https://drive.google.com/uc?export=view&id=1qMoxfUggI_jDjrDifj7UgOrSmsEOMmHY)\n",
        "\n",
        "![Crash](https://drive.google.com/uc?export=view&id=1WFHmCzEkCLi8XnHyl7goicNQPL67RHhX)\n",
        "\n",
        "![Delete](https://drive.google.com/uc?export=view&id=12UaonO4UvI_HCnJI95od15_yagVIVsRD)\n",
        "\n",
        "![Delete](https://drive.google.com/uc?export=view&id=1smRW97KjP8fqSy3E5dtfWJHpyBtEMKca)\n"
      ],
      "metadata": {
        "id": "TYONV48qTtSf"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Automatic deletion of environments"
      ],
      "metadata": {
        "id": "n0N0LWYnTw0k"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "If you get the following error, the environment has already been removed by Colab, either because too much time has passed or because of a crash like the one above.\n",
        "\n",
        "Delete the session and rebuild the environment from scratch.\n",
        "\n",
        "![Google Drive integration](https://drive.google.com/uc?export=view&id=1Oa5SNwStqzR6qVMEzxg8PdLmO3FTpr1l)\n"
      ],
      "metadata": {
        "id": "ywNefafZTyy8"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## GPU unused warning"
      ],
      "metadata": {
        "id": "P-eZr-2ST1bk"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "The following warning may appear while you work, but you can safely ignore it and continue.\n",
        "\n",
        "(The GPU is used only when the AI actually runs, so this warning may appear while you are working in other cells.)\n",
        "\n",
        "![Warning](https://drive.google.com/uc?export=view&id=1mRW32urnPQ4LS4xrLEoPdp_XCqlq1HUF)\n"
      ],
      "metadata": {
        "id": "v28ncAfzT3db"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Remaining disk space warning"
      ],
      "metadata": {
        "id": "uH6cRFVgmfh4"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "If you get a warning like the one below, you can safely ignore it the first time it appears.\n",
        "\n",
        "If it appears a second time, the Colab environment's disk may fill up partway through processing, so split the input video at a convenient point and process it in parts.\n",
        "\n",
        "(This does not affect Google Drive.)"
      ],
      "metadata": {
        "id": "cUxpejEEmlKM"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "\n",
        "![Out of disk space](https://drive.google.com/uc?export=view&id=1EKt3nCK6ZYjgkNoflQNzSnW_0s4WnJYO)\n",
        "\n"
      ],
      "metadata": {
        "id": "Hae7pvkCmdps"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "# Terms of service"
      ],
      "metadata": {
        "id": "tUNmRfXodyvM"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "When using Text2Motion, or when publishing or distributing its results, please be sure to check the terms of use.\n",
        "\n",
        "The same applies to uses outside MMD, such as using the JSON data or applying the results in Unity.\n",
        "\n",
        "[Text2Motion v1 Terms of service](https://github.com/miu200521358/mmd-text-to-motion/wiki/03_en.Terms-of-service)\n",
        "\n",
        "[Text2Motion v1 Technology used](https://github.com/miu200521358/mmd-text-to-motion/wiki/02.%E4%BD%BF%E7%94%A8%E6%8A%80%E8%A1%93-(Technology-used))"
      ],
      "metadata": {
        "id": "UqHhDeUAUA3V"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "# ①　Environment"
      ],
      "metadata": {
        "id": "cGNYDeTFeFWW"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## ①-A　Notification sound download\n",
        "\n",
        "Download notification sounds from [Sound Effect Lab](https://soundeffect-lab.info/) so that a sound plays when a cell starts, completes, or fails.\n",
        "\n",
        "Please execute the 【①-A】 cell below.\n"
      ],
      "metadata": {
        "id": "c2wqFlb4eTe2"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "#@markdown ■■■■■■■■■■■■■■■■■■\n",
        "\n",
        "#@markdown 【①-A】　Notification sound download\n",
        "\n",
        "#@markdown ■■■■■■■■■■■■■■■■■■\n",
        "\n",
        "from enum import Enum\n",
        "\n",
        "class SoundType(Enum):\n",
        "    START = 0\n",
        "    SUCCESS = 1\n",
        "    FAIL = 2\n",
        "\n",
        "# ----------------------------------\n",
        "\n",
        "! wget --no-check-certificate -c \"https://soundeffect-lab.info/sound/anime/mp3/sceneswitch1.mp3\"\n",
        "! wget --no-check-certificate -c \"https://soundeffect-lab.info/sound/anime/mp3/incorrect1.mp3\"\n",
        "! wget --no-check-certificate -c \"https://soundeffect-lab.info/sound/anime/mp3/switch1.mp3\"\n",
        "\n",
        "from IPython.display import Audio, display\n",
        "import torch\n",
        "\n",
        "def play_sound(sound_type: SoundType, autoplay=True):\n",
        "    try:\n",
        "        if sound_type == SoundType.START:\n",
        "            file_name = \"/content/switch1.mp3\"\n",
        "        elif sound_type == SoundType.SUCCESS:\n",
        "            file_name = \"/content/sceneswitch1.mp3\"\n",
        "        else:\n",
        "            file_name = \"/content/incorrect1.mp3\"\n",
        "\n",
        "        display(Audio(file_name, autoplay=autoplay, normalize=False))\n",
        "    except Exception:\n",
        "        print(\"■■■■■■■■■■■■■■■\")\n",
        "        print(\"■　Failed to play sound effects\")\n",
        "        print(\"■■■■■■■■■■■■■■■\")\n",
        "\n",
        "# ----------------------------------\n",
        "\n",
        "exec_dict = {\n",
        "    '①-A': {'exec': False, 'premise': []},\n",
        "    '①-B': {'exec': False, 'premise': ['①-A', ]},\n",
        "    '①-C': {'exec': False, 'premise': ['①-A', '①-B', ]},\n",
        "    '①-D': {'exec': False, 'premise': ['①-A', '①-B', '①-C', ]},\n",
        "    '①-E': {'exec': False, 'premise': ['①-A', '①-B', '①-C', '①-D', ]},\n",
        "    '①-F': {'exec': False, 'premise': ['①-A', '①-B', '①-C', '①-D', '①-E', ]},\n",
        "    '②-A': {'exec': False, 'premise': ['①-A', '①-B', '①-C', '①-D', '①-E', '①-F', ]},\n",
        "    '②-B': {'exec': False, 'premise': ['①-A', '①-B', '①-C', '①-D', '①-E', '①-F', '②-A', ]},\n",
        "}\n",
        "\n",
        "import os\n",
        "os.environ[\"NVIDIA_VISIBLE_DEVICES\"] = \"all\"  # expose all GPUs to the runtime\n",
        "\n",
        "class IpyExit(SystemExit):\n",
        "    def __init__(self):\n",
        "        play_sound(SoundType.FAIL)\n",
        "        pass\n",
        "\n",
        "def check_exec_dict(cell_key: str):\n",
        "    if not torch.cuda.is_available():\n",
        "        print(\"■■■■■■■■■■■■■■■\")\n",
        "        print(\"■　** ERROR **\")\n",
        "        print(\"■　GPU is not enabled.\")\n",
        "        print(\"■　Change the runtime to GPU while referring to the preparation section.\")\n",
        "        print(\"■■■■■■■■■■■■■■■\")\n",
        "        raise IpyExit\n",
        "\n",
        "    if not exec_dict:\n",
        "        print(\"■■■■■■■■■■■■■■■\")\n",
        "        print(\"■　** ERROR **\")\n",
        "        print(\"■　Cell 【①-A】 may not have been executed.\")\n",
        "        print(\"■　Go back from the table of contents and execute.\")\n",
        "        print(\"■■■■■■■■■■■■■■■\")\n",
        "        raise IpyExit\n",
        "\n",
        "    for p in exec_dict[cell_key]['premise']:\n",
        "        # Check that every prerequisite cell has been executed\n",
        "        if not exec_dict[p]['exec']:\n",
        "            print(\"■■■■■■■■■■■■■■■\")\n",
        "            print(\"■　** ERROR **\")\n",
        "            print(f\"■　Cell【{p}】may not have been executed.\")\n",
        "            print(\"■　Go back from the table of contents and execute.\")\n",
        "            print(\"■■■■■■■■■■■■■■■\")\n",
        "            raise IpyExit\n",
        "\n",
        "    # All prerequisites have run, so this cell may proceed\n",
        "    play_sound(SoundType.START)\n",
        "    return True\n",
        "\n",
        "def finish_cell(cell_key: str):\n",
        "    play_sound(SoundType.SUCCESS)\n",
        "\n",
        "    print(\"■■■■■■■■■■■■■■■\")\n",
        "    print(\"■　** OK **\")\n",
        "    print(f\"■　Successfully executed Cell【{cell_key}】\")\n",
        "    print(\"■■■■■■■■■■■■■■■\")\n",
        "\n",
        "    exec_dict[cell_key]['exec'] = True\n",
        "\n",
        "# ----------------------------------\n",
        "\n",
        "# Check whether this cell may run\n",
        "ckey = \"①-A\"\n",
        "check_exec_dict(ckey)\n",
        "\n",
        "# ----------------------------------\n",
        "\n",
        "# Cell finished\n",
        "\n",
        "finish_cell(ckey)\n",
        "\n",
        "print(\"\")\n",
        "print(\"\")\n",
        "print(\"■ Start cell execution\")\n",
        "play_sound(SoundType.START, autoplay=False)\n",
        "\n",
        "print(\"\")\n",
        "print(\"\")\n",
        "print(\"■ Cell execution failure\")\n",
        "play_sound(SoundType.FAIL, autoplay=False)\n",
        "\n",
        "print(\"\")\n",
        "print(\"\")\n",
        "print(\"■ Cell execution successful\")\n",
        "play_sound(SoundType.SUCCESS, autoplay=False)"
      ],
      "metadata": {
        "cellView": "form",
        "id": "ENPQ2HkzeZdl"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## ①-B　Check GPU runtime\n",
        "\n",
        "From the menu, select \"Runtime\" > \"Change runtime type\" > \"GPU\".\n",
        "\n",
        "For detailed instructions, please refer to \"[Preparation](https://colab.research.google.com/github/miu200521358/motion_trace_colab/blob/master/AutoTraceIntroduction_en.ipynb)\".\n",
        "\n",
        "Once you have changed it, please execute the 【①-B】 cell below."
      ],
      "metadata": {
        "id": "WKqIyaMoe2wP"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "#@markdown ■■■■■■■■■■■■■■■■■■\n",
        "\n",
        "#@markdown 【①-B】　Check GPU runtime\n",
        "\n",
        "#@markdown ■■■■■■■■■■■■■■■■■■\n",
        "\n",
        "# Check whether this cell may run\n",
        "ckey = \"①-B\"\n",
        "check_exec_dict(ckey)\n",
        "\n",
        "# ----------------------------------\n",
        "! nvidia-smi\n",
        "\n",
        "import subprocess\n",
        "try:\n",
        "    subprocess.check_output(\"nvidia-smi\", shell=True)\n",
        "except Exception:\n",
        "    print(\"■■■■■■■■■■■■■■■\")\n",
        "    print(\"■　** ERROR **\")\n",
        "    print(\"■　Failed to execute the nvidia-smi command.\")\n",
        "    print(\"■　① If you have not yet configured the runtime to use a GPU\")\n",
        "    print(\"■　　Set the runtime to GPU while referring to the preparation section\")\n",
        "    print(\"■　　※Changing the runtime resets the environment, so start over from 【①-A】\")\n",
        "    print(\"■　② If the GPU usage limit has been reached and only the CPU is available\")\n",
        "    print(\"■　　On the free tier, you can use the GPU again after waiting 24 hours or more.\")\n",
        "    print(\"■■■■■■■■■■■■■■■\")\n",
        "\n",
        "# ----------------------------------\n",
        "\n",
        "# Cell finished\n",
        "finish_cell(ckey)"
      ],
      "metadata": {
        "cellView": "form",
        "id": "_dGYRo6Me7Df"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## ①-C　Link with Google Drive\n",
        "\n",
        "Please allow this Colab page to access your Google Drive for the following purpose:\n",
        "  - Saving the results generated by Text2Motion\n",
        "\n",
        "※ This tool touches nothing in your Google Drive outside the \"`text2motion`\" folder.\n",
        "\n",
        "Execute the 【①-C】 cell below and grant access from the pop-up window.\n",
        "\n",
        "(With recent versions of Colab, you no longer need to copy and paste an authorization key.)"
      ],
      "metadata": {
        "id": "WfRS49WpfC0a"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "#@markdown ■■■■■■■■■■■■■■■■■■\n",
        "\n",
        "#@markdown 【①-C】　Link with Google Drive\n",
        "\n",
        "#@markdown ■■■■■■■■■■■■■■■■■■\n",
        "\n",
        "# Check whether this cell may run\n",
        "ckey = \"①-C\"\n",
        "check_exec_dict(ckey)\n",
        "\n",
        "# ----------------------------------\n",
        "\n",
        "from google.colab import drive\n",
        "import os\n",
        "\n",
        "# Mount Google Drive\n",
        "drive.mount('/gdrive')\n",
        "\n",
        "# Base directory\n",
        "base_dir_path = \"/gdrive/My Drive/text2motion\"\n",
        "\n",
        "if os.path.exists(base_dir_path):\n",
        "    print(\"■■■■■■■■■■■■■■■\")\n",
        "    print(\"■　** OK **\")\n",
        "    print(\"■　Successfully linked to the text2motion folder.\")\n",
        "    print(\"■■■■■■■■■■■■■■■\")\n",
        "else:\n",
        "    print(\"■■■■■■■■■■■■■■■\")\n",
        "    print(\"■　** ERROR **\")\n",
        "    print(\"■　Could not find a text2motion folder directly under Google Drive.\")\n",
        "    print(\"■　Create a text2motion folder while referring to the preparation section.\")\n",
        "    print(\"■　※If the name was misspelled, create a new folder instead of renaming the existing one.\")\n",
        "    print(\"■　　Renames made after mounting Google Drive are often not detected.\")\n",
        "    print(\"■　※Because of a slight time lag, a newly created folder may not be detected immediately.\")\n",
        "    print(\"■　　Wait about 10 seconds and rerun the cell.\")\n",
        "    print(\"■■■■■■■■■■■■■■■\")\n",
        "    raise IpyExit\n",
        "\n",
        "# ----------------------------------\n",
        "\n",
        "# Cell finished\n",
        "finish_cell(ckey)"
      ],
      "metadata": {
        "cellView": "form",
        "id": "QkEaktdxfcqE"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## ①-D　Integration with Google SDK\n",
        "\n",
        "To download the data Text2Motion needs from miu's Google Drive, this cell integrates with the `GoogleSDK`.\n",
        "\n",
        "As with the 【①-C】 cell, grant access permission from the URL."
      ],
      "metadata": {
        "id": "TAPv-jy7fz4h"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "#@markdown ■■■■■■■■■■■■■■■■■■\n",
        "\n",
        "#@markdown 【①-D】　Integration with Google SDK\n",
        "\n",
        "#@markdown ■■■■■■■■■■■■■■■■■■\n",
        "\n",
        "# Check whether this cell may run\n",
        "ckey = \"①-D\"\n",
        "check_exec_dict(ckey)\n",
        "\n",
        "# ----------------------------------\n",
        "\n",
        "# Google Drive access library\n",
        "!pip install -U -q PyDrive\n",
        "\n",
        "from pydrive.auth import GoogleAuth\n",
        "from pydrive.drive import GoogleDrive\n",
        "from google.colab import auth\n",
        "from oauth2client.client import GoogleCredentials\n",
        "\n",
        "try:\n",
        "    auth.authenticate_user()\n",
        "    gauth = GoogleAuth()\n",
        "    gauth.credentials = GoogleCredentials.get_application_default()\n",
        "    drive = GoogleDrive(gauth)\n",
        "\n",
        "    print(\"■■■■■■■■■■■■■■■\")\n",
        "    print(\"■　** OK **\")\n",
        "    print(\"■　Successfully integrated with the Google SDK.\")\n",
        "    print(\"■■■■■■■■■■■■■■■\")\n",
        "\n",
        "except Exception as e:\n",
        "    print(e)\n",
        "    print(\"■■■■■■■■■■■■■■■\")\n",
        "    print(\"■　** ERROR **\")\n",
        "    print(\"■　Failed to integrate with the Google SDK.\")\n",
        "    print(\"■　Please share this notebook with the author.\")\n",
        "    print(\"■■■■■■■■■■■■■■■\")\n",
        "    raise IpyExit\n",
        "\n",
        "# ----------------------------------\n",
        "\n",
        "# Cell finished\n",
        "finish_cell(ckey)"
      ],
      "metadata": {
        "cellView": "form",
        "id": "P51Ag4QigW1q"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## ①-E　Code setup\n",
        "\n",
        "Build the code for Text2Motion.\n",
        "\n",
        "Execute the 【①-E】 cell below.\n",
        "\n",
        "A lot of messages will be printed; this takes about 5 minutes.\n",
        "\n",
        "**Because compilation runs partway through, the cell may appear to hang at times.**\n",
        "\n",
        "**Please do not be alarmed and interrupt the cell. If you simply wait, it will finish.**\n",
        "\n",
        "The string `ERROR` may appear, but it is most likely harmless for running Text2Motion, so please ignore it and continue."
      ],
      "metadata": {
        "id": "8hXO0YvKhLa_"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "#@markdown ■■■■■■■■■■■■■■■■■■\n",
        "\n",
        "#@markdown 【①-E】　Code setup\n",
        "\n",
        "#@markdown ■■■■■■■■■■■■■■■■■■\n",
        "\n",
        "# Check whether this cell may run\n",
        "ckey = \"①-E\"\n",
        "check_exec_dict(ckey)\n",
        "\n",
        "# ----------------------------\n",
        "\n",
        "import shutil\n",
        "\n",
        "if os.path.exists('./mmd-text-to-motion'):\n",
        "    shutil.rmtree('./mmd-text-to-motion')\n",
        "\n",
        "# Text2Motion v1 version tag\n",
        "version_tag = \"v1.00.02\"\n",
        "\n",
        "! git clone --recursive --depth 1 -b \"$version_tag\" \"https://github.com/miu200521358/mmd-text-to-motion.git\"\n",
        "\n",
        "! python -m spacy download en_core_web_sm\n",
        "! BEZIER_NO_EXTENSION=true BEZIER_INSTALL_PREFIX=true python -m pip install bezier==2021.2.12 --no-binary=bezier\n",
        "! pip install --upgrade -r \"./mmd-text-to-motion/requirements.txt\"\n",
        "\n",
        "! pip install moviepy==2.0.0.dev2\n",
        "! pip install imageio-ffmpeg\n",
        "! pip install google-cloud-secret-manager\n",
        "! pip install deepl\n",
        "# ----------------------------\n",
        "\n",
        "# Cell finished\n",
        "finish_cell(ckey)"
      ],
      "metadata": {
        "cellView": "form",
        "id": "Wm_DfN5bhaCl"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## ①-F　Data placement\n",
        "\n",
        "Place the data required by Text2Motion.\n",
        "\n",
        "Execute the 【①-F】 cell below.\n",
        "\n",
        "A lot of messages will be printed; this takes about 2 minutes."
      ],
      "metadata": {
        "id": "0wpzqPQ4ihVn"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "#@markdown ■■■■■■■■■■■■■■■■■■\n",
        "\n",
        "#@markdown 【①-F】　Data placement\n",
        "\n",
        "#@markdown ■■■■■■■■■■■■■■■■■■\n",
        "\n",
        "# Check whether this cell may run\n",
        "ckey = \"①-F\"\n",
        "check_exec_dict(ckey)\n",
        "\n",
        "# ----------------------------\n",
        "\n",
        "# motion-diffusion-model.zip\n",
        "downloaded = drive.CreateFile({'id': '16Ei9ycHMjfhbjgPIXZXpbkO5hT0PhPEh'})\n",
        "downloaded.GetContentFile('/content/mmd-text-to-motion/data/motion-diffusion-model.zip')\n",
        "! cd mmd-text-to-motion/data && unzip -o ./motion-diffusion-model.zip\n",
        "\n",
        "# ----------------------------\n",
        "\n",
        "# Cell finished\n",
        "finish_cell(ckey)"
      ],
      "metadata": {
        "cellView": "form",
        "id": "VGB11PgjisJ1"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "# ②　Text2Motion v1 Run"
      ],
      "metadata": {
        "id": "5ek2ZWYJkMqb"
      }
    },
    {
      "cell_type": "markdown",
      "source": [
        "## ②-A　Text2Motion v1 Run\n",
        "\n",
        "Set the prompt and the parameters for motion generation.\n",
        "\n",
        "Text2Motion v1 runs according to the specified parameters.\n",
        "\n",
        "Generating one 9-second motion takes about 3 minutes.\n",
        "\n",
        "After generation completes, the results are saved in the `text2motion` folder in Google Drive.\n",
        "\n",
        "  - Please write the prompt in Japanese or English\n",
        "      - If it is in Japanese, the tool translates it into English before generating the motion\n",
        "  - Once you have entered the settings, execute the 【②-A】 cell below."
      ],
      "metadata": {
        "id": "30ajJ1K3kW69"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "#@markdown ■■■■■■■■■■■■■■■■■■\n",
        "\n",
        "#@markdown 【②-A】　Motion generation\n",
        "\n",
        "#@markdown ■■■■■■■■■■■■■■■■■■\n",
        "\n",
        "# Check whether this cell may run\n",
        "ckey = \"②-A\"\n",
        "check_exec_dict(ckey)\n",
        "\n",
        "# ----------------------------\n",
        "\n",
        "\n",
        "#@markdown ### Prompt\n",
        "\n",
        "#@markdown Please enter a sentence in English that specifies what you want to do.\n",
        "\n",
        "p_prompt = \"Person who runs forward, jumps into the air, makes one turn, and lands\"  #@param {type: \"string\"}\n",
        "\n",
        "#@markdown ### Motion seconds\n",
        "\n",
        "#@markdown Length of the generated motion, in seconds\n",
        "\n",
        "p_seconds = 6.0 #@param {type:\"slider\", min:1, max:9.8, step: 0.1}\n",
        "\n",
        "#@markdown ### Number of generations\n",
        "\n",
        "#@markdown Even with the same prompt, the results will vary slightly on each run\n",
        "\n",
        "p_num_repetitions = 2 #@param {type:\"slider\", min:1, max:10, step: 1}\n",
        "\n",
        "#@markdown ### Seed\n",
        "\n",
        "#@markdown Seed for the internal random number generator. If 0, a seed is chosen randomly.\n",
        "\n",
        "p_seed = 0 #@param {type:\"slider\", min:0, max:1024, step: 1}\n",
        "\n",
        "if not p_prompt:\n",
        "    print(\"■■■■■■■■■■■■■■■\")\n",
        "    print(\"■　** ERROR **\")\n",
        "    print(\"■　No prompt is set.\")\n",
        "    print(\"■■■■■■■■■■■■■■■\")\n",
        "    raise IpyExit\n",
        "\n",
        "\n",
        "\n",
        "import datetime\n",
        "process_dir_name = f\"{datetime.datetime.now():%Y%m%d_%H%M%S}\"\n",
        "process_dir_path = f'/content/output/{process_dir_name}'\n",
        "os.makedirs(process_dir_path, exist_ok=True)\n",
        "\n",
        "# %env sets the variable for the whole session (a `! export` would only affect its own subshell)\n",
        "%env PYTHONIOENCODING=utf-8\n",
        "! cd /content/mmd-text-to-motion/src && python executor.py --seed $p_seed --seconds $p_seconds --num_repetitions $p_num_repetitions --text \"$p_prompt\" --translated_text \"$p_prompt\" --parent-dir \"$process_dir_path\" --process text2move,mix,motion --verbose 20 --log-mode 0 --lang en\n",
        "\n",
        "import os\n",
        "if os.path.exists(\"/content/mmd-text-to-motion/log/quit.log\"):\n",
        "    raise IpyExit\n",
        "\n",
        "# Copy the results to Google Drive\n",
        "output_path = os.path.join(base_dir_path, process_dir_name)\n",
        "os.makedirs(output_path, exist_ok=True)\n",
        "\n",
        "import shutil\n",
        "\n",
        "# mix(json)\n",
        "shutil.copytree(os.path.join(process_dir_path, \"02_personal\"), os.path.join(output_path, \"01_personal\"))\n",
        "\n",
        "# viz_pos.html\n",
        "shutil.copy(\"/content/mmd-text-to-motion/data/viz_pos.html\", os.path.join(output_path, \"viz_pos.html\"))\n",
        "\n",
        "# prompt.txt\n",
        "shutil.copy(os.path.join(process_dir_path, \"prompt.txt\"), os.path.join(output_path, \"prompt.txt\"))\n",
        "\n",
        "os.makedirs(os.path.join(output_path, \"trace_model\"))\n",
        "\n",
        "# pmx\n",
        "shutil.copy(\"/content/mmd-text-to-motion/data/pmx/trace_model.pmx\", os.path.join(output_path, \"trace_model\", \"trace_model.pmx\"))\n",
        "\n",
        "# pmx(tex)\n",
        "shutil.copytree(\"/content/mmd-text-to-motion/data/pmx/tex\", os.path.join(output_path, \"trace_model/tex\"))\n",
        "\n",
        "# readme\n",
        "shutil.copy(\"/content/mmd-text-to-motion/data/readme_en.txt\", os.path.join(output_path, \"readme.txt\"))\n",
        "\n",
        "import time\n",
        "# Give Google Drive a moment to register the copied files\n",
        "time.sleep(20)\n",
        "\n",
        "# ----------------------------\n",
        "\n",
        "# Cell finished\n",
        "finish_cell(ckey)"
      ],
      "metadata": {
        "cellView": "form",
        "id": "625x35Vaky9I"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "source": [
        "## ②-B　Check generated motion\n",
        "\n",
        "By executing the 【②-B】 cell below, you can play the motion videos generated in 【②-A】 on Colab.\n",
        "\n",
        "※ Executing this cell is optional."
      ],
      "metadata": {
        "id": "9HFSfSp5k7sK"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "#@markdown ■■■■■■■■■■■■■■■■■■\n",
        "\n",
        "#@markdown 【②-B】　Check generated motion\n",
        "\n",
        "#@markdown ■■■■■■■■■■■■■■■■■■\n",
        "\n",
        "# Check whether this cell may run\n",
        "ckey = \"②-B\"\n",
        "check_exec_dict(ckey)\n",
        "\n",
        "# ----------------------------\n",
        "\n",
        "from glob import glob\n",
        "from moviepy.editor import VideoFileClip, clips_array\n",
        "\n",
        "clips = []\n",
        "for video_path in glob(os.path.join(base_dir_path, process_dir_name, \"**\", \"*.avi\"), recursive=True):\n",
        "    clips.append(VideoFileClip(video_path).margin(10))\n",
        "\n",
        "clip = clips_array([clips])\n",
        "\n",
        "# ----------------------------\n",
        "# Cell finished\n",
        "finish_cell(ckey)\n",
        "\n",
        "# Must be displayed last, otherwise rendering fails\n",
        "clip.ipython_display(width=(400 * len(clips)))"
      ],
      "metadata": {
        "cellView": "form",
        "id": "K0SwxZOtlGJD"
      },
      "execution_count": null,
      "outputs": []
    }
  ]
}