{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "cellView": "form",
        "id": "VxPuPR0j5Gs7"
      },
      "outputs": [],
      "source": [
        "# ------------------------------------------------------------------------------\n",
        "#     Copyright 2022 Google LLC. All Rights Reserved.\n",
        "#\n",
        "#     Licensed under the Apache License, Version 2.0 (the \"License\");\n",
        "#     you may not use this file except in compliance with the License.\n",
        "#     You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0\n",
        "#\n",
        "#     Unless required by applicable law or agreed to in writing, software\n",
        "#     distributed under the License is distributed on an \"AS IS\" BASIS,\n",
        "#     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
        "#     See the License for the specific language governing permissions and\n",
        "#     limitations under the License.\n",
        "# ------------------------------------------------------------------------------\n",
        "\n",
        "\n",
        "# ------------------------------------------------------------------------------\n",
        "# User Interface\n",
        "# ------------------------------------------------------------------------------\n",
        "\n",
        "#@markdown #  Train your own DDSP-VST Model \n",
        "#@markdown 🎻🎺🎸🎵 [g.co/magenta/train-ddsp-vst](https://g.co/magenta/train-ddsp-vst)\n",
        "\n",
        "#@markdown \u003cbr/\u003e \n",
        "\n",
        "#@markdown ## Instructions\n",
        "\n",
        "#@markdown * Create a folder in Google Drive with your training audio (`.wav` or `.mp3`)\n",
        "\n",
        "Name = 'My Instrument' #@param {type:\"string\"}\n",
        "Name = Name.replace(' ', '_')\n",
        "\n",
        "\n",
        "#@markdown * Press the ▶️ button in the upper left!\n",
        "\n",
        "#@markdown * Login to your Google account when asked\n",
        "\n",
        "#@markdown *  Select your folder with the file chooser below when asked\n",
        "\n",
        "#@markdown *  Wait (with this window open) for training to finish and download the model\n",
        "\n",
        "#@markdown *  If something breaks, resume training by refreshing this page, pressing ▶️, and choosing the same folder\n",
        "\n",
        "\n",
        "\n",
        "#@markdown \u003cbr/\u003e\n",
        "\n",
        "#@markdown \u003cbr/\u003e\n",
        "\n",
        "#@markdown ## Data\n",
        "#@markdown Custom models can train on as little as 10 minutes of audio (`.wav` or `.mp3`). You'll get the best results from \"monophonic\" (only one note at a time) audio from a single recording session (same mic, same reverb). All of your data is private, used locally, and erased as soon as your Colab session ends.\n",
        "\n",
        "#@markdown We recommend using Google Drive to load data faster and save your model during training. Just create a folder on your Drive with your audio files in it, and select that folder. If you don't use Drive, you can still upload audio through the browser (slower) and download the final trained model.\n",
        "\n",
        "\n",
        "#@markdown ## Training\n",
        "#@markdown Training typically takes ~2-3 hours with free Colab, and less than an hour with Colab Pro+. Free Colab can sometimes disconnect before models finish training, but there are some unofficial [ways around this](https://stackoverflow.com/questions/57113226/how-to-prevent-google-colab-from-disconnecting). If you do get disconnected, don't worry: just press play again and choose the same folder, and training will resume where it left off.\n",
        "\n",
        "\n",
        "\n",
        "\n",
        "#@markdown ## Export\n",
        "\n",
        "#@markdown After training, the Colab should automatically export, zip, and download your model folder. To use it, just unzip and drop the full folder into the plugin's model folder (Mac: `~/Documents/Magenta/DDSP/Models`, which you can also find from inside the plugin).\n",
        "\n",
        "#@markdown If it doesn't automatically download, you can also find it in your training folder (`ddsp-training-{date-time}/{Name}`). Also, you'll likely see a bunch of warnings like `Value in checkpoint could not be found in the restored object`; don't worry, that's normal :).\n",
        "\n",
        "\n",
        "#@markdown \u003cbr/\u003e \u003cbr/\u003e\n",
        "#@markdown ## Advanced Options\n",
        "\n",
        "##@markdown \u003ca href=\"https://colab.research.google.com/github/magenta/ddsp/blob/main/ddsp/colab/demos/Train_VST.ipynb\" target=\"_parent\"\u003e\u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/\u003e\u003c/a\u003e\n",
        "\n",
        "#@markdown \u003cbr/\u003e\n",
        "#@markdown Training usually produces good results between 30k and 50k steps, but your results may vary depending on your audio files/instrument. Too few steps often makes the model sound bland/generic; too many steps can lead to more \"sputtering\" and big volume fluctuations.\n",
        "\n",
        "Training_Steps = 30000 #@param {type:\"integer\"}\n",
        "\n",
        "#@markdown \u003cbr/\u003e\n",
        "#@markdown Ignore previous checkpoints in the folder and start a fresh run from step 0.\n",
        "\n",
        "Ignore_Previous = False #@param {type:\"boolean\"}\n",
        "\n",
        "\n",
        "#@markdown \u003cbr/\u003e\n",
        "#@markdown Use Google Drive for training? Otherwise, audio is uploaded through the browser, which is much slower.\n",
        "Google_Drive = True #@param {type:\"boolean\"}\n",
        "\n",
        "\n",
        "\n",
        "# Sample_Rate = '16kHz'  #@param ['16kHz', '32kHz', '48kHz']\n",
        "# Sample_Rate = {'16kHz': 16000, '32kHz': 32000, '48kHz': 48000}[Sample_Rate]\n",
        "# Model_Gin_File = 'models/vst/vst.gin'\n",
        "\n",
        "\n",
        "#@markdown --- \n",
        "#@markdown \u003csub\u003e This notebook sends anonymous usage data (e.g. training time) to Google Analytics to help improve/debug training. All audio and model information is private, and not sent or stored. For more information, see [Google's privacy policy](https://policies.google.com/privacy). \u003c/sub\u003e\n",
        "\n",
        "\n",
        "\n",
        "\n",
        "# ------------------------------------------------------------------------------\n",
        "# Imports (not DDSP Dependent)\n",
        "# ------------------------------------------------------------------------------\n",
        "# Suppress warnings that obscure output.\n",
        "import warnings\n",
        "warnings.filterwarnings(\"ignore\")\n",
        "\n",
        "import datetime\n",
        "import glob\n",
        "import os\n",
        "import shutil\n",
        "import time\n",
        "import IPython\n",
        "import json\n",
        "import subprocess\n",
        "\n",
        "from google.colab import drive\n",
        "import tensorflow as tf\n",
        "\n",
        "!pip install ipyfilechooser==0.6.0 \u0026\u003e /dev/null\n",
        "from ipyfilechooser import FileChooser\n",
        "\n",
        "# ------------------------------------------------------------------------------\n",
        "# Logging\n",
        "# ------------------------------------------------------------------------------\n",
        "# The below functions (load_gtag and log_event) handle Google Analytics event\n",
        "# logging. The logging is anonymous and stores only very basic statistics of the\n",
        "# training (e.g. whether it completed, how long it took) to help debug\n",
        "# and improve the training experience.\n",
        "# No data or audio is stored or transferred. Everything happens locally to this\n",
        "# colab instance (and your google drive), and is deleted once the browser \n",
        "# window is closed.\n",
        "\n",
        "\n",
        "def load_gtag():\n",
        "  \"\"\"Loads gtag.js.\"\"\"\n",
        "  # Note: gtag.js MUST be loaded in the same cell execution as the one doing\n",
        "  # synthesis. It does NOT persist across cell executions!\n",
        "  html_code = '''\n",
        "\u003c!-- Global site tag (gtag.js) - Google Analytics --\u003e\n",
        "\u003cscript async src=\"https://www.googletagmanager.com/gtag/js?id=G-ZKDC5WXJQN\"\u003e\u003c/script\u003e\n",
        "\u003cscript\u003e\n",
        "  window.dataLayer = window.dataLayer || [];\n",
        "  function gtag(){dataLayer.push(arguments);}\n",
        "  gtag('js', new Date());\n",
        "  gtag('config', 'G-ZKDC5WXJQN',\n",
        "       {'referrer': document.referrer.split('?')[0],\n",
        "        'anonymize_ip': true,\n",
        "        'page_title': '',\n",
        "        'page_referrer': '',\n",
        "        'cookie_prefix': 'magenta',\n",
        "        'cookie_domain': 'auto',\n",
        "        'cookie_expires': 0,\n",
        "        'cookie_flags': 'SameSite=None;Secure'});\n",
        "\u003c/script\u003e\n",
        "'''\n",
        "  IPython.display.display(IPython.display.HTML(html_code))\n",
        "\n",
        "def log_event(event_name, event_details):\n",
        "  \"\"\"Log event with name and details dictionary.\"\"\"\n",
        "  details_json = json.dumps(event_details)\n",
        "  js_string = \"gtag('event', '%s', %s);\" % (event_name, details_json)\n",
        "  IPython.display.display(IPython.display.Javascript(js_string))\n",
        "\n",
        "load_gtag()\n",
        "\n",
        "\n",
        "# ------------------------------------------------------------------------------\n",
        "# Functions\n",
        "# ------------------------------------------------------------------------------\n",
        "def directory_has_files(target_dir):\n",
        "  n_files = len(glob.glob(os.path.join(target_dir, '*')))\n",
        "  return n_files \u003e 0\n",
        "\n",
        "\n",
        "def get_audio_files(drive_dir, audio_dir):\n",
        "  if drive_dir:\n",
        "    mp3_files = glob.glob(os.path.join(drive_dir, '*.mp3'))\n",
        "    wav_files = glob.glob(os.path.join(drive_dir, '*.wav'))\n",
        "    audio_paths = mp3_files + wav_files\n",
        "    if len(audio_paths) \u003c 1:\n",
        "      raise FileNotFoundError(\"Sorry, it seems that there aren't any MP3 or \"\n",
        "                              f\"WAV files in your folder ({drive_dir}). Try \"\n",
        "                              \"running again and choose a different folder.\")\n",
        "  else:\n",
        "    audio_paths, _ = colab_utils.upload()\n",
        "\n",
        "  # Copy Audio.\n",
        "  print('Copying audio to colab for training...')\n",
        "  for src in audio_paths:\n",
        "    target = os.path.join(audio_dir, \n",
        "                          os.path.basename(src).replace(' ', '_'))\n",
        "    print('Copying {} to {}'.format(src, target))\n",
        "    shutil.copy(src, target)\n",
        "    # !cp $src $target\n",
        "\n",
        "\n",
        "def prepare_dataset(audio_dir, \n",
        "                    data_dir,\n",
        "                    sample_rate=16000, \n",
        "                    frame_rate=50, \n",
        "                    example_secs=4.0, \n",
        "                    hop_secs=1.0, \n",
        "                    viterbi=True, \n",
        "                    center=True):\n",
        "  if directory_has_files(data_dir):\n",
        "    print(f'Dataset already exists in `{data_dir}`')\n",
        "    return\n",
        "  else:\n",
        "    # Otherwise prepare new dataset locally.\n",
        "    print(f'Preparing new dataset from `{audio_dir}`')\n",
        "\n",
        "    print()\n",
        "    print('Creating dataset...')\n",
        "    print('This usually takes around 2-3 minutes for each minute of audio')\n",
        "    print('(10 minutes of training audio -\u003e 20-30 minutes)')\n",
        "\n",
        "    audio_filepattern = os.path.join(audio_dir, '*')\n",
        "    audio_fp_str = f'\"{audio_filepattern}\"'    \n",
        "    tfrecord_path_str = f'\"{data_dir}/train.tfrecord\"'\n",
        "\n",
        "    !ddsp_prepare_tfrecord \\\n",
        "    --input_audio_filepatterns=$audio_fp_str \\\n",
        "    --output_tfrecord_path=$tfrecord_path_str \\\n",
        "    --num_shards=10 \\\n",
        "    --sample_rate=$sample_rate \\\n",
        "    --frame_rate=$frame_rate \\\n",
        "    --example_secs=$example_secs \\\n",
        "    --hop_secs=$hop_secs \\\n",
        "    --viterbi=$viterbi \\\n",
        "    --center=$center \u0026\u003e /dev/null\n",
        "\n",
        "\n",
        "def train(model_dir, data_dir, steps=30000):\n",
        "  file_pattern = os.path.join(data_dir, 'train.tfrecord*')\n",
        "  fp_str = f\"TFRecordProvider.file_pattern='{file_pattern}'\"\n",
        "  !ddsp_run \\\n",
        "  --mode=train \\\n",
        "  --save_dir=\"$model_dir\" \\\n",
        "  --gin_file=models/vst/vst.gin \\\n",
        "  --gin_file=datasets/tfrecord.gin \\\n",
        "  --gin_param=\"$fp_str\" \\\n",
        "  --gin_param=\"TFRecordProvider.centered=True\" \\\n",
        "  --gin_param=\"TFRecordProvider.frame_rate=50\" \\\n",
        "  --gin_param=\"batch_size=16\" \\\n",
        "  --gin_param=\"train_util.train.num_steps=$steps\" \\\n",
        "  --gin_param=\"train_util.train.steps_per_save=300\" \\\n",
        "  --gin_param=\"trainers.Trainer.checkpoints_to_keep=3\"\n",
        "\n",
        "  # --gin_param=\"train.data_provider=@ExperimentalDataProvider()\" \\\n",
        "  # --gin_param=\"ExperimentalRecordProvider.data_dir='$data_dir'\" \\\n",
        "  # --gin_param=\"ExperimentalRecordProvider.sample_rate=16000\" \\\n",
        "  # --gin_param=\"ExperimentalRecordProvider.frame_rate=50\" \\\n",
        "\n",
        "\n",
        "def reset_state(data_dir, audio_dir, model_dir):\n",
        "  model_dir_str = f'\"{model_dir}\"'\n",
        "  if tf.io.gfile.exists(data_dir):\n",
        "    !rm -r $data_dir\n",
        "    !rm -r $audio_dir\n",
        "  !mkdir -p $data_dir\n",
        "  !mkdir -p $audio_dir\n",
        "  !mkdir -p $model_dir_str\n",
        "\n",
        "\n",
        "def export_and_download(model_dir, model_name=Name):\n",
        "  export_path = os.path.join(model_dir, model_name)\n",
        "\n",
        "  model_dir_str=f'\"{model_dir}\"'\n",
        "  export_path_str=f'\"{export_path}\"'\n",
        "  \n",
        "  !ddsp_export \\\n",
        "  --name=$model_name \\\n",
        "  --model_path=$model_dir_str \\\n",
        "  --save_dir=$export_path_str \\\n",
        "  --inference_model=vst_stateless_predict_controls \\\n",
        "  --tflite \\\n",
        "  --notfjs\n",
        "\n",
        "  # Zip the whole directory.\n",
        "  zip_fname = f'{model_name}.zip'\n",
        "  zip_fp = os.path.join(model_dir, zip_fname)\n",
        "  print(f'Export complete! Zipping {export_path} to {zip_fp}')\n",
        "  !cd $model_dir_str \u0026\u0026 zip -r $zip_fname ./$model_name\n",
        "\n",
        "  # Download.\n",
        "  print(f'Zipping Complete! Downloading... {zip_fname}')\n",
        "  print(f'You can also find your model at {export_path}')\n",
        "  colab_utils.download(zip_fp)\n",
        "\n",
        "\n",
        "def get_model_dir(base_dir):\n",
        "  base_str = 'ddsp-training'\n",
        "  # Sort explicitly: glob output order is not guaranteed, and the\n",
        "  # timestamped names sort chronologically.\n",
        "  dirs = sorted(tf.io.gfile.glob(os.path.join(base_dir, f'{base_str}-*')))\n",
        "  if dirs and not Ignore_Previous:\n",
        "    model_dir = dirs[-1]  # Sorted, so last is most recent.\n",
        "  else:\n",
        "    now = datetime.datetime.now().strftime('%Y-%m-%d-%H%M')\n",
        "    model_dir = os.path.join(base_dir, f'{base_str}-{now}')\n",
        "  return model_dir\n",
        "\n",
        "\n",
        "def get_gpu_type():\n",
        "  \"\"\"Return the name of the attached GPU, or '' if none is found.\"\"\"\n",
        "  try:\n",
        "    bash_command = \"nvidia-smi --query-gpu=name --format=csv\"\n",
        "    output = subprocess.getoutput(bash_command)\n",
        "    # First line is the CSV header ('name'); the GPU name follows.\n",
        "    return output.split(\"\\n\")[1]\n",
        "  except (IndexError, OSError):\n",
        "    print(\"GPU device is not available\")\n",
        "    return ''\n",
        "\n",
        "# ------------------------------------------------------------------------------\n",
        "# Run\n",
        "# ------------------------------------------------------------------------------\n",
        "def run(Google_Drive=True):\n",
        "  \"\"\"Create and display a FileChooser widget.\"\"\"\n",
        "  log_event('runStarted', {})\n",
        "  gpu_type = get_gpu_type()\n",
        "  print(f'Using a {gpu_type} GPU...')\n",
        "  log_event('gpuType', {'event_category': gpu_type})\n",
        "  if Google_Drive:\n",
        "    log_event('trainingOnDrive', {})\n",
        "  else:\n",
        "    log_event('trainingLocally', {})\n",
        "\n",
        "  if Google_Drive:\n",
        "    print('Mounting Google Drive...')\n",
        "    drive.mount('gdrive', force_remount=True, timeout_ms=10000)    \n",
        "    initial_dir = 'gdrive/MyDrive'\n",
        "\n",
        "    def run_after_select(chooser):\n",
        "      drive_dir = chooser.selected_path\n",
        "      run_training(drive_dir=drive_dir)\n",
        "\n",
        "    fc = FileChooser(initial_dir)\n",
        "    fc.show_only_dirs = True\n",
        "    fc.title = '\u003cb\u003ePick a folder with (.mp3/.wav) files for training. (Files will not be visible here)...\u003c/b\u003e'\n",
        "    fc.register_callback(run_after_select)\n",
        "    display(fc)\n",
        "\n",
        "\n",
        "  else:\n",
        "    print('Skipping Drive Setup...')\n",
        "    print('Upload Audio Manually...')\n",
        "    run_training(drive_dir='')\n",
        "\n",
        "\n",
        "def run_training(drive_dir=''):\n",
        "  log_event('runTrainingStarted', {})\n",
        "  # ------------------------------------------------------------------------------\n",
        "  # Install DDSP here to allow selecting folder first\n",
        "  # ------------------------------------------------------------------------------\n",
        "  print('Installing DDSP...')\n",
        "  print('This should take about 2 minutes...')\n",
        "  !sudo apt-get install libportaudio2 \u0026\u003e /dev/null\n",
        "  !pip install -U ddsp[data_preparation] \u0026\u003e /dev/null\n",
        "\n",
        "  # ------------------------------------------------------------------------------\n",
        "  # Import DDSP\n",
        "  # ------------------------------------------------------------------------------\n",
        "  from ddsp.colab import colab_utils\n",
        "  globals()['colab_utils'] = colab_utils\n",
        "\n",
        "  # ------------------------------------------------------------------------------\n",
        "  # Setup\n",
        "  # ------------------------------------------------------------------------------\n",
        "  # Save data locally, but model on drive.\n",
        "  data_dir = 'data/'\n",
        "  audio_dir = 'audio/'\n",
        "  model_dir = get_model_dir(drive_dir)\n",
        "\n",
        "  reset_state(data_dir, audio_dir, model_dir)\n",
        "\n",
        "  # ------------------------------------------------------------------------------\n",
        "  # Dataset\n",
        "  # ------------------------------------------------------------------------------\n",
        "  tick = time.time()\n",
        "\n",
        "  get_audio_files(drive_dir, audio_dir)\n",
        "  prepare_dataset(audio_dir, data_dir)\n",
        "\n",
        "  log_event('datasetMins', {'value': round((time.time() - tick) // 60)})\n",
        "\n",
        "\n",
        "  # ------------------------------------------------------------------------------\n",
        "  # Train\n",
        "  # ------------------------------------------------------------------------------\n",
        "  tick = time.time()\n",
        "\n",
        "  print()\n",
        "  print('Training...')\n",
        "  train(model_dir, data_dir, steps=Training_Steps)\n",
        "\n",
        "  log_event('trainMins', {\n",
        "      'event_category': str(Training_Steps),\n",
        "      'value': round((time.time() - tick) // 60),\n",
        "  })\n",
        "\n",
        "\n",
        "  # ------------------------------------------------------------------------------\n",
        "  # Export\n",
        "  # ------------------------------------------------------------------------------\n",
        "  tick = time.time()\n",
        "\n",
        "  print()\n",
        "  print('Exporting model...')\n",
        "  export_and_download(model_dir)\n",
        "\n",
        "  log_event('exportMins', {'value': round((time.time() - tick) // 60)})\n",
        "\n",
        "\n",
        "# The single command.\n",
        "run(Google_Drive)\n"
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "collapsed_sections": [],
      "last_runtime": {},
      "name": "Train_VST.ipynb",
      "private_outputs": true,
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
