File size: 11,089 Bytes
c840e14
1
{"guide": {"name": "Gradio-and-Wandb-Integration", "category": "other-tutorials", "pretty_category": "Other Tutorials", "guide_index": null, "absolute_index": 58, "pretty_name": "Gradio And Wandb Integration", "content": "# Gradio and W&B Integration\n\n\n\n\n\n## Introduction\n\nIn this Guide, we'll walk you through:\n\n- Introduction of Gradio, and Hugging Face Spaces, and Wandb\n- How to setup a Gradio demo using the Wandb integration for JoJoGAN\n- How to contribute your own Gradio demos after tracking your experiments on wandb to the Wandb organization on Hugging Face\n\n\n## What is Wandb?\n\nWeights and Biases (W&B) allows data scientists and machine learning scientists to track their machine learning experiments at every stage, from training to production. Any metric can be aggregated over samples and shown in panels in a customizable and searchable dashboard, like below:\n\n<img alt=\"Screen Shot 2022-08-01 at 5 54 59 PM\" src=\"https://user-images.githubusercontent.com/81195143/182252755-4a0e1ca8-fd25-40ff-8c91-c9da38aaa9ec.png\">\n\n## What are Hugging Face Spaces & Gradio?\n\n### Gradio\n\nGradio lets users demo their machine learning models as a web app, all in a few lines of Python. Gradio wraps any Python function (such as a machine learning model's inference function) into a user interface and the demos can be launched inside jupyter notebooks, colab notebooks, as well as embedded in your own website and hosted on Hugging Face Spaces for free.\n\nGet started [here](https://gradio.app/getting_started)\n\n### Hugging Face Spaces\n\nHugging Face Spaces is a free hosting option for Gradio demos. Spaces comes with 3 SDK options: Gradio, Streamlit and Static HTML demos. Spaces can be public or private and the workflow is similar to github repos. There are over 2000+ spaces currently on Hugging Face. Learn more about spaces [here](https://huggingface.co/spaces/launch).\n\n## Setting up a Gradio Demo for JoJoGAN\n\nNow, let's walk you through how to do this on your own. We'll make the assumption that you're new to W&B and Gradio for the purposes of this tutorial.\n\nLet's get started!\n\n1. Create a W&B account\n\n   Follow [these quick instructions](https://app.wandb.ai/login) to create your free account if you don\u2019t have one already. It shouldn't take more than a couple minutes. Once you're done (or if you've already got an account), next, we'll run a quick colab.\n\n2. Open Colab Install Gradio and W&B\n\n   We'll be following along with the colab provided in the JoJoGAN repo with some minor modifications to use Wandb and Gradio more effectively.\n\n   [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mchong6/JoJoGAN/blob/main/stylize.ipynb)\n\n   Install Gradio and Wandb at the top:\n\n   ```sh\n   pip install gradio wandb\n   ```\n\n3. Finetune StyleGAN and W&B experiment tracking\n\n   This next step will open a W&B dashboard to track your experiments and a gradio panel showing pretrained models to choose from a drop down menu from a Gradio Demo hosted on Huggingface Spaces. Here's the code you need for that:\n\n   ```python\n   alpha =  1.0\n   alpha = 1-alpha\n\n   preserve_color = True\n   num_iter = 100\n   log_interval = 50\n\n   samples = []\n   column_names = [\"Reference (y)\", \"Style Code(w)\", \"Real Face Image(x)\"]\n\n   wandb.init(project=\"JoJoGAN\")\n   config = wandb.config\n   config.num_iter = num_iter\n   config.preserve_color = preserve_color\n   wandb.log(\n   {\"Style reference\": [wandb.Image(transforms.ToPILImage()(target_im))]},\n   step=0)\n\n   # load discriminator for perceptual loss\n   discriminator = Discriminator(1024, 2).eval().to(device)\n   ckpt = torch.load('models/stylegan2-ffhq-config-f.pt', map_location=lambda storage, loc: storage)\n   discriminator.load_state_dict(ckpt[\"d\"], strict=False)\n\n   # reset generator\n   del generator\n   generator = deepcopy(original_generator)\n\n   g_optim = optim.Adam(generator.parameters(), lr=2e-3, betas=(0, 0.99))\n\n   # Which layers to swap for generating a family of plausible real images -> fake image\n   if preserve_color:\n       id_swap = [9,11,15,16,17]\n   else:\n       id_swap = list(range(7, generator.n_latent))\n\n   for idx in tqdm(range(num_iter)):\n       mean_w = generator.get_latent(torch.randn([latents.size(0), latent_dim]).to(device)).unsqueeze(1).repeat(1, generator.n_latent, 1)\n       in_latent = latents.clone()\n       in_latent[:, id_swap] = alpha*latents[:, id_swap] + (1-alpha)*mean_w[:, id_swap]\n\n       img = generator(in_latent, input_is_latent=True)\n\n       with torch.no_grad():\n           real_feat = discriminator(targets)\n       fake_feat = discriminator(img)\n\n       loss = sum([F.l1_loss(a, b) for a, b in zip(fake_feat, real_feat)])/len(fake_feat)\n\n       wandb.log({\"loss\": loss}, step=idx)\n       if idx % log_interval == 0:\n           generator.eval()\n           my_sample = generator(my_w, input_is_latent=True)\n           generator.train()\n           my_sample = transforms.ToPILImage()(utils.make_grid(my_sample, normalize=True, range=(-1, 1)))\n           wandb.log(\n           {\"Current stylization\": [wandb.Image(my_sample)]},\n           step=idx)\n       table_data = [\n               wandb.Image(transforms.ToPILImage()(target_im)),\n               wandb.Image(img),\n               wandb.Image(my_sample),\n           ]\n       samples.append(table_data)\n\n       g_optim.zero_grad()\n       loss.backward()\n       g_optim.step()\n\n   out_table = wandb.Table(data=samples, columns=column_names)\n   wandb.log({\"Current Samples\": out_table})\n   ```\n4. Save, Download, and Load Model\n\n    Here's how to save and download your model.\n\n   ```python\n   from PIL import Image\n   import torch\n   torch.backends.cudnn.benchmark = True\n   from torchvision import transforms, utils\n   from util import *\n   import math\n   import random\n   import numpy as np\n   from torch import nn, autograd, optim\n   from torch.nn import functional as F\n   from tqdm import tqdm\n   import lpips\n   from model import *\n   from e4e_projection import projection as e4e_projection\n   \n   from copy import deepcopy\n   import imageio\n   \n   import os\n   import sys\n   import torchvision.transforms as transforms\n   from argparse import Namespace\n   from e4e.models.psp import pSp\n   from util import *\n   from huggingface_hub import hf_hub_download\n   from google.colab import files\n   \n   torch.save({\"g\": generator.state_dict()}, \"your-model-name.pt\")\n   \n   files.download('your-model-name.pt')\n   \n   latent_dim = 512\n   device=\"cuda\"\n   model_path_s = hf_hub_download(repo_id=\"akhaliq/jojogan-stylegan2-ffhq-config-f\", filename=\"stylegan2-ffhq-config-f.pt\")\n   original_generator = Generator(1024, latent_dim, 8, 2).to(device)\n   ckpt = torch.load(model_path_s, map_location=lambda storage, loc: storage)\n   original_generator.load_state_dict(ckpt[\"g_ema\"], strict=False)\n   mean_latent = original_generator.mean_latent(10000)\n   \n   generator = deepcopy(original_generator)\n   \n   ckpt = torch.load(\"/content/JoJoGAN/your-model-name.pt\", map_location=lambda storage, loc: storage)\n   generator.load_state_dict(ckpt[\"g\"], strict=False)\n   generator.eval()\n   \n   plt.rcParams['figure.dpi'] = 150\n   \n   transform = transforms.Compose(\n       [\n           transforms.Resize((1024, 1024)),\n           transforms.ToTensor(),\n           transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),\n       ]\n   )\n   \n   def inference(img):\n       img.save('out.jpg')\n       aligned_face = align_face('out.jpg')\n   \n       my_w = e4e_projection(aligned_face, \"out.pt\", device).unsqueeze(0)\n       with torch.no_grad():\n           my_sample = generator(my_w, input_is_latent=True)\n   \n       npimage = my_sample[0].cpu().permute(1, 2, 0).detach().numpy()\n       imageio.imwrite('filename.jpeg', npimage)\n       return 'filename.jpeg'\n   ````\n\n5. Build a Gradio Demo\n\n   ```python\n   import gradio as gr\n   \n   title = \"JoJoGAN\"\n   description = \"Gradio Demo for JoJoGAN: One Shot Face Stylization. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below.\"\n   \n   demo = gr.Interface(\n       inference,\n       gr.Image(type=\"pil\"),\n       gr.Image(type=\"file\"),\n       title=title,\n       description=description\n   )\n   \n   demo.launch(share=True)\n   ```\n\n6. Integrate Gradio into your W&B Dashboard\n\n   The last step\u2014integrating your Gradio demo with your W&B dashboard\u2014is just one extra line:\n\n   ```python\n   demo.integrate(wandb=wandb)\n   ```\n\n   Once you call integrate, a demo will be created and you can integrate it into your dashboard or report.\n\n   Outside of W&B with Web components, using the `gradio-app` tags, anyone can embed Gradio demos on HF spaces directly into their blogs, websites, documentation, etc.:\n   \n   ```html\n   <gradio-app space=\"akhaliq/JoJoGAN\"> </gradio-app>\n   ```\n\n7. (Optional) Embed W&B plots in your Gradio App\n\n   It's also possible to embed W&B plots within Gradio apps. To do so, you can create a W&B Report of your plots and\n   embed them within your Gradio app within a `gr.HTML` block.\n\n   The Report will need to be public and you will need to wrap the URL within an iFrame like this:\n\n   ```python\n   import gradio as gr\n   \n   def wandb_report(url):\n       iframe = f'<iframe src={url} style=\"border:none;height:1024px;width:100%\">'\n       return gr.HTML(iframe)\n   \n   with gr.Blocks() as demo:\n       report_url = 'https://wandb.ai/_scott/pytorch-sweeps-demo/reports/loss-22-10-07-16-00-17---VmlldzoyNzU2NzAx'\n       report = wandb_report(report_url)\n   \n   demo.launch(share=True)\n   ```\n\n## Conclusion\n\nWe hope you enjoyed this brief demo of embedding a Gradio demo to a W&B report! Thanks for making it to the end. To recap:\n\n- Only one single reference image is needed for fine-tuning JoJoGAN which usually takes about 1 minute on a GPU in colab. After training, style can be applied to any input image. Read more in the paper.\n\n- W&B tracks experiments with just a few lines of code added to a colab and you can visualize, sort, and understand your experiments in a single, centralized dashboard.\n\n- Gradio, meanwhile, demos the model in a user friendly interface to share anywhere on the web.\n\n## How to contribute Gradio demos on HF spaces on the Wandb organization\n\n- Create an account on Hugging Face [here](https://huggingface.co/join).\n- Add Gradio Demo under your username, see this [course](https://huggingface.co/course/chapter9/4?fw=pt) for setting up Gradio Demo on Hugging Face.\n- Request to join wandb organization [here](https://huggingface.co/wandb).\n- Once approved transfer model from your username to Wandb organization\n", "tags": ["WANDB", "SPACES"], "spaces": ["https://huggingface.co/spaces/akhaliq/JoJoGAN"], "url": "/guides/Gradio-and-Wandb-Integration/", "contributor": "Gradio team"}}