
pix2pix-edge2shoes

Model description

Pix2pix is a conditional adversarial network, a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from the input image to the output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that would traditionally require very different loss formulations. The authors demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
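For reference, the combined objective from the pix2pix paper (cited below) pairs the conditional GAN loss with an L1 reconstruction term, weighted by λ (the paper uses λ = 100 in its experiments):

G^* = \arg\min_G \max_D \, \mathcal{L}_{cGAN}(G, D) + \lambda \, \mathcal{L}_{L1}(G)
\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}\left[ \lVert y - G(x, z) \rVert_1 \right]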

Intended uses & limitations:

This model is intended for reconstructing shoe images from edge (sketch) drawings.

How to use

import torch
from torchvision.transforms import Compose, Resize, ToTensor, Normalize
from torchvision.utils import save_image
from PIL import Image
from huggan.pytorch.pix2pix.modeling_pix2pix import GeneratorUNet

# Preprocessing: resize to 256x256 and scale pixel values to [-1, 1]
transform = Compose(
    [
        Resize((256, 256), Image.BICUBIC),
        ToTensor(),
        Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ]
)

# Load the pretrained U-Net generator from the Hub
model = GeneratorUNet.from_pretrained('huggan/pix2pix-edge2shoes')
model.eval()

def predict_fn(img):
    inp = transform(img).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        out = model(inp)
    # normalize=True rescales the generator output to [0, 1] before saving
    save_image(out, 'out.png', normalize=True)
    return 'out.png'

# Run inference on an edge drawing (replace with your own image file)
img = Image.open('edge.jpg')
predict_fn(img)
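If you want the result as a PIL image instead of a file on disk, here is a minimal sketch (assuming the generator output stays in the [-1, 1] range produced by the normalization above):

import torch
from torchvision.transforms.functional import to_pil_image

def to_pil(out: torch.Tensor):
    # undo Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)) and clamp to [0, 1]
    img = (out.squeeze(0) * 0.5 + 0.5).clamp(0, 1)
    return to_pil_image(img)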

Limitations and bias

  • Can produce unrealistic colors in the generated image
  • Patterns in the edge drawing are not recognized properly

Training data
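
The model was trained on the huggan/edges2shoes dataset from the Hugging Face Hub, a paired edge-map/shoe-photo dataset (the same dataset passed to the launch command below). A minimal sketch of loading it with the datasets library; the split name is an assumption:

from datasets import load_dataset

# paired edge-map / shoe-photo dataset used for training (split name assumed)
dataset = load_dataset("huggan/edges2shoes", split="train")
print(dataset)        # inspect the number of examples and column names
example = dataset[0]  # one paired training example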

Training procedure

# clone the repository and install the training requirements
git clone https://github.com/huggingface/community-events.git
cd community-events
pip install .

# change to the pix2pix example directory
cd huggan/pytorch/pix2pix/

# configure accelerate for your hardware
accelerate config

# launch training with the required parameters
accelerate launch train.py --checkpoint_interval 10 --dataset huggan/edges2shoes --push_to_hub --model_name pix2pix-edge2shoes --batch_size 128 --n_epochs 100

Generated Images

In the sample image grids:

  • First row: input sketch
  • Second row: generated image
  • Third row: target image

(Two sample image grids.)

BibTeX entry and citation info

@article{pix2pix2017,
  title={Image-to-Image Translation with Conditional Adversarial Networks},
  author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A},
  journal={CVPR},
  year={2017}
}