BertChristiaens committed
Commit 2f90cfc
1 Parent(s): a9bdadd

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -10,8 +10,10 @@ Big thanks to `Google` for lending us TPUv4s to train this model on. Big thanks
  To make this demo as good as possible, our team spent a lot of time training a custom model. We used the LAION-5B dataset to build our custom dataset, which contains 130k images of 15 types of rooms in almost 30 design styles. After fetching all these images, we added metadata such as captions (from the BLIP captioning model) and segmentation maps (from the Hugging Face UperNetForSemanticSegmentation model).

  ## About the model
- These were then used to train the controlnet model to generate quality interior design images by using the segmentation maps and prompts as conditioning information for the model. By training on segmentation maps, the enduser has a very finegrained control over which objects they want to place in their room. The resulting model is then used in a community pipeline that supports image2image and inpainting, so the user can keep elements of their room and change specific parts of the image.
+ This dataset was then used to train the ControlNet model to generate high-quality interior design images, using the segmentation maps and prompts as conditioning information. By training on segmentation maps, the end user has very fine-grained control over which objects they want to place in their room.
  The training started from the `lllyasviel/control_v11p_sd15_seg` checkpoint, a robustly trained ControlNet model conditioned on segmentation maps. This checkpoint was fine-tuned on a TPUv4 with the JAX framework and afterwards converted into a PyTorch checkpoint for easy integration with the diffusers library.

  ## About the demo
- Our team made a streamlit demo where you can test out the capabilities of this model. https://huggingface.co/spaces/controlnet-interior-design/controlnet-seg
+ Our team made a Streamlit demo where you can test out the capabilities of this model.
+ The resulting model is used in a community pipeline that supports image2image and inpainting, so the user can keep elements of their room and change specific parts of the image.
+ https://huggingface.co/spaces/controlnet-interior-design/controlnet-seg
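The dataset section above mentions enriching the scraped images with BLIP captions and UperNet segmentation maps. The sketch below shows one way such metadata could be produced with the `transformers` library; the specific checkpoints (`Salesforce/blip-image-captioning-base`, `openmmlab/upernet-convnext-small`) and the input file name are assumptions, not necessarily what was used to build the dataset.

```python
import torch
from PIL import Image
from transformers import (
    AutoImageProcessor,
    BlipForConditionalGeneration,
    BlipProcessor,
    UperNetForSemanticSegmentation,
)

# Hypothetical checkpoints; the README only names the model families.
blip_processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
blip_model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
seg_processor = AutoImageProcessor.from_pretrained("openmmlab/upernet-convnext-small")
seg_model = UperNetForSemanticSegmentation.from_pretrained("openmmlab/upernet-convnext-small")

image = Image.open("living_room.jpg").convert("RGB")  # hypothetical dataset image

# Caption from BLIP
blip_inputs = blip_processor(image, return_tensors="pt")
caption_ids = blip_model.generate(**blip_inputs, max_new_tokens=30)
caption = blip_processor.decode(caption_ids[0], skip_special_tokens=True)

# Semantic segmentation map from UperNet (ADE20K label space)
seg_inputs = seg_processor(images=image, return_tensors="pt")
with torch.no_grad():
    seg_outputs = seg_model(**seg_inputs)
seg_map = seg_processor.post_process_semantic_segmentation(
    seg_outputs, target_sizes=[image.size[::-1]]
)[0]

print(caption, seg_map.shape)
```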
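Since the checkpoint was converted to PyTorch for use with the `diffusers` library, a minimal text-to-image sketch conditioned on a segmentation map could look like the following. The ControlNet model id is a placeholder (the link above points to the demo Space, not necessarily the weights repo), and the segmentation image is assumed to be an ADE20K-style color map.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler
from diffusers.utils import load_image

# Placeholder id for the converted interior-design ControlNet checkpoint.
controlnet = ControlNetModel.from_pretrained(
    "controlnet-interior-design/controlnet-seg", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

# Hypothetical ADE20K-style color-coded segmentation map of the desired room layout.
seg_image = load_image("room_segmentation.png")

result = pipe(
    prompt="a modern living room in Scandinavian style",
    image=seg_image,
    num_inference_steps=30,
).images[0]
result.save("generated_room.png")
```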
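The demo's community pipeline combines the segmentation ControlNet with image2image and inpainting, so parts of the room can be kept while others are regenerated. The sketch below uses the stock `StableDiffusionControlNetInpaintPipeline` as a stand-in for that community pipeline; the room photo, mask, segmentation map, and model ids are all assumptions.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from diffusers.utils import load_image

# Placeholder id for the converted interior-design ControlNet checkpoint.
controlnet = ControlNetModel.from_pretrained(
    "controlnet-interior-design/controlnet-seg", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.to("cuda")

room = load_image("room_photo.png")            # the user's current room (hypothetical input)
mask = load_image("mask.png")                  # white where the image may change
seg_map = load_image("room_segmentation.png")  # segmentation map used as conditioning

result = pipe(
    prompt="a green velvet sofa in a bright living room",
    image=room,
    mask_image=mask,
    control_image=seg_map,
    num_inference_steps=30,
).images[0]
result.save("edited_room.png")
```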