taki0112 committed on
Commit aab4477
1 Parent(s): 56f7912
Files changed (1)
  1. app.py +1 -1
app.py CHANGED
@@ -226,7 +226,6 @@ description_md = """
 ### 🔥 [[Default ver](https://huggingface.co/spaces/naver-ai/VisualStylePrompting)]
 ---
 ### ✨ Visual Style Prompting also works on `ControlNet` which specifies the shape of the results by depthmap or keypoints.
-### ‼️ w/ ControlNet ver does not support user style images.
 ### 🔥 To try out our demo with ControlNet,
 1. Upload an `image for depth control`. An off-the-shelf model will produce the depthmap from it.
 2. Choose `ControlNet scale` which determines the alignment to the depthmap.
@@ -234,6 +233,7 @@ description_md = """
 4. Enter the `text prompt`. (`Empty text` is okay, but a depthmap description helps.)
 5. Choose the `number of outputs`.
 
+### ⚠️ w/ ControlNet ver does not support user style images.
 ### 👉 To achieve faster results, we recommend lowering the diffusion steps to 30.
 ### Enjoy ! 😄
 """