Update app.py
app.py CHANGED
@@ -198,7 +198,9 @@ with image_blocks as demo:
 with gr.Box(elem_id="mask_radio").style(border=False):
     radio = gr.Radio(["draw a mask above", "type what to mask below", "type what to keep"], value="draw a mask above", show_label=False, interactive=True).style(container=False)
     word_mask = gr.Textbox(label="What to find in your image", interactive=False, elem_id="word_mask", placeholder="Disabled").style(container=False)
-
+
+img_res = gr.Dropdown(['512*512', '256*256'], label="Image Resolution")
+
 prompt = gr.Textbox(label='Your prompt (what you want to add in place of what you are removing)')
 radio.change(fn=swap_word_mask, inputs=radio, outputs=word_mask, show_progress=False)
 radio.change(None, inputs=[], outputs=image_blocks, _js="""
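Note: this commit's functional change is the new img_res dropdown, but the hunk only declares the control; neither the consumer of its '512*512' / '256*256' strings nor the body of swap_word_mask appears in the diff. The sketch below is a hypothetical illustration of both, not code from this Space:

    import gradio as gr

    def parse_resolution(choice: str) -> tuple[int, int]:
        # Hypothetical helper: split a "512*512" dropdown value into (width, height).
        w, h = choice.split("*")
        return int(w), int(h)

    def swap_word_mask(radio_value: str):
        # Assumed callback body, inferred from the UI it drives: enable the
        # word_mask textbox only for the two text-driven masking modes.
        if radio_value in ("type what to mask below", "type what to keep"):
            return gr.update(interactive=True, placeholder="A cat sitting on a chair")
        return gr.update(interactive=False, placeholder="Disabled")

For example, parse_resolution('256*256') returns (256, 256), which could then be forwarded to the inference call as width and height.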
@@ -220,7 +222,7 @@ with image_blocks as demo:
 
 
 <div class="acknowledgments" >
-<article
+<article itemprop="text"><h1 dir="auto"><a id="user-content-image-segmentation-using-text-and-image-prompts" aria-hidden="true" href="#image-segmentation-using-text-and-image-prompts"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a>Image Segmentation Using Text and Image Prompts</h1>
 <p dir="auto">This repository contains the code used in the paper <a href="https://arxiv.org/abs/2112.10003" rel="nofollow">"Image Segmentation Using Text and Image Prompts"</a>.</p>
 
 <p dir="auto"><a target="_blank" rel="noopener noreferrer" href="/ThereforeGames/txt2mask/blob/main/repositories/clipseg/overview.png"><img src="/ThereforeGames/txt2mask/raw/main/repositories/clipseg/overview.png" alt="drawing" style="max-width: 100%;" height="200em"></a></p>
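Note: everything from <div class="acknowledgments" > here down to the closing triple quote near the end of the file is a single Python string literal; the call that receives it is outside every hunk shown, so the component is an assumption. The usual Gradio pattern for such a block looks like this minimal sketch (gr.HTML is assumed, the Blocks names come from the hunk headers):

    import gradio as gr

    image_blocks = gr.Blocks()
    with image_blocks as demo:
        # ... the Radio/Textbox/Dropdown widgets from the first hunk ...
        gr.HTML(
            """
            <div class="acknowledgments">
              <article itemprop="text"><!-- rendered README content --></article>
            </div>
            """
        )
    demo.launch()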
@@ -229,32 +231,32 @@ with image_blocks as demo:
 <li>An arbitrary text query</li>
 <li>Or an image with a mask highlighting stuff or an object.</li>
 </ul>
-<h3 dir="auto"><a id="user-content-quick-start"
+<h3 dir="auto"><a id="user-content-quick-start" aria-hidden="true" href="#quick-start"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a>Quick Start</h3>
 <p dir="auto">In the <code>Quickstart.ipynb</code> notebook we provide the code for using a pre-trained CLIPSeg model. If you run the notebook locally, make sure you have downloaded the <code>rd64-uni.pth</code> weights, either manually or via the git lfs extension.
 It can also be used interactively via <a href="https://mybinder.org/v2/gh/timojl/clipseg/HEAD?labpath=Quickstart.ipynb" rel="nofollow">MyBinder</a>
 (please note that the VM does not use a GPU, so inference takes a few seconds).</p>
-<h3 dir="auto"><a id="user-content-dependencies"
+<h3 dir="auto"><a id="user-content-dependencies" aria-hidden="true" href="#dependencies"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a>Dependencies</h3>
 <p dir="auto">This code base depends on pytorch, torchvision and clip (<code>pip install git+https://github.com/openai/CLIP.git</code>).
 Additional dependencies are hidden for double-blind review.</p>
-<h3 dir="auto"><a id="user-content-datasets"
+<h3 dir="auto"><a id="user-content-datasets" aria-hidden="true" href="#datasets"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a>Datasets</h3>
 <ul dir="auto">
 <li><code>PhraseCut</code> and <code>PhraseCutPlus</code>: Referring expression dataset</li>
 <li><code>PFEPascalWrapper</code>: Wrapper class for PFENet's Pascal-5i implementation</li>
 <li><code>PascalZeroShot</code>: Wrapper class for PascalZeroShot</li>
 <li><code>COCOWrapper</code>: Wrapper class for COCO.</li>
 </ul>
-<h3 dir="auto"><a id="user-content-models"
+<h3 dir="auto"><a id="user-content-models" aria-hidden="true" href="#models"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a>Models</h3>
 <ul dir="auto">
 <li><code>CLIPDensePredT</code>: CLIPSeg model with transformer-based decoder.</li>
 <li><code>ViTDensePredT</code>: CLIPSeg model with transformer-based decoder.</li>
 </ul>
-<h3 dir="auto"><a id="user-content-third-party-dependencies"
+<h3 dir="auto"><a id="user-content-third-party-dependencies" aria-hidden="true" href="#third-party-dependencies"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a>Third Party Dependencies</h3>
 <p dir="auto">For some of the datasets, third-party dependencies are required. Run the following commands in the <code>third_party</code> folder.</p>
-<div
+<div dir="auto"><pre>git clone https://github.com/cvlab-yonsei/JoEm
 git clone https://github.com/Jia-Research-Lab/PFENet.git
 git clone https://github.com/ChenyunWu/PhraseCutDataset.git
-git clone https://github.com/juhongm999/hsnet.git</pre><div
-<clipboard-copy aria-label="Copy"
+git clone https://github.com/juhongm999/hsnet.git</pre><div >
+<clipboard-copy aria-label="Copy" data-copy-feedback="Copied!" data-tooltip-direction="w" value="git clone https://github.com/cvlab-yonsei/JoEm
 git clone https://github.com/Jia-Research-Lab/PFENet.git
 git clone https://github.com/ChenyunWu/PhraseCutDataset.git
 git clone https://github.com/juhongm999/hsnet.git" tabindex="0" role="button">
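Note: the Quick Start section above points at Quickstart.ipynb, which is not part of this diff. The following is a minimal sketch of loading the pre-trained model it describes, assuming the CLIPDensePredT interface of the upstream timojl/clipseg repository (the constructor arguments and forward signature below are assumptions, not shown in this file):

    import torch
    from PIL import Image
    from torchvision import transforms
    from models.clipseg import CLIPDensePredT  # module path assumed from the clipseg repo layout

    # rd64-uni.pth should correspond to the CLIPSeg-D64 weights listed under "Weights" below.
    model = CLIPDensePredT(version='ViT-B/16', reduce_dim=64)
    model.eval()
    # strict=False: the checkpoint ships without the frozen CLIP backbone weights.
    model.load_state_dict(torch.load('weights/rd64-uni.pth', map_location='cpu'), strict=False)

    preprocess = transforms.Compose([transforms.ToTensor(), transforms.Resize((352, 352))])
    img = preprocess(Image.open('example.jpg').convert('RGB')).unsqueeze(0)

    with torch.no_grad():
        preds = model(img, ['a glass'])[0]    # one logit map per (image, prompt) pair
    mask = torch.sigmoid(preds[0][0]) > 0.5   # threshold to get a binary segmentation mask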
@@ -266,30 +268,30 @@ git clone https://github.com/juhongm999/hsnet.git" tabindex="0" role="button">
 </svg>
 </clipboard-copy>
 </div></div>
-<h3 dir="auto"><a id="user-content-weights"
+<h3 dir="auto"><a id="user-content-weights" aria-hidden="true" href="#weights"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a>Weights</h3>
 <p dir="auto">The MIT license does not apply to these weights.</p>
 <ul dir="auto">
 <li><a href="https://github.com/timojl/clipseg/raw/master/weights/rd64-uni.pth">CLIPSeg-D64</a> (4.1MB, without CLIP weights)</li>
 <li><a href="https://github.com/timojl/clipseg/raw/master/weights/rd16-uni.pth">CLIPSeg-D16</a> (1.1MB, without CLIP weights)</li>
 </ul>
-<h3 dir="auto"><a id="user-content-training-and-evaluation"
+<h3 dir="auto"><a id="user-content-training-and-evaluation" aria-hidden="true" href="#training-and-evaluation"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a>Training and Evaluation</h3>
 <p dir="auto">To train, use the <code>training.py</code> script with experiment file and experiment id parameters, e.g. <code>python training.py phrasecut.yaml 0</code> will train the first phrasecut experiment, which is defined by the <code>configuration</code> and the first <code>individual_configurations</code> parameters. Model weights will be written to <code>logs/</code>.</p>
 <p dir="auto">For evaluation, use <code>score.py</code>, e.g. <code>python score.py phrasecut.yaml 0 0</code> will evaluate the first phrasecut experiment of <code>test_configuration</code> with the first configuration in <code>individual_configurations</code>.</p>
-<h3 dir="auto"><a id="user-content-usage-of-pfenet-wrappers"
+<h3 dir="auto"><a id="user-content-usage-of-pfenet-wrappers" aria-hidden="true" href="#usage-of-pfenet-wrappers"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a>Usage of PFENet Wrappers</h3>
 <p dir="auto">In order to use the dataset and model wrappers for PFENet, the PFENet repository needs to be cloned into the root folder:
 <code>git clone https://github.com/Jia-Research-Lab/PFENet.git</code></p>
-<h3 dir="auto"><a id="user-content-license" class="anchor" aria-hidden="true" href="#license"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a>License</h3>
-<p dir="auto">The source code files in this repository (excluding model weights) are released under the MIT license.</p>
 
-
-</article>
 </div>
 <div class="acknowledgments">
-
+<h4 dir="auto"><a id="user-content-license" aria-hidden="true" href="#license"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a>LICENSE</h4>
+<p dir="auto">The source code files in this repository (excluding model weights) are released under the MIT license.</p>
+
+<p>
 The model is licensed with a <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license" style="text-decoration: underline;" target="_blank">CreativeML Open RAIL-M</a> license. The authors claim no rights over the outputs you generate; you are free to use them, and you are accountable for their use, which must not go against the provisions set in this license. The license forbids you from sharing any content that violates any laws, produces harm to a person, disseminates personal information intended for harm, spreads misinformation, or targets vulnerable groups. For the full list of restrictions please <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license" style="text-decoration: underline;" target="_blank">read the license</a>.</p>
 <p><h4>Biases and content acknowledgment</h4>
 Despite how impressive it is to be able to turn text into images, be aware that this model may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography and violence. The model was trained on the <a href="https://laion.ai/blog/laion-5b/" style="text-decoration: underline;" target="_blank">LAION-5B dataset</a>, which scraped non-curated image-text pairs from the internet (the exception being the removal of illegal content) and is meant for research purposes. You can read more in the <a href="https://huggingface.co/CompVis/stable-diffusion-v1-4" style="text-decoration: underline;" target="_blank">model card</a>.</p>
 </div>
+</article>
 """
 )
 demo.launch()