nightfury committed
Commit 0cfe17d
1 Parent(s): 34abcdd

Update app.py

Files changed (1)
  1. app.py +8 -6
app.py CHANGED
@@ -59,7 +59,7 @@ model_id_or_path = "CompVis/stable-diffusion-v1-4"
  pipe = StableDiffusionInpaintingPipeline.from_pretrained(
  model_id_or_path,
  revision="fp16",
- torch_dtype=torch.float, #float16
+ torch_dtype=torch.double, #float16
  use_auth_token=auth_token
  )
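For reference, a minimal sketch of the usual pairing between the fp16 weights revision and torch_dtype. The stock diffusers StableDiffusionInpaintPipeline is used here as a stand-in for the app's own StableDiffusionInpaintingPipeline, and whether a plain v1-4 checkpoint is accepted by it depends on the diffusers version; both points are assumptions, not something this commit shows.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline  # stand-in for the app's custom pipeline class

# The "fp16" revision ships half-precision weights, so torch.float16 on a CUDA
# device is the usual pairing. torch.float is float32 and torch.double is float64;
# either will still load the weights, but upcasts them and uses more memory.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="fp16",
    torch_dtype=torch.float16,
    use_auth_token=True,  # gated checkpoint: requires an accepted license / HF token
)
pipe = pipe.to("cuda")
```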
 
@@ -268,30 +268,32 @@ git clone https://github.com/juhongm999/hsnet.git" tabindex="0" role="button">
  </svg>
  </clipboard-copy>
  </div></div>
+
  <h3 dir="auto"><a id="user-content-weights" aria-hidden="true" href="#weights"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a>Weights</h3>
  <p dir="auto">The MIT license does not apply to these weights.</p>
  <ul dir="auto">
  <li><a href="https://github.com/timojl/clipseg/raw/master/weights/rd64-uni.pth">CLIPSeg-D64</a> (4.1MB, without CLIP weights)</li>
  <li><a href="https://github.com/timojl/clipseg/raw/master/weights/rd16-uni.pth">CLIPSeg-D16</a> (1.1MB, without CLIP weights)</li>
  </ul>
+
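As a rough illustration of how the weights listed above are normally used, here is a sketch following the upstream timojl/clipseg repository; the CLIPDensePredT class and the reduce_dim values are assumptions taken from that repository, not from this app.

```python
import torch
from models.clipseg import CLIPDensePredT  # model class from the timojl/clipseg repository

# rd64-uni.pth pairs with reduce_dim=64; rd16-uni.pth would use reduce_dim=16.
model = CLIPDensePredT(version='ViT-B/16', reduce_dim=64)
model.eval()

# strict=False because these checkpoints ship without the CLIP backbone weights.
model.load_state_dict(
    torch.load('weights/rd64-uni.pth', map_location=torch.device('cpu')),
    strict=False,
)
```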
  <h3 dir="auto"><a id="user-content-training-and-evaluation" aria-hidden="true" href="#training-and-evaluation"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a>Training and Evaluation</h3>
  <p dir="auto">To train, use the <code>training.py</code> script with an experiment file and an experiment id as parameters. E.g. <code>python training.py phrasecut.yaml 0</code> will train the first phrasecut experiment, which is defined by the <code>configuration</code> and the first <code>individual_configurations</code> parameters. Model weights will be written to <code>logs/</code>.</p>
  <p dir="auto">For evaluation, use <code>score.py</code>. E.g. <code>python score.py phrasecut.yaml 0 0</code> will evaluate the first phrasecut experiment using <code>test_configuration</code> and the first configuration in <code>individual_configurations</code>.</p>
+
  <h3 dir="auto"><a id="user-content-usage-of-pfenet-wrappers" aria-hidden="true" href="#usage-of-pfenet-wrappers"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a>Usage of PFENet Wrappers</h3>
  <p dir="auto">To use the dataset and model wrappers for PFENet, clone the PFENet repository into the root folder:
  <code>git clone https://github.com/Jia-Research-Lab/PFENet.git</code></p>

- </div>
- <div class="acknowledgments">
- <h4 dir="auto"><a id="user-content-license" aria-hidden="true" href="#license"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a>LICENSE</h3>
+ <h4 dir="auto"><a id="user-content-license" aria-hidden="true" href="#license"><svg class="octicon octicon-link" viewBox="0 0 16 16" version="1.1" width="16" height="16" aria-hidden="true"><path fill-rule="evenodd" d="M7.775 3.275a.75.75 0 001.06 1.06l1.25-1.25a2 2 0 112.83 2.83l-2.5 2.5a2 2 0 01-2.83 0 .75.75 0 00-1.06 1.06 3.5 3.5 0 004.95 0l2.5-2.5a3.5 3.5 0 00-4.95-4.95l-1.25 1.25zm-4.69 9.64a2 2 0 010-2.83l2.5-2.5a2 2 0 012.83 0 .75.75 0 001.06-1.06 3.5 3.5 0 00-4.95 0l-2.5 2.5a3.5 3.5 0 004.95 4.95l1.25-1.25a.75.75 0 00-1.06-1.06l-1.25 1.25a2 2 0 01-2.83 0z"></path></svg></a>LICENSE</h4>
  <p dir="auto">The source code files in this repository (excluding model weights) are released under MIT license.</p>
288
 
289
  <p>
  The model is licensed under the <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license" style="text-decoration: underline;" target="_blank">CreativeML Open RAIL-M</a> license. The authors claim no rights on the outputs you generate; you are free to use them, and you are accountable for their use, which must not go against the provisions set in this license. The license forbids sharing any content that violates any laws, harms a person, disseminates personal information intended to cause harm, spreads misinformation, or targets vulnerable groups. For the full list of restrictions, please <a href="https://huggingface.co/spaces/CompVis/stable-diffusion-license" style="text-decoration: underline;" target="_blank">read the license</a>.</p>
  <p><h4>Biases and content acknowledgment</h4>
  Impressive as turning text into images is, be aware that this model may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography and violence. The model was trained on the <a href="https://laion.ai/blog/laion-5b/" style="text-decoration: underline;" target="_blank">LAION-5B dataset</a>, which scraped non-curated image-text pairs from the internet (with the exception of illegal content, which was removed) and is meant for research purposes. You can read more in the <a href="https://huggingface.co/CompVis/stable-diffusion-v1-4" style="text-decoration: underline;" target="_blank">model card</a>.</p>
- </div>
- </article>
+
+ </article>
+ </div>

  """
  )
  demo.launch()
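The closing triple-quote, parenthesis and demo.launch() lines suggest the HTML above is embedded in a Gradio app; the sketch below is only an assumption about that surrounding structure, which this diff does not show (the real app.py may, for instance, pass the HTML as the article string of a gr.Interface instead).

```python
import gradio as gr

# Hypothetical outline implied by the closing lines of the diff; details of the
# inputs, outputs and event handlers are not visible here.
with gr.Blocks() as demo:
    # ... image/mask inputs, prompt textbox, run button, result gallery, etc. ...
    gr.HTML(
        """
        <p>acknowledgments and license HTML as shown in the diff</p>
        """
    )

demo.launch()
```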
 