Spaces: Runtime error
SusiePHaltmann committed
Commit 22edb1a • Parent(s): 86e6fbb
Update app.py
app.py
CHANGED
@@ -7,7 +7,20 @@ def main():
 
 slider = st.slider("Slider", 0, 255, 128) # default value=128, min=0, max=255
 st.title("Haltmann Diffusion Algorithm [C] 20XX ")
-
+st.title('Dalle PLFT BETA App 0.1a X')
+
+# Get user input via a text box - this will be the URL of the image to edit.
+url = st.text_input("Enter URL of image to edit")
+
+# Load the image from the URL using Pillow. We'll need to use BytesIO instead of just passing in the URL since Pillow expects a file object.
+response = requests.get(url)
+img = Image.open(BytesIO(response.content))
+
+# Resize the image so it's not giant - makes everything run faster!
+img = img.resize((600,400))
+
+# Convert the image to grayscale for simpler processing
+# gray_img = img.convert('L') <-- You can experiment with commenting this line out if you want color halftoning! It tends to produce better results on images that are already pretty low contrast though (like screenshots). Grayscale conversion often introduces additional artifacts too, like banding or posterization, which may or may not look good depending on your original image and what effect you're going for... So feel free to play around with whether or not you convert to grayscale here! If you do leave it commented out, make sure you change all references below from 'gray_img' --> 'img'. Also make sure that when we paste back into our final result at the end we use 'paste(img)' instead of 'paste(gray_img)'. One other thing worth noting is that some older versions of Pillow don't support .convert('LA'), which is needed for doing alpha compositing with our resulting dithered PNGs later on - newer versions added support starting in late 2019 I believe... In any case, if your version doesn't have it then converting directly to 'L' will work fine and just ignore transparency entirely. gray_img = img.convert('LA') <-- Uncomment this line instead if using a more recent version of Pillow (>= 6?) supporting .convert('LA'). This converts our input images directly into grayscale AND adds a full 8-bit alpha channel simultaneously, allowing easy access later when creating composite masks while avoiding having separate GRAYSCALE & RGB versions floating around taking up memory. Although technically speaking, even converting straight to 'L' above should add an empty/fully transparent alpha channel by default anyway, right? Not 100% sure... Either way, including 'A' above shouldn't hurt anything, so might as well include it regardless :) [Update 12/2019]: Apparently there's now an even easier way than LA conversion, thanks to @jeremycole who pointed me towards https://github.com/python-pillow/Pillow/issues/3973#issuecomment-529083824 ! Now we can simply pass `mode='1'` when loading our original input images and THAT automatically sets them up ready for 1-bit dithering without needing any further conversions!! Awesome :D Try uncommenting THIS line below along with ALL subsequent references throughout the rest of the notebook, switching from `gray_img` --> `onebit_image`. Should work identically otherwise :) onebit_image = ImageOps
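The long comment above weighs `convert('L')`, `convert('LA')`, and 1-bit dithered conversion against each other. The trade-offs are easy to try directly with a tiny in-memory image; this is an illustrative sketch, not part of the commit (the solid test color is an assumption):

```python
from PIL import Image

# Stand-in for the downloaded photo: a small solid-color RGB image.
img = Image.new("RGB", (600, 400), color=(120, 60, 200))

# Plain grayscale: a single 8-bit luminance channel.
gray_img = img.convert("L")

# Grayscale plus an alpha channel, useful for compositing masks later.
gray_alpha = img.convert("LA")

# 1-bit mode: Pillow applies Floyd-Steinberg dithering by default
# when converting a richer mode down to mode "1".
onebit_image = img.convert("1")

print(gray_img.mode, gray_alpha.mode, onebit_image.mode)
```

Note that `convert("L")` does not add an alpha channel (the comment's uncertainty on that point resolves to "no"); if you need transparency, `LA` is the mode that carries it.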
 
 
 import streamlit as st
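The hunk adds calls to `requests`, `Image`, and `BytesIO` but shows no imports for them, and `import streamlit as st` sits at line 26, after `st` is first used; if those imports aren't elsewhere in the file, that alone would produce a NameError consistent with the Space's "Runtime error" status. The fetch-and-resize steps can be exercised offline; in this sketch an in-memory PNG stands in for `response.content` (an assumption made purely so the snippet runs without a network call):

```python
from io import BytesIO

from PIL import Image

# Simulate the downloaded bytes: encode a test image to PNG in memory,
# then reopen it from BytesIO exactly as the app does with response.content.
src = Image.new("RGB", (1200, 800), color=(30, 144, 255))
buf = BytesIO()
src.save(buf, format="PNG")

img = Image.open(BytesIO(buf.getvalue()))  # Pillow wants a file object, not a URL
img = img.resize((600, 400))               # shrink so later processing is fast
print(img.size, img.mode)
```

In the real app, guarding the fetch with `if url:` would also avoid calling `requests.get("")` on the first render, before the user has typed anything into the text box.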