Spaces: Running on Zero
How do I use this space with the API?
I copied the example, set handle_file to None, reduced the step size, and increased the GPU time, but nothing works; I keep getting either gradio_client.exceptions.AppError: ValueError or gradio_client.exceptions.AppError: GPU task aborted.
from gradio_client import Client, handle_file
from huggingface_hub import login

login("hf_xxxxxx")
client = Client("John6666/DiffuseCraftMod")
result = client.predict(
    param_0="Hello!!",
    param_1="lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, worst quality, low quality, very displeasing, (bad)",
    param_2=1,
    param_3=5,
    param_4=7,
    param_5=False,
    param_6=-1,
    param_7="",
    param_8=1,
    param_9="",
    param_10=1,
    param_11="",
    param_12=1,
    param_13="",
    param_14=1,
    param_15="",
    param_16=1,
    param_17="",
    param_18=1,
    param_19="",
    param_20=1,
    param_21="Euler",
    param_22="Automatic",
    param_23="Automatic",
    param_24=1024,
    param_25=1024,
    param_26="votepurchase/animagine-xl-3.1",
    param_27="None",
    param_28="txt2img",
    param_29=None,
    param_30="Canny",
    param_31=512,
    param_32=1024,
    param_33=["(No style)"],
    param_34=None,
    param_35=None,
    param_36=0.55,
    param_37=100,
    param_38=200,
    param_39=0.1,
    param_40=0.1,
    param_41=1,
    param_42=9,
    param_43=1,
    param_44=0,
    param_45=1,
    param_46=False,
    param_47="Classic",
    param_48=None,
    param_49=1.2,
    param_50=0,
    param_51=8,
    param_52=30,
    param_53=0.55,
    param_54="Use same sampler",
    param_55="Hello!!",
    param_56="Hello!!",
    param_57=False,
    param_58=True,
    param_59="Use same schedule type",
    param_60=-1,
    param_61="Automatic",
    param_62=1,
    param_63=True,
    param_64=False,
    param_65=True,
    param_66=False,
    param_67=False,
    param_68="model,seed",
    param_69="./images/",
    param_70=False,
    param_71=False,
    param_72=False,
    param_73=True,
    param_74=1,
    param_75=0.55,
    param_76=False,
    param_77=False,
    param_78=False,
    param_79=True,
    param_80=False,
    param_81="Use same sampler",
    param_82=False,
    param_83="Hello!!",
    param_84="Hello!!",
    param_85=0.35,
    param_86=False,
    param_87=True,
    param_88=False,
    param_89=4,
    param_90=4,
    param_91=32,
    param_92=False,
    param_93="Hello!!",
    param_94="Hello!!",
    param_95=0.35,
    param_96=False,
    param_97=True,
    param_98=False,
    param_99=4,
    param_100=4,
    param_101=32,
    param_102=True,
    param_103=0,
    param_104=None,
    param_105=None,
    param_106="plus_face",
    param_107="original",
    param_108=0.7,
    param_109=None,
    param_110=None,
    param_111="base",
    param_112="style",
    param_113=0.7,
    param_114=0,
    param_115=None,
    param_116=1,
    param_117=0.5,
    param_118=False,
    param_119=False,
    param_120=50,
    api_name="/sd_gen_generate_pipeline",
)
print(result)
Thanks. There was an issue inside the Space. I think I fixed it.
However, I've never used this Space via the API, so I can't guarantee anything about the API...
Thanks, it works now. Loaded as API: https://john6666-diffusecraftmod.hf.space ✔
('GPU task complete in: 9 seconds', {'type': 'update'}, {'type': 'update'})
btw, why were the generation-step images also downloaded?
That's because the Space actually outputs images from the generation process for preview purposes.
For API usage, it might be a good idea to use the following Space instead: it has no previews, and the output is nearly the same.
https://huggingface.co/spaces/John6666/votepurchase-multiple-model
DiffuseCraft loaded the wrong model (probably the last-used model), so I need to add this call before sd_gen_generate_pipeline:
client.predict(
    model_name="votepurchase/animagine-xl-3.1",
    vae_model=None,
    task="txt2img",
    controlnet_model="Automatic",
    api_name="/load_new_model",
)
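To make sure the load always happens before generation, the call above can be wrapped in a small helper. A sketch; load_model is a hypothetical name, and the defaults mirror the values above:

```python
def load_model(client, model, vae=None, task="txt2img", controlnet="Automatic"):
    """Explicitly load the intended model so the Space doesn't reuse
    whatever model the previous caller left selected."""
    client.predict(
        model_name=model,
        vae_model=vae,
        task=task,
        controlnet_model=controlnet,
        api_name="/load_new_model",
    )
```

Call load_model(client, "votepurchase/animagine-xl-3.1") once per session before invoking /sd_gen_generate_pipeline.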
I tried votepurchase; you're right, it's more straightforward. Thanks.
Hi, just so you know: if someone else is using vp, my API call will use their model. In dc that's not a problem. https://huggingface.co/spaces/r3gm/DiffuseCraft/discussions/8
Sorry... and thanks! I totally forgot to make it work for multiple users at the same time... I think I fixed it.
Sorry to bother you again: is there any way the Python API can show info and errors like the web version does (pop-ups or progress indicators)? Right now it's very minimal; you either get the image or a null error. Don't bother if that's too much work :)
Hmm... Before even considering the burden, it's probably simply impossible given Gradio's API specifications...
Technically (ideally), there might be ways to implement a streaming API like OpenAI's LLM APIs, but there's likely no way to do that with Zero GPU + Gradio at present...
The endpoint performs fastest when operating in a "success or death, no progress displayed" manner. To show progress, the client needs to communicate with the server frequently just for the purpose of showing progress...
Try swapping client.predict for client.submit, though that one only returns intermediate results...
result = client.submit(<params>)
for data in result:
    print(data)
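The iteration above can be sketched offline with a stand-in generator, purely to show the control flow: each loop turn yields an intermediate result, and the last item is the final output. fake_job is hypothetical and not part of gradio_client:

```python
def fake_job(steps=3):
    """Stand-in for an iterable Job: intermediate previews first, final image last."""
    for i in range(1, steps + 1):
        yield f"preview {i}/{steps}"
    yield "final image"

for data in fake_job():
    print(data)  # the caller sees each intermediate result as it arrives
```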
Probably, given the Gradio API's design, we'd have to manually iterate the required number of times to get the progress...
Was there a standard streaming API...?
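For coarse progress today, polling the submitted Job seems to be the workable pattern: gradio_client Jobs expose done(), status(), and result(). A sketch, with run_with_progress as a hypothetical helper name:

```python
import time

def run_with_progress(client, *args, api_name="/sd_gen_generate_pipeline", poll=1.0, **kwargs):
    """Submit instead of predict, print coarse status while waiting, return the result."""
    job = client.submit(*args, api_name=api_name, **kwargs)
    while not job.done():
        print(job.status())  # queue position / running state, when the Space reports it
        time.sleep(poll)
    return job.result()
```

This trades a little extra traffic (one status poll per interval) for visibility, which matches the "communicate with the server frequently" caveat above.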