This model is awesome
Impressive model, very good at understanding prompts, with awesome results.
Where can I follow upcoming versions of this model?
This model is a .safetensors file that was posted on Civitai and converted to the Hugging Face Diffusers format. I have no idea where it was first published, but if you keep an eye on the following, you won't miss it.
iNiverse Mix XL (SFW & NSFW)
https://civitai.com/models/226533/iniverse-mix-xlsfw-and-nsfw?modelVersionId=608842
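For reference, that kind of conversion can be done with Diffusers' single-file loader. A minimal sketch, assuming an SDXL checkpoint; the local file name and output folder are hypothetical:

```python
from diffusers import StableDiffusionXLPipeline

# Load the single-file checkpoint downloaded from Civitai (hypothetical name).
pipe = StableDiffusionXLPipeline.from_single_file("iniverse_mix_xl.safetensors")
# Write out the multi-folder Diffusers layout, ready to upload as a model repo.
pipe.save_pretrained("iniverse-mix-xl")
```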
Thank you!
Can you upload FullyRealXL 10?
https://civitai.com/models/227059/fullyrealxl
This model has been removed, but if you find it somewhere else, can you upload it?
Sorry, I don't have one.😅
Hey @John6666, can you upload cookie-run-character-style?
https://civitai.com/models/16068/cookie-run-character-style
I like this model, by the way :p
Done. Since it's a LoRA, I just put the base model repo name in README.md.
https://huggingface.co/John6666/cookie-run-character-style-v1-sd15-lora
Any LoRA that works fine at weight 1.0 should work this way (as long as there is a working base model on Hugging Face).
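At inference time, that setup amounts to roughly the following. A minimal sketch for the SD1.5 case; the base repo id and the prompt are assumptions:

```python
import torch
from diffusers import StableDiffusionPipeline

# The base model named in the LoRA's README.md (this repo id is an assumption).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# The LoRA is applied on top at the default weight of 1.0.
pipe.load_lora_weights("John6666/cookie-run-character-style-v1-sd15-lora")
image = pipe("cookie run style, 1girl, smiling").images[0]  # example prompt only
```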
Alright, thank you.
I tried to use the Flux LoRA I uploaded yesterday when we tested the base model, but it doesn't work.
Mysterious...
https://huggingface.co/CultriX/flux-nsfw-highress
I wonder if something is still missing from README.md? Or is it a private repo that doesn't go to warm status?
Why don't you try duplicating this repo with a repo duplicator, for example, and see if it works?
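If a duplicator Space isn't handy, the same thing can be scripted with huggingface_hub. A rough sketch; the destination repo id is hypothetical:

```python
from huggingface_hub import create_repo, snapshot_download, upload_folder

src = "CultriX/flux-nsfw-highress"        # repo to duplicate
dst = "your-username/flux-nsfw-highress"  # hypothetical destination repo id

local_dir = snapshot_download(src, repo_type="model")  # download a full snapshot
create_repo(dst, repo_type="model", exist_ok=True)     # create the new repo
upload_folder(folder_path=local_dir, repo_id=dst, repo_type="model")
```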
Yeah, I have seen this page, and I wonder how he did it.
But I would just like to have base model (Pony and Flux) repos that I can link to whichever LoRA I want.
Hmmm... what else is suspicious in README.md... how about the following?
instance_prompt: nsfw
Put the LoRA trigger word in this field. It should save you a lot of typing, and in the case of Flux, it might be essential.
SDXL and SD1.5 LoRAs worked normally without this, but Flux seems to differ in more ways than expected.
---
base_model: black-forest-labs/FLUX.1-dev
tags:
- text-to-image
- lora
- diffusers
- template:sd-lora
instance_prompt: nsfw
license: apache-2.0
---
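Loaded manually with Diffusers, the Flux case would look roughly like this. A sketch; "nsfw" is the trigger word from the instance_prompt line above, and the rest of the prompt is just an example:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("CultriX/flux-nsfw-highress")
# "nsfw" is the trigger word; the rest of the prompt is only an example.
image = pipe("nsfw, photorealistic portrait").images[0]
```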
test
Hello John,
I'm trying to set up a Pony base model with one Pony LoRA, so I did all the steps above, and I got this error:
===== Application Startup at 2024-09-22 10:28:31 =====
Fetching model from: https://huggingface.co/iafun/bulmgt
Caching examples at: '/home/user/app/gradio_cached_examples/13'
Caching example 1/1
Traceback (most recent call last):
File "/home/user/app/app.py", line 4, in
demo = gr.load(repo, src="models", hf_token=os.environ.get("HF_TOKEN")).launch()
File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 60, in load
return load_blocks_from_repo(
File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 99, in load_blocks_from_repo
blocks: gradio.Blocks = factory_methods[src](name, hf_token, alias, **kwargs)
File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 395, in from_model
interface = gradio.Interface(**kwargs)
File "/usr/local/lib/python3.10/site-packages/gradio/interface.py", line 532, in init
self.render_examples()
File "/usr/local/lib/python3.10/site-packages/gradio/interface.py", line 880, in render_examples
self.examples_handler = Examples(
File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 81, in create_examples
examples_obj.create()
File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 340, in create
self._start_caching()
File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 391, in _start_caching
client_utils.synchronize_async(self.cache)
File "/usr/local/lib/python3.10/site-packages/gradio_client/utils.py", line 855, in synchronize_async
return fsspec.asyn.sync(fsspec.asyn.get_loop(), func, *args, **kwargs) # type: ignore
File "/usr/local/lib/python3.10/site-packages/fsspec/asyn.py", line 103, in sync
raise return_result
File "/usr/local/lib/python3.10/site-packages/fsspec/asyn.py", line 56, in _runner
result[0] = await coro
File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 517, in cache
prediction = await Context.root_block.process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1945, in process_api
data = await self.postprocess_data(block_fn, result["prediction"], state)
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1768, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "/usr/local/lib/python3.10/site-packages/gradio/components/image.py", line 226, in postprocess
saved = image_utils.save_image(value, self.GRADIO_CACHE, self.format)
File "/usr/local/lib/python3.10/site-packages/gradio/image_utils.py", line 72, in save_image
raise ValueError(
ValueError: Cannot process this value as an Image, it is of type: <class 'tuple'>
Fetching model from: https://huggingface.co/iafun/bulmgt
Caching examples at: '/home/user/app/gradio_cached_examples/13'
Caching example 1/1
ValueError: Cannot process this value as an Image, it is of type: <class 'tuple'>
What is it?
I know it's returning tuples that Gradio can't handle, but the behaviour may change depending on the contents of README.md, so it may or may not raise an error.
What about removing the pipeline and library lines from the LoRA's README.md?
At least the behaviour is likely to change.
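For context, the traceback shows the whole Space is a single gr.load() call. Reconstructed with its imports, app.py is roughly:

```python
import os
import gradio as gr

# Line 4 of app.py from the traceback; the repo id comes from the
# "Fetching model from" line in the log.
repo = "iafun/bulmgt"
demo = gr.load(repo, src="models", hf_token=os.environ.get("HF_TOKEN")).launch()
```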
I'm going to try.
I have this error now. I'm not going to push too hard if it doesn't work, but I was glad to see the Space build run for a long time before it crashed. Do you feel I'm close to getting the install working, or is it wiser to give up?
===== Application Startup at 2024-09-22 10:53:30 =====
Fetching model from: https://huggingface.co/iafun/bulmgt
Caching examples at: '/home/user/app/gradio_cached_examples/13'
Caching example 1/1
Traceback (most recent call last):
File "/home/user/app/app.py", line 4, in
demo = gr.load(repo, src="models", hf_token=os.environ.get("HF_TOKEN")).launch()
File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 60, in load
return load_blocks_from_repo(
File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 99, in load_blocks_from_repo
blocks: gradio.Blocks = factory_methods[src](name, hf_token, alias, **kwargs)
File "/usr/local/lib/python3.10/site-packages/gradio/external.py", line 395, in from_model
interface = gradio.Interface(**kwargs)
File "/usr/local/lib/python3.10/site-packages/gradio/interface.py", line 532, in init
self.render_examples()
File "/usr/local/lib/python3.10/site-packages/gradio/interface.py", line 880, in render_examples
self.examples_handler = Examples(
File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 81, in create_examples
examples_obj.create()
File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 340, in create
self._start_caching()
File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 391, in _start_caching
client_utils.synchronize_async(self.cache)
File "/usr/local/lib/python3.10/site-packages/gradio_client/utils.py", line 855, in synchronize_async
return fsspec.asyn.sync(fsspec.asyn.get_loop(), func, *args, **kwargs) # type: ignore
File "/usr/local/lib/python3.10/site-packages/fsspec/asyn.py", line 103, in sync
raise return_result
File "/usr/local/lib/python3.10/site-packages/fsspec/asyn.py", line 56, in _runner
result[0] = await coro
File "/usr/local/lib/python3.10/site-packages/gradio/helpers.py", line 517, in cache
prediction = await Context.root_block.process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1945, in process_api
data = await self.postprocess_data(block_fn, result["prediction"], state)
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1768, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "/usr/local/lib/python3.10/site-packages/gradio/components/image.py", line 226, in postprocess
saved = image_utils.save_image(value, self.GRADIO_CACHE, self.format)
File "/usr/local/lib/python3.10/site-packages/gradio/image_utils.py", line 72, in save_image
raise ValueError(
ValueError: Cannot process this value as an Image, it is of type: <class 'tuple'>
Fetching model from: https://huggingface.co/iafun/bulmgt
Caching examples at: '/home/user/app/gradio_cached_examples/13'
Caching example 1/1
Loading a large SDXL model can take roughly three minutes; Flux would take even longer. 😭
So, a possible sign of success...
It might be more stable to use Animagine, which is constantly used and therefore cached, or a suitable Pony model as the base model. This is only for testing purposes anyway.
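Switching the base model is a one-line metadata edit. A sketch using huggingface_hub's metadata_update; the LoRA repo id is hypothetical:

```python
from huggingface_hub import metadata_update

# Point the LoRA's README.md at an always-cached base model.
metadata_update(
    "your-username/some-pony-lora",  # hypothetical LoRA repo
    {"base_model": "cagliostrolab/animagine-xl-3.1"},
    overwrite=True,  # required to replace an existing key
)
```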
If the model is in warm status, the load time is practically zero.
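Warm status can also be checked programmatically. A sketch, assuming huggingface_hub's InferenceClient:

```python
from huggingface_hub import InferenceClient

# ModelStatus.state is e.g. "Loaded" (warm) or "Loadable" (cold).
status = InferenceClient().get_model_status("cagliostrolab/animagine-xl-3.1")
print(status.loaded, status.state)
```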