Is it not working now?

#21
by kama12 - opened

File "D:\Ai art\move to move\1\Rerender\app.py", line 696, in
input_path = gr.Video(label='Input Video',
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\naver\anaconda3\Lib\site-packages\gradio\component_meta.py", line 159, in wrapper
return fn(self, **kwargs)
^^^^^^^^^^^^^^^^^^
TypeError: Video.__init__() got an unexpected keyword argument 'source'

I can't get this to run locally.

Install the right version of Gradio: 3.44.4.
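If a newer Gradio is already installed, pinning it in the same environment should work (assuming a pip-based setup; adjust accordingly if you manage packages with conda):

pip install gradio==3.44.4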


Thank you so much, but I have a new problem.

I installed "triton-2.0.0-cp310-cp310-win_amd64.whl",
and I don't think it is working.
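(A quick, purely hypothetical check of whether that wheel matches the interpreter and provides the submodule xformers looks for, run from the same Anaconda prompt used to start app.py:

python -c "import sys; print(sys.version); import triton.language"

If that import fails, the Triton install does not fit this environment. Note that the xformers message below is only a warning, so it should not by itself stop the app.)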

A matching Triton is not available, some optimizations will not be enabled
Traceback (most recent call last):
File "C:\Users\naver\anaconda3\Lib\site-packages\xformers_init_.py", line 55, in _is_triton_available
from xformers.triton.softmax import softmax as triton_softmax # noqa
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\naver\anaconda3\Lib\site-packages\xformers\triton\softmax.py", line 14, in
from xformers.triton.k_softmax import _softmax, _softmax_backward
File "C:\Users\naver\anaconda3\Lib\site-packages\xformers\triton\k_softmax.py", line 8, in
import triton.language as tl
ModuleNotFoundError: No module named 'triton.language'
logging improved.
Caching examples at: 'D:\Ai art\move to move\Rerender_A_Video\1\Rerender\gradio_cached_examples\71'
Caching example 1/3
C:\Users\naver\anaconda3\Lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
rank_zero_deprecation(
ControlLDM: Running in eps-prediction mode
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla-xformers' with 512 in_channels
building MemoryEfficientAttnBlock with 512 in_channels...
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 1280, context_dim is 768 and using 8 heads.
Loaded model config from [./ControlNet/models/cldm_v15.yaml]
Loaded state_dict from [C:\Users\naver\.cache\huggingface\hub\models--lllyasviel--ControlNet\snapshots\e78a8c4a5052a238198043ee5c0cb44e22abb9f7\models\control_sd15_canny.pth]
Traceback (most recent call last):
File "D:\Ai art\move to move\Rerender_A_Video\1\Rerender\app.py", line 926, in
gr.Examples(examples=args_list,
File "C:\Users\naver\anaconda3\Lib\site-packages\gradio\helpers.py", line 75, in create_examples
client_utils.synchronize_async(examples_obj.create)
File "C:\Users\naver\anaconda3\Lib\site-packages\gradio_client\utils.py", line 540, in synchronize_async
return fsspec.asyn.sync(fsspec.asyn.get_loop(), func, *args, **kwargs) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\naver\anaconda3\Lib\site-packages\fsspec\asyn.py", line 103, in sync
raise return_result
File "C:\Users\naver\anaconda3\Lib\site-packages\fsspec\asyn.py", line 56, in _runner
result[0] = await coro
^^^^^^^^^^
File "C:\Users\naver\anaconda3\Lib\site-packages\gradio\helpers.py", line 277, in create
await self.cache()
File "C:\Users\naver\anaconda3\Lib\site-packages\gradio\helpers.py", line 337, in cache
prediction = await Context.root_block.process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\naver\anaconda3\Lib\site-packages\gradio\blocks.py", line 1437, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\naver\anaconda3\Lib\site-packages\gradio\blocks.py", line 1109, in call_function
prediction = await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\naver\anaconda3\Lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\naver\anaconda3\Lib\site-packages\anyio_backends_asyncio.py", line 2134, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "C:\Users\naver\anaconda3\Lib\site-packages\anyio_backends_asyncio.py", line 851, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\naver\anaconda3\Lib\site-packages\gradio\utils.py", line 650, in wrapper
response = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "C:\Users\naver\anaconda3\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai art\move to move\Rerender_A_Video\1\Rerender\app.py", line 302, in process0
return process(*args[1:])
^^^^^^^^^^^^^^^^^^
File "C:\Users\naver\anaconda3\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai art\move to move\Rerender_A_Video\1\Rerender\app.py", line 291, in process
first_frame = process1(*args)
^^^^^^^^^^^^^^^
File "C:\Users\naver\anaconda3\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\Ai art\move to move\Rerender_A_Video\1\Rerender\app.py", line 311, in process1
global_state.update_sd_model(cfg.sd_model, cfg.control_type)
File "D:\Ai art\move to move\Rerender_A_Video\1\Rerender\app.py", line 111, in update_sd_model
model.load_state_dict(
File "C:\Users\naver\anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 2153, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for ControlLDM:
Unexpected key(s) in state_dict: "cond_stage_model.transformer.text_model.embeddings.position_ids".

Please help ...

Your transformers and diffusers packages may not be the correct versions?
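To check which versions are actually installed in that environment (assuming a pip-based install), something like this should list them:

pip show transformers diffusers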

I set it up the same way, but nothing has changed....

(The console output is identical to the log posted above: the same Triton warning, the same model-loading messages, and the same final error:

RuntimeError: Error(s) in loading state_dict for ControlLDM:
Unexpected key(s) in state_dict: "cond_stage_model.transformer.text_model.embeddings.position_ids".)

See
https://github.com/williamyang1991/Rerender_A_Video/issues/106
The same issue was solved there by downgrading the transformers version.
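If you want to try the downgrade route, the commonly reported fix for this particular unexpected position_ids key is to pin transformers below 4.31 (the exact bound is an assumption based on that issue; check the version pinned in the repository's requirements file):

pip install "transformers<4.31"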


My transformers version is 4.19.2...

As I explained in that issue, we use the ControlNet code directly, without modifying the text_model.
So the best approach is to raise an issue or search for a solution in the ControlNet repository, for example https://github.com/lllyasviel/ControlNet/issues/532
Your error is just about loading the model. Why not try strict=False?

Sorry, I'm not a programmer...
I looked at the error output and searched the internet for an answer,
so I don't know what strict=False is.

How do I use strict=False?

model.load_state_dict(XXXX)
-->
model.load_state_dict(XXXX, strict=False)
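In your traceback this is the load_state_dict call around line 111 of app.py, inside update_sd_model. A minimal sketch of the change, keeping whatever the file already passes as the first argument:

# app.py, inside update_sd_model(); keep the existing first argument unchanged
# and only add strict=False:
model.load_state_dict(
    ...,           # whatever state dict app.py already loads here
    strict=False,  # report, rather than raise on, the unexpected "position_ids" key
)

With strict=False, PyTorch returns the mismatched keys instead of raising a RuntimeError, which is exactly what this unexpected position_ids key triggers.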

Do I fix that in app.py?

Which version of Python should I use?
And which versions of PyTorch and Python are required?
Python is 3.10.6, right?
Also, webdataset doesn't have a 0.2.5 version...
