Error trying to duplicate
When trying to duplicate, I get the following error:
ValueError: Invalid model path: ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt
Has anyone successfully duplicated this Space?
Hi,
I have changed the path to ckpts. You can retry in 3 ways:
- Synchronize your space from this one
- Replace tencent_HunyuanVideo with ckpts in app.py
- Or duplicate your space a second time
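If you want a quick sanity check before retrying, the layout the Space expects (taken from the error message above) can be verified locally with a few lines like these. This is just a sketch for checking the paths, not part of the Space's code:

```python
from pathlib import Path

# Quick local check (sketch): verify the renamed checkpoint folder and the
# transformer weights are where app.py expects them.
models_root = Path("ckpts")
dit_weight = models_root / "hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt"

print("models_root exists:", models_root.is_dir())
print("dit_weight exists:", dit_weight.is_file())
```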
I duplicated the Space again and got this error:
ValueError: Invalid model path: ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt
So the same error.
I have added some logs. Do you see these ones in your logs?
initialize_model: ...
models_root exists: ...
Model initialized: ...
And also this one and the following? What is dit_weight: ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt
PS: I have slightly changed the code; that may fix the space.
This is the output when I just tried to duplicate. It is different from the previous errors.
runtime error
Exit code: 1. Reason: A
mp_rank_00_model_states_fp8.pt:  90%|█████████ | 11.9G/13.2G [00:09<00:01, 1.31GB/s]
mp_rank_00_model_states_fp8.pt: 100%|██████████| 13.2G/13.2G [00:10<00:00, 1.30GB/s]
mp_rank_00_model_states_fp8_map.pt:   0%|          | 0.00/104k [00:00<?, ?B/s]
mp_rank_00_model_states_fp8_map.pt: 100%|██████████| 104k/104k [00:00<00:00, 39.7MB/s]
hunyuan-video-t2v-720p/vae/config.json:   0%|          | 0.00/785 [00:00<?, ?B/s]
hunyuan-video-t2v-720p/vae/config.json: 100%|██████████| 785/785 [00:00<00:00, 8.40MB/s]
pytorch_model.pt:   0%|          | 0.00/986M [00:00<?, ?B/s]
pytorch_model.pt: 100%|██████████| 986M/986M [00:01<00:00, 918MB/s]
pytorch_model.pt: 100%|██████████| 986M/986M [00:02<00:00, 460MB/s]
initialize_model: ckpts
models_root exists: ckpts
2025-01-03 07:23:31.750 | INFO | hyvideo.inference:from_pretrained:154 - Got text-to-video model root path: ckpts
2025-01-03 07:23:31.974 | INFO | hyvideo.inference:from_pretrained:189 - Building model...
What is dit_weight: ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt
dit_weight.exists(): False
dit_weight.is_file(): False
dit_weight.is_dir(): False
dit_weight.is_symlink(): False
Traceback (most recent call last):
File "/home/user/app/app.py", line 170, in
demo = create_demo("ckpts")
File "/home/user/app/app.py", line 94, in create_demo
model = initialize_model(model_path)
File "/home/user/app/app.py", line 40, in initialize_model
hunyuan_video_sampler = HunyuanVideoSampler.from_pretrained(models_root_path, args=args)
File "/home/user/app/hyvideo/inference.py", line 203, in from_pretrained
model = Inference.load_state_dict(args, model, pretrained_model_path)
File "/home/user/app/hyvideo/inference.py", line 314, in load_state_dict
print('dit_weight.is_junction(): ' + str(dit_weight.is_junction()))
AttributeError: 'PosixPath' object has no attribute 'is_junction'
Container logs:
===== Application Startup at 2025-01-03 06:20:03 =====
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling transformers.utils.move_cache().
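As an aside, the AttributeError at the end of the traceback above is separate from the missing weights: Path.is_junction() only exists from Python 3.12 onwards, so that debug print itself crashes on the Space's Python 3.10. A guarded version of the print could look like this (a sketch, not the Space's actual code):

```python
from pathlib import Path

# Sketch: Path.is_junction() was added in Python 3.12 (junctions are a Windows
# concept), so guard the call on older interpreters such as the Space's 3.10.
dit_weight = Path("ckpts/hunyuan-video-t2v-720p/transformers/mp_rank_00_model_states.pt")

if hasattr(dit_weight, "is_junction"):
    print("dit_weight.is_junction():", dit_weight.is_junction())
else:
    print("dit_weight.is_junction(): not available on this Python version")
```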
OK, you can retry. (It now downloads with a snapshot instead of file by file.)
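For reference, the snapshot-style download amounts to something like the following; the repo_id and target folder here are my assumption of what the Space actually uses:

```python
from huggingface_hub import snapshot_download

# Download the whole checkpoints repo in one call instead of file by file.
# repo_id and local_dir are assumptions; adjust them to the Space's real setup.
snapshot_download(repo_id="tencent/HunyuanVideo", local_dir="ckpts")
```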
May I guess it's working now?
No, still getting an error. I just kind of got frustrated and gave up.
runtime error
Exit code: 1. Reason: coder model (llm) from: ./ckpts/text_encoder
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/transformers/utils/hub.py", line 403, in cached_file
resolved_file = hf_hub_download(
File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 106, in _inner_fn
validate_repo_id(arg_value)
File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 154, in validate_repo_id
raise HFValidationError(
huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './ckpts/text_encoder'. Use repo_type argument if needed.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/user/app/app.py", line 167, in
demo = create_demo("ckpts")
File "/home/user/app/app.py", line 86, in create_demo
model = initialize_model(model_path)
File "/home/user/app/app.py", line 32, in initialize_model
hunyuan_video_sampler = HunyuanVideoSampler.from_pretrained(models_root_path, args=args)
File "/home/user/app/hyvideo/inference.py", line 241, in from_pretrained
text_encoder = TextEncoder(
File "/home/user/app/hyvideo/text_encoder/init.py", line 180, in init
self.model, self.model_path = load_text_encoder(
File "/home/user/app/hyvideo/text_encoder/init.py", line 36, in load_text_encoder
text_encoder = AutoModel.from_pretrained(
File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 487, in from_pretrained
resolved_config_file = cached_file(
File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/transformers/utils/hub.py", line 469, in cached_file
raise EnvironmentError(
OSError: Incorrect path_or_model_id: './ckpts/text_encoder'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
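The HFValidationError above happens because transformers only treats './ckpts/text_encoder' as a local path when that folder actually exists; if it is missing, the string falls through to Hub repo-id validation. A defensive check along these lines (a sketch, not the Space's code) makes the failure mode explicit:

```python
import os
from transformers import AutoModel

text_encoder_path = "./ckpts/text_encoder"

# If the local folder is missing, from_pretrained treats the string as a Hub
# repo id, which is what produces the HFValidationError above.
if os.path.isdir(text_encoder_path):
    text_encoder = AutoModel.from_pretrained(text_encoder_path)
else:
    raise FileNotFoundError(
        f"{text_encoder_path} does not exist; prepare the text encoder there first."
    )
```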
FYI, I have updated the space thanks to your log
I duplicated the space and selected ZeroGPU.
Got this error
Traceback (most recent call last):
File "/home/user/app/app.py", line 22, in
preprocess_text_encoder_tokenizer(input_dir = "ckpts/llava-llama-3-8b-v1_1-transformers", output_dir = "ckpts/text_encoder")
TypeError: preprocess_text_encoder_tokenizer() got an unexpected keyword argument 'input_dir'
Hi,
I have fixed this bug. You can retry in 3 ways:
- Synchronize your space from this one (click on Settings and then on Synchronize)
- Copy/paste the new code in app.py
- Or duplicate your space a second time
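If anyone hits a similar keyword mismatch again, printing the helper's signature shows which parameters it actually accepts instead of guessing keyword names. The import path below is my guess at the module layout and may not match the Space exactly:

```python
import inspect

# Assumed import path; adjust to wherever the helper lives in the Space's code.
from hyvideo.utils.preprocess_text_encoder_tokenizer_utils import preprocess_text_encoder_tokenizer

# Prints the accepted parameter names, e.g. whether it wants input_dir/output_dir
# keywords, positional paths, or an args namespace.
print(inspect.signature(preprocess_text_encoder_tokenizer))
```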
Thanks. But after clicking "Generate", I got this error:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 536, in process_events
response = await route_utils.call_process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1935, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1520, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2505, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 1005, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 826, in wrapper
response = f(*args, **kwargs)
File "/home/user/app/app.py", line 161, in
fn=lambda *inputs: generate_video(model, *inputs),
File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 201, in gradio_handler
worker.arg_queue.put(((args, kwargs), GradioPartialContext.get()))
File "/usr/local/lib/python3.10/site-packages/spaces/utils.py", line 54, in put
raise PicklingError(message)
_pickle.PicklingError: cannot pickle '_io.TextIOWrapper' object
I have added the dill library because some people have solved the problem this way. You can retry.
@Fabrice-TIERCELIN Sorry, the same error still persists. I synced the changes.
I have changed some syntax that is possibly the root cause. I have also added some logs to pinpoint when the error happens. Do you see the following messages in the logs?
generate_video (prompt:
generate_video_gpu (prompt:
Predicting video...
Video predicted
I got this, right from the beginning.
Running on local URL: http://0.0.0.0:7860
To create a public link, set share=True in launch().
generate_video (prompt: A cat walks on the grass, realistic style.)
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/spaces/utils.py", line 46, in put
super().put(obj)
File "/usr/local/lib/python3.10/multiprocessing/queues.py", line 371, in put
obj = _ForkingPickler.dumps(obj)
File "/usr/local/lib/python3.10/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
TypeError: cannot pickle '_io.TextIOWrapper' object
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 536, in process_events
response = await route_utils.call_process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1935, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1520, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2505, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 1005, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 826, in wrapper
response = f(*args, **kwargs)
File "/home/user/app/app.py", line 61, in generate_video
return generate_video_gpu(
File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 201, in gradio_handler
worker.arg_queue.put(((args, kwargs), GradioPartialContext.get()))
File "/usr/local/lib/python3.10/site-packages/spaces/utils.py", line 54, in put
raise PicklingError(message)
_pickle.PicklingError: cannot pickle '_io.TextIOWrapper' object
You said here that you have solved the same error that @RageshAntony and I are facing in the comment above when using this space. How did you solve it? I can't find the fix in the two PR diffs.
@RageshAntony is running the current duplicated space on ZeroGPU, and I have no GPU.
I was able to duplicate with an A100, but got this error when trying to generate:
File "/home/user/app/hyvideo/modules/models.py", line 204, in forward
attn = attention(
File "/home/user/app/hyvideo/modules/attenion.py", line 108, in attention
x = flash_attn_varlen_func(
TypeError: 'NoneType' object is not callable
I have set the attention mode to torch instead of flash.
@johnblues , you can retry on A100.
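"torch instead of flash" refers to the attention backend: flash_attn_varlen_func ends up as None, presumably because flash-attn is not installed or failed to import, so calling it raises 'NoneType' object is not callable. The general pattern is a fallback to PyTorch's built-in scaled_dot_product_attention, sketched below with illustrative tensor shapes (this is not the Space's attenion.py code):

```python
import torch
import torch.nn.functional as F

# Sketch: fall back to PyTorch SDPA when flash-attn is unavailable.
try:
    from flash_attn import flash_attn_func
except ImportError:
    flash_attn_func = None

def attention(q, k, v):
    # q, k, v: (batch, heads, seq_len, head_dim)
    if flash_attn_func is not None:
        # flash-attn expects (batch, seq_len, heads, head_dim)
        out = flash_attn_func(q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2))
        return out.transpose(1, 2)
    return F.scaled_dot_product_attention(q, k, v)

q = k = v = torch.randn(1, 8, 16, 64)
print(attention(q, k, v).shape)  # torch.Size([1, 8, 16, 64])
```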
@Fabrice-TIERCELIN
After syncing, the build didn't start and it threw a configuration error:
No candidate PyTorch version found for ZeroGPU
I have removed torch==2.5.1 from the requirements, as that solved the problem in a similar case.
@RageshAntony , you can retry.
So it's not working yet.
Try a Guidance Scale of 7.
And if it's not better, try to increase the steps as much as you can while keeping generation under 2 minutes.
OK, if you want, you can try with any other parameters. Change the prompt to a simple one like "dog" or "tree".
Meanwhile, I will update the code and test options. Do not synchronize now.
Are you still on ZeroGPU or dedicated GPU?
Now you can synchronize.
Sorry to say, still the same issue.
Running on local URL: http://0.0.0.0:7860
To create a public link, set share=True in launch().
generate_video (prompt: A cat walks on the grass, steampunk world style )
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/spaces/utils.py", line 46, in put
super().put(obj)
File "/usr/local/lib/python3.10/multiprocessing/queues.py", line 371, in put
obj = _ForkingPickler.dumps(obj)
File "/usr/local/lib/python3.10/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
TypeError: cannot pickle '_io.TextIOWrapper' object
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/gradio/queueing.py", line 536, in process_events
response = await route_utils.call_process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1935, in process_api
result = await self.call_function(
File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 1520, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "/usr/local/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2461, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 962, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/site-packages/gradio/utils.py", line 826, in wrapper
response = f(*args, **kwargs)
File "/home/user/app/app.py", line 61, in generate_video
return generate_video_gpu(
File "/usr/local/lib/python3.10/site-packages/spaces/zero/wrappers.py", line 201, in gradio_handler
worker.arg_queue.put(((args, kwargs), GradioPartialContext.get()))
File "/usr/local/lib/python3.10/site-packages/spaces/utils.py", line 54, in put
raise PicklingError(message)
_pickle.PicklingError: cannot pickle '_io.TextIOWrapper' object
My duplicated repo:
https://huggingface.co/spaces/RageshAntony/HunyuanVideo
OK, I have replaced all the single quotes with double quotes to try to get rid of the error _pickle.PicklingError: cannot pickle '_io.TextIOWrapper' object.
Can someone synchronize with ZeroGPU and tell me if we get rid of the error _pickle.PicklingError: cannot pickle '_io.TextIOWrapper' object?
It would be a step forward, and if it works, I will raise this fix with the project.
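For context on this error: the ZeroGPU wrapper (spaces/zero/wrappers.py in the tracebacks) pickles the decorated function's arguments to hand them to the GPU worker, and something in those arguments, most likely the model object, holds an open file handle (a TextIOWrapper), which cannot be pickled. One possible direction, shown only as a sketch and not as the Space's actual fix, is to keep the model out of the argument list and reference it as a module-level object inside the GPU-decorated function:

```python
import gradio as gr
import spaces

# Sketch only (not the Space's actual code): keep the unpicklable model as a
# module-level object so it is never pushed through the wrapper's arg queue.
# "initialize_model" and "model.predict" are placeholders for the real API.
model = None  # e.g. model = initialize_model("ckpts") at import time

@spaces.GPU(duration=120)
def generate_video_gpu(prompt):
    # 'model' is resolved here, inside the worker, instead of being pickled.
    return model.predict(prompt)

def generate_video(prompt):
    return generate_video_gpu(prompt)

with gr.Blocks() as demo:
    prompt = gr.Textbox(label="Prompt")
    output = gr.Video(label="Output")
    gr.Button("Generate").click(fn=generate_video, inputs=prompt, outputs=output)
```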