runtime error

Exit code: 1. Reason:
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 369, in http_get
    r = _request_wrapper(
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 302, in _request_wrapper
    hf_raise_for_status(response)
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 477, in hf_raise_for_status
    raise _format(HfHubHTTPError, str(e), response) from e
huggingface_hub.errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/s3proxy?GET=https%3A%2F%2Fs3.us-east-1.amazonaws.com%2Flfs.huggingface.co%2Frepos%2F63%2Fd3%2F63d375d7768aae89cb7d0a9b0f9e56df29239507b87d149f405c0f48bde326fa%2F47eae01cad5ab095c15e12cfec7c0365bad653d00ff5e228c9f558ba817f786d%3FX-Amz-Algorithm%3DAWS4-HMAC-SHA256%26X-Amz-Content-Sha256%3DUNSIGNED-PAYLOAD%26X-Amz-Credential%3DAKIA4N7VTDGOYNNAVQWR%252F20250217%252Fus-east-1%252Fs3%252Faws4_request%26X-Amz-Date%3D20250217T170804Z%26X-Amz-Expires%3D3600%26X-Amz-Signature%3Daf33572ba762dad2a35934b37ccc05bd4ba5f5d7af74d81a444f12911f604bef%26X-Amz-SignedHeaders%3Dhost%26response-content-disposition%3Dinline%253B%2520filename%252A%253DUTF-8%2527%2527vd-four-flow-v1-0-fp16.pth%253B%2520filename%253D%2522vd-four-flow-v1-0-fp16.pth%2522%253B%26x-id%3DGetObject&HEAD=https%3A%2F%2Fs3.us-east-1.amazonaws.com%2Flfs.huggingface.co%2Frepos%2F63%2Fd3%2F63d375d7768aae89cb7d0a9b0f9e56df29239507b87d149f405c0f48bde326fa%2F47eae01cad5ab095c15e12cfec7c0365bad653d00ff5e228c9f558ba817f786d%3FX-Amz-Algorithm%3DAWS4-HMAC-SHA256%26X-Amz-Content-Sha256%3DUNSIGNED-PAYLOAD%26X-Amz-Credential%3DAKIA4N7VTDGOYNNAVQWR%252F20250217%252Fus-east-1%252Fs3%252Faws4_request%26X-Amz-Date%3D20250217T170804Z%26X-Amz-Expires%3D3600%26X-Amz-Signature%3D8e58da039b7cada5a31a6d107bfe72879ad1eb412b42338a4a72593ebbdaa667%26X-Amz-SignedHeaders%3Dhost&sign=eyJhbGciOiJIUzI1NiJ9.eyJyZWRpcmVjdF9kb21haW4iOiJzMy51cy1lYXN0LTEuYW1hem9uYXdzLmNvbSIsImlhdCI6MTczOTgxMjA4NCwiZXhwIjoxNzM5ODk4NDg0LCJpc3MiOiJodHRwczovL2h1Z2dpbmdmYWNlLmNvIn0.ohTyZ6tLHZAEvbTYnzN97R3S-XJcHiEY2tFg4cczjW4

Container logs:

===== Application Startup at 2025-02-17 17:05:01 =====


########
# v1.0 #
########


#######################
# Running in eps mode #
#######################

making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels


kl-f8.pth:   0%|          | 0.00/335M [00:00<?, ?B/s]
kl-f8.pth: 100%|█████████▉| 335M/335M [00:00<00:00, 411MB/s]
Load hfm from shi-labs/versatile-diffusion-model/pretrained_pth/kl-f8.pth
Load autoencoderkl with total 83653863 parameters,72921.759 parameter sum.
Load optimus_bert_connector with total 109489920 parameters,19050.707 parameter sum.
Load optimus_gpt2_connector with total 132109824 parameters,19032.043 parameter sum.


optimus-vae.pth:   0%|          | 0.00/1.02G [00:00<?, ?B/s]

optimus-vae.pth: 100%|█████████▉| 1.02G/1.02G [00:01<00:00, 581MB/s]
Load hfm from shi-labs/versatile-diffusion-model/pretrained_pth/optimus-vae.pth
Load optimus_vae_next with total 241599744 parameters,-344611.688 parameter sum.
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.


0it [00:00, ?it/s]
0it [00:00, ?it/s]
/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torchvision/datapoints/__init__.py:12: UserWarning: The torchvision.datapoints and torchvision.transforms.v2 namespaces are still Beta. While we do not expect major breaking changes, some APIs may still change according to user feedback. Please submit any feedback you may have in this issue: https://github.com/pytorch/vision/issues/6753, and you can also check out https://github.com/pytorch/vision/issues/7319 to learn more about the APIs that we suspect might involve future changes. You can silence this warning by calling torchvision.disable_beta_transforms_warning().
  warnings.warn(_BETA_TRANSFORMS_WARNING)
/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torchvision/transforms/v2/__init__.py:54: UserWarning: The torchvision.datapoints and torchvision.transforms.v2 namespaces are still Beta. While we do not expect major breaking changes, some APIs may still change according to user feedback. Please submit any feedback you may have in this issue: https://github.com/pytorch/vision/issues/6753, and you can also check out https://github.com/pytorch/vision/issues/7319 to learn more about the APIs that we suspect might involve future changes. You can silence this warning by calling torchvision.disable_beta_transforms_warning().
  warnings.warn(_BETA_TRANSFORMS_WARNING)


tokenizer_config.json:   0%|          | 0.00/905 [00:00<?, ?B/s]
tokenizer_config.json: 100%|██████████| 905/905 [00:00<00:00, 5.49MB/s]


vocab.json:   0%|          | 0.00/961k [00:00<?, ?B/s]
vocab.json: 100%|██████████| 961k/961k [00:00<00:00, 15.9MB/s]


merges.txt:   0%|          | 0.00/525k [00:00<?, ?B/s]
merges.txt: 100%|██████████| 525k/525k [00:00<00:00, 23.1MB/s]


special_tokens_map.json:   0%|          | 0.00/389 [00:00<?, ?B/s]
special_tokens_map.json: 100%|██████████| 389/389 [00:00<00:00, 4.19MB/s]


tokenizer.json:   0%|          | 0.00/2.22M [00:00<?, ?B/s]
tokenizer.json: 100%|██████████| 2.22M/2.22M [00:00<00:00, 56.8MB/s]


preprocessor_config.json:   0%|          | 0.00/316 [00:00<?, ?B/s]
preprocessor_config.json: 100%|██████████| 316/316 [00:00<00:00, 963kB/s]


config.json:   0%|          | 0.00/4.52k [00:00<?, ?B/s]
config.json: 100%|██████████| 4.52k/4.52k [00:00<00:00, 26.9MB/s]


model.safetensors:   0%|          | 0.00/1.71G [00:00<?, ?B/s]

model.safetensors: 100%|█████████▉| 1.71G/1.71G [00:02<00:00, 630MB/s]
Load clip_image_context_encoder with total 427616513 parameters,64007.510 parameter sum.
Load clip_text_context_encoder with total 427616513 parameters,64007.510 parameter sum.
Load openai_unet_2d_next with total 859520964 parameters,100333.490 parameter sum.
Load openai_unet_0d_next with total 1706797888 parameters,250095.415 parameter sum.
Load vd_v2_0 with total 3746805485 parameters,206753.996 parameter sum.

###################
# Running in FP16 #
###################

Traceback (most recent call last):
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 406, in hf_raise_for_status
    response.raise_for_status()
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/requests/models.py", line 1024, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/s3proxy?GET=https%3A%2F%2Fs3.us-east-1.amazonaws.com%2Flfs.huggingface.co%2Frepos%2F63%2Fd3%2F63d375d7768aae89cb7d0a9b0f9e56df29239507b87d149f405c0f48bde326fa%2F47eae01cad5ab095c15e12cfec7c0365bad653d00ff5e228c9f558ba817f786d%3FX-Amz-Algorithm%3DAWS4-HMAC-SHA256%26X-Amz-Content-Sha256%3DUNSIGNED-PAYLOAD%26X-Amz-Credential%3DAKIA4N7VTDGOYNNAVQWR%252F20250217%252Fus-east-1%252Fs3%252Faws4_request%26X-Amz-Date%3D20250217T170804Z%26X-Amz-Expires%3D3600%26X-Amz-Signature%3Daf33572ba762dad2a35934b37ccc05bd4ba5f5d7af74d81a444f12911f604bef%26X-Amz-SignedHeaders%3Dhost%26response-content-disposition%3Dinline%253B%2520filename%252A%253DUTF-8%2527%2527vd-four-flow-v1-0-fp16.pth%253B%2520filename%253D%2522vd-four-flow-v1-0-fp16.pth%2522%253B%26x-id%3DGetObject&HEAD=https%3A%2F%2Fs3.us-east-1.amazonaws.com%2Flfs.huggingface.co%2Frepos%2F63%2Fd3%2F63d375d7768aae89cb7d0a9b0f9e56df29239507b87d149f405c0f48bde326fa%2F47eae01cad5ab095c15e12cfec7c0365bad653d00ff5e228c9f558ba817f786d%3FX-Amz-Algorithm%3DAWS4-HMAC-SHA256%26X-Amz-Content-Sha256%3DUNSIGNED-PAYLOAD%26X-Amz-Credential%3DAKIA4N7VTDGOYNNAVQWR%252F20250217%252Fus-east-1%252Fs3%252Faws4_request%26X-Amz-Date%3D20250217T170804Z%26X-Amz-Expires%3D3600%26X-Amz-Signature%3D8e58da039b7cada5a31a6d107bfe72879ad1eb412b42338a4a72593ebbdaa667%26X-Amz-SignedHeaders%3Dhost&sign=eyJhbGciOiJIUzI1NiJ9.eyJyZWRpcmVjdF9kb21haW4iOiJzMy51cy1lYXN0LTEuYW1hem9uYXdzLmNvbSIsImlhdCI6MTczOTgxMjA4NCwiZXhwIjoxNzM5ODk4NDg0LCJpc3MiOiJodHRwczovL2h1Z2dpbmdmYWNlLmNvIn0.ohTyZ6tLHZAEvbTYnzN97R3S-XJcHiEY2tFg4cczjW4

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/user/app/app.py", line 582, in <module>
    vd_inference = vd_inference(which='v1.0', fp16=True)
  File "/home/user/app/app.py", line 272, in __init__
    temppath = hf_hub_download('shi-labs/versatile-diffusion-model', 'pretrained_pth/vd-four-flow-v1-0-fp16.pth')
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 860, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1009, in _hf_hub_download_to_cache_dir
    _download_to_tmp_and_move(
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 1543, in _download_to_tmp_and_move
    http_get(
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 369, in http_get
    r = _request_wrapper(
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 302, in _request_wrapper
    hf_raise_for_status(response)
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/huggingface_hub/utils/_http.py", line 477, in hf_raise_for_status
    raise _format(HfHubHTTPError, str(e), response) from e
huggingface_hub.errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/s3proxy?GET=https%3A%2F%2Fs3.us-east-1.amazonaws.com%2Flfs.huggingface.co%2Frepos%2F63%2Fd3%2F63d375d7768aae89cb7d0a9b0f9e56df29239507b87d149f405c0f48bde326fa%2F47eae01cad5ab095c15e12cfec7c0365bad653d00ff5e228c9f558ba817f786d%3FX-Amz-Algorithm%3DAWS4-HMAC-SHA256%26X-Amz-Content-Sha256%3DUNSIGNED-PAYLOAD%26X-Amz-Credential%3DAKIA4N7VTDGOYNNAVQWR%252F20250217%252Fus-east-1%252Fs3%252Faws4_request%26X-Amz-Date%3D20250217T170804Z%26X-Amz-Expires%3D3600%26X-Amz-Signature%3Daf33572ba762dad2a35934b37ccc05bd4ba5f5d7af74d81a444f12911f604bef%26X-Amz-SignedHeaders%3Dhost%26response-content-disposition%3Dinline%253B%2520filename%252A%253DUTF-8%2527%2527vd-four-flow-v1-0-fp16.pth%253B%2520filename%253D%2522vd-four-flow-v1-0-fp16.pth%2522%253B%26x-id%3DGetObject&HEAD=https%3A%2F%2Fs3.us-east-1.amazonaws.com%2Flfs.huggingface.co%2Frepos%2F63%2Fd3%2F63d375d7768aae89cb7d0a9b0f9e56df29239507b87d149f405c0f48bde326fa%2F47eae01cad5ab095c15e12cfec7c0365bad653d00ff5e228c9f558ba817f786d%3FX-Amz-Algorithm%3DAWS4-HMAC-SHA256%26X-Amz-Content-Sha256%3DUNSIGNED-PAYLOAD%26X-Amz-Credential%3DAKIA4N7VTDGOYNNAVQWR%252F20250217%252Fus-east-1%252Fs3%252Faws4_request%26X-Amz-Date%3D20250217T170804Z%26X-Amz-Expires%3D3600%26X-Amz-Signature%3D8e58da039b7cada5a31a6d107bfe72879ad1eb412b42338a4a72593ebbdaa667%26X-Amz-SignedHeaders%3Dhost&sign=eyJhbGciOiJIUzI1NiJ9.eyJyZWRpcmVjdF9kb21haW4iOiJzMy51cy1lYXN0LTEuYW1hem9uYXdzLmNvbSIsImlhdCI6MTczOTgxMjA4NCwiZXhwIjoxNzM5ODk4NDg0LCJpc3MiOiJodHRwczovL2h1Z2dpbmdmYWNlLmNvIn0.ohTyZ6tLHZAEvbTYnzN97R3S-XJcHiEY2tFg4cczjW4
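The crash above is a transient 504 Gateway Time-out from the Hub's LFS proxy while `hf_hub_download` fetched `vd-four-flow-v1-0-fp16.pth` (app.py, line 272). Since the Space restarts and the same download succeeds or fails nondeterministically, wrapping that call in a retry with exponential backoff is a reasonable mitigation. The sketch below is a hypothetical helper, not part of the app; it assumes `HfHubHTTPError` exposes the underlying `requests` response via `.response`, as the traceback's `hf_raise_for_status` construction suggests.

```python
import time


def retry_on_transient(fn, *, retries=5, base_delay=1.0,
                       is_transient=lambda exc: True):
    """Call fn(); if it raises an error deemed transient, sleep with
    exponential backoff (base_delay, 2x, 4x, ...) and retry.
    Re-raises the last error once the retry budget is exhausted."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception as exc:
            if attempt == retries - 1 or not is_transient(exc):
                raise
            time.sleep(base_delay * (2 ** attempt))


# Hypothetical drop-in for app.py line 272 (assumes HfHubHTTPError.response
# carries the HTTP status of the failed request):
#
# from huggingface_hub import hf_hub_download
# from huggingface_hub.errors import HfHubHTTPError
#
# temppath = retry_on_transient(
#     lambda: hf_hub_download('shi-labs/versatile-diffusion-model',
#                             'pretrained_pth/vd-four-flow-v1-0-fp16.pth'),
#     is_transient=lambda e: isinstance(e, HfHubHTTPError)
#                            and e.response is not None
#                            and e.response.status_code >= 500,
# )
```

Only 5xx responses are retried in the commented wiring; a 401/404 would indicate a repo or auth problem that retrying cannot fix, so it propagates immediately.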

########
# v1.0 #
########


#######################
# Running in eps mode #
#######################

making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels


kl-f8.pth:   0%|          | 0.00/335M [00:00<?, ?B/s]
kl-f8.pth: 100%|█████████▉| 335M/335M [00:00<00:00, 479MB/s]
Load hfm from shi-labs/versatile-diffusion-model/pretrained_pth/kl-f8.pth
Load autoencoderkl with total 83653863 parameters,72921.759 parameter sum.
Load optimus_bert_connector with total 109489920 parameters,19118.613 parameter sum.
Load optimus_gpt2_connector with total 132109824 parameters,19191.708 parameter sum.


optimus-vae.pth:   0%|          | 0.00/1.02G [00:00<?, ?B/s]

optimus-vae.pth: 100%|█████████▉| 1.02G/1.02G [00:01<00:00, 677MB/s]
Load hfm from shi-labs/versatile-diffusion-model/pretrained_pth/optimus-vae.pth
Load optimus_vae_next with total 241599744 parameters,-344611.688 parameter sum.
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.


0it [00:00, ?it/s]
0it [00:00, ?it/s]
/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torchvision/datapoints/__init__.py:12: UserWarning: The torchvision.datapoints and torchvision.transforms.v2 namespaces are still Beta. While we do not expect major breaking changes, some APIs may still change according to user feedback. Please submit any feedback you may have in this issue: https://github.com/pytorch/vision/issues/6753, and you can also check out https://github.com/pytorch/vision/issues/7319 to learn more about the APIs that we suspect might involve future changes. You can silence this warning by calling torchvision.disable_beta_transforms_warning().
  warnings.warn(_BETA_TRANSFORMS_WARNING)
/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/torchvision/transforms/v2/__init__.py:54: UserWarning: The torchvision.datapoints and torchvision.transforms.v2 namespaces are still Beta. While we do not expect major breaking changes, some APIs may still change according to user feedback. Please submit any feedback you may have in this issue: https://github.com/pytorch/vision/issues/6753, and you can also check out https://github.com/pytorch/vision/issues/7319 to learn more about the APIs that we suspect might involve future changes. You can silence this warning by calling torchvision.disable_beta_transforms_warning().
  warnings.warn(_BETA_TRANSFORMS_WARNING)


vocab.json:   0%|          | 0.00/961k [00:00<?, ?B/s]
vocab.json: 100%|██████████| 961k/961k [00:00<00:00, 44.1MB/s]


merges.txt:   0%|          | 0.00/525k [00:00<?, ?B/s]
merges.txt: 100%|██████████| 525k/525k [00:00<00:00, 14.4MB/s]


special_tokens_map.json:   0%|          | 0.00/389 [00:00<?, ?B/s]
special_tokens_map.json: 100%|██████████| 389/389 [00:00<00:00, 3.06MB/s]


tokenizer_config.json:   0%|          | 0.00/905 [00:00<?, ?B/s]
tokenizer_config.json: 100%|██████████| 905/905 [00:00<00:00, 6.96MB/s]


tokenizer.json:   0%|          | 0.00/2.22M [00:00<?, ?B/s]
tokenizer.json: 100%|██████████| 2.22M/2.22M [00:00<00:00, 31.5MB/s]


preprocessor_config.json:   0%|          | 0.00/316 [00:00<?, ?B/s]
preprocessor_config.json: 100%|██████████| 316/316 [00:00<00:00, 1.64MB/s]


config.json:   0%|          | 0.00/4.52k [00:00<?, ?B/s]
config.json: 100%|██████████| 4.52k/4.52k [00:00<00:00, 35.0MB/s]


model.safetensors:   0%|          | 0.00/1.71G [00:00<?, ?B/s]

model.safetensors: 100%|█████████▉| 1.71G/1.71G [00:02<00:00, 594MB/s]
Load clip_image_context_encoder with total 427616513 parameters,64007.510 parameter sum.
Load clip_text_context_encoder with total 427616513 parameters,64007.510 parameter sum.
Load openai_unet_2d_next with total 859520964 parameters,100401.779 parameter sum.