runtime error

Exit code: 1. Reason:

[download progress bars elided: a 4.23 GB model file plus tokenizer_config.json, vocab.txt, tokenizer.json and config.json all downloaded successfully from the Hugging Face Hub]

The new embeddings will be initialized from a multivariate normal distribution that has old embeddings' mean and covariance. As described in this article: https://nlp.stanford.edu/~johnhew/vocab-expansion.html. To disable this, use `mean_resizing=False`
-------------- ./ram_plus_swin_large_14m.pth --------------
Traceback (most recent call last):
  File "/home/user/app/app.py", line 44, in <module>
    ram_model = ram_plus(
  File "/usr/local/lib/python3.10/site-packages/ram/models/ram_plus.py", line 408, in ram_plus
    model, msg = load_checkpoint_swinlarge(model, pretrained, kwargs)
  File "/usr/local/lib/python3.10/site-packages/ram/models/utils.py", line 258, in load_checkpoint_swinlarge
    raise RuntimeError('checkpoint url or path is invalid')
RuntimeError: checkpoint url or path is invalid
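The failure happens before any inference runs: load_checkpoint_swinlarge() raises "checkpoint url or path is invalid" because the pretrained argument passed to ram_plus() (./ram_plus_swin_large_14m.pth) does not point to an existing file inside the container. Below is a minimal sketch of one possible fix: fetch the checkpoint at startup and pass the resulting absolute path to ram_plus(). The Hub repo id and the ram_plus() call shape are assumptions taken from the recognize-anything project's documented usage, not from this Space's actual app.py.

    # Sketch only -- repo_id, filename and the ram_plus() arguments are assumptions.
    import os
    from huggingface_hub import hf_hub_download
    from ram.models import ram_plus

    CKPT = "./ram_plus_swin_large_14m.pth"

    # Download the weights only if they are not already bundled with the app.
    if not os.path.exists(CKPT):
        CKPT = hf_hub_download(
            repo_id="xinyu1205/recognize-anything-plus-model",  # assumed Hub repo
            filename="ram_plus_swin_large_14m.pth",
        )

    # load_checkpoint_swinlarge() only accepts a valid URL or an existing .pth path,
    # so CKPT must resolve to a real file at this point.
    ram_model = ram_plus(pretrained=CKPT, image_size=384, vit="swin_l")

Alternatively, committing the .pth file to the Space repository (or mounting it via a dataset) and keeping the relative path would also satisfy the path check; the key point is that the file must exist where app.py expects it at import time.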
