Why can't I duplicate the space?

#2
by ninjawick - opened

This gives the following error:

Traceback (most recent call last):
  File "/home/user/app/app.py", line 14, in <module>
    from llava.conversation import conv_templates
ModuleNotFoundError: No module named 'llava.conversation'; 'llava' is not a package

I'm getting a different error, at line 44: huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/data/LLaVA-7B-v1'. Use `repo_type` argument if needed.

Seems like an easy enough fix, if I knew Hugging Face well enough to know how to clone the repo...
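
The error occurs because `/data/LLaVA-7B-v1` does not exist on the duplicated Space (no persistent storage), so `from_pretrained` falls through to treating the string as a Hub repo id, and an absolute path fails repo-id validation. A minimal sketch of a fallback, where `"namespace/LLaVA-7B-v1"` is a placeholder, not a real repo id:

```python
import os

def choose_pretrained_source(local_path: str, hub_repo_id: str) -> str:
    """Prefer a local checkpoint directory if it exists; otherwise fall
    back to a Hub repo id, so from_pretrained never receives an invalid
    absolute path."""
    return local_path if os.path.isdir(local_path) else hub_repo_id

# Placeholder repo id -- substitute the actual checkpoint repo.
source = choose_pretrained_source("/data/LLaVA-7B-v1", "namespace/LLaVA-7B-v1")
# tokenizer = transformers.AutoTokenizer.from_pretrained(source)
```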


Same error as zmeggyesi

Same error

PS C:\Program Files\Docker\Docker> docker start 8babfe49799e -i       

==========
== CUDA ==
==========

CUDA Version 11.8.0

Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

ls: cannot access '/data': No such file or directory
df: /data: No such file or directory
mv: cannot stat 'llava.py': No such file or directory
mv: cannot stat 'train.py': No such file or directory
Traceback (most recent call last):
  File "/home/user/app/app.py", line 44, in <module>
    tokenizer = transformers.AutoTokenizer.from_pretrained(PATH_LLAVA)
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 622, in from_pretrained
    tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 466, in get_tokenizer_config
    resolved_config_file = cached_file(
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/transformers/utils/hub.py", line 409, in cached_file
    resolved_file = hf_hub_download(
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
    validate_repo_id(arg_value)
  File "/home/user/.pyenv/versions/3.10.13/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/data/LLaVA-7B-v1'. Use `repo_type` argument if needed.
PS C:\Program Files\Docker\Docker> 

As the checkpoint is too big (>10 GB), which may make the build process too slow, we save it in persistent storage (/data on Hugging Face Spaces).
You can download it from here for local installation.
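
If you have `huggingface_hub` installed, one way to pull a checkpoint repo into the path the app expects is `snapshot_download`; a sketch, where the repo id is a placeholder (use the checkpoint repo linked above):

```python
from huggingface_hub import snapshot_download

def download_checkpoint(repo_id: str, local_dir: str) -> str:
    """Download every file of a Hub repo into local_dir; returns the
    local path of the downloaded snapshot."""
    return snapshot_download(repo_id=repo_id, local_dir=local_dir)

# Placeholder repo id -- substitute the actual checkpoint repo:
# download_checkpoint("namespace/LLaVA-7B-v1", "/data/LLaVA-7B-v1")
```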

We also have to overwrite LLaVA with our modified versions (llava.py and train.py), as here.
Please replace the target path with your own. You can find your LLaVA path via:

import llava
print(llava.__file__)
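
Once you have that package directory, overwriting the files is a plain copy; a sketch, where the destination layout inside the package is an assumption (adjust the paths to match where the original files live):

```python
import os
import shutil

def overwrite_package_files(pkg_dir: str, src_files) -> list:
    """Copy modified source files into the installed package directory.
    Destinations use only the file's base name -- an assumption; adjust
    to the package's actual subdirectories. Returns destination paths."""
    dests = []
    for src in src_files:
        dst = os.path.join(pkg_dir, os.path.basename(src))
        shutil.copy(src, dst)
        dests.append(dst)
    return dests

# Usage (assuming llava is importable and the modified files are local):
# import llava
# overwrite_package_files(os.path.dirname(llava.__file__), ["llava.py", "train.py"])
```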

I will make things easier once I figure out how to install local dependencies during the Gradio build.
Sorry in advance for the inconvenience 🙏

This version can now automatically download the checkpoints, so we no longer need the persistent storage 🙃
It also simplifies the implementation, without installing the whole LLaVA package.

Note that this Space now runs on ZeroGPU, in case you are interested in making a duplicate.
