can we get a config.json?

#1 opened by sirus

Trying to use this repo as the embedding model for llama_index's HuggingFaceEmbedding fails: transformers goes looking for a config.json that the repo doesn't ship. Full console output:

```
llm_load_tensors: ggml ctx size = 0.11 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 7338.65 MiB
...................................................................................................
llama_new_context_with_model: n_ctx = 4000
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CPU KV buffer size = 500.00 MiB
llama_new_context_with_model: KV self size = 500.00 MiB, K (f16): 250.00 MiB, V (f16): 250.00 MiB
llama_new_context_with_model: CPU input buffer size = 15.83 MiB
llama_new_context_with_model: CPU compute buffer size = 301.40 MiB
llama_new_context_with_model: graph splits (measure): 1
AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
Model metadata: {'tokenizer.ggml.add_eos_token': 'false', 'tokenizer.ggml.padding_token_id': '2', 'tokenizer.ggml.unknown_token_id': '0', 'tokenizer.ggml.eos_token_id': '2', 'general.architecture': 'llama', 'llama.rope.freq_base': '10000.000000', 'llama.context_length': '32768', 'general.name': 'cognitivecomputations_dolphin-2.6-mistral-7b-dpo-laser', 'tokenizer.ggml.add_bos_token': 'true', 'llama.embedding_length': '4096', 'llama.feed_forward_length': '14336', 'llama.attention.layer_norm_rms_epsilon': '0.000010', 'llama.rope.dimension_count': '128', 'tokenizer.ggml.bos_token_id': '1', 'llama.attention.head_count': '32', 'llama.block_count': '32', 'llama.attention.head_count_kv': '8', 'general.quantization_version': '2', 'tokenizer.ggml.model': 'llama', 'general.file_type': '7'}
Traceback (most recent call last):
  File "/usr/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 286, in hf_raise_for_status
    response.raise_for_status()
  File "/usr/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/s3nh/intfloat-e5-mistral-7b-instruct-GGUF/resolve/main/config.json

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.11/site-packages/transformers/utils/hub.py", line 389, in cached_file
    resolved_file = hf_hub_download(
                    ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1238, in hf_hub_download
    metadata = get_hf_file_metadata(
               ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1631, in get_hf_file_metadata
    r = _request_wrapper(
        ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 385, in _request_wrapper
    response = _request_wrapper(
               ^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 409, in _request_wrapper
    hf_raise_for_status(response)
  File "/usr/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 296, in hf_raise_for_status
    raise EntryNotFoundError(message, response) from e
huggingface_hub.utils._errors.EntryNotFoundError: 404 Client Error. (Request ID: Root=1-65c44419-78c9be2554a4a20255f87db9;0af99497-8fda-4ae3-92d6-1dc3a3f8cc5e)

Entry Not Found for url: https://huggingface.co/s3nh/intfloat-e5-mistral-7b-instruct-GGUF/resolve/main/config.json.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/code/git/ontherag/./myrag.py", line 47, in <module>
    embed_model = HuggingFaceEmbedding(model_name="s3nh/intfloat-e5-mistral-7b-instruct-GGUF")
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/llama_index/embeddings/huggingface.py", line 82, in __init__
    model = AutoModel.from_pretrained(
            ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 526, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 1082, in from_pretrained
    config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/transformers/configuration_utils.py", line 644, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/transformers/configuration_utils.py", line 699, in _get_config_dict
    resolved_config_file = cached_file(
                           ^^^^^^^^^^^^
  File "/usr/lib/python3.11/site-packages/transformers/utils/hub.py", line 440, in cached_file
    raise EnvironmentError(
OSError: s3nh/intfloat-e5-mistral-7b-instruct-GGUF does not appear to have a file named config.json. Checkout 'https://huggingface.co/s3nh/intfloat-e5-mistral-7b-instruct-GGUF/main' for available files.
```
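The final OSError is the actual problem: HuggingFaceEmbedding calls transformers' AutoModel.from_pretrained(), which requires a transformers-style config.json, and this repo only hosts .gguf files. A minimal sketch of a workaround, assuming the goal is just e5-mistral embeddings rather than the quantized GGUF weights specifically: point HuggingFaceEmbedding at the original fp16 repo intfloat/e5-mistral-7b-instruct, which does ship a config.json. The import path below matches the llama_index version in the traceback and may differ in newer releases.

```python
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# This is the failing call from the traceback: the GGUF repo has no
# transformers config.json, so AutoModel.from_pretrained() raises OSError.
# embed_model = HuggingFaceEmbedding(model_name="s3nh/intfloat-e5-mistral-7b-instruct-GGUF")

# Workaround (assumption: full-precision weights are acceptable): load the
# original checkpoint, which transformers can resolve normally.
embed_model = HuggingFaceEmbedding(model_name="intfloat/e5-mistral-7b-instruct")
```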

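Alternatively, the log lines at the top show llama-cpp-python loading GGUF files without trouble, so the quantized model can produce embeddings directly, bypassing transformers entirely. A minimal sketch, assuming llama-cpp-python is installed and a .gguf file from this repo has been downloaded locally (the file name below is a placeholder, not the repo's actual file name):

```python
from llama_cpp import Llama

# Open the local GGUF with embedding mode enabled. The path here is
# hypothetical; use whichever .gguf you downloaded from the repo.
llm = Llama(model_path="./e5-mistral-7b-instruct.Q8_0.gguf", embedding=True)

# create_embedding() returns an OpenAI-style response dict; the vector
# itself is at ["data"][0]["embedding"].
result = llm.create_embedding("how do I load a GGUF embedding model?")
vector = result["data"][0]["embedding"]
print(len(vector))  # 4096 for this model, per llama.embedding_length in the metadata above
```

Note that adding a config.json alone likely wouldn't fix the original call: the transformers version shown in the traceback has no loader for quantized .gguf weights, so it would fail at the next step anyway.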