I get an error message "config.json" is missing #3
by ahmadhatahet · opened
Hi jphme,

Thank you for fine-tuning this model. I am trying to load it with:
```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
from langchain.llms import HuggingFacePipeline

llm_model_name = "jphme/Llama-2-13b-chat-german-GGML"

tokenizer = AutoTokenizer.from_pretrained(llm_model_name)
llm = AutoModelForCausalLM.from_pretrained(
    llm_model_name,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    load_in_4bit=True,
    bnb_4bit_quant_type='q4_0',
    bnb_4bit_compute_dtype=torch.bfloat16,
    cache_folder=str(cache_folder),
    trust_remote_code=True,
)
```
And I get the following error message: "Llama-2-13b-chat-german-GGML does not appear to have a file named config.json."
I read that this is because the weights have not been converted to the Hugging Face format.
I obtained the license from Meta; however, I don't know what to do with it.
Hi, for use with transformers, please use the standard model (without GGML) here: https://huggingface.co/jphme/Llama-2-13b-chat-german
Thank you!