
does not appear to have a file named config.json

#3
by HCCRen - opened

I tried to load TAIDE-LX-7B-Chat-4bit with transformers and got the following error. It seems the repo has no config.json.

code:
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("taide/TAIDE-LX-7B-Chat-4bit", load_in_4bit=True, device_map="auto", token="HF_token")

OSError: taide/TAIDE-LX-7B-Chat-4bit does not appear to have a file named config.json. Checkout

TAIDE org

Hello,

Please change the repo to "taide/TAIDE-LX-7B-Chat".

Best regards.
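
For reference, a minimal sketch of that fix applied to the original snippet; my_token is an assumed placeholder for your own Hugging Face access token:

from transformers import AutoModelForCausalLM

my_token = "hf_..."  # assumption: replace with your own Hugging Face access token

# The full-precision repo ships config.json, so transformers can load it
# and quantize on the fly (load_in_4bit requires bitsandbytes).
model = AutoModelForCausalLM.from_pretrained("taide/TAIDE-LX-7B-Chat",
                                             load_in_4bit=True,
                                             device_map="auto",
                                             token=my_token)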

I have the same problem, +1. I wanted to start testing with the lighter TAIDE-LX-7B-Chat-4bit, and I also get the error saying there is no config.json.

The TAIDE-LX-7B-Chat-4bit repo only contains the following files, which I downloaded one by one into a local folder at /taide/TAIDE-LX-7B-Chat-4bit:
.gitattributes, README.md, README_en.md, taide-7b-a.2-q4_k_m.gguf

But I still get the same error and don't know what to change. Thanks.

import torch
from transformers import AutoTokenizer, LlamaForCausalLM

@torch.no_grad()
def stream_generate():
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model_name = "./taide/TAIDE-LX-7B-Chat-4bit"  ## my Python script and the taide folder are in the same directory; not sure if this is OK

    token = "#########"  # put your own authentication token here  ## is this supposed to be my own Hugging Face access token?
    tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)

    model = LlamaForCausalLM.from_pretrained(model_name,
                                             torch_dtype=torch.float16,
                                             use_safetensors=True,
                                             device_map=device,
                                             load_in_8bit=True)
    model.eval()
TAIDE org

Hello,

  1. Please download https://huggingface.co/taide/TAIDE-LX-7B-Chat instead.
  2. Change
     model_name = "./taide/TAIDE-LX-7B-Chat-4bit"
     to
     model_name = "./taide/TAIDE-LX-7B-Chat"

Best regards.
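
Since the files above were being downloaded one by one, a minimal sketch of fetching the whole repo locally with huggingface_hub's snapshot_download; my_token and the local path are assumptions, adjust as needed:

from huggingface_hub import snapshot_download

my_token = "hf_..."  # assumption: replace with your own Hugging Face access token

# Download every file in the repo into ./taide/TAIDE-LX-7B-Chat,
# so config.json ends up alongside the weights and tokenizer files.
snapshot_download(repo_id="taide/TAIDE-LX-7B-Chat",
                  local_dir="./taide/TAIDE-LX-7B-Chat",
                  token=my_token)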


Thank you. One more question, regarding
token = "#########"  # put your own authentication token here
Is this where I should enter my own Hugging Face access token?

TAIDE org
edited May 9

Hello,
Here is a complete test script (just replace my_token with your own access token) for reference:

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# https://huggingface.co/docs/hub/security-tokens#user-access-tokens
my_token = "***********************************************************************"  # 這行需換成您自己的 access token

# load model
model_name = "taide/TAIDE-LX-7B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False, token=my_token)
model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True, device_map="auto", token=my_token)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# prepare prompt
question = "臺灣最高的建築物是?"
chat = [
    {"role": "user", "content": f"{question}"},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False)

# generate response
x = pipe(f"{prompt}", max_new_tokens=1024)
print(f"TAIDE: {x}")

Best regards.

HCCRen changed discussion status to closed

@NCGWRjason I managed to run TAIDE-LX-7B-Chat-4bit successfully.
My approach was as follows:
1. Download taide-7b-a.2-q4_k_m.gguf
2. Run it with llama_cpp
The only catch is that since the model is small, the answers are short and sometimes stop halfway through.
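
For anyone following along, a minimal sketch of that approach with the llama-cpp-python bindings; the model path and generation parameters here are assumptions, not the poster's exact settings:

from llama_cpp import Llama

# Assumed path: point at wherever you saved the downloaded GGUF file.
llm = Llama(model_path="./taide/TAIDE-LX-7B-Chat-4bit/taide-7b-a.2-q4_k_m.gguf",
            n_ctx=2048)

# Chat-style generation; recent llama-cpp-python versions use the chat
# template stored in the GGUF metadata when one is available.
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "臺灣最高的建築物是?"}],
    max_tokens=256,  # raising this can help with answers that cut off midway
)
print(result["choices"][0]["message"]["content"])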
