Did you upload the wrong model.safetensors.index.json?

#1 by Labmem009 - opened

When I run this version, I encounter an error like:

```
with safe_open(checkpoint_file, framework="pt") as f:
FileNotFoundError: No such file or directory: "Yi-34B-200K-DARE-merge-v5-2.67bpw-exl2-fiction/model-00001-of-00008.safetensors"
```

Did you upload the wrong model.safetensors.index.json? Or am I using the wrong script?
My script is below:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = 'Yi-34B-200K-DARE-merge-v5-2.67bpw-exl2-fiction'

tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)

# Since transformers 4.35.0, the GPT-Q/AWQ model can be loaded using AutoModelForCausalLM.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

with open('text.txt', 'r', encoding='utf-8-sig') as file:
    user_message = file.read()
print(len(user_message))
user_message = user_message + '\n' + 'Please take note of the content in these sentences...'
system_message = 'Below is a task for...'

# Prompt content: "hi"
messages = [
    {"role": "user", "content": system_message + user_message},
]

input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'), max_new_tokens=10240, do_sample=True, temperature=0.9)
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```

> AutoModelForCausalLM.from_pretrained

This is an exl2 model; you have to load it with an exllamav2 loader, not with transformers' `AutoModelForCausalLM`.
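For illustration, here is a minimal loading sketch using the exllamav2 Python API, adapted from the library's own inference examples. The class names and call signatures reflect exllamav2 at the time of writing and may differ between versions; the model directory and temperature are reused from your script:

```python
# Minimal exl2 loading sketch (adapted from exllamav2's inference examples;
# exact class names/arguments may vary between exllamav2 versions).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = 'Yi-34B-200K-DARE-merge-v5-2.67bpw-exl2-fiction'  # local model directory
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the KV cache as layers load
model.load_autosplit(cache)               # split the model across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
generator.warmup()

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.9                # same sampling temperature as your script

# generate_simple returns the prompt plus the generated continuation.
output = generator.generate_simple("hi", settings, 256)
print(output)
```

Alternatively, front ends that bundle an exllamav2 loader, such as text-generation-webui, can run exl2 quants without any custom code.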
