Error loading model
#2
opened by NickyNicky
!pip install intel-extension-for-transformers
from transformers import AutoTokenizer, TextStreamer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

model_name = "Intel/neural-chat-7b-v1-1"  # Hugging Face model_id or local model path
prompt = "Once upon a time, there existed a little girl,"

tokenizer = AutoTokenizer.from_pretrained(model_name,
                                          trust_remote_code=True)
inputs = tokenizer(prompt, return_tensors="pt").input_ids
streamer = TextStreamer(tokenizer)  # streams generated tokens to stdout as they are produced
model = AutoModelForCausalLM.from_pretrained(model_name,
                                             load_in_8bit=True,
                                             trust_remote_code=True)
outputs = model.generate(inputs, streamer=streamer, max_new_tokens=300)
Error:
!pip install git+"https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/layer_norm"
Thanks for using our project. From the log you provided, the error happened in https://github.com/togethercomputer/stripedhyena, not in our project.
I recommend going through the following suggestions to fix the error for "togethercomputer/stripedhyena".
There are two solutions for the second pip install error:
1. Make sure the environment has Microsoft Visual C++ Build Tools or MinGW-w64 installed on Windows, or GCC and CMake on Linux, so the build process for flash-attention can succeed (see the sketch after this list).
2. Install flash-attention from a prebuilt binary rather than building it at install time.
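As a minimal sketch of the two options (assuming a Debian/Ubuntu environment for the toolchain, and, for option 2, that a prebuilt flash-attn wheel matching your Python/CUDA/PyTorch versions is available):

# Option 1: install a C/C++ toolchain so flash-attention can compile (Debian/Ubuntu assumed)
!sudo apt-get install -y build-essential cmake

# Option 2: install the main flash-attn package instead of building from the git checkout;
# pip can pick up a prebuilt wheel when one matches your environment
!pip install flash-attn --no-build-isolation

Note that the csrc/layer_norm extension from the failing command is typically compiled from source, so the toolchain from option 1 may still be needed for it even if the main package installs from a wheel.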
After you fix the second error and flash-attention installs successfully, the first import error should be resolved as well.
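A quick import check to verify the install (assuming, as I believe, that the csrc/layer_norm extension installs under the module name dropout_layer_norm):

import flash_attn
print(flash_attn.__version__)  # the main flash-attention package imports cleanly
import dropout_layer_norm      # assumed module name for the extension built from csrc/layer_norm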