
Some weights of the model checkpoint at declare-lab/flacuna-13b-v1.0 were not used when initializing LlamaForCausalLM:

#2
by ayseozgun - opened

Hi,

I am trying to run the model on Sagemaker, but I am getting the following warning:

"Some weights of the model checkpoint at declare-lab/flacuna-13b-v1.0 were not used when initializing LlamaForCausalLM"

I tried two ways (the HF checkpoint and the GitHub source code), but both gave the same issue.
How can I solve this? Can you help me please?

import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

base_model = "declare-lab/flacuna-13b-v1.0"
tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16, device_map="auto")

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("declare-lab/flacuna-13b-v1.0")
model = AutoModelForCausalLM.from_pretrained("declare-lab/flacuna-13b-v1.0", torch_dtype=torch.float16, device_map="auto")
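In case it helps with diagnosing this: `from_pretrained` accepts an `output_loading_info=True` flag that returns a dict listing exactly which checkpoint weights were skipped (`unexpected_keys`) and which model weights were freshly initialized (`missing_keys`). This is only a sketch of how one might inspect the warning, not a fix for it:

```python
import torch
from transformers import AutoModelForCausalLM

# Ask from_pretrained to also return loading diagnostics.
model, loading_info = AutoModelForCausalLM.from_pretrained(
    "declare-lab/flacuna-13b-v1.0",
    torch_dtype=torch.float16,
    device_map="auto",
    output_loading_info=True,
)

# Checkpoint tensors that LlamaForCausalLM did not use (the source of the warning):
print(loading_info["unexpected_keys"])
# Model tensors that were not found in the checkpoint and were newly initialized:
print(loading_info["missing_keys"])
```

If `unexpected_keys` only contains things like extra adapter or head weights, the warning is often harmless; if core transformer layers appear there, the checkpoint and architecture genuinely do not match.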
