FP16 model and tokenizer

Opened by adnan-ahmad-tub

Hi,

For the FP16 model, is it mandatory to use

import torch
from transformers import AutoTokenizer, LlamaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = LlamaForCausalLM.from_pretrained("kaist-ai/Prometheus-13b-v1.0", device_map="auto", torch_dtype=torch.float16)

or whether

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("kaist-ai/prometheus-13b-v1.0")
model = AutoModelForCausalLM.from_pretrained("kaist-ai/prometheus-13b-v1.0", device_map="auto", torch_dtype=torch.float16)

will also work, i.e., can the tokenizer be loaded directly from the Prometheus repository instead of the base Llama-2 repository?
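
For reference, a quick sanity check one could run to see whether the two tokenizers are interchangeable (a sketch, assuming access to the gated meta-llama repo):

from transformers import AutoTokenizer

# Load both candidate tokenizers.
tok_llama = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
tok_prometheus = AutoTokenizer.from_pretrained("kaist-ai/prometheus-13b-v1.0")

# If the vocabularies match and a sample string encodes identically,
# the Prometheus repo's tokenizer should be a drop-in replacement.
print(tok_llama.get_vocab() == tok_prometheus.get_vocab())
print(tok_llama("Does this encode identically?") == tok_prometheus("Does this encode identically?"))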

Thanks!
