Usage question

#1
by javieritopppp - opened

Hi, I'm trying to run inference with this model:

import torch
import transformers

model_id = "TechxGenus/Meta-Llama-3-70B-Instruct-AWQ"

# Build a text-generation pipeline; device_map="auto" spreads the
# model across the available GPUs.
pipe = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.float16},
    device_map="auto",
)

output = pipe("Hey how are you doing today?")
print(output)

But I'm getting the following output:

{'generated_text': 'Hey how are you doing today? FIG Fields проп пропESPoric秦 zas Böylece dystdboolЬ_OBJ'}
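For what it's worth, one common cause of gibberish from instruct-tuned checkpoints is sending a raw string instead of a prompt in the model's chat format. Below is a minimal sketch of the Llama-3 instruct prompt layout, built by hand for illustration; the special tokens follow the published Llama-3 chat template, and in practice `tokenizer.apply_chat_template(...)` does this for you (worth verifying against the model's tokenizer config):

```python
# Sketch: construct a Llama-3-instruct-style prompt by hand.
# build_llama3_prompt is a hypothetical helper, shown only to make
# the expected prompt layout visible.

def build_llama3_prompt(messages):
    """messages: list of {"role": ..., "content": ...} dicts."""
    prompt = "<|begin_of_text|>"
    for m in messages:
        prompt += (
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Cue the model to begin its reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

prompt = build_llama3_prompt(
    [{"role": "user", "content": "Hey how are you doing today?"}]
)
print(prompt)
```

With recent transformers versions, passing the list of message dicts directly to the pipeline (or calling `tokenizer.apply_chat_template`) should produce this layout automatically, and generation should then stop cleanly at the `<|eot_id|>` token instead of trailing off into noise.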

Can anyone help me?
