Message: 'You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset'
#39 opened by hmanju
How can I suppress this warning?
Also, is there an alternative way to perform Llama 3 inference without using the pipeline API?
Example code for most other LLMs on Hugging Face instantiates an AutoTokenizer and an AutoModelForCausalLM, tokenizes the input, applies the chat template, and passes the input IDs through the model for inference.
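For reference, here is a minimal sketch of that manual flow applied to Llama 3, assuming you have access to the gated `meta-llama/Meta-Llama-3-8B-Instruct` checkpoint and a GPU; the warning in the title is emitted through the transformers logger, so lowering the library's log verbosity should also silence it. The prompt and generation settings below are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, logging

# The "use a dataset" pipeline warning goes through the transformers
# logger, so raising the threshold to ERROR suppresses it.
logging.set_verbosity_error()

# Assumption: you have accepted the license for this gated checkpoint.
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # place the model on the available GPU(s)
)

# Apply the chat template to build the prompt, then generate.
messages = [{"role": "user", "content": "What is the capital of France?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=64, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(
    tokenizer.decode(
        output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True
    )
)
```

This is the same sequence the pipeline runs internally for a single prompt; batching several prompts into one `generate` call (or using a dataset with the pipeline, as the warning suggests) is what actually recovers GPU efficiency.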