Parallel Inference using GPU?

#38
by vermanic

So I have a basic question: if I call the infer() function below in parallel from multiple threads, will that work?

Code:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

checkpoint = "WizardLM/WizardCoder-15B-V1.0"
device = "cuda" if torch.cuda.is_available() else "cpu"  # "cuda:X" for GPU usage or "cpu" for CPU usage


class Model:
    def __init__(self):
        print("Running on " + device)
        self.tokenizer = AutoTokenizer.from_pretrained(checkpoint)
        # device_map='auto' lets accelerate place the weights on the available device(s)
        self.model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map='auto')

    def infer(self, input_text, token_count):
        # Tokenize the prompt and move the input IDs to the model's device
        inputs = self.tokenizer.encode(input_text, return_tensors="pt").to(device)
        # Generate up to token_count new tokens after the prompt
        outputs = self.model.generate(inputs, max_new_tokens=token_count)
        # outputs[0] holds the prompt tokens followed by the generated tokens
        return self.tokenizer.decode(outputs[0])
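
For context, this is roughly how I was planning to call it. A minimal sketch; the thread count, prompt strings, and token count here are placeholders I made up:

from concurrent.futures import ThreadPoolExecutor

model = Model()

# Hypothetical prompts, just to illustrate the calling pattern
prompts = ["def fibonacci(n):", "def quicksort(arr):"]

# Call infer() concurrently from multiple threads; this is the part I'm
# unsure about: whether generate() is safe/efficient to call this way
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(lambda p: model.infer(p, 128), prompts))

for result in results:
    print(result)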

Also, max_new_tokens is the maximum number of new tokens I want the model to respond with (not counting the prompt), right?
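
My understanding is that outputs[0] contains the prompt tokens followed by at most max_new_tokens generated ones, so I'm thinking of changing the return in infer() to something like this to keep only the response (a sketch, reusing the inputs/outputs names from above):

        # outputs[0] = prompt tokens + up to token_count newly generated tokens;
        # slicing off the prompt length should leave only the model's response
        new_tokens = outputs[0][inputs.shape[-1]:]
        return self.tokenizer.decode(new_tokens, skip_special_tokens=True)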
