How can I move the "tokenizer" to GPU?

#7
by Muhammadreza - opened

Hello.
I found your project while working on something similar, and it is far better than what I came up with, so I decided to run your model on a GPU. However, I couldn't manage to move the tokenizer to CUDA.
What should I do?
The model itself moves to the GPU just fine, but at tokenization/generation time it throws a mismatched-devices error at me :).
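For context: a Hugging Face tokenizer has no parameters to move to a device; it runs on the CPU, and only the tensors it returns are sent to the GPU. A minimal sketch of that pattern, assuming the same checkpoint as in the update below and a placeholder input string:

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained('sander-wood/text-to-music')
model = AutoModelForSeq2SeqLM.from_pretrained('sander-wood/text-to-music').to(device)

# The tokenizer itself stays on the CPU; only its output tensors are moved.
inputs = tokenizer("placeholder text description", return_tensors='pt')
inputs = {k: v.to(device) for k, v in inputs.items()}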

Update:

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
# (text, max_length, top_p, temperature and the sampling helpers
#  top_p_sampling / temperature_sampling are defined elsewhere in my script)

device = torch.device("cuda")

tokenizer = AutoTokenizer.from_pretrained('sander-wood/text-to-music')
model = AutoModelForSeq2SeqLM.from_pretrained('sander-wood/text-to-music')
model.to(device)

input_ids = tokenizer(text, 
                        return_tensors='pt', 
                        truncation=True, 
                        max_length=max_length).input_ids.to(device)

decoder_start_token_id = model.config.decoder_start_token_id
eos_token_id = model.config.eos_token_id

decoder_input_ids = torch.tensor([[decoder_start_token_id]])

for t_idx in range(max_length):
    outputs = model(input_ids=input_ids,
                    decoder_input_ids=decoder_input_ids)
    probs = outputs.logits[0][-1]
    probs = torch.nn.Softmax(dim=-1)(probs).detach().numpy()
    sampled_id = temperature_sampling(probs=top_p_sampling(probs, 
                                                           top_p=top_p, 
                                                           return_probs=True),
                                      temperature=temperature)
    decoder_input_ids = torch.cat((decoder_input_ids, torch.tensor([[sampled_id]])), 1)
    if sampled_id!=eos_token_id:
        continue
    else:
        tune = "X:1\n"
        tune += tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)
        print(tune)
        break

Error:

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)

I'm on a free Colab subscription.
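For anyone hitting the same error: the traceback points at an embedding lookup (index_select), which suggests the tensor still on the CPU is decoder_input_ids, not the tokenizer output (it is created without .to(device), and each sampled token is appended as a CPU tensor). A possible fix, sketched under that assumption and reusing the variables and sampling helpers from the snippet above; it also moves the logits back to the CPU before the .numpy() call:

# Create the decoder input on the same device as the model.
decoder_input_ids = torch.tensor([[decoder_start_token_id]], device=device)

for t_idx in range(max_length):
    outputs = model(input_ids=input_ids,
                    decoder_input_ids=decoder_input_ids)
    probs = outputs.logits[0][-1]
    # The logits live on the GPU, so move them to the CPU before converting to NumPy.
    probs = torch.nn.Softmax(dim=-1)(probs).detach().cpu().numpy()
    sampled_id = temperature_sampling(probs=top_p_sampling(probs,
                                                           top_p=top_p,
                                                           return_probs=True),
                                      temperature=temperature)
    # Keep the newly sampled token on the same device before concatenating.
    decoder_input_ids = torch.cat((decoder_input_ids,
                                   torch.tensor([[sampled_id]], device=device)), 1)
    if sampled_id == eos_token_id:
        tune = "X:1\n" + tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)
        print(tune)
        break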

Muhammadreza changed discussion status to closed
