Basic sequence generation usage

#13 by kevinky

Dear Noelia,

Many thanks for this important work and open sourcing it.

I have two simple questions about generating sequences using ProtGPT2:

  1. About timing, simply running the example case:

from transformers import pipeline

protgpt2 = pipeline('text-generation', model='path-to-local-protgpt2')  # local path, or 'nferruz/ProtGPT2'
sequences = protgpt2("<|endoftext|>", max_length=100, do_sample=True, top_k=950, repetition_penalty=1.2, num_return_sequences=10, eos_token_id=0)

It takes 62 seconds for me in a Jupyter notebook on an NVIDIA TITAN Xp GPU. As you described, generation should be very fast with a GPU, so I am wondering whether the code above actually assigns the GPU to do the generation.
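(For reference, I believe something like the following would show which device the pipeline ended up on, though I am not sure it is the correct way to check; protgpt2 here is the pipeline object from the snippet above.)

import torch

print(torch.cuda.is_available())  # True if PyTorch can see a CUDA GPU at all
print(protgpt2.device)            # device the pipeline placed the model on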

  2. As you described in another discussion (https://huggingface.co/nferruz/ProtGPT2/discussions/6), the generation process should produce sequences ending with the <|endoftext|> token. I have run the generation code above several times but have not yet seen a sequence ending with that token, and I want to confirm whether this is normal.
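(For context, this is roughly how I look at the generated sequences; a sketch, so I may well be missing something.)

# Each item returned by the text-generation pipeline is a dict with a 'generated_text' field
for seq in sequences:
    print(repr(seq['generated_text']))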

Many thanks,
Kevin

Hi Kevin,

Thanks a lot for your message. It sounds like the code may not be using the GPU properly. You can force it to use the GPU by explicitly setting the model's device.
For example:

import torch
from transformers import AutoTokenizer, GPT2LMHeadModel

prompt = "M"
device = torch.device('cuda')
tokenizer = AutoTokenizer.from_pretrained('path-to-the-model')  # or 'nferruz/ProtGPT2'
model = GPT2LMHeadModel.from_pretrained('path-to-the-model').to(device)  # here is where you define the device
input_ids = tokenizer.encode(prompt, return_tensors='pt').to(device)
output = model.generate(input_ids, max_length=100, do_sample=True, top_k=950, repetition_penalty=1.2, num_return_sequences=10, eos_token_id=0)
print(output)
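Alternatively, if you prefer to keep the pipeline interface, I believe you can pass a device index directly (a sketch; device=0 refers to the first CUDA GPU, device=-1 to the CPU):

from transformers import pipeline

protgpt2 = pipeline('text-generation', model='path-to-the-model', device=0)  # place the model on GPU 0
sequences = protgpt2("<|endoftext|>", max_length=100, do_sample=True, top_k=950, repetition_penalty=1.2, num_return_sequences=10, eos_token_id=0)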

The speed will also depend on the GPU. For example, with an A100 I can generate more than 100 sequences in less than a minute, but with other GPUs I have to lower that number significantly. Another thing that should speed up generation considerably is the datatype. I haven't explored this myself, but a few colleagues mentioned that running inference in bf16 speeds it up substantially.
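As a rough sketch of what that could look like (untested on my side; whether bfloat16 helps, and by how much, depends on the GPU generation), you would load the weights in bf16 and reuse the device from the snippet above:

import torch
from transformers import GPT2LMHeadModel

# Load the weights in bfloat16 instead of the default float32 (assumes the GPU supports bf16)
model = GPT2LMHeadModel.from_pretrained('path-to-the-model', torch_dtype=torch.bfloat16).to(device)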

For your second point, yes, sequences are separated by the <|endoftext|> token, which is also the padding token. So if you look at the generated outputs, their last token should be 0 (the id of <|endoftext|>). It could be, however, that your max_length is too short, and the model truncates the sequences before an <|endoftext|> token is produced.
Also, during decoding, make sure the skip_special_tokens parameter is not set to True. That way, token 0 gets decoded as <|endoftext|>; when it is set to True, the decoder treats it as a special token and skips it, so you would not see it.
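Continuing from the generation snippet above, something like this should make the token visible when it is generated:

# Decode without skipping special tokens so <|endoftext|> (token id 0) is shown if present
for seq in output:
    print(tokenizer.decode(seq, skip_special_tokens=False))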

Let me know if this helps!
Noelia
