Spaced amino acid tokenization

#3 opened by Roynadler

Hey!

Decoding 'input_id_y' produces space-separated protein sequences. Is this expected, i.e. are you intentionally spacing the amino acids before passing them to the tokenizer?
If so, what is the intended purpose of this choice? Lower diversity of input tokens for training? A need to keep the tokenized lengths close to those of the 3Di tokenizations?
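
For concreteness, by "spacing" I mean something like the usual ProtT5-style preprocessing sketched below (a minimal sketch; the helper name and example sequence are mine, just for illustration):

import re

def space_sequence(seq: str) -> str:
    # Map rare/ambiguous residues to X, as in the ProtT5-family examples
    seq = re.sub(r"[UZOB]", "X", seq.upper())
    # Put a single space between residues so each one is tokenized individually
    return " ".join(seq)

print(space_sequence("MNKGQWIISAFV"))
# -> M N K G Q W I I S A F V

Here is the snippet where I see the spaced output: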

from transformers import T5Tokenizer, T5EncoderModel
from datasets import load_dataset
import torch

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

# Load the tokenizer
tokenizer = T5Tokenizer.from_pretrained('Rostlab/ProstT5', do_lower_case=False)

# Load the dataset (repo id assumed here) and take one training example
ProstT5Dataset = load_dataset('Rostlab/ProstT5Dataset')
sample = next(iter(ProstT5Dataset['train']))

print(tokenizer.decode(sample['input_id_y'], skip_special_tokens=True))

# Outputs:
# M N K G Q W I I S A F V A G A L A T A G A Y I A I Q W N A M P P E G Q A S A D D R A N P N G P M A G Q T S P R L S E Q A A Q T I A V Q T L M G D P Y G R T S A E V L K N M T A K G L E L N A G Q S E W V W E V R I A P S A D M P Q G I N G E L R I N A N D G R L T P V M L P F L D
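
In case it is useful: the unspaced sequence can be recovered from the decoded string by simply dropping the whitespace, e.g.

raw_seq = "".join(tokenizer.decode(sample['input_id_y'], skip_special_tokens=True).split())
print(raw_seq)  # the same sequence as above, without spaces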

Thanks for making this dataset public!
