Can we run this in FP16 instead of FP32?

#3
by vince62s - opened

Hi Ricardo,
Would it make sense to release a checkpoint in FP16? Would the accuracy change?

Answering my own question: converting to FP16 takes two one-line changes (model.half() and casting the inputs with in_features.to(torch.float16)). It runs twice as fast, uses half the RAM, and gives the same scores.
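For anyone else trying this, here is a minimal sketch of that conversion, using a small toy model in place of the actual checkpoint (the layer sizes are made up for illustration):

```python
import torch
import torch.nn as nn

# Toy stand-in for the released checkpoint (sizes are arbitrary).
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

fp32_bytes = sum(p.numel() * p.element_size() for p in model.parameters())

# Change 1: cast all parameters and buffers to half precision.
model = model.half()

# Change 2: inputs must match the model's dtype, so cast them too.
x = torch.randn(2, 16).to(torch.float16)

fp16_bytes = sum(p.numel() * p.element_size() for p in model.parameters())

print(model[0].weight.dtype)          # torch.float16
print(fp16_bytes * 2 == fp32_bytes)   # True: exactly half the memory
```

Every FP16 parameter takes 2 bytes instead of 4, which is where the halved RAM comes from; the speedup depends on the hardware's FP16 support.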
