Hosted inference API using model w/o PICARD?

#1
by JuanCadavid - opened

Dear Torsten,

Thanks for sharing this model. Just to confirm that I understand correctly: the Inference API from Hugging Face (and therefore the inference widget here on the model card page) does load your model with the fine-tuned weights, but it does not use PICARD in the prediction pipeline. Is that right?

In order to reproduce your results with PICARD, one has to follow the serving instructions, which make use of your custom pipeline class. Please correct me if I'm wrong.

Thanks,
Juan

Hi Juan, yes, that’s right. Regards, Torsten

Dear Torsten,
Thanks, glad I got it right. Cheers!
