Broken link in model card

#5
by fepegar - opened

The .parallelize() link seems to be broken.

This is unfortunate, as I've found it difficult to parallelize the model in a usable way. Just to be able to run inference, I've had to create my own device_map so the model is sharded uniformly across the GPUs. I wonder if there's a nicer way to do this. I'm happy to share my hacky solution if needed.
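For reference, the gist of it is something like this (a minimal sketch, not my exact code; the module names are hypothetical and need to match what `model.named_modules()` actually reports for this checkpoint):

```python
def make_uniform_device_map(block_names, num_gpus):
    """Assign transformer blocks to GPUs in contiguous, roughly equal chunks.

    block_names: ordered list of top-level module names (e.g. "model.layers.0").
    Returns a dict usable as the `device_map` argument of `from_pretrained`.
    """
    per_gpu = -(-len(block_names) // num_gpus)  # ceiling division
    return {name: i // per_gpu for i, name in enumerate(block_names)}


# Hypothetical layer names for a 4-block model split over 2 GPUs:
blocks = [f"model.layers.{i}" for i in range(4)]
device_map = make_uniform_device_map(blocks, num_gpus=2)
# Then: AutoModelForCausalLM.from_pretrained(model_id, device_map=device_map)
```

Embeddings and the LM head still need explicit entries (typically the first and last GPU), which is part of what makes this hacky.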
