These model checkpoints are the result of fine-tuning with LoRA (Low-Rank Adaptation of Large Language Models), an alternative to full fine-tuning. When I first tried full fine-tuning on a toy English-Hindi dataset, each checkpoint was around 200+ MB; with LoRA fine-tuning, each checkpoint is now only about 2 MB. This will be really beneficial for the tasks we will do later.
Next, I will look into how to load these checkpoints and use them for inference.
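The size reduction comes from LoRA storing only two small low-rank matrices per adapted layer instead of the full weight matrix, and at inference time the adapter can be merged back into the base weights. A minimal NumPy sketch of this idea, using made-up layer dimensions (not the actual model's):

```python
import numpy as np

# Hypothetical layer size, for illustration only.
d, k = 1024, 1024   # base weight W is d x k
r = 8               # LoRA rank (r << d, k)

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))          # frozen base weight (stays in the base model)
A = rng.standard_normal((r, k)) * 0.01   # LoRA down-projection (trained)
B = np.zeros((d, r))                     # LoRA up-projection (trained, initialized to zero)

# The LoRA checkpoint only needs A and B -- far fewer parameters than W,
# which is why the checkpoints shrink from hundreds of MB to a few MB.
full_params = W.size            # 1024 * 1024 = 1,048,576
lora_params = A.size + B.size   # 8*1024 + 1024*8 = 16,384 (~64x smaller here)

# At inference, merge the adapter into the base weight once...
W_merged = W + B @ A
# ...then use W_merged exactly like the original weight, with no extra latency.
x = rng.standard_normal(k)
y = W_merged @ x
```

In practice, a library such as Hugging Face `peft` handles this loading and merging; the sketch above only shows the underlying arithmetic.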