The FinText (small version) models, pre-trained using the RoBERTa architecture, contain 51.48 million parameters.