The FinText (base version) models, pre-trained with the RoBERTa architecture, each contain 124.65 million parameters.
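As a back-of-envelope check, the 124.65 million figure can be reproduced from the standard RoBERTa-base dimensions (12 layers, hidden size 768, feed-forward size 3072, 50,265-token vocabulary, 514 position embeddings). The sketch below assumes FinText keeps these roberta-base hyperparameters, including the pooler head; the exact FinText tokenizer and vocabulary size are not stated in the source.

```python
# Parameter count for a RoBERTa-base-style encoder (assumed FinText config).
V = 50265  # BPE vocabulary size (roberta-base default; assumed for FinText)
P = 514    # max position embeddings (512 usable positions + 2 offset slots)
H = 768    # hidden size
L = 12     # number of transformer layers
F = 3072   # feed-forward (intermediate) size

# Embedding block: word, position, token-type embeddings + LayerNorm (gamma, beta).
embeddings = V * H + P * H + 1 * H + 2 * H

# One encoder layer: Q/K/V/output projections (weights + biases),
# attention LayerNorm, FFN up- and down-projections, output LayerNorm.
per_layer = 4 * (H * H + H) + 2 * H + (H * F + F) + (F * H + H) + 2 * H

# Pooler head on the [CLS]/<s> representation.
pooler = H * H + H

total = embeddings + L * per_layer + pooler
print(f"{total / 1e6:.2f}M")  # prints 124.65M
```

Summing these pieces gives 124,645,632 parameters, i.e. the 124.65 million reported for the base models.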